You could use it, for example, to monitor Raspberry Pi servers running at home. It takes only a few steps of configuration, after which it displays a wealth of relevant server measurements in your browser:
+ System Load
+ Memory Usage
+ Uptime / Boot Time
+ Costs (calculated)
+ Battery (e.g. for monitoring a mobile device)
+ WiFi Signal Strength
+ Processor (Cores, Speed, Usages, …)
+ System (Distro, Version, Architecture, …)
+ Network Services (Open Listening Ports)
+ Network Devices & Addresses
+ Network Interfaces IO (bytes sent/received)
+ Disk Storage Usage (used & total space)
+ Disk Device IO (bytes read/written)
+ Users logged in (name, login date, …)
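Several of the measurements listed above can be read with nothing but the Python standard library on Linux; the sketch below is only an illustration (yamot itself uses psutil, which covers far more, such as battery, WiFi strength, and per-core usage), and the dictionary keys are my own names, not yamot's.

```python
import os
import shutil

def basic_metrics():
    """Collect a few of the measurements listed above using only the
    Python standard library (Linux). This is an illustration, not
    yamot's actual collection code."""
    load1, load5, load15 = os.getloadavg()       # system load averages
    total, used, free = shutil.disk_usage("/")   # disk storage usage
    with open("/proc/uptime") as f:              # uptime in seconds
        uptime = float(f.read().split()[0])
    return {
        "load": (load1, load5, load15),
        "disk_used_bytes": used,
        "disk_total_bytes": total,
        "uptime_seconds": uptime,
    }
```

Battery, WiFi, and per-process details need either psutil or reading further files under /proc and /sys.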
The architecture is divided into three parts (classic MVC):
* Server Component
This component needs to be executed on the server system you want to monitor. It is basically a simple webserver. For security reasons it has read-only access to the system. Authentication is done via HTTP Basic Auth, so don't use it in untrusted networks!
The server component provides realtime data only. There are no cyclical background tasks or other processes occupying the processor, memory, or disk. If you don't access the server, it takes up almost no resources. The server is built with Python 3, which of course needs to be installed. The default server port is 9393.
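The server component's design (a simple read-only webserver behind HTTP Basic Auth) can be sketched with the Python standard library. This is not yamot's actual code; the credentials and the JSON payload are placeholders:

```python
import base64
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

USER, PASSWORD = "yamot", "secret"  # hypothetical credentials

class ReadOnlyHandler(BaseHTTPRequestHandler):
    """GET-only handler: it reads system state but never writes anything,
    mirroring the read-only design described above."""

    def do_GET(self):
        expected = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
        if self.headers.get("Authorization") != f"Basic {expected}":
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="yamot"')
            self.end_headers()
            return
        body = json.dumps({"load": os.getloadavg()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve for real on the default port:
# HTTPServer(("", 9393), ReadOnlyHandler).serve_forever()
```

Because Basic Auth sends credentials essentially in cleartext, this scheme is only as safe as the network it runs on, which is why the warning above matters.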
* Controller Component
One server needs to have the additional role of the controller. The controller is also just a webserver which provides a REST-API to manage the application.
Authentication is also done via HTTP Basic Auth, so again, don't use it in untrusted networks! The controller is built with node.js and express.js. If you don't have those installed, you can use the Docker image called prod instead. The default controller port is 8080.
* Client Component
Finally, the client is the web page itself and is served by the controller (on port 8080). The client is built with Angular, some Bootstrap CSS, and a subset of FontAwesome icons. Each refresh cycle (every 3 seconds by default) requests updated measurement data from all of your servers.
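The refresh cycle described above is a plain fan-out poll: one request per configured server, repeated on an interval. A minimal sketch (the `fetch` callable stands in for the HTTP request the real Angular client makes; the function and parameter names are mine):

```python
import time
from typing import Callable, Dict, List

def poll_servers(servers: List[str],
                 fetch: Callable[[str], dict],
                 cycles: int,
                 interval: float = 3.0) -> List[Dict[str, dict]]:
    """One refresh cycle = one request per configured server.
    `fetch` stands in for the HTTP call to each yamot server;
    3.0 seconds matches the default cycle mentioned above."""
    history = []
    for _ in range(cycles):
        snapshot = {srv: fetch(srv) for srv in servers}
        history.append(snapshot)
        time.sleep(interval)
    return history
```

This also shows why browser load grows linearly with the number of monitored servers: every cycle touches every server.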
git clone https://github.com/knrdl/yamot && cd yamot

Server
1. Install python3, psutil, and ujson on every server: sudo apt-get install python3-psutil python3-ujson. If you are not running an apt-based system (Debian or Ubuntu), use sudo pip3 install psutil instead.
2. Copy the file yamot_server.py to your server (e.g. to /opt/yamot) and enable autostart by adding sudo -u username dash -c 'cd /opt/yamot && python3 /opt/yamot/yamot_server.py' & to /etc/rc.local in front of the "exit 0" line.
3. Run the server once interactively via python3 yamot_server.py to generate a config file (needs one-time write permission in the same folder).
4. If you are running a firewall on your server (like ufw), open the specified port: sudo ufw allow 9393 (default port is 9393).

Client & Controller
1. The controller component needs to run on a server in your network (the same network where the servers are running). The server which runs the controller can also run the server component at the same time.
2. You will need a node.js installation with express.js (or Docker, if you use the prod image).
3. Use node controller.js to start the controller and check that it is working.
4. The login credentials are printed by the controller on startup in the shell.
5. You can then add it to the autostart of the system. Don't forget to open the port if you are using a firewall.
6. When you are done, open a browser and navigate to http://ip-of-the-controller-device:8080 (8080 is the default controller port).

URL: http://localhost:8080/
Login credentials:
+ Username: yamot
+ Password: test123
NOTE: Pulse-Monitor is designed to take a specific action when the Monitor system loses touch with the Heartbeat system. An alternate use, however, is to install only the Heartbeat role. This essentially builds a logging system in which the Monitor system (with no Pulse-Monitor components installed) has a log file that is updated regularly by the Heartbeat system, per arguments supplied to ./install-heartbeat.sh. In this setup, no logic is performed on any missed heartbeats, so the Monitor system takes no action. It does make for a handy heartbeat/connectivity logging tool, though.
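The two roles described above can be sketched in a few lines: the Heartbeat side appends timestamped entries to a log, and the Monitor side decides a heartbeat was missed when the newest entry is too old. This is an illustration of the idea, not Pulse-Monitor's shell implementation; the function names and the three-miss threshold are assumptions:

```python
import time
from pathlib import Path

def record_heartbeat(log: Path, message: str = "Hello there") -> None:
    """Heartbeat role: append a timestamped line, as the installed
    heartbeat job would do remotely over SSH."""
    with log.open("a") as f:
        f.write(f"{time.time():.0f} {message}\n")

def heartbeat_is_stale(log: Path, interval: float, misses: int = 3) -> bool:
    """Monitor role: stale when the newest entry is older than
    `misses` * `interval` seconds (no entries at all counts as stale)."""
    lines = log.read_text().splitlines() if log.exists() else []
    if not lines:
        return True
    last = float(lines[-1].split()[0])
    return time.time() - last > misses * interval
```

In the logging-only setup from the note above, only the first function is active; nobody ever evaluates staleness.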
Use and Download:
git clone https://github.com/viiateix/Pulse-Monitor && cd Pulse-Monitor
Example: ./install-heartbeat.sh 2 /home/seclist/.ssh/id_rsa remoteuser 22.214.171.124 22 /home/seclist/heartbeat.log "Hello there"
The standard log facilities provided by iptables do not easily allow us to associate addresses behind the firewall with their source-NATted equivalents in front of the firewall. Natlog was designed to fill that particular niche.
When natlog is running, messages showing the essential characteristics of each source-NATted connection are sent to the syslog daemon and/or to the standard output stream.
Natlog depends on facilities provided by iptables, but may also generate logs directly using facilities offered by the pcap library.
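The association natlog reconstructs boils down to a translation table: which internal host:port was hiding behind which source-NATted host:port. A toy model of that table (this is illustrative data modeling, not natlog's C++ internals; all names are mine):

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Endpoint = Tuple[str, int]  # (address, port)

@dataclass
class NatTable:
    """Toy model of the association natlog logs: natted endpoint on the
    outside of the firewall -> internal endpoint behind it."""
    mappings: Dict[Endpoint, Endpoint] = field(default_factory=dict)

    def open_conn(self, internal: Endpoint, natted: Endpoint) -> None:
        """Record a new source-NATted connection."""
        self.mappings[natted] = internal

    def who_was(self, natted: Endpoint) -> Endpoint:
        """Answer the question the plain iptables logs can't easily:
        which internal endpoint does this external flow belong to?"""
        return self.mappings[natted]
```

Natlog additionally records timing and byte counts per connection; this sketch keeps only the address association to make the niche it fills concrete.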
+ g++ (>= 4.7.1), icmake (>= 7.19.00),
+ libbobcat-dev (>=3.01.00), libpcap-dev, and yodl (>=3.00.0)
Use and Download:
git clone https://github.com/fbb-git/natlog && cd natlog
cd natlog
./build -q (for build options)
./build program
Explanation of keys:
– name: needed to specify this network as action parameter
– monitoring: can be all (scan the complete subnet for unknown devices) or list-only (only scan the specified hosts).
– exclude: must be an array containing at most vulnerability (skip vulnerability scan for this host) and mac (do not check if MAC address matches).
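To make the keys above concrete, here is a hypothetical configuration fragment. The actual file format is defined by the repository; this YAML sketch only illustrates how the explained keys relate to each other, and the network/host values are invented:

```yaml
# Hypothetical example - not taken from the repository
networks:
  - name: home-lan            # used as an action parameter
    monitoring: all           # or: list-only
    hosts:
      - address: 192.168.1.10
        exclude: [vulnerability]   # skip the vulnerability scan
      - address: 192.168.1.20
        exclude: [mac]             # don't verify the MAC address
```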
Just copy the source files to a directory on your machine:
git clone https://github.com/temparus/network-monitoring.py && cd network-monitoring.py
python network-monitoring.py -h
+ Golang https://golang.org/doc/install
+ Linux, macOS, and Windows operating system support.
How to Build and Use:
git clone https://github.com/alphasoc/flightsim && cd flightsim
go build
./flightsim --help
./flightsim run --help
Why use it?
Wirespy is not a replacement for tcpdump, Wireshark, or any of the other network sniffers. Its specific purpose is to provide long-term metadata about network traffic, including TCP flow logging. It is efficient and can monitor live network traffic or process PCAP files.
I use it on my network recorders to extract metadata from the PCAP files; the metadata takes up far less space, further extending the number of months of network intelligence I can keep before running out of disk space.
The TCP flow capability is tolerant of lost packets, which are common when passively monitoring network traffic.
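Loss tolerance falls out naturally when flow logging is per-packet accounting rather than stream reassembly: a dropped packet slightly undercounts a flow's totals but never corrupts the record. A sketch of that idea (illustrative only, not Wirespy's implementation; all names are mine):

```python
from dataclasses import dataclass
from typing import Dict, Tuple

FlowKey = Tuple[str, int, str, int]  # src ip, src port, dst ip, dst port

@dataclass
class Flow:
    packets: int = 0
    bytes: int = 0
    first_seen: float = 0.0
    last_seen: float = 0.0

class FlowTable:
    """Per-flow metadata accounting keyed by the 4-tuple. Counters are
    simply incremented per observed packet, so a missed packet shrinks
    the totals slightly but never breaks the flow record - the property
    that matters for passive, lossy capture."""

    def __init__(self):
        self.flows: Dict[FlowKey, Flow] = {}

    def observe(self, key: FlowKey, size: int, ts: float) -> None:
        flow = self.flows.setdefault(key, Flow(first_seen=ts))
        flow.packets += 1
        flow.bytes += size
        flow.last_seen = ts
```

A full stream reassembler, by contrast, must handle gaps in sequence numbers explicitly, which is why sniffers that reconstruct payloads suffer far more from packet loss.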
How to use it?
Wirespy can run as a daemon if you are using it to monitor live network traffic and can also process PCAP files saved using other tools that support libpcap format files.
git clone https://github.com/rondilley/wirespy && cd wirespy
./bootstrap
./configure
make
make install
sudo ./wsd -i eth0
Requirements for deploying WEFFLES :
– Active Directory – we need to be able to create and link a GPO that will apply to all of the machines we want in scope of monitoring. I would hope this would include desktops, servers, and domain controllers for the sake of completeness, but the flexibility to link the GPO that enables Windows Event Forwarding to a testing Organizational Unit is also a great way to start.
– A server to act as the Windows Event Collector – I recommend using a dedicated server as the collector, for performance and security reasons. The server does not have to be massive in spec, though, even if you have a lot of endpoints checking in to it. The log data should not exceed 10GB even for large organizations (500k endpoints is my biggest WEFFLES deployment so far), and the solution exports data to CSV files for safekeeping, which are quite small. The main performance need on a collector is memory to hold the log files. We size the event log at 1GB, since in this solution it acts only as a holding place before events get exported to CSV; the general rule of thumb is that for a larger event log you need the amount of memory required to run Windows and do things like backups, plus the specified event log size.
– PowerBI Desktop – The console/data slicer itself is built using PowerBI Desktop. If you’d rather use another data slicer or the most widely used incident response tool on the planet (Microsoft Excel) the output weffles.csv file can be loaded into many different tools. There is a pre-built weffles.pbix PowerBI Desktop file in the GitHub repo that allows you to use the same data slicer console view I show in this post.
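The collector-sizing rule of thumb from the requirements above is simple addition; a tiny helper makes it explicit. The 8 GiB Windows baseline in the example is an illustrative assumption, not a figure from the WEFFLES documentation:

```python
def collector_memory_gib(os_baseline_gib: float, event_log_gib: float) -> float:
    """Rule of thumb stated above: memory needed to run Windows (and do
    things like backups) plus the configured event log size."""
    return os_baseline_gib + event_log_gib

# e.g. an assumed 8 GiB Windows baseline plus the 1 GiB log WEFFLES uses:
# collector_memory_gib(8, 1) -> 9 GiB
```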
WEFFLES uses the EventLogWatcher (https://pseventlogwatcher.codeplex.com/) script from CodePlex to output the CSV file, and it is kicked off via a Scheduled Task at system startup, so reboot the machine now. The next part takes a while to "cook", so have patience and maybe walk away for 10 minutes while the subscriptions start to work and the script starts to parse the logs.
Use and Download:
git clone https://github.com/jepayneMSFT/WEFFLES && cd WEFFLES
.\wefsetup.ps1
1. Browse to the c:\weffles directory, and you should see a bookmarks.stream file and weffles.csv - that means everything is working!
2. If you create a c:\weffles directory on your machine and copy the weffles.pbix from the GitHub repo and the weffles.csv from your environment to it, you should be able to open weffles.pbix (assuming you installed PowerBI Desktop), click "Refresh", and it will pull the data from your environment into my example slicers.
Source: https://github.com/jepayneMSFT | https://aka.ms/weffles
Uses the standard OS ICMP packet size (Linux = 64 bytes, Windows = 32 bytes). When any node changes state, Jennom writes a message to the local DB, sends a message to a remote syslog server, and can send you an email. Only state changes are recorded! Supports both IPv4 and IPv6. Successfully tested on Windows and Linux with Mozilla Firefox and Google Chrome browsers with more than 200 nodes. Developed with the Java EE technology stack: JSF + PrimeFaces, CDI, JPA, EJB, security via Apache Shiro, and the Java EE certified server Apache TomEE v1.7.4 (jax-rs release).
+ Java 1.8
+ The project is free, portable, cross-platform, and 100% pure Java
+ Jennom uses ICMP to check nodes; if ICMP is unavailable, it falls back to a TCP/echo check
+ Jennom counts lost packets against all packets sent
+ Supports filtering by different fields and exporting data to PDF/XLS/XML/CSV files
+ Supports logging to a local DB and a remote syslog server, and can send you email
+ Only state changes are recorded
+ Supports both IPv4 and IPv6
+ Based on a client-server architecture
+ After running jennom-server, try "http://<your-server-IP>:8080/jennom/" in a web browser
+ Successfully tested on Windows and Linux with Mozilla Firefox and Google Chrome browsers with more than 200 nodes
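The ICMP-first, TCP/echo-fallback check described in the features above reduces to a two-step probe. A sketch with injectable probe functions (the callables stand in for real ping/socket code; TCP echo conventionally means port 7, and all names here are mine, not Jennom's):

```python
from typing import Callable

def node_is_up(host: str,
               icmp_probe: Callable[[str], bool],
               tcp_echo_probe: Callable[[str], bool]) -> bool:
    """Probe order from the feature list above: try ICMP first; only
    when ICMP fails or is unavailable, fall back to TCP/echo (port 7).
    Probes are injected so the logic is testable without a network."""
    if icmp_probe(host):
        return True
    return tcp_echo_probe(host)
```

The fallback matters in practice because ICMP is often filtered by firewalls while a TCP service on the node may still answer.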
Download: jennom_jee_build_20-10-17_bin.zip (62.1 MB)
The tool is not meant for complete accuracy. There are serious recommendations against relying on the output of GNU coreutils tools such as ls as tool input; in other words, one should rarely build tools that parse and depend on this type of output, as it can change at any time. Realistically, though, the output of these tools is relatively stable, as many people and automated tools already rely on it for all kinds of purposes.
However, the tradeoff for dawgmon is the following: we would otherwise need to implement a lot of logic to do file system monitoring ourselves, and build complex binaries linking libraries for parsing and monitoring block devices, network interfaces, and more. That would make the tool far more complex and less maintainable. As it stands, one can add a new command, including change detection, in very little time, because the main dawgmon tool already takes care of caching, executing the command, and then supplying the previous and current output when running a comparison against a command implementation. This means that on time-constrained projects one can very quickly add a new command and run analyses that include those new commands.
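The cache-and-compare framework described above can be sketched in a few lines. This is a simplified illustration of the pattern, not dawgmon's actual code, and it uses a plain unified diff where a real command implementation would parse the output semantically:

```python
import difflib
import subprocess
from typing import Dict, List

class CommandMonitor:
    """Sketch of the dawgmon pattern described above: the framework
    caches each command's previous output and hands both versions to a
    change detector. Names here are illustrative, not dawgmon's API."""

    def __init__(self):
        self.cache: Dict[str, str] = {}

    def run(self, name: str, argv: List[str]) -> List[str]:
        current = subprocess.run(argv, capture_output=True,
                                 text=True, check=True).stdout
        previous = self.cache.get(name, "")
        self.cache[name] = current
        # a real command implementation would compare semantically;
        # a unified diff is the simplest possible change detector
        return list(difflib.unified_diff(previous.splitlines(),
                                         current.splitlines(), lineterm=""))
```

Because the framework owns caching and execution, adding a "new command" reduces to supplying argv plus a compare function, which is exactly the quick-extension property claimed above.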
git clone https://github.com/anvilventures/dawgmon && cd dawgmon
./dawgmon -h (must be run as root)
Features of PiSavar:
+ Detects PineAP activities
+ Detects networks opened by PineAP.
+ Starts deauthentication attack for PineAP.
Features to add:
+ List of clients connected to fake access points
+ Record activities – Logging
+ Python 2.7.x with the termcolor module
git clone https://github.com/besimaltnok/PiSavar && cd PiSavar
python pisavar.py (interface wlan0/wlan1 in monitor mode)
+ libboost-all-dev make g++ pkg-config libssl-dev libnetfilter-queue-dev
+ All Linux Platform
To learn more, you can read this:
git clone https://github.com/magwitch324/AITF && cd AITF
./Setup.sh (run with root user)
It covers hardening and security best practices for all AWS regions related to:
+ Identity and Access Management (24 checks)
+ Logging (8 checks)
+ Monitoring (15 checks)
+ Networking (5 checks)
+ Extra checks (3 checks) *see Extras section
For a comprehensive list and resolutions, look at the guide at the link above.
With Prowler you can:
– get a colour or monochrome report
– get a CSV-format report for diffing
– run specific checks without having to run the entire report
– check multiple AWS accounts in parallel
STS expired token
If you are using an STS token for the AWS CLI and your session has expired, you will probably get this error:
– A client error (ExpiredToken) occurred when calling the GenerateCredentialReport operation: The security token included in the request is expired
git clone https://github.com/Alfresco/prowler && cd prowler
pip install awscli
Make sure you have properly configured your AWS CLI with a valid Access Key and Region: aws configure
Example Policy ARN: arn:aws:iam::aws:policy/SecurityAudit
./prowler