Why another fuzzer?
My main motivation was to write a script that would let me fuzz a website using a dictionary, while also filtering the words in that dictionary with regex patterns. The need came from the frustration of trying to find full page names from the partial results returned by Soroush's IIS Shortname Scanner tool (https://github.com/irsdl/iis-shortname-scanner/). In case you're not aware, most IIS web servers version 7.5 or below are vulnerable to partial filename disclosure: short 8.3-format names can be enumerated by requesting them directly, for example: abcdef~1.zip
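As a rough sketch of the idea (the wordlist contents and the `abcdef~1.zip` shortname are hypothetical examples, not output from a real scan), a regex filter over a dictionary can narrow a truncated 8.3 result down to full candidate filenames:

```shell
# Suppose the shortname scanner reported "abcdef~1.zip": the real file
# starts with "abcdef" and ends in ".zip". Build a sample wordlist.
printf '%s\n' abcdefgh.zip abcdef-backup.zip readme.txt notes.zip > wordlist.txt

# Keep only entries consistent with the partial 8.3 result.
grep -E '^abcdef.*\.zip$' wordlist.txt
# → abcdefgh.zip
# → abcdef-backup.zip
```

Filebuster applies the same principle internally: only dictionary entries matching your pattern are actually requested, so a huge wordlist collapses to the handful of names that could explain the shortname.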
Why is it so fast?
Filebuster was built on one of the fastest HTTP client libraries in the Perl world – Furl::HTTP. The threading model is also tuned to run as fast as possible.
Features
It packs a ton of features:
* The already mentioned Regex patterns
* Supports HTTP/HTTPS/SOCKS proxy
* Allows for multiple wordlists using wildcards
* Additional file extensions
* Adjustable timeouts and retries
* Adjustable delays / throttling
* Hide results based on HTTP code, length or words in headers or body
* Support for custom cookies
* Support for custom headers
* Supports multiple versions of the TLS protocol
* Automatic TTY detection
* Recursive scans
* Integrated wordlists
Usage and Download:
Having problems installing Net::DNS::Lite from CPAN? Extract the tarball and install it manually:
tar xf Net-DNS-Lite-0.12.tar.gz
cpan install YAML Furl Switch Benchmark Cache::LRU Net::DNS::Lite List::MoreUtils IO::Socket::SSL URI::Escape HTML::Entities IO::Socket::Socks::Wrapper
git clone https://github.com/henshin/filebuster && cd filebuster
perl filebuster.pl -h
Upgrade: git pull