.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | Most Advanced Open Source Intelligence (OSINT) Framework
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 | # ReconSpider
16 |
17 | ReconSpider is an advanced Open Source Intelligence (OSINT) framework for scanning IP addresses, emails, websites, and organizations, and for finding information about them from different sources.
18 |
19 | ReconSpider can be used by Infosec Researchers, Penetration Testers, Bug Hunters and Cyber Crime Investigators to gather in-depth information about their targets.
20 |
21 | ReconSpider aggregates all the raw data, visualizes it on a dashboard, and facilitates alerting and monitoring on the data.
22 |
23 | ReconSpider also combines the capabilities of [Wave](https://github.com/adithyan-ak/WAVE), [Photon](https://github.com/s0md3v/Photon) and [Recon Dog](https://github.com/s0md3v/ReconDog) to perform a comprehensive enumeration of the attack surface.
24 |
25 | # Why is it called ReconSpider?
26 |
27 | ```ReconSpider``` = ```Recon``` + ```Spider```
28 |
29 |
30 | **Recon** = **Reconnaissance**
31 |
32 | Reconnaissance is a mission to obtain information, by various detection methods, about the activities and resources of an enemy or potential enemy, or about the geographic characteristics of a particular area.
33 |
34 |
35 | **Spider = Web crawler**
36 |
37 | A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing (web spidering).
38 |
39 |
40 | # Table Of Contents
41 |
42 | 1. [Version (beta)](https://github.com/bhavsec/reconspider#version-beta)
43 | 2. [Overview of the tool](https://github.com/bhavsec/reconspider#overview-of-the-tool)
44 | 3. [Mind Map (v1)](https://github.com/bhavsec/reconspider#mind-map-v1)
45 | 4. [License Information](https://github.com/bhavsec/reconspider#license-information)
46 | 5. [ReconSpider Banner](https://github.com/bhavsec/reconspider#reconspider-banner)
47 | 6. [Documentation](https://github.com/bhavsec/reconspider#documentation)
48 | 7. [Setting up the environment](https://github.com/bhavsec/reconspider#setting-up-the-environment)
49 | 8. [Updating API Keys](https://github.com/bhavsec/reconspider#updating-api-keys)
50 | 9. [Usage](https://github.com/bhavsec/reconspider#usage)
51 | 10. [Contact](https://github.com/bhavsec/reconspider#contact-developer)
52 | 11. [Wiki & How-to Guide](https://github.com/bhavsec/reconspider#reconspider-full-wiki-and-how-to-guide)
53 | 12. [Updates](https://github.com/bhavsec/reconspider#frequent--seamless-updates)
54 |
55 |
56 | # Version (beta)
57 |
58 | ReconSpider : 1.0.7
59 |
60 |
61 | # Overview of the tool
62 |
63 | * Performs OSINT scans on IP addresses, emails, websites, and organizations, and finds information from different sources.
64 | * Correlates and collates the results, showing them in a consolidated manner.
65 | * Use a specific script or launch an automated OSINT scan for consolidated data.
66 | * Currently available only as a Command Line Interface (CLI).
67 |
68 |
69 | # Mind Map (v1)
70 |
71 | Check out our mind map for a visual organization of this tool's information regarding APIs, services, techniques, and more.
72 |
73 | https://bhavsec.com/img/reconspider_map.png
74 |
75 |
76 |
77 | # License Information
78 | ```
79 | ReconSpider and its documents are covered under GPL-3.0 (General Public License v3.0)
80 | ```
81 |
82 |
83 |
84 | ## ReconSpider Banner
85 |
86 | ```
87 | __________ _________ __ ___
88 | \______ \ ____ ____ ____ ____ / _____/_____ |__| __| _/___________
89 | | _// __ \_/ ___\/ _ \ / \ \_____ \\____ \| |/ __ |/ __ \_ __ \
90 | | | \ ___/\ \__( <_> ) | \ / \ |_> > / /_/ \ ___/| | \/
91 | |____|_ /\___ >\___ >____/|___| / /_______ / __/|__\____ |\___ >__|
92 | \/ \/ \/ \/ \/|__| \/ \/
93 |
94 |
95 |
96 | ENTER 0 - 13 TO SELECT OPTIONS
97 |
98 | 1. IP Enumerate information from IP Address
99 | 2. DOMAIN Gather information about given DOMAIN
100 | 3. PHONENUMBER Gather information about Phonenumber
101 | 4. DNS MAP Map DNS records associated with target
102 | 5. METADATA Extract all metadata of the given file
103 | 6. REVERSE IMAGE SEARCH Obtain domain name or IP address mapping
104 | 7. HONEYPOT Check if it's honeypot or a real system
105 | 8. MAC ADDRESS LOOKUP Obtain information about give Macaddress
106 | 9. IPHEATMAP Draw out heatmap of locations of IP
107 | 10. TORRENT Gather torrent download history of IP
108 | 11. USERNAME Extract Account info. from social media
109 | 12. IP2PROXY Check whether IP uses any VPN / PROXY
110 | 13. MAIL BREACH Checks given domain has breached Mail
111 | 99. UPDATE Update ReconSpider to its latest version
112 |
113 | 0. EXIT Exit from ReconSpider to your terminal
114 | ```
115 |
116 |
117 |
118 | # Documentation
119 |
120 | Installing and using ReconSpider is very easy, and the installation process takes just three steps:
121 |
122 | 1. Download or clone the ReconSpider GitHub repository.
123 | 2. Install all dependencies.
124 | 3. Set up the database.
125 |
126 | Let's Begin !!
127 |
128 |
129 | ### Setting up the environment
130 |
131 | Step 1 - Clone ReconSpider onto your Linux system.
132 |
133 | To download ReconSpider, simply clone the GitHub repository with the command below.
134 | ```
135 | git clone https://github.com/bhavsec/reconspider.git
136 | ```
137 |
138 | Step 2 - Make sure python3 and python3-pip are installed on your system.
139 |
140 | You can install (or verify) both by typing this command in your terminal.
141 |
142 | ```
143 | sudo apt install python3 python3-pip
144 | ```
145 |
146 | Step 3 - Install all dependencies.
147 |
148 | Once you have cloned the repository and verified your Python installation, you will find a directory named **reconspider**. Go to that directory and install the dependencies with these commands:
149 | ```
150 | cd reconspider
151 | sudo python3 setup.py install
152 | ```
153 |
154 | Step 4 - Set up the database.
155 |
156 | **IP2Proxy Database**
157 |
158 | ```
159 | https://lite.ip2location.com/database/px8-ip-proxytype-country-region-city-isp-domain-usagetype-asn-lastseen
160 | ```
161 | Download the database, extract it, and move it to the `reconspider/plugins/` directory, as shown below.
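  |
  | A minimal sketch of that step, assuming the archive was downloaded as `IP2PROXY-LITE-PX8.BIN.ZIP` (the file name shipped in this repository) into your current directory:
  | ```
  | unzip IP2PROXY-LITE-PX8.BIN.ZIP
  | mv IP2PROXY-LITE-PX8.BIN reconspider/plugins/
  | ```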
162 |
163 |
164 | # Updating API Keys
165 |
166 | The API keys included in ReconSpider are free-tier keys with limited and restricted usage per month. Please replace the current keys with new keys of your own in the `setup.py` file, and re-install once done for the changes to take effect.
167 |
168 | > Warning: Not updating the API keys can result in missing output or errors.
169 |
170 | You need to create accounts and get the API keys from the following websites, then drop them into the key helpers as sketched after the list.
171 |
172 | * Shodan.io - https://developer.shodan.io/api
173 | * NumVerify - https://numverify.com/documentation
174 | * IP Stack - https://ipstack.com/documentation
175 | * Google Maps - https://developers.google.com/maps/documentation/places/web-service/get-api-key
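  |
  | The keys are read at runtime from small helper functions in `plugins/api.py`. A sketch of what the updated helpers could look like, with placeholder values standing in for your real keys:
  | ```
  | # plugins/api.py - replace the placeholders with your own keys
  | def phoneapis():
  |     return "YOUR_NUMVERIFY_KEY"    # placeholder NumVerify key
  |
  | def ipstack():
  |     return "YOUR_IPSTACK_KEY"      # placeholder IP Stack key
  |
  | def gmap():
  |     return "YOUR_GOOGLE_MAPS_KEY"  # placeholder Google Maps key
  | ```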
176 |
177 |
178 | # Usage
179 |
180 |
181 | ReconSpider is a very handy and easy-to-use tool. All you have to do is pass values to its parameters.
182 | To start ReconSpider, just type:
183 | ```
184 | python3 reconspider.py
185 | ```
186 |
187 | **1. IP**
188 |
189 | This option gathers all available information about the given IP address from public resources.
190 | ```
191 | ReconSpider >> 1
192 | IP >> 8.8.8.8
193 | ```
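  |
  | Under the hood this option queries Shodan and Censys (see `plugins/shodan_io.py`). A minimal standalone sketch of the Shodan part, assuming `YOUR_SHODAN_KEY` is a valid API key:
  | ```
  | import shodan
  |
  | api = shodan.Shodan("YOUR_SHODAN_KEY")  # placeholder key
  | host = api.host("8.8.8.8")              # same lookup the IP option performs
  | print(host['country_name'], host['org'], host['ports'])
  | ```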
194 |
195 | **2. DOMAIN**
196 |
197 | This option gathers all available information about the given URL address and checks it for vulnerabilities.
198 | ```
199 | Reconspider >> 2
200 | HOST (URL / IP) >> vulnweb.com
201 | PORT >> 443
202 | ```
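  |
  | Before enumerating, the DOMAIN module first checks that the target is reachable with a plain TCP connect (see `CheckTarget` in `plugins/domain.py`). A minimal sketch of that liveness check:
  | ```
  | import socket
  |
  | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  | s.settimeout(5)                                    # don't hang on filtered ports
  | alive = s.connect_ex(("vulnweb.com", 443)) == 0    # 0 means the connect succeeded
  | s.close()
  | print("Target Alive" if alive else "The Host is Unreachable")
  | ```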
203 |
204 | **3. PHONENUMBER**
205 |
206 | This option allows you to gather information about the given phone number.
207 | ```
208 | Reconspider >> 3
209 | PHONE NUMBER (919485247632) >>
210 | ```
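  |
  | This lookup is backed by the NumVerify validate endpoint (see `plugins/Phonenumber.py`). A minimal sketch, with `YOUR_NUMVERIFY_KEY` as a placeholder for your own key:
  | ```
  | import requests
  |
  | r = requests.get("http://apilayer.net/api/validate",
  |                  params={"access_key": "YOUR_NUMVERIFY_KEY",
  |                          "number": "919485247632"})
  | data = r.json()
  | print(data.get("carrier"), data.get("location"), data.get("line_type"))
  | ```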
211 |
212 | **4. DNS MAP**
213 |
214 | This option allows you to map an organization's attack surface with a virtual DNS map of the DNS records associated with the target organization.
215 | ```
216 | ReconSpider >> 4
217 | DNS MAP (URL) >> vulnweb.com
218 | ```
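  |
  | The map itself is fetched as a pre-rendered image from dnsdumpster.com (see `plugins/dnsdump.py`). A minimal sketch of that fetch:
  | ```
  | import requests
  |
  | domain = "vulnweb.com"
  | img = requests.get("https://dnsdumpster.com/static/map/%s.png" % domain)
  | if img.status_code == 200:
  |     with open("%s.png" % domain, "wb") as f:
  |         f.write(img.content)   # saved to the current directory
  | ```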
219 |
220 | **5. METADATA**
221 |
222 | This option allows you to extract all metadata of the given file.
223 | ```
224 | Reconspider >> 5
225 | Metadata (PATH) >> /root/Downloads/images.jpeg
226 | ```
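  |
  | Extraction is done with Pillow's EXIF support (see `plugins/metadata.py`). A minimal sketch, assuming the image path from the example above:
  | ```
  | from PIL import Image
  | from PIL.ExifTags import TAGS
  |
  | exif = Image.open("/root/Downloads/images.jpeg")._getexif() or {}
  | for tag, value in exif.items():
  |     print(TAGS.get(tag, tag), ":", value)   # decode numeric tags to names
  | ```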
227 |
228 | **6. REVERSE IMAGE SEARCH**
229 |
230 | This option allows you to obtain information and similar images that are available on the internet.
231 | ```
232 | Reconspider >> 6
233 | REVERSE IMAGE SEARCH (PATH) >> /root/Downloads/images.jpeg
234 | Open Search Result in web browser? (Y/N) : y
235 | ```
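  |
  | The search works by uploading the image to Google's search-by-image endpoint and reading the redirect it returns (see `plugins/reverseimagesearch.py`). A minimal sketch:
  | ```
  | import requests
  |
  | files = {"encoded_image": ("images.jpeg", open("images.jpeg", "rb")),
  |          "image_content": ""}
  | r = requests.post("https://www.google.co.in/searchbyimage/upload",
  |                   files=files, allow_redirects=False)
  | print(r.headers["Location"])   # URL of the results page
  | ```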
236 |
237 | **7. HONEYPOT**
238 |
239 | This option allows you to identify honeypots. The probability that an IP is a honeypot is captured in a "Honeyscore" value that can range from 0.0 to 1.0.
240 | ```
241 | ReconSpider >> 7
242 | HONEYPOT (IP) >> 1.1.1.1
243 | ```
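  |
  | The score comes from Shodan's Honeyscore labs endpoint (see `plugins/honeypot.py`). A minimal sketch, with `YOUR_SHODAN_KEY` as a placeholder:
  | ```
  | from requests import get
  |
  | score = get("https://api.shodan.io/labs/honeyscore/1.1.1.1"
  |             "?key=YOUR_SHODAN_KEY").text
  | print("Honeyscore (0.0 - 1.0):", score)
  | ```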
244 |
245 | **8. MAC ADDRESS LOOKUP**
246 |
247 | This option allows you to look up MAC address details such as the manufacturer, address, country, etc.
248 |
249 | ```
250 | Reconspider >> 8
251 | MAC ADDRESS LOOKUP (Eg:08:00:69:02:01:FC) >>
252 | ```
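  |
  | The lookup is a single call to the free macvendors.co API (see `plugins/macaddress.py`). A minimal sketch:
  | ```
  | import requests
  |
  | r = requests.get("https://macvendors.co/api/08:00:69:02:01:FC")
  | info = r.json()["result"]
  | print(info["company"], info["country"])
  | ```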
253 |
254 | **9. IPHEATMAP**
255 |
256 | This option provides a heatmap of the given IPs. A single IP is traced on the map, while multiple IPs are plotted and connected using their coordinates.
257 | ```
258 | Reconspider >> 9
259 |
260 | 1) Trace single IP
261 | 2) Trace multiple IPs
262 | OPTIONS >>
263 | ```
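  |
  | Coordinates are resolved through ipstack and plotted with gmplot (see `plugins/ipaddress.py`). A minimal sketch of the plotting half, assuming the latitude/longitude lists were already resolved:
  | ```
  | import gmplot
  |
  | lats, lons = [28.66, 19.07], [77.23, 72.88]      # example coordinates
  | gmap3 = gmplot.GoogleMapPlotter(20.5937, 78.9629, 5)
  | gmap3.apikey = "YOUR_GOOGLE_MAPS_KEY"            # placeholder key
  | gmap3.heatmap(lats, lons)
  | gmap3.draw("heatmap.html")                       # open this file in a browser
  | ```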
264 |
265 | **10. TORRENT**
266 |
267 | This option allows you to gather the torrent download history of the given IP.
268 | ```
269 | Reconspider >> 10
270 | IPADDRESS (Eg:192.168.1.1) >>
271 | ```
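  |
  | The history comes from the antitor.com peer API (see `plugins/torrent.py`). A minimal sketch, with `YOUR_ANTITOR_KEY` as a placeholder:
  | ```
  | import requests
  |
  | r = requests.get("https://api.antitor.com/history/peer/",
  |                  params={"ip": "1.2.3.4", "key": "YOUR_ANTITOR_KEY"})
  | for item in r.json().get("contents", []):
  |     print(item["name"], item["startDate"], item["endDate"])
  | ```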
272 |
273 | **11. USERNAME**
274 |
275 | This option allows you to gather account information for the provided username from social media platforms like Facebook, Twitter and Instagram.
276 | ```
277 | Reconspider >> 11
278 |
279 | 1.Facebook
280 | 2.Twitter
281 | 3.Instagram
282 |
283 | Username >>
284 | ```
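  |
  | For Instagram the plugin reads the public profile JSON endpoint (see `plugins/Username.py`); note that Instagram has been known to gate this endpoint behind a login. A minimal sketch, with `someuser` as a placeholder username:
  | ```
  | import requests
  |
  | r = requests.get("https://www.instagram.com/someuser/?__a=1")
  | if r.status_code == 200:
  |     user = r.json()["graphql"]["user"]
  |     print(user["full_name"], user["edge_followed_by"]["count"])
  | ```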
285 |
286 | **12. IP2PROXY**
287 |
288 | This option allows you to identify whether an IP address uses any kind of VPN / proxy to hide its identity.
289 | ```
290 | Reconspider >> 12
291 | IPADDRESS (Eg:192.168.1.1) >>
292 | ```
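  |
  | The check is an offline lookup against the IP2Proxy database set up in Step 4 (see `plugins/proxy.py`). A minimal sketch:
  | ```
  | import IP2Proxy
  |
  | db = IP2Proxy.IP2Proxy()
  | db.open("./plugins/IP2PROXY-LITE-PX8.BIN")   # database from Step 4
  | record = db.get_all("1.2.3.4")
  | db.close()
  | print(record["is_proxy"], record["proxy_type"], record["country_long"])
  | ```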
293 |
294 | **13. MAIL BREACH**
295 |
296 | This option allows you to identify all breached mail IDs for the given domain.
297 | ```
298 | Reconspider >> 13
299 | DOMAIN (Eg:intercom.io) >>
300 | ```
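  |
  | The emails are pulled from hunter.io's domain-search API (see `plugins/maildb.py`). A minimal sketch, with `YOUR_HUNTER_KEY` as a placeholder:
  | ```
  | import requests
  |
  | r = requests.get("https://api.hunter.io/v2/domain-search",
  |                  params={"domain": "intercom.io", "api_key": "YOUR_HUNTER_KEY"})
  | for email in r.json()["data"]["emails"]:
  |     print(email["value"], email.get("first_name"), email.get("last_name"))
  | ```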
301 |
302 | **99. UPDATE**
303 |
304 | This option allows you to check for updates. If a newer version is available, ReconSpider will download and merge the updates into the current directory without overwriting other files.
305 | ```
306 | ReconSpider >> 99
307 | Checking for updates..
308 | ```
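  |
  | Internally the updater compares the local changelog with the one on GitHub and, on confirmation, clones a fresh copy and overlays it onto the current directory (see `core/updater.py`). A minimal sketch of that merge step, using a scratch folder name, `fresh`, purely for illustration:
  | ```
  | import os
  |
  | # clone a fresh copy and copy its files over the current checkout
  | os.system('git clone --quiet https://github.com/bhavsec/reconspider fresh')
  | os.system('cp -r fresh/* . && rm -r fresh')
  | ```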
309 |
310 | **0. EXIT**
311 |
312 | This option allows you to exit the ReconSpider framework and return to your operating system's terminal.
313 | ```
314 | ReconSpider >> 0
315 | Bye, See ya again..
316 | ```
317 |
318 |
319 |
320 | # Contact Developer
321 |
322 | Do you want to have a conversation in private?
323 |
324 | Twitter: @bhavsec
325 | Facebook: fb.com/bhavsec
326 | Instagram: instagram.com/bhavsec
327 | LinkedIn: linkedin.com/in/bhavsec
328 | Email: bhavsec@gmail.com
329 | Website: bhavsec.com
330 |
331 |
332 |
333 | # ReconSpider Full Wiki and How-to Guide
334 |
335 | Please go through the [ReconSpider Wiki Guide](https://github.com/bhavsec/reconspider/wiki) for a detailed explanation of each and every option and feature.
336 |
337 |
338 | # Frequent & Seamless Updates
339 | ReconSpider is under active development; updates for fixing bugs, optimizing performance and adding new features are being rolled out. Custom error handling is not yet implemented, and the current focus is on building the required functionality.
340 |
341 |
342 | # Special Thanks & Contributors
343 |
344 | * [Aravindha](https://github.com/Aravindha1234u)
345 | * [Ishan Batish](https://www.linkedin.com/in/ishanbatish/)
346 | * [Adithyan AK](https://github.com/adithyan-ak)
347 | * [S0md3v](https://github.com/s0md3v/)
348 | * [Parshant](mailto:parshant.dhall@gmail.com)
349 |
--------------------------------------------------------------------------------
/core/__init__.py:
--------------------------------------------------------------------------------
1 | from .repl_prompt import *
2 |
--------------------------------------------------------------------------------
/core/colors.py:
--------------------------------------------------------------------------------
1 | import sys
2 |
3 | colors = True # Output should be colored
4 | machine = sys.platform # Detecting the os of current system
5 | if machine.lower().startswith(('os', 'win', 'darwin', 'ios')):
6 | colors = False # Colors shouldn't be displayed in mac & windows
7 | if not colors:
8 | end = red = white = green = yellow = run = bad = good = info = que = ''
9 | else:
10 | white = '\033[97m'
11 | green = '\033[92m'
12 | red = '\033[91m'
13 | yellow = '\033[93m'
14 | end = '\033[0m'
15 | back = '\033[7;91m'
16 | info = '\033[93m[!]\033[0m'
17 | que = '\033[94m[?]\033[0m'
18 | bad = '\033[91m[-]\033[0m'
19 | good = '\033[32m[+]\033[0m'
20 | run = '\033[97m[~]\033[0m'
21 |
--------------------------------------------------------------------------------
/core/repl_prompt.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | from __future__ import unicode_literals
3 |
4 | from plugins.censys import censys_ip
5 | from plugins.dnsdump import dnsmap
6 | from plugins.honeypot import honeypot
7 | from plugins.shodan_io import shodan_host
8 | from plugins.domain import domain
9 | from plugins.Phonenumber import Phonenumber
10 | from plugins.reverseimagesearch import reverseimagesearch
11 | from plugins.metadata import gps_analyzer
12 | from plugins.macaddress import MacAddressLookup
13 | from plugins.ipaddress import IPHeatmap
14 | from plugins.torrent import torrent
15 | from plugins.proxy import ip2Proxy
16 | from plugins.maildb import maildb
17 | from plugins.Username import user
18 | from core.updater import update
19 | from prompt_toolkit import prompt
20 | from reconspider import menu
21 |
22 |
23 | def repl(): # Read–eval–print loop
24 | while 1:
25 | print(menu())
26 | user_input = prompt("\nReconspider >> ")
27 | if len(user_input)==0:
28 | print("\n")
29 | continue
30 | try:
31 | choice = int(user_input)
32 | except ValueError:
33 | print("\n")
34 | continue
35 |
36 | if choice == 1:
37 | while 1:
38 | ip = prompt("IP >> ")
39 | break
40 | shodan_host(ip)
41 | censys_ip(ip)
42 | continue
43 |
44 | elif choice == 2:
45 | while 1:
46 | host = input("HOST (URL / IP) >> ")
47 | port = input("PORT >> ")
48 | try:
49 | if port == "":
50 | port=80
51 | elif int(port) not in [80,443]:
52 | print("Invalid port - Available(80,443)")
53 | continue
54 | except ValueError:
55 | port=80
56 | break
57 | domain(host,int(port))
58 | continue
59 |
60 | elif choice == 3:
61 | while 1:
62 |                 ph = prompt("PHONE NUMBER (Eg:919485247632) >> ")
63 | break
64 | Phonenumber(ph)
65 | continue
66 |
67 | elif choice == 4:
68 | while 1:
69 | dnsmap_inp = prompt("DNS MAP (URL) >> ")
70 | break
71 | dnsmap(dnsmap_inp)
72 | continue
73 |
74 | elif choice == 5:
75 | while 1:
76 | img_path = prompt("Metadata (PATH) >> ")
77 | break
78 | gps_analyzer(img_path)
79 | continue
80 |
81 | elif choice == 6:
82 | while 1:
83 | img = prompt("REVERSE IMAGE SEARCH (PATH) >> ")
84 | break
85 | reverseimagesearch(img)
86 | continue
87 |
88 | elif choice == 7:
89 | while 1:
90 | hp_inp = prompt("HONEYPOT (IP) >> ")
91 | break
92 | honeypot(hp_inp)
93 | continue
94 |
95 | elif choice == 8:
96 | while 1:
97 | mac = prompt("MAC ADDRESS LOOKUP (Eg:08:00:69:02:01:FC) >> ")
98 | break
99 | MacAddressLookup(mac)
100 | continue
101 |
102 | elif choice == 9:
103 | while 1:
104 | IPHeatmap()
105 | break
106 | continue
107 |
108 | elif choice == 10:
109 | while 1:
110 | IP = prompt("IPADDRESS (Eg:192.168.1.1) >> ")
111 | break
112 | torrent(IP)
113 | continue
114 |
115 | elif choice == 11:
116 | while 1:
117 | print("\n1.Facebook \n2.Twitter \n3.Instagram\n")
118 | username = input("Username >> ")
119 | choice = input("choice >> ")
120 | break
121 | user(choice,username)
122 | continue
123 |
124 | elif choice == 12:
125 | while 1:
126 | IP = prompt("IPADDRESS (Eg:192.168.1.1) >> ")
127 | break
128 | ip2Proxy(IP)
129 | continue
130 |
131 | elif choice == 13:
132 | while 1:
133 | web = prompt("DOMAIN (Eg:intercom.io) >> ")
134 | break
135 | maildb(web)
136 | continue
137 |
138 | elif choice == 99:
139 | while 1:
140 | break
141 | update()
142 | continue
143 |
144 | elif choice == 0:
145 | exit('\nBye, See ya again..')
146 |
147 | else:
148 | pass
149 |
150 |
151 | # Handling ctrl+c
152 | try:
153 | repl()
154 | except KeyboardInterrupt:
155 | quit('\nBye, See ya again..')
156 |
--------------------------------------------------------------------------------
/core/update_log.py:
--------------------------------------------------------------------------------
1 | changes = '''Added Codacy quality badge + Bug fixes;Update feature support for Python 2;Update feature Bug Fixes;New Update feature for ReconSpider;Update usage info of reconspider.py;Added link to ReconSpider Wiki Guide;Added Development board and Contact details in README.md;Rename args file to repl_prompt;DNS Map image not auto-open Fix'''
2 |
--------------------------------------------------------------------------------
/core/updater.py:
--------------------------------------------------------------------------------
1 | import os
2 | import re
3 | import sys
4 | from requests import get
5 |
6 | if sys.version_info[0] > 2:
7 | from .update_log import changes
8 | from .colors import run, que, good, bad, info, end, green
9 |
10 | else:
11 | from update_log import changes
12 | from colors import run, que, good, bad, info, end, green
13 |
14 | def update():
15 | print('\n%s Checking for updates..' % run)
16 | latestCommit = get('https://raw.githubusercontent.com/bhavsec/reconspider/master/core/update_log.py').text
17 |
18 | if changes not in latestCommit: # just a hack to see if a new version is available
19 | changelog = re.search(r"changes = '''(.*?)'''", latestCommit)
20 | changelog = changelog.group(1).split(';') # splitting the changes to form a list
21 | print('\n%s A new version of ReconSpider is available.' % good)
22 | print('\n%s Changes:' % info)
23 | for change in changelog: # print changes
24 | print('%s>%s %s' % (green, end, change))
25 |
26 |         currentPath = os.getcwd().split('/') # split the current working directory into its components
27 | folder = currentPath[-1] # current directory name
28 | path = '/'.join(currentPath) # current directory path
29 |
30 | if sys.version_info[0] > 2:
31 | choice = input('\n%s Would you like to update? [Y/n] ' % que).lower()
32 |
33 | else:
34 | choice = raw_input('\n%s Would you like to update? [Y/n] ' % que).lower()
35 |
36 |
37 | if choice == 'y':
38 | print('\n%s Updating ReconSpider..' % run)
39 | os.system('git clone --quiet https://github.com/bhavsec/reconspider %s' % (folder))
40 | os.system('cp -r %s/%s/* %s && rm -r %s/%s/ 2>/dev/null' % (path, folder, path, path, folder))
41 | print('\n%s Update successful!' % good)
42 | sys.exit()
43 | else:
44 | print('\n%s Update Canceled!' % bad)
45 |
46 | else:
47 | print('\n%s ReconSpider is up to date!' % good)
48 |
--------------------------------------------------------------------------------
/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bhavsec/reconspider/5dae8c16b2845b5ccb73aba93e333fe308b0754a/logo.png
--------------------------------------------------------------------------------
/plugins/IP2PROXY-LITE-PX8.BIN.ZIP:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bhavsec/reconspider/5dae8c16b2845b5ccb73aba93e333fe308b0754a/plugins/IP2PROXY-LITE-PX8.BIN.ZIP
--------------------------------------------------------------------------------
/plugins/Phonenumber.py:
--------------------------------------------------------------------------------
1 | from plugins.api import phoneapis
2 | import requests
3 |
4 | def Phonenumber(ph):
5 | print ('[+]' + ' Fetching Phonenumber Details...' + '\n')
6 | apikey=phoneapis()
7 | if apikey == "":
8 | print("Add NumVerify api key to plugins/api.py")
9 | exit()
10 | ph=''.join([i for i in ph if i.isdigit()])
11 | for api_key in apikey.split(","):
12 | url = ("http://apilayer.net/api/validate?access_key="+api_key+"&number="+str(ph))
13 | try:
14 | response=requests.get(url)
15 | if 'error' in response.json().keys():
16 | continue
17 | elif response.json()['valid']==False:
18 | print("Error: Invalid Mobile Number")
19 | return
20 | else:
21 | get=response.json()
22 | print("Number: "+get['number'])
23 | print("Type: "+get['line_type'])
24 | print("CountryCode: "+get['country_code'])
25 | print("Country: "+get['country_name'])
26 | print("Location: "+get['location'])
27 | print("Carrier: "+get['carrier'])
28 | print("")
29 | return
30 | except:
31 | continue
32 | print(str(response.json()['error']['info']).split(".")[0])
33 |
34 |
--------------------------------------------------------------------------------
/plugins/Username.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from bs4 import BeautifulSoup
3 | import tweepy
4 |
5 | out=[]
6 |
7 | def user(choice,username):
8 | if choice == '1':
9 | pass
10 | elif choice == '2':
11 | ScrapTweets(username)
12 | return()
13 | elif choice == '3':
14 | Instagram(username)
15 | return()
16 | else:
17 | exit()
18 |
19 | search_string = "https://en-gb.facebook.com/" + username
20 |
21 | #response is stored after request is made
22 | response = requests.get(search_string)
23 |
24 | #Response is stored and parsed to implement beautifulsoup
25 | soup = BeautifulSoup(response.text, 'html.parser')
26 |
27 | #List that will store the data that is to be fetched
28 |
29 | ###Finding Name of the user
30 | #Main div element is found which contains all the information
31 | main_div = soup.div.find(id="globalContainer")
32 |
33 | #finding name of the user
34 | def find_name():
35 | name = main_div.find(id="fb-timeline-cover-name").get_text()
36 | print("\n"+"Name:"+name)
37 |
38 | ###Finding About the user details
39 | #finding work details of the user
40 | def find_eduwork_details():
41 | try:
42 | education = soup.find(id="pagelet_eduwork")
43 | apple=education.find(attrs={"class":"_4qm1"})
44 | if (apple.get_text() != " "):
45 | for category in education.find_all(attrs={"class":"_4qm1"}):
46 | print(category.find('span').get_text() + " : ")
47 | for company in category.find_all(attrs={"class":"_2tdc"}):
48 | if (company.get_text() != " "):
49 | print(company.get_text())
50 | else:
51 | continue
52 | else:
53 | print("No work details found")
54 | except Exception as e:
55 | print(str(e))
56 | print()
57 |
58 | #finding home details of the user
59 | def find_home_details():
60 | if(soup.find(id="pagelet_hometown") !=" "):
61 | home = soup.find(id="pagelet_hometown")
62 | for category in home.find_all(attrs={"class":"_4qm1"}):
63 | print(category.find('span').get_text() + " : ")
64 | for company in category.find_all(attrs={"class":"_42ef"}):
65 | if (company.get_text() != " "):
66 | print(company.get_text())
67 | else:
68 | continue
69 | else:
70 | print("No Home details found")
71 |
72 | #finding contact details of the user
73 |
74 |
75 | ###Logic for finding the status of the response
76 |     if response.status_code == 200:
77 | find_name()
78 | find_eduwork_details()
79 | find_home_details()
80 |
81 |     elif response.status_code == 404:
82 | print("Error: Profile not found")
83 | else:
84 | print("Error: some other response")
85 | return()
86 |
87 | def Instagram(username):
88 |
89 | r = requests.get("https://www.instagram.com/"+ username +"/?__a=1")
90 | if r.status_code == 200:
91 | res = r.json()['graphql']['user']
92 | print("\nUsername: " + res['username'])
93 | print("Full Name: "+res['full_name'])
94 | try:
95 | print("Business Category: "+res['edge_follow']['business_category_name'])
96 | except Exception as e:
97 | print("Account :"+" Private" + str(e))
98 | finally:
99 | print("Biograph: " + res['biography'])
100 | print("URL: "+ str(res['external_url']))
101 | print("Followers: "+str(res['edge_followed_by']['count']))
102 | print("Following: "+str(res['edge_follow']['count']))
103 | print("Profile Picture HD: " + res['profile_pic_url_hd'])
104 | elif r.status_code == 404:
105 | print("Error: Profile Not Found")
106 | else:
107 | print("Error: Something Went Wrong")
108 |
109 | def ScrapTweets(username):
110 | auth = tweepy.OAuthHandler("f0rCnr7Tln5EnIqiD6JcuMIJ8", "DmwOASEbukzltfyZx66KQGbguORJkEqpZdGMNvbiefJoIeYvWl")
111 | auth.set_access_token("884691164900737025-nTLY2Z4KVMX4IS294Ap43hPxmDZrXSW", "oDo8dV8RgPaJpa6ifYFgp5F0K7huAb1rIhhUSl2p2ewxA")
112 | api = tweepy.API(auth)
113 | screen_name = username
114 | user = api.get_user(screen_name)
115 |
116 |
117 | try:
118 |         print("Full Name of the User is : " + user.name)
119 | except Exception as e:
120 | print("User Name -->"+" Not Found" + str(e))
121 | print()
122 |
123 | try:
124 | ID = user.id_str
125 | print("The ID of the user is : " + ID)
126 | except Exception as e:
127 | print("User Id--> "+"Not Found" + str(e))
128 | print()
129 |
130 | for friend in api.friends(screen_name):
131 | print(friend.screen_name)
132 |
133 | description = api.blocks_ids(screen_name)
134 | print("This User is blocked by : " + str(description))
135 |
136 |
137 |
138 |
--------------------------------------------------------------------------------
/plugins/__init__.py:
--------------------------------------------------------------------------------
1 | from .censys import *
2 | from .dnsdump import *
3 | from .honeypot import *
4 | from .nslookup import *
5 | from .portscan import *
6 | from .shodan_io import *
7 | from .whois import *
8 |
--------------------------------------------------------------------------------
/plugins/api.py:
--------------------------------------------------------------------------------
1 | def phoneapis():
2 | api= "e01791e4d18fbbdfa0c9033bf207decd,2f8c8e865a0b25bbf4da08c4db039b8d"
3 | return str(api)
4 | def ipstack():
5 | api="276cfee2c31729505691e515e8321a02"
6 | return str(api)
7 | def gmap():
8 | api="AIzaSyAKGik6Fok3_mbIsgquaAnDGNy-h_AjhVw"
9 | return str(api)
10 |
--------------------------------------------------------------------------------
/plugins/censys.py:
--------------------------------------------------------------------------------
1 | import json
2 | from requests import get
3 |
4 |
5 | def censys_ip(IP):
6 | try:
7 | dirty_response = get('https://censys.io/ipv4/%s/raw' % IP).text
  |         # the raw page HTML-escapes quotes and wraps the JSON record in a
  |         # <code class="json"> block (markers assumed from the page layout)
8 |         clean_response = dirty_response.replace('&quot;', '"')
9 |         x = clean_response.split('<code class="json">')[1].split('</code>')[0]
10 | censys = json.loads(x)
11 |
12 | print("\n[+] Gathering Location Information from [censys]\n")
13 | print("Country -------> "+str(censys["location"]["country"]))
14 | print("Continent -----> "+str(censys["location"]["continent"]))
15 | print("Country Code --> "+str(censys["location"]["country_code"]))
16 | print("Latitude ------> "+str(censys["location"]["latitude"]))
17 | print("Longitude -----> "+str(censys["location"]["longitude"]))
18 | except:
19 | print("Unavailable")
20 |
--------------------------------------------------------------------------------
/plugins/dnsdump.py:
--------------------------------------------------------------------------------
1 | import re
2 | import os
3 | import requests
4 | import platform
5 |
6 |
7 | def dnsmap(dnsmap_inp):
8 | domain = dnsmap_inp
9 |
10 | image = requests.get('https://dnsdumpster.com/static/map/%s.png' % domain)
11 |
12 | if image.status_code == 200:
13 | image_name = domain.replace(".com","")
14 | with open('%s.png' % image_name, 'wb') as f:
15 | f.write(image.content)
16 | print("\n%s.png DNS Map image stored to current reconspider directory" % image_name)
17 |
18 | if (platform.system() != "Windows"):
19 | pass
20 | else:
21 | os.startfile('%s.png' % image_name)
22 | else:
23 | print("Sorry We Would not find the dnsmap")
24 |
--------------------------------------------------------------------------------
/plugins/domain.py:
--------------------------------------------------------------------------------
1 | import socket
2 | from .webosint.cmsdetect import CMSdetect
3 | from .webosint.nslookup import nsLookup
4 | from .webosint.portscan import DefaultPort,Customrange
5 | from .webosint.reverseip import ReverseIP
6 | from .webosint.subdomain import SubDomain
7 | from .webvuln.bruteforce import ssh
8 | from .webvuln.clickjacking import ClickJacking
9 | from .webvuln.cors import Cors
10 | from .webvuln.hostheader import HostHeader
11 | from .webosint.header import header
12 | from .webosint.crawler import crawler
13 | from .webosint.who.whoami import whoami
14 | from .portscan import PortScan
15 |
16 | # Checking whether the target host is alive or dead
17 | def CheckTarget(host,port):
18 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
19 | result = s.connect_ex((host, port))
20 |
21 | if result == 0:
22 | return True
23 | else:
24 | return False
25 |
26 | # Main Method
27 | def domain(host,port):
28 |
29 | if CheckTarget(host,port)==True:
30 | print("\nTarget Alive \n")
31 | Menu(host,port)
32 | else:
33 | print("The Host is Unreachable \n")
34 | exit()
35 |
36 |
37 | NmapFunctions = {
38 | 1: DefaultPort,
39 | 2: Customrange,
40 | }
41 |
42 |
43 | def nmaprec(host,port):
44 | try:
45 | Choice = 1
46 | while True:
47 | print("1. Scan Default Ports (22-443)")
48 | print("2. Enter Custom Range")
49 | print("3. Back to Main Menu")
50 | print('')
51 | Choice = int(input(">> "))
52 | if (Choice >= 0) and (Choice < 3):
53 | NmapFunctions[Choice](host, port)
54 | elif Choice == 3:
55 | Menu(host,port)
56 | else:
57 | print("Please choose an Appropriate option")
58 | except AttributeError:
59 | PortScan(host)
60 |
61 |
62 | BruteFunctions = {1: ssh}
63 |
64 | def BruteForce(host, port):
65 | print("\nBrute Forcing SSH")
66 | BruteFunctions[1](host,port)
67 |
68 |
69 | MainFunctions = {
70 | 1: ReverseIP,
71 | 2: SubDomain,
72 | 3: nsLookup,
73 | 4: CMSdetect,
74 | 5: nmaprec,
75 | 6: BruteForce,
76 | 7: ClickJacking,
77 | 8: Cors,
78 | 9: HostHeader,
79 | 10:header,
80 | 11:crawler,
81 | 12:whoami
82 | }
83 |
84 | def Menu(host,port):
85 | Selection = 1
86 | while True:
87 | print('')
88 | print("1."+" ReverseIP")
89 | print("2."+" SubDomain")
90 | print("3."+" nsLookup")
91 | print("4."+" CMSDetect")
92 | print("5."+" PortScan")
93 | print("6."+" Bruteforce")
94 | print("7."+" ClickJacking")
95 | print("8."+" CORS")
96 | print("9."+" Host Header Injection")
97 | print("10."+" Header")
98 | print("11."+" Crawler")
99 | print("12."+" Whoami")
100 | print("99."+" Exit")
101 | print('')
102 | Selection = int(input("Domain >> "))
103 | if (Selection >= 0) and (Selection <=12):
104 | MainFunctions[Selection](host, port)
105 | elif Selection == 99:
106 | return
107 | else:
108 | print("Error: Please choose an Appropriate option")
109 | print('')
110 |
--------------------------------------------------------------------------------
/plugins/honeypot.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from requests import get
3 | from core.config import shodan_api
  | from core.colors import bad
4 |
5 |
6 | def honeypot(inp):
7 | honey = 'https://api.shodan.io/labs/honeyscore/%s?key=%s' % (inp, shodan_api)
8 | try:
9 | result = get(honey).text
10 |     except:
11 |         sys.stdout.write('\n%s No information available' % bad + '\n')
12 |         return  # nothing to check when the request failed
13 |     if "error" in result or "404" in result:
14 | print("IP Not found")
15 | return
16 | elif result:
17 |         probability = str(float(result) * 100)
18 |         print('\n[+] Honeypot Probability: %s%%' % (probability) + '\n')
19 | else:
20 | print("Something went Wrong")
21 |
--------------------------------------------------------------------------------
/plugins/ipaddress.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import gmplot
3 | from plugins.api import ipstack
4 | import webbrowser
5 | import re
6 | from plugins.api import gmap
7 | from ipaddress import *
8 | from plugins.webosint.who.whois import *
9 |
10 | api_key = ipstack()
11 | if api_key == "" :
12 |     print("Add your ipstack API key to plugins/api.py")
13 | exit()
14 | if gmap() == "" :
15 | print("Add you Google Heatmap api key to plugins/api.py")
16 | exit()
17 |
18 | def IPHeatmap():
19 | print('''
20 | 1) Trace single IP
21 | 2) Trace multiple IPs''')
22 | choice = input("OPTIONS >> ")
23 |
24 | if choice == '1':
25 | IP = input("Enter the IP : ")
26 | read_single_IP(IP)
27 | elif choice == '2':
28 | IP_file = input("Enter the IP File Location : ")
29 | read_multiple_IP(IP_file)
30 | else:
31 | print("\nError: Please choose an appropriate option")
32 |
33 | def read_single_IP(IP):
34 | print ('[+]' + "Processing IP: %s ..." %IP + '\n')
35 |     if not re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$",IP):
36 |         print("Invalid IP Address")
37 |         IPHeatmap()
  |         return  # don't continue the lookup with an invalid address
38 | lats = []
39 | lons = []
40 | r = requests.get("http://api.IPstack.com/" + IP + "?access_key=" + api_key)
41 | response = r.json()
42 | print('')
43 | print("IP :"+response['ip'])
44 | print("Location : " + response['region_name'])
45 | print("Country : " +response['country_name'])
46 | print("Latitude :"+" {latitude}".format(**response))
47 | print("Longitude :"+" {longitude}".format(**response))
48 | if input("Want More Whois Details (Y/N): ") in ("Y","y"):
49 | whois_more(IP)
50 | if response['latitude'] and response['longitude']:
51 | lats = response['latitude']
52 | lons = response['longitude']
53 | maps_url = "https://maps.google.com/maps?q=%s,+%s" % (lats, lons)
54 | print("")
55 |         openWeb = input("Open GPS location in web browser? (Y/N) ")
56 | if openWeb.upper() == 'Y':
57 | webbrowser.open(maps_url, new=2)
58 | else:
59 | pass
60 |
61 | def read_multiple_IP(IP_file):
62 | lats = []
63 | lons = []
64 | try:
65 | f = open(IP_file, "r")
66 | f1 = f.readlines()
67 | print ('[+]' + " Processing....." + '\n')
68 | for line in f1:
69 | IP=re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$",line)
70 | IP=IP.group()
71 | r = requests.get("http://api.IPstack.com/" + IP + "?access_key=" + api_key)
72 | response = r.json()
73 | if response['latitude'] and response['longitude']:
74 | lats.append(response['latitude'])
75 | lons.append(response['longitude'])
76 | heat_map(lats,lons)
77 | except IOError:
78 | print("ERROR : File Does not Exist\n")
79 | IPHeatmap()
80 |
81 |
82 | def heat_map(lats,lons):
83 | gmap3 = gmplot.GoogleMapPlotter(20.5937, 78.9629, 5)
84 | gmap3.heatmap(lats,lons)
85 | gmap3.scatter(lats,lons, '#FF0000', size=50, marker=False)
86 | gmap3.plot(lats,lons, 'cornflowerblue', edge_width = 3.0)
87 | save_location = input("Enter the location to save file : ")
88 | gmap3.apikey = gmap()
89 | location = save_location + "/heatmap.html"
90 | gmap3.draw(location)
91 | print("Heatmap saved at " + location)
92 |     openWeb = input("Open Heatmap in web browser? (Y/N) : ")
93 | if openWeb.upper() == 'Y':
94 | webbrowser.open(url=("file:///"+location))
95 | else:
96 | pass
97 |
--------------------------------------------------------------------------------
/plugins/macaddress.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | def MacAddressLookup(mac):
4 | url = ("https://macvendors.co/api/" + mac)
5 | response=requests.get(url)
6 | result=response.json()
7 | if result["result"]:
8 | final=result['result']
9 | print("Company:" + final["company"])
10 | print("Address:" + final["address"])
11 | print("Country:" + final["country"])
12 | print("")
13 | else:
14 | print("Error: Something Went Wrong")
15 |
--------------------------------------------------------------------------------
/plugins/maildb.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | def maildb(emailaddress):
4 |     # the input is expected to be a domain such as intercom.io
  |     if "." in emailaddress:
5 | req=requests.get("https://api.hunter.io/v2/domain-search?domain="+emailaddress+"&api_key=9f189e87e011a1d2f3013ace7b14045dec60f62c")
6 | j=req.json()
7 |         print("[+] Searching breached mail IDs for "+emailaddress+"...\n")
8 | for i in range(len(j['data']['emails'])):
9 | print("Email ID :",j['data']['emails'][i]['value'])
10 | print("First Name :",j['data']['emails'][i]['first_name'])
11 | print("Last Name :",j['data']['emails'][i]['last_name'])
12 | if j['data']['emails'][i]['position']!=None:
13 | print("Position :",j['data']['emails'][i]['position'])
14 | if j['data']['emails'][i]['linkedin']!=None:
15 | print("Linkedin :",j['data']['emails'][i]['linkedin'])
16 | if j['data']['emails'][i]['twitter']!=None:
17 | print("Twitter :",j['data']['emails'][i]['twitter'])
18 | print()
19 | else:
20 | print("Error: Invalid Email Address")
21 |
--------------------------------------------------------------------------------
/plugins/metadata.py:
--------------------------------------------------------------------------------
1 | import webbrowser
2 | from PIL import Image
3 | from PIL.ExifTags import *
4 |
5 | def get_exif(fn):
6 | try:
7 | ret = {}
8 | print ('[+]' + 'Checking the Metadata...' + '\n')
9 | i = Image.open(fn)
10 | info = i._getexif()
11 |         if info is None:
12 |             print("Metadata is not very informative")
13 | return -1
14 | for tag, value in info.items():
15 | decoded = TAGS.get(tag, tag)
16 | ret[decoded] = value
17 | return ret
18 | except IOError:
19 | print('')
20 | print("ERROR : File not found")
21 | exit()
22 |
23 | def gps_analyzer(img_path):
24 |
25 | a = get_exif(img_path)
26 |
27 | if a==-1:
28 | return
29 | for x,y in a.items():
30 | print("%s : %s" %(x, y))
31 |
32 | if "GPSInfo" in a:
33 | lat = [float(x) / float(y) for x, y in a['GPSInfo'][2]]
34 | latref = a['GPSInfo'][1]
35 | lon = [float(x) / float(y) for x, y in a['GPSInfo'][4]]
36 | lonref = a['GPSInfo'][3]
37 |
38 | lat = lat[0] + lat[1] / 60 + lat[2] / 3600
39 | lon = lon[0] + lon[1] / 60 + lon[2] / 3600
40 | if latref == 'S':
41 | lat = -lat
42 | if lonref == 'W':
43 | lon = -lon
44 | map_it(lat, lon)
45 |
46 | else:
47 | print('')
48 | print("GPS location not found")
49 |
50 |
51 | def map_it(lat, lon):
52 | # Prints latitude and longitude values
53 | print('')
54 | print("Accurate Latitude : %s" % lat)
55 | print("Accurate Longitude : %s" % lon)
56 | print('')
57 | # Creates the URL for the map using the latitude and longitude values
58 | maps_url = "https://maps.google.com/maps?q=%s,+%s" % (lat, lon)
59 | # Prompts the user to launch a web browser with the map
60 |     openWeb = input("Open GPS location in web browser? (Y/N) ")
61 | if openWeb.upper() == 'Y':
62 | webbrowser.open(maps_url, new=2)
63 |
--------------------------------------------------------------------------------
/plugins/nslookup.py:
--------------------------------------------------------------------------------
1 | from requests import get
2 |
3 |
4 | def nslookup(inp):
5 | result = get('http://api.hackertarget.com/dnslookup/?q=' + inp).text
6 | print('\n' + result)
7 |
--------------------------------------------------------------------------------
/plugins/output.csv:
--------------------------------------------------------------------------------
1 | Target,Type,Data
2 |
--------------------------------------------------------------------------------
/plugins/portscan.py:
--------------------------------------------------------------------------------
1 | from requests import get
2 |
3 |
4 | def PortScan(inp):
5 | result = get('http://api.hackertarget.com/nmap/?q=' + inp).text
6 | print('\n' + result + '\n')
7 |
--------------------------------------------------------------------------------
/plugins/proxy.py:
--------------------------------------------------------------------------------
1 | import IP2Proxy
2 | import re
3 | import requests
4 | from plugins.api import *
5 | from plugins.webosint.who.whois import *
6 |
7 |
8 | def ip2Proxy(IP):
9 |
10 | if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$",IP):
11 | db = IP2Proxy.IP2Proxy()
12 | db.open("./plugins/IP2PROXY-LITE-PX8.BIN")
13 | record = db.get_all(IP)
14 | db.close()
15 | if record['is_proxy']!=0:
16 | #print(record)
17 | print("Proxy: " + "Enabled")
18 | print("Proxy Type:" + record['proxy_type'])
19 | print("Country Code:" + record['country_short'])
20 | print("Country:" + record['country_long'])
21 | print("Region Name:" + record['region'])
22 | print("City:" + record['city'])
23 | print("Isp:" + record['isp'])
24 | print("Domain:" + record['domain'])
25 | print("Usage:" + record['usage_type'])
26 | print("ASN:" + record['asn'])
27 | print("Name:" + record['as_name'])
28 | api_key = ipstack()
29 | if api_key == "":
30 |             print("Add your ipstack API key to plugins/api.py")
31 | exit()
32 | r = requests.get("http://api.IPstack.com/" + IP + "?access_key=" + api_key)
33 | response = r.json()
34 | print("Latitude :"+" {latitude}".format(**response))
35 | print("Longitude :"+" {longitude}".format(**response))
36 | if input("Want More Whois Details (Y/N):") in ["Y","y"]:
37 | whois_more(IP)
38 | if response['latitude'] and response['longitude']:
39 | lats = response['latitude']
40 | lons = response['longitude']
41 | url = "https://maps.google.com/maps?q=%s,+%s" % (lats, lons)
42 | print("Google Map Link :" + url)
43 | else:
44 | print("IP does not use any Proxy or VPN")
45 | else:
46 | print("\nEnter a Valid IP Address")
47 | print("")
48 |
--------------------------------------------------------------------------------
/plugins/reverseimagesearch.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import webbrowser
3 |
4 | def reverseimagesearch(img):
5 | try:
6 | surl='https://www.google.co.in/searchbyimage/upload'
7 | murl={'encoded_image': (img, open(img, 'rb')), 'image_content': ''}
8 | response = requests.post(surl, files=murl, allow_redirects=False)
9 | fetchUrl = response.headers['Location']
10 |         openWeb = input("Open Search Result in web browser? (Y/N) : ")
11 | if openWeb.upper() == 'Y':
12 | webbrowser.open(fetchUrl)
13 | else:
14 | pass
15 | except IOError:
16 | print()
17 | print("ERROR : File Does not Exist\n")
18 |
--------------------------------------------------------------------------------
/plugins/shodan_io.py:
--------------------------------------------------------------------------------
1 | import shodan
2 | from core.config import shodan_api
3 |
4 | api = shodan.Shodan(shodan_api)
5 |
6 |
7 | def shodan_host(IP):
8 | try:
9 | host = api.host(IP)
10 | print("\n[+] Gathering IP Address Information from [shodan]\n")
11 | print("IP Address ----> " + str(host['ip_str']))
12 | print("Country -------> " + str(host['country_name']))
13 | print("City ----------> " + str(host['city']))
14 | print("Organization --> " + str(host['org']))
15 | print("ISP -----------> " + str(host['isp']))
16 | print("Open ports ----> " + str(host['ports']))
17 | except:
18 | print("Unavailable")
19 |
20 |
21 | def shodan_ip(IP):
22 | try:
23 | host = api.host(IP)
24 | print("\n[+] Gathering Domain Information from [shodan]\n")
25 | print("IP Address ----> " + str(host['ip_str']))
26 | print("Country -------> " + str(host['country_name']))
27 | print("City ----------> " + str(host['city']))
28 | print("Organization --> " + str(host['org']))
29 | print("ISP -----------> " + str(host['isp']))
30 | print("Open ports ----> " + str(host['ports']))
31 | except:
32 | print("Unavailable")
33 |
--------------------------------------------------------------------------------
/plugins/torrent.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 |
4 | def torrent(IP):
5 |
6 | r = requests.get("https://api.antitor.com/history/peer/?ip="+ IP +"&key=3cd6463b477d46b79e9eeec21342e4c7")
7 | res = r.json()
8 | print ( '[+]' + " Processing Torrent....." + '\n')
9 | if len(res)>4:
10 | print("IP Address: "+res['ip'])
11 | print("ISP: "+res['isp'])
12 | print("Country: "+res['geoData']['country'])
13 | print("Latitude: "+str(res['geoData']['latitude']))
14 | print("Longitude: "+str(res['geoData']['longitude'])+"\n")
15 | for i in res['contents']:
16 | print("Category:"+i['category'])
17 | print("Name:"+i['name'])
18 | print("Start:" + i['startDate'])
19 | print("End:" + i['endDate'])
20 | print("Size:"+str(i['torrent']['size']))
21 | print("")
22 | else:
23 | print("Error: Something Went Wrong")
24 |
--------------------------------------------------------------------------------
/plugins/webosint/cmsdetect.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | def CMSdetect(domain, port):
4 | payload = {'key': '1641c3b9f2b1c8676ceaba95d00f7cf2e3531830c5fa9a6cc5e2d922b2ed7165dcce66', 'url': domain}
5 | cms_url = "https://whatcms.org/APIEndpoint/Detect"
6 | response = requests.get(cms_url, params=payload)
7 | cms_data = response.json()
8 | cms_info = cms_data['result']
9 | if cms_info['code'] == 200:
10 | print('Detected CMS : %s' % cms_info['name'])
11 | print('Detected Version : %s' % cms_info['version'])
12 | print('Confidence : %s' % cms_info['confidence'])
13 |     else:
14 |         print(cms_info['msg'])
15 |         print('Detected CMS : %s' % cms_info.get('name'))
16 |         print('Detected Version : %s' % cms_info.get('version'))
17 |
--------------------------------------------------------------------------------
/plugins/webosint/crawler.py:
--------------------------------------------------------------------------------
1 | import os
2 | import bs4
3 | import requests
4 |
5 | user_agent = {'User-Agent' : 'Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0'}
6 |
7 | def crawler(target,port):
8 | if port == 80:
9 | port="http://"
10 | elif port == 443:
11 | port="https://"
12 |     else:
13 |         print("Couldn't fetch data for the given PORT")
  |         return
14 |
15 | total = []
16 | r_total = []
17 | sm_total = []
18 | js_total = []
19 | css_total = []
20 | int_total = []
21 | ext_total = []
22 | img_total = []
23 | print ('\n' + '[+]' + ' Crawling Target...'+ '\n')
24 | try:
25 | target=port+target
26 | rqst = requests.get(target, headers=user_agent, verify=True, timeout=10)
27 | sc = rqst.status_code
28 | if sc == 200:
29 | domain = target.split('//')
30 | domain = domain[1]
31 | page = rqst.content
32 | soup = bs4.BeautifulSoup(page, 'lxml')
33 | file = '{}.dump'.format(domain)
34 | path = os.getcwd()
35 | r_url = 'http://{}/robots.txt'.format(domain)
36 | sm_url = 'http://{}/sitemap.xml'.format(domain)
37 |
38 | print( '[+]' + ' Looking for robots.txt' , end = '')
39 | r_rqst = requests.get(r_url, headers=user_agent, verify=True, timeout=10)
40 | r_sc = r_rqst.status_code
41 |
42 | if r_sc == 200:
43 | print('['.rjust(9, '.') + ' Found ]' )
44 | print('[+]' + ' Extracting robots Links', end = '')
45 | r_page = r_rqst.text
46 | r_scrape = r_page.split('\n')
47 | for entry in r_scrape:
48 | if 'Disallow' in entry:
49 | url = entry.split(':')
50 | try:
51 | url = url[1]
52 | url = url.strip()
53 | total.append(url)
54 | r_total.append(target + url)
55 | except Exception as e:
56 | print(e)
57 | elif 'Allow' in entry:
58 | url = entry.split(':')
59 | try:
60 | url = url[1]
61 | url = url.strip()
62 | total.append(url)
63 | r_total.append(target + url)
64 | except Exception as e:
65 | print(e)
66 | r_total = set(r_total)
67 | print('['.rjust(8, '.') + ' {} ]'.format(str(len(r_total))))
68 |
69 | elif r_sc == 404:
70 | print( '['.rjust(9, '.') + ' Not Found ]' )
71 | else:
72 | print( '['.rjust(9, '.') + ' {} ]'.format(r_sc) )
73 |
74 | print('[+]' + ' Looking for sitemap.xml' , end = '')
75 | sm_rqst = requests.get(sm_url, headers=user_agent, verify=True, timeout=10)
76 | sm_sc = sm_rqst.status_code
77 | if sm_sc == 200:
78 | print('['.rjust(8, '.') + ' Found ]' )
79 | print('[+]' + ' Extracting sitemap Links', end = '')
80 | sm_page = sm_rqst.content
81 | sm_soup = bs4.BeautifulSoup(sm_page, 'xml')
82 | links = sm_soup.find_all('loc')
83 | for url in links:
84 | url = url.get_text()
85 | if url is not None:
86 | total.append(url)
87 | sm_total.append(url)
88 | sm_total = set(sm_total)
89 | print('['.rjust(7, '.') + ' {} ]'.format(str(len(sm_total))))
90 |
91 | elif sm_sc == 404:
92 | print( '['.rjust(8, '.') + ' Not Found ]' )
93 | else:
94 | print( '['.rjust(8, '.') + ' {} ]'.format(sm_sc) )
95 |
96 | print('[+]' + ' Extracting CSS Links' , end = '')
97 | css = soup.find_all('link')
98 | for link in css:
99 | url = link.get('href')
100 | if url is not None and '.css' in url:
101 | total.append(url)
102 | css_total.append(url)
103 | css_total = set(css_total)
104 | print('['.rjust(11, '.') + ' {} ]'.format(str(len(css_total))))
105 |
106 | print('[+]' + ' Extracting Javascript Links' , end = '')
107 | js = soup.find_all('script')
108 | for link in js:
109 | url = link.get('src')
110 | if url is not None and '.js' in url:
111 | total.append(url)
112 | js_total.append(url)
113 | js_total = set(js_total)
114 | print('['.rjust(4, '.') + ' {} ]'.format(str(len(js_total))))
115 |
116 | print('[+]' + ' Extracting Internal Links' , end = '')
117 | links = soup.find_all('a')
118 | for link in links:
119 | url = link.get('href')
120 | if url is not None:
121 | if domain in url:
122 | total.append(url)
123 | int_total.append(url)
124 | int_total = set(int_total)
125 | print('['.rjust(6, '.') + ' {} ]'.format(str(len(int_total))))
126 |
127 | print('[+]' + ' Extracting External Links' , end = '')
128 | for link in links:
129 | url = link.get('href')
130 | if url is not None:
131 | if domain not in url and 'http' in url:
132 | total.append(url)
133 | ext_total.append(url)
134 | ext_total = set(ext_total)
135 | print('['.rjust(6, '.') + ' {} ]'.format(str(len(ext_total))))
136 |
137 | print('[+]' + ' Extracting Images' , end = '')
138 | images = soup.find_all('img')
139 | for link in images:
140 | src = link.get('src')
141 | if src is not None and len(src) > 1:
142 | total.append(src)
143 | img_total.append(src)
144 | img_total = set(img_total)
145 | print('['.rjust(14, '.') + ' {} ]'.format(str(len(img_total))))
146 |
147 | total = set(total)
148 | print('\n' + '[+]' + ' Total Links Extracted : ' + str(len(total)) + '\n')
149 |
150 |             if len(total) != 0:
151 |                 print('[+]' + ' Dumping Links in ' + '{}/dumps/{}'.format(path, file))
  |                 os.makedirs(path + '/dumps', exist_ok=True)  # make sure the dump directory exists
152 |                 with open(path+'/dumps/{}'.format('{}.dump'.format(domain)), 'w') as dumpfile:
153 | dumpfile.write('URL : {}'.format(target) + '\n\n')
154 | try:
155 | dumpfile.write('Title : {}'.format(soup.title.string) + '\n')
156 | except AttributeError as e:
157 | dumpfile.write('Title : None' + '\n')
158 | dumpfile.write('\nrobots Links : ' + str(len(r_total)))
159 | dumpfile.write('\nsitemap Links : ' + str(len(sm_total)))
160 | dumpfile.write('\nCSS Links : ' + str(len(css_total)))
161 | dumpfile.write('\nJS Links : ' + str(len(js_total)))
162 | dumpfile.write('\nInternal Links : ' + str(len(int_total)))
163 | dumpfile.write('\nExternal Links : ' + str(len(ext_total)))
164 | dumpfile.write('\nImages Links : ' + str(len(img_total)))
165 | dumpfile.write('\nTotal Links Found : ' + str(len(total)) + '\n')
167 |
168 |                 if len(r_total) != 0:
169 | dumpfile.write('\nrobots :\n\n')
170 | for item in r_total:
171 | dumpfile.write(str(item) + '\n')
172 |                 if len(sm_total) != 0:
173 | dumpfile.write('\nsitemap :\n\n')
174 | for item in sm_total:
175 | dumpfile.write(str(item) + '\n')
176 |                 if len(css_total) != 0:
177 | dumpfile.write('\nCSS :\n\n')
178 | for item in css_total:
179 | dumpfile.write(str(item) + '\n')
180 |                 if len(js_total) != 0:
181 | dumpfile.write('\nJavascript :\n\n')
182 | for item in js_total:
183 | dumpfile.write(str(item) + '\n')
184 |                 if len(int_total) != 0:
185 | dumpfile.write('\nInternal Links :\n\n')
186 | for item in int_total:
187 | dumpfile.write(str(item) + '\n')
188 |                 if len(ext_total) != 0:
189 | dumpfile.write('\nExternal Links :\n\n')
190 | for item in ext_total:
191 | dumpfile.write(str(item) + '\n')
192 |                 if len(img_total) != 0:
193 | dumpfile.write('\nImages :\n\n')
194 | for item in img_total:
195 | dumpfile.write(str(item) + '\n')
196 |
197 | else:
198 | print ( '[-]' + ' Error : ' + str(sc))
199 | except Exception as e:
200 | print( '[-] Error : ' + str(e))
201 |
--------------------------------------------------------------------------------
/plugins/webosint/header.py:
--------------------------------------------------------------------------------
1 | import requests
2 | requests.packages.urllib3.disable_warnings()
3 |
4 | def header(target,port):
5 | if port == 80:
6 | port="http://"
7 | elif port == 443:
8 | port="https://"
9 |     else:
10 |         print("Couldn't fetch data for the given PORT")
11 | exit()
12 | print ('\n' + '[+]' + ' Headers :' + '\n')
13 | rqst = requests.get(port+target, verify=True, timeout=10)
14 | for k, v in rqst.headers.items():
15 | print ('[+]' + ' {} : '.format(k) + v)
16 |
--------------------------------------------------------------------------------
/plugins/webosint/nslookup.py:
--------------------------------------------------------------------------------
1 | from requests import get
2 |
3 | R = '\033[31m' # red
4 | G = '\033[32m' # green
5 | C = '\033[36m' # cyan
6 | W = '\033[0m' # white
7 |
8 | def nsLookup(host, port):
9 | print ( '[+]' + 'Fetching Details...' + '\n')
10 | result = get('http://api.hackertarget.com/dnslookup/?q=' + host).text
11 | print(result)
12 |
--------------------------------------------------------------------------------
/plugins/webosint/portscan.py:
--------------------------------------------------------------------------------
1 | import nmap
2 | import json
3 |
4 | def DefaultPort(Xhost, Yport):
5 | print('')
6 | print("Starting port scan with range 22-443")
7 | nm = nmap.PortScanner()
8 | result = nm.scan(Xhost, '22-443')
9 | display(result)
10 |
11 | def Customrange(Xhost, Yport):
12 | print('')
13 | port_range = input("Enter the range : ")
14 | print('')
15 | print("Starting port scan with range %s"%port_range)
16 | nm = nmap.PortScanner()
17 | result = nm.scan(Xhost, port_range)
18 | display(result)
19 |
20 | def display(result):
21 | new = next(iter(result['scan'].values()))
22 | ip_add = new['addresses']
23 | print('')
24 | print("IP Address : %s" % ip_add['ipv4'])
25 |     hosting = new['hostnames']
26 |     print('')
  |     # print every hostname returned (the scan may report fewer than two)
27 |     for idx, entry in enumerate(hosting, start=1):
28 |         print("Hostname %d : %s" % (idx, entry['name']))
31 | print('')
32 | print("Open Ports : ")
33 | print('')
34 | ports = new['tcp']
35 | json_scan = json.dumps(ports)
36 | parsed = json.loads(json_scan)
37 | print(json.dumps(parsed, indent=4, sort_keys=True))
38 | print('')
39 |
--------------------------------------------------------------------------------
/plugins/webosint/reverseip.py:
--------------------------------------------------------------------------------
1 | from requests import get
2 |
3 |
4 | def ReverseIP(host, port):
5 | print ( '[+]' + 'Checking whether the Target is reachable ...' + '\n')
6 | lookup = 'https://api.hackertarget.com/reverseiplookup/?q=%s' % host
7 | try:
8 | result = get(lookup).text
9 | print(result)
10 | except Exception as e:
11 |         print('Error: Invalid IP address ' + str(e))
12 |
--------------------------------------------------------------------------------
/plugins/webosint/subdomain.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | def SubDomain(host, port):
4 | print ('[+]' + 'Fetching Subdomains of Target...' + '\n')
5 | url = 'https://www.virustotal.com/vtapi/v2/domain/report'
6 |
7 | params = {'apikey':'1af37bfeb7b1628ba10695fb187987a6651793e37df006a5cdf8786b0e4f6453','domain':host}
8 |
9 | response = requests.get(url, params=params)
10 |
11 | subdomains = response.json()
12 |
13 | for x in subdomains['domain_siblings']:
14 | print(x)
15 |
--------------------------------------------------------------------------------
/plugins/webosint/who/output.txt:
--------------------------------------------------------------------------------
1 | % [whois.apnic.net]
2 | % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html
3 |
4 | % Information related to '182.72.162.0 - 182.72.162.63'
5 |
6 | % Abuse contact for '182.72.162.0 - 182.72.162.63' is 'ipspamsupport@airtel.com'
7 |
8 | inetnum: 182.72.162.0 - 182.72.162.63
9 | netname: KRCF-1933857-Coimbatore
10 | descr: KUMARAGURU COLLEGE OF TEC
11 | descr: n/a
12 | descr: KUMARAGURU COLLEGE OF TECHNOLOGY THUDIYALUR
13 | descr: ROAD SARAVANAMPATTI COIMBATORE-641035
14 | descr: Coimbatore
15 | descr: TAMIL NADU
16 | descr: India
17 | descr: Contact Person: N SIVARAMAKRISHNAN
18 | descr: Email: sivaramakrishnan.n.support@kct.ac.in
19 | descr: Phone: 9789559327
20 | country: IN
21 | admin-c: NA40-AP
22 | tech-c: NA40-AP
23 | mnt-by: MAINT-IN-BBIL
24 | mnt-irt: IRT-BHARTI-IN
25 | status: ASSIGNED NON-PORTABLE
26 | last-modified: 2017-02-27T10:46:40Z
27 | source: APNIC
28 |
29 | irt: IRT-BHARTI-IN
30 | address: Bharti Airtel Ltd.
31 | address: ISP Division - Transport Network Group
32 | address: 234 , Okhla Industrial Estate,
33 | address: Phase III, New Delhi-110020, INDIA
34 | e-mail: ipspamsupport@airtel.com
35 | abuse-mailbox: ipspamsupport@airtel.com
36 | admin-c: NA40-AP
37 | tech-c: NA40-AP
38 | auth: # Filtered
39 | remarks: ipspamsupport@airtel.com was validated on 2019-12-14
40 | mnt-by: MAINT-IN-BBIL
41 | last-modified: 2019-12-14T08:39:37Z
42 | source: APNIC
43 |
44 | person: Network Administrator
45 | nic-hdl: NA40-AP
46 | e-mail: noc-dataprov@airtel.com
47 | address: Bharti Airtel Ltd.
48 | address: ISP Division - Transport Network Group
49 | address: Plot no.16 , Udyog Vihar , Phase -IV , Gurgaon - 122015 , Haryana , INDIA
50 | address: Phase III, New Delhi-110020, INDIA
51 | phone: +91-124-4222222
52 | fax-no: +91-124-4244017
53 | country: IN
54 | mnt-by: MAINT-IN-BBIL
55 | last-modified: 2018-12-18T12:52:19Z
56 | source: APNIC
57 |
58 | % Information related to '182.72.162.0/24AS9498'
59 |
60 | route: 182.72.162.0/24
61 | descr: BHARTI-IN
62 | descr: Bharti Airtel Limited
63 | descr: Class A ISP in INDIA .
64 | descr: Plot No. CP-5,sector-8,
65 | descr: IMT Manesar
66 | descr: INDIA
67 | country: IN
68 | origin: AS9498
69 | mnt-by: MAINT-IN-BBIL
70 | last-modified: 2010-05-15T09:59:58Z
71 | source: APNIC
72 |
73 | % This query was served by the APNIC Whois Service version 1.88.15-46 (WHOIS-JP3)
74 |
75 |
76 |
--------------------------------------------------------------------------------
/plugins/webosint/who/whoami.py:
--------------------------------------------------------------------------------
1 | import whois
2 | from pythonping import ping
3 | import re
4 |
5 | def whoami(target, port):
6 |     # Ping first; the WHOIS lookup only runs when an IPv4 address shows
7 |     # up in the ping output (i.e. the target actually resolved).
8 |     getweb = str(ping(target))
9 |     ip = re.compile(r'(([2][5][0-5]\.)|([2][0-4][0-9]\.)|([0-1]?[0-9]?[0-9]\.)){3}'
10 |                     r'(([2][5][0-5])|([2][0-4][0-9])|([0-1]?[0-9]?[0-9]))')
11 |     match = ip.search(getweb)
12 |     if match:
13 |         w = whois.whois(target)
14 |         print("Domain Name:" + str(w['domain_name']))
15 |         print("Registrar:" + str(w['registrar']))
16 |         try:
17 |             print("Whois Server:" + str(w['whois_server']))
18 |         except Exception as e:
19 |             print(e)
20 |         print("Server:" + str(w['name_servers']))
21 |         print("Emails:" + str(w['emails']))
22 |         try:
23 |             print("Organisation:" + str(w['org']))
24 |         except Exception:
25 |             # some registries expose this field under a different name
26 |             print("Organisation:" + str(w['organization']))
27 |         try:
28 |             print("Address:" + str(w['address']))
29 |             print("City:" + str(w['city']))
30 |             print("State:" + str(w['state']))
31 |             print("Zipcode:" + str(w['zipcode']))
32 |         except Exception as e:
33 |             print(e)
34 |         print("Country:" + str(w['country']))
35 |     else:
36 |         print("Target is not reachable")
37 |
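
The heavy lifting here is the `whois` package; a minimal sketch of a direct lookup (assuming the python-whois style API that `whois.whois()` implies; available fields vary by registry):

```python
# Minimal sketch: direct WHOIS lookup with the same package used above.
import whois

w = whois.whois('example.com')
print("Domain Name:" + str(w['domain_name']))
print("Registrar:" + str(w['registrar']))
print("Country:" + str(w['country']))
```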
--------------------------------------------------------------------------------
/plugins/webosint/who/whois.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | def whois_more(IP):
4 | result = requests.get('http://api.hackertarget.com/whois/?q=' + IP).text
5 | print('\n'+ result + '\n')
6 |
--------------------------------------------------------------------------------
/plugins/webvuln/bruteforce.py:
--------------------------------------------------------------------------------
1 | import paramiko
2 | import socket
3 |
4 | def ssh(host, port):
5 | print("1. Default Port (22)")
6 | print("2. Custom Port")
7 | choice = int(input("BruteForce >>"))
8 | if choice == 2:
9 |         port = int(input("Enter the Custom SSH Port : "))
10 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
11 | s.settimeout(10)
12 | try:
13 | connect = s.connect_ex((host, port))
14 | if connect != 0:
15 | print("[+] Port %s: Closed" %port)
16 | s.close()
17 |
18 | elif connect == 0:
19 | print("[+] Port %s: Open" %port)
20 | s.close()
21 | wordlist = input("Enter Wordlist location (Press Enter for Default Wordlist) : ")
22 | if wordlist == '':
23 | f = open("src/telnet.ini", "r")
24 | f1 = f.readlines()
25 | else:
26 | f = open(wordlist, "r")
27 | f1 = f.readlines()
28 | for x in f1:
29 | y = x.split(':')
30 | username = y[0].strip(":")
31 | password = y[1].strip("\n")
32 | ssh = paramiko.SSHClient()
33 | ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
34 | print("Checking with Username : %s , Password : %s" % (username, password))
35 | try:
36 | ssh.connect(host, port=port, username=username, password=password, timeout=10)
37 | flag = 0
38 |
39 | except paramiko.AuthenticationException:
40 | flag = 1
41 |
42 | except socket.error as e:
43 | flag = 2
44 | print(e)
45 |
46 | except KeyboardInterrupt:
47 | print("\n User Interrupt! Exitting...")
48 | exit()
49 |
50 | ssh.close()
51 |
52 | if flag == 0:
53 | print('')
54 | print("Credentials Found")
55 | print("Username : %s" % username)
56 | print(("Password : %s") % password)
57 | print('')
58 | elif flag == 1:
59 | print("Invalid Credentials")
60 | except socket.error as e:
61 | print("Error : %s" %e)
62 |
63 |     else:  # any choice other than 2 falls back to the default port 22
64 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
65 | s.settimeout(10)
66 | try:
67 | connect = s.connect_ex((host, 22))
68 | if connect != 0:
69 | print("[+] Port 22: Closed")
70 | s.close()
71 |
72 | elif connect == 0:
73 | print("[+] Port 22: Open")
74 | s.close()
75 | wordlist = input("Enter Wordlist location (Press Enter for Default Wordlist) : ")
76 | if wordlist == '':
77 | f = open("src/ssh.ini", "r")
78 | f1 = f.readlines()
79 | else:
80 | f = open(wordlist, "r")
81 | f1 = f.readlines()
82 | for x in f1:
83 | y = x.split(':')
84 | username = y[0].strip(":")
85 | password = y[1].strip("\n")
86 | ssh = paramiko.SSHClient()
87 | ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
88 | print("Checking with Username : %s , Password : %s" % (username, password))
89 | try:
90 | ssh.connect(host, port=22, username=username, password=password, timeout=10)
91 | flag = 0
92 |
93 | except paramiko.AuthenticationException:
94 | flag = 1
95 |
96 | except socket.error as e:
97 | flag = 2
98 | print(e)
99 |
100 | except KeyboardInterrupt:
101 | print("\n User Interrupt! Exitting...")
102 | exit()
103 |
104 | ssh.close()
105 |
106 | if flag == 0:
107 | print('')
108 | print("Credentials Found")
109 | print("Username : %s" % username)
110 | print(("Password : %s") % password)
111 | print('')
112 | elif flag == 1:
113 | print("Invalid Credentials")
114 | except socket.error as e:
115 | print("Error : %s" % e)
116 |
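
The two branches above duplicate the whole attempt loop; a minimal sketch of a factored-out single-attempt helper (an illustration, not part of the module):

```python
# Hypothetical helper: one SSH credential attempt, reusable by both branches.
import paramiko

def try_ssh_login(host, port, username, password, timeout=10):
    """Return True if the credentials authenticate, False if rejected."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, port=port, username=username,
                       password=password, timeout=timeout)
        return True
    except paramiko.AuthenticationException:
        return False
    finally:
        client.close()
```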
--------------------------------------------------------------------------------
/plugins/webvuln/clickjacking.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | def ClickJacking(host, port):
4 |
5 | if port == 80:
6 | port = 'http://'
7 | elif port == 443:
8 | port = 'https://'
9 |     else:
10 |         print("Couldn't fetch data for the given PORT")
11 |         return
12 |
13 | url = (port+host)
14 | page=requests.get(url)
15 | headers=page.headers
16 | if not "X-Frame-Options" in headers:
17 | print("Website is vulnerable to ClickJacking")
18 |
19 | else:
20 | print("Website is not Vulnerable to ClickJacking")
21 |
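
Note that X-Frame-Options is not the only framing defence: a Content-Security-Policy frame-ancestors directive also blocks framing. A minimal sketch of the broader check:

```python
# Minimal sketch: treat either X-Frame-Options or CSP frame-ancestors
# as protection against framing.
import requests

headers = requests.get('https://example.com').headers
protected = ('X-Frame-Options' in headers
             or 'frame-ancestors' in headers.get('Content-Security-Policy', ''))
print('protected against framing' if protected else 'possibly framable')
```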
--------------------------------------------------------------------------------
/plugins/webvuln/cors.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 |
4 | header1 = None
5 | domain2 = None
6 | header2 = None
7 | domain3 = None
8 | header3 = None
9 |
10 |
11 | def Cors(host, port):
12 | if port == 80:
13 | port = 'http://'
14 | elif port == 443:
15 | port = 'https://'
16 | else:
17 | print("Could'nt fetch data for the given PORT")
18 | exit()
19 | print("1. CORS check in Default Host")
20 | print("2. CORS check in Host's Custom Endpoint")
21 | print('')
22 | choice = int(input('CORS >>'))
23 | print('')
24 | cookies = input("Paste the Cookies (If None,then hit enter) : ")
25 | global header1
26 | global domain2
27 | global header2
28 | global domain3
29 | global header3
30 | if cookies == '':
31 |
32 | header1 = {'Origin': 'http://evil.com'}
33 |
34 | domain2 = host + '.evil.com'
35 |
36 | header2 = {'Origin': port + domain2}
37 |
38 | domain3 = host + '%60cdl.evil.com'
39 |
40 | header3 = {'Origin': port + domain3}
41 |
42 | Choices(host, port, choice)
43 | else:
44 |
45 | header1 = {'Origin': 'http://evil.com', 'Cookie': cookies}
46 |
47 | domain2 = host + '.evil.com'
48 |
49 | header2 = {'Origin': port + domain2,'Cookie': cookies}
50 |
51 | domain3 = host + '%60cdl.evil.com'
52 |
53 | header3 = {'Origin': port + domain3,'Cookie': cookies}
54 |
55 | Choices(host, port, choice)
56 |
57 |
58 | def Choices(host, port, choice):
59 | if choice == 2:
60 | endpoint = input("Enter the Custom Endpoint : ")
61 | host = endpoint
62 | WrongChoice(host, port)
63 |
64 | elif choice == 1:
65 | print("Checking Default Host ")
66 | url = (port + host)
67 | print("Testing with Payload %s" % header1)
68 | response = requests.get(url, headers=header1)
69 |         if 'evil.com' in response.headers.get('Access-Control-Allow-Origin', ''):
70 | print("Vulnerable to Cross Origin Resource Sharing")
71 | else:
72 | print("Not Vulnerable to Cross Origin Resource Sharing")
73 | print('')
74 |
75 | print("Testing with Payload %s" % header2)
76 | response = requests.get(url, headers=header2)
77 |
78 |         if domain2 in response.headers.get('Access-Control-Allow-Origin', ''):
79 | print("Vulnerable to Cross Origin Resource Sharing")
80 | else:
81 | print("Not Vulnerable to Cross Origin Resource Sharing")
82 | print('')
83 |
84 | print("Testing with Payload %s" % header3)
85 | response = requests.get(url, headers=header3)
86 |         if domain3 in response.headers.get('Access-Control-Allow-Origin', ''):
87 | print("Vulnerable to Cross Origin Resource Sharing")
88 | else:
89 | print("Not Vulnerable to Cross Origin Resource Sharing")
90 | print('')
91 | else:
92 | print("Wrong Choice")
93 | print("Checking Default Host")
94 | WrongChoice(host, port)
95 |
96 | def WrongChoice(host, port):
97 | url = (port + host)
98 | print("Testing with Payload %s" % header1)
99 | response = requests.get(url, headers=header1)
100 |     if 'evil.com' in response.headers.get('Access-Control-Allow-Origin', ''):
101 | print("Vulnerable to Cross Origin Resource Sharing")
102 | else:
103 | print("Not Vulnerable to Cross Origin Resource Sharing")
104 | print('')
105 |
106 | print("Testing with Payload %s" % header2)
107 | response = requests.get(url, headers=header2)
108 |
109 |     if domain2 in response.headers.get('Access-Control-Allow-Origin', ''):
110 | print("Vulnerable to Cross Origin Resource Sharing")
111 | else:
112 | print("Not Vulnerable to Cross Origin Resource Sharing")
113 | print('')
114 |
115 | print("Testing with Payload %s" % header3)
116 | response = requests.get(url, headers=header3)
117 |     if domain3 in response.headers.get('Access-Control-Allow-Origin', ''):
118 | print("Vulnerable to Cross Origin Resource Sharing")
119 | else:
120 | print("Not Vulnerable to Cross Origin Resource Sharing")
121 | print('')
122 |
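
Every test above reduces to one question: does the server reflect an attacker-controlled origin in Access-Control-Allow-Origin? A minimal sketch of that check in isolation:

```python
# Minimal sketch: CORS origin-reflection check.
import requests

def origin_reflected(url, origin):
    resp = requests.get(url, headers={'Origin': origin})
    return origin in resp.headers.get('Access-Control-Allow-Origin', '')

print(origin_reflected('https://example.com', 'http://evil.com'))
```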
--------------------------------------------------------------------------------
/plugins/webvuln/hostheader.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | def HostHeader(host, port):
4 | if port == 80:
5 | port = 'http://'
6 | elif port == 443:
7 | port = 'https://'
8 | else:
9 | print("Could'nt fetch data for the given PORT")
10 | return
11 | url = (port + host)
12 |     # a Host header value is a bare hostname; no scheme
13 |     headers = {'Host': 'evil.com'}
14 |     response = requests.get(url, headers=headers)
15 |     # look for the injected value reflected in the redirect target or the
16 |     # body ('in response.headers' alone would only test header *names*)
17 |     if 'evil.com' in response.headers.get('Location', '') or 'evil.com' in response.text:
18 |         print("Vulnerable to Host Header Injection")
19 |     else:
20 |         print("Not Vulnerable to Host Header Injection")
21 |
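
For manual verification, the spoofed header can be sent by hand and the raw response inspected (a minimal sketch; whether a reflection is actually exploitable still needs review):

```python
# Minimal sketch: send a spoofed Host header and inspect the response.
import requests

resp = requests.get('http://example.com', headers={'Host': 'evil.com'})
print(resp.status_code, resp.headers.get('Location'))
```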
--------------------------------------------------------------------------------
/plugins/webvuln/src/ftp.ini:
--------------------------------------------------------------------------------
1 | anonymous:anonymous
2 | root:rootpasswd
3 | root:12hrs37
4 | ftp:b1uRR3
5 | admin:admin
6 | localadmin:localadmin
7 | admin:1234
8 | apc:apc
9 | admin:nas
10 | Root:wago
11 | Admin:wago
12 | User:user
13 | Guest:guest
14 | ftp:ftp
15 | admin:password
16 | a:avery
17 | admin:123456
18 | adtec:none
19 | admin:admin12345
20 | none:dpstelecom
21 | instrument:instrument
22 | user:password
23 | root:password
24 | default:default
25 | admin:default
26 | nmt:1234
27 | admin:Janitza
28 | supervisor:supervisor
29 | user1:pass1
30 | avery:avery
31 | IEIeMerge:eMerge
32 | ADMIN:12345
33 | beijer:beijer
34 | Admin:admin
35 | admin:1234
36 | admin:1111
37 | root:admin
38 | se:1234
39 | admin:stingray
40 | device:apc
41 | apc:apc
42 | dm:ftp
43 | dmftp:ftp
44 | httpadmin:fhttpadmin
45 | user:system
46 | MELSEC:MELSEC
47 | QNUDECPU:QNUDECPU
48 | ftp_boot:ftp_boot
49 | uploader:ZYPCOM
50 | ftpuser:password
51 | USER:USER
52 | qbf77101:hexakisoctahedron
53 | ntpupdate:ntpupdate
54 | sysdiag:factorycast@schneider
55 | wsupgrade:wsupgrade
56 | pcfactory:pcfactory
57 | loader:fwdownload
58 | test:testingpw
59 | webserver:webpages
60 | fdrusers:sresurdf
61 | nic2212:poiuypoiuy
62 | user:user00
63 | su:ko2003wa
64 | MayGion:maygion.com
65 | admin:9999
66 | PlcmSpIp:PlcmSpIp
--------------------------------------------------------------------------------
/plugins/webvuln/src/ssh.ini:
--------------------------------------------------------------------------------
1 | root:calvin
2 | root:root
3 | adithya:toor
4 | root:toor
5 | administrator:password
6 | NetLinx:password
7 | administrator:Amx1234!
8 | adithya:toor
9 | amx:password
10 | amx:Amx1234!
11 | admin:1988
12 | admin:admin
13 | Administrator:Vision2
14 | cisco:cisco
15 | c-comatic:xrtwk318
16 | root:qwasyx21
17 | admin:insecure
18 | pi:raspberry
19 | user:user
20 | root:default
21 | root:leostream
22 | leo:leo
23 | localadmin:localadmin
24 | fwupgrade:fwupgrade
25 | root:rootpasswd
26 | admin:password
27 | root:timeserver
28 | admin:password
29 | admin:motorola
30 | cloudera:cloudera
31 | root:p@ck3tf3nc3
32 | apc:apc
33 | device:apc
34 | eurek:eurek
35 | netscreen:netscreen
36 | admin:avocent
37 | root:linux
38 | sconsole:12345
39 | root:5up
40 | cirros:cubswin:)
41 | root:uClinux
42 | root:alpine
43 | root:dottie
44 | root:arcsight
45 | root:unitrends1
46 | vagrant:vagrant
47 | root:vagrant
48 | m202:m202
49 | demo:fai
50 | root:fai
51 | root:ceadmin
52 | maint:password
53 | root:palosanto
54 | root:ubuntu1404
55 | root:cubox-i
56 | debian:debian
57 | root:debian
58 | root:xoa
59 | root:sipwise
60 | debian:temppwd
61 | root:sixaola
62 | debian:sixaola
63 | myshake:shakeme
64 | stackato:stackato
65 | root:screencast
66 | root:stxadmin
67 | root:nosoup4u
68 | root:indigo
69 | root:video
70 | default:video
71 | default:
72 | ftp:video
73 | nexthink:123456
74 | ubnt:ubnt
75 | root:ubnt
76 | sansforensics:forensics
77 | elk_user:forensics
78 | osboxes:osboxes.org
79 | root:osboxes.org
80 | sans:training
81 | user:password
82 | misp:Password1234
83 | hxeadm:HXEHana1
84 | acitoolkit:acitoolkit
85 | osbash:osbash
86 | enisa:enisa
87 | geosolutions:Geos
88 | pyimagesearch:deeplearning
89 | root:NM1$88
90 | remnux:malware
91 | hunter:hunter
92 | plexuser:rasplex
93 | root:openelec
94 | root:rasplex
95 | root:plex
96 | root:openmediavault
97 | root:ys123456
98 | root:libreelec
99 | openhabian:openhabian
100 | admin:ManagementConsole2015
101 | public:publicpass
102 | admin:hipchat
103 | nao:nao
104 | support:symantec
105 | root:max2play
106 | admin:pfsense
107 | root:root01
108 | root:nas4free
109 | USERID:PASSW0RD
110 | Administrator:p@ssw0rd
111 | root:freenas
112 | root:cxlinux
113 | admin:symbol
114 | admin:Symbol
115 | admin:superuser
116 | admin:admin123
117 | root:D13HH[
118 | root:blackarch
119 | root:dasdec1
120 | root:7ujMko0admin
121 | root:7ujMko0vizxv
122 | root:Zte521
123 | root:zlxx
--------------------------------------------------------------------------------
/plugins/whois.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | def whois(wh):
4 | url = wh
5 | result = requests.get('http://api.hackertarget.com/whois/?q=' + url).text
6 | print('\n'+ result + '\n')
7 |
--------------------------------------------------------------------------------
/reconspider.py:
--------------------------------------------------------------------------------
1 | import sys
2 |
3 | def banner():
4 | return ("""
5 | __________ _________ __ ___
6 | \______ \ ____ ____ ____ ____ / _____/_____ |__| __| _/___________
7 | | _// __ \_/ ___\/ _ \ / \ \_____ \\\____ \| |/ __ |/ __ \_ __ \\
8 | | | \ ___/\ \__( <_> ) | \ / \ |_> > / /_/ \ ___/| | \/
9 | |____|_ /\___ >\___ >____/|___| / /_______ / __/|__\____ |\___ >__|
10 | \/ \/ \/ \/ \/|__| \/ \/
11 | """)
12 | def menu():
13 | return ("""
14 | ENTER 0 - 13 OR 99 TO SELECT AN OPTION
15 |
16 | 1. IP Enumerate information from IP Address
17 | 2. DOMAIN Gather information about given DOMAIN
18 | 3. PHONENUMBER Gather information about Phonenumber
19 | 4. DNS MAP Map DNS records associated with target
20 | 5. METADATA Extract all metadata of the given file
21 | 6. REVERSE IMAGE SEARCH Obtain domain name or IP address mapping
22 | 7. HONEYPOT Check if it's a honeypot or a real system
23 | 8. MAC ADDRESS LOOKUP Obtain information about given MAC address
24 | 9. IPHEATMAP Draw out heatmap of locations of IP
25 | 10. TORRENT Gather torrent download history of IP
26 | 11. USERNAME Extract Account info. from social media
27 | 12. IP2PROXY Check whether IP uses any VPN / PROXY
28 | 13. MAIL BREACH Check whether the given domain appears in mail breaches
29 | 99. UPDATE Update ReconSpider to its latest version
30 |
31 | 0. EXIT Exit from ReconSpider to your terminal
32 | """)
33 |
34 | if __name__ == '__main__':
35 | if sys.version_info[0] > 2:
36 | try:
37 | print(banner())
38 | from core import repl_prompt
39 | except ModuleNotFoundError:
40 |             print('\nSeems like you haven\'t installed the requirements. Please install them using: python3 setup.py install')
41 | quit()
42 | else:
43 | try:
44 | from core import repl_prompt
45 | except ImportError:
46 |             print('\nSeems like you haven\'t installed the requirements or you are running Python 2. Please install using: python3 setup.py install')
47 | quit()
48 |
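
Importing core.repl_prompt is what actually starts the interactive prompt, so the try/except above doubles as a dependency check. A minimal sketch of the same guarded-import pattern:

```python
# Minimal sketch: report a missing third-party dependency with the
# install hint instead of a raw traceback.
import sys

try:
    import shodan  # any of the setup.py requirements works here
except ModuleNotFoundError:
    sys.exit('Requirements missing; run: python3 setup.py install')
```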
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 | import os
3 | import pip
4 |
5 | fout = open("core/config.py", "w")
6 |
7 | # Shodan.io API (https://developer.shodan.io/api)
8 | fout.write("shodan_api = " + '"' + "e9SxSRCE1xDNS4CzyWzOQTUoE55KB9HX" + '"' + "\n")
9 | fout.close()
10 |
11 | fout = open("plugins/api.py", "w")
12 |
13 | # NumVerify API (https://numverify.com/documentation)
14 | fout.write("def phoneapis():"+ "\n")
15 | fout.write(" api= "+ '"' + "ecf584dd7bccdf2c152fdf3f5595ba20" + '"' + "\n")
16 | fout.write(" return str(api)"+ "\n")
17 |
18 | # IP Stack API (https://ipstack.com/documentation)
19 | fout.write("def ipstack():"+ "\n")
20 | fout.write(" api="+ '"' +"406792616a740641c6a0588a0ee1c509"+ '"' + "\n")
21 | fout.write(" return str(api)"+ "\n")
22 |
23 | # Google Maps API (https://developers.google.com/maps/documentation/places/web-service/get-api-key)
24 | fout.write("def gmap():"+ "\n")
25 | fout.write(" api="+ '"' +"AIzaSyBY9Rfnjo3UWHddicUrwHCHY37OoqxI478"+ '"' + "\n")
26 | fout.write(" return str(api)"+ "\n")
27 | fout.close()
28 |
29 | setup(
30 | name="ReconSpider",
31 | version="1.0.7",
32 | description="Most Advanced OSINT Framework",
33 | url="https://github.com/bhavsec/reconspider/",
34 | author="BhavKaran (bhavsec.com)",
35 | author_email="bhavsec@gmail.com",
36 | license="GPL-3.0",
37 | install_requires=["shodan", "requests", "prompt_toolkit","wget","beautifulsoup4","click","urllib3","IP2proxy","wget","paramiko","h8mail","nmap","pythonping","whois","gmplot","pillow","lxml","tweepy"],
38 | console=["reconspider.py"],
39 | )
40 |
41 | try:
42 | import wget
43 | except Exception as e:
44 | print(e)
45 | pip.main(['install','wget'])
46 | import wget
47 |
48 | # ip2 Location Database (https://lite.ip2location.com/database/px8-ip-proxytype-country-region-city-isp-domain-usagetype-asn-lastseen?lang=en_US)
49 | url="https://www.ip2location.com/download?token=hg5uYe2Jvri4R7P1j8b71Pk8dnvIU2M6A9jz2tvcVtGx8ZK2UPQgzr6Hk3cV68oH&file=PX8LITEBIN"
50 | print('\nDownloading IP2PROXY-IP-PROXYTYPE-COUNTRY-REGION-CITY-ISP-DOMAIN-USAGETYPE-ASN-LASTSEEN.BIN...')
51 | filepath=os.getcwd()+"/plugins/"
52 | wget.download(url,out=filepath)
53 | print('\nDownload Finished')
54 |
55 | import zipfile
56 | print('\nExtracting Files')
57 | with zipfile.ZipFile(filepath+"IP2PROXY-LITE-PX8.BIN.ZIP","r") as zip_ref:
58 | zip_ref.extract("IP2PROXY-LITE-PX8.BIN",filepath)
59 |
60 | print("\nInstallation Successfull")
61 | print("\n\nNote: APIs included in ReconSpider are FREE and having limited & restricted usage per month, Please update the current APIs with New APIs in setup.py file, and re-install once done.")
62 | print("\nWarning: Not updating the APIs can result in not showing the expected output or it may show errors.")
--------------------------------------------------------------------------------