.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # N4xD0rk
2 |
3 | Lists the subdomains of a main domain using the technique known as "hacking with search engines".
4 |
5 | # Installation
6 |
7 | You can download the latest version of N4xD0rk by cloning the GitHub repository:
8 |
9 | git clone https://github.com/n4xh4ck5/N4xD0rk.git
10 |
11 | Install the dependencies via pip:
12 |
13 | pip install -r requirements.txt
14 |
15 | To install PhantomJS properly, follow these steps:
16 |
17 |
18 | Linux (Debian, Ubuntu, Kali)
19 |
20 | apt-get update && apt-get install phantomjs
21 |
22 | Linux (other distributions)
23 |
24 | Get the latest PhantomJS release. The current version was 2.1.1 at the time of writing.
25 |
26 | http://phantomjs.org/download.html
27 |
28 | https://bitbucket.org/ariya/phantomjs/downloads
29 |
30 |
31 | Download the build for your system (32- or 64-bit) and untar it wherever you want, for example /opt:
32 |
33 | $ cd /opt
34 |
35 | $ wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
36 |
37 | $ tar xvf phantomjs-2.1.1-linux-x86_64.tar.bz2
38 |
39 | Make a symlink to the phantomjs binary in your /usr/local/bin directory
40 |
41 | $ ln -s /opt/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
42 |
43 | Execute the binary with the -v option to check that everything works
44 |
45 | $ phantomjs -v
46 |
47 | 2.1.1
48 |
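To confirm that Selenium can actually drive PhantomJS (the screenshot module depends on this), you can run a minimal sketch like the one below; it assumes phantomjs is on your PATH and selenium is installed, and example.com is just an illustrative URL:

    from selenium import webdriver
    driver = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true'])
    driver.get('https://example.com')
    driver.save_screenshot('example.png')  # writes example.png to the working directory
    driver.quit()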
49 | # Usage
50 |
51 | usage: n4xd0rk.py [-h] -t TARGET -n NUMBER [-e EXPORT] [-l LANGUAGE]
52 | [-c CAPTURE]
53 |
54 | This script searches for the subdomains of a domain using the results indexed by Google and Bing.
55 |
56 | optional arguments:
57 | -h, --help show this help message and exit
58 | -t TARGET, --target TARGET
59 |                         The domain or IP to search for.
60 | -n NUMBER, --number NUMBER
61 |                         Indicate the number of searches you want to perform.
62 | -e EXPORT, --export EXPORT
63 |                         Export the results to a file (Y/N)
64 | Format available:
65 | 1.json
66 | 2.xlsx
67 | -l LANGUAGE, --language LANGUAGE
68 | Indicate the language of the search
69 | (es)-Spanish(default)
70 | -c CAPTURE, --capture CAPTURE
71 |                         Indicate if you want to take a screenshot of each website (y/n)
72 |
73 |
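For reference, the Google dork the tool builds (see modules/dorkgoo/dorkgoo.py) follows this pattern, shown here for an illustrative target:

    (site:*.apple.com OR site:*apple.com OR site:apple*.com) -site:www.apple.com -site:apple.com

The negative site: operators exclude the main domain and the www host, so only the other indexed subdomains come back.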
74 | # Example
75 |
76 | python n4xd0rk.py -t apple.com -n 1
77 |
78 | _ _ _ _ _____ ___ _
79 | | \ | | || | | __ \ / _ \ | |
80 | | \| | || |___ _| | | | | | |_ __| | __
81 | | . ` |__ _\ \/ / | | | | | | '__| |/ /
82 | | |\ | | | > <| |__| | |_| | | | <
83 | |_| \_| |_| /_/\_\_____/ \___/|_| |_|\_\
84 |
85 |
86 |
87 | ** Tool to search for the subdomains of a domain using the results indexed by Google and Bing
88 | ** Author: Ignacio Brihuega Rodriguez a.k.a N4xh4ck5
89 | ** DISCLAIMER: This tool was developed for educational purposes.
90 | ** The author is not responsible for its use for any other purpose.
91 | ** A high power carries a high responsibility!
92 | ** Version 2.1
93 |
94 | This script obtains the IP associated with a domain
95 |
96 | Example of usage: python n4xd0rk.py -t apple.com -n 5
97 |
98 | Looking for domains and subdomains of target apple.com
99 |
100 | Domains and subdomains of apple.com are:
101 |
102 |
103 | - www.apple.com [23.XXX.XX.83]
104 |
105 |
106 | - communities.apple.com [23.XXX.XXX.242]
107 |
108 |
109 | - selfsolve.apple.com [88.XXX.XXX.168]
110 |
111 |
112 | - checkcoverage.apple.com [88.XXX.XXX.168]
113 |
114 |
115 | - support.apple.com [104.XXX.XXX.98]
116 |
117 |
118 | - itunes.apple.com [23.XXX.XXX.95]
119 |
120 |
121 | - araes.apple.com [17.XXX.XXX.53]
122 |
123 |
124 |
125 | # Author
126 |
127 | Ignacio Brihuega Rodríguez aka n4xh4ck5
128 |
129 | Twitter: @n4xh4ck5
130 |
131 | Web: fwhibbit.es
132 |
133 | # Disclaimer
134 |
135 | The use of this tool is your responsibility. I hereby disclaim any responsibility for actions taken with this tool.
136 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
--------------------------------------------------------------------------------
/modules/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
--------------------------------------------------------------------------------
/modules/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/__init__.pyc
--------------------------------------------------------------------------------
/modules/deleteduplicate/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['deleteduplicate']
--------------------------------------------------------------------------------
/modules/deleteduplicate/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/deleteduplicate/__init__.pyc
--------------------------------------------------------------------------------
/modules/deleteduplicate/deleteduplicate.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | def DeleteDuplicate(data1,data2):
5 | 	#local vars
6 | ss=[]
7 | fs=[]
8 | urls_union=[]
9 | 	#Use sets to merge the two lists
10 | try:
11 | ss=set(data1)
12 | 		fs = set(data2)
13 | 		#The union keeps each value exactly once
14 | urls_union = ss.union(fs)
15 | except Exception as e:
16 | print e
17 |
18 | finally:
19 | return urls_union
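A minimal usage sketch (Python 2, run from the repository root); the function returns the set union of the two lists, so ordering is not guaranteed:

    from modules.deleteduplicate.deleteduplicate import DeleteDuplicate
    merged = DeleteDuplicate(['a.apple.com', 'b.apple.com'],
                             ['b.apple.com', 'c.apple.com'])
    print sorted(merged)  # ['a.apple.com', 'b.apple.com', 'c.apple.com']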
--------------------------------------------------------------------------------
/modules/deleteduplicate/deleteduplicate.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/deleteduplicate/deleteduplicate.pyc
--------------------------------------------------------------------------------
/modules/dorkgoo/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['dorkgoo']
--------------------------------------------------------------------------------
/modules/dorkgoo/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/dorkgoo/__init__.pyc
--------------------------------------------------------------------------------
/modules/dorkgoo/dorkgo.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | import requests
3 | import urllib2
4 | from requests.packages.urllib3.exceptions import InsecureRequestWarning
5 | requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
6 | #Disable warning by SSL certificate
7 | import ssl
8 | ssl._create_default_https_context = ssl._create_unverified_context
9 | #Libraries to export results
10 | import xlsxwriter
11 | import json
12 | from urlparse import urlparse
13 | from bs4 import BeautifulSoup
14 | import optparse
15 | #Parser arguments
16 | import argparse
17 | from argparse import RawTextHelpFormatter
18 | import socket
19 | # encoding=utf8
20 | import sys
21 | reload(sys)
22 | sys.setdefaultencoding('utf8')
23 | import re #Regular expressions to parse Google results with BeautifulSoup
24 | #define global vars
25 | url_google =[]
26 |
27 | """ FUNCTION DELETE DUPLICATES """
28 |
29 | def DeleteDuplicate(data):
30 | urls_union = []
31 | for i in data:
32 | if i not in urls_union:
33 | urls_union.append(i)
34 | return urls_union
35 |
36 | def SearchGoogle(num,target,language):
37 | start_page = 0
38 | nlink = ""
39 | user_agent = {'User-agent': 'Mozilla/5.0'}
40 | nlink_clean = ""
41 | response =""
42 | soup = ""
43 | raw_links = ""
44 | url_google_final =[]
45 | 	#Split the target into domain and extension
46 | 	domain = target.split(".")[0]
47 | 	extension = target.split(".")[1]
48 | 	print "\nLooking for domains and subdomains of target",target
49 | for start in range(start_page, (start_page + num)):
50 | SearchGoogle = "https://www.google.com/search?q=(site:*."+target+"+OR+site:*"+target+"+OR+site:"+domain+"*."+extension+")+-site:www."+target+"+-site:"+target+"&lr=lang_"+language+"&filter=&num=100"
51 | #https://www.google.es/search?q=(site:*.vodafone.com+OR+site:*vodafone.com+OR+site:vodafone*.com)+-site:www.vodafone.com+-site:vodafone.com&lr=lang_en
52 | #inurl:"http?://*vodafone*.es" -site:www.vodafone.es -site:vodafone.es
53 | #(site:*.vodafone.es OR site:*vodafone.es OR site:vodafone*.es) -site:vodafone.es
54 | try:
55 | response = requests.get(SearchGoogle, headers = user_agent)
56 | 		except requests.exceptions.ConnectTimeout as e:
57 | 			print "\nError Timeout",target
58 | 			pass
59 | 		except requests.exceptions.RequestException as e:
60 | 			print "\nError connecting to the server!" #+ response.url,
61 | 			pass
62 | try:
63 | 			#Parse the HTML with BeautifulSoup
64 | soup = BeautifulSoup(response.text, "html.parser")
65 | if response.text.find("Our systems have detected unusual traffic") != -1:
66 | print "CAPTCHA detected - Plata or captcha !!!Maybe try form another IP..."
67 | return True
68 | 			#Parse URLs with a regular expression
69 | raw_links = soup.find_all("a",href=re.compile("(?<=/url\?q=)(htt.*://.*)"))
70 | #print raw_links
71 | for link in raw_links:
72 | 				#Skip Google cache links
73 | if link["href"].find("webcache.googleusercontent.com") == -1:
74 | nlink = link["href"].replace("/url?q=","")
75 | 					#Strip Google's tracking parameters
76 | nlink = re.sub(r'&sa=.*', "", nlink)
77 | nlink = urllib2.unquote(nlink).decode('utf8')
78 | nlink_clean = nlink.split("//")[-1].split("/")[0]
79 | url_google.append(nlink_clean)
80 | url_google_final =DeleteDuplicate(url_google)
81 | return url_google_final
82 | except Exception as e:
83 | print e
84 | if len(raw_links) < 2:
85 | #Verify if Google's Captcha has caught us!
86 | print "No more results!!!"
87 | #captcha = True
88 | return True
89 | else:
90 | return False
--------------------------------------------------------------------------------
/modules/dorkgoo/dorkgoo.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | import requests
5 | import urllib2
6 | from requests.packages.urllib3.exceptions import InsecureRequestWarning
7 | requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
8 | #Disable warning by SSL certificate
9 | import ssl
10 | ssl._create_default_https_context = ssl._create_unverified_context
11 | #Libraries to export results
12 | import xlsxwriter
13 | import json
14 | from urlparse import urlparse
15 | from bs4 import BeautifulSoup
16 | import optparse
17 | #Parser arguments
18 | import argparse
19 | from argparse import RawTextHelpFormatter
20 | import socket
21 | # encoding=utf8
22 | import sys
23 | reload(sys)
24 | sys.setdefaultencoding('utf8')
25 | import re #Regular expressions to parse Google results with BeautifulSoup
26 | #define global vars
27 | url_google =[]
28 |
29 | """ FUNCTION DELETE DUPLICATES """
30 |
31 | def DeleteDuplicate(data):
32 | urls_union = []
33 | for i in data:
34 | if i not in urls_union:
35 | urls_union.append(i)
36 | return urls_union
37 |
38 | def SearchGoogle(num,target,language):
39 | start_page = 0
40 | nlink = ""
41 | user_agent = {'User-agent': 'Mozilla/5.0'}
42 | nlink_clean = ""
43 | response =""
44 | soup = ""
45 | raw_links = ""
46 | url_google_final =[]
47 | 	#Split the target into domain and extension
48 | domain = target.split(".")[0]
49 | extension = target.split(".")[1]
50 |
51 | 	print "\nLooking for domains and subdomains of target",target
52 | for start in range(start_page, (start_page + num)):
53 | 		SearchGoogle = "https://www.google.com/search?q=(site:*."+target+"+OR+site:*"+target+"+OR+site:"+domain+"*."+extension+")+-site:www."+target+"&lr=lang_"+language+"&filter=&num=100"
54 |
55 | try:
56 | response = requests.get(SearchGoogle, headers = user_agent)
57 | 		except requests.exceptions.ConnectTimeout as e:
58 | 			print "\nError Timeout" #,target
59 | 			pass
60 | 		except requests.exceptions.RequestException as e:
61 | 			print "\nError connecting to the server!"
62 | 			pass
63 | try:
64 | 			#Parse the HTML with BeautifulSoup
65 | soup = BeautifulSoup(response.text, "html.parser")
66 | if response.text.find("Our systems have detected unusual traffic") != -1:
67 | print "CAPTCHA detected - Plata or captcha !!!Maybe try form another IP..."
68 | return True
70 | 			#Parse URLs with a regular expression
70 | raw_links = soup.find_all("a",href=re.compile("(?<=/url\?q=)(htt.*://.*)"))
71 | #print raw_links
72 | for link in raw_links:
73 | 				#Skip Google cache links
74 | if link["href"].find("webcache.googleusercontent.com") == -1:
75 | nlink = link["href"].replace("/url?q=","")
76 | 					#Strip Google's tracking parameters
77 | nlink = re.sub(r'&sa=.*', "", nlink)
78 | nlink = urllib2.unquote(nlink).decode('utf8')
79 | nlink_clean = nlink.split("//")[-1].split("/")[0]
80 | url_google.append(nlink_clean)
81 | url_google_final =DeleteDuplicate(url_google)
82 | return url_google_final
83 | except Exception as e:
84 | print e
85 | if len(raw_links) < 2:
86 | #Verify if Google's Captcha has caught us!
87 | print "No more results!!!"
88 | #captcha = True
89 | return True
90 | else:
91 | return False
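A minimal usage sketch (Python 2, from the repository root); results depend on what Google returns at the time, and the function returns True instead of a list when a CAPTCHA page is detected:

    from modules.dorkgoo.dorkgoo import SearchGoogle
    subdomains = SearchGoogle(1, 'apple.com', 'es')  # 1 search iteration, Spanish results
    print subdomains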
--------------------------------------------------------------------------------
/modules/dorkgoo/dorkgoo.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/dorkgoo/dorkgoo.pyc
--------------------------------------------------------------------------------
/modules/exportresults/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['exportresults']
--------------------------------------------------------------------------------
/modules/exportresults/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/exportresults/__init__.pyc
--------------------------------------------------------------------------------
/modules/exportresults/exportresults.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | import xlsxwriter
5 | import json
6 | def ExportResults(data,array_ip,output):
7 | # Start from the first cell. Rows and columns are zero indexed.
8 | row = 0
9 | col = 0
10 | if output == "js":
11 | #Export the results in json format
12 | print "Exporting the results in an json"
13 | with open ('output.json','w') as f:
14 | json.dump(data,f)
15 | elif (output == "xl"):
16 | #Export the results in excel format
17 | 		print "\nExporting the results to an Excel file"
18 | # Create a workbook and add a worksheet.
19 | workbook = xlsxwriter.Workbook('output.xlsx')
20 | worksheet = workbook.add_worksheet()
21 | worksheet.write(row, col, "Domain")
22 | worksheet.write(row, col+1, "IP")
23 | row +=1
24 | for domain in data:
25 | col = 0
26 | worksheet.write(row, col, domain)
27 | row += 1
28 | #update row
29 | row = 1
30 | for ip in array_ip:
31 | col = 1
32 | worksheet.write(row, col, ip)
33 | row += 1
34 | #close the excel
35 | workbook.close()
36 | else:
37 | exit(1)
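A minimal usage sketch (Python 2, from the repository root); 'js' writes output.json and 'xl' writes output.xlsx into the working directory (the domains and IPs below are illustrative placeholders):

    from modules.exportresults.exportresults import ExportResults
    domains = ['www.apple.com', 'itunes.apple.com']
    ips = ['23.0.0.1', '23.0.0.2']
    ExportResults(domains, ips, 'xl')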
--------------------------------------------------------------------------------
/modules/exportresults/exportresults.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/exportresults/exportresults.pyc
--------------------------------------------------------------------------------
/modules/n4xd0rk/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['n4xd0rk']
--------------------------------------------------------------------------------
/modules/n4xd0rk/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/n4xd0rk/__init__.pyc
--------------------------------------------------------------------------------
/modules/n4xd0rk/n4xd0rk.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | import requests
5 | from urlparse import urlparse
6 | from bs4 import BeautifulSoup
7 | import argparse
8 | from argparse import RawTextHelpFormatter
9 | import json
10 | import xlsxwriter
11 | import socket
12 | from urllib2 import urlopen
13 | from contextlib import closing
14 |
15 | #define vars
16 | bing_dork=["site:","-site:","language:","domain:","ip:"]
17 | delete_bing=["microsoft","msn","bing","hostinet"]
18 |
19 | """ FUNCTION SENDREQUEST"""
20 | def SendRequest (target,num,option,initial,language):
21 | count_bing= 9
22 | iteration = 0
23 | url_final_temp= []
24 | url_final_n4xd0rk = []
25 | response =""
26 | try:
27 | while (iteration < num):
28 | iteration += 1
29 | if initial == True:
30 | initial = False
31 | #First search in Bing
32 | if option == 1:
33 | SearchBing = "https://www.bing.com/search?q="+bing_dork[3]+target+"+"+bing_dork[2]+language+"+"+bing_dork[1]+"www."+target
34 | else:
35 | SearchBing = "https://www.bing.com/search?q="+bing_dork[4]+target+"&go=Buscar"
36 | else:
37 | if option == 1:
38 | SearchBing = "https://www.bing.com/search?q="+bing_dork[3]+target+"+"+bing_dork[2]+language+"+"+bing_dork[1]+"www."+target+"&first="+str(count_bing)+"&FORM=PORE"
39 | else:
40 | SearchBing = "https://www.bing.com/search?q="+bing_dork[4]+target+"&first="+str(count_bing)+"&FORM=PORE"
41 | count_bing=count_bing+50
42 |
43 | #Requests
44 | response=requests.get(SearchBing,allow_redirects=True)
45 | url_final_temp= parser_html(response.text)
46 | [url_final_n4xd0rk.append(i) for i in url_final_temp if not i in url_final_n4xd0rk]
47 |
48 | except Exception as e:
49 | print str(e)
50 | pass
51 |
52 | finally:
53 | return url_final_n4xd0rk
54 |
55 | """FUNCTION PARSER_HTML"""
56 | def parser_html(content):
57 | urls = []
58 | urls_clean = []
59 | urls_final =[]
60 | 	i = 0
61 | soup = BeautifulSoup(content, 'html.parser')
62 | for link in soup.find_all('a'):
63 | try:
64 | if (urlparse(link.get('href'))!='' and urlparse(link.get('href'))[1].strip()!=''):
65 | urls.append(urlparse(link.get('href'))[1])
66 | except Exception as e:
67 | #print(e)
68 | pass
69 | try:
70 | #Delete duplicates
71 | [urls_clean.append(i) for i in urls if not i in urls_clean]
72 | except:
73 | pass
74 | try:
75 | 		#Remove domains that do not belong to the target
76 | for value in urls_clean:
77 | if (value.find(delete_bing[0]) == -1):
78 | #Delete Bing's domains
79 | if (value.find(delete_bing[1]) == -1):
80 | if (value.find(delete_bing[2]) == -1):
81 | if (value.find(delete_bing[3]) == -1):
82 | urls_final.append(value)
83 | except:
84 | pass
85 | return urls_final
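A minimal usage sketch of the Bing search (Python 2, from the repository root); option 1 looks for subdomains of a domain, option 2 for domains hosted on an IP:

    from modules.n4xd0rk.n4xd0rk import SendRequest
    subdomains = SendRequest('apple.com', 1, 1, True, 'es')  # target, iterations, option, initial, language
    print subdomains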
--------------------------------------------------------------------------------
/modules/n4xd0rk/n4xd0rk.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/n4xd0rk/n4xd0rk.pyc
--------------------------------------------------------------------------------
/modules/screenshot/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['screenshot']
--------------------------------------------------------------------------------
/modules/screenshot/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/screenshot/__init__.pyc
--------------------------------------------------------------------------------
/modules/screenshot/screenshot.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | from selenium import webdriver
5 | import requests
6 | import os
7 |
8 | def screen(array,target):
9 | dom =""
10 | createDir(target)
11 | try:
12 | for dom in array:
13 | snapshot(str(dom.encode("utf-8")))
14 | MoveCaptures(target)
15 |
16 | except Exception as e:
17 | print e
18 |
19 | def snapshot(url):
20 | driver = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true']) # or add to your PATH
21 | driver.set_page_load_timeout(15)
22 | driver.set_window_size(1024, 768) # optional
23 | try:
24 | driver.get('https://{0}'.format(url))
25 | except:
26 | driver.get('http://{0}'.format(url))
27 | 	driver.save_screenshot(url+".png")
28 | 	driver.quit() #terminate the PhantomJS process so it does not leak
29 |
30 | def createDir(target):
31 | try:
32 | if not os.path.exists(target):
33 | os.mkdir(target)
34 |
35 | except Exception as e:
36 | print e
37 |
38 | def MoveCaptures(target):
39 | os.system('mv *.png '+ str(target))
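A minimal usage sketch (Python 2, from the repository root); it needs phantomjs on the PATH and selenium installed, saves each capture as <domain>.png, then moves the images into a directory named after the target (the host and target are illustrative):

    from modules.screenshot.screenshot import screen
    screen([u'www.apple.com'], 'apple.com')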
--------------------------------------------------------------------------------
/modules/screenshot/screenshot.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/screenshot/screenshot.pyc
--------------------------------------------------------------------------------
/modules/sh4d0m/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['sh4d0m']
--------------------------------------------------------------------------------
/modules/sh4d0m/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/sh4d0m/__init__.pyc
--------------------------------------------------------------------------------
/modules/sh4d0m/sh4d0m.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | import sys
5 | import shodan
6 |
7 | def create_shodan_object():
8 | shodan_object =""
9 | # Add your shodan API key here
10 | api_key = "API_KEY"
11 | shodan_object = shodan.Shodan(api_key)
12 | return shodan_object
13 |
14 | def shodan_ip_search(shodan_search_object, shodan_search_ip):
15 | port_target = []
16 | result = ""
17 | try:
18 | print "\nSearching Shodan for info about " + shodan_search_ip + "...\n"
19 | # Search Shodan
20 | result = shodan_search_object.host(shodan_search_ip)
21 | try:
22 | for i in result['data']:
23 | print 'Port: %s' % i['port']
24 | port_target.append(i['port'])
25 | except Exception as e:
26 | print e
27 | except Exception as e:
28 | print e
29 | return port_target
30 |
31 | def CreateShodan(ip):
32 | ports =""
33 | port_target=[]
34 | search_ip = ""
35 | try:
36 | search_ip = ip
37 | shodan_api_object = create_shodan_object()
38 | port_target = shodan_ip_search(shodan_api_object, search_ip)
39 | ports = str(port_target).replace("[","").replace("]","")
40 |
41 | except Exception as e:
42 | print 'Error: %s' % e
43 | sys.exit(1)
44 | finally:
45 | return ports
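A minimal usage sketch (Python 2, from the repository root); it only works once a valid Shodan API key replaces the "API_KEY" placeholder above:

    from modules.sh4d0m.sh4d0m import CreateShodan
    open_ports = CreateShodan('8.8.8.8')  # illustrative IP
    print open_ports  # a comma-separated string of ports, e.g. '53, 443'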
--------------------------------------------------------------------------------
/modules/sh4d0m/sh4d0m.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/sh4d0m/sh4d0m.pyc
--------------------------------------------------------------------------------
/modules/showresults/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['showresults']
--------------------------------------------------------------------------------
/modules/showresults/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/showresults/__init__.pyc
--------------------------------------------------------------------------------
/modules/showresults/showresults.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | def ShowResults(urls,array_ip,target, option):
5 | newlist=[]
6 | ip = ""
7 | contador = 0
8 | try:
9 | if option == 1:
10 | 			print "\nDomains and subdomains of "+ str(target) + " are:"
11 | #Read the list to print the value in a line
12 | for i in urls:
13 | ip = array_ip[contador]
14 | print "\n"
15 | print "\t- " + i+ " ["+ip+"]"
16 | contador += 1
17 | if option == 2:
18 | print "\nDomains contained in the IP "+ str(target) + " are:"
19 | #print "\nDomains contained in the IP {} {} are:".format(target,target)
20 | #Read the list to print the value in a line
21 | for i in urls:
22 | if i not in newlist:
23 | newlist.append(i)
24 | print "\n"
25 | print "\t- " + i
26 | except Exception as e:
27 | print e
28 | pass
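A minimal usage sketch (Python 2, from the repository root); with option 1 each domain is printed next to the IP at the same index, so both lists must line up (the values are illustrative):

    from modules.showresults.showresults import ShowResults
    ShowResults(['www.apple.com'], ['23.0.0.1'], 'apple.com', 1)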
--------------------------------------------------------------------------------
/modules/showresults/showresults.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/showresults/showresults.pyc
--------------------------------------------------------------------------------
/modules/th4sd0m/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['th4sd0m']
--------------------------------------------------------------------------------
/modules/th4sd0m/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/th4sd0m/__init__.pyc
--------------------------------------------------------------------------------
/modules/th4sd0m/th4sd0m.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | import requests
5 | from urlparse import urlparse
6 | from bs4 import BeautifulSoup
7 | import argparse
8 | from argparse import RawTextHelpFormatter
9 | from urllib2 import urlopen
10 | from contextlib import closing
11 | import json
12 | #define vars
13 | bing_dork=["site:","-site:","language:","domain:","ip:"]
14 |
15 | delete_bing=["microsoft","msn","bing","hostinet"]
16 |
17 | """ FUNCTION SENDEQUEST """
18 | #Use Bing to obtain all domains contained in an IP
19 | def SendRequest(ip,num,initial):
20 | iteration = 0
21 | count_bing=9
22 | url_th4sd0m = []
23 | url_temp = []
24 | try:
25 | while (iteration < num):
26 | iteration = iteration +1
27 | if initial==True:
28 | 				print "\nSearching domains hosted on this IP...\n"
29 | initial = False
30 | #First search in Bing
31 | SearchBing = "https://www.bing.com/search?q="+bing_dork[4]+ip
32 | else:
33 | #Bring the next Bing results - 50 in each page
34 | SearchBing = "https://www.bing.com/search?q="+bing_dork[4]+ip+"&first="+str(count_bing)+"&FORM=PORE"
35 | count_bing=count_bing+50
36 | #Use requests to do the search
37 | response=requests.get(SearchBing,allow_redirects=True)
38 | url_temp = parser_html(response.text)
39 | [url_th4sd0m.append(i) for i in url_temp if not i in url_th4sd0m]
40 |
41 | except Exception as e:
42 | print e
43 | pass
44 |
45 | finally:
46 | return url_th4sd0m
47 | #********************************************************#
48 | """ FUNCTION PARSER_HTML"""
49 | def parser_html(content):
50 | urls = []
51 | urls_clean = []
52 | urls_final =[]
53 | 	i = 0
54 | soup = BeautifulSoup(content, 'html.parser')
55 | #try:
56 | for link in soup.find_all('a'):
57 | try:
58 | if (urlparse(link.get('href'))!='' and urlparse(link.get('href'))[1].strip()!=''):
59 | urls.append(urlparse(link.get('href'))[1])
60 | except Exception as e:
61 | #print(e)
62 | pass
63 | try:
64 | #Delete duplicates
65 | [urls_clean.append(i) for i in urls if not i in urls_clean]
66 | except:
67 | pass
68 | 	try:
69 | 		#Remove domains that do not belong to the target
70 | 		#(filter out Bing's own and related domains, including hostinet)
71 | 		for value in urls_clean:
72 | 			if all(value.find(word) == -1 for word in delete_bing):
73 | 				urls_final.append(value)
74 | 	except:
75 | 		pass
76 | 	return urls_final
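A minimal usage sketch (Python 2, from the repository root); it lists the domains Bing has indexed for an IP (the IP below is an illustrative placeholder):

    from modules.th4sd0m.th4sd0m import SendRequest
    domains = SendRequest('17.142.160.59', 1, True)  # ip, iterations, initial
    print domains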
--------------------------------------------------------------------------------
/modules/th4sd0m/th4sd0m.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/th4sd0m/th4sd0m.pyc
--------------------------------------------------------------------------------
/modules/verifyip/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding:utf-8 -*-
3 |
4 | __all__ = ['verifyip']
--------------------------------------------------------------------------------
/modules/verifyip/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/verifyip/__init__.pyc
--------------------------------------------------------------------------------
/modules/verifyip/verifyip.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding:utf-8 -*-
3 |
4 | from IPy import IP
5 |
6 | def VerifyIp(ip):
7 |
8 | ip_type = None
9 |
10 | try:
11 |
12 | ip_type = IP(ip).iptype()
13 |
14 | 		if ip_type != "PUBLIC": #compare by value, not identity
15 |
16 | ip_type = 'Private'
17 |
18 | else:
19 |
20 | ip_type = 'Public'
21 |
22 | except Exception:
23 |
24 | pass
25 |
26 | finally:
27 |
28 | return ip_type
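A minimal usage sketch (Python 2, from the repository root; requires the IPy package):

    from modules.verifyip.verifyip import VerifyIp
    print VerifyIp('8.8.8.8')    # 'Public'
    print VerifyIp('10.0.0.1')   # 'Private'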
--------------------------------------------------------------------------------
/modules/verifyip/verifyip.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/verifyip/verifyip.pyc
--------------------------------------------------------------------------------
/modules/verifytarget/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding:utf-8 -*-
3 |
4 | __all__ = ['verifytarget']
--------------------------------------------------------------------------------
/modules/verifytarget/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/verifytarget/__init__.pyc
--------------------------------------------------------------------------------
/modules/verifytarget/verifytarget.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding:utf-8 -*-
3 |
4 | import re
5 | import socket
6 | # IP -> True
7 | # Domain -> False, anything else -> None
8 | def VerifyTarget(target):
9 |
10 | match = False
11 |
12 | try:
13 |
14 | if re.match(r'^((\d{1,2}|1\d{2}|2[0-4]\d|25[0-5])\.){3}(\d{1,2}|1\d{2}|2[0-4]\d|25[0-5])$', target):
15 |
16 | match = True
17 |
18 | elif re.match(r'^(([a-zA-Z]{1})|([a-zA-Z]{1}[a-zA-Z]{1})|([a-zA-Z]{1}[0-9]{1})|([0-9]{1}[a-zA-Z]{1})|([a-zA-Z0-9][a-zA-Z0-9-_]{1,61}[a-zA-Z0-9]))\.([a-zA-Z]{2,6}|[a-zA-Z0-9-]{2,30}\.[a-zA-Z]{2,3})$', target):
19 |
20 | match = False
21 |
22 | else:
23 |
24 | match = None
25 |
26 | except Exception:
27 | pass
28 |
29 | finally:
30 |
31 | return match
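A minimal usage sketch (Python 2, from the repository root; the inputs are illustrative):

    from modules.verifytarget.verifytarget import VerifyTarget
    print VerifyTarget('17.142.160.59')  # True  (IP)
    print VerifyTarget('apple.com')      # False (domain)
    print VerifyTarget('not a target')   # None  (neither)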
--------------------------------------------------------------------------------
/modules/verifytarget/verifytarget.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/verifytarget/verifytarget.pyc
--------------------------------------------------------------------------------
/modules/whoip/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['whoip']
--------------------------------------------------------------------------------
/modules/whoip/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/whoip/__init__.pyc
--------------------------------------------------------------------------------
/modules/whoip/whoip.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | import socket
5 | """FUNCTION WHOIP
6 | Resolve a domain to its IP address
7 | """
8 | def WhoIP(domain):
9 | ip=""
10 | try:
11 | ip = socket.gethostbyname(domain)
12 | except:
13 | 		#print "The IP could not be resolved"
14 | ip = "0.0.0.0"
15 |
16 | finally:
17 | return ip
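A minimal usage sketch (Python 2, from the repository root); the function falls back to '0.0.0.0' when the name does not resolve:

    from modules.whoip.whoip import WhoIP
    print WhoIP('www.apple.com')         # the resolved IP
    print WhoIP('no-such-host.invalid')  # 0.0.0.0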
--------------------------------------------------------------------------------
/modules/whoip/whoip.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/whoip/whoip.pyc
--------------------------------------------------------------------------------
/modules/whois/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | __all__ = ['whois']
--------------------------------------------------------------------------------
/modules/whois/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/whois/__init__.pyc
--------------------------------------------------------------------------------
/modules/whois/whois.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | from urllib2 import urlopen
4 | from contextlib import closing
5 | import json
6 |
7 | """FUNCTION WhoISMYIP"""
8 | def WhoismyIP(ip):
9 | url =""
10 | url = 'http://freegeoip.net/json/'+ip
11 | try:
12 | with closing(urlopen(url)) as response:
13 | location = json.loads(response.read())
14 | 			print "\n\t-IP address:", location['ip']
15 | print "\n\t-Country_code:",location['country_code']
16 | print "\n\t-Country name:",location['country_name']
17 | print "\n\t-Region code:",location['region_code']
18 | print "\n\t-Region name:",location['region_name']
19 | print "\n\t-City:",location['city']
20 | print "\n\t-Zip code:",location['zip_code']
21 | print "\n\t-Time zone:",location['time_zone']
22 | print "\n\t-Latitude:",location['latitude']
23 | print "\n\t-Longitude:",location['longitude']
24 | except Exception as e:
25 | #print e
26 | print "Don't find any information about IP in Whois"
27 | pass
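A minimal usage sketch (Python 2, from the repository root); note that the freegeoip.net service this module queries may no longer be available, in which case the error branch is taken:

    from modules.whois.whois import WhoismyIP
    WhoismyIP('8.8.8.8')  # illustrative IP; prints the geolocation fields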
--------------------------------------------------------------------------------
/modules/whois/whois.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/n4xh4ck5/N4xD0rk/63e7cb04e82a723070b638b0a2fe5750bef72d7a/modules/whois/whois.pyc
--------------------------------------------------------------------------------
/n4xd0rk.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | import requests
5 | from urlparse import urlparse
6 | from bs4 import BeautifulSoup
7 | import argparse
8 | from argparse import RawTextHelpFormatter
9 | import json
10 | import xlsxwriter
11 | import socket
12 | from urllib2 import urlopen
13 | from contextlib import closing
14 | import sys
15 | #imports
16 | from modules.deleteduplicate import *
17 | from modules.whoip import *
18 | from modules.exportresults import *
19 | from modules.showresults import *
20 | from modules.th4sd0m import *
21 | from modules.whois import *
22 | from modules.n4xd0rk import *
23 | from modules.dorkgoo import *
24 | from modules.verifytarget import *
25 | from modules.verifyip import *
26 | from modules.screenshot import *
27 | from modules.sh4d0m import *
28 | """ FUNCTION BANNER """
29 | def banner():
30 | print """
31 | _ _ _ _ _____ ___ _
32 | | \ | | || | | __ \ / _ \ | |
33 | | \| | || |___ _| | | | | | |_ __| | __
34 | | . ` |__ _\ \/ / | | | | | | '__| |/ /
35 | | |\ | | | > <| |__| | |_| | | | <
36 | |_| \_| |_| /_/\_\_____/ \___/|_| |_|\_\ """
37 | print"\n"
38 | print """
39 | ** Tool to search for the subdomains of a domain using the results indexed by Google and Bing
40 | ** Author: Ignacio Brihuega Rodriguez a.k.a N4xh4ck5
41 | ** DISCLAIMER: This tool was developed for educational purposes.
42 | ** The author is not responsible for its use for any other purpose.
43 | ** A high power carries a high responsibility!
44 | ** Version 2.1"""
45 |
46 | """FUNCTION HELP """
47 | def help():
48 | print """ \nThis script obtains the IP associated a domain
49 |
50 | Example of usage: python n4xd0rk.py -d apple.com -n 5 """
51 | def main (argv):
52 | 	parser = argparse.ArgumentParser(description='This script searches for the subdomains of a domain using the results indexed by Google and Bing.', formatter_class=RawTextHelpFormatter)
53 | 	parser.add_argument('-t','--target', help="The domain or IP to search for.",required=True)
54 | 	parser.add_argument('-n','--number', help="Indicate the number of searches you want to perform.",required=True)
55 | 	parser.add_argument('-e','--export', help="Export the results to a file (Y/N)\n Format available:\n\t1.json\n\t2.xlsx", required=False)
56 | parser.add_argument('-l','--language', help="Indicate the language of the search\n\t(es)-Spanish(default)", required=False)
57 | 	parser.add_argument('-c','--capture', help="Indicate if you want to take a screenshot of each website (y/n)", required=False)
58 | args = parser.parse_args()
59 | 	#Assign the arguments to variables.
60 | 	#Convert the number argument to int
61 | N = int (args.number)
62 | target=args.target
63 | output= str(args.export)
64 | output = output.lower()
65 | capture = args.capture
66 | if capture is None:
67 | capture = 'n'
68 | capture = capture.lower()
69 | if capture != 'y' and capture != 'n':
70 | 		print "The capture option is invalid. Please enter y or n."
71 | exit (1)
72 | language = args.language
73 | if language is None:
74 | language="es"
75 | #Local var's
76 | urls_target =[]
77 | urls_n4xd0rk = []
78 | urls_d0rkgo0 = []
79 | initial = 1 #by n4xd0rk
80 | option = None
81 | export = ""
82 | newlist =[]
83 | direction_ip = []
84 | initial = True
85 | flag = None
86 | banner()
87 | help()
88 | #Verify inputs
89 | if target == None:
90 | print "The target is empty. Please, enter a valid target"
91 | flag = verifytarget.VerifyTarget(target)
92 | if flag == False:
93 | option = 1
94 | if flag == True:
95 | flag_ip = verifyip.VerifyIp(target)
96 | if flag_ip == 'Public':
97 | option = 2
98 | else:
99 | 			print "The IP is not public or not IPv4"
100 | exit(1)
101 | if output is None:
102 | output = 'n'
103 | if (output == 'y'):
104 | print "Select the output format:"
105 | print "\n\t(js).json"
106 | print "\n\t(xl).xlsx"
107 | export = raw_input ().lower()
108 | if ((export != "js") and (export != "xl")):
109 | print "Incorrect output format selected."
110 | exit(1)
111 | try:
112 | if option == 1:
113 | #BING
114 | urls_n4xd0rk = n4xd0rk.SendRequest(target,N,1,initial,language)
115 | #GOOGLE
116 | urls_d0rkgo0 = dorkgoo.SearchGoogle(N,target,language)
117 | #Join Bing and Google and delete duplicate results
118 | urls_target = deleteduplicate.DeleteDuplicate(urls_n4xd0rk,urls_d0rkgo0)
119 | if capture == 'y':
120 | screenshot.screen(urls_target,target)
121 | for i in urls_target:
122 | ip = whoip.WhoIP(i)
123 | direction_ip.append(ip)
124 | else:
125 | print "Information about the IP",target+"\n"
126 | whois.WhoismyIP(target)
127 | sh4d0m.CreateShodan(target)
128 | urls_target = th4sd0m.SendRequest(target,N,True)
129 | if capture == 'y':
130 | screenshot.screen(urls_target,target)
131 | try:
132 | direction_ip.append(str(target))
133 | except Exception as e:
134 | #print e
135 | pass
136 | 		#Show the results
137 | showresults.ShowResults(urls_target,direction_ip,target,option)
138 | if (output == 'y'):
139 | exportresults.ExportResults(urls_target,direction_ip,export)
140 | except Exception as e:
141 | print e
142 | pass
143 |
144 | if __name__ == "__main__":
145 | main(sys.argv[1:])
146 |
--------------------------------------------------------------------------------
/old/N4xD0rk_1.0.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from urlparse import urlparse
3 | from bs4 import BeautifulSoup
4 | import argparse
5 | from argparse import RawTextHelpFormatter
6 | import json
7 | #define vars
8 | bing_dork=["site:","-site:","language:","domain:"]
9 | urls = []
10 | urls_clean = []
11 | urls_final =[]
12 | delete_bing=["microsoft","msn","bing"]
13 | count_bing=9
14 | iteration=0
15 | initial=1
16 |
17 | #********************************************************#
18 | #Definition and treatment of the parameters
19 | def parser_html():
20 | i = 0;
21 | soup = BeautifulSoup(content, 'html.parser')
22 | for link in soup.find_all('a'):
23 | try:
24 | if (urlparse(link.get('href'))!='' and urlparse(link.get('href'))[1].strip()!=''):
25 | urls.append(urlparse(link.get('href'))[1])
26 | except Exception as e:
27 | #print(e)
28 | pass
29 | try:
30 | #Delete duplicates
31 | [urls_clean.append(i) for i in urls if not i in urls_clean]
32 | except:
33 | pass
34 | try:
35 | 		#Remove domains that do not belong to the target
36 | for value in urls_clean:
37 | if (value.find(delete_bing[0]) == -1):
38 | #Delete Bing's domains
39 | if (value.find(delete_bing[1]) == -1):
40 | if (value.find(delete_bing[2]) == -1):
41 | urls_final.append(value)
42 | except:
43 | pass
44 | ######FUNCTION EXPORT RESULTS #######
45 | def ExportResults(data):
46 | #Export the results in json format
47 | with open ('output.json','w') as f:
48 | json.dump(data,f)
49 | #MAIN
50 | parser = argparse.ArgumentParser(description='This script searches for the subdomains of a domain using the results indexed by Bing', formatter_class=RawTextHelpFormatter)
51 | parser.add_argument('-d','--domain', help="The domain to search for",required=False)
52 | parser.add_argument('-n','--search', help="Indicate the number of searches you want to perform",required=True)
53 | parser.add_argument('-e','--export', help='Export the results to a json file (Y/N)\n\n', required=False)
54 | parser.add_argument('-l','--language', help='Indicate the language of the search\n\n\t(es)-Spanish(default)\n\t(en)-English', required=False)
55 | args = parser.parse_args()
56 | print " _ _ _ _ _____ ___ _"
57 | print " | \ | | || | | __ \ / _ \ | |"
58 | print " | \| | || |___ _| | | | | | |_ __| | __"
59 | print " | . ` |__ _\ \/ / | | | | | | '__| |/ /"
60 | print " | |\ | | | > <| |__| | |_| | | | < "
61 | print " |_| \_| |_| /_/\_\_____/ \___/|_| |_|\_\ "
62 | print "\n"
63 | print """** Tool to search the subdomains about a domain using the results indexed of Bing search
64 | ** Author: Ignacio Brihuega Rodriguez a.k.a N4xh4ck5
65 | ** DISCLAIMER: This tool was developed for educational purposes.
66 | ** The author is not responsible for its use for any other purpose.
67 | ** A high power carries a high responsibility!"""
68 | #Assign the arguments to variables.
69 | #Convert the search argument to int
70 | N = int (args.search)
71 | target=args.domain
72 | output=args.export
73 | if ((output != 'Y') and (output != 'N')):
74 | print "The output option is not valid"
75 | exit(1)
76 | language= args.language
77 | if language is None:
78 | language="es"
79 | if ((language != "es") and (language !="en")):
80 | print "The language is not valid"
81 | exit(1)
82 | try:
83 | while (iteration < N):
84 | iteration = iteration +1
85 | if initial==1:
86 | print "\nSearching subdomains...\n"
87 | initial = 0
88 | #First search in Bing
89 | SearchBing = "https://www.bing.com/search?q="+bing_dork[3]+target+"+"+bing_dork[2]+language+"+"+bing_dork[1]+"www."+target+"&go=Buscar"
90 | else:
91 | #Bring the next Bing results - 50 in each page
92 | SearchBing= SearchBing+"&first="+str(count_bing)+"&FORM=PORE"
93 | count_bing=count_bing+50
94 | try:
95 | #Requests
96 | response=requests.get(SearchBing,allow_redirects=True)
97 |
98 | except:
99 | pass
100 | content = response.text
101 | #PARSER HTML
102 | #normalize a called with parameters
103 | parser_html()
104 | except:
105 | pass
106 | newlist=[]
107 | print "Subdomains of "+target+" are:\n"
108 | #Read the list to print the value in a line
109 | for i in urls_final:
110 | if i not in newlist:
111 | newlist.append(i)
112 | print "\n"
113 | print i
114 | #verify if the user wants to export results
115 | if output == 'Y':
116 | #Only it can enter if -j is put in the execution
117 | ExportResults(newlist)
118 |
--------------------------------------------------------------------------------
/old/n4xd0rk_1.2.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | import requests
3 | from urlparse import urlparse
4 | from bs4 import BeautifulSoup
5 | import argparse
6 | from argparse import RawTextHelpFormatter
7 | import json
8 | import xlsxwriter
9 | import socket
10 | from urllib2 import urlopen
11 | from contextlib import closing
12 | #define vars
13 | bing_dork=["site:","-site:","language:","domain:","ip:"]
14 | urls = []
15 | urls_clean = []
16 | urls_final =[]
17 | delete_bing=["microsoft","msn","bing"]
18 | direction_ip=[]
19 |
20 | """ FUNCTION SENDREQUEST"""
21 | def SendRequest (target,num,option,initial):
22 | count_bing= 9
23 | iteration = 0
24 | response =""
25 | try:
26 | while (iteration < num):
27 | iteration += 1
28 | if initial == True:
29 | initial = False
30 | #First search in Bing
31 | if option == 1:
32 | SearchBing = "https://www.bing.com/search?q="+bing_dork[3]+target+"+"+bing_dork[2]+language+"+"+bing_dork[1]+"www."+target
33 | else:
34 | SearchBing = "https://www.bing.com/search?q="+bing_dork[4]+target+"&go=Buscar"
35 | else:
36 | if option == 1:
37 | #SearchBing= SearchBing+"&first="+str(count_bing)+"&FORM=PORE"
38 | SearchBing = "https://www.bing.com/search?q="+bing_dork[3]+target+"+"+bing_dork[2]+language+"+"+bing_dork[1]+"www."+target+"&first="+str(count_bing)+"&FORM=PORE"
39 | else:
40 | SearchBing = "https://www.bing.com/search?q="+bing_dork[4]+target+"&first="+str(count_bing)+"&FORM=PORE"
41 | count_bing=count_bing+50
42 |
43 | #Requests
44 | response=requests.get(SearchBing,allow_redirects=True)
45 | parser_html(response.text)
46 | except Exception as e:
47 | print str(e)
48 | pass
49 |
50 | """FUNCTION PARSER_HTML"""
51 | def parser_html(content):
52 | i = 0;
53 | soup = BeautifulSoup(content, 'html.parser')
54 | for link in soup.find_all('a'):
55 | try:
56 | if (urlparse(link.get('href'))!='' and urlparse(link.get('href'))[1].strip()!=''):
57 | urls.append(urlparse(link.get('href'))[1])
58 | #print str(urlparse(link.get('href'))[1])
59 | except Exception as e:
60 | #print(e)
61 | pass
62 | try:
63 | #Delete duplicates
64 | [urls_clean.append(i) for i in urls if not i in urls_clean]
65 | except:
66 | pass
67 | try:
69 | 		#Remove domains that do not belong to the target
69 | for value in urls_clean:
70 | if (value.find(delete_bing[0]) == -1):
71 | #Delete Bing's domains
72 | if (value.find(delete_bing[1]) == -1):
73 | if (value.find(delete_bing[2]) == -1):
74 | urls_final.append(value)
75 | except:
76 | pass
77 |
78 | """FUNCTION WHO IS MY IP"""
79 | def WhoismyIP(domain,option):
80 | ip=""
81 | url =""
82 | try:
83 |
84 | if option == 1:
85 | ip = socket.gethostbyname(domain)
86 | direction_ip.append(ip)
87 | return ip
88 | else:
89 | url = 'http://freegeoip.net/json/'+domain
90 | try:
91 | with closing(urlopen(url)) as response:
92 | location = json.loads(response.read())
93 | print location
94 | location_city = location['city']
95 | location_state = location['region_name']
96 | location_country = location['country_name']
97 | location_zip = location['zipcode']
98 | except:
99 | pass
100 | except Exception as e:
101 | print e
102 | pass
103 |
104 | """ FUNCTION SHOW RESULTS """
105 | def ShowResults(target, option):
106 | newlist=[]
107 | ip = ""
108 | if option == 1:
109 | 		print "\nSubdomains of "+target+" are:"
110 | #Read the list to print the value in a line
111 | for i in urls_final:
112 | if i not in newlist:
113 | ip=WhoismyIP(i,option)
114 | newlist.append(i)
115 | print "\n"
116 | print "\t- " + i+ " ["+ip+"]"
117 | else:
118 | print "Information about the IP",target+"\n"
119 | WhoismyIP(target,option)
120 | print "\nDomains contained in the IP "+target+" are:"
121 | #Read the list to print the value in a line
122 | for i in urls_final:
123 | if i not in newlist:
124 | newlist.append(i)
125 | print "\n"
126 | print "\t- " + i
127 | return newlist
128 |
129 | """FUNCTION EXPORT RESULTS"""
130 | def ExportResults(data,output):
131 | # Start from the first cell. Rows and columns are zero indexed.
132 | row = 0
133 | col = 0
134 | if output == "js":
135 | #Export the results in json format
136 | print "Exporting the results in an json"
137 | with open ('output.json','w') as f:
138 | json.dump(data,f)
139 | elif (output == "xl"):
140 | #Export the results in excel format
141 | print "\nExporting the results in an excel"
142 | # Create a workbook and add a worksheet.
143 | workbook = xlsxwriter.Workbook('output.xlsx')
144 | worksheet = workbook.add_worksheet()
145 | worksheet.write(row, col, "Domain")
146 | worksheet.write(row, col+1, "IP")
147 | row +=1
148 | for domain in data:
149 | col = 0
150 | worksheet.write(row, col, domain)
151 | row += 1
152 | #update row
153 | row = 1
154 | for ip in direction_ip:
155 | col = 1
156 | worksheet.write(row, col, ip)
157 | row += 1
158 | #close the excel
159 | workbook.close()
160 | else:
161 | exit(1)
162 | #MAIN
163 | parser = argparse.ArgumentParser(description='This script searches for the subdomains of a domain using the results indexed by Bing.', formatter_class=RawTextHelpFormatter)
164 | parser.add_argument('-d','--domain', help="The domain to search for.",required=False)
165 | parser.add_argument('-i','--ip', help="The IP for which to find the hosted domains.",required=False)
166 | parser.add_argument('-o','--option', help="Select an option:\n\t1. Searching the subdomains about a domain using the results indexed.\n\t2. Searching the domains belong to an IP.",required=True)
167 | parser.add_argument('-n','--search', help="Indicate the number of searches you want to perform.",required=True)
168 | parser.add_argument('-e','--export', help="Export the results to a json file (Y/N)\n Format available:\n\t1.json\n\t2.xlsx", required=False)
169 | parser.add_argument('-l','--language', help="Indicate the language of the search\n\t(es)-Spanish(default)\n\t(en)-English", required=False)
170 | args = parser.parse_args()
171 | print " _ _ _ _ _____ ___ _"
172 | print " | \ | | || | | __ \ / _ \ | |"
173 | print " | \| | || |___ _| | | | | | |_ __| | __"
174 | print " | . ` |__ _\ \/ / | | | | | | '__| |/ /"
175 | print " | |\ | | | > <| |__| | |_| | | | < "
176 | print " |_| \_| |_| /_/\_\_____/ \___/|_| |_|\_\ "
177 | print "\n"
178 | print """** Tool to search the subdomains about a domain using the results indexed of Bing search
179 | ** Author: Ignacio Brihuega Rodriguez a.k.a N4xh4ck5
180 | ** DISCLAIMER: This tool was developed for educational purposes.
181 | ** The author is not responsible for its use for any other purpose.
182 | ** A high power carries a high responsibility!
183 | ** Version 1.2"""
184 | #Assign the arguments to variables.
185 | #Convert the numeric arguments to int
186 | N = int (args.search)
187 | option = int (args.option)
188 | target=args.domain
189 | ip = args.ip
190 | output=args.export
191 | export = ""
192 | newlist =[]
193 | initial = True
194 | if ((option != 1) and (option != 2)):
195 | print "The option is incorrect"
196 | exit(1)
197 | if option == 1:
198 | #Analyze if domain has got a domain
199 | if target is None:
200 | 		print "You did not enter a value for the domain parameter (option 1)"
201 | exit (1)
202 | elif option == 2:
203 | if ip is None:
204 | 		print "You did not enter a value for the ip parameter (option 2)"
205 | exit (1)
206 | if output is None:
207 | output = 'N'
208 | if ((output == 'y') or (output == 'Y')):
209 | print "Select the output format:"
210 | print "\n\t(js).json"
211 | print "\n\t(xl).xlsx"
212 | export = raw_input ()
213 | if ((export != "js") and (export != "xl")):
214 | print "Incorrect output format selected."
215 | exit(1)
216 | language= args.language
217 | if language is None:
218 | language="es"
219 | if ((language != "es") and (language !="en")):
220 | print "The language is not valid"
221 | exit(1)
222 | try:
223 | if option ==1:
224 | content = SendRequest(target,N,option,initial)
225 | newlist = ShowResults (target,option)
226 | else:
227 | # option == 2:
228 | content = SendRequest(ip,N,option,initial)
229 | newlist = ShowResults (ip,option)
230 | except:
231 | pass
232 | #verify if the user wants to export results
233 | if ((output == 'Y') or (output =='y')):
234 | #Only it can enter if -e is put in the execution
235 | ExportResults(newlist,export)
--------------------------------------------------------------------------------
/old/n4xd0rk_1_1.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from urlparse import urlparse
3 | from bs4 import BeautifulSoup
4 | import argparse
5 | from argparse import RawTextHelpFormatter
6 | import json
7 | #define vars
8 | bing_dork=["site:","-site:","language:","domain:"]
9 | urls = []
10 | urls_clean = []
11 | urls_final =[]
12 | delete_bing=["microsoft","msn","bing"]
13 | count_bing=9
14 | iteration=0
15 | initial=1
16 |
17 | #********************************************************#
18 | #Definition and treatment of the parameters
19 | def parser_html():
20 | i = 0;
21 | soup = BeautifulSoup(content, 'html.parser')
22 | for link in soup.find_all('a'):
23 | try:
24 | if (urlparse(link.get('href'))!='' and urlparse(link.get('href'))[1].strip()!=''):
25 | urls.append(urlparse(link.get('href'))[1])
26 | except Exception as e:
27 | #print(e)
28 | pass
29 | try:
30 | #Delete duplicates
31 | [urls_clean.append(i) for i in urls if not i in urls_clean]
32 | except:
33 | pass
34 | try:
35 | 		#Remove domains that do not belong to the target
36 | for value in urls_clean:
37 | if (value.find(delete_bing[0]) == -1):
38 | #Delete Bing's domains
39 | if (value.find(delete_bing[1]) == -1):
40 | if (value.find(delete_bing[2]) == -1):
41 | urls_final.append(value)
42 | except:
43 | pass
44 | ######FUNCTION EXPORT RESULTS #######
45 | def ExportResults(data):
46 | #Export the results in json format
47 | with open ('output.json','w') as f:
48 | json.dump(data,f)
49 | #MAIN
50 | parser = argparse.ArgumentParser(description='This script searches for the subdomains of a domain using the results indexed by Bing', formatter_class=RawTextHelpFormatter)
51 | parser.add_argument('-d','--domain', help="The domain to search for",required=False)
52 | parser.add_argument('-n','--search', help="Indicate the number of searches you want to perform",required=True)
53 | parser.add_argument('-e','--export', help='Export the results to a json file (Y/N)\n\n', required=False)
54 | parser.add_argument('-l','--language', help='Indicate the language of the search\n\n\t(es)-Spanish(default)\n\t(en)-English', required=False)
55 | args = parser.parse_args()
56 | print " _ _ _ _ _____ ___ _"
57 | print " | \ | | || | | __ \ / _ \ | |"
58 | print " | \| | || |___ _| | | | | | |_ __| | __"
59 | print " | . ` |__ _\ \/ / | | | | | | '__| |/ /"
60 | print " | |\ | | | > <| |__| | |_| | | | < "
61 | print " |_| \_| |_| /_/\_\_____/ \___/|_| |_|\_\ "
62 | print "\n"
63 | print """** Tool to search the subdomains about a domain using the results indexed of Bing search
64 | ** Author: Ignacio Brihuega Rodriguez a.k.a N4xh4ck5
65 | ** DISCLAMER This tool was developed for educational goals.
66 | ** The author is not responsible for using to others goals.
67 | ** A high power, carries a high responsibility!"""
68 | ** Version 1.1
69 | #Assign the arguments to variables.
70 | #Convert the search argument to int
71 | N = int (args.search)
72 | target=args.domain
73 | output=args.export
74 | if ((output != 'Y') and (output != 'N')):
75 | print "The output option is not valid"
76 | exit(1)
77 | language= args.language
78 | if language is None:
79 | language="es"
80 | if ((language != "es") and (language !="en")):
81 | print "The language is not valid"
82 | exit(1)
83 | try:
84 | while (iteration < N):
85 | iteration = iteration +1
86 | if initial==1:
87 | print "\nSearching subdomains...\n"
88 | initial = 0
89 | #First search in Bing
90 | SearchBing = "https://www.bing.com/search?q="+bing_dork[3]+target+"+"+bing_dork[2]+language+"+"+bing_dork[1]+"www."+target+"&go=Buscar"
91 | else:
92 | #Bring the next Bing results - 50 in each page
93 | SearchBing= SearchBing+"&first="+str(count_bing)+"&FORM=PORE"
94 | count_bing=count_bing+50
95 | try:
96 | #Requests
97 | response=requests.get(SearchBing,allow_redirects=True)
98 |
99 | except:
100 | pass
101 | content = response.text
102 | #PARSER HTML
103 | #normalize a called with parameters
104 | parser_html()
105 | except:
106 | pass
107 | newlist=[]
108 | print "Subdomains of "+target+" are:\n"
109 | #Read the list to print the value in a line
110 | for i in urls_final:
111 | if i not in newlist:
112 | newlist.append(i)
113 | print "\n"
114 | print i
115 | #verify if the user wants to export results
116 | if output == 'Y':
117 | #Only it can enter if -j is put in the execution
118 | ExportResults(newlist)
119 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | # Requires Python 2.7 (pip cannot install Python itself)
2 | beautifulsoup4==4.5.1
3 | requests==2.10.0
4 | argparse==1.4.0
5 | shodan
6 | selenium
7 | phantomjs
8 | IPy
9 |
--------------------------------------------------------------------------------