├── .gitignore
├── README.md
└── domainExtractor.py
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# domainExtractor
Extract domains/subdomains/FQDNs from files and URLs

## Installation

```bash
git clone https://github.com/intrudir/domainExtractor.git
```

## Usage Examples
Run the script without args to see usage.

```bash
python3 domainExtractor.py
usage: domainExtractor.py [-h] [-f INPUTFILE] [-u URL] [-t TARGET] [-v]

This script will extract domains from the file you specify and add them to a final file

optional arguments:
  -h, --help            show this help message and exit
  -f INPUTFILE, --file INPUTFILE
                        Specify the file to extract domains from
  -u URL, --url URL     Specify the web page to extract domains from. One at a time for now
  -t TARGET, --target TARGET
                        Specify the target top-level domain you'd like to find and extract e.g. uber.com
  -v, --verbose         Enable slightly more verbose console output
```
### Matching a specified target domain
Specify your source and a target domain to search for and extract.

#### Extracting from files
Using any file with text in it, extract all domains matching the target domain yahoo.com.

```bash
python3 domainExtractor.py -f ~/Desktop/yahoo/test/test.html -t yahoo.com
```
It will extract, sort and dedupe all domains that are found.



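Under the hood (condensed from `extractDomains` in domainExtractor.py), each line is URL-decoded, scanned with a hostname regex, filtered on the target string, then sorted and deduped. A rough sketch; the helper name here is just for illustration:

```python
import re, urllib.parse

# hostname-shaped tokens: dot-separated labels ending in a 2-6 letter suffix
DOMAIN_RE = re.compile(r'(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]{,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,6}')

def extract(lines, target):
    found = []
    for line in lines:
        # decode twice to catch double URL-encoded values
        decoded = urllib.parse.unquote(urllib.parse.unquote(line))
        for match in DOMAIN_RE.findall(decoded):
            if target.lower() in match:
                found.append(match)
    return sorted(set(found))  # sort + dedupe
```
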
You can specify multiple files using commas (no spaces):
```bash
python3 domainExtractor.py -f amass.playstation.net.txt,subfinder.playstation.net.txt --target playstation.net
```



Example output:


#### Extracting from a web page
Pull data directly from Yahoo.com's homepage, extracting all domains matching the target 'yahoo.com'.

```bash
python3 domainExtractor.py -u "https://yahoo.com" -t yahoo.com
```



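For URLs, the script simply fetches the page with requests (sending a browser User-Agent) and feeds the response body line by line into the same extraction pass. A condensed sketch of that step from domainExtractor.py; the helper name is illustrative:

```python
import requests

HEADERS = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"}

def fetch_lines(url):
    # pull the page and split it into lines for the domain regex to scan
    resp = requests.get(url, headers=HEADERS)
    return resp.text.split('\n')
```
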
### Specifying all domains
You can either omit the --target flag completely, or specify 'all', and it will extract all domains it finds (at the moment .com, .net, .org, .tv, .io).

```bash
# pulling from a file, extract all domains
python3 domainExtractor.py -f test.html --target all

# pull from yahoo.com home page, extract all domains. No target specified defaults to 'all'
python3 domainExtractor.py -u "https://yahoo.com"
```



Example output:


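With no specific target, the filter is just a substring check against that short suffix list. A rough sketch of the 'all' branch (condensed from `extractDomains` in domainExtractor.py; the helper name is illustrative):

```python
# keep any regex match that contains one of the supported suffixes
SUFFIXES = ('.com', '.net', '.org', '.tv', '.io')

def keep_all(matches):
    return [m for m in matches if any(s in m for s in SUFFIXES)]
```
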
### Domains not previously found
If you run the script again while checking for the same target, a few things occur:

1) If you already have a final file for it, it will notify you of domains you didn't have before
2) It will append them to the final file
3) It will log the new domains to logs/newdomains.{target}.log with the date & time they were found

This allows you to check the same target across multiple files and be notified of any new domains found!

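A condensed view of what that re-run does (simplified from domainExtractor.py; the real script also handles the first run, when no final file exists yet, and the function name here is just for illustration):

```python
import logging

logger = logging.getLogger(__name__)

def append_new(final_file, fresh_domains):
    # domains we already collected on previous runs
    with open(final_file, 'r') as out:
        old = set(out.read().splitlines())

    # keep only the unseen ones, append them to the final file, and log each one
    new = [d for d in sorted(set(fresh_domains)) if d not in old]
    with open(final_file, 'a') as out:
        for d in new:
            out.write(d + '\n')
            logger.info("New domain found: %s", d)
    return new
```
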
I first use it against my Amass results, then against my Subfinder results.

The script will sort and dedupe, as well as notify me of how many new, unique domains came from Subfinder's results.



It will add them to the final file and log just the new ones to logs/newdomains.{target}.log



--------------------------------------------------------------------------------
/domainExtractor.py:
--------------------------------------------------------------------------------
import os, re, sys, argparse, urllib.parse, logging, requests

parser = argparse.ArgumentParser(
    description="This script will extract domains from the file you specify and add them to a final file"
)
parser.add_argument('-f', '--file', action="store", default=None, dest='inputFile',
                    help="Specify the file to extract domains from")
parser.add_argument('-u', '--url', action="store", default=None, dest='url',
                    help="Specify the web page to extract domains from. One at a time for now")
parser.add_argument('-t', '--target', action="store", default='all', dest='target',
                    help="Specify the target top-level domain you'd like to find and extract e.g. uber.com")
parser.add_argument('-v', '--verbose', action="store_true", default=False, dest='verbose',
                    help="Enable slightly more verbose console output")
args = parser.parse_args()

# no args given: show the help and exit
if not len(sys.argv) > 1:
    parser.print_help()
    print()
    sys.exit()

### Set the logger up
if not os.path.exists('logs'):
    os.makedirs('logs')
logfileName = "logs/newdomains.{}.log".format(args.target)
logging.basicConfig(filename=logfileName, filemode='a',
                    format="%(asctime)s - %(levelname)s - %(message)s")
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

outputFile = "final.{}.txt".format(args.target)

def extractDomains(args, inputFile, rawData):
    domains = []

    if args.target.lower() == 'all':
        print("No specific target given; extracting all domains")

    for i in rawData:
        # URL-decode twice, then grab anything shaped like a hostname
        matches = re.findall(r'(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-]{,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,6}', urllib.parse.unquote(urllib.parse.unquote(i)))
        if args.target.lower() != 'all':
            # keep only matches that contain the target domain
            for j in matches:
                if j.find(args.target.lower()) != -1:
                    domains.append(j)
        else:
            # no specific target: keep matches containing a supported suffix
            for j in matches:
                if j.find('.com') != -1:
                    domains.append(j)
                elif j.find('.net') != -1:
                    domains.append(j)
                elif j.find('.org') != -1:
                    domains.append(j)
                elif j.find('.tv') != -1:
                    domains.append(j)
                elif j.find('.io') != -1:
                    domains.append(j)

    print("File: {} has {} possible domains...".format(inputFile, len(domains)))

    return domains


results = []

# If files are specified, check them
if args.inputFile:
    fileList = args.inputFile.split(',')
    for inputFile in fileList:
        try:
            with open(inputFile, 'r') as f:
                rawData = f.read().splitlines()
        except UnicodeDecodeError:
            # fall back to latin-1 for files that aren't valid UTF-8
            with open(inputFile, 'r', encoding="ISO-8859-1") as f:
                rawData = f.read().splitlines()

        results += extractDomains(args, inputFile, rawData)

# If a URL is specified, pull that
if args.url:
    headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"}
    rawData = requests.get(args.url, headers=headers)
    rawData = rawData.text.split('\n')
    results += extractDomains(args, args.url, rawData)

# sort and dedupe our results
finalDomains = sorted(set(results))

# read all the domains we already have.
try:
    with open(outputFile, 'r') as out:
        oldDomains = out.read().splitlines()

# If no final file, create one
except FileNotFoundError:
    print("Output file not found. Creating one...")

    with open(outputFile, 'w') as out:
        for i in finalDomains:
            out.write("{}\n".format(i))

    print("{} domains written to output file {}".format(len(finalDomains), outputFile))

# loop through fresh domains. If we don't already have it, add it to final file, notify us, log it.
else:
    newDomains = []
    with open(outputFile, 'a') as out:
        for i in finalDomains:
            if i not in oldDomains:
                newDomains.append(i)
                out.write("{}\n".format(i))

    if newDomains:
        print("{} new domains were found and added to {}".format(len(newDomains), outputFile))
        for i in newDomains:
            logger.info("New domain found: {}".format(i))

    else:
        print("No new domains found.")

--------------------------------------------------------------------------------