├── .github
│   └── workflows
│       └── python-app.yml
├── .gitignore
├── LICENSE
├── README.md
├── cloud_enum.py
├── enum_tools
│   ├── __init__.py
│   ├── aws_checks.py
│   ├── azure_checks.py
│   ├── azure_regions.py
│   ├── fuzz.txt
│   ├── gcp_checks.py
│   ├── gcp_regions.py
│   └── utils.py
├── manpage
│   ├── cloud_enum.1
│   └── cloud_enum.txt
├── requirements.txt
├── setup.py
└── tests
    ├── __init__.py
    └── test_utils.py
/.github/workflows/python-app.yml:
--------------------------------------------------------------------------------
1 | # This workflow will install Python dependencies, run tests and lint with a single version of Python
2 | # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
3 |
4 | name: Python application
5 |
6 | on:
7 | push:
8 | branches: [ master ]
9 | pull_request:
10 | branches: [ master ]
11 |
12 | permissions:
13 | contents: read
14 |
15 | jobs:
16 | build:
17 |
18 | runs-on: ubuntu-latest
19 |
20 | steps:
21 | - uses: actions/checkout@v3
22 | - name: Set up Python 3.10
23 | uses: actions/setup-python@v3
24 | with:
25 | python-version: "3.10"
26 | - name: Install dependencies
27 | run: |
28 | python -m pip install --upgrade pip
29 | pip install flake8 pytest
30 | if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
31 | - name: Lint with flake8
32 | run: |
33 | # stop the build if there are Python syntax errors or undefined names
34 | flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
35 | # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
36 | flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
37 | - name: Test with pytest
38 | run: |
39 | pytest
40 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | *.egg-info/
24 | .installed.cfg
25 | *.egg
26 | MANIFEST
27 |
28 | # PyInstaller
29 | # Usually these files are written by a python script from a template
30 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
31 | *.manifest
32 | *.spec
33 |
34 | # Installer logs
35 | pip-log.txt
36 | pip-delete-this-directory.txt
37 |
38 | # Unit test / coverage reports
39 | htmlcov/
40 | .tox/
41 | .coverage
42 | .coverage.*
43 | .cache
44 | nosetests.xml
45 | coverage.xml
46 | *.cover
47 | .hypothesis/
48 | .pytest_cache/
49 |
50 | # Translations
51 | *.mo
52 | *.pot
53 |
54 | # Django stuff:
55 | *.log
56 | local_settings.py
57 | db.sqlite3
58 |
59 | # Flask stuff:
60 | instance/
61 | .webassets-cache
62 |
63 | # Scrapy stuff:
64 | .scrapy
65 |
66 | # Sphinx documentation
67 | docs/_build/
68 |
69 | # PyBuilder
70 | target/
71 |
72 | # Jupyter Notebook
73 | .ipynb_checkpoints
74 |
75 | # pyenv
76 | .python-version
77 |
78 | # celery beat schedule file
79 | celerybeat-schedule
80 |
81 | # SageMath parsed files
82 | *.sage.py
83 |
84 | # Environments
85 | .env
86 | .venv
87 | env/
88 | venv/
89 | ENV/
90 | env.bak/
91 | venv.bak/
92 |
93 | # Spyder project settings
94 | .spyderproject
95 | .spyproject
96 |
97 | # Rope project settings
98 | .ropeproject
99 |
100 | # mkdocs documentation
101 | /site
102 |
103 | # mypy
104 | .mypy_cache/
105 |
106 | # vim swap files
107 | *.swp
108 |
109 | # vscode
110 | .vscode/
111 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2022 initstring
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # cloud_enum
2 |
3 | ## Future of cloud_enum
4 |
5 | I built this tool in 2019 for a pentest involving Azure, as no other enumeration tools supported it at the time. It grew from there, and I learned a lot while adding features.
6 |
7 | Building tools is fun, but maintaining tools is hard. I haven't actively used this tool myself in a while, but I've done my best to fix bugs and review pull requests.
8 |
9 | Moving forward, it makes sense to consolidate this functionality into a well-maintained project that handles the essentials (web/dns requests, threading, I/O, logging, etc.). [Nuclei](https://github.com/projectdiscovery/nuclei) is really well suited for this. You can see my first PR to migrate cloud_enum functionality to Nuclei [here](https://github.com/projectdiscovery/nuclei-templates/pull/6865).
10 |
11 | I encourage others to contribute templates to Nuclei, allowing us to focus on detecting cloud resources while leaving the groundwork to Nuclei.
12 |
13 | I'll still try to review PRs here to address bugs as time permits, but likely won't have time for major changes.
14 |
15 | Thanks to all the great contributors. Good luck with your recon!
16 |
17 | ## Overview
18 |
19 | Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
20 |
21 | Currently enumerates the following:
22 |
23 | **Amazon Web Services**:
24 | - Open / Protected S3 Buckets
25 | - awsapps (WorkMail, WorkDocs, Connect, etc.)
26 |
27 | **Microsoft Azure**:
28 | - Storage Accounts
29 | - Open Blob Storage Containers
30 | - Hosted Databases
31 | - Virtual Machines
32 | - Web Apps
33 |
34 | **Google Cloud Platform**:
35 | - Open / Protected GCP Buckets
36 | - Open / Protected Firebase Realtime Databases
37 | - Google App Engine sites
38 | - Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names)
39 | - Open Firebase Apps
40 |
41 | See it in action in [Codingo](https://github.com/codingo)'s video demo [here](https://www.youtube.com/embed/pTUDJhWJ1m0).
42 |
43 |
44 |
45 |
46 | ## Usage
47 |
48 | ### Setup
49 | Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. Install the requirements as follows:
50 |
51 | ```sh
52 | pip3 install -r ./requirements.txt
53 | ```
54 |
55 | ### Running
56 | The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with `-m` and/or `-b`.
57 |
58 | You can provide multiple keywords by specifying the `-k` argument multiple times.
59 |
60 | Keywords are mutated automatically using strings from `enum_tools/fuzz.txt` or a file you provide with the `-m` flag. Services that require a second-level of brute forcing (Azure Containers and GCP Functions) will also use `fuzz.txt` by default or a file you provide with the `-b` flag.
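
For example, a keyword of `somecompany` combined with a mutation of `dev` expands to the following candidate names (the unmutated keyword itself is always kept as well):

```
somecompanydev
somecompany.dev
somecompany-dev
devsomecompany
dev.somecompany
dev-somecompany
```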
61 |
62 | Let's say you were researching "somecompany", whose website is "somecompany.io" and which makes a product called "blockchaindoohickey". You could run the tool like this:
63 |
64 | ```sh
65 | ./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey
66 | ```
67 |
68 | HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example increasing the thread count to 10:
69 |
70 | ```sh
71 | ./cloud_enum.py -k keyword -t 10
72 | ```
73 |
74 | **IMPORTANT**: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save scanning time, the `REGIONS` variable defined in `enum_tools/azure_regions.py` and `enum_tools/gcp_regions.py` is set by default to use only one region. You may want to look at these files and edit them to be relevant to your own work.
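
For example, to scan a few more Azure regions, you could edit the final `REGIONS` assignment in `enum_tools/azure_regions.py`. The regions below are illustrative picks from the full list at the top of that file; choose whichever are relevant to your target:

```python
# Limit the search to the regions relevant to your work
REGIONS = ['eastus', 'westus2', 'northeurope']
```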
75 |
76 | **Complete Usage Details**
77 | ```
78 | usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]
79 |
80 | Multi-cloud enumeration utility. All hail OSINT!
81 |
82 | optional arguments:
83 | -h, --help show this help message and exit
84 | -k KEYWORD, --keyword KEYWORD
85 | Keyword. Can use argument multiple times.
86 | -kf KEYFILE, --keyfile KEYFILE
87 | Input file with a single keyword per line.
88 | -m MUTATIONS, --mutations MUTATIONS
89 | Mutations. Default: enum_tools/fuzz.txt
90 | -b BRUTE, --brute BRUTE
91 | List to brute-force Azure container names. Default: enum_tools/fuzz.txt
92 | -t THREADS, --threads THREADS
93 | Threads for HTTP brute-force. Default = 5
94 | -ns NAMESERVER, --nameserver NAMESERVER
95 | DNS server to use in brute-force.
96 | -l LOGFILE, --logfile LOGFILE
97 | Will APPEND found items to specified file.
98 | -f FORMAT, --format FORMAT
99 | Format for log file (text,json,csv - defaults to text)
100 | --disable-aws Disable Amazon checks.
101 | --disable-azure Disable Azure checks.
102 | --disable-gcp Disable Google checks.
103 | -qs, --quickscan Disable all mutations and second-level scans
104 | ```
105 |
106 | ## Thanks
107 | So far, I have borrowed from:
108 | - Some of the permutations from [GCPBucketBrute](https://github.com/RhinoSecurityLabs/GCPBucketBrute/blob/master/permutations.txt)
109 |
--------------------------------------------------------------------------------
/cloud_enum.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | """
4 | cloud_enum by initstring (github.com/initstring)
5 |
6 | Multi-cloud OSINT tool designed to enumerate storage and services in AWS,
7 | Azure, and GCP.
8 |
9 | Enjoy!
10 | """
11 |
12 | import os
13 | import sys
14 | import argparse
15 | import re
16 | from enum_tools import aws_checks
17 | from enum_tools import azure_checks
18 | from enum_tools import gcp_checks
19 | from enum_tools import utils
20 |
21 | BANNER = '''
22 | ##########################
23 | cloud_enum
24 | github.com/initstring
25 | ##########################
26 |
27 | '''
28 |
29 |
30 | def parse_arguments():
31 | """
32 | Handles user-passed parameters
33 | """
34 | desc = "Multi-cloud enumeration utility. All hail OSINT!"
35 | parser = argparse.ArgumentParser(description=desc)
36 |
37 | # Grab the current dir of the script, for setting some defaults below
38 | script_path = os.path.split(os.path.abspath(sys.argv[0]))[0]
39 |
40 | kw_group = parser.add_mutually_exclusive_group(required=True)
41 |
42 |     # Keyword can be given multiple times
43 | kw_group.add_argument('-k', '--keyword', type=str, action='append',
44 | help='Keyword. Can use argument multiple times.')
45 |
46 | # OR, a keyword file can be used
47 | kw_group.add_argument('-kf', '--keyfile', type=str, action='store',
48 | help='Input file with a single keyword per line.')
49 |
50 | # Use included mutations file by default, or let the user provide one
51 | parser.add_argument('-m', '--mutations', type=str, action='store',
52 | default=script_path + '/enum_tools/fuzz.txt',
53 | help='Mutations. Default: enum_tools/fuzz.txt')
54 |
55 |     # Use the included container brute-force list, or let the user provide one
56 | parser.add_argument('-b', '--brute', type=str, action='store',
57 | default=script_path + '/enum_tools/fuzz.txt',
58 | help='List to brute-force Azure container names.'
59 | ' Default: enum_tools/fuzz.txt')
60 |
61 | parser.add_argument('-t', '--threads', type=int, action='store',
62 | default=5, help='Threads for HTTP brute-force.'
63 | ' Default = 5')
64 |
65 | parser.add_argument('-ns', '--nameserver', type=str, action='store',
66 | default='1.1.1.1',
67 | help='DNS server to use in brute-force.')
68 | parser.add_argument('-nsf', '--nameserverfile', type=str,
69 | help='Path to the file containing nameserver IPs')
70 | parser.add_argument('-l', '--logfile', type=str, action='store',
71 | help='Appends found items to specified file.')
72 | parser.add_argument('-f', '--format', type=str, action='store',
73 | default='text',
74 | help='Format for log file (text,json,csv)'
75 | ' - default: text')
76 |
77 | parser.add_argument('--disable-aws', action='store_true',
78 | help='Disable Amazon checks.')
79 |
80 | parser.add_argument('--disable-azure', action='store_true',
81 | help='Disable Azure checks.')
82 |
83 | parser.add_argument('--disable-gcp', action='store_true',
84 | help='Disable Google checks.')
85 |
86 | parser.add_argument('-qs', '--quickscan', action='store_true',
87 | help='Disable all mutations and second-level scans')
88 |
89 | args = parser.parse_args()
90 |
91 | # Ensure mutations file is readable
92 | if not os.access(args.mutations, os.R_OK):
93 | print(f"[!] Cannot access mutations file: {args.mutations}")
94 | sys.exit()
95 |
96 | # Ensure brute file is readable
97 | if not os.access(args.brute, os.R_OK):
98 | print("[!] Cannot access brute-force file, exiting")
99 | sys.exit()
100 |
101 | # Ensure keywords file is readable
102 | if args.keyfile:
103 | if not os.access(args.keyfile, os.R_OK):
104 | print("[!] Cannot access keyword file, exiting")
105 | sys.exit()
106 |
107 | # Parse keywords from input file
108 | with open(args.keyfile, encoding='utf-8') as infile:
109 |             args.keyword = [line.strip() for line in infile if line.strip()]
110 |
111 | # Ensure log file is writeable
112 | if args.logfile:
113 | if os.path.isdir(args.logfile):
114 | print("[!] Can't specify a directory as the logfile, exiting.")
115 | sys.exit()
116 | if os.path.isfile(args.logfile):
117 | target = args.logfile
118 | else:
119 | target = os.path.dirname(args.logfile)
120 | if target == '':
121 | target = '.'
122 |
123 | if not os.access(target, os.W_OK):
124 | print("[!] Cannot write to log file, exiting")
125 | sys.exit()
126 |
127 | # Set up logging format
128 | if args.format not in ('text', 'json', 'csv'):
129 | print("[!] Sorry! Allowed log formats: 'text', 'json', or 'csv'")
130 | sys.exit()
131 | # Set the global in the utils file, where logging needs to happen
132 | utils.init_logfile(args.logfile, args.format)
133 |
134 | return args
135 |
136 |
137 | def print_status(args):
138 | """
139 | Print a short pre-run status message
140 | """
141 | print(f"Keywords: {', '.join(args.keyword)}")
142 | if args.quickscan:
143 | print("Mutations: NONE! (Using quickscan)")
144 | else:
145 | print(f"Mutations: {args.mutations}")
146 | print(f"Brute-list: {args.brute}")
147 | print("")
148 |
149 |
150 | def check_windows():
151 | """
152 | Fixes pretty color printing for Windows users. Keeping out of
153 | requirements.txt to avoid the library requirement for most users.
154 | """
155 | if os.name == 'nt':
156 | try:
157 | import colorama
158 | colorama.init()
159 | except ModuleNotFoundError:
160 | print("[!] Yo, Windows user - if you want pretty colors, you can"
161 | " install the colorama python package.")
162 |
163 |
164 | def read_mutations(mutations_file):
165 | """
166 | Read mutations file into memory for processing.
167 | """
168 | with open(mutations_file, encoding="utf8", errors="ignore") as infile:
169 | mutations = infile.read().splitlines()
170 |
171 | print(f"[+] Mutations list imported: {len(mutations)} items")
172 | return mutations
173 |
174 |
175 | def clean_text(text):
176 | """
177 | Clean text to be RFC compliant for hostnames / DNS
178 | """
179 | banned_chars = re.compile('[^a-z0-9.-]')
180 | text_lower = text.lower()
181 | text_clean = banned_chars.sub('', text_lower)
182 |
183 | return text_clean
184 |
185 |
186 | def append_name(name, names_list):
187 | """
188 | Ensure strings stick to DNS label limit of 63 characters
189 | """
190 | if len(name) <= 63:
191 | names_list.append(name)
192 |
193 |
194 | def build_names(base_list, mutations):
195 | """
196 | Combine base and mutations for processing by individual modules.
197 | """
198 | names = []
199 |
200 | for base in base_list:
201 | # Clean base
202 | base = clean_text(base)
203 |
204 | # First, include with no mutations
205 | append_name(base, names)
206 |
207 | for mutation in mutations:
208 | # Clean mutation
209 | mutation = clean_text(mutation)
210 |
211 | # Then, do appends
212 | append_name(f"{base}{mutation}", names)
213 | append_name(f"{base}.{mutation}", names)
214 | append_name(f"{base}-{mutation}", names)
215 |
216 | # Then, do prepends
217 | append_name(f"{mutation}{base}", names)
218 | append_name(f"{mutation}.{base}", names)
219 | append_name(f"{mutation}-{base}", names)
220 |
221 | print(f"[+] Mutated results: {len(names)} items")
222 |
223 | return names
224 |
225 | def read_nameservers(file_path):
226 | try:
227 | with open(file_path, 'r') as file:
228 | nameservers = [line.strip() for line in file if line.strip()]
229 | if not nameservers:
230 | raise ValueError("Nameserver file is empty")
231 | return nameservers
232 | except FileNotFoundError:
233 | print(f"Error: File '{file_path}' not found.")
234 |         sys.exit(1)
235 | except ValueError as e:
236 | print(e)
237 |         sys.exit(1)
238 |
239 | def main():
240 | """
241 | Main program function.
242 | """
243 | args = parse_arguments()
244 | print(BANNER)
245 |
246 | # Generate a basic status on targets and parameters
247 | print_status(args)
248 |
249 | # Give our Windows friends a chance at pretty colors
250 | check_windows()
251 |
252 | # First, build a sorted base list of target names
253 | if args.quickscan:
254 | mutations = []
255 | else:
256 | mutations = read_mutations(args.mutations)
257 | names = build_names(args.keyword, mutations)
258 |
259 | # All the work is done in the individual modules
260 | try:
261 | if not args.disable_aws:
262 | aws_checks.run_all(names, args)
263 | if not args.disable_azure:
264 | azure_checks.run_all(names, args)
265 | if not args.disable_gcp:
266 | gcp_checks.run_all(names, args)
267 | except KeyboardInterrupt:
268 | print("Thanks for playing!")
269 | sys.exit()
270 |
271 | # Best of luck to you!
272 | print("\n[+] All done, happy hacking!\n")
273 | sys.exit()
274 |
275 |
276 | if __name__ == '__main__':
277 | main()
278 |
--------------------------------------------------------------------------------
/enum_tools/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/initstring/cloud_enum/d3c292c2068802fc851252629da0e9d8540cd232/enum_tools/__init__.py
--------------------------------------------------------------------------------
/enum_tools/aws_checks.py:
--------------------------------------------------------------------------------
1 | """
2 | AWS-specific checks. Part of the cloud_enum package available at
3 | github.com/initstring/cloud_enum
4 | """
5 |
6 | from enum_tools import utils
7 |
8 | BANNER = '''
9 | ++++++++++++++++++++++++++
10 | amazon checks
11 | ++++++++++++++++++++++++++
12 | '''
13 |
14 | # Known S3 domain names
15 | S3_URL = 's3.amazonaws.com'
16 | APPS_URL = 'awsapps.com'
17 |
18 | # Known AWS region names. This global will be used unless the user passes
19 | # in a specific region name. (NOT YET IMPLEMENTED)
20 | AWS_REGIONS = ['amazonaws.com',
21 | 'ap-east-1.amazonaws.com',
22 | 'us-east-2.amazonaws.com',
23 | 'us-west-1.amazonaws.com',
24 | 'us-west-2.amazonaws.com',
25 | 'ap-south-1.amazonaws.com',
26 | 'ap-northeast-1.amazonaws.com',
27 | 'ap-northeast-2.amazonaws.com',
28 | 'ap-northeast-3.amazonaws.com',
29 | 'ap-southeast-1.amazonaws.com',
30 | 'ap-southeast-2.amazonaws.com',
31 | 'ca-central-1.amazonaws.com',
32 | 'cn-north-1.amazonaws.com.cn',
33 | 'cn-northwest-1.amazonaws.com.cn',
34 | 'eu-central-1.amazonaws.com',
35 | 'eu-west-1.amazonaws.com',
36 | 'eu-west-2.amazonaws.com',
37 | 'eu-west-3.amazonaws.com',
38 | 'eu-north-1.amazonaws.com',
39 | 'sa-east-1.amazonaws.com']
40 |
41 |
42 | def print_s3_response(reply):
43 | """
44 | Parses the HTTP reply of a brute-force attempt
45 |
46 | This function is passed into the class object so we can view results
47 | in real-time.
48 | """
49 | data = {'platform': 'aws', 'msg': '', 'target': '', 'access': ''}
50 |
51 | if reply.status_code == 404:
52 | pass
53 | elif 'Bad Request' in reply.reason:
54 | pass
55 | elif reply.status_code == 200:
56 | data['msg'] = 'OPEN S3 BUCKET'
57 | data['target'] = reply.url
58 | data['access'] = 'public'
59 | utils.fmt_output(data)
60 | utils.list_bucket_contents(reply.url)
61 | elif reply.status_code == 403:
62 | data['msg'] = 'Protected S3 Bucket'
63 | data['target'] = reply.url
64 | data['access'] = 'protected'
65 | utils.fmt_output(data)
66 | elif 'Slow Down' in reply.reason:
67 | print("[!] You've been rate limited, skipping rest of check...")
68 | return 'breakout'
69 | else:
70 | print(f" Unknown status codes being received from {reply.url}:\n"
71 |               f"    {reply.status_code}: {reply.reason}")
72 |
73 | return None
74 |
75 |
76 | def check_s3_buckets(names, threads):
77 | """
78 | Checks for open and restricted Amazon S3 buckets
79 | """
80 | print("[+] Checking for S3 buckets")
81 |
82 | # Start a counter to report on elapsed time
83 | start_time = utils.start_timer()
84 |
85 | # Initialize the list of correctly formatted urls
86 | candidates = []
87 |
88 |     # Take each mutated keyword and craft a url with the correct format
89 | for name in names:
90 | candidates.append(f'{name}.{S3_URL}')
91 |
92 | # Send the valid names to the batch HTTP processor
93 | utils.get_url_batch(candidates, use_ssl=False,
94 | callback=print_s3_response,
95 | threads=threads)
96 |
97 |     # Stop the timer
98 | utils.stop_timer(start_time)
99 |
100 |
101 | def check_awsapps(names, threads, nameserver, nameserverfile=False):
102 | """
103 | Checks for existence of AWS Apps
104 | (ie. WorkDocs, WorkMail, Connect, etc.)
105 | """
106 | data = {'platform': 'aws', 'msg': 'AWS App Found:', 'target': '', 'access': ''}
107 |
108 | print("[+] Checking for AWS Apps")
109 |
110 | # Start a counter to report on elapsed time
111 | start_time = utils.start_timer()
112 |
113 | # Initialize the list of domain names to look up
114 | candidates = []
115 |
116 | # Initialize the list of valid hostnames
117 | valid_names = []
118 |
119 |     # Take each mutated keyword and craft a domain name to look up.
120 | for name in names:
121 | candidates.append(f'{name}.{APPS_URL}')
122 |
123 | # AWS Apps use DNS sub-domains. First, see which are valid.
124 | valid_names = utils.fast_dns_lookup(candidates, nameserver,
125 | nameserverfile, threads=threads)
126 |
127 | for name in valid_names:
128 | data['target'] = f'https://{name}'
129 | data['access'] = 'protected'
130 | utils.fmt_output(data)
131 |
132 | # Stop the timer
133 | utils.stop_timer(start_time)
134 |
135 |
136 | def run_all(names, args):
137 | """
138 | Function is called by main program
139 | """
140 | print(BANNER)
141 |
142 | # Use user-supplied AWS region if provided
143 | # if not regions:
144 | # regions = AWS_REGIONS
145 | check_s3_buckets(names, args.threads)
146 | check_awsapps(names, args.threads, args.nameserver, args.nameserverfile)
147 |
--------------------------------------------------------------------------------
/enum_tools/azure_checks.py:
--------------------------------------------------------------------------------
1 | """
2 | Azure-specific checks. Part of the cloud_enum package available at
3 | github.com/initstring/cloud_enum
4 | """
5 |
6 | import re
7 | import requests
8 | from enum_tools import utils
9 | from enum_tools import azure_regions
10 |
11 | BANNER = '''
12 | ++++++++++++++++++++++++++
13 | azure checks
14 | ++++++++++++++++++++++++++
15 | '''
16 |
17 | # Known Azure domain names
18 | BLOB_URL = 'blob.core.windows.net'
19 | FILE_URL = 'file.core.windows.net'
20 | QUEUE_URL = 'queue.core.windows.net'
21 | TABLE_URL = 'table.core.windows.net'
22 | MGMT_URL = 'scm.azurewebsites.net'
23 | VAULT_URL = 'vault.azure.net'
24 | WEBAPP_URL = 'azurewebsites.net'
25 | DATABASE_URL = 'database.windows.net'
26 |
27 | # Virtual machine DNS names are actually:
28 | # {whatever}.{region}.cloudapp.azure.com
29 | VM_URL = 'cloudapp.azure.com'
30 |
31 |
32 | def print_account_response(reply):
33 | """
34 | Parses the HTTP reply of a brute-force attempt
35 |
36 | This function is passed into the class object so we can view results
37 | in real-time.
38 | """
39 | data = {'platform': 'azure', 'msg': '', 'target': '', 'access': ''}
40 |
41 | if reply.status_code == 404 or 'The requested URI does not represent' in reply.reason:
42 | pass
43 | elif 'Server failed to authenticate the request' in reply.reason:
44 | data['msg'] = 'Auth-Only Account'
45 | data['target'] = reply.url
46 | data['access'] = 'protected'
47 | utils.fmt_output(data)
48 | elif 'The specified account is disabled' in reply.reason:
49 | data['msg'] = 'Disabled Account'
50 | data['target'] = reply.url
51 | data['access'] = 'disabled'
52 | utils.fmt_output(data)
53 | elif 'Value for one of the query' in reply.reason:
54 | data['msg'] = 'HTTP-OK Account'
55 | data['target'] = reply.url
56 | data['access'] = 'public'
57 | utils.fmt_output(data)
58 | elif 'The account being accessed' in reply.reason:
59 | data['msg'] = 'HTTPS-Only Account'
60 | data['target'] = reply.url
61 | data['access'] = 'public'
62 | utils.fmt_output(data)
63 | elif 'Unauthorized' in reply.reason:
64 |         data['msg'] = 'Unauthorized Account'
65 | data['target'] = reply.url
66 | data['access'] = 'public'
67 | utils.fmt_output(data)
68 | else:
69 |         print(f"    Unknown status codes being received from {reply.url}:\n"
70 |               f"    {reply.status_code}: {reply.reason}")
71 |
72 | def check_storage_accounts(names, threads, nameserver, nameserverfile=False):
73 | """
74 | Checks storage account names
75 | """
76 | print("[+] Checking for Azure Storage Accounts")
77 |
78 | # Start a counter to report on elapsed time
79 | start_time = utils.start_timer()
80 |
81 | # Initialize the list of domain names to look up
82 | candidates = []
83 |
84 | # Initialize the list of valid hostnames
85 | valid_names = []
86 |
87 |     # Take each mutated keyword and craft a domain name to look up.
88 | # As Azure Storage Accounts can contain only letters and numbers,
89 | # discard those not matching to save time on the DNS lookups.
90 | regex = re.compile('[^a-zA-Z0-9]')
91 | for name in names:
92 | if not re.search(regex, name):
93 | candidates.append(f'{name}.{BLOB_URL}')
94 |
95 | # Azure Storage Accounts use DNS sub-domains. First, see which are valid.
96 | valid_names = utils.fast_dns_lookup(candidates, nameserver,
97 | nameserverfile, threads=threads)
98 |
99 | # Send the valid names to the batch HTTP processor
100 | utils.get_url_batch(valid_names, use_ssl=False,
101 | callback=print_account_response,
102 | threads=threads)
103 |
104 | # Stop the timer
105 | utils.stop_timer(start_time)
106 |
107 | # de-dupe the results and return
108 | return list(set(valid_names))
109 |
110 | def check_file_accounts(names, threads, nameserver, nameserverfile=False):
111 | """
112 | Checks File account names
113 | """
114 | print("[+] Checking for Azure File Accounts")
115 |
116 | # Start a counter to report on elapsed time
117 | start_time = utils.start_timer()
118 |
119 | # Initialize the list of domain names to look up
120 | candidates = []
121 |
122 | # Initialize the list of valid hostnames
123 | valid_names = []
124 |
125 |     # Take each mutated keyword and craft a domain name to look up.
126 | # As Azure Storage Accounts can contain only letters and numbers,
127 | # discard those not matching to save time on the DNS lookups.
128 | regex = re.compile('[^a-zA-Z0-9]')
129 | for name in names:
130 | if not re.search(regex, name):
131 | candidates.append(f'{name}.{FILE_URL}')
132 |
133 | # Azure Storage Accounts use DNS sub-domains. First, see which are valid.
134 | valid_names = utils.fast_dns_lookup(candidates, nameserver,
135 | nameserverfile, threads=threads)
136 |
137 | # Send the valid names to the batch HTTP processor
138 | utils.get_url_batch(valid_names, use_ssl=False,
139 | callback=print_account_response,
140 | threads=threads)
141 |
142 | # Stop the timer
143 | utils.stop_timer(start_time)
144 |
145 | # de-dupe the results and return
146 | return list(set(valid_names))
147 |
148 | def check_queue_accounts(names, threads, nameserver, nameserverfile=False):
149 | """
150 | Checks Queue account names
151 | """
152 | print("[+] Checking for Azure Queue Accounts")
153 |
154 | # Start a counter to report on elapsed time
155 | start_time = utils.start_timer()
156 |
157 | # Initialize the list of domain names to look up
158 | candidates = []
159 |
160 | # Initialize the list of valid hostnames
161 | valid_names = []
162 |
163 |     # Take each mutated keyword and craft a domain name to look up.
164 | # As Azure Storage Accounts can contain only letters and numbers,
165 | # discard those not matching to save time on the DNS lookups.
166 | regex = re.compile('[^a-zA-Z0-9]')
167 | for name in names:
168 | if not re.search(regex, name):
169 | candidates.append(f'{name}.{QUEUE_URL}')
170 |
171 | # Azure Storage Accounts use DNS sub-domains. First, see which are valid.
172 | valid_names = utils.fast_dns_lookup(candidates, nameserver,
173 | nameserverfile, threads=threads)
174 |
175 | # Send the valid names to the batch HTTP processor
176 | utils.get_url_batch(valid_names, use_ssl=False,
177 | callback=print_account_response,
178 | threads=threads)
179 |
180 | # Stop the timer
181 | utils.stop_timer(start_time)
182 |
183 | # de-dupe the results and return
184 | return list(set(valid_names))
185 |
186 | def check_table_accounts(names, threads, nameserver, nameserverfile=False):
187 | """
188 | Checks Table account names
189 | """
190 | print("[+] Checking for Azure Table Accounts")
191 |
192 | # Start a counter to report on elapsed time
193 | start_time = utils.start_timer()
194 |
195 | # Initialize the list of domain names to look up
196 | candidates = []
197 |
198 | # Initialize the list of valid hostnames
199 | valid_names = []
200 |
201 |     # Take each mutated keyword and craft a domain name to look up.
202 | # As Azure Storage Accounts can contain only letters and numbers,
203 | # discard those not matching to save time on the DNS lookups.
204 | regex = re.compile('[^a-zA-Z0-9]')
205 | for name in names:
206 | if not re.search(regex, name):
207 | candidates.append(f'{name}.{TABLE_URL}')
208 |
209 | # Azure Storage Accounts use DNS sub-domains. First, see which are valid.
210 | valid_names = utils.fast_dns_lookup(candidates, nameserver,
211 | nameserverfile, threads=threads)
212 |
213 | # Send the valid names to the batch HTTP processor
214 | utils.get_url_batch(valid_names, use_ssl=False,
215 | callback=print_account_response,
216 | threads=threads)
217 |
218 | # Stop the timer
219 | utils.stop_timer(start_time)
220 |
221 | # de-dupe the results and return
222 | return list(set(valid_names))
223 |
224 | def check_mgmt_accounts(names, threads, nameserver, nameserverfile=False):
225 | """
226 | Checks App Management account names
227 | """
228 | print("[+] Checking for Azure App Management Accounts")
229 |
230 | # Start a counter to report on elapsed time
231 | start_time = utils.start_timer()
232 |
233 | # Initialize the list of domain names to look up
234 | candidates = []
235 |
236 | # Initialize the list of valid hostnames
237 | valid_names = []
238 |
239 |     # Take each mutated keyword and craft a domain name to look up.
240 |     # Keep only alphanumeric names here, discarding the rest to save
241 |     # time on the DNS lookups.
242 | regex = re.compile('[^a-zA-Z0-9]')
243 | for name in names:
244 | if not re.search(regex, name):
245 | candidates.append(f'{name}.{MGMT_URL}')
246 |
247 |     # App Management sites use DNS sub-domains. First, see which are valid.
248 | valid_names = utils.fast_dns_lookup(candidates, nameserver,
249 | nameserverfile, threads=threads)
250 |
251 | # Send the valid names to the batch HTTP processor
252 | utils.get_url_batch(valid_names, use_ssl=False,
253 | callback=print_account_response,
254 | threads=threads)
255 |
256 | # Stop the timer
257 | utils.stop_timer(start_time)
258 |
259 | # de-dupe the results and return
260 | return list(set(valid_names))
261 |
262 | def check_vault_accounts(names, threads, nameserver, nameserverfile=False):
263 | """
264 | Checks Key Vault account names
265 | """
266 | print("[+] Checking for Azure Key Vault Accounts")
267 |
268 | # Start a counter to report on elapsed time
269 | start_time = utils.start_timer()
270 |
271 | # Initialize the list of domain names to look up
272 | candidates = []
273 |
274 | # Initialize the list of valid hostnames
275 | valid_names = []
276 |
277 |     # Take each mutated keyword and craft a domain name to look up.
278 |     # Keep only alphanumeric names here, discarding the rest to save
279 |     # time on the DNS lookups.
280 | regex = re.compile('[^a-zA-Z0-9]')
281 | for name in names:
282 | if not re.search(regex, name):
283 | candidates.append(f'{name}.{VAULT_URL}')
284 |
285 |     # Key Vaults use DNS sub-domains. First, see which are valid.
286 | valid_names = utils.fast_dns_lookup(candidates, nameserver,
287 | nameserverfile, threads=threads)
288 |
289 | # Send the valid names to the batch HTTP processor
290 | utils.get_url_batch(valid_names, use_ssl=False,
291 | callback=print_account_response,
292 | threads=threads)
293 |
294 | # Stop the timer
295 | utils.stop_timer(start_time)
296 |
297 | # de-dupe the results and return
298 | return list(set(valid_names))
299 |
300 |
301 | def print_container_response(reply):
302 | """
303 | Parses the HTTP reply of a brute-force attempt
304 |
305 | This function is passed into the class object so we can view results
306 | in real-time.
307 | """
308 | data = {'platform': 'azure', 'msg': '', 'target': '', 'access': ''}
309 |
310 | # Stop brute forcing disabled accounts
311 | if 'The specified account is disabled' in reply.reason:
312 | print(" [!] Breaking out early, account disabled.")
313 | return 'breakout'
314 |
315 | # Stop brute forcing accounts without permission
316 | if ('not authorized to perform this operation' in reply.reason or
317 | 'not have sufficient permissions' in reply.reason or
318 | 'Public access is not permitted' in reply.reason or
319 | 'Server failed to authenticate the request' in reply.reason):
320 | print(" [!] Breaking out early, auth required.")
321 | return 'breakout'
322 |
323 | # Stop brute forcing unsupported accounts
324 | if 'Blob API is not yet supported' in reply.reason:
325 | print(" [!] Breaking out early, Hierarchical namespace account")
326 | return 'breakout'
327 |
328 | # Handle other responses
329 | if reply.status_code == 404:
330 | pass
331 | elif reply.status_code == 200:
332 | data['msg'] = 'OPEN AZURE CONTAINER'
333 | data['target'] = reply.url
334 | data['access'] = 'public'
335 | utils.fmt_output(data)
336 | utils.list_bucket_contents(reply.url)
337 | elif 'One of the request inputs is out of range' in reply.reason:
338 | pass
339 | elif 'The request URI is invalid' in reply.reason:
340 | pass
341 | else:
342 | print(f" Unknown status codes being received from {reply.url}:\n"
343 |               f"    {reply.status_code}: {reply.reason}")
344 |
345 | return None
346 |
347 |
348 | def brute_force_containers(storage_accounts, brute_list, threads):
349 | """
350 | Attempts to find public Blob Containers in valid Storage Accounts
351 |
352 |     Here is the URL format to list Azure Blob Container contents:
353 |     <account>.blob.core.windows.net/<container>/?restype=container&comp=list
354 | """
355 |
356 | # We have a list of valid DNS names that might not be worth scraping,
357 | # such as disabled accounts or authentication required. Let's quickly
358 | # weed those out.
359 | print(f"[*] Checking {len(storage_accounts)} accounts for status before brute-forcing")
360 | valid_accounts = []
361 |     for account in storage_accounts:
362 |         try:
363 |             reply = requests.get(f'https://{account}/')
364 |             # Don't remove items from the list while iterating over it
365 |             # (that skips entries); just skip accounts not worth scraping.
366 |             if ('Server failed to authenticate the request' in reply.reason
367 |                     or 'The specified account is disabled' in reply.reason):
368 |                 continue
369 |             valid_accounts.append(account)
370 | except requests.exceptions.ConnectionError as error_msg:
371 | print(f" [!] Connection error on https://{account}:")
372 | print(error_msg)
373 |
374 | # Read the brute force file into memory
375 | clean_names = utils.get_brute(brute_list, mini=3)
376 |
377 | # Start a counter to report on elapsed time
378 | start_time = utils.start_timer()
379 |
380 | print(f"[*] Brute-forcing container names in {len(valid_accounts)} storage accounts")
381 | for account in valid_accounts:
382 | print(f"[*] Brute-forcing {len(clean_names)} container names in {account}")
383 |
384 | # Initialize the list of correctly formatted urls
385 | candidates = []
386 |
387 | # Take each mutated keyword and craft a url with correct format
388 | for name in clean_names:
389 | candidates.append(f'{account}/{name}/?restype=container&comp=list')
390 |
391 | # Send the valid names to the batch HTTP processor
392 | utils.get_url_batch(candidates, use_ssl=True,
393 | callback=print_container_response,
394 | threads=threads)
395 |
396 | # Stop the timer
397 | utils.stop_timer(start_time)
398 |
399 |
400 | def print_website_response(hostname):
401 | """
402 | This function is passed into the DNS brute force as a callback,
403 | so we can get real-time results.
404 | """
405 | data = {'platform': 'azure', 'msg': '', 'target': '', 'access': ''}
406 |
407 | data['msg'] = 'Registered Azure Website DNS Name'
408 | data['target'] = hostname
409 | data['access'] = 'public'
410 | utils.fmt_output(data)
411 |
412 |
413 | def check_azure_websites(names, nameserver, threads, nameserverfile=False):
414 | """
415 | Checks for Azure Websites (PaaS)
416 | """
417 | print("[+] Checking for Azure Websites")
418 |
419 | # Start a counter to report on elapsed time
420 | start_time = utils.start_timer()
421 |
422 | # Initialize the list of domain names to look up
423 | candidates = [name + '.' + WEBAPP_URL for name in names]
424 |
425 | # Azure Websites use DNS sub-domains. If it resolves, it is registered.
426 | utils.fast_dns_lookup(candidates, nameserver,
427 | nameserverfile,
428 | callback=print_website_response,
429 | threads=threads)
430 |
431 | # Stop the timer
432 | utils.stop_timer(start_time)
433 |
434 |
435 | def print_database_response(hostname):
436 | """
437 | This function is passed into the DNS brute force as a callback,
438 | so we can get real-time results.
439 | """
440 | data = {'platform': 'azure', 'msg': '', 'target': '', 'access': ''}
441 |
442 | data['msg'] = 'Registered Azure Database DNS Name'
443 | data['target'] = hostname
444 | data['access'] = 'public'
445 | utils.fmt_output(data)
446 |
447 |
448 | def check_azure_databases(names, nameserver, threads, nameserverfile=False):
449 | """
450 | Checks for Azure Databases
451 | """
452 | print("[+] Checking for Azure Databases")
453 | # Start a counter to report on elapsed time
454 | start_time = utils.start_timer()
455 |
456 | # Initialize the list of domain names to look up
457 | candidates = [name + '.' + DATABASE_URL for name in names]
458 |
459 | # Azure databases use DNS sub-domains. If it resolves, it is registered.
460 | utils.fast_dns_lookup(candidates, nameserver,
461 | nameserverfile,
462 | callback=print_database_response,
463 | threads=threads)
464 |
465 | # Stop the timer
466 | utils.stop_timer(start_time)
467 |
468 |
469 | def print_vm_response(hostname):
470 | """
471 | This function is passed into the DNS brute force as a callback,
472 | so we can get real-time results.
473 | """
474 | data = {'platform': 'azure', 'msg': '', 'target': '', 'access': ''}
475 |
476 | data['msg'] = 'Registered Azure Virtual Machine DNS Name'
477 | data['target'] = hostname
478 | data['access'] = 'public'
479 | utils.fmt_output(data)
480 |
481 |
482 | def check_azure_vms(names, nameserver, threads, nameserverfile=False):
483 | """
484 | Checks for Azure Virtual Machines
485 | """
486 | print("[+] Checking for Azure Virtual Machines")
487 |
488 | # Start a counter to report on elapsed time
489 | start_time = utils.start_timer()
490 |
491 | # Pull the regions from a config file
492 | regions = azure_regions.REGIONS
493 |
494 | print(f"[*] Testing across {len(regions)} regions defined in the config file")
495 |
496 | for region in regions:
497 |
498 | # Initialize the list of domain names to look up
499 | candidates = [name + '.' + region + '.' + VM_URL for name in names]
500 |
501 | # Azure VMs use DNS sub-domains. If it resolves, it is registered.
502 | utils.fast_dns_lookup(candidates, nameserver,
503 | nameserverfile,
504 | callback=print_vm_response,
505 | threads=threads)
506 |
507 | # Stop the timer
508 | utils.stop_timer(start_time)
509 |
510 |
511 | def run_all(names, args):
512 | """
513 | Function is called by main program
514 | """
515 | print(BANNER)
516 |
517 | valid_accounts = check_storage_accounts(names, args.threads,
518 | args.nameserver, args.nameserverfile)
519 | if valid_accounts and not args.quickscan:
520 | brute_force_containers(valid_accounts, args.brute, args.threads)
521 |
522 | check_file_accounts(names, args.threads, args.nameserver, args.nameserverfile)
523 | check_queue_accounts(names, args.threads, args.nameserver, args.nameserverfile)
524 | check_table_accounts(names, args.threads, args.nameserver, args.nameserverfile)
525 | check_mgmt_accounts(names, args.threads, args.nameserver, args.nameserverfile)
526 | check_vault_accounts(names, args.threads, args.nameserver, args.nameserverfile)
527 |
528 | check_azure_websites(names, args.nameserver, args.threads, args.nameserverfile)
529 | check_azure_databases(names, args.nameserver, args.threads, args.nameserverfile)
530 | check_azure_vms(names, args.nameserver, args.threads, args.nameserverfile)
531 |
--------------------------------------------------------------------------------
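
Note: the container-listing URL format used by `brute_force_containers` above is easy to verify by hand. Here is a minimal, self-contained sketch using the same `requests` library; the storage account and container names are hypothetical:

```python
import requests

# Hypothetical storage account hostname and container name
account = "somecompanydev.blob.core.windows.net"
container = "backups"

# Same URL format brute_force_containers builds for each candidate
url = f"https://{account}/{container}/?restype=container&comp=list"

reply = requests.get(url)
if reply.status_code == 200:
    # A 200 means the container listing is publicly readable
    print(f"OPEN AZURE CONTAINER: {url}")
else:
    print(f"{reply.status_code}: {reply.reason}")
```
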
/enum_tools/azure_regions.py:
--------------------------------------------------------------------------------
1 | """
2 | File used to track the DNS regions for Azure resources.
3 | """
4 |
5 | # Some enumeration tasks will need to go through the complete list of
6 | # possible DNS names for each region. You may want to modify this file to
7 | # use the regions meaningful to you.
8 | #
9 | # Whatever is listed in the last instance of 'REGIONS' below is what the tool
10 | # will use.
11 |
12 |
13 | # Here is the list I get when running `az account list-locations` with the
14 | # Azure CLI:
15 | REGIONS = ['eastasia', 'southeastasia', 'centralus', 'eastus', 'eastus2',
16 | 'westus', 'northcentralus', 'southcentralus', 'northeurope',
17 | 'westeurope', 'japanwest', 'japaneast', 'brazilsouth',
18 | 'australiaeast', 'australiasoutheast', 'southindia', 'centralindia',
19 | 'westindia', 'canadacentral', 'canadaeast', 'uksouth', 'ukwest',
20 | 'westcentralus', 'westus2', 'koreacentral', 'koreasouth',
21 | 'francecentral', 'francesouth', 'australiacentral',
22 | 'australiacentral2', 'southafricanorth', 'southafricawest']
23 |
24 |
25 | # And here I am limiting the search by overwriting this variable:
26 | REGIONS = ['eastus', ]
27 |
--------------------------------------------------------------------------------
/enum_tools/fuzz.txt:
--------------------------------------------------------------------------------
1 | 0
2 | 001
3 | 002
4 | 003
5 | 01
6 | 02
7 | 03
8 | 1
9 | 2
10 | 2014
11 | 2015
12 | 2016
13 | 2017
14 | 2018
15 | 2019
16 | 2020
17 | 2021
18 | 2022
19 | 2023
20 | 2024
21 | 2025
22 | 3
23 | 4
24 | 5
25 | 6
26 | 7
27 | 8
28 | 9
29 | access-logs
30 | access.logs
31 | accounting
32 | admin
33 | administrator
34 | ae
35 | alpha
36 | amazon
37 | analytics
38 | android
39 | api
40 | app
41 | appengine
42 | appspot
43 | appspot.com
44 | archive
45 | artifacts
46 | assets
47 | attachments
48 | audit
49 | audit-logs
50 | aws
51 | aws-billing
52 | aws-logs
53 | aws.billing
54 | aws.logs
55 | azure
56 | azure-logs
57 | backup
58 | backups
59 | bak
60 | bamboo
61 | beta
62 | betas
63 | bigquery
64 | bigtable
65 | billing
66 | blob
67 | blog
68 | bucket
69 | build
70 | builds
71 | cache
72 | cdn
73 | ce
74 | central
75 | centralus
76 | cf
77 | chef
78 | client
79 | cloud
80 | cloudfunction
81 | club
82 | cluster
83 | com
84 | com.au
85 | common
86 | composer
87 | compute
88 | computeengine
89 | conf
90 | confidential
91 | config
92 | configuration
93 | consultants
94 | contact
95 | container
96 | content
97 | core
98 | corp
99 | corporate
100 | customer
101 | data
102 | data-private
103 | data-public
104 | data.private
105 | data.public
106 | database
107 | dataflow
108 | dataproc
109 | datastore
110 | db
111 | debug
112 | demo
113 | dev
114 | developer
115 | developers
116 | development
117 | devops
118 | directory
119 | discount
120 | dist
121 | dl
122 | dns
123 | docker
124 | docs
125 | download
126 | downloads
127 | dr
128 | ec2
129 | elastic
130 | emails
131 | endpoints
132 | es
133 | events
134 | exe
135 | export
136 | files
137 | fileshare
138 | filestorage
139 | filestore
140 | finance
141 | firebase
142 | firestore
143 | functions
144 | gateway
145 | gcp
146 | gcp-logs
147 | gcplogs
148 | git
149 | github
150 | gitlab
151 | gke
152 | graphite
153 | graphql
154 | gs
155 | gw
156 | help
157 | hidden
158 | hr
159 | hub
160 | iaas
161 | iam
162 | images
163 | img
164 | infra
165 | internal
166 | internal-dist
167 | internal-repo
168 | internal-tools
169 | internal.dist
170 | internal.repo
171 | ios
172 | iot
173 | it
174 | jenkins
175 | jira
176 | js
177 | k8s
178 | key
179 | keys
180 | kube
181 | kubeengine
182 | kubernetes
183 | kubernetesengine
184 | landing
185 | ldap
186 | loadbalancer
187 | logs
188 | logstash
189 | mail
190 | main
191 | manuals
192 | mattermost
193 | media
194 | memorystore
195 | mercurial
196 | ml
197 | mobile
198 | monitoring
199 | my
200 | mysql
201 | net
202 | northcentralus
203 | ops
204 | oracle
205 | org
206 | paas
207 | packages
208 | panel
209 | passwords
210 | photos
211 | pics
212 | pictures
213 | postgres
214 | pre-prod
215 | preprod
216 | presentations
217 | preview
218 | private
219 | pro
220 | processed
221 | prod
222 | product
223 | productcontent
224 | production
225 | products
226 | project
227 | projects
228 | psql
229 | public
230 | pubsub
231 | qa
232 | repo
233 | reports
234 | resources
235 | root
236 | rtdb
237 | s3
238 | saas
239 | screenshots
240 | scripts
241 | sec
242 | secret
243 | secrets
244 | secure
245 | security
246 | service
247 | services
248 | share
249 | shared
250 | shop
251 | site
252 | sitemaps
253 | slack
254 | snapshots
255 | source
256 | source-code
257 | spanner
258 | splunk
259 | sql
260 | sql-logs
261 | src
262 | ssh
263 | stackdriver
264 | stage
265 | staging
266 | static
267 | stats
268 | storage
269 | storageaccount
270 | store
271 | subversion
272 | support
273 | svc
274 | svn
275 | syslog
276 | tasks
277 | teamcity
278 | temp
279 | templates
280 | terraform
281 | test
282 | themes
283 | tmp
284 | tmp-logs
285 | tmp.logs
286 | trace
287 | traffic
288 | training
289 | travis
290 | troposphere
291 | uploads
292 | useast
293 | useast2
294 | userfiles
295 | userpictures
296 | users
297 | ux
298 | videos
299 | vm
300 | web
301 | website
302 | westcentralus
303 | westus
304 | westus2
305 | wp
306 | www
307 |
--------------------------------------------------------------------------------
/enum_tools/gcp_checks.py:
--------------------------------------------------------------------------------
1 | """
2 | Google-specific checks. Part of the cloud_enum package available at
3 | github.com/initstring/cloud_enum
4 | """
5 |
6 | from enum_tools import utils
7 | from enum_tools import gcp_regions
8 |
9 | BANNER = '''
10 | ++++++++++++++++++++++++++
11 | google checks
12 | ++++++++++++++++++++++++++
13 | '''
14 |
15 | # Known GCP domain names
16 | GCP_URL = 'storage.googleapis.com'
17 | FBRTDB_URL = 'firebaseio.com'
18 | APPSPOT_URL = 'appspot.com'
19 | FUNC_URL = 'cloudfunctions.net'
20 | FBAPP_URL = 'firebaseapp.com'
21 |
22 | # Hacky, I know. Used to store project/region combos that report at least
23 | # one cloud function, to brute force later on
24 | HAS_FUNCS = []
25 |
26 |
27 | def print_bucket_response(reply):
28 | """
29 | Parses the HTTP reply of a brute-force attempt
30 |
31 | This function is passed into the class object so we can view results
32 | in real-time.
33 | """
34 | data = {'platform': 'gcp', 'msg': '', 'target': '', 'access': ''}
35 |
36 | if reply.status_code == 404:
37 | pass
38 | elif reply.status_code == 200:
39 | data['msg'] = 'OPEN GOOGLE BUCKET'
40 | data['target'] = reply.url
41 | data['access'] = 'public'
42 | utils.fmt_output(data)
43 | utils.list_bucket_contents(reply.url + '/')
44 | elif reply.status_code == 403:
45 | data['msg'] = 'Protected Google Bucket'
46 | data['target'] = reply.url
47 | data['access'] = 'protected'
48 | utils.fmt_output(data)
49 | else:
50 | print(f" Unknown status codes being received from {reply.url}:\n"
51 |               f"    {reply.status_code}: {reply.reason}")
52 |
53 |
54 | def check_gcp_buckets(names, threads):
55 | """
56 | Checks for open and restricted Google Cloud buckets
57 | """
58 | print("[+] Checking for Google buckets")
59 |
60 | # Start a counter to report on elapsed time
61 | start_time = utils.start_timer()
62 |
63 | # Initialize the list of correctly formatted urls
64 | candidates = []
65 |
66 |     # Take each mutated keyword and craft a url with the correct format
67 | for name in names:
68 | candidates.append(f'{GCP_URL}/{name}')
69 |
70 | # Send the valid names to the batch HTTP processor
71 | utils.get_url_batch(candidates, use_ssl=False,
72 | callback=print_bucket_response,
73 | threads=threads)
74 |
75 |     # Stop the timer
76 | utils.stop_timer(start_time)
77 |
78 |
79 | def print_fbrtdb_response(reply):
80 | """
81 | Parses the HTTP reply of a brute-force attempt
82 |
83 | This function is passed into the class object so we can view results
84 | in real-time.
85 | """
86 | data = {'platform': 'gcp', 'msg': '', 'target': '', 'access': ''}
87 |
88 | if reply.status_code == 404:
89 | pass
90 | elif reply.status_code == 200:
91 | data['msg'] = 'OPEN GOOGLE FIREBASE RTDB'
92 | data['target'] = reply.url
93 | data['access'] = 'public'
94 | utils.fmt_output(data)
95 | elif reply.status_code == 401:
96 | data['msg'] = 'Protected Google Firebase RTDB'
97 | data['target'] = reply.url
98 | data['access'] = 'protected'
99 | utils.fmt_output(data)
100 | elif reply.status_code == 402:
101 | data['msg'] = 'Payment required on Google Firebase RTDB'
102 | data['target'] = reply.url
103 | data['access'] = 'disabled'
104 | utils.fmt_output(data)
105 | elif reply.status_code == 423:
106 | data['msg'] = 'The Firebase database has been deactivated.'
107 | data['target'] = reply.url
108 | data['access'] = 'disabled'
109 | utils.fmt_output(data)
110 | else:
111 | print(f" Unknown status codes being received from {reply.url}:\n"
112 |               f"    {reply.status_code}: {reply.reason}")
113 |
114 |
115 | def check_fbrtdb(names, threads):
116 | """
117 | Checks for Google Firebase RTDB
118 | """
119 | print("[+] Checking for Google Firebase Realtime Databases")
120 |
121 | # Start a counter to report on elapsed time
122 | start_time = utils.start_timer()
123 |
124 | # Initialize the list of correctly formatted urls
125 | candidates = []
126 |
127 |     # Take each mutated keyword and craft a url with the correct format
128 |     for name in names:
129 |         # Firebase RTDB names cannot include a period. We'll exclude
130 |         # those from the global candidates list
131 | if '.' not in name:
132 | candidates.append(f'{name}.{FBRTDB_URL}/.json')
133 |
134 | # Send the valid names to the batch HTTP processor
135 | utils.get_url_batch(candidates, use_ssl=True,
136 | callback=print_fbrtdb_response,
137 | threads=threads,
138 | redir=False)
139 |
140 |     # Stop the timer
141 | utils.stop_timer(start_time)
142 |
143 |
144 | def print_fbapp_response(reply):
145 | """
146 | Parses the HTTP reply of a brute-force attempt
147 |
148 | This function is passed into the class object so we can view results
149 | in real-time.
150 | """
151 | data = {'platform': 'gcp', 'msg': '', 'target': '', 'access': ''}
152 |
153 | if reply.status_code == 404:
154 | pass
155 | elif reply.status_code == 200:
156 | data['msg'] = 'OPEN GOOGLE FIREBASE APP'
157 | data['target'] = reply.url
158 | data['access'] = 'public'
159 | utils.fmt_output(data)
160 | else:
161 | print(f" Unknown status codes being received from {reply.url}:\n"
162 |               f"    {reply.status_code}: {reply.reason}")
163 |
164 | def check_fbapp(names, threads):
165 | """
166 | Checks for Google Firebase Applications
167 | """
168 | print("[+] Checking for Google Firebase Applications")
169 |
170 | # Start a counter to report on elapsed time
171 | start_time = utils.start_timer()
172 |
173 | # Initialize the list of correctly formatted urls
174 | candidates = []
175 |
176 |     # Take each mutated keyword and craft a url with the correct format
177 |     for name in names:
178 |         # Firebase App names cannot include a period. We'll exclude
179 |         # those from the global candidates list
180 | if '.' not in name:
181 | candidates.append(f'{name}.{FBAPP_URL}')
182 |
183 | # Send the valid names to the batch HTTP processor
184 | utils.get_url_batch(candidates, use_ssl=True,
185 | callback=print_fbapp_response,
186 | threads=threads,
187 | redir=False)
188 |
189 |     # Stop the timer
190 | utils.stop_timer(start_time)
191 |
192 | def print_appspot_response(reply):
193 | """
194 | Parses the HTTP reply of a brute-force attempt
195 |
196 | This function is passed into the class object so we can view results
197 | in real-time.
198 | """
199 | data = {'platform': 'gcp', 'msg': '', 'target': '', 'access': ''}
200 |
201 | if reply.status_code == 404:
202 | pass
203 |     elif str(reply.status_code)[0] == '5':
204 | data['msg'] = 'Google App Engine app with a 50x error'
205 | data['target'] = reply.url
206 | data['access'] = 'public'
207 | utils.fmt_output(data)
208 | elif reply.status_code in (200, 302, 404):
209 | if 'accounts.google.com' in reply.url:
210 | data['msg'] = 'Protected Google App Engine app'
211 | data['target'] = reply.history[0].url
212 | data['access'] = 'protected'
213 | utils.fmt_output(data)
214 | else:
215 | data['msg'] = 'Open Google App Engine app'
216 | data['target'] = reply.url
217 | data['access'] = 'public'
218 | utils.fmt_output(data)
219 | else:
220 | print(f" Unknown status codes being received from {reply.url}:\n"
221 |               f"    {reply.status_code}: {reply.reason}")
222 |
223 |
224 | def check_appspot(names, threads):
225 | """
226 | Checks for Google App Engine sites running on appspot.com
227 | """
228 | print("[+] Checking for Google App Engine apps")
229 |
230 | # Start a counter to report on elapsed time
231 | start_time = utils.start_timer()
232 |
233 | # Initialize the list of correctly formatted urls
234 | candidates = []
235 |
236 |     # Take each mutated keyword and craft a url with the correct format
237 |     for name in names:
238 |         # App Engine project names cannot include a period. We'll exclude
239 |         # those from the global candidates list
240 | if '.' not in name:
241 | candidates.append(f'{name}.{APPSPOT_URL}')
242 |
243 | # Send the valid names to the batch HTTP processor
244 | utils.get_url_batch(candidates, use_ssl=False,
245 | callback=print_appspot_response,
246 | threads=threads)
247 |
248 |     # Stop the timer
249 | utils.stop_timer(start_time)
250 |
251 |
252 | def print_functions_response1(reply):
253 | """
254 |     Parses the HTTP reply of the initial Cloud Functions check
255 |
256 | This function is passed into the class object so we can view results
257 | in real-time.
258 | """
259 | data = {'platform': 'gcp', 'msg': '', 'target': '', 'access': ''}
260 |
261 | if reply.status_code == 404:
262 | pass
263 | elif reply.status_code == 302:
264 | data['msg'] = 'Contains at least 1 Cloud Function'
265 | data['target'] = reply.url
266 | data['access'] = 'public'
267 | utils.fmt_output(data)
268 | HAS_FUNCS.append(reply.url)
269 | else:
270 | print(f" Unknown status codes being received from {reply.url}:\n"
271 |               f"    {reply.status_code}: {reply.reason}")
272 |
273 |
274 | def print_functions_response2(reply):
275 | """
276 | Parses the HTTP reply from the secondary, brute-force Cloud Functions check
277 |
278 | This function is passed into the class object so we can view results
279 | in real-time.
280 | """
281 | data = {'platform': 'gcp', 'msg': '', 'target': '', 'access': ''}
282 |
283 | if 'accounts.google.com/ServiceLogin' in reply.url:
284 | pass
285 | elif reply.status_code in (403, 401):
286 | data['msg'] = 'Auth required Cloud Function'
287 | data['target'] = reply.url
288 | data['access'] = 'protected'
289 | utils.fmt_output(data)
290 | elif reply.status_code == 405:
291 | data['msg'] = 'UNAUTHENTICATED Cloud Function (POST-Only)'
292 | data['target'] = reply.url
293 | data['access'] = 'public'
294 | utils.fmt_output(data)
295 | elif reply.status_code in (200, 404):
296 | data['msg'] = 'UNAUTHENTICATED Cloud Function (GET-OK)'
297 | data['target'] = reply.url
298 | data['access'] = 'public'
299 | utils.fmt_output(data)
300 | else:
301 | print(f" Unknown status codes being received from {reply.url}:\n"
302 |               f"    {reply.status_code}: {reply.reason}")
303 |
304 |
305 | def check_functions(names, brute_list, quickscan, threads):
306 | """
307 | Checks for Google Cloud Functions running on cloudfunctions.net
308 |
309 | This is a two-part process. First, we want to find region/project combos
310 | that have existing Cloud Functions. The URL for a function looks like this:
311 | https://[ZONE]-[PROJECT-ID].cloudfunctions.net/[FUNCTION-NAME]
312 |
313 | We look for a 302 in [ZONE]-[PROJECT-ID].cloudfunctions.net. That means
314 | there are some functions defined in that region. Then, we brute force a list
315 | of possible function names there.
316 |
317 | See gcp_regions.py to define which regions to check. The tool currently
318 | defaults to only 1 region, so you should really modify it for best results.
319 | """
320 | print("[+] Checking for project/zones with Google Cloud Functions.")
321 |
322 | # Start a counter to report on elapsed time
323 | start_time = utils.start_timer()
324 |
325 | # Initialize the list of correctly formatted urls
326 | candidates = []
327 |
328 | # Pull the regions from a config file
329 | regions = gcp_regions.REGIONS
330 |
331 | print(f"[*] Testing across {len(regions)} regions defined in the config file")
332 |
333 |     # Take each mutated keyword and craft a URL with the correct format
334 | for region in regions:
335 | candidates += [region + '-' + name + '.' + FUNC_URL for name in names]
336 |
337 | # Send the valid names to the batch HTTP processor
338 | utils.get_url_batch(candidates, use_ssl=False,
339 | callback=print_functions_response1,
340 | threads=threads,
341 | redir=False)
342 |
343 |     # Return from the function if we have not found any valid combos
344 | if not HAS_FUNCS:
345 | utils.stop_timer(start_time)
346 | return
347 |
348 | # Also bail out if doing a quick scan
349 | if quickscan:
350 | return
351 |
352 | # If we did find something, we'll use the brute list. This will allow people
353 | # to provide a separate fuzzing list if they choose.
354 | print(f"[*] Brute-forcing function names in {len(HAS_FUNCS)} project/region combos")
355 |
356 | # Load brute list in memory, based on allowed chars/etc
357 | brute_strings = utils.get_brute(brute_list)
358 |
359 | # The global was built in a previous function. We only want to brute force
360 | # project/region combos that we know have existing functions defined
361 | for func in HAS_FUNCS:
362 | print(f"[*] Brute-forcing {len(brute_strings)} function names in {func}")
363 | # Initialize the list of initial URLs to check. Strip out the HTTP
364 | # protocol first, as that is handled in the utility
365 | func = func.replace("http://", "")
366 |
367 | # Noticed weird behaviour with functions when a slash is not appended.
368 | # Works for some, but not others. However, appending a slash seems to
369 | # get consistent results. Might need further validation.
370 | candidates = [func + brute + '/' for brute in brute_strings]
371 |
372 | # Send the valid names to the batch HTTP processor
373 | utils.get_url_batch(candidates, use_ssl=False,
374 | callback=print_functions_response2,
375 | threads=threads)
376 |
377 |     # Stop the timer
378 | utils.stop_timer(start_time)
379 |
380 |
381 | def run_all(names, args):
382 | """
383 | Function is called by main program
384 | """
385 | print(BANNER)
386 |
387 | check_gcp_buckets(names, args.threads)
388 | check_fbrtdb(names, args.threads)
389 | check_appspot(names, args.threads)
390 | check_functions(names, args.brute, args.quickscan, args.threads)
391 |
--------------------------------------------------------------------------------
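
A minimal sketch of the two-phase enumeration that check_functions() performs above, with hypothetical keyword, region, and wordlist values; FUNC_URL is assumed to mirror the module constant implied by the cloudfunctions.net format in the docstring:

    # Phase 1: one candidate hostname per region/project combo. A 302
    # response means the combo hosts at least one Cloud Function.
    FUNC_URL = 'cloudfunctions.net'               # assumed module constant
    names = ['somecompany', 'somecompany-dev']    # hypothetical mutations
    regions = ['us-central1']                     # from gcp_regions.REGIONS
    candidates = [f'{region}-{name}.{FUNC_URL}'
                  for region in regions for name in names]
    # -> ['us-central1-somecompany.cloudfunctions.net',
    #     'us-central1-somecompany-dev.cloudfunctions.net']

    # Phase 2: brute-force function names under each responsive combo,
    # keeping the trailing slash the code above found necessary.
    brute_strings = ['api', 'backup', 'test']     # hypothetical wordlist
    for host in candidates:  # the tool only walks hosts saved in HAS_FUNCS
        urls = [f'{host}/{brute}/' for brute in brute_strings]
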
/enum_tools/gcp_regions.py:
--------------------------------------------------------------------------------
1 | """
2 | File used to track the DNS regions for GCP resources.
3 | """
4 |
5 | # Some enumeration tasks will need to go through the complete list of
6 | # possible DNS names for each region. You may want to modify this file to
7 | # use the regions meaningful to you.
8 | #
9 | # Whatever is listed in the last instance of 'REGIONS' below is what the tool
10 | # will use.
11 |
12 |
13 | # Here is the list I get when running `gcloud functions regions list`
14 | REGIONS = ['us-central1', 'us-east1', 'us-east4', 'us-west2', 'us-west3',
15 | 'us-west4', 'europe-west1', 'europe-west2', 'europe-west3',
16 | 'europe-west6', 'asia-east2', 'asia-northeast1', 'asia-northeast2',
17 | 'asia-northeast3', 'asia-south1', 'asia-southeast2',
18 | 'northamerica-northeast1', 'southamerica-east1',
19 | 'australia-southeast1']
20 |
21 |
22 | # And here I am limiting the search by overwriting this variable:
23 | REGIONS = ['us-central1', ]
24 |
--------------------------------------------------------------------------------
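
Only the final assignment to REGIONS survives import, and each extra region multiplies the request count of the phase-1 Cloud Functions sweep. A quick sanity check, assuming the file as shipped:

    from enum_tools import gcp_regions

    # Only the last assignment counts: the 19-region list above is discarded.
    print(gcp_regions.REGIONS)              # ['us-central1']

    # check_functions() builds len(REGIONS) * len(names) phase-1 candidates,
    # so 500 mutated names mean 500 requests here, or 9500 against the
    # full 19-region list.
    print(500 * len(gcp_regions.REGIONS))   # 500
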
/enum_tools/utils.py:
--------------------------------------------------------------------------------
1 | """
2 | Helper functions for network requests, etc
3 | """
4 |
5 | import time
6 | import sys
7 | import datetime
8 | import re
9 | import csv
10 | import json
11 | import ipaddress
12 | from multiprocessing.dummy import Pool as ThreadPool
13 | from functools import partial
14 | from urllib.parse import urlparse
15 | try:
16 | import requests
17 | import dns
18 | import dns.resolver
19 | from concurrent.futures import ThreadPoolExecutor
20 | from requests_futures.sessions import FuturesSession
21 |     from concurrent.futures import TimeoutError
22 | except ImportError:
23 |     print("[!] Please run: pip install -r requirements.txt")
24 | sys.exit()
25 |
26 | LOGFILE = False
27 | LOGFILE_FMT = ''
28 |
29 |
30 | def init_logfile(logfile, fmt):
31 | """
32 | Initialize the global logfile if specified as a user-supplied argument
33 | """
34 | if logfile:
35 | global LOGFILE
36 | LOGFILE = logfile
37 |
38 | global LOGFILE_FMT
39 | LOGFILE_FMT = fmt
40 |
41 | now = datetime.datetime.now().strftime("%d/%m/%Y %H:%M:%S")
42 | with open(logfile, 'a', encoding='utf-8') as log_writer:
43 | log_writer.write(f"\n\n#### CLOUD_ENUM {now} ####\n")
44 |
45 |
46 | def is_valid_domain(domain):
47 | """
48 | Checks if the domain has a valid format and length
49 | """
50 | # Check for domain total length
51 | if len(domain) > 253: # According to DNS specifications
52 | return False
53 |
54 | # Check each label in the domain
55 | for label in domain.split('.'):
56 | # Each label should be between 1 and 63 characters long
57 | if not (1 <= len(label) <= 63):
58 | return False
59 |
60 | return True
61 |
62 |
63 | def get_url_batch(url_list, use_ssl=False, callback='', threads=5, redir=True):
64 | """
65 | Processes a list of URLs, sending the results back to the calling
66 | function in real-time via the `callback` parameter
67 | """
68 |
69 | # Start a counter for a status message
70 | tick = {}
71 | tick['total'] = len(url_list)
72 | tick['current'] = 0
73 |
74 | # Filter out invalid URLs
75 | url_list = [url for url in url_list if is_valid_domain(url)]
76 |
77 | # Break the url list into smaller lists based on thread size
78 | queue = [url_list[x:x+threads] for x in range(0, len(url_list), threads)]
79 |
80 | # Define the protocol
81 | if use_ssl:
82 | proto = 'https://'
83 | else:
84 | proto = 'http://'
85 |
86 | # Using the async requests-futures module, work in batches based on
87 | # the 'queue' list created above. Call each URL, sending the results
88 | # back to the callback function.
89 | for batch in queue:
90 | # I used to initialize the session object outside of this loop, BUT
91 | # there were a lot of errors that looked related to pool cleanup not
92 | # happening. Putting it in here fixes the issue.
93 | # There is an unresolved discussion here:
94 | # https://github.com/ross/requests-futures/issues/20
95 | session = FuturesSession(executor=ThreadPoolExecutor(max_workers=threads+5))
96 | batch_pending = {}
97 | batch_results = {}
98 |
99 | # First, grab the pending async request and store it in a dict
100 | for url in batch:
101 | batch_pending[url] = session.get(proto + url, allow_redirects=redir)
102 |
103 | # Then, grab all the results from the queue.
104 | # This is where we need to catch exceptions that occur with large
105 | # fuzz lists and dodgy connections.
106 | for url in batch_pending:
107 | try:
108 | # Timeout is set due to observation of some large jobs simply
109 | # hanging forever with no exception raised.
110 | batch_results[url] = batch_pending[url].result(timeout=30)
111 | except requests.exceptions.ConnectionError as error_msg:
112 | print(f" [!] Connection error on {url}:")
113 | print(error_msg)
114 | except TimeoutError:
115 | print(f" [!] Timeout on {url}. Investigate if there are"
116 | " many of these")
117 |
118 | # Now, send all the results to the callback function for analysis
119 | # We need a way to stop processing unnecessary brute-forces, so the
120 | # callback may tell us to bail out.
121 | for url in batch_results:
122 | check = callback(batch_results[url])
123 | if check == 'breakout':
124 | return
125 |
126 | # Refresh a status message
127 | tick['current'] += threads
128 | sys.stdout.flush()
129 | sys.stdout.write(f" {tick['current']}/{tick['total']} complete...")
130 | sys.stdout.write('\r')
131 |
132 | # Clear the status message
133 | sys.stdout.write(' \r')
134 |
135 | def read_nameservers(file_path):
136 | """
137 | Reads nameservers from a given file.
138 | Each line in the file should contain one nameserver IP address.
139 | Lines starting with '#' will be ignored as comments.
140 | """
141 | try:
142 | with open(file_path, 'r') as file:
143 | nameservers = [line.strip() for line in file if line.strip() and not line.startswith('#')]
144 | if not nameservers:
145 | raise ValueError("Nameserver file is empty or only contains comments")
146 | return nameservers
147 | except FileNotFoundError:
148 | print(f"Error: File '{file_path}' not found.")
149 | exit(1)
150 | except ValueError as e:
151 | print(e)
152 | exit(1)
153 |
154 | def is_valid_ip(address):  # True if 'address' parses as an IPv4/IPv6 literal
155 | try:
156 | ipaddress.ip_address(address)
157 | return True
158 | except ValueError:
159 | return False
160 |
161 | def dns_lookup(nameserver, name):
162 | """
163 | This function performs the actual DNS lookup when called in a threadpool
164 | by the fast_dns_lookup function.
165 | """
166 | nameserverfile = False
167 | if not is_valid_ip(nameserver):
168 | nameserverfile = nameserver
169 |
170 | res = dns.resolver.Resolver()
171 | res.timeout = 10
172 | if nameserverfile:
173 | nameservers = read_nameservers(nameserverfile)
174 | res.nameservers = nameservers
175 | else:
176 | res.nameservers = [nameserver]
177 |
178 | try:
179 | res.query(name)
180 | # If no exception is thrown, return the valid name
181 | return name
182 | except dns.resolver.NXDOMAIN:
183 | return ''
184 | except dns.resolver.NoNameservers as exc_text:
185 | print(" [!] Error querying nameservers! This could be a problem.")
186 | print(" [!] If you're using a VPN, try setting --ns to your VPN's nameserver.")
187 | print(" [!] Bailing because you need to fix this")
188 | print(" [!] More Info:")
189 | print(exc_text)
190 | return '-#BREAKOUT_DNS_ERROR#-'
191 | except dns.exception.Timeout:
192 | print(f" [!] DNS Timeout on {name}. Investigate if there are many"
193 | " of these.")
194 | return ''
195 |
196 |
197 | def fast_dns_lookup(names, nameserver, nameserverfile, callback='', threads=5):
198 | """
199 | Helper function to resolve DNS names. Uses multithreading.
200 | """
201 | total = len(names)
202 | current = 0
203 | valid_names = []
204 |
205 | print(f"[*] Brute-forcing a list of {total} possible DNS names")
206 |
207 | # Filter out invalid domains
208 | names = [name for name in names if is_valid_domain(name)]
209 |
210 |     # Break the name list into smaller lists based on thread size
211 | queue = [names[x:x+threads] for x in range(0, len(names), threads)]
212 |
213 | for batch in queue:
214 | pool = ThreadPool(threads)
215 |
216 | # Because pool.map takes only a single function arg, we need to
217 | # define this partial so that each iteration uses the same ns
218 | if nameserverfile:
219 | dns_lookup_params = partial(dns_lookup, nameserverfile)
220 | else:
221 | dns_lookup_params = partial(dns_lookup, nameserver)
222 |
223 | results = pool.map(dns_lookup_params, batch)
224 |
225 | # We should now have the batch of results back, process them.
226 | for name in results:
227 | if name:
228 | if name == '-#BREAKOUT_DNS_ERROR#-':
229 | sys.exit()
230 | if callback:
231 | callback(name)
232 | valid_names.append(name)
233 |
234 | current += threads
235 |
236 | # Update the status message
237 | sys.stdout.flush()
238 | sys.stdout.write(f" {current}/{total} complete...")
239 | sys.stdout.write('\r')
240 | pool.close()
241 |
242 | # Clear the status message
243 | sys.stdout.write(' \r')
244 |
245 | return valid_names
246 |
247 |
248 | def list_bucket_contents(bucket):
249 | """
250 |     Prints a list of full URLs to each key in an open bucket
251 | """
252 |     key_regex = re.compile(r'<(?:Key|Name)>(.*?)</(?:Key|Name)>')
253 | reply = requests.get(bucket)
254 |
255 |     # Make a list of all the relative-path key names
256 | keys = re.findall(key_regex, reply.text)
257 |
258 | # Need to remove URL parameters before appending file names
259 | # from Azure buckets
260 | sub_regex = re.compile(r'(\?.*)')
261 | bucket = sub_regex.sub('', bucket)
262 |
263 | # Format them to full URLs and print to console
264 | if keys:
265 | print(" FILES:")
266 | for key in keys:
267 | url = bucket + key
268 | print(f" ->{url}")
269 | else:
270 | print(" ...empty bucket, so sad. :(")
271 |
272 |
273 | def fmt_output(data):
274 | """
275 | Handles the output - printing and logging based on a specified format
276 | """
277 | # ANSI escape sequences are set based on accessibility of target
278 |     # (basically, how public it is)
279 | bold = '\033[1m'
280 | end = '\033[0m'
281 |     colors = {'public': '\033[92m',     # green
282 |               'protected': '\033[33m',  # orange
283 |               'disabled': '\033[31m'}   # red
284 |     # Fall back to bold-only for unrecognized access levels, rather
285 |     # than hitting a NameError from an unassigned variable
286 |     ansi = bold + colors.get(data['access'], '')
287 |
288 | sys.stdout.write(' ' + ansi + data['msg'] + ': ' + data['target'] + end + '\n')
289 |
290 | if LOGFILE:
291 | with open(LOGFILE, 'a', encoding='utf-8') as log_writer:
292 | if LOGFILE_FMT == 'text':
293 | log_writer.write(f'{data["msg"]}: {data["target"]}\n')
294 | if LOGFILE_FMT == 'csv':
295 | writer = csv.DictWriter(log_writer, data.keys())
296 | writer.writerow(data)
297 | if LOGFILE_FMT == 'json':
298 | log_writer.write(json.dumps(data) + '\n')
299 |
300 |
301 | def get_brute(brute_file, mini=1, maxi=63, banned='[^a-z0-9_-]'):
302 | """
303 | Generates a list of brute-force words based on length and allowed chars
304 | """
305 | # Read the brute force file into memory
306 | with open(brute_file, encoding="utf8", errors="ignore") as infile:
307 | names = infile.read().splitlines()
308 |
309 |     # Clean up the names so they're usable for containers
310 | banned_chars = re.compile(banned)
311 | clean_names = []
312 | for name in names:
313 | name = name.lower()
314 | name = banned_chars.sub('', name)
315 | if maxi >= len(name) >= mini:
316 | if name not in clean_names:
317 | clean_names.append(name)
318 |
319 | return clean_names
320 |
321 |
322 | def start_timer():
323 | """
324 | Starts a timer for functions in main module
325 | """
326 | # Start a counter to report on elapsed time
327 | start_time = time.time()
328 | return start_time
329 |
330 |
331 | def stop_timer(start_time):
332 | """
333 | Stops timer and prints a status
334 | """
335 | # Stop the timer
336 | elapsed_time = time.time() - start_time
337 | formatted_time = time.strftime("%H:%M:%S", time.gmtime(elapsed_time))
338 |
339 | # Print some statistics
340 | print("")
341 | print(f" Elapsed time: {formatted_time}")
342 | print("")
343 |
--------------------------------------------------------------------------------
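
The checks modules all follow the same pattern with these helpers: build a candidate list, batch-request it, and report hits from a callback. A minimal end-to-end sketch, assuming the repo's enum_tools/fuzz.txt as the wordlist and a hypothetical keyword and host suffix:

    from enum_tools import utils

    def report_hit(reply):
        # Callback run by get_url_batch() for each completed request
        if reply.status_code == 200:
            utils.fmt_output({'platform': 'example', 'msg': 'Open resource',
                              'target': reply.url, 'access': 'public'})

    words = utils.get_brute('enum_tools/fuzz.txt')   # cleaned + deduplicated
    candidates = [f'somecompany-{w}.example-cloud-host.net' for w in words]

    timer = utils.start_timer()
    utils.get_url_batch(candidates, use_ssl=True, callback=report_hit,
                        threads=5)
    utils.stop_timer(timer)
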
/manpage/cloud_enum.1:
--------------------------------------------------------------------------------
1 | .\" Text automatically generated by txt2man
2 | .TH cloud_enum 1 "01 Apr 2022" "cloud_enum-0.7" "Multi-cloud open source intelligence tool"
3 | .SH NAME
4 | \fBcloud_enum \fP- enumerates public resources matching user requested keyword
5 | \fB
6 | .SH SYNOPSIS
7 | .nf
8 | .fam C
9 | cloud_enum [OPTIONS] [ARGS] \.\.\.
10 |
11 | .fam T
12 | .fi
13 | .fam T
14 | .fi
15 | .SH DESCRIPTION
16 | Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
17 | Currently enumerates the following:
18 | .PP
19 | .nf
20 | .fam C
21 | Amazon Web Services:
22 | Open / Protected S3 Buckets
23 | awsapps (WorkMail, WorkDocs, Connect, etc.)
24 |
25 | Microsoft Azure:
26 | Storage Accounts
27 | Open Blob Storage Containers
28 | Hosted Databases
29 | Virtual Machines
30 | Web Apps
31 |
32 | Google Cloud Platform:
33 | Open / Protected GCP Buckets
34 | Open / Protected Firebase Realtime Databases
35 | Google App Engine sites
36 | Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names)
37 |
38 | .fam T
39 | .fi
40 | .SH OPTIONS
41 | .TP
42 | .B
43 | \fB-h\fP, \fB--help\fP
44 | Show this help message and exit.
45 | .TP
46 | .B
47 | \fB-k\fP KEYWORD, \fB--keyword\fP KEYWORD
48 | Keyword. Can use argument multiple times.
49 | .TP
50 | .B
51 | \fB-kf\fP KEYFILE, \fB--keyfile\fP KEYFILE
52 | Input file with a single keyword per line.
53 | .TP
54 | .B
55 | \fB-m\fP MUTATIONS, \fB--mutations\fP MUTATIONS
56 | Mutations. Default: /usr/lib/cloud-enum/enum_tools/fuzz.txt.
57 | .TP
58 | .B
59 | \fB-b\fP BRUTE, \fB--brute\fP BRUTE
60 | List to brute-force Azure container names. Default: /usr/lib/cloud-enum/enum_tools/fuzz.txt.
61 | .TP
62 | .B
63 | \fB-t\fP THREADS, \fB--threads\fP THREADS
64 | Threads for HTTP brute-force. Default = 5.
65 | .TP
66 | .B
67 | \fB-ns\fP NAMESERVER, \fB--nameserver\fP NAMESERVER
68 | DNS server to use in brute-force.
69 | .TP
70 | .B
71 | \fB-l\fP LOGFILE, \fB--logfile\fP LOGFILE
72 | Will APPEND found items to specified file.
73 | .TP
74 | .B
75 | \fB-f\fP FORMAT, \fB--format\fP FORMAT
76 | Format for log file (text, json, csv; defaults to text).
77 | .TP
78 | .B
79 | \fB--disable-aws\fP
80 | Disable Amazon checks.
81 | .TP
82 | .B
83 | \fB--disable-azure\fP
84 | Disable Azure checks.
85 | .TP
86 | .B
87 | \fB--disable-gcp\fP
88 | Disable Google checks.
89 | .TP
90 | .B
91 | \fB-qs\fP, \fB--quickscan\fP
92 | Disable all mutations and second-level scan.
93 | .SH EXAMPLES
94 | cloud_enum \fB-k\fP keyword
95 | .PP
96 | cloud_enum \fB-k\fP keyword \fB-t\fP 10
97 | .PP
98 | cloud_enum \fB-k\fP somecompany \fB-k\fP somecompany.io \fB-k\fP blockchaindoohickey
99 | .SH AUTHOR
100 | Written by initstring
101 | .PP
102 | This manual page was written by Guilherme de Paula Xavier Segundo
103 | for the Debian project (but may be used by others).
104 |
--------------------------------------------------------------------------------
/manpage/cloud_enum.txt:
--------------------------------------------------------------------------------
1 | NAME
2 | cloud_enum - enumerates public resources matching user requested keyword
3 |
4 | SYNOPSIS
5 | cloud_enum [OPTIONS] [ARGS] ...
6 |
7 | DESCRIPTION
8 | Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
9 | Currently enumerates the following:
10 |
11 | Amazon Web Services:
12 | Open / Protected S3 Buckets
13 | awsapps (WorkMail, WorkDocs, Connect, etc.)
14 |
15 | Microsoft Azure:
16 | Storage Accounts
17 | Open Blob Storage Containers
18 | Hosted Databases
19 | Virtual Machines
20 | Web Apps
21 |
22 |     Google Cloud Platform:
23 | Open / Protected GCP Buckets
24 | Open / Protected Firebase Realtime Databases
25 | Google App Engine sites
26 | Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names)
27 |
28 | OPTIONS
29 | -h, --help Show this help message and exit.
30 | -k KEYWORD, --keyword KEYWORD Keyword. Can use argument multiple times.
31 | -kf KEYFILE, --keyfile KEYFILE Input file with a single keyword per line.
32 | -m MUTATIONS, --mutations MUTATIONS Mutations. Default: /usr/lib/cloud-enum/enum_tools/fuzz.txt.
33 | -b BRUTE, --brute BRUTE List to brute-force Azure container names. Default: /usr/lib/cloud-enum/enum_tools/fuzz.txt.
34 | -t THREADS, --threads THREADS Threads for HTTP brute-force. Default = 5.
35 | -ns NAMESERVER, --nameserver NAMESERVER DNS server to use in brute-force.
36 | -l LOGFILE, --logfile LOGFILE Will APPEND found items to specified file.
37 |     -f FORMAT, --format FORMAT                 Format for log file (text, json, csv; defaults to text).
38 | --disable-aws Disable Amazon checks.
39 | --disable-azure Disable Azure checks.
40 | --disable-gcp Disable Google checks.
41 | -qs, --quickscan Disable all mutations and second-level scan.
42 |
43 | EXAMPLES
44 | cloud_enum -k keyword
45 |
46 | cloud_enum -k keyword -t 10
47 |
48 | cloud_enum -k somecompany -k somecompany.io -k blockchaindoohickey
49 |
50 | AUTHOR
51 | Written by initstring
52 |
53 | This manual page was written by Guilherme de Paula Xavier Segundo
54 | for the Debian project (but may be used by others).
55 |
--------------------------------------------------------------------------------
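
Per fmt_output() in enum_tools/utils.py, the -l/-f options append one entry per found item. A hypothetical GCP hit would land in the logfile as, for example:

    text: Open Google App Engine app: http://somecompany.appspot.com/
    json: {"platform": "gcp", "msg": "Open Google App Engine app", "target": "http://somecompany.appspot.com/", "access": "public"}
    csv:  gcp,Open Google App Engine app,http://somecompany.appspot.com/,public
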
/requirements.txt:
--------------------------------------------------------------------------------
1 | dnspython
2 | requests
3 | requests_futures
4 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 | setup(
4 | name='cloud_enum',
5 | description='Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.',
6 | author='initstring',
7 | url='https://github.com/initstring/cloud_enum',
8 | license='MIT',
9 | packages=[
10 | 'enum_tools'
11 | ],
12 | py_modules=[
13 | 'cloud_enum'
14 | ],
15 | install_requires=[
16 | 'dnspython',
17 | 'requests',
18 | 'requests_futures'
19 | ],
20 |     python_requires='>=3.6',
21 | entry_points={
22 | 'console_scripts': [
23 | 'cloud_enum = cloud_enum:main'
24 | ]
25 | }
26 | )
27 |
--------------------------------------------------------------------------------
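
Given the console_scripts entry point above, installing the package (for example with pip install . from the repo root) should expose the tool as a cloud_enum command, so manpage examples such as cloud_enum -k somecompany run without invoking cloud_enum.py directly.
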
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/initstring/cloud_enum/d3c292c2068802fc851252629da0e9d8540cd232/tests/__init__.py
--------------------------------------------------------------------------------
/tests/test_utils.py:
--------------------------------------------------------------------------------
1 | # This test obviously does nothing; it just sets up the framework
2 | def test1():
3 | assert 1 == 1
4 |
--------------------------------------------------------------------------------
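
The placeholder test above only exercises the pytest wiring; a sketch of more substantive tests against enum_tools/utils.py might look like this (wordlist contents are hypothetical, tmp_path is pytest's built-in fixture):

    from enum_tools import utils

    def test_is_valid_domain():
        # Labels must be 1-63 chars; total length is capped at 253
        assert utils.is_valid_domain('somecompany.appspot.com')
        assert not utils.is_valid_domain('a' * 64 + '.example.com')

    def test_get_brute(tmp_path):
        # get_brute() lowercases, strips banned chars, and deduplicates
        wordlist = tmp_path / 'words.txt'
        wordlist.write_text('Dev\ndev\nprod!\n')
        assert utils.get_brute(str(wordlist)) == ['dev', 'prod']
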