├── .gitignore
├── .gitmodules
├── README.md
├── datasette
│   ├── README.md
│   ├── build_targets.py
│   ├── fetch_targets.py
│   ├── load_targets.py
│   ├── metadata.json
│   ├── remove_targets.py
│   └── sql_data
│       ├── sql_queries.py
│       └── sql_views.py
├── docs
│   └── view.png
├── download.py
├── files
│   └── .gitkeep
├── requirements.txt
├── run.py
├── setup.sh
└── targets.csv
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.env
.vscode
__pycache__
files/*
venv
*.db
datasette/*.csv
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
[submodule "targets"]
	path = targets
	url = https://github.com/infosec-us-team/Immunefi-Bug-Bounty-Programs-Unofficial.git
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Immunefi-terminal

The only crypto bug bounty terminal you'll ever need.

1. **Use Datasette to browse all available bug bounties on Immunefi.**
2. Construct complex SQL queries against your personal bug bounty data warehouse.
3. Fetch updates from Immunefi for any project.
4. Find repositories and on-chain addresses you want to audit.
5. **Hassle-free download of any source code from any bug bounty program.**
6. Guaranteed compilation of on-chain targets with a *patched* crytic-compile.
7. Publish your data with Datasette.

![Example views](docs/view.png)

## Installation

**Manually:**

Create a new Python environment, e.g., `python3 -m venv venv`

Install all dependencies with `pip install -r requirements.txt`

Fetch the submodule data with `git submodule update --init --recursive`

**Automatically:**

With `setup.sh`:

`chmod +x setup.sh`

`./setup.sh`

Remember to activate your selected Python interpreter in the VS Code terminal.

## Running the Dashboard

Execute `python run.py` to initialize the database. This serves a dashboard at `127.0.0.1:8001`.

If `immunefi_data.db` is already present, `run.py` will look for new updates to insert.

You can inspect new updates in the `updates` table, or by checking `git diff` / `git log` inside the `targets` directory.

## Running the Downloader

`python download.py` accepts one of three arguments:

`python download.py [--bountyId <bountyId>] [--target <network>:<address>] [--csv /path/to/targets.csv]`

`bountyId`: Download all targets for a specific bountyId (from the SQLite database).

### `target` argument is broken due to changes in the Etherscan API

`target`: Download a single `<network>:<address>` target's source code (for convenience).
`csv`: Download from a CSV file containing targets (a single column listing `<network>:<address>` entries).

The script is integrated with `immunefi_data.db` for the bountyId argument. Find your bountyId in the UI and pass it as an argument to `download.py --bountyId <bountyId>`.

All files are saved to the `/files` directory. Slither (if installed) will run out of the box.
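You can also drive the downloader from Python. A minimal sketch (assuming the database has already been built; `example-project` is a placeholder bountyId, copy a real one from the UI):

```python
from download import get_targets, download_source

# Pull every stored target for one bounty, then fetch the source code.
targets = get_targets("example-project")
download_source(targets)
```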
## Development

Feel free to experiment with Datasette canned queries, views, and additional table creation. All SQL operations are read from the `sql_data` directory. If something goes terribly wrong, just delete the whole database and start from scratch. Rebuilding takes seconds (you'll lose your `updates` table, though).

Good resources for understanding Datasette (and SQL) better:

https://datasette.io/tutorials/explore

https://datasette.io/tutorials/learn-sql

## Acknowledgments

This project is possible thanks to:

https://github.com/simonw/datasette

https://github.com/crytic/crytic-compile

https://github.com/infosec-us-team/Immunefi-Bug-Bounty-Programs-Unofficial

## Similar Projects

https://github.com/JoranHonig/bh

https://github.com/tintinweb/bugbounty-companion

## End

I am for hire; contact me on Twitter. Protocol & blockchain engineering (dev / security / tools / OSS). Python, Solidity, JS, and TS focus (in that order).
--------------------------------------------------------------------------------
/datasette/README.md:
--------------------------------------------------------------------------------
# Datasette-related instructions

You should execute `run.py` from the root of the repository, but it's also possible to execute the scripts individually.

### CLI commands

The regular order of execution with `run.py` is: `fetch_targets.py` > `load_targets.py` > `build_targets.py`

However, you can also:

Run `fetch_targets.py` inside this directory to fetch data and initialize or update the database with Immunefi bounties. This script invokes `load_targets.py`.

Run `load_targets.py` to insert into or update the database with Immunefi bounties from the `targets/project` directory.
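The result is a plain SQLite file, so you can also query it directly outside Datasette. A minimal sketch (run from this directory once the database exists):

```python
import sqlite3

conn = sqlite3.connect("immunefi_data.db")

# Same idea as the TARGETS_NETWORK_COUNT canned query in metadata.json
for network, count in conn.execute(
    "SELECT network, COUNT(*) AS count FROM targets GROUP BY network ORDER BY count DESC"
):
    print(network, count)

conn.close()
```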
Run `datasette serve immunefi_data.db -m metadata.json` afterwards to access the Datasette UI (`run.py` does this for you).

You can run Datasette with additional settings (the default for `run.py`):

`datasette serve immunefi_data.db -m metadata.json --setting max_returned_rows 2000`

`metadata.json` contains the saved query data; feel free to add more queries as documented here - https://docs.datasette.io/en/stable/sql_queries.html#canned-queries
--------------------------------------------------------------------------------
/datasette/build_targets.py:
--------------------------------------------------------------------------------
import re
import sqlite3

# Network prefixes follow crytic-compile's conventions
# (note the "mainet" spelling, which crytic-compile expects).
SUPPORTED_NETWORK = {
    "mainet": "etherscan.io",
    "optim": "optimistic.etherscan.io",
    "goerli": "goerli.etherscan.io",
    "sepolia": "sepolia.etherscan.io",
    "tobalaba": "tobalaba.etherscan.io",
    "bsc": "bscscan.com",
    "testnet.bsc": "testnet.bscscan.com",
    "arbi": "arbiscan.io",
    "testnet.arbi": "testnet.arbiscan.io",
    "poly": "polygonscan.com",
    "mumbai": "testnet.polygonscan.com",
    "avax": "snowtrace.io",
    "testnet.avax": "testnet.snowtrace.io",
    "ftm": "ftmscan.com",
    "goerli.base": "goerli.basescan.org",
    "base": "basescan.org",
    "gno": "gnosisscan.io",
    "polyzk": "zkevm.polygonscan.com",
}

PATH_TO_DB = "immunefi_data.db"


def connect_db():
    conn = sqlite3.connect(PATH_TO_DB)
    cursor = conn.cursor()
    return conn, cursor


def extract_network_and_address(url):
    """
    Extracts the network name and smart contract address from the given URL
    using a regular expression, and builds the slither/crytic-compile target
    argument <network>:<address>.
    """
    eth_address_pattern = r"0x[a-fA-F0-9]{40}"

    addresses = re.findall(eth_address_pattern, url)

    if addresses:
        address = addresses[0]
        # Check longer domains first so subdomains (e.g. optimistic.etherscan.io)
        # are not shadowed by their parent domain (etherscan.io).
        for network_prefix, domain in sorted(
            SUPPORTED_NETWORK.items(), key=lambda item: len(item[1]), reverse=True
        ):
            if domain in url:
                return network_prefix, f"{network_prefix}:{address}"
        return "unknown", address
    else:
        return "unknown", url


def insert_into_targets(cursor, bounty_id, network, path, updatedDate):
    cursor.execute(
        """
        INSERT INTO targets (bountyId, network, target, updatedDate)
        VALUES (?, ?, ?, ?)
        """,
        (bounty_id, network, path, updatedDate),
    )


def build_targets_row(cursor, bountyId, url, updatedDate):
    network, path = extract_network_and_address(url)
    insert_into_targets(cursor, bountyId, network, path, updatedDate)
--------------------------------------------------------------------------------
/datasette/fetch_targets.py:
--------------------------------------------------------------------------------
import subprocess
import os


def execute_git_pull(projects_directory):
    os.chdir(projects_directory)

    result = subprocess.run(
        ["git", "pull", "origin", "main"], capture_output=True, text=True
    )

    if "Already up to date." not in result.stdout:
        print("Updates found.")
        print(result.stdout)
        execute_load_targets()
    else:
        print("No updates found.")


def execute_load_targets():
    os.chdir(datasette_directory)
    subprocess.run(["python3", "load_targets.py"])


# Get the directory of the current script
current_directory = os.path.dirname(os.path.abspath(__file__))

# Get the parent directory of the current directory
parent_directory = os.path.dirname(current_directory)

# Define the datasette and projects (submodule) directories
datasette_directory = current_directory
projects_directory = os.path.join(parent_directory, "targets")

# Execute the git pull command in the submodule directory
execute_git_pull(projects_directory)
--------------------------------------------------------------------------------
/datasette/load_targets.py:
--------------------------------------------------------------------------------
import json
import sqlite3
import os
import build_targets
import sql_data.sql_queries
import sql_data.sql_views

current_directory = os.path.dirname(os.path.abspath(__file__))
parent_directory = os.path.dirname(current_directory)
json_directory = os.path.join(parent_directory, "targets", "project")


def connect_db():
    """Connects to the SQLite database and creates tables if they don't exist."""
    conn = sqlite3.connect("immunefi_data.db")
    cursor = conn.cursor()

    cursor.execute(sql_data.sql_queries.CREATE_TABLE_BOUNTIES)
    cursor.execute(sql_data.sql_queries.CREATE_TABLE_REWARDS)
    cursor.execute(sql_data.sql_queries.CREATE_TABLE_ASSETS)
    cursor.execute(sql_data.sql_queries.CREATE_TABLE_IMPACTS)
    cursor.execute(sql_data.sql_queries.CREATE_TABLE_TAGS)
    cursor.execute(sql_data.sql_queries.CREATE_TABLE_TARGETS)
    cursor.execute(sql_data.sql_queries.CREATE_TABLE_UPDATES)

    return conn, cursor


def insert_or_update_data(cursor, bounty):
    cursor.execute(
        "SELECT updatedDate FROM bounties WHERE bountyId = ?", (bounty["id"],)
    )
    result = cursor.fetchone()

    if result:
        if result[0] < bounty["updatedDate"]:
            print("Updating:", bounty["id"])
            cursor.execute(
                """
                UPDATE bounties
                SET programOverview=?, prioritizedVulnerabilities=?, rewardsBody=?, outOfScopeAndRules=?, assetsBodyV2=?, project=?, maxBounty=?, launchDate=?, endDate=?, updatedDate=?, kyc=?
                WHERE bountyId=?
                """,
                (
                    bounty["programOverview"],
                    bounty["prioritizedVulnerabilities"],
                    bounty["rewardsBody"],
                    bounty["outOfScopeAndRules"],
                    bounty["assetsBodyV2"],
                    bounty["project"],
                    bounty["maxBounty"],
                    bounty["launchDate"],
                    bounty["endDate"],
                    bounty["updatedDate"],
                    bounty["kyc"],
                    bounty["id"],
                ),
            )
            cursor.execute("DELETE FROM updates WHERE bountyId = ?", (bounty["id"],))
            cursor.execute(
                """
                INSERT INTO updates (bountyId, updatedDate)
                VALUES (?, ?)
                """,
                (
                    bounty["id"],
                    bounty["updatedDate"],
                ),
            )
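            # updates.bountyId is the table's PRIMARY KEY, so the DELETE above
            # keeps this INSERT from colliding when a bounty changes again.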
            update_nested_data(cursor, bounty["id"], bounty)
    else:
        print("Inserting:", bounty["id"])
        cursor.execute(
            """
            INSERT INTO bounties (bountyId, programOverview, prioritizedVulnerabilities, rewardsBody, outOfScopeAndRules, assetsBodyV2, project, maxBounty, launchDate, endDate, updatedDate, kyc)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
            """,
            (
                bounty["id"],
                bounty["programOverview"],
                bounty["prioritizedVulnerabilities"],
                bounty["rewardsBody"],
                bounty["outOfScopeAndRules"],
                bounty["assetsBodyV2"],
                bounty["project"],
                bounty["maxBounty"],
                bounty["launchDate"],
                bounty["endDate"],
                bounty["updatedDate"],
                bounty["kyc"],
            ),
        )
        insert_nested_data(cursor, bounty["id"], bounty)


def insert_nested_data(cursor, bounty_id, bounty):
    if "rewards" in bounty:
        for item in bounty["rewards"]:
            cursor.execute(
                """
                INSERT INTO rewards (bountyId, assetType, level, payout, pocRequired)
                VALUES (?, ?, ?, ?, ?)
                """,
                (
                    bounty_id,
                    item.get("assetType"),
                    item.get("level"),
                    item.get("payout"),
                    item.get("pocRequired"),
                ),
            )
    if "assets" in bounty:
        for item in bounty["assets"]:
            cursor.execute(
                """
                INSERT INTO assets (bountyId, type, url, description, isPrimacyOfImpact)
                VALUES (?, ?, ?, ?, ?)
                """,
                (
                    bounty_id,
                    item.get("type"),
                    item.get("url"),
                    item.get("description"),
                    item.get("isPrimacyOfImpact"),
                ),
            )
            build_targets.build_targets_row(
                cursor, bounty_id, item.get("url"), bounty["updatedDate"]
            )
    if "impacts" in bounty:
        for item in bounty["impacts"]:
            cursor.execute(
                """
                INSERT INTO impacts (bountyId, title, type, severity)
                VALUES (?, ?, ?, ?)
                """,
                (
                    bounty_id,
                    item.get("title"),
                    item.get("type"),
                    item.get("severity"),
                ),
            )
    if "tags" in bounty:
        for tag_type, tag_values in bounty["tags"].items():
            if tag_values is not None:
                for tag_value in tag_values:
                    cursor.execute(
                        """
                        INSERT INTO tags (bountyId, tag_type, tag_value)
                        VALUES (?, ?, ?)
                        """,
                        (
                            bounty_id,
                            tag_type,
                            tag_value,
                        ),
                    )


def update_nested_data(cursor, bounty_id, bounty):
    cursor.execute("DELETE FROM rewards WHERE bountyId = ?", (bounty_id,))
    cursor.execute("DELETE FROM assets WHERE bountyId = ?", (bounty_id,))
    cursor.execute("DELETE FROM impacts WHERE bountyId = ?", (bounty_id,))
    # insert_nested_data also re-creates tags and targets rows, so clear them
    # too; otherwise every update would duplicate them.
    cursor.execute("DELETE FROM tags WHERE bountyId = ?", (bounty_id,))
    cursor.execute("DELETE FROM targets WHERE bountyId = ?", (bounty_id,))
    insert_nested_data(cursor, bounty_id, bounty)


def process_json_files(cursor, json_directory):
    for filename in os.listdir(json_directory):
        if filename.endswith(".json"):
            with open(os.path.join(json_directory, filename), "r") as file:
                data = json.load(file)
                bounty = data["pageProps"]["bounty"]
                insert_or_update_data(cursor, bounty)


def create_views(cursor, views):
    # NOTE: only quick_view is defined so far; the lookup below just avoids
    # re-creating a view that already exists.
    for view in views:
        cursor.execute(
            f"SELECT name FROM sqlite_master WHERE type='view' AND name='{view}';"
        )
        if cursor.fetchone() is None:
            cursor.execute(sql_data.sql_views.CREATE_VIEW_QUICK_VIEW)


def main():
    conn, cursor = connect_db()
    try:
        process_json_files(cursor, json_directory)
        create_views(cursor, ["quick_view"])
    finally:
        conn.commit()
        conn.close()


if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------
/datasette/metadata.json:
--------------------------------------------------------------------------------
{
    "databases": {
        "immunefi_data": {
            "tables": {
                "tags": {
                    "facets": ["tag_type"]
                }
            },
            "queries": {
                "SELECT_DATE_RANGE": {
                    "sql": "SELECT a.id, a.bountyId, a.type, a.url, a.description, b.updatedDate FROM assets a JOIN bounties b ON a.bountyId = b.bountyId WHERE b.updatedDate BETWEEN :start_date AND :end_date;",
                    "description": "Show data for targets updated between specified dates"
                },
                "TARGETS_ONCHAIN_COUNT": {
                    "sql": "SELECT COUNT(*) FROM assets a JOIN bounties b ON a.bountyId = b.bountyId WHERE a.type = 'smart_contract' AND a.url NOT LIKE '%github%';",
                    "description": "Count all targets available on-chain"
                },
                "TARGETS_TYPES_COUNT": {
                    "sql": "SELECT type, COUNT(*) as count FROM assets GROUP BY type;",
                    "description": "Count all types of bug bounties"
                },
                "TARGETS_NETWORK_COUNT": {
                    "sql": "SELECT network, COUNT(*) as count FROM targets GROUP BY network ORDER BY count DESC;",
                    "description": "Count all networks of target addresses and order by count"
                }
            }
        }
    }
}
--------------------------------------------------------------------------------
/datasette/remove_targets.py:
--------------------------------------------------------------------------------
import sqlite3
import argparse
import csv

LOCAL_DB_PATH = "immunefi_data.db"
DEFAULT_CSV_PATH = "to_remove.csv"

'''
Run: python remove_targets.py --target <network>:<address> [--table <table_name>]
If no --target is specified, targets are read from to_remove.csv
If --table is specified, removes the entire specified table instead of individual targets
Removes selected rows, or drops entire tables, from immunefi_data.db (useful for cleaning up after working)
'''
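# Example to_remove.csv layout (illustrative values; the first row is a header,
# which read_targets_from_csv skips):
#
#   target
#   mainet:0x4c362fab50bc81f0f58ef2da6b6e10b55fc1d478
#   arbi:0x7f86AC0c38bbc3211c610abE3841847fe19590A4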
def connect_db():
    conn = sqlite3.connect(LOCAL_DB_PATH)
    cursor = conn.cursor()
    return conn, cursor

def remove_targets(targets):
    conn, cursor = connect_db()
    try:
        for target in targets:
            # NOTE: rows live in the "targets" table created by load_targets.py
            cursor.execute("DELETE FROM targets WHERE target = ?", (target,))
        conn.commit()
    except Exception as e:
        print(f"Error: {e}")
    finally:
        conn.close()

def remove_table(table_name):
    conn, cursor = connect_db()
    try:
        cursor.execute(f"DROP TABLE IF EXISTS {table_name}")
        conn.commit()
        print(f"Table {table_name} has been removed.")
    except Exception as e:
        print(f"Error: {e}")
    finally:
        conn.close()

def read_targets_from_csv(file_path):
    with open(file_path, 'r') as file:
        reader = csv.reader(file)
        next(reader)  # skip the header row
        targets = [row[0] for row in reader]
    return targets

if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    parser.add_argument("--target", type=str, help="A single target to remove")
    parser.add_argument("--table", type=str, help="Name of the table to remove entirely")
    args = parser.parse_args()

    if args.table:
        remove_table(args.table)
    elif args.target:
        targets = [args.target]
        remove_targets(targets)
    else:
        targets = read_targets_from_csv(DEFAULT_CSV_PATH)
        remove_targets(targets)
--------------------------------------------------------------------------------
/datasette/sql_data/sql_queries.py:
--------------------------------------------------------------------------------
CREATE_TABLE_BOUNTIES = """
CREATE TABLE IF NOT EXISTS bounties (
    bountyId TEXT PRIMARY KEY,
    programOverview TEXT,
    prioritizedVulnerabilities TEXT,
    rewardsBody TEXT,
    outOfScopeAndRules TEXT,
    assetsBodyV2 TEXT,
    project TEXT,
    maxBounty INTEGER,
    launchDate TEXT,
    endDate TEXT,
    updatedDate TEXT,
    kyc BOOLEAN
)
"""

CREATE_TABLE_REWARDS = """
CREATE TABLE IF NOT EXISTS rewards (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    bountyId TEXT,
    assetType TEXT,
    level TEXT,
    payout TEXT,
    pocRequired BOOLEAN,
    FOREIGN KEY (bountyId) REFERENCES bounties(bountyId)
)
"""

CREATE_TABLE_ASSETS = """
CREATE TABLE IF NOT EXISTS assets (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    bountyId TEXT,
    type TEXT,
    url TEXT,
    description TEXT,
    isPrimacyOfImpact BOOLEAN,
    FOREIGN KEY (bountyId) REFERENCES bounties(bountyId)
)
"""

CREATE_TABLE_IMPACTS = """
CREATE TABLE IF NOT EXISTS impacts (
    impactId INTEGER PRIMARY KEY AUTOINCREMENT,
    bountyId TEXT,
    title TEXT,
    type TEXT,
    severity TEXT,
    FOREIGN KEY (bountyId) REFERENCES bounties(bountyId)
)
"""

CREATE_TABLE_TAGS = """
CREATE TABLE IF NOT EXISTS tags (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    bountyId TEXT,
    tag_type TEXT,
    tag_value TEXT,
    FOREIGN KEY (bountyId) REFERENCES bounties(bountyId)
)
"""

CREATE_TABLE_TARGETS = """
CREATE TABLE IF NOT EXISTS targets (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    bountyId TEXT,
    network TEXT,
    target TEXT,
    updatedDate TEXT,
    FOREIGN KEY (bountyId) REFERENCES bounties(bountyId)
)
"""

CREATE_TABLE_UPDATES = """
CREATE TABLE IF NOT EXISTS updates (
    bountyId TEXT PRIMARY KEY,
    updatedDate TEXT,
    FOREIGN KEY (bountyId) REFERENCES bounties(bountyId)
)
"""
--------------------------------------------------------------------------------
/datasette/sql_data/sql_views.py:
--------------------------------------------------------------------------------
CREATE_VIEW_QUICK_VIEW = """
CREATE VIEW quick_view AS SELECT bountyId, programOverview, assetsBodyV2 FROM bounties;
"""
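# An illustrative second view (not created by default -- load_targets.create_views
# currently only builds quick_view), showing the pattern for adding more:
CREATE_VIEW_ONCHAIN_TARGETS = """
CREATE VIEW onchain_targets AS SELECT bountyId, network, target, updatedDate FROM targets WHERE network != 'unknown';
"""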
--------------------------------------------------------------------------------
/docs/view.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shortdoom/immunefi-terminal/6cd687d72ce49f29dc816b215c4cea4d573cc280/docs/view.png
--------------------------------------------------------------------------------
/download.py:
--------------------------------------------------------------------------------
from collections import deque
from crytic_compile import CryticCompile
import sqlite3
import argparse
import time
import re
import os
import csv
import requests
import shutil
from git import Repo

"""
Download source code for targets from the Immunefi database.

Usage:
    python download.py --bountyId <bountyId>
                       --target <network>:<address>
                       --csv /path/to/file.csv

Args:
    bountyId: Download all targets for a specific bountyId (from the SQLite database)
    target: Download a single <network>:<address> target's source code (for convenience)
    csv: Download from a CSV file containing targets (a single column of <network>:<address>)

"""
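# Network targets are fetched through crytic-compile, which needs an
# Etherscan-family API key in the environment, e.g.:
#   export ETHERSCAN_API_KEY=<your key>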
script_dir = os.path.dirname(os.path.realpath(__file__))
parent_dir = os.path.dirname(script_dir)

# NOTE: XXXX project integration
if os.path.exists(os.path.join(parent_dir, "core")):
    output_dir = os.path.join(parent_dir, "core", "files", "out")
else:
    output_dir = os.path.join(script_dir, "files")
db_path = os.path.join(script_dir, "datasette/immunefi_data.db")


def connect_db():
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    return conn, cursor


def get_targets(bountyId):
    conn, cursor = connect_db()
    cursor.execute(
        "SELECT target FROM targets WHERE bountyId = ?",
        (bountyId,),
    )
    targets = [item[0] for item in cursor.fetchall()]
    conn.close()
    return targets


def get_targets_from_csv(csv_file_path):
    with open(csv_file_path, "r") as file:
        reader = csv.reader(file)
        targets = [row[0] for row in reader]
    return targets


def sort_targets(targets):
    repo_targets = []
    file_targets = []
    network_targets = []
    error_targets = []

    for target in targets:
        # Check if target is a GitHub repository
        if re.match(r"^https://github\.com/[^/]+/[^/]+/?$", target):
            repo_targets.append(target)
        # Check if target is a GitHub file
        elif re.match(r"^https://github\.com/[^/]+/[^/]+/blob/.*$", target):
            file_targets.append(target)
        # Check if target is of the form network:address
        elif ":" in target and "/" not in target:
            network_targets.append(target)
        else:
            error_targets.append(target)

    return repo_targets, file_targets, network_targets, error_targets


def clean_crytic_directory(target):
    crytic_path = os.path.join(output_dir, "etherscan-contracts")

    for contract_item in os.listdir(crytic_path):
        contract_item_path = os.path.join(crytic_path, contract_item)

        if contract_item == "crytic_compile.config.json":
            continue

        try:
            _, target_name_from_dir = contract_item.split("-", 1)
            if "-" in target_name_from_dir:
                _, contract_name = target_name_from_dir.rsplit("-", 1)
            else:
                contract_name = target_name_from_dir

            target_output = os.path.join(output_dir, f"{target}:{target_name_from_dir}")

            if os.path.exists(target_output):
                shutil.rmtree(crytic_path)
                print(f"Directory {target_output} already exists. Skipping.")
                continue

            os.mkdir(target_output)

            if os.path.isfile(contract_item_path):
                shutil.move(
                    contract_item_path,
                    os.path.join(target_output, f"{contract_name}"),
                )

                # Move the crytic_compile.config.json too
                json_file = os.path.join(crytic_path, "crytic_compile.config.json")
                if os.path.exists(json_file):
                    shutil.move(
                        json_file,
                        target_output,
                    )
            else:
                contract_items = os.listdir(contract_item_path)
                for item in contract_items:
                    shutil.move(os.path.join(contract_item_path, item), target_output)

                # Delete the old directory
                shutil.rmtree(contract_item_path)
            shutil.rmtree(crytic_path)

        except Exception as e:
            raise Exception(f"Error processing {contract_item}: {e}")


def download_source(targets, api_key=None):

    repo_targets, file_targets, network_targets, error_targets = sort_targets(targets)

    # deque holding the timestamps of the last 5 requests
    timestamps = deque(maxlen=5)

    # Enter /files
    os.chdir(output_dir)

    for target in repo_targets:
        try:
            Repo.clone_from(target, os.path.join(output_dir, target.split("/")[-1]))
        except Exception as e:
            raise Exception(f"Error repo_targets: {e}")

    for target in file_targets:
        try:
            raw_url = target.replace(
                "https://github.com", "https://raw.githubusercontent.com"
            )
            raw_url = raw_url.replace("/blob", "")
            r = requests.get(raw_url)
            with open(os.path.join(output_dir, target.split("/")[-1]), "wb") as f:
                f.write(r.content)
        except Exception as e:
            raise Exception(f"Error file_targets: {e}")

    for target in network_targets:
        # Rate limit: at most 5 requests per second
        while len(timestamps) == 5 and time.time() - timestamps[0] < 1:
            time.sleep(1 - (time.time() - timestamps[0]))
        if len(timestamps) == 5:
            timestamps.popleft()
        timestamps.append(time.time())

        if not api_key:
            # os.getenv returns None (it does not raise) when the variable is unset
            api_key = os.getenv("ETHERSCAN_API_KEY")
            if not api_key:
                raise Exception("Error: ETHERSCAN_API_KEY required")

        try:
            # Forward the key to crytic-compile's Etherscan integration
            crytic_object = CryticCompile(
                target, export_dir=output_dir, etherscan_api_key=api_key
            )
            if crytic_object.bytecode_only:
                raise Exception("Error: Bytecode only accessible")
            else:
                clean_crytic_directory(target)
        except Exception as e:
            raise Exception(f"Error network_targets: {e}")

    for target in error_targets:
        print(f"Error error_targets: Invalid target {target}")
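# Example invocations ("example-project" is a placeholder bountyId; real ones
# are listed in the Datasette UI):
#   python download.py --bountyId example-project
#   python download.py --target mainet:0x4c362fab50bc81f0f58ef2da6b6e10b55fc1d478
#   python download.py --csv targets.csv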
if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Run targets against the app and filter the results"
    )
    parser.add_argument(
        "--bountyId", help="Download all targets for a specific bountyId"
    )
    parser.add_argument(
        "--target", help="Download single <network>:<address> target source code"
    )
    parser.add_argument("--csv", help="Download from a CSV file containing targets")
    args = parser.parse_args()

    if args.target:
        targets = [args.target]
    elif args.csv:
        targets = get_targets_from_csv(args.csv)
    else:
        targets = get_targets(args.bountyId)

    download_source(targets)
--------------------------------------------------------------------------------
/files/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shortdoom/immunefi-terminal/6cd687d72ce49f29dc816b215c4cea4d573cc280/files/.gitkeep
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
aiofiles==23.2.1
anyio==4.3.0
asgi-csrf==0.9
asgiref==3.8.1
cbor2==5.6.3
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
click-default-group==1.2.4
crytic-compile @ git+https://github.com/crytic/crytic-compile.git@0d121d1b36127d88f51498ee19e6adba2b4fcfdc
datasette==0.64.6
datasette-query-history==0.2.3
exceptiongroup==1.2.1
gitdb==4.0.11
GitPython==3.1.43
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
hupper==1.12.1
idna==3.7
itsdangerous==2.2.0
janus==1.0.0
Jinja2==3.1.3
MarkupSafe==2.1.5
mergedeep==1.3.4
packaging==24.0
Pint==0.23
pluggy==1.5.0
pycryptodome==3.20.0
python-multipart==0.0.9
PyYAML==6.0.1
requests==2.31.0
smmap==5.0.1
sniffio==1.3.1
solc-select==1.0.4
typing_extensions==4.11.0
urllib3==2.2.1
uvicorn==0.29.0
--------------------------------------------------------------------------------
/run.py:
--------------------------------------------------------------------------------
import os
import subprocess

script_dir = os.path.dirname(os.path.abspath(__file__))


def main():
    datasette_dir = os.path.join(script_dir, "datasette")
    os.chdir(datasette_dir)
    subprocess.run(["python3", "fetch_targets.py"])
    subprocess.run(
        [
            "datasette",
            "serve",
            "immunefi_data.db",
            "-m",
            "metadata.json",
            "--setting",
            "max_returned_rows",
            "2000",
        ]
    )


if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------
/setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Check if the 'targets' submodule directory is missing or empty
if [ ! -d targets ] || [ -z "$(ls -A targets)" ]; then
    echo "The 'targets' directory is empty. Cloning the submodule..."
    git submodule update --init --recursive
else
    echo "The 'targets' directory is not empty. The submodule has already been cloned."
fi

# Create a new Python environment
python3 -m venv venv

# Activate the Python environment
source venv/bin/activate

# Install all dependencies
pip install -r requirements.txt
--------------------------------------------------------------------------------
/targets.csv:
--------------------------------------------------------------------------------
https://github.com/starkware-libs/cairo-lang/blob/master/src/starkware/starknet/core/os/contract_address/contract_address.cairo
https://github.com/starkware-libs/cairo-lang/blob/master/src/starkware/starknet/core/os/os_config/os_config.cairo
arbi:0x7f86AC0c38bbc3211c610abE3841847fe19590A4
mainet:0x4c362fab50bc81f0f58ef2da6b6e10b55fc1d478
https://github.com/AstarNetwork/astar-frame
--------------------------------------------------------------------------------