├── LICENSE
├── README.md
├── derrick.py
├── modules
│   ├── CVE_Advisories.sh
│   ├── CVE_Advisories_Parser.py
│   ├── scrapper_leak_normal.py
│   └── scrapper_leak_querry.py
└── requirements.txt
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2025 Sup de Vinci

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# DERRICK - Leak & CVE Scanner Tool
![image](https://github.com/user-attachments/assets/68cac3e3-0e70-4c68-9fc0-aa167b3c0dbf)

## Overview

DERRICK is a powerful utility designed to scan and retrieve information about database leaks and CVE (Common Vulnerabilities and Exposures) advisories. It provides a user-friendly interface to search for leaked databases and security vulnerabilities, making it an essential tool for security researchers, penetration testers, and IT professionals.

### The tool consists of two main components:
- **Leak Scanner**: Scans forums like BreachForums for leaked databases.
- **CVE Scanner**: Fetches CVE advisories from the GitHub Security Advisories API.

The tool is built using Python and leverages the `rich` library for a visually appealing command-line interface.

## Features

### Leak Scanner:
- Scans BreachForums for leaked databases.
- Allows filtering by date and keywords.
- Displays results with titles, URLs, and dates.

### CVE Scanner:
- Fetches CVE advisories from the GitHub Security Advisories API.
- Allows filtering by severity, date range, and keywords.
- Displays detailed information about each CVE, including CVSS scores and references.

### User-Friendly Interface:
- Interactive menu for easy navigation.
- Colorful and informative output using the `rich` library.

## Prerequisites

Before using the tool, you need the following:

### GitHub Token:
- A GitHub token is required to access the GitHub Security Advisories API.
- You can generate a token from your GitHub account settings.
- Replace the `GITHUB_TOKEN` in `CVE_Advisories.sh` with your own token.

### BreachForums Cookies:
- To access BreachForums, you need to provide a `cookies.txt` file.
- This file should contain your session cookies from BreachForums.
- You can export cookies using browser extensions like "EditThisCookie" for Chrome or "Cookie-Editor" for Firefox.
- Save the cookies in a file named `cookies.txt` in the root directory of the project.
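
The scrapers read this file in the classic Netscape `cookies.txt` layout: one cookie per line with seven tab-separated fields (domain, flag, path, secure, expiration, name, value), which is the format the cookie loaders in `modules/` parse. A minimal sketch of the expected shape, where `COOKIE_NAME`, `COOKIE_VALUE`, and the expiry timestamps are placeholders (your exported file will contain the real BreachForums entries):

```
# Netscape HTTP Cookie File
.breachforums.st	TRUE	/	TRUE	1767225600	COOKIE_NAME	COOKIE_VALUE
.breachforums.st	TRUE	/	TRUE	1767225600	OTHER_COOKIE	OTHER_VALUE
```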

## Installation

### Clone the Repository:
```bash
git clone https://github.com/supdevinci/derrick.git
cd derrick
```

### Install Dependencies:
```bash
pip install -r requirements.txt
```

### Set Up GitHub Token:
Replace the `GITHUB_TOKEN` in `CVE_Advisories.sh` with your own GitHub token.

Alternatively, you can update the token using the `--add-token` option:
```bash
./modules/CVE_Advisories.sh --add-token=your_github_token
```

### Add BreachForums Cookies:
Export your BreachForums cookies and save them in a file named `cookies.txt` in the root directory.

## Usage

When you run the tool, you will be presented with the main menu:

```



 ██████╗ ███████╗██████╗ ██████╗ ██╗ ██████╗██╗ ██╗
 ██╔══██╗██╔════╝██╔══██╗██╔══██╗██║██╔════╝██║ ██╔╝
 ██║ ██║█████╗ ██████╔╝██████╔╝██║██║ █████╔╝
 ██║ ██║██╔══╝ ██╔══██╗██╔══██╗██║██║ ██╔═██╗
 ██████╔╝███████╗██║ ██║██║ ██║██║╚██████╗██║ ██╗
 ╚═════╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═════╝╚═╝ ╚═╝
 The Ultimate LEAKED Tool v1.0



 Brute Force? Ich monitor! Du still use 'admin123'.
 |████████) /
 |█|█ __|
 |█||--|/.))
 (█ ‾ ‾)
 | |
 ''.. |_)
 |'..'---_ ,____,
 / ''---| //'
 / \ \ ‾‾ ~
 | \ \ \




Select an option:
[1] Scan database leaks | [2] Scan CVE Advisories | [q] Quit


Enter your choice [1/2/q] (1):
```

### 1. Scan Database Leaks
- Option 1: Scan for leaked databases.
- You will be prompted to enter the number of days to search back and a query (optional).

Example:
```bash
Enter number of days for the search [5]: 7
Enter query (leave empty for all leaks): France
```

### 2. Scan CVE Advisories
- Option 2: Scan for CVE advisories.
- You will be prompted to enter the severity level, number of days to search back, and a query (optional).

Example:
```bash
Enter severity level (default: critical): critical
Enter number of days for the search [5]: 7
Enter query (leave empty for all advisories): Apache
```
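
### Running the Modules Directly
The menu is a thin wrapper that shells out to the scripts in `modules/`, so each scanner can also be run on its own. The flags below follow the `argparse` options and the `usage()` help text in the sources; the keyword, severity, and date values are only examples. Run the leak scrapers from the repository root so they can find `cookies.txt`.

```bash
# Leaks from the last 7 days matching one or more keywords
python3 modules/scrapper_leak_querry.py -d 7 -q france vpn

# All leaks from the last 5 days, without a keyword filter
python3 modules/scrapper_leak_normal.py -d 5

# Critical advisories published in the last 3 days that mention Apache
python3 modules/CVE_Advisories_Parser.py -s critical -d 3 -q Apache

# Query the GitHub Security Advisories API directly (uses the token set in the script)
./modules/CVE_Advisories.sh --severity=high --published=2025-01-01..2025-01-31 --last=5
```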

## Contributing
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.

## License
This project is licensed under the **MIT License**. See the LICENSE file for details.

## Acknowledgments
- **BreachForums** for the leaked database information.
- **GitHub Security Advisories** for the CVE data.
- **Rich** for the beautiful terminal formatting.
- **Inspecteur Derrick** for the inspiration and the iconic detective vibes.

> **Note:** This tool is intended for **educational and research purposes only**. Use it responsibly and ensure you have permission to scan and access any resources.
--------------------------------------------------------------------------------
/derrick.py:
--------------------------------------------------------------------------------
import random
import subprocess
import os
from datetime import datetime, timedelta
from rich.console import Console
from rich.prompt import Prompt
from rich.progress import Progress, BarColumn, TextColumn, TimeElapsedColumn
from rich.table import Table

# Console initialisation
console = Console()

# Constants
VERSION = "v1.0"
DEFAULT_DAYS = 5  # Default number of days to search back

random_phrases = [
    # 🛡️ Leak-related phrases
    "Leak gefunden? Ich trace fast!",
    "Datenbank? Ich browse deep!",
    "Security? Ich break leaks!",
    "Dump gefunden? Ich scan alles!",
    "Passwords? Ich finde schnell!",
    "Darknet? Ich search leaks!",
    "Private keys? Ich expose now!",
    "Hacking? Ich leak info!",
    "Security breach? Ich index sofort!",
    "Daten verloren? Ich recover leaks!",
    "Black market? Ich analyze files!",
    "Secret data? Ich reveal leaks!",
    "Cybercrime? Ich detect sources!",
    "Anonym? Ich track footprints!",
    "Versteckte Dateien? Ich find alles!",
    "Forensik? Ich scrape tief!",
    "Exponiert? Ich search dumps!",
    "Verschlüsselung? Ich break schnell!",
    "Datenleck? Ich check alles!",

    # 🛡️ CVE-related phrases (with more English & irony)
    "Zero-Day? Ich report it! But Du ignore it.",
    "Exploit gefunden? Ich analyze! Du hope it's fake.",
    "Schwachstelle? Ich scan it! Du trust die firewalls.",
    "Ungepatcht? Ich prüfe! Du say 'later'.",
    "Kernel Panic? Ich debug! Du restart and pray.",
    "Remote Code? Ich see it! Du still use die default creds?",
    "Buffer Overflow? Ich finde it! Du don’t care until der crash.",
    "Privilege Escalation? Ich detect! Du say 'not mein problem'.",
    "Patchday? Ich install die updates! Du click 'remind mich tomorrow'.",
    "CVE detected? Ich log it! Du close deine eyes.",
    "Exploit? Ich reverse-engineer! Du google 'how to fix'.",
    "Memory Corruption? Ich debug! Du ask ChatGPT for hilfe.",
    "Injection? Ich find it! Du think 'nobody tries this'.",
    "Brute Force? Ich monitor! Du still use 'admin123'.",
    "APT detected? Ich track! Du think 'we are too small'.",
    "XSS? Ich break it! Du ignore die security headers.",
    "Ransomware? Ich analyze! Du call der boss.",
    "Malware? Ich detect it! Du double-click anyway.",
    "Firewall Bypass? Ich check! Du whitelist everything.",
    "SOC Alert? Ich read die logs! Du delete them.",
]


# Pick a random pastel colour
def random_pastel_color():
    r = random.randint(150, 255)
    g = random.randint(150, 255)
    b = random.randint(150, 255)
    return f"rgb({r},{g},{b})"

def display_derrick():
    colors = [random_pastel_color() for _ in range(6)]

    ascii_art_lines = [
        " ",
        " ",
        " ",
        " ██████╗ ███████╗██████╗ ██████╗ ██╗ ██████╗██╗ ██╗",
        " ██╔══██╗██╔════╝██╔══██╗██╔══██╗██║██╔════╝██║ ██╔╝",
        " ██║ ██║█████╗ ██████╔╝██████╔╝██║██║ █████╔╝ ",
        " ██║ ██║██╔══╝ ██╔══██╗██╔══██╗██║██║ ██╔═██╗ ",
        " ██████╔╝███████╗██║ ██║██║ ██║██║╚██████╗██║ ██╗",
        " ╚═════╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═════╝╚═╝ ╚═╝"
    ]

    colored_ascii_art = "\n".join(
        f"[bold {colors[i % len(colors)]}]{line}[/bold {colors[i % len(colors)]}]"
        for i, line in enumerate(ascii_art_lines)
    )

    title = f" [bold cyan]The Ultimate LEAKED Tool[/bold cyan] [bold magenta]{VERSION}[/bold magenta]".center(80)
    title2 = f" [bold cyan] [/bold cyan] [bold white] [/bold white]"

    second_ascii_art = r"""

 [bold white]""" + random.choice(random_phrases) + r"""[/bold white]
 [bold white] |████████) /
 |█|█ __|
 |█||--|/.))
 (█ ‾ ‾)
 | |
 ''.. |_)
 |'..'---_ ,____,
 / ''---| //'
 / \ \ ‾‾ ~
 | \ \ \
 [/bold white]
 """

    footer = f"""
    """.strip()

    combined_text = f"{colored_ascii_art}\n{title}\n{title2}\n{second_ascii_art}\n{footer}"
    console.print(combined_text)

def get_date_range(days):
    end_date = datetime.today()
    start_date = end_date - timedelta(days=days)
    return f"{start_date.strftime('%Y-%m-%d')}..{end_date.strftime('%Y-%m-%d')}"

def run_leak_scanner():
    console.print("[bold magenta]\n🔎 Scanning database leaks...\n[/bold magenta]")
    days = Prompt.ask("[bold yellow]Enter number of days for the search [/bold yellow]", default=str(DEFAULT_DAYS))
    query = Prompt.ask("[bold yellow]Enter query (leave empty for all leaks)[/bold yellow]", default="")

    if query.strip():
        script_path = "modules/scrapper_leak_querry.py"
        args = ["-d", str(days), "-q"] + query.split()
    else:
        script_path = "modules/scrapper_leak_normal.py"
        args = ["-d", str(days)]

    if not os.path.exists(script_path):
        console.print(f"[bold red]❌ Error: {script_path} not found![/bold red]")
        return

    try:
        result = subprocess.run(["python3", script_path] + args, capture_output=True, text=True, check=True)
        console.print(f"[bold green]✅ Leak Scan Results:[/bold green]\n{result.stdout}")
    except subprocess.CalledProcessError as e:
        console.print(f"[bold red]❌ Leak scanner error: {e.stderr}[/bold red]")

    Prompt.ask("[bold cyan]Press Enter to return to the menu...[/bold cyan]", default="")

    os.system('clear' if os.name == 'posix' else 'cls')

    menu_selection()

def run_cve_scanner():
    console.print("[bold magenta]\n🛡️ Scanning CVE Advisories...\n[/bold magenta]")
    severity = Prompt.ask("[bold yellow]Enter severity level (default: critical)[/bold yellow]", default="critical")
    days = Prompt.ask("[bold yellow]Enter number of days for the search (-d value)[/bold yellow]", default=str(DEFAULT_DAYS))
    query = Prompt.ask("[bold yellow]Enter query (leave empty for all advisories)[/bold yellow]", default="")

    script_path = "modules/CVE_Advisories_Parser.py"

    if not os.path.exists(script_path):
        console.print(f"[bold red]❌ Error: {script_path} not found![/bold red]")
        return

    args = ["-s", severity, "-d", str(days)]
    if query.strip():
        args.extend(["-q", query])

    try:
        result = subprocess.run(["python3", script_path] + args, capture_output=True, text=True, check=True)
        console.print(f"[bold green]✅ CVE Scan Results:[/bold green]\n{result.stdout}")
    except subprocess.CalledProcessError as e:
        console.print(f"[bold red]❌ CVE scanner error: {e.stderr}[/bold red]")

    Prompt.ask("[bold cyan]Press Enter to return to the menu...[/bold cyan]", default="")

    os.system('clear' if os.name == 'posix' else 'cls')

    menu_selection()

def menu_selection():
    display_derrick()
    console.print("\n[bold cyan]Select an option:[/bold cyan]")
    console.print("[bold yellow][1][/bold yellow] [bold white]Scan database leaks[/bold white] | "
                  "[bold yellow][2][/bold yellow] [bold white]Scan CVE Advisories[/bold white] | "
                  "[bold red]\[q][/bold red] [bold white]Quit[/bold white]\n")

    while True:
        choice = Prompt.ask("\n[bold yellow]Enter your choice [/bold yellow]", choices=["1", "2", "q"], default="1")

        if choice == "1":
            run_leak_scanner()
        elif choice == "2":
            run_cve_scanner()
        elif choice == "q":
            console.print("[bold red]🚪 Exiting program.[/bold red]")
            exit(0)


if __name__ == "__main__":
    try:
        menu_selection()
    except KeyboardInterrupt:
        console.print("\n[bold red]❌ Interrupted by user. Exiting...[/bold red]")
        exit(0)
--------------------------------------------------------------------------------
/modules/CVE_Advisories.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Script to interact with the GitHub Security Advisories API
# Replace with your GitHub authentication token

GITHUB_TOKEN="xxx_XXXXXXXXXXXXX"
BASE_URL="https://api.github.com/advisories"

# Function to display usage instructions
usage() {
    echo "Usage: $0 [OPTIONS]"
    echo "Options:"
    echo "  --query=            Filter advisories containing a specific keyword (e.g., Fortinet, VMware)"
    echo "  --ghsa_id=          Filter by GHSA-ID"
    echo "  --type=             Filter by type (reviewed, malware, unreviewed)"
    echo "  --cve_id=           Filter by CVE-ID"
    echo "  --ecosystem=        Filter by ecosystem (npm, pip, etc.)"
    echo "  --severity=         Filter by severity (low, medium, high, critical)"
    echo "  --cwes=             Filter by CWEs (example: 79,284)"
    echo "  --is_withdrawn=     Include only withdrawn advisories"
    echo "  --affects=          Filter by affected packages"
    echo "  --published=        Filter by publication date (e.g., 2023-01-01..2023-12-31)"
    echo "  --updated=          Filter by update date"
    echo "  --modified=         Filter by modification date"
    echo "  --epss_percentage=  Filter by EPSS percentage"
    echo "  --epss_percentile=  Filter by EPSS percentile"
    echo "  --direction=        Sort order (default: desc)"
    echo "  --sort=             Property to sort by (published, updated, etc.)"
    echo "  --last=             Show only the last N results (default: 10)"
    echo "  --add-token=        Add or update the GitHub token"
}

# Initialize parameters
PARAMS="direction=desc&per_page=100"
QUERY_KEYWORD=""
LAST_COUNT=10           # Default to 10 results if --last is not provided
RESULTS_COUNT=0         # Counter for the number of matching results
COLLECTED_RESULTS="[]"  # Initialize empty JSON array for results

# Parse arguments
for arg in "$@"; do
    case $arg in
        --query=*)
            QUERY_KEYWORD="${arg#*=}"
            ;;
        --ghsa_id=*)
            ghsa_id="${arg#*=}"
            PARAMS+="&ghsa_id=$ghsa_id"
            ;;
        --type=*)
            type="${arg#*=}"
            PARAMS+="&type=$type"
            ;;
        --cve_id=*)
            cve_id="${arg#*=}"
            PARAMS+="&cve_id=$cve_id"
            ;;
        --ecosystem=*)
            ecosystem="${arg#*=}"
            PARAMS+="&ecosystem=$ecosystem"
            ;;
        --severity=*)
            severity="${arg#*=}"
            PARAMS+="&severity=$severity"
            ;;
        --cwes=*)
            cwes="${arg#*=}"
            PARAMS+="&cwes=$cwes"
            ;;
        --is_withdrawn=*)
            is_withdrawn="${arg#*=}"
            PARAMS+="&is_withdrawn=$is_withdrawn"
            ;;
        --affects=*)
            affects="${arg#*=}"
            PARAMS+="&affects=$affects"
            ;;
        --published=*)
            published="${arg#*=}"
            PARAMS+="&published=$published"
            ;;
        --updated=*)
            updated="${arg#*=}"
            PARAMS+="&updated=$updated"
            ;;
        --modified=*)
            modified="${arg#*=}"
            PARAMS+="&modified=$modified"
            ;;
        --epss_percentage=*)
            epss_percentage="${arg#*=}"
            PARAMS+="&epss_percentage=$epss_percentage"
            ;;
        --epss_percentile=*)
            epss_percentile="${arg#*=}"
            PARAMS+="&epss_percentile=$epss_percentile"
            ;;
        --direction=*)
            direction="${arg#*=}"
            PARAMS="${PARAMS//direction=desc/}&direction=$direction"
            ;;
        --sort=*)
            sort="${arg#*=}"
            PARAMS+="&sort=$sort"
            ;;
        --last=*)
            LAST_COUNT="${arg#*=}"
            ;;
        --add-token=*)
            new_token="${arg#*=}"
            escaped_token=$(printf '%s\n' "$new_token" | sed 's/[\/&]/\\&/g')
            sed -i "s/^GITHUB_TOKEN=.*/GITHUB_TOKEN=\"$escaped_token\"/" "$0"
            echo "GitHub token successfully updated."
            exit 0
            ;;
        *)
            echo "Unknown option: $arg"
            usage
            exit 1
            ;;
    esac
done

# Function to fetch advisories
fetch_advisories() {
    local url="$1"
    curl -i -s -L \
        -H "Accept: application/vnd.github+json" \
        -H "Authorization: Bearer $GITHUB_TOKEN" \
        -H "X-GitHub-Api-Version: 2022-11-28" \
        "$url"
}

# Start URL
URL="$BASE_URL?$PARAMS"
NEXT_URL="$URL"

# Fetch pages until the desired number of results is found
while [ -n "$NEXT_URL" ] && [ "$RESULTS_COUNT" -lt "$LAST_COUNT" ]; do
    response=$(fetch_advisories "$NEXT_URL")

    # Extract the headers and body
    headers=$(echo "$response" | sed -n '/^HTTP\/2 200/,/^$/p')
    body=$(echo "$response" | sed -n '/^\[/,$p')

    # Check if the response is valid JSON
    if echo "$body" | jq -e . > /dev/null 2>&1; then
        # Filter results by keyword if specified
        if [ -n "$QUERY_KEYWORD" ]; then
            matching_results=$(echo "$body" | jq --arg keyword "$QUERY_KEYWORD" '
                .[] | select(
                    (.summary | test($keyword; "i")) or
                    (.description | test($keyword; "i"))
                )'
            )
        else
            matching_results=$(echo "$body")
        fi

        # Add matching results to the collected results
        if [ -n "$matching_results" ]; then
            COLLECTED_RESULTS=$(echo "$COLLECTED_RESULTS" "$matching_results" | jq -s 'flatten')
            RESULTS_COUNT=$(echo "$COLLECTED_RESULTS" | jq 'length')
        fi

        # Stop if the desired number of results is reached
        if [ "$RESULTS_COUNT" -ge "$LAST_COUNT" ]; then
            break
        fi
    fi

    # Check if there is a next page
    if echo "$headers" | grep -q 'rel="next"'; then
        NEXT_URL=$(echo "$headers" | grep -Eo '<[^>]+>; rel="next"' | sed -E 's/^<([^>]+)>; rel="next"/\1/')
    else
        NEXT_URL=""
    fi
done

# Display the results
if [ "$RESULTS_COUNT" -eq 0 ]; then
    echo "No results found."
else
    # COLLECTED_RESULTS is already a flat JSON array; slice it down to the requested count
    echo "$COLLECTED_RESULTS" | jq ".[0:$LAST_COUNT]"
fi
--------------------------------------------------------------------------------
/modules/CVE_Advisories_Parser.py:
--------------------------------------------------------------------------------
import subprocess
import json
import argparse
import os
from datetime import datetime, timedelta
from rich.console import Console

console = Console()

def run_bash_script(severity, query, published):
    """
    Runs the Bash script with the given arguments and returns its JSON output.
    """
    script_dir = os.path.dirname(os.path.abspath(__file__))  # Path of the modules directory
    script_path = os.path.join(script_dir, "CVE_Advisories.sh")  # Build the absolute path to the script

    command = [script_path, f"--severity={severity}", f"--published={published}"]

    if query:
        command.append(f"--query={query}")

    try:
        result = subprocess.run(command, capture_output=True, text=True, check=True)
        return result.stdout.strip()
    except subprocess.CalledProcessError as e:
        console.print(f"[bold red]Error while running CVE_Advisories.sh: {e.stderr}[/bold red]")
        return None

def parse_json_output(json_str):
    """
    Parses the JSON returned by the script and prints the key fields of each advisory.
    """
    try:
        advisories = json.loads(json_str)
        if not advisories:
            console.print("[bold red]No results found.[/bold red]")
            return

        if advisories and isinstance(advisories[0], list):
            advisories = advisories[0]  # Unwrap a nested result array if one is present

        for advisory in advisories:
            ghsa_id = advisory.get("ghsa_id", "N/A")
            cve_id = advisory.get("cve_id", "N/A")
            summary = advisory.get("summary", "N/A")
            url = advisory.get("html_url", advisory.get("url", "N/A"))
            date_published = advisory.get("published_at", "N/A")
            references = advisory.get("references", [])
            cvss_score = (advisory.get("cvss") or {}).get("score", "N/A")

            console.print("\n[bold cyan]CVE:[/bold cyan]", cve_id)
            console.print("[bold cyan]GHSA:[/bold cyan]", ghsa_id)
            console.print("[bold cyan]Summary:[/bold cyan]", summary)
            console.print("[bold cyan]URL:[/bold cyan]", url)
            console.print("[bold cyan]Published:[/bold cyan]", date_published)
            console.print("[bold cyan]CVSS score:[/bold cyan]", cvss_score)
            console.print("[bold cyan]References:[/bold cyan]")

            for ref in references:
                console.print(f"  ➜ [blue]{ref}[/blue]")

            console.print("-" * 40)

    except json.JSONDecodeError:
        console.print("[bold red]No results found.[/bold red]")

def calculate_published_date(days):
    """
    Computes the publication date range for the given number of days.
    """
    end_date = datetime.today().strftime("%Y-%m-%d")
    start_date = (datetime.today() - timedelta(days=days)).strftime("%Y-%m-%d")
    return f"{start_date}..{end_date}"

def main():
    parser = argparse.ArgumentParser(description="Runs CVE_Advisories.sh and formats its JSON output.")
    parser.add_argument("-s", "--severity", default="critical", help="Severity level (default: critical)")
    parser.add_argument("-q", "--query", help="Search term for the advisory (optional)")
    parser.add_argument("-d", "--days", type=int, default=5, help="Number of days to search back for publications (default: 5)")
    args = parser.parse_args()

    published_range = calculate_published_date(args.days)
    json_output = run_bash_script(args.severity, args.query, published_range)

    if json_output:
        parse_json_output(json_output)

if __name__ == "__main__":
    main()
--------------------------------------------------------------------------------
/modules/scrapper_leak_normal.py:
--------------------------------------------------------------------------------
from rich.text import Text
from rich.style import Style
from rich.live import Live
import time
import random
import requests
import shutil
import argparse
from bs4 import BeautifulSoup
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, TimeElapsedColumn

# Initialize Rich for styling and color
console = Console()


# Argument to change the datecut value
def parse_arguments():
    parser = argparse.ArgumentParser(description="Scraper BreachForums")
    parser.add_argument("-d", "--datecut", type=int, default=5, help="Change the datecut value (default 5)")
    return parser.parse_args()

args = parse_arguments()
datecut_value = args.datecut

# Base URL
base_url = "https://breachforums.st"

# Load cookies from a cookies.txt file
cookies = {}
try:
    with open('cookies.txt', 'r') as file:
        for line in file:
            if not line.startswith('#') and line.strip() != '':
                try:
                    parts = line.strip().split('\t')
                    if len(parts) == 7:
                        domain, flag, path, secure, expiration, name, value = parts
                        cookies[name] = value
                except ValueError:
                    pass
except FileNotFoundError:
    console.print("[bold red]Error: cookies.txt file not found![/bold red]")
    exit(1)

# User-Agent header
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
}

def get_total_pages(soup):
    pagination = soup.find(class_="pagination")
    if pagination:
        page_links = pagination.find_all('a', href=True)
        page_numbers = [int(link['href'].split('page=')[1].split('&')[0]) for link in page_links if 'page=' in link['href']]
        return max(page_numbers) if page_numbers else 1
    return 1

def scrape_page(url):
    response = requests.get(url, headers=headers, cookies=cookies)
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'html.parser')
        subjects = soup.find_all(class_="subject_new")
        results = []

        for subject in subjects:
            subject_text = subject.get_text(strip=True)
            subject_url = subject.find('a')['href']
            full_url = f"{base_url}/{subject_url}"

            # Find the thread date
            date_tag = subject.find_next(class_="forum-display__thread-date")
            date = date_tag.get_text(strip=True) if date_tag else "Date not found"

            results.append((subject_text, full_url, date))

        return results
    return []

def run_scraper():
    console.print("[bold magenta]Starting scanning...[/bold magenta]")
    first_page_url = f"{base_url}/Forum-Databases?page=1&datecut={datecut_value}&sortby=started&order=desc&prefix=14"
    response = requests.get(first_page_url, headers=headers, cookies=cookies)

    if response.status_code == 200:
        soup = BeautifulSoup(response.text, 'html.parser')
        total_pages = get_total_pages(soup)
        console.print(f"[bold cyan]Total pages found: {total_pages}[/bold cyan]")

        results = []
        with Progress() as progress:
            task = progress.add_task("[cyan]Scraping...", total=total_pages)

            for page in range(1, total_pages + 1):
                page_url = f"{base_url}/Forum-Databases?page={page}&datecut={datecut_value}&sortby=started&order=desc&prefix=14"
                results.extend(scrape_page(page_url))
                progress.update(task, advance=1)
                time.sleep(0.3)

        console.print(f"[bold cyan]\nResults found ({len(results)}) :\n\n[/bold cyan]")
        for subject, url, date in results:
            console.print(f"[bold cyan]Leaked:[/bold cyan] {subject}")
            console.print(f"[bold green]URL:[/bold green] {url}")
            console.print(f"[bold yellow]DATE:[/bold yellow] {date}")
            console.print("-" * 40, style="dim")
    else:
        console.print(f"[bold red]Error during request: {response.status_code}[/bold red]")

# Then, run the scraper
run_scraper()
--------------------------------------------------------------------------------
/modules/scrapper_leak_querry.py:
--------------------------------------------------------------------------------
import requests
from bs4 import BeautifulSoup
from rich.console import Console
import argparse

# Initialise the console
console = Console()

# BreachForums base URL
BASE_URL = "https://breachforums.st"

# Load cookies from the cookies.txt file
def load_cookies():
    cookies = {}
    try:
        with open("cookies.txt", "r") as file:
            for line in file:
                if not line.startswith("#") and line.strip():
                    parts = line.strip().split("\t")
                    if len(parts) == 7:
                        domain, flag, path, secure, expiration, name, value = parts
                        cookies[name] = value
    except FileNotFoundError:
        console.print("[bold red]Error: cookies.txt not found![/bold red]")
        exit(1)
    return cookies

# Headers to avoid being blocked
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}

# Arguments for the datecut value and the search query
def parse_arguments():
    parser = argparse.ArgumentParser(description="Scraper BreachForums")
    parser.add_argument("-d", "--datecut", type=int, default=7, help="Number of days to search back (default 7)")
    parser.add_argument("-q", "--query", nargs="+", help="Search keywords (e.g. -q france vpn leak)")
    return parser.parse_args()

args = parse_arguments()
DATECUT_VALUE = args.datecut
QUERY_LIST = args.query if args.query else []

# Build the URL with the filters
def build_url():
    if QUERY_LIST:
        # Advanced search with keywords
        keywords_str = "+".join(QUERY_LIST)
        return f"{BASE_URL}/search.php?action=do_search&keywords={keywords_str}&postthread=2&author=&matchusername=1&forums[]=all&findthreadst=1&numreplies=&postdate={DATECUT_VALUE}&pddir=1&threadprefix[]=14&sortby=lastpost&sortordr=desc&showresults=threads&submit=Search"
    else:
        # Standard listing of the leaked databases forum
        return f"{BASE_URL}/Forum-Databases?datecut={DATECUT_VALUE}&sortby=started&order=desc&prefix=14"

# Scrape a single results page
def scrape_page():
    url = build_url()
    response = requests.get(url, headers=HEADERS, cookies=load_cookies())

    if response.status_code != 200:
        console.print(f"[bold red]Error fetching page: {response.status_code}[/bold red]")
        return []

    soup = BeautifulSoup(response.text, "html.parser")

    # Check that we landed on the expected results page
    if QUERY_LIST:
        results_table = soup.find("table", class_="tborder")
        if not results_table:
            console.print("[bold red]No results found for your query.[/bold red]")
            return []
        subjects = results_table.find_all("a", class_="subject_new")
    else:
        subjects = soup.find_all(class_="subject_new")

    leaks = []
    for subject in subjects:
        title = subject.get_text(strip=True)
        link = subject.get("href")

        if not link:
            continue  # Skip if no link was found

        full_url = f"{BASE_URL}/{link}"

        # Find the date (specific to threads)
        date_tag = subject.find_parent("tr").find_all("td")[-1]
        date = date_tag.get_text(strip=True) if date_tag else "Unknown Date"

        leaks.append({"title": title, "url": full_url, "date": date})

    return leaks

# Main function: scrape and display the results
def run_scraper():
    console.print("[bold yellow]Scanning database leaks...[/bold yellow]")

    leaks = scrape_page()

    # Display the results
    if not leaks:
        console.print("[bold red]No leaks found.[/bold red]")
        return

    console.print(f"\n[bold cyan]Leaks found ({len(leaks)}):\n[/bold cyan]")

    for leak in leaks:
        console.print(f"[bold cyan]Leak:[/bold cyan] {leak['title']}")
        console.print(f"[bold green]URL:[/bold green] {leak['url']}")
        console.print(f"[bold yellow]Date:[/bold yellow] {leak['date']}")
        console.print("-" * 60)

# Run the scraper
if __name__ == "__main__":
    run_scraper()
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
requests==2.31.0
beautifulsoup4==4.12.2
rich==13.7.0
--------------------------------------------------------------------------------