├── README.md
└── subowner.py

/README.md:
--------------------------------------------------------------------------------
# subowner

SubOwner - a simple tool to check for subdomain takeovers. It resolves each subdomain's CNAME record and verifies it against known vulnerable services. If a subdomain is found to be vulnerable, the vulnerable URL is saved to a file.

## Disclaimer

> [!WARNING]
> This tool is intended only for educational purposes and for testing in authorized environments. https://twitter.com/nav1n0x/ and https://github.com/ifconfig-me take no responsibility for the misuse of this code. Use it at your own risk. Do not attack a target you don't have permission to engage with. This tool uses publicly released payloads and methods.


![image](https://github.com/user-attachments/assets/bd3a0f26-4551-45db-9f69-022a9421e581)


## Features

- Supports multiple services for takeover (AWS S3, GitHub Pages, Heroku, Shopify, etc.).
- Performs CNAME resolution and service-specific checks.
- Outputs vulnerable subdomains to a file.
- Use a subdomain list without a scheme (no http:// or https:// prefixes).


### Update - 17/10/2024

Modifications:

1. Updated the script with **ThreadPoolExecutor** to process subdomains concurrently.
2. Added the `-t` or `--threads` argument to specify how many threads should be used.
3. Each subdomain is processed in a separate thread.

The script now checks subdomains faster by utilizing multiple threads. You can adjust the number of threads via the `-t` argument.
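The concurrent flow described above can be sketched roughly as follows. This is a minimal illustration, not the tool's actual code: the `FINGERPRINTS` mapping and the `check_cname`/`scan` helpers are hypothetical stand-ins for the real CNAME resolution and service-specific checks.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical fingerprint table: CNAME suffix -> potentially vulnerable service.
# The real tool checks many more services (AWS S3, Heroku, Shopify, etc.).
FINGERPRINTS = {
    "github.io": "GitHub Pages",
    "s3.amazonaws.com": "AWS S3",
    "herokuapp.com": "Heroku",
}

def check_cname(subdomain, cname):
    """Return (subdomain, service) if the CNAME points at a known service, else (subdomain, None)."""
    for suffix, service in FINGERPRINTS.items():
        if cname.rstrip(".").endswith(suffix):
            return subdomain, service
    return subdomain, None

def scan(pairs, threads=4):
    """Check (subdomain, cname) pairs concurrently, one task per subdomain."""
    results = {}
    with ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(check_cname, sub, cname) for sub, cname in pairs]
        for fut in as_completed(futures):
            sub, service = fut.result()
            results[sub] = service
    return results
```

In the real script the worker would also resolve the CNAME with `dns.resolver` and fetch the page to confirm the takeover signature; here the resolution result is passed in directly to keep the sketch self-contained.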
--------------------------------------------------------------------------------
/subowner.py:
--------------------------------------------------------------------------------
#!/usr/bin/python3
import requests
import argparse
import dns.resolver
from bs4 import BeautifulSoup
from colorama import Fore, Style, init
from urllib.parse import urlparse
from concurrent.futures import ThreadPoolExecutor, as_completed

# Color constants used by the banner.
BOLD_BLUE = Style.BRIGHT + Fore.BLUE
NC = Style.RESET_ALL

def show_banner():
    banner = f"""
{BOLD_BLUE} ███▄ █ ▄▄▄ ██▒ █▓ ███▄ █ {NC}
{BOLD_BLUE} ██ ▀█ █ ▒████▄ ▓██░ █▒ ██ ▀█ █ {NC}
{BOLD_BLUE}▓██ ▀█ ██▒▒██ ▀█▄▓██ █▒░▓██ ▀█ ██▒{NC}
{BOLD_BLUE}▓██▒ ▐▌██▒░██▄▄▄▄██▒██ █░░▓██▒ ▐▌██▒{NC}
{BOLD_BLUE}▒██░ ▓██░ ▓█ ▓██▒▒▀█░ ▒██░ ▓██░{NC}
{BOLD_BLUE}░ ▒░ ▒ ▒ ▒▒ ▓▒█░░ ▐░ ░ ▒░ ▒ ▒ {NC}
{BOLD_BLUE}░ ░░ ░ ▒░ ▒ ▒▒ ░░ ░░ ░ ░░ ░ ▒░{NC}
{BOLD_BLUE} ░ ░ ░ ░ ▒ ░░ ░ ░ ░ {NC}
{BOLD_BLUE} ░ ░ ░ ░ ░ {NC}
{BOLD_BLUE} ░ {NC}
"""
    print(banner)

init(autoreset=True)

def get_page_title(content):
    try:
        soup = BeautifulSoup(content, 'html.parser')
        title = soup.title.string if soup.title else None
        if not title and '