├── LICENSE ├── README.md ├── Screenshots ├── default_threader.png └── threader.png ├── multi_ping.py ├── multi_port.py └── threading.py /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2020 Tristram 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Multithreading tasks using python 3 2 | As you start to get more exposure with penetration testing, you're going to find out real quick that time is precious. You could spend hours on trivial tasks and at some point you're going to want to find ways to become more efficient by automating some of your workflows. 3 | 4 | To help myself overcome the barrier of time, I decided to do a deep dive to get more comfortable with using python. As efficient as this language is, eventually I wanted to learn how to incorporate multithreading to make my instructions run even faster. 5 | 6 | ## Disclaimer 7 | 8 | This repository and the data provided has been created purely for the purposes of academic research and for the development of effective security techniques and is not intended to be used to attack systems except where explicitly authorized. It is your responsibility to obey all applicable local, state and federal laws. 9 | 10 | Project maintainers assume no liability and are not responsible for any misuse or damage caused by the data therein. 11 | 12 | ## Sections: 13 | 14 | 1. What is a worker function? 15 | 2. What is a queue? 16 | 3. What is a thread? 17 | 4. Multithreaded Ping Sweep 18 | 5. Multithreaded Port Scanner 19 | 20 | ## What is a worker function? 21 | A worker function is the set of instructions you define to be run within each thread for every task in the queue. It could be an all-inclusive function, or can even contain other functions you define. Within this function, you need to grab a task from the queue which is how your worker is going to know each unique task it needs to complete. After you're finished, you gracefully tell the queue you're done with that task. 22 | 23 | When you write your series of instructions for a threaded operation, you need to guard against simultaneous access to an object. In the snippet below, the shared object is the item variable. 
If you do not use a lock, a script that runs a lot of threads can have those threads stumble over one another, causing corrupted data or garbled output. A lock is just as useful when your logic performs mathematical computations on shared values, not only when it prints.
24 | 
25 | 
26 | ```python
27 | def worker():
28 |     while True:
29 |         item = q.get()
30 |         with thread_lock:
31 |             print(f'Working on {item}')
32 |             print(f'Finished {item}')
33 |         time.sleep(1)
34 |         q.task_done()
35 | 
36 | # Define a thread lock
37 | thread_lock = threading.Lock()
38 | ```
39 | 
40 | ## What is a queue?
41 | A queue provides a logical means to run a sequence of tasks in an organized manner, which is very useful when combined with threads. You create your queue and fill it with tasks. For example, your script could be targeting a list of IP addresses, so you fill your queue with IPs. When your worker function runs, it grabs an IP from the queue and executes its series of instructions. Keep in mind that the number of threads you start determines how many tasks run at a time.
42 | 
43 | * One Thread = One task at a time until the queue is empty
44 | * Two Threads = Two tasks at a time until the queue is empty
45 | * Ten Threads = Ten tasks at a time until the queue is empty
46 | 
47 | Don't forget the importance of q.join(). It instructs the main thread to wait until all the tasks are complete before moving on. If you don't put this in place, then after you create your queue and start your work, the main thread will reach the end of the script and exit, killing every running task without giving it time to finish.
48 | 
49 | ```python
50 | # Create our queue
51 | q = queue.Queue()
52 | 
53 | # send ten task requests to the worker
54 | for item in range(10):
55 |     q.put(item)
56 | 
57 | # block until all tasks are done
58 | q.join()
59 | ```
60 | 
61 | ## What is a thread?
62 | A thread on its own is a single sequence of instructions. With a single thread, one command runs at a time, as in a simple script. When you introduce more threads, or multithreading, you can have multiple series of instructions running at the same time for simultaneous execution.
63 | 
64 | For example, let's say you have a script that sends ICMP probes to two servers and it takes two seconds to probe each server. In a single thread, the entire run takes four seconds. That might not sound like a big deal, but when you're scanning a /24 network it adds up quickly, especially once you start talking about a /16 or a /8. When you introduce two threads, with each thread probing one server, it takes two seconds to probe both servers because the tasks run simultaneously.
65 | 
66 | While it sounds like the more threads you throw at a queue the faster your tasks will complete, keep in mind that this is not always true. You need to ensure your system can handle the resources being provisioned, especially if it is already busy with other work inside and outside your script.
67 | 
68 | ```python
69 | # Define number of threads
70 | for r in range(2):
71 |     t = threading.Thread(target=worker)
72 |     t.daemon = True
73 |     t.start()
74 | ```
75 | 
76 | ## Skeleton Script
77 | Let's put the above narrative into functional code. As you can see, Python makes it very easy to build a basic structure for a multithreaded process.
78 | 
79 | **Let's summarize:**
80 | 
81 | * You create a worker function with your set of instructions
82 | * You define the number of threads you want to provision
83 | * You send the job requests to the queue based on the tasks you want to run
84 | * You wait for the jobs to finish and end your script
85 | 
86 | Take the time to increase the number of threads and tasks and see how it changes the output. I also added a sleep to the worker function so you can simulate the time it takes for one task to complete.
87 | 
88 | ![Alt text](https://github.com/gh0x0st/python3_multithreading/blob/master/Screenshots/threader.png?raw=true "Default Threader")
89 | 
90 | ```python
91 | #!/usr/bin/python3
92 | 
93 | import threading # https://docs.python.org/3/library/threading.html
94 | import queue # https://docs.python.org/3/library/queue.html
95 | import time # https://docs.python.org/3/library/time.html
96 | 
97 | 
98 | def worker():
99 |     while True:
100 |         item = q.get()
101 |         with thread_lock:
102 |             print(f'Working on {item}')
103 |             print(f'Finished {item}')
104 |         time.sleep(1)
105 |         q.task_done()
106 | 
107 | 
108 | # Define a thread lock
109 | thread_lock = threading.Lock()
110 | 
111 | # Create our queue
112 | q = queue.Queue()
113 | 
114 | # Define number of threads
115 | for r in range(2):
116 |     t = threading.Thread(target=worker)
117 |     t.daemon = True
118 |     t.start()
119 | 
120 | # Start timer before sending tasks to the queue
121 | start_time = time.time()
122 | 
123 | print(f"Creating a task request for each item in the given range\n")
124 | 
125 | # send ten task requests to the worker
126 | for item in range(10):
127 |     q.put(item)
128 | 
129 | # block until all tasks are done
130 | q.join()
131 | 
132 | print(f"All workers completed their tasks after {round(time.time() - start_time, 2)} seconds")
133 | 
134 | ```
135 | 
136 | ## Penetration Testing Examples
137 | Once you learn how to create multithreaded operations in Python, your imagination is the limit. While not everything needs the extra acceleration, you may find it helpful from time to time. Here are two ways you can use multithreading to accelerate penetration testing tasks: ping sweeps and port scanning.
138 | 
139 | ### Ping Sweep
140 | At one point or another during a pen test, you're going to need to discover live hosts on a network, and one of the more efficient ways is to simply send out ICMP probes and see who responds back.
141 | 
142 | When we're talking about a /24 network, this won't be too difficult. However, when you start talking about a /16 or even a /8, you're going to be running against the clock as we mentioned previously.
143 | 
144 | To incorporate multithreading in this example, we're going to create tasks by filling the queue with a list of IP addresses, and each thread will ping one IP from the queue at a time.
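One thing to note before running the script below: the ping arguments (`-c 1`) and the `1 received` output check assume a Linux-style ping. If you're working from another operating system, a rough, hypothetical variant of the send_ping function might look something like this (the return code check is only a loose heuristic and is not part of the original script):

```python
# Hypothetical cross-platform variant of send_ping (not used in the script below)
import platform
import subprocess


def send_ping(target):
    # Windows ping takes '-n' for the probe count; Linux and macOS take '-c'
    count_flag = '-n' if platform.system().lower() == 'windows' else '-c'
    result = subprocess.run(['ping', count_flag, '1', str(target)],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    # a zero exit code roughly means "got a reply", though Windows can return
    # zero for some unreachable responses, so treat this as a heuristic only
    return result.returncode == 0
```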
145 | 
146 | ```python
147 | #!/usr/bin/python3
148 | 
149 | import threading # https://docs.python.org/3/library/threading.html
150 | import queue # https://docs.python.org/3/library/queue.html
151 | import ipaddress # https://docs.python.org/3/library/ipaddress.html
152 | import subprocess # https://docs.python.org/3/library/subprocess.html
153 | import time # https://docs.python.org/3/library/time.html
154 | 
155 | 
156 | def worker():
157 |     while True:
158 |         target = q.get()
159 |         send_ping(target)
160 |         q.task_done()
161 | 
162 | 
163 | def send_ping(target):
164 |     icmp = subprocess.Popen(['ping', '-c', '1', str(target)], stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
165 |     with thread_lock:
166 |         if "1 received" in icmp[0].decode('utf-8'):
167 |             print(f"[*] {target} is UP")
168 |         else:
169 |             print(f"[*] {target} is DOWN")
170 | 
171 | 
172 | # Define a print lock
173 | thread_lock = threading.Lock()
174 | 
175 | # Create our queue
176 | q = queue.Queue()
177 | 
178 | # Define number of threads
179 | for r in range(100):
180 |     t = threading.Thread(target=worker)
181 |     t.daemon = True
182 |     t.start()
183 | 
184 | # Start timer before sending tasks to the queue
185 | start_time = time.time()
186 | 
187 | # Network to scan
188 | cidr_network = '192.168.74.0/24'
189 | all_hosts = list(ipaddress.ip_network(cidr_network).hosts())
190 | 
191 | print(f"Creating a task request for each host in {cidr_network}\n")
192 | 
193 | # send a task request for each host to the worker
194 | for item in all_hosts:
195 |     q.put(item)
196 | 
197 | # block until all tasks are done
198 | q.join()
199 | 
200 | print(f"\nAll workers completed their tasks after {round(time.time() - start_time, 2)} seconds")
201 | 
202 | ```
203 | 
204 | ### Port Scanner
205 | Our previous example showed how we can speed up a ping sweep by filling our queue with IP addresses to target. Here, we'll use a slightly different approach by filling our queue with the ports we want to scan against a single target.
206 | 
207 | Keep in mind that this socket connect method attempts a complete TCP three-way handshake, which is where multithreading can save us some time. However, running too many threads at a time could cause performance issues (missed open ports) or even trip a SIEM or IPS, so find a happy balance with what you're trying to accomplish.
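If you do need to slow things down, one simple knob is to start fewer threads and add a short pause inside the worker. The snippet below is only a sketch of how you might tweak the full script that follows, with arbitrary example numbers:

```python
# Sketch only: a gentler pace for the port scanner below (example numbers)
import threading
import time

THREAD_COUNT = 20     # instead of 100 threads
PAUSE_SECONDS = 0.05  # brief pause between connection attempts


def worker():
    while True:
        port = q.get()
        scan_port(target, port)     # scan_port, target and q come from the full script below
        time.sleep(PAUSE_SECONDS)   # spread the connection attempts out
        q.task_done()


for r in range(THREAD_COUNT):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()
```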
208 | 
209 | ```python
210 | #!/usr/bin/python3
211 | 
212 | import threading # https://docs.python.org/3/library/threading.html
213 | import queue # https://docs.python.org/3/library/queue.html
214 | import time # https://docs.python.org/3/library/time.html
215 | import socket # https://docs.python.org/3/library/socket.html
216 | 
217 | 
218 | def worker():
219 |     while True:
220 |         port = q.get()
221 |         scan_port(target, port)
222 |         q.task_done()
223 | 
224 | 
225 | def scan_port(target, port):
226 |     try:
227 |         # create stream socket with a one second timeout and attempt a connection
228 |         s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
229 |         s.settimeout(1)
230 |         s.connect((target, int(port)))
231 | 
232 |         # disallow further sends and receives
233 |         s.shutdown(socket.SHUT_RDWR)
234 |         with thread_lock:
235 |             print(f'[+] Port {port} on {target} is OPEN')
236 |     except OSError: # a closed port refuses the connection and a filtered port times out
237 |         pass
238 |     finally:
239 |         s.close()
240 | 
241 | # Device to scan
242 | target = '192.168.74.131'
243 | 
244 | # Define a print lock
245 | thread_lock = threading.Lock()
246 | 
247 | # Create our queue
248 | q = queue.Queue()
249 | 
250 | # Define number of threads
251 | for r in range(100):
252 |     t = threading.Thread(target=worker)
253 |     t.daemon = True
254 |     t.start()
255 | 
256 | # Start timer before sending tasks to the queue
257 | start_time = time.time()
258 | 
259 | print('Creating a task request for each port\n')
260 | 
261 | # Create a task request for each possible port to the worker
262 | for port in range(1, 65536):
263 |     q.put(port)
264 | 
265 | # block until all tasks are done
266 | q.join()
267 | 
268 | print(f"\nAll workers completed their tasks after {round(time.time() - start_time, 2)} seconds")
269 | 
270 | ```
271 | 
272 | What we went through barely scratches the surface; there are many other techniques available and far more in-depth information on what we summarized here. However, this is how I was able to grasp the concepts, and I hope you found it useful as well.
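As one example of those other techniques (not used anywhere in this repository), the standard library's concurrent.futures module can manage the threads and the task hand-off for you. A minimal sketch of the same port scanning idea, assuming the same placeholder target as above, might look like this:

```python
#!/usr/bin/python3

# Minimal sketch using concurrent.futures instead of threading + queue;
# the executor manages the worker threads and the task hand-off for us
from concurrent.futures import ThreadPoolExecutor
import socket

target = '192.168.74.131'  # same placeholder target as the examples above


def scan_port(port):
    try:
        # create_connection performs the TCP handshake and enforces the timeout
        with socket.create_connection((target, port), timeout=1):
            # a single print call per result keeps this simple; for multi-line
            # output you would still want a lock as shown earlier
            print(f'[+] Port {port} on {target} is OPEN')
    except OSError:
        # closed or filtered ports refuse the connection or time out
        pass


with ThreadPoolExecutor(max_workers=100) as executor:
    executor.map(scan_port, range(1, 65536))
```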
273 | 274 | ## Resources 275 | * https://docs.python.org/3/library/threading.html 276 | * https://docs.python.org/3/library/queue.html 277 | * https://docs.python.org/3/library/socket.html 278 | -------------------------------------------------------------------------------- /Screenshots/default_threader.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gh0x0st/python3_multithreading/5f805d0dbf2392af99164fc2e6c4631fc57de5bd/Screenshots/default_threader.png -------------------------------------------------------------------------------- /Screenshots/threader.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gh0x0st/python3_multithreading/5f805d0dbf2392af99164fc2e6c4631fc57de5bd/Screenshots/threader.png -------------------------------------------------------------------------------- /multi_ping.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | 3 | import threading # https://docs.python.org/3/library/threading.html 4 | import queue # https://docs.python.org/3/library/queue.html 5 | import ipaddress # https://docs.python.org/3/library/ipaddress.html 6 | import subprocess # https://docs.python.org/3/library/subprocess.html 7 | import time # https://docs.python.org/3/library/time.html 8 | 9 | 10 | def worker(): 11 | while True: 12 | target = q.get() 13 | send_ping(target) 14 | q.task_done() 15 | 16 | 17 | def send_ping(target): 18 | icmp = subprocess.Popen(['ping', '-c', '1', str(target)], stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate() 19 | with thread_lock: 20 | if "1 received" in icmp[0].decode('utf-8'): 21 | print(f"[*] {target} is UP") 22 | else: 23 | print(f"[*] {target} is DOWN") 24 | 25 | 26 | # Define a print lock 27 | thread_lock = threading.Lock() 28 | 29 | # Create our queue 30 | q = queue.Queue() 31 | 32 | # Define number of threads 33 | for r in range(100): 34 | t = threading.Thread(target=worker) 35 | t.daemon = True 36 | t.start() 37 | 38 | # Start timer before sending tasks to the queue 39 | start_time = time.time() 40 | 41 | # Network to scan 42 | cidr_network = '192.168.74.0/24' 43 | all_hosts = list(ipaddress.ip_network(cidr_network).hosts()) 44 | 45 | print(f"Creating a task request for each host in {cidr_network}\n") 46 | 47 | # send ten task requests to the worker 48 | for item in all_hosts: 49 | q.put(item) 50 | 51 | # block until all tasks are done 52 | q.join() 53 | 54 | print(f"\nAll workers completed their tasks after {round(time.time() - start_time, 2)} seconds") 55 | -------------------------------------------------------------------------------- /multi_port.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | 3 | import threading # https://docs.python.org/3/library/threading.html 4 | import queue # https://docs.python.org/3/library/queue.html 5 | import time # https://docs.python.org/3/library/time.html 6 | import socket # https://docs.python.org/3/library/socket.html 7 | 8 | 9 | def worker(): 10 | while True: 11 | port = q.get() 12 | scan_port(target, port) 13 | q.task_done() 14 | 15 | 16 | def scan_port(target, port): 17 | try: 18 | # create stream socket with a one second timeout and attempt a connection 19 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 20 | s.settimeout(1) 21 | s.connect((target, int(port))) 22 | 23 | # disallow further sends and receives 24 | 
        s.shutdown(socket.SHUT_RDWR)
25 |         with thread_lock:
26 |             print(f'[+] Port {port} on {target} is OPEN')
27 |     except OSError: # a closed port refuses the connection and a filtered port times out
28 |         pass
29 |     finally:
30 |         s.close()
31 | 
32 | # Device to scan
33 | target = '192.168.74.131'
34 | 
35 | # Define a print lock
36 | thread_lock = threading.Lock()
37 | 
38 | # Create our queue
39 | q = queue.Queue()
40 | 
41 | # Define number of threads
42 | for r in range(100):
43 |     t = threading.Thread(target=worker)
44 |     t.daemon = True
45 |     t.start()
46 | 
47 | # Start timer before sending tasks to the queue
48 | start_time = time.time()
49 | 
50 | print('Creating a task request for each port\n')
51 | 
52 | # Create a task request for each possible port to the worker
53 | for port in range(1, 65536):
54 |     q.put(port)
55 | 
56 | # block until all tasks are done
57 | q.join()
58 | 
59 | print(f"\nAll workers completed their tasks after {round(time.time() - start_time, 2)} seconds")
60 | 
--------------------------------------------------------------------------------
/threading.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | 
3 | import threading # https://docs.python.org/3/library/threading.html
4 | import queue # https://docs.python.org/3/library/queue.html
5 | import time # https://docs.python.org/3/library/time.html
6 | 
7 | 
8 | def worker():
9 |     while True:
10 |         item = q.get()
11 |         with thread_lock:
12 |             print(f'Working on {item}')
13 |             print(f'Finished {item}')
14 |         time.sleep(1)
15 |         q.task_done()
16 | 
17 | 
18 | # Define a thread lock
19 | thread_lock = threading.Lock()
20 | 
21 | # Create our queue
22 | q = queue.Queue()
23 | 
24 | # Define number of threads
25 | for r in range(2):
26 |     t = threading.Thread(target=worker)
27 |     t.daemon = True
28 |     t.start()
29 | 
30 | # Start timer before sending tasks to the queue
31 | start_time = time.time()
32 | 
33 | print(f"Creating a task request for each item in the given range\n")
34 | 
35 | # send ten task requests to the worker
36 | for item in range(10):
37 |     q.put(item)
38 | 
39 | # block until all tasks are done
40 | q.join()
41 | 
42 | print(f"All workers completed their tasks after {round(time.time() - start_time, 2)} seconds")
43 | 
--------------------------------------------------------------------------------