├── .gitignore
├── README.md
└── device-pharmer.py

/.gitignore:
--------------------------------------------------------------------------------
*.txt
*.swp

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
device-pharmer
======

Concurrently open Shodan search results, a specified IP, an IP range, a domain, or a list of IPs from a text file, and print the status and title of each response page where applicable. Add the -u and -p options to attempt a login, first looking for a login form and, failing that, attempting HTTP Basic Auth.

Use -f SEARCHSTRING to look for a certain string in the HTML response in order to test whether authentication succeeded. All responding devices are logged to a file named after either the Shodan search term or the target IPs/domain, plus "_results.txt". One caveat with searching the response page's HTML: some form login pages return a JSON object after an authentication request rather than the post-login page's HTML source. In scenarios like this you can often determine success by just using -f "success".

The default timeout on requests is 15 seconds. Requests are sent in batches of 1000 concurrently, which can be adjusted using the -c option. Note that Shodan only allows the first page of results (100 hosts) if you are using their free API key. If you have their professional API key you can specify the number of search result pages to test with the -n NUMBER_OF_PAGES argument. By default only page 1 is checked.

Requirements:
-----
Python 2.7
* mechanize
* gevent
* BeautifulSoup
* shodan (if giving the -s option)

Modern Linux
* Tested on Kali 1.0.6

Shodan API Key (only if you are giving the -s SEARCHTERM argument)
* Give the script the -a YOUR_API_KEY argument OR
* Edit the API_KEY placeholder inside shodan_search() in device-pharmer.py to set it permanently, and feel free to offer a pull request after you do so you have it in your records; safe hands and all ;). Don't have an API key? Get one free easily [from shodan](http://www.shodanhq.com/account/register)... alternatively, explore your Google dorking skills before downloading some Shodan ones.


Usage
-----

Simplest usage:
``` shell
python device-pharmer.py -s 'dir-300' -a Wutc4c3T78gRIKeuLZesI8Mx2ddOiP4
```
Search Shodan for "dir-300" using the API key Wutc4c3T78gRIKeuLZesI8Mx2ddOiP4. Print the IP and the title of the response page, should it exist.

``` shell
python device-pharmer.py -s 'dd-wrt' -a Wutc4c3T78gRIKeuLZesI8Mx2ddOiP4 -u admin -p password -n 5 -f ">Advanced Routing<" --proxy 123.12.12.123:8080 --timeout 30
```
Search Shodan for "dd-wrt" using the given API key and attempt to log in to the results with the username "admin" and the password "password". Gather only the first 5 pages (500 hosts) of Shodan results and check whether the landing page's HTML contains the string ">Advanced Routing<". If the string is found, print and log "* MATCH *" along with the IP and the title of the page. Finally, use an HTTP proxy at 123.12.12.123:8080 for all requests and set the timeout to 30s.
``` shell
python device-pharmer.py -t 192.168.0-2.1-100 -c 100
```
Targeting 192.168.0-2.1-100 tells the script to concurrently open 192.168.0.1 through 192.168.0.100, 192.168.1.1 through 192.168.1.100, and 192.168.2.1 through 192.168.2.100 and gather the status and title of each response page. -c 100 limits concurrency to 100 pages at a time, so the script passes through 3 groups of 100 IPs each. Since the default timeout within the script is 15 seconds, this will take at most about 45 seconds to complete.

``` shell
python device-pharmer.py -t www.reddit.com/login -ssl -u sirsmit418 -p whoopwhoop -f 'tattoos'
```
Try logging into www.reddit.com/login specifically using HTTPS with the username sirsmit418 and password whoopwhoop. Look for the text "tattoos", correlating to a subscribed subreddit, in the response HTML to check for authentication success.

``` shell
python device-pharmer.py --ipfile list_of_ips.txt
```
Test each IP from a text file of newline-separated IPs.


### All options:

-a APIKEY: use this API key when searching Shodan (only necessary in conjunction with -s)

-c CONCURRENT: send a specified number of requests concurrently; default=1000

-f FINDTERMS: search for the argument string in the HTML of each response; upon a match, print it and log it

--ipfile IPTEXTFILE: test each IP in a list of newline-separated IPs from the specified text file

-n NUMPAGES: go through the specified number of Shodan search result pages collecting IPs; 100 results per page

-p PASSWORD: attempt to login using this password

--proxy PROXY: use this proxy for making requests; to log in to the proxy with HTTP Basic Auth use the form user:pass@123.12.12.123:8080

-s SEARCHTERMS: search Shodan for term(s) and print each IP address, whether the page returned a response, and if so the title of the returned page (follows redirects)

-ssl: specifically send HTTPS requests to all targets

-t IPADDRESS/DOMAIN/IPRANGE: try hitting this domain, IP, or IP range instead of using Shodan to populate the targets list and return response information

--timeout TIMEOUT: set the timeout for each URI in seconds; default is 15

-u USERNAME: attempt to login using this username


License
-------

Copyright (c) 2014, Dan McInerney
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of Dan McInerney nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


***
* [danmcinerney.org](http://danmcinerney.org)
* [![Flattr this](http://api.flattr.com/button/flattr-badge-large.png)](https://flattr.com/submit/auto?user_id=DanMcInerney&url=https://github.com/DanMcInerney/device-pharmer&title=device-pharmer&language=&tags=github&category=software)

--------------------------------------------------------------------------------
/device-pharmer.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python2
# -*- coding: utf-8 -*-

'''
Concurrently open either Shodan search results, a specified IP range, a
single IP, or a domain and print the status and title of the page.
Add the -u and -p options to attempt to login to the page.
Use -f to look for a certain string in the html response to check if
authentication succeeded. Will attempt to login first via a login form
and, should that fail or not exist, via HTTP Basic Auth.

Requires: Linux
          Python 2.7
          gevent
          mechanize
          BeautifulSoup
          shodan

__author__ = Dan McInerney
danmcinerney.org
Twitter: danhmcinerney
Gmail: danhmcinerney
'''

# This must be one of the first imports or else we get a threading error on completion
from gevent import monkey
monkey.patch_all()

# Overzealously prevent mechanize's gzip warning
import warnings
warnings.filterwarnings("ignore")

import gevent
import argparse
import mechanize
from bs4 import BeautifulSoup
import cookielib
from socket import setdefaulttimeout
import re
from sys import exit

# Including lxml in case someone wants to use it instead of BeautifulSoup
#import lxml
#from lxml.html import fromstring

def parse_args():
    """Create the arguments"""
    parser = argparse.ArgumentParser(
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog='-----------------------------------------------------------------------------------\n'
               'Examples:\n\n'
               ' -Search Shodan for "dd-wrt" using the specified API key and print the title of each\n'
               '  result\'s response page:\n\n'
               '      python device-pharmer.py -s "dd-wrt" -a Wutc4c3T78gRIKeuLZesI8Mx2ddOiP4\n\n'
               ' -Open a range of IP addresses, print the title of each response page and\n'
               '  make 100 requests concurrently:\n\n'
               '      python device-pharmer.py -t 192.168.0-5.1-254 -c 100\n\n'
               ' -Search Shodan for "dd-wrt" and attempt to login with "root" using password "admin",\n'
               '  then check the response page\'s html for the string ">Advanced Routing<" and, last,\n'
               '  use a proxy of 123.12.12.123 on port 8080:\n\n'
               '      python device-pharmer.py -s "dd-wrt" -u root -p admin -f ">Advanced Routing<" --proxy 123.12.12.123:8080\n\n'
               ' -Open www.reddit.com specifically with https:// and attempt to login using "sirsmit418"\n'
               '  with the password "whoopwhoop":\n\n'
               '      python device-pharmer.py -t www.reddit.com -ssl -u sirsmit418 -p whoopwhoop')

    parser.add_argument("-a", "--apikey", help="Your api key")
    parser.add_argument("-c", "--concurrent", default='1000', help="Enter number of concurrent requests to make; default = 1000")
    parser.add_argument("-f", "--findstring", help="Search html for a string; can be used to determine if a login was successful")
    parser.add_argument("-n", "--numpages", default='1', help="Number of pages deep to go in Shodan results with 100 results per page; default is 1")
    parser.add_argument("-p", "--password", help="Enter password after this argument")
    parser.add_argument("--proxy", help="Enter a proxy to use, e.g. --proxy user:password@123.12.12.123:8080 or just a simple IP:port will work too")
    parser.add_argument("-s", "--shodansearch", help="Your search terms")
    parser.add_argument("-ssl", help="Test all the results using https:// rather than default http://", action="store_true")
    parser.add_argument("-t", "--targets", help="Enter an IP, a domain, or a range of IPs to fetch (e.g. 192.168.0-5.1-254 will "
                        "fetch 192.168.0.1 to 192.168.5.254; if using a domain include the subdomain if it exists: sub.domain.com or domain.com)")
    parser.add_argument("--timeout", default='15', help="Set the timeout for each URI request in seconds; default is 15")
    parser.add_argument("-u", "--username", help="Enter username after this argument")
    parser.add_argument("--ipfile", help="Test IPs from a text file (IPs should be separated by newlines)")
    return parser.parse_args()
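
# A quick reference for the shodan library calls used below, as exercised by
# this script (see the shodan package docs for the full API):
# shodan.Shodan(api_key) builds a client, and client.search(term, page=n)
# returns a dict whose 'total' key holds the overall result count and whose
# 'matches' list holds per-host dicts with 'ip_str' and 'port' keys.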
def shodan_search(search, apikey, pages):
    import shodan

    if apikey:
        API_KEY = apikey
    else:
        API_KEY = 'ENTER YOUR API KEY HERE AND KEEP THE QUOTES'

    api = shodan.Shodan(API_KEY)

    ips_found = []

    # Get IPs from Shodan search results
    try:
        results = api.search(search, page=1)
        total_results = results['total']
        print '[+] Results: %d' % total_results
        print '[*] Page 1...'
        pages = max_pages(pages, total_results)
        for r in results['matches']:
            # Replace the following ports with port 80 since they'll virtually never have a web server running
            # ftp, ssh, telnet, smtp x2, netbios x3, smb
            if r['port'] in [21, 22, 23, 25, 26, 137, 138, 139, 445]:
                r['port'] = 80
            ips_found.append('%s:%s' % (r['ip_str'], r['port']))

        if pages > 1:
            for page in xrange(2, pages + 1):
                results = api.search(search, page=page)
                print '[*] Page %d...' % page
                for r in results['matches']:
                    # Apply the same port substitution as on page 1 so every
                    # target carries a usable web port
                    if r['port'] in [21, 22, 23, 25, 26, 137, 138, 139, 445]:
                        r['port'] = 80
                    ips_found.append('%s:%s' % (r['ip_str'], r['port']))

        return ips_found

    except Exception as e:
        print '[!] Shodan search error:', e

def get_ips_from_file(ipfile):
    ''' Read IPs from a file '''
    ips_found = []
    try:
        with open(ipfile) as ips:
            for line in ips:
                ips_found.append(line.strip())
    except IOError:
        exit('[!] Are you sure the file %s exists in this directory?' % ipfile)

    return ips_found

def max_pages(pages, total_results):
    ''' Cap the requested number of pages at the actual number of pages in
    the Shodan results. The alternative would be to check
    len(results['matches']) and stop when that is zero, but that would mean
    1 extra api lookup which would add some pointless seconds to the search '''
    # Ceiling division at 100 results per page
    total_pages = (total_results + 99) / 100
    if pages > total_pages:
        return total_pages
    return pages
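
# Illustrative arithmetic for the ceiling division above (an added example,
# not part of the original script): with 100 results per page,
#   250 results -> (250 + 99) / 100 == 3 pages (100 + 100 + 50)
#   200 results -> (200 + 99) / 100 == 2 pages (no trailing empty page)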
def browser_mechanize(proxy, ssl):
    ''' Start the headless browser '''
    br = mechanize.Browser()
    # Cookie jar
    cj = cookielib.LWPCookieJar()
    br.set_cookiejar(cj)
    # Browser options
    if proxy and ssl:
        print '[*] HTTPS proxies are unsupported, using the proxy specified anyway: %s' % proxy
        br.set_proxies({'http': proxy})
    elif proxy:
        print '[*] HTTP proxy being employed: %s' % proxy
        br.set_proxies({'http': proxy})
    br.set_handle_equiv(True)
    br.set_handle_gzip(True)
    br.set_handle_redirect(True)
    br.set_handle_referer(True)
    br.set_handle_robots(False)
    # Follows refresh 0 but doesn't hang on refresh > 0
    br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
    br.addheaders = [('User-agent', 'Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko')]
    return br

class Scraper():

    def __init__(self, args):
        self.user = args.username
        self.passwd = args.password
        self.findstring = args.findstring
        self.search = args.shodansearch
        self.ipfile = args.ipfile
        self.ssl = args.ssl
        if self.ssl:
            self.uri_prefix = 'https://'
        else:
            self.uri_prefix = 'http://'
        self.proxy = args.proxy
        self.targets = args.targets
        self.br = browser_mechanize(self.proxy, self.ssl)

    def run(self, target):
        target = self.uri_prefix + target
        try:
            resp, brtitle = self.req(target)
            title, match = self.html_parser(resp, brtitle)
            if match:
                mark = '+'
                label = match
            else:
                mark = '*'
                label = 'Title: '
            sublabel = title
        except Exception as e:
            mark = '-'
            label = 'Exception:'
            sublabel = str(e)

        self.final_print(mark, target, label, sublabel)

    def req(self, target):
        ''' Determine what type of auth to use, if any '''
        if self.user and self.passwd:
            # Attempts to login via text boxes;
            # failing that, tries basic auth;
            # failing that, tries no auth
            return self.resp_to_textboxes(target)
        return self.resp_no_auth(target)

    #############################################################################
    # Get response functions
    #############################################################################
    def resp_no_auth(self, target):
        ''' No username/password argument given '''
        no_auth_resp = self.br.open(target)
        try:
            brtitle = self.br.title()
        except Exception:
            brtitle = None
        return no_auth_resp, brtitle

    def resp_basic_auth(self, target):
        ''' When there are no login forms on the page but -u and -p are given '''
        self.br.add_password(target, self.user, self.passwd)
        basic_auth_resp = self.br.open(target)
        try:
            brtitle = self.br.title()
        except Exception:
            brtitle = None
        return basic_auth_resp, brtitle
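
    # Control-flow sketch of resp_to_textboxes() below (an added summary in
    # comments, not original code):
    #   1. open the page and look for a login form with exactly one text box
    #      and one password box
    #   2. if no such form exists or submitting it raises, retry with HTTP
    #      Basic Auth credentials
    #   3. if basic auth also raises, fall back to an unauthenticated request
    # A host is therefore still reported even when the credentials go unused.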
    def resp_to_textboxes(self, target):
        ''' Find the first form on the page that has exactly 1 text box and
        1 password box, then fill it out with the credentials the user provided.
        If no such form is found, try authenticating with HTTP Basic Auth and,
        if that also fails, try just getting a response. '''
        brtitle1 = None

        try:
            resp = self.br.open(target)
            try:
                brtitle1 = self.br.title()
            except Exception:
                brtitle1 = None
            forms = self.br.forms()
            self.br.form = self.find_password_form(forms)
            resp = self.fill_out_form()
            try:
                brtitle = self.br.title()
            except Exception:
                brtitle = None
        except Exception:
            # If logging in via a form failed, try basic auth
            try:
                resp, brtitle = self.resp_basic_auth(target)
            except Exception:
                # If basic auth failed as well, try no auth
                resp, brtitle = self.resp_no_auth(target)

        if brtitle is None and brtitle1:
            brtitle = brtitle1

        return resp, brtitle

    def find_password_form(self, forms):
        ''' Return the first form with exactly one text control and one
        password control; returns None when no form qualifies, in which case
        fill_out_form() raises and the basic auth fallback kicks in '''
        for f in forms:
            pw = 0
            text = 0
            for c in f.controls:
                if c.type == 'text':
                    text += 1
                if c.type == 'password':
                    pw += 1
            if pw == 1 and text == 1:
                return f

    def fill_out_form(self):
        ''' Find the first text and password controls and fill them out '''
        text_found = 0
        for c in self.br.form.controls:
            if c.type == 'text':
                # Only fill out the first text control box
                if text_found == 0:
                    c.value = self.user
                    text_found = 1
                continue
            if c.type == 'password':
                c.value = self.passwd
                break

        form_resp = self.br.submit()
        return form_resp
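
    # Illustrative example of the login-form heuristic above (an added note,
    # not original code): a form like
    #   <form action="/login"><input type="text" name="user">
    #                         <input type="password" name="pw"></form>
    # is selected and submitted, while a search form with only a text box or
    # a change-password form with two password boxes is passed over.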
    #############################################################################

    def html_parser(self, resp, brtitle):
        ''' Parse the html, look for a match with the user's -f argument,
        and find the page title '''
        html = resp.read()

        # Find match
        match = self.find_match(html)

        # Including lxml in case someone has more success with it.
        # My tests showed that BeautifulSoup encountered a few less encoding errors (~3% vs 5% from lxml)
        # root = fromstring(html)
        # find_title = root.xpath('//title')
        # try:
        #     title = find_title[0].text
        # except Exception as e:
        #     title = ''

        # Get title
        soup = BeautifulSoup(html)
        title = None
        try:
            title = soup.title.string
        except AttributeError:
            # No <title> tag; fall back to the browser-reported title
            if brtitle:
                title = brtitle
        except Exception as e:
            title = str(e)

        if brtitle and not title:
            title = brtitle

        return title, match

    def find_match(self, html):
        match = None
        if self.findstring:
            if self.findstring in html:
                match = '* MATCH * '
        return match

    def final_print(self, mark, target, label, sublabel):
        target = target.ljust(29)

        if self.search:
            name = self.search
        elif self.targets:
            name = self.targets
        elif self.ipfile:
            name = self.ipfile

        name = name.replace('/', '')

        try:
            results = '[%s] %s | %s %s' % (mark, target, label, sublabel)
            if mark == '*' or mark == '+':
                with open('%s_results.txt' % name, 'a+') as f:
                    f.write('%s\n' % results)
            print results
        except Exception as e:
            results = '[%s] %s | %s %s' % (mark, target, label, str(e))
            with open('%s_results.txt' % name, 'a+') as f:
                f.write('%s\n' % results)
            print results


#############################################################################
# IP range target handlers
# Taken from against.py by pigtails23 with minor modification
#############################################################################
def get_targets_from_args(targets):
    target_type = check_targets(targets)
    if target_type:
        if target_type in ('domain', 'ip'):
            return [targets]
        elif target_type == 'ip range':
            return ip_range(targets)

def check_targets(targets):
    ''' This could use improvement but works fine; it would be nice to have
    a good regex just for finding IP ranges '''
    if re.match('^[A-Za-z]', targets):  # starts with a letter
        return 'domain'
    elif targets.count('.') == 3 and '-' in targets:
        return 'ip range'
    #if re.match('(?=.*-)', targets):
    #    return 'ip range'
    elif re.match(r"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})$", targets):
        return 'ip'
    else:
        return None

def handle_ip_range(iprange):
    ''' Split each octet into (start, stop) bounds; stop is the upper bound
    plus 1 so it can feed range() directly '''
    parted = tuple(iprange.split('.'))
    rsa = range(4)
    rsb = range(4)
    for i in range(4):
        hyphen = parted[i].find('-')
        if hyphen != -1:
            rsa[i] = int(parted[i][:hyphen])
            rsb[i] = int(parted[i][1 + hyphen:]) + 1
        else:
            rsa[i] = int(parted[i])
            rsb[i] = int(parted[i]) + 1
    return rsa, rsb

def ip_range(iprange):
    rsa, rsb = handle_ip_range(iprange)
    ips = []
    for i in range(rsa[0], rsb[0]):
        for j in range(rsa[1], rsb[1]):
            for k in range(rsa[2], rsb[2]):
                for l in range(rsa[3], rsb[3]):
                    ips.append('%d.%d.%d.%d' % (i, j, k, l))
    return ips
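
# Illustrative expansion (an added example, not original code): both ends of
# each octet range are inclusive, so
#   >>> ip_range('10.0.0-1.1-2')
#   ['10.0.0.1', '10.0.0.2', '10.0.1.1', '10.0.1.2']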
#############################################################################

def input_check(args):
    ''' Check for multiple target inputs, or a lack of target inputs '''
    if not args.targets and not args.shodansearch and not args.ipfile:
        exit('[!] No targets found. Use the -s option to specify a search term for Shodan, the -t option to specify an IP, IP range, or domain, or the --ipfile option to read from a list of IPs in a text file')

    inputs = 0
    if args.targets:
        inputs += 1
    if args.shodansearch:
        inputs += 1
    if args.ipfile:
        inputs += 1
    if inputs > 1:
        exit('[!] Multiple target inputs specified, choose just one target input option: -t (IP, IP range, or domain), -s (Shodan search results), or --ipfile (IPs from a text file)')

def main(args):
    input_check(args)

    S = Scraper(args)

    if args.targets:
        targets = get_targets_from_args(args.targets)
    elif args.shodansearch:
        targets = shodan_search(args.shodansearch, args.apikey, int(args.numpages))
    elif args.ipfile:
        targets = get_ips_from_file(args.ipfile)

    if not targets:
        exit('[!] No valid targets')

    # Mechanize doesn't respect timeouts when it comes to reading/waiting for SSL info so this is necessary
    setdefaulttimeout(int(args.timeout))

    con = int(args.concurrent)

    # Fire off the requests in groups of args.concurrent (1000 by default)
    target_groups = [targets[x:x + con] for x in xrange(0, len(targets), con)]
    for chunk_targets in target_groups:
        jobs = [gevent.spawn(S.run, target) for target in chunk_targets]
        gevent.joinall(jobs)

if __name__ == "__main__":
    main(parse_args())
--------------------------------------------------------------------------------