├── Generate_Report.py
├── LICENSE
├── README.md
├── Web-SurvivalScan.py
├── img
│   ├── HTML-Out.png
│   ├── Run.png
│   ├── TXT.png
│   ├── readme.md
│   └── 资产列表.png
└── requirements.txt
/Generate_Report.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
# coding=utf-8
################
#  AabyssZG   #
################

from termcolor import cprint

html = '''
... [the ~160-line HTML report template was elided in extraction; the page is
titled "The report" and the body contains a {{}} placeholder that
generaterReport() below replaces with the JSON report data] ...
'''

def generaterReport():
    global html
    data = ""
    # Read the scan results collected by Web-SurvivalScan.py
    with open(".data/report.json", encoding="utf-8", mode="r") as file:
        data = file.read()
    # Inject the JSON data into the {{}} placeholder of the template
    html = html.replace("{{}}", data)
    with open("report.html", encoding="utf-8", mode="w") as file:
        file.write(html)
    cprint("[+][+][+] Surviving targets have been rendered to HTML and exported to report.html", "red")

if __name__ == '__main__':
    generaterReport()

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2023 曾哥

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Web-SurvivalScan

**By 曾哥(AabyssZG) && jingyuexing**


## ✈️ 1. Overview

In day-to-day penetration testing work, engagements come up all the time, and web penetration testing is usually the focus of, or the entry point into, such a project.

Once you receive formal authorization for a project, you are typically handed a list of IP assets and the corresponding web asset addresses, and at that point you need to verify quickly which of them are alive.

![Asset list](./img/资产列表.png)

As shown above, this project provided more than ten thousand IPs along with their corresponding web asset addresses, so asset liveness had to be verified quickly.

After searching around online without finding a tool that worked well, I wrote this small web asset survival checker: Web-SurvivalScan.

**If this tool helps you, a Star would be much appreciated~ haha**


## 📝 2. TODO

* [x] Fixed the bug where an empty page title caused an error
* [x] Support requesting a fixed path in batch (extremely effective for verifying simple vulnerabilities)
* [x] Show page titles in the exported HTML view, making asset information easier to confirm
* [x] Support validating the HTTP proxy to confirm it is usable
* [x] Support proxying all traffic over HTTP/HTTPS, with optional HTTP authentication
* [x] Automatically recognize `:443` and convert it to `https://`
* [x] Read TXT files in multiple encodings (both ANSI and UTF-8 are supported; see the sketch after this list)
* [x] Export an HTML file for convenient visual triage of assets
* [x] Multi-threaded scanning; 10,000 web assets finish in roughly 20 minutes in practice
* [x] Use a random User-Agent header for survival checks
* [x] Handle target SSL certificate issues (for self-signed certificates, just switch to `http://`)
* [x] Smart target address recognition (`example.com`, `http://example.com:8080`, and `http://example.com` all work without errors)

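A minimal sketch of the encoding fallback mentioned above, mirroring the `getTask()` generator in `Web-SurvivalScan.py` (the helper name `read_targets` is ours, introduced only for illustration):

```python
def read_targets(filename: str):
    """Read one target per line, tolerating ANSI or UTF-8 encoded TXT files."""
    try:
        # Try the platform default encoding first (e.g. ANSI/GBK on Chinese Windows)
        with open(filename, mode="r") as file:
            return [line.strip() for line in file]
    except UnicodeDecodeError:
        # Fall back to re-reading the whole file as UTF-8
        with open(filename, mode="r", encoding="utf-8") as file:
            return [line.strip() for line in file]
```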

## 🚨 3. Install the Python Dependencies

```
pip3 install -r requirements.txt
```


## 🐉 4. Usage

First, copy the target web assets into a TXT file in bulk and place that TXT in the same directory as this script, as shown below:

![TXT](./img/TXT.png)

**Note: this script recognizes target addresses intelligently (`example.com`, `http://example.com:8080`, and `http://example.com` all work without errors); a sketch of the normalization follows.**

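A minimal sketch of how each line of the TXT is normalized, mirroring the logic in `main()` of `Web-SurvivalScan.py` (the function name `normalize_url` is ours, introduced only for illustration):

```python
def normalize_url(url: str) -> str:
    # ":443" with no explicit scheme implies HTTPS: drop the port, add the scheme
    if ':443' in url and '://' not in url:
        url = "https://" + url.replace(":443", "")
    # Anything else without a scheme defaults to HTTP
    elif '://' not in url:
        url = f"http://{url}"
    # Ensure a trailing slash so a fixed path can be appended safely
    if not url.endswith("/"):
        url += "/"
    return url

assert normalize_url("example.com") == "http://example.com/"
assert normalize_url("example.com:443") == "https://example.com/"
assert normalize_url("http://example.com:8080") == "http://example.com:8080/"
```
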
```
# python3 Web-SurvivalScan.py

╦ ╦┌─┐┌┐
║║║├┤ ├┴┐
╚╩╝└─┘└─┘
╔═╗┬ ┬┬─┐┬ ┬┬┬ ┬┌─┐┬ ╔═╗┌─┐┌─┐┌┐┌
╚═╗│ │├┬┘└┐┌┘│└┐┌┘├─┤│ ╚═╗│ ├─┤│││
╚═╝└─┘┴└─ └┘ ┴ └┘ ┴ ┴┴─┘╚═╝└─┘┴ ┴┘└┘
Version: 1.11
Author: 曾哥(@AabyssZG) && jingyuexing
Whoami: https://github.com/AabyssZG

Enter the target TXT file name
FileName >>> [target TXT file name]
Enter the path to request (press Enter for none)
DirName >>> [path to request in batch]
Enter the proxy IP and port (press Enter for none)
Proxy >>> [HTTP auth user:HTTP auth password@proxy IP:port]
Proxy >>> [proxy IP:port (when there is no HTTP auth)]
```
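
The proxy input is passed straight into a `requests` proxies dict, as in `main()` of `Web-SurvivalScan.py`; a minimal sketch of how either input format is used (the proxy address `127.0.0.1:8080` is a placeholder value):

```python
import requests

# Either "user:password@host:port" (with HTTP auth) or plain "host:port"
proxy_text = "127.0.0.1:8080"  # example value, replace with your own proxy
proxies = {
    "http":  f"http://{proxy_text}/",
    "https": f"http://{proxy_text}/",  # HTTPS traffic also goes through the HTTP proxy
}
# verify=False mirrors the script's handling of self-signed certificates
r = requests.get("https://www.baidu.com/", proxies=proxies, timeout=10, verify=False)
print(r.status_code)  # 200 means the proxy is usable
```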

![Run](./img/Run.png)

When the run finishes, you get the exported result files:

- `output.txt`: web assets that passed the survival check (status code 200 or 403)
- `outerror.txt`: web assets that returned other status codes, handy for later checking omissions and hunting for other weak points
- `.data/report.json`: the run data for every asset, exported as JSON for easy processing (a sample record is shown below)
- `report.html`: all assets exported as a visual HTML report for convenient triage
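
A representative record from `.data/report.json`, matching the fields assembled in `collectionReport()` of `Web-SurvivalScan.py`; the `status` values the script emits are `servival`, `deaed`, and `reject` (spellings as in the code), and the URL and title here are placeholder values:

```json
[
  {
    "url": "http://example.com/",
    "status": "servival",
    "statusCode": 200,
    "title": "Example Domain"
  }
]
```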

![HTML-Out](./img/HTML-Out.png)


## 5. Thanks to All the Masters

### Stargazers

[](https://github.com/AabyssZG/Web-SurvivalScan/stargazers)


### Forkers

[](https://github.com/AabyssZG/Web-SurvivalScan/network/members)


### Star History

[](https://star-history.com/#AabyssZG/Web-SurvivalScan&Date)

--------------------------------------------------------------------------------
/Web-SurvivalScan.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
# coding=utf-8
################
#  AabyssZG   #
################

import _thread
from enum import Enum
import os
import time
from bs4 import BeautifulSoup

import Generate_Report

import requests, sys, random
from typing import Optional, Tuple
from termcolor import cprint
from requests.compat import json
import requests.packages.urllib3
requests.packages.urllib3.disable_warnings()

class EServival(Enum):
    REJECT = -1
    SURVIVE = 1
    DIED = 0

reportData = []

# Pool of User-Agent strings; survive() picks one at random per request
ua = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.17 Safari/537.36",
    "Mozilla/5.0 (X11; NetBSD) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.20.25 (KHTML, like Gecko) Version/5.0.4 Safari/533.20.27",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) Gecko/20130406 Firefox/23.0",
    "Opera/9.80 (Windows NT 5.1; U; zh-sg) Presto/2.9.181 Version/12.00"]

def logo():
    logo0 = r'''
╦ ╦┌─┐┌┐
║║║├┤ ├┴┐
╚╩╝└─┘└─┘
╔═╗┬ ┬┬─┐┬ ┬┬┬ ┬┌─┐┬ ╔═╗┌─┐┌─┐┌┐┌
╚═╗│ │├┬┘└┐┌┘│└┐┌┘├─┤│ ╚═╗│ ├─┤│││
╚═╝└─┘┴└─ └┘ ┴ └┘ ┴ ┴┴─┘╚═╝└─┘┴ ┴┘└┘
Version: 1.11
Author: 曾哥(@AabyssZG) && jingyuexing
Whoami: https://github.com/AabyssZG
'''
    print(logo0)

def file_init():
    # Create/clear the TXT for live targets
    f1 = open("output.txt", "wb+")
    f1.close()
    # Create/clear the TXT for other/error targets
    f2 = open("outerror.txt", "wb+")
    f2.close()
    if not os.path.exists(".data"):
        os.mkdir(".data")
    report = open(".data/report.json", "w")
    report.close()

def scanLogger(result: Tuple[EServival, Optional[int], str, int, str]):
    (status, code, url, length, title) = result
    if status == EServival.SURVIVE:
        cprint(f"[+] Status code: {code} Live URL: {url} Page length: {length} Page title: {title}", "red")
    if status == EServival.DIED:
        cprint(f"[-] Status code: {code} Unreachable URL: {url}", "yellow")
    if status == EServival.REJECT:
        cprint(f"[-] Target {url} actively refused the request, skipping!", "magenta")

    if status == EServival.SURVIVE:
        fileName = "output.txt"
    elif status == EServival.DIED:
        fileName = "outerror.txt"
    if status == EServival.SURVIVE or status == EServival.DIED:
        with open(file=fileName, mode="a") as file4:
            file4.write(f"[{code}] {url}\n")
    collectionReport(result)

def survive(url: str, proxies: dict):
    try:
        header = {"User-Agent": random.choice(ua)}
        requests.packages.urllib3.disable_warnings()
        # 10-second timeout; certificate verification disabled for self-signed targets
        r = requests.get(url=url, headers=header, proxies=proxies, timeout=10, verify=False)
        soup = BeautifulSoup(r.content, 'html.parser')
        if soup.title == None:
            title = "Null"
        else:
            title = str(soup.title.string)
    except Exception:
        # Connection refused/timed out; scanLogger prints the REJECT message
        title = "error"
        return (EServival.REJECT, 0, url, 0, title)
    if r.status_code == 200 or r.status_code == 403:
        return (EServival.SURVIVE, r.status_code, url, len(r.content), title)
    else:
        title = "error"
        return (EServival.DIED, r.status_code, url, 0, title)

def collectionReport(data):
    global reportData
    (status, statusCode, url, length, title) = data
    state = ""
    # Status string as stored in report.json and consumed by the HTML template
    if status == EServival.DIED:
        state = "deaed"
        titlel = ""
    elif status == EServival.REJECT:
        state = "reject"
        titlel = ""
    elif status == EServival.SURVIVE:
        state = "servival"
        titlel = f"{title}"
    # Accumulate one record per target; dumped to .data/report.json at the end
    reportData.append({
        "url": url,
        "status": state,
        "statusCode": statusCode,
        "title": titlel
    })

def dumpReport():
    with open(".data/report.json", encoding="utf-8", mode="w") as file:
        file.write(json.dumps(reportData))

def getTask(filename=""):
    if filename != "":
        try:
            # Try the platform default encoding first (e.g. ANSI on Windows)
            with open(file=filename, mode="r") as file:
                for url in file:
                    yield url.strip()
        except Exception:
            # On a decode error, re-read the whole file as UTF-8
            with open(file=filename, mode="r", encoding='utf-8') as file:
                for url in file:
                    yield url.strip()

def end():
    count_out = len(open("output.txt", 'r').readlines())
    if count_out >= 1:
        print('\n')
        cprint(f"[+][+][+] Live targets were found in the target TXT and exported to output.txt, {count_out} records in total\n", "red")
    count_error = len(open("outerror.txt", 'r').readlines())
    if count_error >= 1:
        cprint(f"[+][-][-] Error targets were found in the target TXT and exported to outerror.txt, {count_error} records in total\n", "red")

def main():
    logo()
    file_init()
    # Collect the target TXT name, optional fixed path, and optional proxy
    txt_name = str(input("Enter the target TXT file name\nFileName >>> "))
    dir_name = str(input("Enter the path to request (press Enter for none)\nDirName >>> "))
    proxy_text = str(input("Enter the proxy IP and port (press Enter for none)\nProxy >>> "))
    if proxy_text:
        proxies = {
            "http": "http://%(proxy)s/" % {'proxy': proxy_text},
            "https": "http://%(proxy)s/" % {'proxy': proxy_text}
        }
        cprint("================ Checking proxy availability ================", "cyan")
        testurl = "https://www.baidu.com/"
        headers = {"User-Agent": "Mozilla/5.0"}  # request header
        try:
            requests.packages.urllib3.disable_warnings()
            # Send a test request through the proxy and check the status code
            res = requests.get(testurl, timeout=10, proxies=proxies, verify=False, headers=headers)
            print(res.status_code)
            if res.status_code == 200:
                print("GET www.baidu.com status code: " + str(res.status_code))
                cprint("[+] Proxy is usable, starting now!", "cyan")
        except KeyboardInterrupt:
            print("Process terminated manually with Ctrl + C")
            sys.exit()
        except Exception:
            cprint("[-] Proxy is unusable, please switch proxies!", "magenta")
            sys.exit()
    else:
        proxies = {}
    cprint("================ Reading the target TXT and batch-testing site survival ================", "cyan")
    # Read the target TXT
    for url in getTask(txt_name):
        # Normalize the target: ":443" implies https, a missing scheme implies http
        if (':443' in url) and ('://' not in url):
            url = url.replace(":443", "")
            url = f"https://{url}"
        elif '://' not in url:
            url = f"http://{url}"
        if str(url[-1]) != "/":
            url = url + "/"
        # Append the optional fixed path, avoiding a double slash
        if (dir_name != "") and (str(dir_name[0]) == "/"):
            url = url + dir_name[1:]
        else:
            url = url + dir_name
        cprint("[.] Checking target URL " + url, "cyan")
        try:
            # One lightweight thread per target, throttled to one launch per 1.5 s
            _thread.start_new_thread(lambda url: scanLogger(survive(url, proxies)), (url,))
            time.sleep(1.5)
        except KeyboardInterrupt:
            print("Process terminated manually with Ctrl + C")
            sys.exit()
    # Give the last threads time to finish before dumping the report
    time.sleep(3)
    dumpReport()
    end()
    Generate_Report.generaterReport()
    sys.exit()

if __name__ == '__main__':
    main()

--------------------------------------------------------------------------------
/img/HTML-Out.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AabyssZG/Web-SurvivalScan/db86ec6c65b8602fb360e50b130e9c82a9192082/img/HTML-Out.png
--------------------------------------------------------------------------------
/img/Run.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AabyssZG/Web-SurvivalScan/db86ec6c65b8602fb360e50b130e9c82a9192082/img/Run.png
--------------------------------------------------------------------------------
/img/TXT.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AabyssZG/Web-SurvivalScan/db86ec6c65b8602fb360e50b130e9c82a9192082/img/TXT.png
--------------------------------------------------------------------------------
/img/readme.md:
--------------------------------------------------------------------------------
Image storage directory

--------------------------------------------------------------------------------
/img/资产列表.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AabyssZG/Web-SurvivalScan/db86ec6c65b8602fb360e50b130e9c82a9192082/img/资产列表.png
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
termcolor==1.1.0
tqdm==4.62.3
requests==2.27.1
urllib3==1.26.4
aiofiles==22.1.0
httpx==0.23.3
beautifulsoup4==4.11.1

--------------------------------------------------------------------------------