├── .idea
│   ├── finger.iml
│   ├── inspectionProfiles
│   │   ├── Project_Default.xml
│   │   └── profiles_settings.xml
│   ├── misc.xml
│   ├── modules.xml
│   └── vcs.xml
├── README.md
├── SyntaxFinger.py
├── api
│   ├── __init__.py
│   ├── api_output.py
│   ├── fofa.py
│   ├── hunter.py
│   └── quake.py
├── config
│   ├── __init__.py
│   ├── color.py
│   ├── config.py
│   ├── data.py
│   ├── datatype.py
│   └── log.py
├── img
│   ├── image-20230921163103334.png
│   ├── image-20250312191119397.png
│   ├── image-20250312194141280.png
│   ├── image-20250312200636496.png
│   └── image-20250312200857418.png
├── lib
│   ├── B_CheckCDN.py
│   ├── C_ALL_GetCompany_ByDomain.py
│   ├── C_GetRecord_BySubDomain.py
│   ├── D_ALL_GetDomain_ByUrl.py
│   ├── ICPAttributable.py
│   ├── IpFactory.py
│   ├── __init__.py
│   ├── checkenv.py
│   ├── cmdline.py
│   ├── identify.py
│   ├── ip2Region.py
│   ├── ipAttributable.py
│   ├── options.py
│   ├── proxy.py
│   └── req.py
├── library
│   ├── cdn_asn_list.json
│   ├── cdn_cname_keywords.json
│   ├── cdn_header_keys.json
│   ├── cdn_ip_cidr.json
│   ├── data
│   │   ├── GeoLite2-ASN.mmdb
│   │   ├── README.MD
│   │   └── ip2region.db
│   ├── finger_old.json
│   └── nameservers_list.json
├── requirements.txt
└── target.txt
/README.md:
--------------------------------------------------------------------------------
# SyntaxFinger

**In the red-team attack chain, gaining the initial foothold is both the most basic and the most important step. In real attack-defense exercises, the sheer number of assets, the scarcity of red-team operators, and tight deadlines often make this phase underperform. Being one step faster here, getting into the intranet first, and scoring higher are critical for a red team: the quality and efficiency of foothold discovery directly determine the team's overall results. In essence it is about finding the vulnerable assets and the key target systems in a huge asset pool: collect more assets than the other teams and find the weak ones faster, and you come out ahead in this phase.**

- **Typical scenarios**
  - **Full asset list from a root domain**: usually you start with a root domain, pull the full URL asset list from an asset-mapping engine, identify common fingerprints, and then move on to exploitation.
  - **Ownership from a fingerprint**: you have a fingerprint but not the ICP registrant behind it, so you have to confirm ownership one asset at a time.
- **What this tool solves**
  - It standardizes the syntax differences of the three mainstream asset-mapping engines (fofa, quake, hunter) behind a unified query interface: enter a single root domain and quickly obtain every asset related to the target.
  - It implements several ICP reverse-lookup interfaces to fetch asset ownership in one step.

## 0x00 Feature Overview

**This tool performs no active port scanning. By aggregating the query syntax of the three asset-mapping engines, it quickly gathers every asset URL related to a target and runs one-click fingerprinting (common system fingerprints, ICP ownership, and more) to surface high-value and vulnerable assets.**

**✅ Multi-source aggregation for fast asset collection**

- **✅ Standardizes the syntax differences of the three mainstream asset-mapping engines (fofa, quake, hunter) behind a unified query interface: a single command searches across platforms and returns every asset related to the target.**
- **✅ Given only the company name plus a root domain or an IP**, it fetches the matching assets (url, ip, domain) in one step and stores them in a database.
- ✅ Accepts domain, IP, and IP-range input with no extra flags. (Note: IP ranges are limited to fofa; the other engines do support them, they just cost too many credits.)

**✅ Common system fingerprinting**

- ✅ Deduplicated URL scanning
- ✅ Bundled fingerprint libraries (EHole and others)
- ✅ Admin-panel and login-form detection
- ✅ ICP registration lookup for IP assets
- ✅ ICP registration lookup for URL assets
- ✅ Multi-angle CDN detection
- ✅ Web language detection

![](./img/image-20230921163103334.png)

## 0x01 Changelog

**2023.9.17 Released v1.0**

**2023.9.18 Fixed cases where fofa, quake, or hunter data retrieval failed for network reasons.**

**2023.9.19 Improved favicon retrieval.**

**2023.9.21 Improved fofa queries and fixed dirty domain data caused by cert-syntax queries.**

**2023.9.21 Fixed the status-code coloring in fingerprinting output.**

**2023.9.21 Refactored the API query logic, decoupling API results from the Excel export.**

**2023.9.21 Added Ctrl-C handling: when a query runs too long (API server or local network issues), you can manually skip to the next API query.**

![](./img/image-20250312200636496.png)

**2023.9.22 Added admin-panel and login-form detection.**

**2023.10.12 Improved the fingerprinting logic and added 0day/1day hints.**

**2023.10.12 Improved the fingerprint format.**

**2023.10.30 Added ownership lookup: fetch the ICP company name behind a url in one step via -vest.**

**2023.10.31 Fixed a fingerprint validation bug that caused header errors.**

**2023.10.31 Decoupled the code: asset mapping, fingerprinting, ownership lookup, and Excel export are now separate.**

**2023.10.31 Failed fingerprinting requests are now recorded and exported to Excel.**

**2023.10.31 Fixed a major fingerprinting bug: the day_type parameter broke url retrieval, which in turn broke fingerprinting.**

**2023.10.31 Changed the User-Agent to the Bing crawler.**

**2023.11.06 Added web language detection.**

**2023.11.13 Added multi-angle CDN detection (header, ASN, C-segment, CNAME).**

**2023.12.25 Fixed quake results containing only the host instead of the full url.**

**2024.5.22 Added 302 handling: urls answering with a 302 redirect are requested once more.**

**2024.5.29 Fixed "object of type 'NoneType' has no len()" when the Server header is empty.**

**2024.6.13 Added dynamic proxy support: -proxy 1 uses a proxy pool, 2 uses Burp.**

**2024.09 Development paused; no longer involved in exercise activities.**

**2025.3.12 Open-sourced.**

## 0x02 Usage

### 2.1 Prerequisites

```
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Then fill in the API keys of the asset-mapping engines in config/config.py.
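
The key names below are taken from the imports in `api/fofa.py`, `api/hunter.py`, and `api/quake.py` (`Fofa_email`, `Fofa_key`, `Fofa_Size`, `Hunter_token`, `QuakeKey`); the values are placeholders. This is a minimal sketch of the entries `config/config.py` expects, not the shipped file (which also defines request headers such as `head` and `user_agents`):

```python
# config/config.py (sketch; every value here is a placeholder you must replace)
Fofa_email = "you@example.com"  # fofa account email
Fofa_key = ""                   # fofa API key; the tool aborts with a hint when empty
Fofa_Size = 1000                # records fetched per fofa query page
Hunter_token = ""               # hunter (qianxin) API key
QuakeKey = ""                   # quake (360) API token
```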

### 2.2 Best practice 1: asset mapping + fingerprinting

- Behavior: fscan queries fofa only, fqscan queries fofa + quake, and so on. **Fingerprinting only runs with -finger; add -vest to look up the owning company.**
  - -d ICP
    - Accepts a root domain or a single IP; the engines are queried with domain/cert syntax or by IP automatically.
    - Accepts an IP range, but for ranges prefer -m fscan (fofa only): hunter credits won't survive it, and quake is slow.
  - -search SEARCH: custom query input; only syntax supported by the respective engines is valid.
  - -m METHOD: one of fscan, qscan, hscan, fqscan, qhscan, fqhscan, i.e. fofa, quake, hunter, or a combined query.
- Output:
  - -o company_name
    - Generates company_name.xlsx containing the raw asset-mapping results and the fingerprinting results.

#### 2.2.1 Asset mapping for a single domain, IP, or IP range

```
python3 SyntaxFinger.py -d example.com -m fscan -o company_name
python3 SyntaxFinger.py -d example.com -m fqscan -o company_name -finger

python3 SyntaxFinger.py -d 192.168.1.1 -m fqhscan -o company_name
python3 SyntaxFinger.py -d 192.168.1.1 -m fqhscan -o company_name -finger

python3 SyntaxFinger.py -d 192.168.1.1/24 -m fscan -o company_name
python3 SyntaxFinger.py -d 192.168.1.1/24 -m fscan -o company_name -finger
python3 SyntaxFinger.py -d 192.168.1.1/24 -m fscan -o company_name -finger -vest
```

![](./img/image-20250312191119397.png)

![](./img/image-20250312194141280.png)

#### 2.2.2 Asset mapping with a custom query

```
python3 SyntaxFinger.py -search '(body="mdc-section")' -m fscan -o company_name -finger
python3 SyntaxFinger.py -search '(body="mdc-section")' -m fscan -o company_name -finger -vest
```

#### 2.2.3 For batches of domains, drive the tool from a script

- Why not accept multiple root domains directly?
  - With three engines behind one command line, a multi-domain run produces more output than you can review, and if one domain fails halfway through, you won't even notice.

**linux|mac**

run.sh

```
while read line
do
    project=`echo $line | awk -F " " '{print $1}'`
    host=`echo $line | awk -F " " '{print $2}'`
    echo $host,$project
    python3 SyntaxFinger.py -d $host -m fqhscan -o $project$host -finger -proxy 1
done < target.txt
```

**windows:**

run.ps1

```
Get-Content target.txt | ForEach-Object {
    $line = $_
    $project = ($line -split " ")[0]
    # $host is a reserved automatic variable in PowerShell, so use $target instead
    $target = ($line -split " ")[1]
    Write-Output "$target,$project"
    python SyntaxFinger.py -d $target -m fqhscan -o $project$target -finger -proxy 1
}
```

target.txt

```
Beijing_xx_Technology example.com.cn
Beijing_xx_Technology example.com
```

### 2.3 Best practice 2: imported assets + fingerprinting

- Behavior: -u imports a single url, -f imports a url file; -finger runs fingerprinting, -vest runs ownership lookup.
  - -u URL: a single url; add -finger to fingerprint it.
  - -f FILE: a url file; add -finger to fingerprint each entry.
- Output:
  - company_name.xlsx containing the fingerprinting results.

```
python3 SyntaxFinger.py -u http://www.example.com -finger -vest
python3 SyntaxFinger.py -f url.txt -finger -vest
```

## 0x03 Writing Fingerprints

A fingerprint has the following fields:

1. manufacturer: vendor name
2. product: product name
3. finger: the individual fingerprints
   1. fingerprint name (free-form, but keep it related to the fingerprint)
      1. method: one of three match types: keyword (keyword match), faviconhash (icon hash), title (page title)
      2. position: where to match (body or header; null for faviconhash)
      3. match: what to match (the literal keyword, or the favicon hash)
4. relation: match relation; supports "fingerprint name and fingerprint name", otherwise each name matches on its own
5. day_type: vulnerability type: -1 for no day, 0 for 0day, 1 for 1day
6. team: free-form; a personal or team ID both work

Take Richmail as an example:

```json
{
    "manufacturer": "深圳市彩讯科技有限公司",
    "product": "Richmail 邮件系统",
    "finger": {
        "keyword_title": {
            "method": "keyword",
            "position": "title",
            "match": "RichMail"
        },
        "keyword_body": {
            "method": "keyword",
            "position": "body",
            "match": "0;url=/webmail/"
        },
        "keyword_body_2": {
            "method": "keyword",
            "position": "body",
            "match": "richmail.config.js"
        },
        "keyword_body_3": {
            "method": "keyword",
            "position": "body",
            "match": "login"
        }
    },
    "relation": [
        "keyword_title",
        "keyword_body",
        "keyword_body_2 and keyword_body_3"
    ],
    "day_type": -1,
    "team": "公开"
}
```
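
The `relation` list drives matching: each entry is either a single fingerprint name or two names joined by `and`, and the fingerprint hits when any entry is satisfied. The helper below is an illustrative sketch of that rule, not the tool's actual matcher in `lib/identify.py`:

```python
def relation_hit(relation, matched):
    """Return True if any relation entry is satisfied by the set of matched fingerprint names."""
    for entry in relation:
        # an entry is either "name" or "name and name"; all parts must have matched
        names = [name.strip() for name in entry.split(" and ")]
        if all(name in matched for name in names):
            return True
    return False

relation = ["keyword_title", "keyword_body", "keyword_body_2 and keyword_body_3"]
# only one half of the "and" entry matched, and no standalone entry matched
print(relation_hit(relation, {"keyword_body_2"}))                    # False
# both halves of the "and" entry matched
print(relation_hit(relation, {"keyword_body_2", "keyword_body_3"}))  # True
```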

## 0x04 Thanks

https://github.com/EASY233/Finger

https://github.com/EdgeSecurityTeam/EHole

@zzzzzzzzzzzzzzzzzzz

The developers, providers, and maintainers of this project are not responsible for how the tool is used or for the consequences of its use; users bear the risk themselves.

Development is paused due to the developer's work commitments, but every feature still works; the project is open-sourced for reference.
261 |
--------------------------------------------------------------------------------
/SyntaxFinger.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# author = guchangan1
import os

from api.api_output import api_output
from config import config
from config.data import logging, Webinfo, Urls, path
from lib.cmdline import cmdline
from lib.checkenv import CheckEnv
from lib.req import Request
from lib.ipAttributable import IpAttributable
from lib.ICPAttributable import ICPAttributable
from colorama import init as wininit
from lib.options import initoptions

wininit(autoreset=True)

if __name__ == '__main__':
    # print the banner
    print(config.Banner)
    # check the runtime environment
    CheckEnv()

    # initialize the command-line options
    finger = initoptions(cmdline())

    """
    Collect the target urls into the global Urls.url
    """
    if finger.method and (finger.icp or finger.search):
        # query the asset-mapping engines; the urls are stored into Urls.url automatically
        api_res = finger.api_data()
    elif finger._url or finger._file:
        # import urls directly into Urls.url
        finger.target()

    """
    Fingerprinting (global)
    alive:  Webinfo.result
    failed: Urlerror.result
    """
    if finger._finger:
        # fingerprint every url in Urls.url; results are stored in Webinfo.result
        Request()
        # look up ownership of the resolved IPs
        IpAttributable().getAttributable()
        # look up the ICP registration of each url
        if finger.vest:
            ICPAttributable().getICPAttributable()

    """
    Export the results to Excel
    """
    # export the engine-collected or file-imported urls together with the fingerprinting results
    if finger.projectname:
        apioutput = api_output(finger.projectname)
        # export the raw asset-mapping results
        if finger.method:
            apioutput.api_outXls(api_res)
        if finger._finger:
            apioutput.Succfinger_outXls()
            apioutput.Failfinger_outXls()
66 |
--------------------------------------------------------------------------------
/api/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/guchangan1/SyntaxFinger/575a3145fbbbf0c0facf83113f0286deebef7450/api/__init__.py
--------------------------------------------------------------------------------
/api/api_output.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-

"""
@Time : 2023/9/9
@Author : guchangan1
"""
import os
import time

import openpyxl
from openpyxl.cell.cell import ILLEGAL_CHARACTERS_RE
from openpyxl.utils import get_column_letter

from config.data import logging, path, Webinfo, Urlerror


class api_output:
    def __init__(self, projectname):
        self.nowTime = time.strftime("%Y%m%d%H%M%S", time.localtime())
        # excel file name
        self.projectname = projectname + "_" + self.nowTime + ".xlsx"
        # excel save path
        self.path_excel = os.path.join(path.apioutputdir, self.projectname)
        self.excelSavePath = self.path_excel
        self.excel = openpyxl.Workbook()
        self.Sheet_line = 1   # current row of the aggregated engine-results sheet
        self.Sheet2_line = 1  # current row of the fingerprinting-results sheet
        self.Sheet3_line = 1  # current row of the failed-requests sheet

    def set_column_widths(self, sheet, column_width):
        for idx in range(1, column_width + 1):
            column_letter = get_column_letter(idx)
            sheet.column_dimensions[column_letter].width = 15  # fixed width of 15

    def api_outXls(self, webSpaceResult):
        # sheet for the aggregated asset-mapping results
        self.sheet = self.excel.create_sheet(title="空间引擎聚合结果", index=0)
        if self.Sheet_line == 1:
            headers = ['空间引擎名', 'url', 'title', 'ip', 'port', 'server',
                       'address', 'icp_number', 'icp_name', 'isp', '查询语句']
            for col, header in enumerate(headers, start=1):
                self.sheet.cell(self.Sheet_line, col).value = header
            self.Sheet_line += 1

        for result in webSpaceResult:
            try:
                title = ILLEGAL_CHARACTERS_RE.sub(r'', result["title"])
            except Exception:
                title = ''
            row = [result["api"], result["url"], title, result["ip"], result["port"],
                   result["server"], result["region"], result["icp"],
                   result["icp_name"], result["isp"], result["syntax"]]
            for col, cell_value in enumerate(row, start=1):
                self.sheet.cell(self.Sheet_line, col).value = cell_value
            self.Sheet_line += 1

        self.excel.save(self.excelSavePath)
        logging.success("空间引擎聚合结果成功保存!输出路径为:{0}".format(self.excelSavePath))

    def Succfinger_outXls(self):
        # sheet for the fingerprinting results
        self.sheet2 = self.excel.create_sheet(title="指纹识别结果", index=1)
        if self.Sheet2_line == 1:
            headers = ['评分', 'url', '标题', '状态码', '指纹&CMS', 'Server头',
                       'ip', '归属地', '是否CDN', 'isp', 'icp']
            for col, header in enumerate(headers, start=1):
                self.sheet2.cell(self.Sheet2_line, col).value = header
            self.Sheet2_line += 1
        for value in Webinfo.result:
            try:
                title = ILLEGAL_CHARACTERS_RE.sub(r'', value["title"])
            except Exception:
                title = ''
            row = ["", value["url"], title, value["status"], value["cms"],
                   value["Server"], value["ip"], value["address"], value["iscdn"],
                   value["isp"], value["icp"]]
            for col, cell_value in enumerate(row, start=1):
                self.sheet2.cell(self.Sheet2_line, col).value = cell_value
            self.Sheet2_line += 1

        self.excel.save(self.excelSavePath)
        logging.success("存活指纹识别结果成功保存!输出路径为:{0}".format(self.excelSavePath))

    def Failfinger_outXls(self):
        # sheet for the requests that failed during fingerprinting
        self.sheet3 = self.excel.create_sheet(title="请求失败结果", index=2)
        if self.Sheet3_line == 1:
            headers = ['评分', 'url', '标题', '状态码', '指纹&CMS', 'Server头',
                       'ip', '归属地', '是否CDN', 'isp', 'icp']
            for col, header in enumerate(headers, start=1):
                self.sheet3.cell(self.Sheet3_line, col).value = header
            self.Sheet3_line += 1
        for value in Urlerror.result:
            row = ["", value["url"], value["title"], value["status"], value["cms"],
                   value["Server"], value["ip"], value["address"], value["iscdn"],
                   value["isp"], value["icp"]]
            for col, cell_value in enumerate(row, start=1):
                self.sheet3.cell(self.Sheet3_line, col).value = cell_value
            self.Sheet3_line += 1

        self.excel.save(self.excelSavePath)
        logging.success("请求失败的结果成功保存!输出路径为:{0}".format(self.excelSavePath))
--------------------------------------------------------------------------------
/api/fofa.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# author = guchangan1
import json
import base64
import random
import re

import requests
from urllib.parse import quote

from tldextract import tldextract

from config.data import logging, Urls
from config.config import Fofa_key, Fofa_email, head, Fofa_Size
from lib.cmdline import cmdline


class Fofa:
    def __init__(self, syntax):
        self.email = Fofa_email
        self.key = Fofa_key
        self.size = Fofa_Size
        self.old_syntax = syntax
        self.syntax = quote(str(base64.b64encode(syntax.encode()), encoding='utf-8'))
        self.headers = head
        if self.key == "":
            logging.error("请先在config/config.py文件中配置fofa的api")
            exit(0)

    def run(self):
        fofa_res_list = []
        logging.info("FoFa开始查询")
        logging.info("查询关键词为:{0},单次查询数量为:{1}".format(self.old_syntax, self.size))
        # first ask for a small page to learn how many records there are
        url = "https://fofa.info/api/v1/search/all?email={0}&key={1}&qbase64={2}&full=false&fields=host,ip,port,title,region,icp&size={3}".format(
            self.email, self.key, self.syntax, 10)
        try:
            response = requests.get(url, timeout=30, headers=self.headers)
            fofa_result = json.loads(response.text)
            if fofa_result["error"] == "true":
                logging.error(fofa_result["errmsg"])
            else:
                total = int(fofa_result["size"])
                if fofa_result["results"] == []:
                    logging.warning("唉,fofa啥也没查到,过下一个站吧")
                else:
                    logging.info("FOFA共获取到{0}条记录".format(total))
                    if total > self.size:
                        pages_total = total // self.size + 1
                    else:
                        pages_total = 1
                    fofa_res = []
                    for page in range(1, pages_total + 1):
                        logging.warning("正在查询第{0}页".format(page))
                        api_request = "https://fofa.info/api/v1/search/all?email={0}&key={1}&qbase64={2}&fields=host,title,ip,port,server,region,icp&size={3}&page={4}".format(
                            self.email, self.key, self.syntax, self.size, page)
                        json_result = requests.get(api_request, timeout=30, headers=self.headers)
                        fofa_result = json.loads(json_result.text)
                        if fofa_result["error"] == "true":
                            logging.error(fofa_result["errmsg"])
                        fofa_res += fofa_result["results"]
                    # keep only clean entries; do not mutate fofa_res while iterating over it
                    for singer_res in fofa_res:
                        if cmdline().search or self.check_dirty(singer_res[0]):
                            self.check_url(singer_res[0])
                            fofa_singer_res_dict = {"api": "Fofa", "url": singer_res[0], "title": singer_res[1],
                                                    "ip": singer_res[2], "port": singer_res[3],
                                                    "server": singer_res[4], "region": singer_res[5],
                                                    "icp": singer_res[6], "icp_name": "", "isp": "",
                                                    "syntax": self.old_syntax}
                            fofa_res_list.append(fofa_singer_res_dict)
                    logging.info("Fofa查询完毕\n")

        except requests.exceptions.ReadTimeout:
            logging.error("请求超时")
        except requests.exceptions.ConnectionError:
            logging.error("网络超时")
        except json.decoder.JSONDecodeError:
            logging.error("获取失败,请重试")
        except KeyboardInterrupt:
            logging.warning("不想要fofa的结果,看下一个")
        except Exception as e:
            logging.error("获取失败,请检查异常:{0}".format(e))
        return fofa_res_list

    def check(self):
        try:
            if self.email and self.key:
                auth_url = "https://fofa.info/api/v1/info/my?email={0}&key={1}".format(self.email, self.key)
                response = requests.get(auth_url, timeout=10, headers=self.headers)
                return self.email in response.text
            return False
        except Exception:
            return False

    def check_url(self, url):
        if not url.startswith('http') and url:
            # no scheme given: queue both the http and the https variant of the target
            Urls.url.append("http://" + str(url))
            Urls.url.append("https://" + str(url))
        elif url:
            Urls.url.append(url)

    # drop dirty data (currently only entries whose domain differs from the queried domain/cert/ip)
    def check_dirty(self, host):
        tld = tldextract.extract(host)
        if tld.suffix:
            domain = tld.domain + '.' + tld.suffix
            pattern = r'(?:domain|cert|ip)="([^"]*)"'
            matches = re.findall(pattern, self.old_syntax)
            return bool(matches) and domain == str(matches[0])
        return True


if __name__ == '__main__':
    a = Fofa("domain=\"nmg.gov.cn\"")
134 |
--------------------------------------------------------------------------------
/api/hunter.py:
--------------------------------------------------------------------------------
# -*- coding: utf-8 -*-

"""
@Time : 2023/9/11
@Author : guchangan1
"""
import base64
import json
import math
import random
import time

import requests

from config.config import Hunter_token, head, Fofa_Size
from config.data import logging, Urls


class Hunter:
    def __init__(self, syntax):
        self.headers = head
        if Hunter_token == "":
            logging.warning("请先在config/config.py文件中配置hunter的api")
            exit(0)
        self.old_syntax = syntax
        self.syntax = str(base64.urlsafe_b64encode(self.old_syntax.encode('utf-8')), 'utf-8')
        self.size = 100
        # maximum number of records to fetch
        self.MaxTotal = 2000
        self.Hunter_token = Hunter_token

    def run(self):
        qianxin_Results = []
        logging.info("Hunter开始查询")
        logging.info("查询关键词为:{0},单次查询数据为:{1}".format(self.old_syntax, self.size))
        try:
            Hunter_api = "https://hunter.qianxin.com/openApi/search?&api-key={}&search={}&page=1&page_size={}&is_web=1&start_time=2023-08-21&end_time=2024-08-21". \
                format(self.Hunter_token, self.syntax, self.size)
            res = requests.get(Hunter_api, headers=self.headers, timeout=30)
            while res.status_code != 200:
                logging.info('[*] 等待8秒后 将尝试重新获取数据')
                time.sleep(8)
                res = requests.get(Hunter_api, headers=self.headers, timeout=30)
            hunter_result = json.loads(res.text)
            total = hunter_result["data"]["total"]
            # remaining credits for today
            rest_quota = hunter_result["data"]["rest_quota"]
            logging.info("Hunter共获取到{0}条记录 | {2} | 将要消耗{1}个积分".format(total, total, rest_quota))
            total = self.MaxTotal if total > self.MaxTotal else total
            pages = math.ceil(total / self.size)
            logging.info('[hunter] 限定查询的数量:{}'.format(total))
            logging.info('[hunter] 查询的页数:{}'.format(pages))
            if hunter_result['data']['arr']:
                qianxin_Results = self.data_clean(hunter_result['data']['arr'])
            else:
                logging.warning("没有获取到数据")
                pages = 1
            if pages != 1:
                for page in range(2, pages + 1):
                    logging.info("正在查询第{0}页".format(page))
                    time.sleep(8)
                    Hunter_api = "https://hunter.qianxin.com/openApi/search?&api-key={0}&search={1}&page={2}&page_size={3}&is_web=1&start_time=2023-08-21&end_time=2024-08-21".format(
                        self.Hunter_token, self.syntax, page, self.size)
                    res = requests.get(Hunter_api, headers=self.headers, timeout=30)
                    while res.status_code != 200:
                        logging.info(f'[*] 等待8秒后 将尝试重新获取数据')
                        time.sleep(8)
                        res = requests.get(Hunter_api, headers=self.headers, timeout=30)
                    hunter_result = json.loads(res.text)
                    # guard against a missing data/arr field instead of chaining .get on None
                    qianxin_Results.extend(self.data_clean(hunter_result.get("data", {}).get("arr", []) or []))
            logging.info("Hunter查询完毕\n")

        except KeyboardInterrupt:
            logging.warning("不想要hunter的结果,看下一个")
        except Exception as e:
            logging.error("Hunter获取失败:{0}".format(e))
        return qianxin_Results

    def data_clean(self, hunter_result):
        qianxin_Results_tmp = []
        for singer_result in hunter_result:
            url = singer_result["url"]
            title = singer_result["web_title"]
            ip = singer_result["ip"]
            port = singer_result["port"]
            server = str(singer_result["component"])
            protocol = singer_result["protocol"]
            address = singer_result["city"]
            company = singer_result["company"]
            isp = singer_result["isp"]
            qianxin_Results_dict = {"api": "Hunter", "url": url, "title": title, "ip": ip, "port": port,
                                    "server": server, "region": address, "icp": "",
                                    "icp_name": company, "isp": isp,
                                    "syntax": self.old_syntax}
            qianxin_Results_tmp.append(qianxin_Results_dict)
            self.check_url(url)
        return qianxin_Results_tmp

    def check_url(self, url):
        if not url.startswith('http') and url:
            # no scheme given: queue both the http and the https variant of the target
            Urls.url.append("http://" + str(url))
            Urls.url.append("https://" + str(url))
        elif url:
            Urls.url.append(url)


if __name__ == '__main__':
    Hunter("domain=\"xxxx.gov.cn\"")
123 |
--------------------------------------------------------------------------------
/api/quake.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import json
5 | import math
6 | import random
7 | import re
8 | import time
9 |
10 | import requests
11 |
12 | from config.data import Urls, logging
13 | from config.config import QuakeKey, user_agents, Fofa_Size
14 |
15 | requests.packages.urllib3.disable_warnings()
16 |
17 |
18 | class Quake:
19 | def __init__(self, syntax):
20 | # Urls.url = []
21 | self.headers = {
22 | "User-Agent": random.choice(user_agents),
23 | "X-QuakeToken": QuakeKey,
24 | "Content-Type": "application/json"
25 | }
26 | if QuakeKey == "":
27 | logging.warning("请先在config/config.py文件中配置quake的api")
28 | exit(0)
29 | if "||" in syntax:
30 | syntax_new = syntax.replace("||", " or ")
31 | else:
32 | syntax_new = syntax
33 | self.old_syntax = syntax
34 | self.syntax = syntax_new
35 | self.size = 200
36 | self.MaxSize = 10000 ##暂时没想好这块怎么写
37 |
38 | def run(self):
39 | quake_Results_tmp = []
40 | logging.info("Quake开始查询")
41 | logging.info("查询关键词为:{0},单次查询数量为:{1}".format(self.syntax, self.size))
42 | self.data = {
43 | "query": self.syntax,
44 | "start": 0,
45 | "size": self.size
46 | }
47 | try:
48 | response = requests.post(url="https://quake.360.net/api/v3/search/quake_service", headers=self.headers,
49 | json=self.data, timeout=30, verify=False)
50 | while (response.status_code != 200):
51 | logging.info(f'[*] 等待8秒后 将尝试重新获取数据')
52 | time.sleep(8)
53 | response = requests.post(url="https://quake.360.net/api/v3/search/quake_service", headers=self.headers,
54 | json=self.data, timeout=30, verify=False)
55 | datas = json.loads(response.text)
56 | total = datas['meta']['pagination']['total']
57 |
58 | if len(datas['data']) >= 1 and datas['code'] == 0:
59 | logging.info("Quake共获取到{0}条记录".format(total))
60 | datas_res = datas['data']
61 | pages = math.ceil(total / self.size)
62 | if pages > 1:
63 | for page in range(1, pages + 1):
64 | logging.info("正在查询第{0}页".format(page + 1))
65 | time.sleep(1)
66 | data = {
67 | "query": self.syntax,
68 | "start": page * self.size,
69 | "size": self.size
70 | }
71 | response = requests.post(url="https://quake.360.net/api/v3/search/quake_service",
72 | headers=self.headers,
73 | json=data, timeout=30, verify=False)
74 | while (response.status_code != 200):
75 | logging.info(f'[*] 等待8秒后 将尝试重新获取数据')
76 | time.sleep(8)
77 | response = requests.post(url="https://quake.360.net/api/v3/search/quake_service",
78 | headers=self.headers,
79 | json=data, timeout=30, verify=False)
80 | datas = json.loads(response.text)
81 | datas_res += datas['data']
82 | for singer_data in datas_res:
83 | ip, port, host, name, title, path, product, province_cn, favicon, x_powered_by, cert = '', '', '', '', '', '', '', '', '', '', ''
84 | ip = singer_data['ip'] # ip
85 | port = singer_data['port'] # port
86 | location = singer_data['location'] # 地址
87 | service = singer_data['service']
88 | province_cn = location['province_cn'] # 地址
89 | name = service['name'] # 协议
90 | product = "" # 服务
91 | # if 'cert' in service.keys():
92 | # cert = service['cert'] # 证书
93 | # cert = re.findall("DNS:(.*?)\n", cert)
94 | # 这个地方别疑惑,就是找title,因为service的key不管你是https还是http,肯定都包含http嘛,对的
95 | if 'http' in service.keys():
96 | http = service['http']
97 | host = http['host'] # 子域名
98 | title = http.get('title', 'not exist') # title
99 | title = title.strip()
100 | # x_powered_by = http['x_powered_by']
101 | # favicon = http['favicon']
102 | # path = http['path'] # 路径
103 | port_tmp = "" if port == 80 or port == 443 else ":{}".format(str(port))
104 | if 'http/ssl' == name:
105 | url = 'https://' + host + port_tmp
106 | # logging.info(url)
107 | elif 'http' == name:
108 | url = 'http://' + host + port_tmp
109 | # logging.info(url)
110 | else:
111 | url = name + "://" +ip + port_tmp
112 | url, title, ip, port, server, address = url, title, ip, port, product, province_cn
113 | quake_Result = {"api": "Quake", "url": url, "title": title, "ip": ip, "port": port,
114 | "server": server,
115 | "region": address, "icp": "", "icp_name": "", "isp": "",
116 | "syntax": self.syntax}
117 | quake_Results_tmp.append(quake_Result)
118 | self.check_url(url)
119 | else:
120 | logging.info("没有查询到数据")
121 | logging.info("Quake查询完毕\n")
122 |
123 | except requests.exceptions.ReadTimeout:
124 | logging.error("请求超时")
125 | except requests.exceptions.ConnectionError:
126 | logging.error("网络超时")
127 | except json.decoder.JSONDecodeError:
128 | logging.error("获取失败,请重试")
129 | except KeyboardInterrupt:
130 | logging.warning("不想要quake的结果,看下一个")
131 | except Exception as e:
132 | logging.error("Quake获取失败,检查异常" + str(e))
133 | pass
134 | return quake_Results_tmp
135 |
136 | def check_url(self, url):
137 | if not url.startswith('http') and url:
138 | # If no scheme is given, probe the target over both http and https
139 | Urls.url.append("http://" + str(url))
140 | Urls.url.append("https://" + str(url))
141 | elif url:
142 | Urls.url.append(url)
143 |
144 |
145 | if __name__ == '__main__':
146 | Quake("domain=\"xxxx.gov.cn\"")
147 |
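The `check_url` method above defaults a scheme-less target to both http and https probes. A minimal standalone sketch of that rule (the `expand_url` name is hypothetical, for illustration only):

```python
def expand_url(url):
    """Return the probe URLs for a target, mirroring check_url's scheme-defaulting rule."""
    if url and not url.startswith('http'):
        # No scheme given: queue both http and https variants
        return ["http://" + url, "https://" + url]
    return [url] if url else []

print(expand_url("example.com"))
```

A target that already carries a scheme is passed through unchanged, so `https://` hosts are never downgraded to an extra `http://` probe.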
--------------------------------------------------------------------------------
/config/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 |
--------------------------------------------------------------------------------
/config/color.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | class Colored():
5 | def __init__(self):
6 | self.Red = '\033[1;31m' # red
7 | self.Green = '\033[1;32m' # green
8 | self.Yellow = '\033[1;33m' # yellow
9 | self.Blue = '\033[1;34m' # blue
10 | self.Fuchsia = '\033[1;35m' # magenta
11 | self.Cyan = '\033[1;36m' # cyan
12 | self.White = '\033[1;37m' # white
13 | self.Reset = '\033[0m' # reset to the terminal default
14 |
15 | def red(self, s):
16 | return "{0}{1}{2}".format(self.Red, s, self.Reset)
17 |
18 | def green(self, s):
19 | return "{0}{1}{2}".format(self.Green, s, self.Reset)
20 |
21 | def yellow(self, s):
22 | return "{0}{1}{2}".format(self.Yellow, s, self.Reset)
23 |
24 | def blue(self, s):
25 | return "{0}{1}{2}".format(self.Blue, s, self.Reset)
26 |
27 | def fuchsia(self, s):
28 | return "{0}{1}{2}".format(self.Fuchsia, s, self.Reset)
29 |
30 | def cyan(self, s):
31 | return "{0}{1}{2}".format(self.Cyan, s, self.Reset)
32 |
33 | def white(self, s):
34 | return "{0}{1}{2}".format(self.White, s, self.Reset)
35 |
36 | color = Colored()
37 |
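Each `Colored` method simply brackets a string with an ANSI escape sequence and a reset code. The pattern in isolation (a standalone reimplementation for illustration, not the class above):

```python
RED = '\033[1;31m'   # bold red escape sequence
RESET = '\033[0m'    # return to the terminal default

def red(s):
    # Wrap the text in the colour code and reset afterwards,
    # the same formatting Colored.red() performs
    return "{0}{1}{2}".format(RED, s, RESET)

print(red("alert"))
```

On Windows terminals these raw escapes only render correctly once colorama's `init()` has been called, which is why log.py does so at import time.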
--------------------------------------------------------------------------------
/config/config.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import os
5 | import random
6 |
7 | from fake_useragent import UserAgent
8 |
9 | Version = "V3.0"
10 | Author = "guchangan1"
11 | Github = "https://github.com/guchangan1"
12 | Banner = ''' ____ _ _____ _
13 | / ___| _ _ _ __ | |_ __ _ __ __ | ___| (_) _ __ __ _ ___ _ __
14 | \___ \ | | | | | '_ \ | __| / _` | \ \/ / | |_ | | | '_ \ / _` | / _ \ | '__|
15 | ___) | | |_| | | | | | | |_ | (_| | > < | _| | | | | | | | (_| | | __/ | |
16 | |____/ \__, | |_| |_| \__| \__,_| /_/\_\ |_| |_| |_| |_| \__, | \___| |_|
17 | |___/ |___/ --by guchangan1'''
18 |
19 | # Number of concurrent threads (default: 5)
20 | threads = 5
21 | # Note: Fofa_Size is the page size per request, not a maximum limit.
22 | Fofa_Size = 1000
23 |
24 | # Fofa key settings. The API returns the first 100 results for regular members and the first 10000 for premium members; adjust to your own plan.
25 | Fofa_email = ""
26 | Fofa_key = ""
27 |
28 | # 360 Quake key settings; the free tier allows 3000 records per month
29 |
30 | QuakeKey = ""
31 |
32 | # QAX Hunter key settings
33 | Hunter_token = ""
34 |
35 | # Whether to update the fingerprint library online. When True, the program checks for the latest fingerprints on every run.
36 | FingerPrint_Update = False
37 |
38 |
39 | user_agents = UserAgent().random
40 | head = {
41 | 'Accept': '*/*',
42 | 'Accept-Language': '*',
43 | 'Connection': 'close',
44 | 'User-Agent': user_agents
45 | }
--------------------------------------------------------------------------------
/config/data.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | from config.datatype import AttribDict
5 | from config.log import MY_LOGGER
6 |
7 | logging = MY_LOGGER
8 |
9 | path = AttribDict()
10 | Urls = AttribDict()
11 | Ips = AttribDict()
12 | Webinfo = AttribDict()
13 | Save = AttribDict()
14 | Urlerror = AttribDict()
15 | Icps = AttribDict()
16 | Search = AttribDict()
--------------------------------------------------------------------------------
/config/datatype.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import copy
5 | import types
6 |
7 |
8 | class AttribDict(dict):
9 | """
10 | This class defines the dictionary with added capability to access members as attributes
11 | """
12 |
13 | def __init__(self, indict=None, attribute=None):
14 | if indict is None:
15 | indict = {}
16 |
17 | # Set any attributes here - before initialisation
18 | # these remain as normal attributes
19 | self.attribute = attribute
20 | dict.__init__(self, indict)
21 | self.__initialised = True
22 |
23 | # After initialisation, setting attributes
24 | # is the same as setting an item
25 |
26 | def __getattr__(self, item):
27 | """
28 | Maps values to attributes
29 | Only called if there *is NOT* an attribute with this name
30 | """
31 |
32 | try:
33 | return self.__getitem__(item)
34 | except KeyError:
35 | raise AttributeError("unable to access item '%s'" % item)
36 |
37 | def __setattr__(self, item, value):
38 | """
39 | Maps attributes to values
40 | Only if we are initialised
41 | """
42 |
43 | # This test allows attributes to be set in the __init__ method
44 | if "_AttribDict__initialised" not in self.__dict__:
45 | return dict.__setattr__(self, item, value)
46 |
47 | # Any normal attributes are handled normally
48 | elif item in self.__dict__:
49 | dict.__setattr__(self, item, value)
50 |
51 | else:
52 | self.__setitem__(item, value)
53 |
54 | def __getstate__(self):
55 | return self.__dict__
56 |
57 | def __setstate__(self, state):
58 | self.__dict__ = state
59 |
60 | def __deepcopy__(self, memo):
61 | retVal = self.__class__()
62 | memo[id(self)] = retVal
63 |
64 | for attr in dir(self):
65 | if not attr.startswith('_'):
66 | value = getattr(self, attr)
67 | if not isinstance(value, (types.BuiltinFunctionType, types.FunctionType, types.MethodType)):
68 | setattr(retVal, attr, copy.deepcopy(value, memo))
69 |
70 | for key, value in self.items():
71 | retVal.__setitem__(key, copy.deepcopy(value, memo))
72 |
73 | return retVal
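The docstring above describes a dictionary whose members are also reachable as attributes; the project stores `path`, `Urls`, `Webinfo`, etc. this way. A trimmed-down sketch of the same idea (not the full class, which additionally handles initialisation guards, pickling, and deep copies):

```python
class MiniAttribDict(dict):
    """Dictionary whose items can also be read and written as attributes."""

    def __getattr__(self, item):
        # Only called when normal attribute lookup fails, so map it to item access
        try:
            return self[item]
        except KeyError:
            raise AttributeError("unable to access item '%s'" % item)

    def __setattr__(self, item, value):
        # Every attribute write becomes an item write
        self[item] = value

cfg = MiniAttribDict()
cfg.threads = 5              # attribute write stored as an item
print(cfg["threads"], cfg.threads)
```

The full `AttribDict` defers to normal attribute handling during `__init__` (via the `__initialised` flag) so that internal attributes such as `attribute` don't leak into the dictionary contents.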
--------------------------------------------------------------------------------
/config/log.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import logging
5 | import sys
6 | from config.color import color
7 | from colorama import init as wininit
8 | wininit(autoreset=True)
9 |
10 | # Custom log levels
11 | class LoggingLevel:
12 | SUCCESS = 9
13 | SYSINFO = 8
14 | ERROR = 7
15 | WARNING = 6
16 |
17 | logging.addLevelName(LoggingLevel.SUCCESS, color.cyan("[+]"))
18 | logging.addLevelName(LoggingLevel.SYSINFO, color.green("[INFO]"))
19 | logging.addLevelName(LoggingLevel.ERROR, color.red("[ERROR]"))
20 | logging.addLevelName(LoggingLevel.WARNING, color.yellow("[WARNING]"))
21 |
22 | # Initialise the logger
23 | LOGGER = logging.getLogger('Finger')
24 | # Set the output format
25 | formatter = logging.Formatter(
26 | "%(asctime)s %(levelname)s %(message)s",
27 | datefmt=color.fuchsia("[%H:%M:%S]")
28 | )
29 | LOGGER_HANDLER = logging.StreamHandler(sys.stdout)
30 | LOGGER_HANDLER.setFormatter(formatter)
31 | LOGGER.addHandler(LOGGER_HANDLER)
32 | LOGGER.setLevel(LoggingLevel.WARNING)
33 |
34 | class MY_LOGGER:
35 | def info(msg):
36 | return LOGGER.log(LoggingLevel.SYSINFO, msg)
37 |
38 | def error(msg):
39 | return LOGGER.log(LoggingLevel.ERROR, msg)
40 |
41 | def warning(msg):
42 | return LOGGER.log(LoggingLevel.WARNING, msg)
43 |
44 | def success(msg):
45 | return LOGGER.log(LoggingLevel.SUCCESS, msg)
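log.py works by registering custom numeric levels whose *names* are the colour-coded tags, so `%(levelname)s` in the formatter expands to `[+]`, `[INFO]`, and so on. The registration mechanism on its own (plain tags instead of colours, `"demo"` logger name is illustrative):

```python
import logging

# Register a custom numeric level, as LoggingLevel/addLevelName do above
SYSINFO = 8
logging.addLevelName(SYSINFO, "[INFO]")

logger = logging.getLogger("demo")
logger.setLevel(1)  # low threshold so the custom level is not filtered out
logger.log(SYSINFO, "fingerprint engine started")  # record emitted at the custom level

# %(levelname)s in a Formatter now expands to the registered tag
print(logging.getLevelName(SYSINFO))
```

Embedding ANSI codes in the level name (as log.py does via `color.cyan("[+]")`) keeps the formatter string itself colour-free.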
--------------------------------------------------------------------------------
/img/image-20230921163103334.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/guchangan1/SyntaxFinger/575a3145fbbbf0c0facf83113f0286deebef7450/img/image-20230921163103334.png
--------------------------------------------------------------------------------
/img/image-20250312191119397.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/guchangan1/SyntaxFinger/575a3145fbbbf0c0facf83113f0286deebef7450/img/image-20250312191119397.png
--------------------------------------------------------------------------------
/img/image-20250312194141280.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/guchangan1/SyntaxFinger/575a3145fbbbf0c0facf83113f0286deebef7450/img/image-20250312194141280.png
--------------------------------------------------------------------------------
/img/image-20250312200636496.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/guchangan1/SyntaxFinger/575a3145fbbbf0c0facf83113f0286deebef7450/img/image-20250312200636496.png
--------------------------------------------------------------------------------
/img/image-20250312200857418.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/guchangan1/SyntaxFinger/575a3145fbbbf0c0facf83113f0286deebef7450/img/image-20250312200857418.png
--------------------------------------------------------------------------------
/lib/B_CheckCDN.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | """
4 | @Time : 2023/11/6
5 | @Author : guchangan1
6 | """
7 | import ipaddress
8 | import json
9 | import os
10 | import dns.resolver
11 | from config.data import path
12 |
13 |
14 | class CheckCDN:
15 | def __init__(self):
16 | cdn_ip_cidr = os.path.join(path.library, 'cdn_ip_cidr.json')
17 | cdn_asn_list = os.path.join(path.library, 'cdn_asn_list.json')
18 | cdn_cname_keyword = os.path.join(path.library, 'cdn_cname_keywords.json')
19 | cdn_header_key = os.path.join(path.library, 'cdn_header_keys.json')
20 | nameservers_list = os.path.join(path.library, 'nameservers_list.json')
21 | with open(cdn_ip_cidr, 'r', encoding='utf-8') as file:
22 | self.cdn_ip_cidr_file = json.load(file)
23 | with open(cdn_asn_list, 'r', encoding='utf-8') as file:
24 | self.cdn_asn_list_file = json.load(file)
25 | with open(cdn_cname_keyword, 'r', encoding='utf-8') as file:
26 | self.cdn_cname_keyword_file = json.load(file)
27 | with open(cdn_header_key, 'r', encoding='utf-8') as file:
28 | self.cdn_header_key_file = json.load(file)
29 | with open(nameservers_list, 'r', encoding='utf-8') as file:
30 | self.nameservers_list_file = json.load(file)
31 |
32 |
33 | def check_cname_keyword(self, cnames):
34 | if not cnames:
35 | return False
36 | for name in cnames:
37 | for keyword in self.cdn_cname_keyword_file.keys():
38 | if keyword in name:
39 | return True
40 |
41 |
42 | def check_header_key(self, header):
43 | if isinstance(header, str):
44 | header = json.loads(header)
45 | if isinstance(header, dict):
46 | header = set(map(lambda x: x.lower(), header.keys()))
47 | for key in self.cdn_header_key_file:
48 | if key in header:
49 | return True
50 | else:
51 | return False
52 |
53 |
54 | def check_cdn_cidr(self, ips):
55 | for ip in ips:
56 | try:
57 | ip = ipaddress.ip_address(ip)
58 | except Exception as e:
59 | return False
60 | for cidr in self.cdn_ip_cidr_file:
61 | if ip in ipaddress.ip_network(cidr):
62 | return True
63 |
64 |
65 | def check_cdn_asn(self, asns):
66 | for asn in asns:
67 | if isinstance(asn, str):
68 | if asn in self.cdn_asn_list_file:
69 | return True
70 | return False
71 |
72 | def check_cdn_dns_resolve(self, domain):
73 | local_Resolver = dns.resolver.Resolver(configure=False)
74 | local_Resolver.retry_servfail = True
75 | local_Resolver.timeout = 3
76 | result_list = []
77 |
78 | for name_server in self.nameservers_list_file:
79 | # time.sleep(1)
80 | local_Resolver.nameservers = [name_server]
81 | if len(result_list) > 3:
82 | return True
83 | try:
84 | myAnswers = local_Resolver.resolve(domain, "A", lifetime=1)
85 | for rdata in myAnswers:
86 | if rdata.address not in result_list and ":" not in rdata.address: # and len(result_list) < 5:
87 | result_list.append(rdata.address)
88 | except Exception as error:
89 | continue
90 | # Four or more distinct A records across resolvers suggests a CDN
91 | return len(result_list) > 3
92 |
93 |
94 |
95 | if __name__ == '__main__':
96 | from lib.C_GetRecord_BySubDomain import GetRecord # import needed for this standalone test
97 | host = "rk800.mdzzz.org"
98 | header = ""
99 | record = GetRecord(host)
100 | result_CNAME = record.dns_resolve_CNAME().get("CNAME")
101 | result_A = record.dns_resolve_A().get("A")
102 | result_ASN = record.getASN_ByIP(result_A).get("ASN")
103 | cdncheck = CheckCDN()
104 | # cdncheck.check_cname_keyword(result_CNAME)
105 | # cdncheck.check_cdn_cidr(result_A)
106 | # cdncheck.check_header_key(header)
107 | print(cdncheck.check_cdn_asn(result_ASN))
108 |
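`check_cdn_cidr` above tests each resolved IP against known CDN network ranges. The core membership test uses only the stdlib `ipaddress` module; a self-contained sketch (the sample CIDRs are illustrative, not taken from `library/cdn_ip_cidr.json`):

```python
import ipaddress

# Illustrative CDN ranges only; the real list is loaded from cdn_ip_cidr.json
cdn_cidrs = ["104.16.0.0/13", "172.64.0.0/13"]

def in_cdn(ip, cidrs):
    """True if the IP falls inside any of the given CIDR blocks."""
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        # Not a valid IP literal, so it cannot match any range
        return False
    return any(addr in ipaddress.ip_network(c) for c in cidrs)

print(in_cdn("104.16.1.1", cdn_cidrs), in_cdn("8.8.8.8", cdn_cidrs))
```

`ipaddress.ip_network` parses each CIDR once per call here; for a large list like the real JSON file, pre-parsing the networks up front avoids repeated parsing.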
--------------------------------------------------------------------------------
/lib/C_ALL_GetCompany_ByDomain.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from bs4 import BeautifulSoup
3 | from lxml import etree
4 |
5 | requests.urllib3.disable_warnings()
6 | requests.warnings.filterwarnings("ignore")
7 |
8 | class find_icp_by_domain():
9 |
10 | """
11 | 通过域名查找icp
12 | """
13 | def __init__(self, domain, proxy=None):
14 | self.domain = domain
15 | self.proxy = proxy
16 | self.resultDict = {"domain": self.domain, "unitName": "-", "unitICP": "-"}
17 |
18 | result = []
19 | header1 = {
20 | "Host":"www.beianx.cn",
21 | "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0",
22 | "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
23 | "Accept-Language": "zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2",
24 | }
25 | header2 = {
26 | 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
27 | 'Accept-Encoding': 'gzip, deflate, br',
28 | 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
29 | 'Cache-Control': 'no-cache',
30 | 'Connection': 'keep-alive',
31 | 'Cookie': 'allSites=baidu.com%2C0; _csrf=acbe3685c56e8c51a17b499298dad6c19d2313d4d924001f5afdf44db886270ba%3A2%3A%7Bi%3A0%3Bs%3A5%3A%22_csrf%22%3Bi%3A1%3Bs%3A32%3A%22gonHG5mJdHz-E-sukWk5KT792jxOmQle%22%3B%7D; Hm_lvt_b37205f3f69d03924c5447d020c09192=1695301492,1695346451; Hm_lvt_de25093e6a5cdf8483c90fc9a2e2c61b=1695353426; Hm_lpvt_de25093e6a5cdf8483c90fc9a2e2c61b=1695353426; PHPSESSID=vuuel7fg33joirp3qlvue7up26; Hm_lpvt_b37205f3f69d03924c5447d020c09192=1695366436',
32 | 'Host': 'icp.aizhan.com',
33 | 'Pragma': 'no-cache',
34 | 'Referer': 'https://icp.aizhan.com/',
35 | 'Sec-Fetch-Dest': 'document',
36 | 'Sec-Fetch-Mode': 'navigate',
37 | 'Sec-Fetch-Site': 'same-origin',
38 | 'Sec-Fetch-User': '?1',
39 | 'Upgrade-Insecure-Requests': '1',
40 | 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36 Edg/117.0.2045.36',
41 | 'sec-ch-ua': '"Microsoft Edge";v="117", "Not;A=Brand";v="8", "Chromium";v="117"',
42 | 'sec-ch-ua-mobile': '?0',
43 | 'sec-ch-ua-platform': '"Windows"'
44 | }
45 | header3 = {
46 | "Host":"icplishi.com",
47 | "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0",
48 | "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
49 | "Accept-Language": "zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2",
50 | }
51 |
52 | msg1 = "网络出现问题!"
53 | msg2 = "ip被ban啦!"
54 | msg3 = "未查询到相关结果!"
55 | msg4 = "网站结构发生改变,请联系开发处理!"
56 |
57 |
58 |
59 | def get_request(self, url, header):
60 | try:
61 | res = requests.get(url, headers=header, verify=False, timeout=1, proxies=self.proxy)
62 | return res
63 | except:
64 | print(self.msg1)
65 | return None
66 |
67 |
68 | def get_ba1(self):
69 | """
70 | 调用www.beianx.cn进行查询
71 | """
72 | resultDict = self.resultDict
73 | res = self.get_request(f"https://www.beianx.cn/search/{self.domain}",header=self.header1)
74 | if res == None:
75 | print(self.msg1)
76 | return resultDict
77 | html = res.text
78 | soup = BeautifulSoup(html, 'html.parser')
79 | ip_baned = soup.find('div', class_='text-center')
80 | if ip_baned:
81 | if ip_baned.text[0:6] == "您的IP地址":
82 | print(self.msg2)
83 | return resultDict
84 | no_records = soup.find('td', class_='text-center')
85 | if no_records:
86 | if no_records.text != "1":
87 | msg = no_records.text.replace('\r\n', '').replace(" ", "")
88 | print(msg)
89 | return resultDict
90 | table = soup.find('table', class_='table')
91 | if table:
92 | rows = table.find_all('tr')
93 | for row in rows[1:]:
94 | columns = row.find_all('td')
95 | if len(columns) >= 5:
96 | unitName = columns[1].text.strip()
97 | serviceLicence = columns[3].text.strip()
98 | resultDict = {"domain": self.domain, "unitName": unitName, "unitICP": serviceLicence}
99 | return resultDict
100 |
101 |
102 | def get_ba2(self):
103 | """
104 | 调用icp.aizhan.com进行查询
105 | """
106 | resultDict = self.resultDict
107 | res = self.get_request(f"https://icp.aizhan.com/{self.domain}/",header=self.header2)
108 | if res == None:
109 | print(self.msg1)
110 | return resultDict
111 | html = res.content
112 | soup = BeautifulSoup(html, 'html.parser')
113 | no_recodes = soup.find("div", class_="cha-default")
114 | if no_recodes:
115 | print(self.msg3)
116 | return resultDict
117 | table = soup.find("table", class_="table")
118 | if table:
119 | rows = table.find_all('tr')
120 | unitName = rows[0].find_all('td')[1].text.strip()[:-8]
121 | serviceLicence = rows[2].find_all('td')[1].text.strip()[:-11]
122 | resultDict = {"domain": self.domain, "unitName": unitName, "unitICP": serviceLicence}
123 | return resultDict
124 |
125 |
126 | def getICP_icplishi(self, replayNun=0):
127 | # print(domain)
128 | resultDict = self.resultDict
129 | try:
130 | req = requests.get(url=f"https://icplishi.com/{self.domain}", headers=self.header3, timeout=20)
131 | if req.status_code != 200 and replayNun < 2:
132 | replayNun += 1
133 | return self.getICP_icplishi(replayNun)
134 | if req.status_code != 200 and replayNun == 2:
135 | resultDict = {"domain": self.domain, "unitName": f"NtError:{req.status_code},请加延时",
136 | "unitICP": f"NtError:{req.status_code}"}
137 | # print(resultDict)
138 | return resultDict
139 | html = etree.HTML(req.text, etree.HTMLParser())
140 |
141 | # Registration type and registration date
142 | SpanTag = html.xpath(
143 | '//div[@class="module mod-panel"]/div[@class="bd"]/div[@class="box"]/div[@class="c-bd"]/table/tbody/tr/td/span/text()')
144 | # Registration number and registrant name
145 | ATag = html.xpath(
146 | '//div[@class="module mod-panel"]/div[@class="bd"]/div[@class="box"]/div[@class="c-bd"]/table/tbody/tr/td/a/text()')
147 |
148 | token = html.xpath('//div[@id="J_beian"]/@data-token')
149 | # Registration found directly on the page
150 | if len(ATag) >= 2 and len(SpanTag) >= 2 and (SpanTag[1] != "未备案"):
151 | resultDict = {"domain": self.domain, "unitName": ATag[0], "unitICP": ATag[1]}
152 | return resultDict
153 | if (token and resultDict["unitName"] == "-") or (token and "ICP" not in resultDict["unitICP"]) or (
154 | token and '-' in SpanTag[1]):
155 | resultDict = self.getIcpFromToken(token[0])
156 | except Exception as e:
157 | resultDict = {"domain": self.domain, "unitName": "-", "unitICP": "-"}
158 | return resultDict
159 |
160 | # 两次出现"msg"="暂无结果",为未查询出结果
161 | def getIcpFromToken(self, token, replayNun=0):
162 | try:
163 | req = requests.get(f"https://icplishi.com/query.do?domain={self.domain}&token={token}", headers=self.header3,
164 | timeout=20)
165 | if (req.status_code != 200 or req.json()["msg"] == "暂无结果") and replayNun < 2:
166 | replayNun += 1
167 | return self.getIcpFromToken(token, replayNun)
168 | data = req.json()["data"]
169 | if req.status_code != 200 or req.json()["msg"] == "暂无结果" or len(data) == 0 or len(data[0]) == 0 or \
170 | data[0][
171 | "license"] == "未备案":
172 | resultDict = {"domain": self.domain, "unitName": "-", "unitICP": "-"}
173 | else:
174 | resultDict = {"domain": self.domain, "unitName": data[0]["company"],
175 | "unitICP": data[0]["license"]}
176 | except Exception as e:
177 | resultDict = {"domain": self.domain, "unitName": "-", "unitICP": "-"}
178 | return resultDict
179 |
180 |
181 | if __name__ == "__main__":
182 | fd = find_icp_by_domain("crccbfjy.com")
183 | result = fd.getICP_icplishi()
184 | # result = fd.get_ba1()
185 | print(result)
--------------------------------------------------------------------------------
/lib/C_GetRecord_BySubDomain.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | """
4 | @Time : 2023/11/6
5 | @Author : guchangan1
6 | """
7 | import os
8 | import socket
9 | import time
10 | import geoip2.database
11 | from config.data import path, logging
12 |
13 | import dns.resolver
14 |
15 |
16 | class GetRecord:
17 | def __init__(self, domain):
18 | self.fast_nameservers = ["114.114.114.114", "8.8.8.8", "80.80.80.80", "223.5.5.5"]
19 | self.domain = domain
20 | self.local_Resolver = dns.resolver.Resolver(configure=False)
21 | self.local_Resolver.retry_servfail = True
22 | self.local_Resolver.timeout = 3
23 | GeoLite2ASN = os.path.join(path.library, "data", 'GeoLite2-ASN.mmdb')
24 | self.reader = geoip2.database.Reader(GeoLite2ASN)
25 |
26 | def getASN_ByIP(self, iplist):
27 | asn_list = []
28 | for singer_ip in iplist:
29 | try:
30 | asn_response = self.reader.asn(singer_ip)
31 | asn = "AS" + str(asn_response.autonomous_system_number)
32 | asn_list.append(asn)
33 | except Exception as e:
34 | logging.error(f"IP {singer_ip} 处理时发生异常:{e}")
35 | return {"ASN": asn_list}
36 |
37 |
38 | def dns_resolve_A(self):
39 | result_list = []
40 | for name_server in self.fast_nameservers:
41 | # time.sleep(1)
42 | self.local_Resolver.nameservers = [name_server]
43 | if len(result_list) > 3:
44 | return {"A": result_list}
45 | try:
46 | myAnswers = self.local_Resolver.resolve(self.domain, "A", lifetime=1)
47 | for rdata in myAnswers:
48 | if rdata.address not in result_list and ":" not in rdata.address: # and len(result_list) < 5:
49 | result_list.append(rdata.address)
50 | except Exception as error:
51 | continue
52 | result_A = {"A": result_list}
53 | return result_A
54 |
55 | def dns_resolve_CNAME(self):
56 | result_list = []
57 | # time.sleep(1)
58 | self.local_Resolver.nameservers = ["223.5.5.5"]
59 | try:
60 | myAnswers = self.local_Resolver.resolve(self.domain, "CNAME", lifetime=1)
61 | for rdatas in myAnswers.response.answer:
62 | for rdata in rdatas.items:
63 | result_list.append(rdata.to_text())
64 | except Exception as error:
65 | pass
66 | result_CNAME = {"CNAME": result_list}
67 | return result_CNAME
68 |
69 |
70 |
71 |
72 | if __name__ == '__main__':
73 | record = GetRecord("rk800.mdzzz.org")
74 | start = time.time()
75 | result_A = record.dns_resolve_A()
76 | result_ASN = record.getASN_ByIP(result_A.get("A"))
77 | print(result_ASN)
78 |
79 |
80 | # result_CNAME = record.dns_resolve_CNAME()
81 | # print(result_CNAME)
82 | #
83 | # result_A = record.dns_resolve_A()
84 | # print(result_A)
85 |
86 |
87 | end = time.time()
88 | print(end - start)
89 |
90 |
--------------------------------------------------------------------------------
/lib/D_ALL_GetDomain_ByUrl.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | """
4 | @Time : 2023/10/26
5 | @Author : guchangan1
6 | """
7 | import re
8 | import socket
9 |
10 | import requests
11 | from tldextract import tldextract
12 | from lxml import etree
13 |
14 | from lib.proxy import proxies
15 |
16 | requests.packages.urllib3.disable_warnings()
17 |
18 | headers = {
19 | "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36",
20 | "Connection": "close",
21 | }
22 |
23 |
24 | class find_domain_by_ip():
25 |
26 | def __init__(self, ip):
27 | self.ip = ip
28 |
29 | # Query the reverse-lookup API at site.ip138.com
30 | def getDomain_138(self, replayNun=0):
31 | allData = [] # scraped reverse-lookup domains
32 | domainList = [] # final reverse-lookup results
33 | argIsDoamin = False # assume the argument is not a domain by default
34 | try:
35 | req1 = requests.get(url=f"https://site.ip138.com/{self.ip}/", headers=headers, timeout=20, verify=False, proxies=proxies())
36 | if req1.status_code != 200 and replayNun < 2:
37 | replayNun += 1
38 | return self.getDomain_138(replayNun)
39 | if req1.status_code != 200 and replayNun == 2:
40 | domainList.append(f"NtError c:{req1.status_code}")
41 | return domainList
42 | html = etree.HTML(req1.text, etree.HTMLParser())
43 | if re.match(r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$",
44 | self.ip):
45 | allData = html.xpath(
46 | '//ul[@id="list"]/li/a[@target="_blank"]/text()') # a-node text: domains this IP has resolved to (may include stale data)
47 | else:
48 | argIsDoamin = True
49 | # histryIp = html.xpath('//div[@id="J_ip_history"]/p/a[@target="_blank"]/text()') #获取a节点下的内容,获取到域名解析到的ip 存在老旧数据
50 | allData.append(self.ip)
51 | for domin in allData:
52 | # Ensure the reverse-looked-up domain still resolves to the current ip (drop stale data)
53 | if argIsDoamin or (self.ip in self.getIpList(domin)):
54 | # De-duplicate domains
55 | domainObj = tldextract.extract(domin)
56 | domainData = f"{domainObj.domain}.{domainObj.suffix}"
57 | if domainData not in domainList:
58 | domainList.append(domainData)
59 | # ipPosition=html.xpath('//div[@class="result result2"]/h3/text()') #获取ip位置信息
60 | except Exception as e:
61 | # print(f"\033[31m[Error] url:https://site.ip138.com/{ip}/ {e}\033[0m")
62 | domainList.append("NtError")
63 | return domainList
64 |
65 |
66 | # Get the IPs a domain resolves to (confirming a reverse-looked-up domain still points at the current ip)
67 | def getIpList(self, domain):
68 | ip_list = []
69 | try:
70 | addrs = socket.getaddrinfo(domain, None)
71 | for item in addrs:
72 | if item[4][0] not in ip_list:
73 | ip_list.append(item[4][0])
74 | except Exception as e:
75 | pass
76 | return ip_list
77 |
78 |
79 | def GetDomain_ByUrl(url):
80 | tld = tldextract.extract(url)
81 | # print(tld)
82 | if tld.suffix != '':
83 | domain = tld.domain + '.' + tld.suffix
84 | return domain
85 | else:
86 | ip = tld.domain
87 | domain = find_domain_by_ip(ip).getDomain_138()
88 | if not domain:
89 | return
90 | else:
91 | return domain[0]
92 |
93 |
94 | if __name__ == '__main__':
95 | domainList = GetDomain_ByUrl("221.122.179.142")
96 | print(domainList)
--------------------------------------------------------------------------------
/lib/ICPAttributable.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | """
4 | @Time : 2023/10/27
5 | @Author : guchangan1
6 | """
7 | from tldextract import tldextract
8 |
9 | from config.data import Webinfo, Urls, logging
10 | from lib.C_ALL_GetCompany_ByDomain import find_icp_by_domain
11 | from lib.D_ALL_GetDomain_ByUrl import GetDomain_ByUrl
12 | from lib.proxy import proxies
13 |
14 |
15 | class ICPAttributable:
16 | def __init__(self, ):
17 | logging.info("正在查询ICP公司归属")
18 | self.proxy = proxies()
19 |
20 | """
21 | 取出指纹识别结果,进行icp查询
22 | """
23 | def getICPAttributable(self):
24 | try:
25 | for value in Webinfo.result:
26 | url = value["url"]
27 | domain = GetDomain_ByUrl(url)
28 | company_obj = find_icp_by_domain(domain,self.proxy)
29 | company_dict = company_obj.getICP_icplishi()
30 | Webinfo.result[Webinfo.result.index(value)]["icp"] = company_dict["unitName"]
31 | except Exception as e:
32 | pass
33 | logging.success("ICP公司归属查询完毕")
34 |
35 |
--------------------------------------------------------------------------------
/lib/IpFactory.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import os
5 | import re
6 | import json
7 | import socket
8 | import ipaddress
9 | from lib.C_GetRecord_BySubDomain import GetRecord
10 | from lib.B_CheckCDN import CheckCDN
11 | from urllib.parse import urlsplit
12 | from config.data import path
13 |
14 |
15 |
16 | class IPFactory:
17 | def __init__(self):
18 | cdn_ip_cidr = os.path.join(path.library, 'cdn_ip_cidr.json')
19 | cdn_asn_list = os.path.join(path.library, 'cdn_asn_list.json')
20 | cdn_cname_keyword = os.path.join(path.library, 'cdn_cname_keywords.json')
21 | cdn_header_key = os.path.join(path.library, 'cdn_header_keys.json')
22 | nameservers_list = os.path.join(path.library, 'nameservers_list.json')
23 | with open(cdn_ip_cidr, 'r', encoding='utf-8') as file:
24 | self.cdn_ip_cidr_file = json.load(file)
25 | with open(cdn_asn_list, 'r', encoding='utf-8') as file:
26 | self.cdn_asn_list_file = json.load(file)
27 | with open(cdn_cname_keyword, 'r', encoding='utf-8') as file:
28 | self.cdn_cname_keyword_file = json.load(file)
29 | with open(cdn_header_key, 'r', encoding='utf-8') as file:
30 | self.cdn_header_key_file = json.load(file)
31 | with open(nameservers_list, 'r', encoding='utf-8') as file:
32 | self.nameservers_list_file = json.load(file)
33 |
34 | # Extract the host from a URL
35 | def parse_host(self,url):
36 | host = urlsplit(url).netloc
37 | if ':' in host:
38 | host = re.sub(r':\d+', '', host)
39 | return host
40 |
41 |
42 | # Resolve the domain to an IP list and flag CDN usage
43 | def factory(self, url, header):
44 | try:
45 | ip_list = []
46 | host = self.parse_host(url)
47 | pattern = r'^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$'
48 | if re.match(pattern, host):
49 | return 0, ip_list
50 | record = GetRecord(host)
51 | checkcdn = CheckCDN()
52 | ip_list = record.dns_resolve_A().get("A")
53 | asn_list = record.getASN_ByIP(ip_list).get("ASN")
54 | cname_list = record.dns_resolve_CNAME().get("CNAME")
55 | if ip_list:
56 | if checkcdn.check_cdn_cidr(ip_list) or checkcdn.check_cdn_asn(asn_list) or checkcdn.check_cname_keyword(cname_list) or checkcdn.check_header_key(header):
57 | return "是", ip_list
58 | else:
59 | return "否", ip_list
60 | else:
61 | return 0, ip_list
62 |
63 | # items = socket.getaddrinfo(host, None)
64 | # for ip in items:
65 | # if ip[4][0] not in ip_list:
66 | # ip_list.append(ip[4][0])
67 | # if len(ip_list) > 1:
68 | # return 1, ip_list
69 | # else:
70 | # for cdn in self.cdn_ip_cidr_file:
71 | # if ipaddress.ip_address(ip_list[0]) in ipaddress.ip_network(cdn):
72 | # return 1, ip_list
73 | # return 0, ip_list
74 | except Exception as e:
75 | return 0, ip_list
76 |
77 |
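`parse_host` above takes the netloc of a URL and strips any port suffix. The same extraction can be sketched with the stdlib alone (standalone reimplementation for illustration):

```python
from urllib.parse import urlsplit
import re

def parse_host(url):
    # netloc keeps the port, so strip a trailing ":<digits>" if present
    host = urlsplit(url).netloc
    return re.sub(r':\d+$', '', host)

print(parse_host("https://example.com:8443/admin"))
```

`urlsplit(...).hostname` would yield the same result directly (and additionally lowercases the host), which is an alternative to the regex.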
--------------------------------------------------------------------------------
/lib/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 |
--------------------------------------------------------------------------------
/lib/checkenv.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import platform
5 | import os
6 | import time
7 | import requests
8 | import hashlib
9 | from config.config import head,FingerPrint_Update
10 | from config.data import path,logging
11 |
12 | class CheckEnv:
13 | def __init__(self):
14 | self.pyVersion = platform.python_version()
15 | self.path = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
16 | self.python_check()
17 | self.path_check()
18 | if FingerPrint_Update:
19 | self.update()
20 |
21 | def python_check(self):
22 | if tuple(map(int, self.pyVersion.split(".")[:2])) < (3, 6):
23 | logging.error("此Python版本 ('{0}') 不兼容,成功运行程序你必须使用版本 >= 3.6 (访问 https://www.python.org/downloads/ )".format(self.pyVersion))
24 | exit(0)
25 |
26 | def path_check(self):
27 | try:
28 | os.path.isdir(self.path)
29 | except UnicodeEncodeError:
30 | errMsg = "your system does not properly handle non-ASCII paths. "
31 | errMsg += "Please move the project root directory to another location"
32 | logging.error(errMsg)
33 | exit(0)
34 | path.home = self.path
35 | # path.output = os.path.join(self.path,'output')
36 | path.apioutputdir = os.path.join(self.path, 'results')
37 | path.library = os.path.join(self.path, 'library')
38 | if not os.path.exists(path.apioutputdir):
39 | warnMsg = "The apioutput folder is not created, it will be created automatically"
40 | logging.warning(warnMsg)
41 | os.mkdir(path.apioutputdir)
42 |
43 | def update(self):
44 | try:
45 | is_update = True
46 | nowTime = time.strftime("%Y%m%d%H%M%S", time.localtime())
47 |             logging.info("Updating the fingerprint database online...")
48 | Fingerprint_Page = ""
49 | response = requests.get(Fingerprint_Page,timeout = 10,headers = head)
50 | filepath = os.path.join(path.library,"finger.json")
51 | bakfilepath = os.path.join(path.library,"finger_{}.json.bak".format(nowTime))
52 | with open(filepath,"rb") as file:
53 | if hashlib.md5(file.read()).hexdigest() == hashlib.md5(response.content).hexdigest():
54 |                     logging.info("The fingerprint database is already up to date")
55 | is_update = False
56 | if is_update:
57 |                 logging.info("Fingerprint database update detected; synchronizing...")
58 | os.rename(filepath,bakfilepath)
59 | with open(filepath,"wb") as file:
60 | file.write(response.content)
61 | with open(filepath,'rb') as file:
62 |                     Msg = "Update succeeded!" if hashlib.md5(file.read()).hexdigest() == hashlib.md5(response.content).hexdigest() else "Update failed"
63 | logging.info(Msg)
64 | except Exception as e:
65 |             logging.warning("Online fingerprint database update failed!")
66 |
67 |
--------------------------------------------------------------------------------
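The `python_check` gate above needs a numeric version comparison: comparing dotted version strings as plain strings misorders releases like 3.10. A standalone sketch (an assumption of this sketch: only major/minor components matter for the check):

```python
def meets_minimum(version: str, minimum=(3, 6)) -> bool:
    """Compare a dotted version string numerically, not lexicographically."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

# Lexicographic comparison gets this wrong: as strings, "3.10.0" < "3.6".
assert ("3.10.0" < "3.6") is True
assert meets_minimum("3.10.0") is True
assert meets_minimum("2.7.18") is False
```

Tuple comparison keeps working past Python 3.9, where the string comparison silently starts rejecting valid interpreters.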
/lib/cmdline.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import argparse
5 |
6 | from config.color import color
7 | from config.data import logging
8 |
9 |
10 | def cmdline():
11 | parser = argparse.ArgumentParser(description=color.green("\n[*] 常用方法1(如下即为资产测绘+指纹识别):\n"
12 | " [*] 功能:fscan就是仅fofa,fqscan就是fofa+quake。只有加上-finger之后才可以进行指纹识别,加上-vest进行归属公司查询\n"
13 | " [*] 输出: xx公司.xlsx 包含资产测绘原结果及经过指纹识别后的新结果\n"
14 | " [*] 以下为针对单一域名或者IP、IP段进行资产测绘"
15 | " python3 SyntaxFinger.py -d example.com -m fscan -o xx公司\n"
16 | " python3 SyntaxFinger.py -d example.com -m fqscan -o xx公司 -finger\n"
17 | " python3 SyntaxFinger.py -d 192.168.1.1 -m fqhscan -o xx公司\n"
18 | " python3 SyntaxFinger.py -d 192.168.1.1 -m fqhscan -o xx公司 -finger\n"
19 | " python3 SyntaxFinger.py -d 192.168.1.1/24 -m fscan -o xx公司 \n"
20 | " python3 SyntaxFinger.py -d 192.168.1.1/24 -m fscan -o xx公司 -finger\n\n"
21 | " [*] 以下为使用自定义命令进行资产测绘(考虑到格式不兼容,自己去看格式是什么)\n"
22 | " python3 SyntaxFinger.py -search (body=\"mdc-section\") -m fscan -o xx公司 -finger\n"
23 | " python3 SyntaxFinger.py -search (body=\"mdc-section\") -m fscan -o xx公司 -finger -vest\n\n"
24 | "[*] 常用方法2(如下即为导入资产+指纹识别):\n"
25 | " [*] 功能:-u导入单个url,-f导入url文件\n"
26 | " [*] 输出: xx公司.xlsx 经过指纹识别后的结果\n"
27 | " python3 SyntaxFinger.py -u http://www.example.com -finger -vest\n"
28 | " python3 SyntaxFinger.py -f 1.txt -finger -vest\n"
29 | ""), formatter_class=argparse.RawTextHelpFormatter)
30 |
31 |     api = parser.add_argument_group("Asset-mapping APIs + fingerprinting")
32 |     api.add_argument("-d", dest='icp', type=str,
33 |                      help="1. Accepts a root domain or a single IP; the mapping engines are queried automatically by domain||cert or by IP\n"
34 |                           "2. Accepts an IP range, but with ranges -m fscan (FOFA only) is recommended; otherwise Hunter credits drain quickly and Quake is slow")
35 |     api.add_argument("-search", dest='search', type=str,
36 |                      help="Custom query expression; only syntax supported by the mapping engines may be used")
37 |     api.add_argument("-m", dest='method', type=str,
38 |                      help="One of fscan qscan hscan fqscan qhscan fqhscan, meaning FOFA, Quake, Hunter, or a combined query")
39 | 
40 |     finger = parser.add_argument_group("Manually imported assets + fingerprinting")
41 |     finger.add_argument('-u', dest='url', type=str, help="Input a single URL, then add -finger to fingerprint it")
42 |     finger.add_argument('-f', dest='file', type=str, help="Input a file of URLs, then add -finger to fingerprint them")
43 | 
44 |     common = parser.add_argument_group("Common options")
45 |     common.add_argument("-finger", action="store_true", default=False, help="Enable fingerprint identification")
46 |     common.add_argument("-proxy", dest='proxy', type=str, help="Use a proxy")
47 |     common.add_argument("-vest", action="store_true", default=False, help="Identify asset ownership (ICP lookup)")
48 |     common.add_argument("-o", dest='projectname', type=str,
49 |                      help="Output Excel file name; ideally the full company name, without extension", default=False)
50 | args = parser.parse_args()
51 | return args
52 |
--------------------------------------------------------------------------------
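A reduced, standalone sketch of the parser above (only a subset of the flags; the `dest` names mirror those in `cmdline()`), showing how a typical invocation parses:

```python
import argparse

# Reduced parser mirroring a subset of the SyntaxFinger flags (illustrative only).
parser = argparse.ArgumentParser()
parser.add_argument("-d", dest="icp", type=str)
parser.add_argument("-m", dest="method", type=str)
parser.add_argument("-finger", action="store_true", default=False)
parser.add_argument("-o", dest="projectname", type=str, default=False)

# Equivalent of: python3 SyntaxFinger.py -d example.com -m fscan -finger -o acme
args = parser.parse_args(["-d", "example.com", "-m", "fscan", "-finger", "-o", "acme"])
assert args.icp == "example.com"
assert args.method == "fscan"
assert args.finger is True
assert args.projectname == "acme"
```

Note that argparse accepts single-dash multi-character options like `-finger`; `method` is later matched by substring ("f", "q", "h") in `initoptions.api_data`.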
/lib/identify.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import os
5 | import re
6 | import json
7 | from config.data import path
8 | from config.color import color
9 | from urllib.parse import urlsplit
10 | from config.data import logging, Webinfo
11 | from urllib.parse import urlparse
12 |
13 |
14 | class Identify:
15 | def __init__(self):
16 | filepath_old = os.path.join(path.library, 'finger_old.json')
17 | with open(filepath_old, 'r', encoding='utf-8') as file_old:
18 | self.all_fingerprint = json.load(file_old)
19 |         logging.success(f"Successfully loaded {len(self.all_fingerprint)} internet Nday fingerprints")
20 |
21 | def run(self, datas):
22 | self.datas = datas
23 | cms = self.which_app(self.datas["header"], self.datas["body"], self.datas["title"], self.datas["faviconhash"])
24 | if self.checkbackground():
25 | cms.append("后台")
26 | language = self.get_language(self.datas["header"])
27 | cms.extend(language)
28 | self.datas["cms"] = ' || '.join(set(cms))
29 |         # _url = "://{0}".format(urlsplit(self.datas['url']).netloc)  # prepend :// to reduce false positives
30 | _webinfo = str(Webinfo.result)
31 | if self.datas['url'] in _webinfo:
32 | pass
33 | else:
34 | results = {"url": self.datas["url"], "cms": self.datas["cms"], "title": self.datas["title"],
35 | "status": self.datas["status"], "Server": self.datas['Server'],
36 | "size": self.datas["size"], "iscdn": self.datas["iscdn"], "ip": self.datas["ip"],
37 | "address": self.datas["address"], "isp": self.datas["isp"], "icp": self.datas["icp"]}
38 | if cms:
39 | Webinfo.result.insert(0, results)
40 | else:
41 | Webinfo.result.append(results)
42 | if self.datas['status'] == 200:
43 | msg_status = color.green(self.datas['status'])
44 | elif self.datas['status'] == 301 or self.datas['status'] == 302:
45 | msg_status = color.blue(self.datas['status'])
46 | else:
47 | msg_status = color.yellow(self.datas['status'])
48 |         Msg = "{0} | Finger:[{1}] | Status:[{2}] | Title:\"{3}\" ".format(self.datas["url"],
49 | color.green(self.datas['cms']),
50 | msg_status,
51 | color.blue(self.datas['title'])
52 | )
53 | logging.success(Msg)
54 |
55 |     # fingerprint validation
56 | def check_fingerprint(self, finger, resp_header, resp_body, html_title, favicon_hash):
57 | allowed_method = ['keyword', 'faviconhash']
58 | allowed_keyword_position = ['title', 'body', 'header']
59 | method = finger.get('method', None)
60 | match_content = finger.get('match', None)
61 | if method not in allowed_method:
62 |             print("Malformed fingerprint (method):", finger)
63 | return False
64 | if not match_content:
65 |             print("Malformed fingerprint (match):", finger)
66 | return False
67 | if method == "keyword":
68 | position = finger.get('position', None)
69 | if position not in allowed_keyword_position:
70 |                 print("Malformed fingerprint (position):", finger)
71 | return False
72 | if position == "title":
73 | return match_content in html_title
74 | elif position == "body":
75 | return match_content in resp_body
76 | elif position == "header":
77 | return match_content in resp_header
78 | elif method == "faviconhash":
79 | return match_content == favicon_hash
80 | return False
81 |
82 | def relation_split(self, relation):
83 | relation_split_list = relation.split(" ")
84 | index = 0
85 | while index < len(relation_split_list):
86 | if relation_split_list[index].startswith("(") and relation_split_list[index] != "(":
87 | relation_split_list[index] = relation_split_list[index][1:]
88 | relation_split_list.insert(index, "(")
89 | if relation_split_list[index].endswith(")") and relation_split_list[index] != ")":
90 | relation_split_list[index] = relation_split_list[index][:-1]
91 | relation_split_list.insert(index + 1, ")")
92 | index = index + 1
93 | return relation_split_list
94 |
95 | def has_app(self, fingerprint, resp_header, resp_body, html_title, favicon_hash):
96 | fingers_result = {}
97 | fingers = fingerprint['finger']
98 | relations = fingerprint['relation']
99 | for k in fingers.keys():
100 | finger = fingers[k]
101 | fingers_result[k] = self.check_fingerprint(finger, resp_header, resp_body, html_title, favicon_hash)
102 | for relation in relations:
103 |             # split "index1 and index2" -> ["index1", "and", "index2"]
104 | relation_split_list = self.relation_split(relation)
105 | for i in range(len(relation_split_list)):
106 | if relation_split_list[i] not in ["and", "or", "(", ")"]:
107 |                     # a matched index key maps to "True"; unknown keys default to "False"
108 |                     relation_split_list[i] = str(fingers_result.get(relation_split_list[i], False))
109 |             # join the token list into an expression: ["True", "and", "False"] -> "True and False"
110 |             relation_replaced = " ".join(relation_split_list)
111 |             # evaluate the boolean expression; tokens are limited to True/False/and/or/parentheses
112 |             if eval(relation_replaced):
113 | return True, relation
114 | return False, None
115 |
116 | def which_app(self, resp_header, resp_body, html_title, favicon_hash=None):
117 | app_list = []
118 | if favicon_hash is None:
119 | favicon_hash = "x"
120 | for fingerprint in self.all_fingerprint:
121 | is_has_app, matched_relation = self.has_app(fingerprint, resp_header, resp_body, html_title, favicon_hash)
122 | if is_has_app:
123 | # fingerprint['matched_relation'] = matched_relation
124 |                 ## classify the fingerprint by exploit availability (elif/else so day_type is always bound)
125 |                 if fingerprint["day_type"] == 0:
126 |                     day_type = "0day!!!"
127 |                 elif fingerprint["day_type"] == 1:
128 |                     day_type = "1day!"
129 |                 else:
130 |                     day_type = "generic"
131 | app_list.append(fingerprint["product"] + f"({day_type})" + "[" +fingerprint["team"] + "]")
132 | return app_list
133 |
134 |     # dedicated helper that checks for admin/login backends
135 | def checkbackground(self):
136 |
137 | url = self.datas["url"].split("|")[0].strip()
138 |
139 |         # keyword matching
140 | if "后台" in self.datas["title"] or "系统" in self.datas["title"] or "登陆" in self.datas["title"] or "管理" in \
141 | self.datas["title"]:
142 | return True
143 |
144 | elif ("管理员" in self.datas["body"] and "登陆" in self.datas["body"] or
145 | "登陆系统" in self.datas["body"]):
146 | return True
147 |
148 | try:
149 | path = urlparse(url).path.lower()
150 | bak_url_list = ("/login;jsessionid=", "/login.aspx?returnurl=", "/auth/login",
151 | "/login/?next=", "wp-login.php", "/ui/#login", "/admin", "/login.html",
152 | "/admin_login.asp")
153 |
154 | if path in bak_url_list:
155 | return True
156 | except:
157 | pass
158 |
159 |         if (re.search(r"
--------------------------------------------------------------------------------
/lib/ip2Region.py:
--------------------------------------------------------------------------------
6 | " Date : 2015-11-06
7 | """
8 | import struct, io, socket, sys
9 |
10 | class Ip2Region(object):
11 |
12 | def __init__(self, dbfile):
13 | self.__INDEX_BLOCK_LENGTH = 12
14 | self.__TOTAL_HEADER_LENGTH = 8192
15 | self.__f = None
16 | self.__headerSip = []
17 | self.__headerPtr = []
18 | self.__headerLen = 0
19 | self.__indexSPtr = 0
20 | self.__indexLPtr = 0
21 | self.__indexCount = 0
22 | self.__dbBinStr = ''
23 | self.initDatabase(dbfile)
24 |
25 | def memorySearch(self, ip):
26 | """
27 | " memory search method
28 | " param: ip
29 | """
30 | if not ip.isdigit(): ip = self.ip2long(ip)
31 |
32 | if self.__dbBinStr == '':
33 | self.__dbBinStr = self.__f.read() #read all the contents in file
34 | self.__indexSPtr = self.getLong(self.__dbBinStr, 0)
35 | self.__indexLPtr = self.getLong(self.__dbBinStr, 4)
36 | self.__indexCount = int((self.__indexLPtr - self.__indexSPtr)/self.__INDEX_BLOCK_LENGTH)+1
37 |
38 | l, h, dataPtr = (0, self.__indexCount, 0)
39 | while l <= h:
40 | m = int((l+h) >> 1)
41 | p = self.__indexSPtr + m*self.__INDEX_BLOCK_LENGTH
42 | sip = self.getLong(self.__dbBinStr, p)
43 |
44 | if ip < sip:
45 | h = m -1
46 | else:
47 | eip = self.getLong(self.__dbBinStr, p+4)
48 | if ip > eip:
49 |                     l = m + 1
50 | else:
51 | dataPtr = self.getLong(self.__dbBinStr, p+8)
52 | break
53 |
54 | if dataPtr == 0: raise Exception("Data pointer not found")
55 |
56 | return self.returnData(dataPtr)
57 |
58 | def binarySearch(self, ip):
59 | """
60 | " binary search method
61 | " param: ip
62 | """
63 | if not ip.isdigit(): ip = self.ip2long(ip)
64 |
65 | if self.__indexCount == 0:
66 | self.__f.seek(0)
67 | superBlock = self.__f.read(8)
68 | self.__indexSPtr = self.getLong(superBlock, 0)
69 | self.__indexLPtr = self.getLong(superBlock, 4)
70 | self.__indexCount = int((self.__indexLPtr - self.__indexSPtr) / self.__INDEX_BLOCK_LENGTH) + 1
71 |
72 | l, h, dataPtr = (0, self.__indexCount, 0)
73 | while l <= h:
74 | m = int((l+h) >> 1)
75 | p = m*self.__INDEX_BLOCK_LENGTH
76 |
77 | self.__f.seek(self.__indexSPtr+p)
78 | buffer = self.__f.read(self.__INDEX_BLOCK_LENGTH)
79 | sip = self.getLong(buffer, 0)
80 | if ip < sip:
81 | h = m - 1
82 | else:
83 | eip = self.getLong(buffer, 4)
84 | if ip > eip:
85 | l = m + 1
86 | else:
87 | dataPtr = self.getLong(buffer, 8)
88 | break
89 |
90 | if dataPtr == 0: raise Exception("Data pointer not found")
91 |
92 | return self.returnData(dataPtr)
93 |
94 | def btreeSearch(self, ip):
95 | """
96 | " b-tree search method
97 | " param: ip
98 | """
99 | if not ip.isdigit(): ip = self.ip2long(ip)
100 |
101 | if len(self.__headerSip) < 1:
102 | headerLen = 0
103 | #pass the super block
104 | self.__f.seek(8)
105 | #read the header block
106 | b = self.__f.read(self.__TOTAL_HEADER_LENGTH)
107 | #parse the header block
108 | for i in range(0, len(b), 8):
109 | sip = self.getLong(b, i)
110 | ptr = self.getLong(b, i+4)
111 | if ptr == 0:
112 | break
113 | self.__headerSip.append(sip)
114 | self.__headerPtr.append(ptr)
115 | headerLen += 1
116 | self.__headerLen = headerLen
117 |
118 | l, h, sptr, eptr = (0, self.__headerLen, 0, 0)
119 | while l <= h:
120 | m = int((l+h) >> 1)
121 |
122 | if ip == self.__headerSip[m]:
123 | if m > 0:
124 | sptr = self.__headerPtr[m-1]
125 | eptr = self.__headerPtr[m]
126 | else:
127 | sptr = self.__headerPtr[m]
128 | eptr = self.__headerPtr[m+1]
129 | break
130 |
131 | if ip < self.__headerSip[m]:
132 | if m == 0:
133 | sptr = self.__headerPtr[m]
134 | eptr = self.__headerPtr[m+1]
135 | break
136 | elif ip > self.__headerSip[m-1]:
137 | sptr = self.__headerPtr[m-1]
138 | eptr = self.__headerPtr[m]
139 | break
140 | h = m - 1
141 | else:
142 | if m == self.__headerLen - 1:
143 | sptr = self.__headerPtr[m-1]
144 | eptr = self.__headerPtr[m]
145 | break
146 | elif ip <= self.__headerSip[m+1]:
147 | sptr = self.__headerPtr[m]
148 | eptr = self.__headerPtr[m+1]
149 | break
150 | l = m + 1
151 |
152 | if sptr == 0: raise Exception("Index pointer not found")
153 |
154 | indexLen = eptr - sptr
155 | self.__f.seek(sptr)
156 | index = self.__f.read(indexLen + self.__INDEX_BLOCK_LENGTH)
157 |
158 | l, h, dataPrt = (0, int(indexLen/self.__INDEX_BLOCK_LENGTH), 0)
159 | while l <= h:
160 | m = int((l+h) >> 1)
161 | offset = int(m * self.__INDEX_BLOCK_LENGTH)
162 | sip = self.getLong(index, offset)
163 |
164 | if ip < sip:
165 | h = m - 1
166 | else:
167 | eip = self.getLong(index, offset+4)
168 | if ip > eip:
169 |                     l = m + 1
170 | else:
171 | dataPrt = self.getLong(index, offset+8)
172 | break
173 |
174 | if dataPrt == 0: raise Exception("Data pointer not found")
175 |
176 | return self.returnData(dataPrt)
177 |
178 | def initDatabase(self, dbfile):
179 | """
180 | " initialize the database for search
181 | " param: dbFile
182 | """
183 | try:
184 | self.__f = io.open(dbfile, "rb")
185 | except IOError as e:
186 | print("[Error]: %s" % e)
187 | sys.exit()
188 |
189 | def returnData(self, dataPtr):
190 | """
191 | " get ip data from db file by data start ptr
192 | " param: dsptr
193 | """
194 | dataLen = (dataPtr >> 24) & 0xFF
195 | dataPtr = dataPtr & 0x00FFFFFF
196 |
197 | self.__f.seek(dataPtr)
198 | data = self.__f.read(dataLen)
199 |
200 | return {
201 | "city_id": self.getLong(data, 0),
202 | "region" : data[4:]
203 | }
204 |
205 | def ip2long(self, ip):
206 | _ip = socket.inet_aton(ip)
207 | return struct.unpack("!L", _ip)[0]
208 |
209 | def isip(self, ip):
210 | p = ip.split(".")
211 |
212 | if len(p) != 4 : return False
213 | for pp in p:
214 | if not pp.isdigit() : return False
215 | if len(pp) > 3 : return False
216 | if int(pp) > 255 : return False
217 |
218 | return True
219 |
220 | def getLong(self, b, offset):
221 | if len(b[offset:offset+4]) == 4:
222 | return struct.unpack('I', b[offset:offset+4])[0]
223 | return 0
224 |
225 | def close(self):
226 | if self.__f != None:
227 | self.__f.close()
228 |
229 | self.__dbBinStr = None
230 | self.__headerPtr = None
231 | self.__headerSip = None
232 |
--------------------------------------------------------------------------------
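`ip2long` in `Ip2Region` above boils down to packing the dotted quad into a big-endian 32-bit integer; a standalone sketch with the inverse conversion for reference:

```python
import socket
import struct

def ip2long(ip: str) -> int:
    """Pack a dotted-quad IPv4 address into a 32-bit big-endian integer."""
    return struct.unpack("!L", socket.inet_aton(ip))[0]

def long2ip(num: int) -> str:
    """Inverse: unpack a 32-bit integer back into dotted-quad form."""
    return socket.inet_ntoa(struct.pack("!L", num))

assert ip2long("1.2.3.4") == 0x01020304   # each octet is one byte of the integer
assert long2ip(ip2long("192.168.1.1")) == "192.168.1.1"
```

The binary/b-tree searches then compare this integer against the start/end IPs of each index block.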
/lib/ipAttributable.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import os
5 | from config.data import path, logging
6 | from config.data import Webinfo
7 | from lib.ip2Region import Ip2Region
8 |
9 |
10 | class IpAttributable:
11 | def __init__(self):
12 | dbFile = os.path.join(path.library, "data", "ip2region.db")
13 | self.searcher = Ip2Region(dbFile)
14 |         logging.info("Looking up IP geolocation")
15 |
16 |
17 | def ipCollection(self):
18 | ip_list = []
19 | for value in Webinfo.result:
20 | if value["iscdn"] == 0 and value["ip"] not in ip_list:
21 | ip_list.append(value["ip"])
22 | return ip_list
23 |
24 | def getAttributable(self):
25 | ips = self.ipCollection()
26 | try:
27 | for ip in ips:
28 | addr = []
29 | data = str(self.searcher.binarySearch(ip)["region"].decode('utf-8')).split("|")
30 |             isp = "" if data[4] == "0" else data[4]
31 | data.pop(4)
32 | data.pop(1)
33 | for ad in data:
34 | if ad != "0" and ad not in addr and "" != ad:
35 | addr.append(ad)
36 | address = ','.join(addr)
37 | for value in Webinfo.result:
38 | if value['ip'] == ip:
39 | Webinfo.result[Webinfo.result.index(value)]["address"] = address
40 | Webinfo.result[Webinfo.result.index(value)]["isp"] = isp
41 | except Exception as e:
42 | pass
43 |         logging.info("IP geolocation lookup complete")
44 |
45 |
--------------------------------------------------------------------------------
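`getAttributable` splits the ip2region record on `|` and drops the `'0'` placeholder fields. A standalone sketch of that parsing (the field layout country|area|province|city|isp is inferred from the lookup code above, not documented here):

```python
def parse_region(region: str):
    """Split an ip2region record like '中国|0|浙江省|杭州市|电信' into
    (address, isp), dropping '0' placeholder fields.
    Assumed layout: country|area|province|city|isp."""
    fields = region.split("|")
    isp = "" if fields[4] == "0" else fields[4]
    addr = []
    for f in fields[:4]:
        if f not in ("0", "") and f not in addr:
            addr.append(f)
    return ",".join(addr), isp

address, isp = parse_region("中国|0|浙江省|杭州市|电信")
assert address == "中国,浙江省,杭州市"
assert isp == "电信"
```

Keeping the placeholder check as a whole-field comparison (rather than `replace("0", "")`) avoids mangling ISP names that legitimately contain the digit 0.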
/lib/options.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import os
5 |
6 | from tldextract import tldextract
7 |
8 | from api.fofa import Fofa
9 | from api.quake import Quake
10 | from api.hunter import Hunter
11 | from config.data import Urls, logging, Ips, Search, Webinfo, Urlerror
12 |
13 |
14 | class initoptions:
15 | def __init__(self, args):
16 | self.key = ["\"", "“", "”", "\\", "'"]
17 | Urls.url = []
18 | Webinfo.result = []
19 | Urlerror.result = []
20 | Search.search = []
21 | self._finger = args.finger
22 | self._url = args.url
23 | self._file = args.file
24 | self.projectname = args.projectname
25 | self.method = args.method
26 | self.icp = args.icp
27 | self.search = args.search
28 | self.vest = args.vest
29 | self.proxies = args.proxy
30 | if self.icp:
31 | self.syntax = self.check_icp(self.icp)
32 | if self.search:
33 | self.syntax = self.search
34 |
35 | def api_data(self):
36 | res = []
37 | fofa_res = []
38 | quake_res = []
39 | hunter_res = []
40 | if "f" in self.method:
41 | fofa_init = Fofa(self.syntax)
42 | fofa_res = fofa_init.run()
43 | res.extend(fofa_res)
44 |         if "q" in self.method:
45 |             quake_init = Quake(self.syntax)
46 |             quake_res = quake_init.run()
47 |             res.extend(quake_res)
48 | if "h" in self.method:
49 | hunter_init = Hunter(self.syntax)
50 | hunter_res = hunter_init.run()
51 | res.extend(hunter_res)
52 | return res
53 |
54 |
55 | def check_icp(self, icp):
56 | if "/" in icp:
57 | syntax = f"(ip=\"{icp}\")"
58 | else:
59 | tld = tldextract.extract(icp)
60 | # print(tld)
61 | if tld.suffix != '':
62 | domain = tld.domain + '.' + tld.suffix
63 | syntax = f"(domain=\"{domain}\"||cert=\"{domain}\")"
64 | else:
65 | ip = tld.domain
66 | syntax = f"(ip=\"{ip}\")"
67 | return syntax
68 |
69 | def target(self):
70 | if self._url:
71 | self.check_url(self._url)
72 | elif self._file:
73 | if os.path.exists(self._file):
74 | with open(self._file, 'r') as f:
75 | for i in f:
76 | self.check_url(i.strip())
77 | else:
78 |             errMsg = "File {0} was not found".format(self._file)
79 | logging.error(errMsg)
80 | exit(0)
81 |
82 | def check_url(self, url):
83 | for key in self.key:
84 | if key in url:
85 | url = url.replace(key, "")
86 | if not url.startswith('http') and url:
87 |             # no scheme given: try both http and https against the target
88 | Urls.url.append("http://" + str(url))
89 | Urls.url.append("https://" + str(url))
90 | elif url:
91 | Urls.url.append(url)
92 |
93 |
94 | def get_ip(self):
95 | try:
96 | if self._ip:
97 | if "-" in self._ip:
98 | start, end = [self.ip_num(x) for x in self._ip.split('-')]
99 | iplist = [self.num_ip(num) for num in range(start, end + 1) if num & 0xff]
100 | for ip in iplist:
101 | Ips.ip.append(ip)
102 | else:
103 | Ips.ip.append(self._ip)
104 | if Ips.ip:
105 | run = Fofa()
106 | except Exception as e:
107 | logging.error(e)
108 |             logging.error("Invalid IP format; valid formats: 192.168.10.1, 192.168.10.1/24 or 192.168.10.10-192.168.10.50")
109 | exit(0)
110 |
111 | def ip_num(self, ip):
112 | ip = [int(x) for x in ip.split('.')]
113 | return ip[0] << 24 | ip[1] << 16 | ip[2] << 8 | ip[3]
114 |
115 | def num_ip(self, num):
116 | return '%s.%s.%s.%s' % ((num & 0xff000000) >> 24,
117 | (num & 0x00ff0000) >> 16,
118 | (num & 0x0000ff00) >> 8,
119 | num & 0x000000ff)
120 |
--------------------------------------------------------------------------------
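`ip_num`/`num_ip` convert between dotted-quad strings and 32-bit integers, and the range expansion in `get_ip` skips addresses whose last octet is 0 via `num & 0xff`. A standalone round-trip sketch:

```python
def ip_num(ip: str) -> int:
    """Convert a dotted-quad IPv4 string to a 32-bit integer."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return a << 24 | b << 16 | c << 8 | d

def num_ip(num: int) -> str:
    """Convert a 32-bit integer back to dotted-quad form."""
    return "%d.%d.%d.%d" % ((num >> 24) & 0xFF, (num >> 16) & 0xFF,
                            (num >> 8) & 0xFF, num & 0xFF)

assert ip_num("192.168.10.1") == 3232238081
assert num_ip(ip_num("10.0.0.255")) == "10.0.0.255"
# The `num & 0xff` filter skips .0 network addresses when expanding a range:
assert [num_ip(n) for n in range(ip_num("10.0.0.254"), ip_num("10.0.1.2") + 1)
        if n & 0xFF] == ["10.0.0.254", "10.0.0.255", "10.0.1.1", "10.0.1.2"]
```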
/lib/proxy.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from config.data import logging
4 | from lib.cmdline import cmdline
5 |
6 | # proxy configuration
7 | def proxies():
8 | arg = cmdline()
9 | 
10 | proxies = None
11 | if arg.proxy == "2":
12 |
13 | proxies = {
14 | "http": "http://127.0.0.1:8080",
15 | "https": "http://127.0.0.1:8080"
16 | }
17 |         logging.info("Proxying through Burp: %s" % proxies)
18 |     if arg.proxy == "1":
19 |         # tunnel domain:port
20 |         tunnel = "xxxx.kdltps.com:15818"
21 | 
22 |         # username/password authentication
23 | username = "xxxxx"
24 | password = "xxxxxx"
25 | proxies = {
26 | "http": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": tunnel},
27 | "https": "http://%(user)s:%(pwd)s@%(proxy)s/" % {"user": username, "pwd": password, "proxy": tunnel}
28 | }
29 | # proxies = {
30 | # "http": arg.proxy,
31 | # "https": arg.proxy
32 | # }
33 |         logging.info("Testing proxy pool connectivity: %s" % proxies)
34 |         # test page
35 |         target_url = "https://dev.kdlapi.com/testproxy"
36 | 
37 |         # send the request through the tunnel
38 |         response = requests.get(target_url, proxies=proxies)
39 | 
40 |         # check the response
41 |         if response.status_code == 200:
42 |             logging.success("Proxy pool loaded and stable: %s" % proxies)
43 |
44 | return proxies
45 |
--------------------------------------------------------------------------------
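`proxies()` builds a requests-style proxies mapping, optionally embedding basic-auth credentials in the proxy URL. A standalone sketch (the host and credentials here are placeholders, not real endpoints):

```python
def build_proxies(tunnel: str, username: str = None, password: str = None) -> dict:
    """Build a requests-style proxies mapping for an HTTP proxy/tunnel.
    Credentials are optional and, when given, are embedded in the URL."""
    if username and password:
        proxy_url = "http://%s:%s@%s/" % (username, password, tunnel)
    else:
        proxy_url = "http://%s" % tunnel
    # requests routes both plain and TLS traffic through the same HTTP proxy
    return {"http": proxy_url, "https": proxy_url}

assert build_proxies("127.0.0.1:8080") == {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}
assert build_proxies("proxy.example.com:15818", "user", "pass")["http"] == \
    "http://user:pass@proxy.example.com:15818/"
```

The same dict can then be passed to every `requests.get(..., proxies=proxies)` call, as `req.py` does below.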
/lib/req.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | # author = guchangan1
4 | import re
5 | import ssl
6 |
7 | import requests
8 | import random
9 | import codecs
10 | import mmh3
11 | from fake_useragent import UserAgent
12 | from lib.IpFactory import IPFactory
13 | from urllib.parse import urlsplit, urljoin
14 | from config.data import Urls, Webinfo,Urlerror,logging
15 | from config import config
16 | from lib.identify import Identify
17 | from bs4 import BeautifulSoup
18 | from lib.proxy import proxies
19 | import urllib3
20 | import warnings
21 | from urllib3.exceptions import InsecureRequestWarning
22 |
23 | urllib3.disable_warnings()
24 | warnings.filterwarnings('ignore', category=InsecureRequestWarning)
25 | from concurrent.futures import ThreadPoolExecutor
26 |
27 |
28 | class Request:
29 |     def __init__(self):
30 |         # load fingerprints and initialize the fingerprint library
31 |         self.checkcms = Identify()
32 |         # load ip2region/CDN data and initialize the IP library
33 |         self.ipFactory = IPFactory()
34 |         self.proxies = proxies()
35 |         # thread pool: fingerprint the URLs concurrently
36 |         logging.info(f"Starting 0/1/Nday fingerprinting on {len(set(Urls.url))} deduplicated URLs")
37 |         with ThreadPoolExecutor(config.threads) as pool:
38 |             run = pool.map(self.apply, set(Urls.url))
39 |         logging.info(f"Fingerprinting finished: {len(Webinfo.result)} live URLs, {len(Urlerror.result)} failed requests")
40 |
41 |
42 |     # send the HTTP request and collect the response
43 | def apply(self, url):
44 | try:
45 | with requests.get(url, timeout=15, headers=self.get_headers(), cookies=self.get_cookies(), verify=False,
46 | allow_redirects=False, proxies=self.proxies, stream=True) as response:
47 | if int(response.headers.get("content-length", default=1000)) > 100000:
48 | self.response(url, response, True)
49 | else:
50 | self.response(url, response)
51 | if response.status_code == 302:
52 | if 'http' in response.headers.get('Location','') :
53 | redirect_url = response.headers.get('Location')
54 | else:
55 | redirect_url = urljoin(response.url, response.headers.get('Location'))
56 | if redirect_url:
57 | with requests.get(redirect_url, timeout=5, headers=self.get_headers(),
58 | cookies=self.get_cookies(), verify=False,
59 | allow_redirects=False, proxies=self.proxies, stream=True) as response2:
60 | if int(response2.headers.get("content-length", default=1000)) > 100000:
61 | self.response(redirect_url, response2, True)
62 | else:
63 | self.response(redirect_url, response2)
64 | except KeyboardInterrupt:
65 |             logging.error("Interrupted by user; aborting!")
66 | except requests.exceptions.RequestException as e:
67 |             results = {"url": str(url), "cms": "see the error for the cause", "title": str(e),
68 | "status": "-", "Server": "-",
69 | "size": "-", "iscdn": "-", "ip": "-",
70 | "address": "-", "isp": "-", "icp": "-"}
71 | Urlerror.result.append(results)
72 |
73 | except ConnectionError as e:
74 |             results = {"url": str(url), "cms": "response timeout", "title": str(e),
75 | "status": "-", "Server": "-",
76 | "size": "-", "iscdn": "-", "ip": "-",
77 | "address": "-", "isp": "-", "icp": "-"}
78 | Urlerror.result.append(results)
79 |
80 | except Exception as e:
81 | results = {"url": str(url), "cms": "-", "title": str(e),
82 | "status": "-", "Server": "-",
83 | "size": "-", "iscdn": "-", "ip": "-",
84 | "address": "-", "isp": "-", "icp": "-"}
85 |
86 | Urlerror.result.append(results)
87 |
88 |     # process the response; package the fields to fingerprint into datas
89 | def response(self, url, response, ignore=False):
90 | if ignore:
91 | html = ""
92 | size = response.headers.get("content-length", default=1000)
93 | else:
94 | response.encoding = response.apparent_encoding if response.encoding == 'ISO-8859-1' else response.encoding
95 | response.encoding = "utf-8" if response.encoding is None else response.encoding
96 | html = response.content.decode(response.encoding,"ignore")
97 |             if response.text is not None:
98 | size = len(response.text)
99 | else:
100 | size = 1000
101 |
102 |
103 |
104 | title = self.get_title(html).strip().replace('\r', '').replace('\n', '')
105 | status = response.status_code
106 | header = response.headers
107 | server = header.get("Server", "")
108 | server = "" if len(server) > 50 else server
109 | faviconhash = self.get_faviconhash(url, html)
110 | iscdn, iplist = self.ipFactory.factory(url, header)
111 | iplist = ','.join(set(iplist))
112 | datas = {"url": url, "title": title, "cms": "", "body": html, "status": status, "Server": server, "size": size,
113 | "header": header, "faviconhash": faviconhash, "iscdn": iscdn, "ip": iplist,
114 | "address": "", "isp": "", "icp": ""}
115 | self.checkcms.run(datas)
116 |
117 | def get_faviconhash(self, url, body):
118 | faviconpaths = re.findall(r'href="(.*?favicon....)"', body)
119 | faviconpath = ""
120 | try:
121 | parsed = urlsplit(url)
122 | turl = parsed.scheme + "://" + parsed.netloc
123 | if faviconpaths:
124 | fav = faviconpaths[0]
125 | if fav.startswith("//"):
126 | faviconpath = "http:" + fav
127 | elif fav.startswith("http"):
128 | faviconpath = fav
129 | else:
130 | faviconpath = urljoin(turl, fav)
131 | else:
132 | faviconpath = urljoin(turl, "favicon.ico")
133 | response = requests.get(faviconpath, headers=self.get_headers(), timeout=4, verify=False)
134 | favicon = codecs.encode(response.content, "base64")
135 | hash = mmh3.hash(favicon)
136 | return hash
137 | except:
138 | return 0
139 |
140 | def get_title(self, html):
141 | soup = BeautifulSoup(html, 'lxml')
142 | title = soup.title
143 | if title and title.text:
144 | return title.text
145 | if soup.h1:
146 | return soup.h1.text
147 | if soup.h2:
148 | return soup.h2.text
149 | if soup.h3:
150 | return soup.h3.text
151 | desc = soup.find('meta', attrs={'name': 'description'})
152 | if desc:
153 | return desc['content']
154 |
155 | word = soup.find('meta', attrs={'name': 'keywords'})
156 | if word:
157 | return word['content']
158 |
159 | text = soup.text
160 |         if text is not None:
161 | if len(text) <= 200:
162 | return text
163 | return ''
164 |
165 | def get_headers(self):
166 | """
167 |         Generate spoofed request headers
168 | """
169 | # ua = random.choice(config.user_agents)
170 | headers = config.head
171 | return headers
172 |
173 | def get_cookies(self):
174 |         cookies = {'rememberMe': 'xxx'}
175 | # cookies = {'xxx': 'xxx'}
176 | return cookies
177 |
178 |
179 |
180 |
181 |
--------------------------------------------------------------------------------
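`apply()` resolves a 302 `Location` header by passing absolute URLs through and joining relative ones against the current URL. A standalone sketch of that resolution:

```python
from urllib.parse import urljoin

def resolve_redirect(current_url: str, location: str) -> str:
    """Resolve a Location header: absolute URLs pass through,
    relative ones are joined against the current URL."""
    if location.startswith("http"):
        return location
    return urljoin(current_url, location)

assert resolve_redirect("http://example.com/a/b", "/login") == "http://example.com/login"
assert resolve_redirect("http://example.com/a/", "next") == "http://example.com/a/next"
assert resolve_redirect("http://example.com", "https://sso.example.com/") == "https://sso.example.com/"
```

Because the crawler requests with `allow_redirects=False`, this manual join is what lets it fingerprint the redirect target with one extra request instead of following a full chain.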
/library/cdn_asn_list.json:
--------------------------------------------------------------------------------
1 | [
2 | "AS10576",
3 | "AS10762",
4 | "AS11748",
5 | "AS131099",
6 | "AS132601",
7 | "AS133496",
8 | "AS134409",
9 | "AS135295",
10 | "AS136764",
11 | "AS137187",
12 | "AS13777",
13 | "AS13890",
14 | "AS14103",
15 | "AS14520",
16 | "AS17132",
17 | "AS199251",
18 | "AS200013",
19 | "AS200325",
20 | "AS200856",
21 | "AS201263",
22 | "AS202294",
23 | "AS203075",
24 | "AS203139",
25 | "AS204248",
26 | "AS204286",
27 | "AS204545",
28 | "AS206227",
29 | "AS206734",
30 | "AS206848",
31 | "AS206986",
32 | "AS207158",
33 | "AS208559",
34 | "AS209403",
35 | "AS21030",
36 | "AS21257",
37 | "AS23327",
38 | "AS23393",
39 | "AS23637",
40 | "AS23794",
41 | "AS24997",
42 | "AS26492",
43 | "AS268843",
44 | "AS28709",
45 | "AS29264",
46 | "AS30282",
47 | "AS30637",
48 | "AS328126",
49 | "AS36408",
50 | "AS38107",
51 | "AS397192",
52 | "AS40366",
53 | "AS43303",
54 | "AS44907",
55 | "AS46071",
56 | "AS46177",
57 | "AS47542",
58 | "AS49287",
59 | "AS49689",
60 | "AS51286",
61 | "AS55082",
62 | "AS55254",
63 | "AS56636",
64 | "AS57363",
65 | "AS58127",
66 | "AS59730",
67 | "AS59776",
68 | "AS60068",
69 | "AS60626",
70 | "AS60922",
71 | "AS61107",
72 | "AS61159",
73 | "AS62026",
74 | "AS62229",
75 | "AS63062",
76 | "AS64232",
77 | "AS8868",
78 | "AS9053",
79 | "AS55770",
80 | "AS49846",
81 | "AS49249",
82 | "AS48163",
83 | "AS45700",
84 | "AS43639",
85 | "AS39836",
86 | "AS393560",
87 | "AS393234",
88 | "AS36183",
89 | "AS35994",
90 | "AS35993",
91 | "AS35204",
92 | "AS34850",
93 | "AS34164",
94 | "AS33905",
95 | "AS32787",
96 | "AS31377",
97 | "AS31110",
98 | "AS31109",
99 | "AS31108",
100 | "AS31107",
101 | "AS30675",
102 | "AS24319",
103 | "AS23903",
104 | "AS23455",
105 | "AS23454",
106 | "AS22207",
107 | "AS21399",
108 | "AS21357",
109 | "AS21342",
110 | "AS20940",
111 | "AS20189",
112 | "AS18717",
113 | "AS18680",
114 | "AS17334",
115 | "AS16702",
116 | "AS16625",
117 | "AS12222",
118 | "AS209101",
119 | "AS201585",
120 | "AS135429",
121 | "AS395747",
122 | "AS394536",
123 | "AS209242",
124 | "AS203898",
125 | "AS202623",
126 | "AS14789",
127 | "AS133877",
128 | "AS13335",
129 | "AS132892",
130 | "AS21859",
131 | "AS6185",
132 | "AS47823",
133 | "AS4134"
134 | ]
135 |
--------------------------------------------------------------------------------
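The ASN list above feeds CDN detection: once an address's AS number is known (e.g. looked up in the bundled GeoLite2-ASN.mmdb via geoip2), the check is plain set membership. A minimal sketch, using an inline subset of the list; the function name is illustrative, not the repo's actual API:

```python
import json

# Inline subset of cdn_asn_list.json; a real check would load the whole file.
CDN_ASNS = set(json.loads('["AS13335", "AS20940", "AS21859", "AS4134"]'))

def is_cdn_asn(asn: str) -> bool:
    """True when the AS number string (e.g. 'AS13335') belongs to a known CDN."""
    return asn.upper() in CDN_ASNS

print(is_cdn_asn("AS13335"))  # Cloudflare -> True
print(is_cdn_asn("AS15169"))  # not in the list -> False
```

With geoip2, the number would come from `reader.asn(ip).autonomous_system_number` and need an `"AS"` prefix before the lookup.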
/library/cdn_cname_keywords.json:
--------------------------------------------------------------------------------
1 | {
2 | "cdn": "cdn",
3 | "cache": "cache",
4 | "tbcache.com": "Alibaba Cloud",
5 | "alicdn.com": "Alibaba Cloud",
6 | "tcdn.qq.com": "tcdn.qq.com",
7 | "00cdn.com": "XYcdn",
8 | "21cvcdn.com": "21Vianet",
9 | "21okglb.cn": "21Vianet",
10 | "21speedcdn.com": "21Vianet",
11 | "21vianet.com.cn": "21Vianet",
12 | "21vokglb.cn": "21Vianet",
13 | "360wzb.com": "360",
14 | "51cdn.com": "ChinaCache",
15 | "acadn.com": "Dnion",
16 | "aicdn.com": "UPYUN",
17 | "akadns.net": "Akamai",
18 | "akamai-staging.net": "Akamai",
19 | "akamai.com": "Akamai",
20 | "akamai.net": "Akamai",
21 | "akamaitech.net": "Akamai",
22 | "akamaized.net": "Akamai",
23 | "alicloudlayer.com": "ALiyun",
24 | "alikunlun.com": "ALiyun",
25 | "aliyun-inc.com": "ALiyun",
26 | "alicloudsec.com": "ALiyun",
27 | "aliyuncs.com": "ALiyun",
28 | "amazonaws.com": "Amazon Cloudfront",
29 | "anankecdn.com.br": "Ananke",
30 | "aodianyun.com": "VOD",
31 | "aqb.so": "AnQuanBao",
32 | "awsdns": "KeyCDN",
33 | "azioncdn.net": "Azion",
34 | "azureedge.net": "Azure CDN",
35 | "bdydns.com": "Baiduyun",
36 | "bitgravity.com": "Tata Communications",
37 | "cachecn.com": "CnKuai",
38 | "cachefly.net": "Cachefly",
39 | "ccgslb.com": "ChinaCache",
40 | "ccgslb.net": "ChinaCache",
41 | "ccgslb.com.cn": "ChinaCache",
42 | "cdn-cdn.net": "",
43 | "cdn.cloudflare.net": "CloudFlare",
44 | "cdn.dnsv1.com": "Tengxunyun",
45 | "cdn.ngenix.net": "",
46 | "cdn20.com": "ChinaCache",
47 | "cdn77.net": "CDN77",
48 | "cdn77.org": "CDN77",
49 | "cdnetworks.net": "CDNetworks",
50 | "cdnify.io": "CDNify",
51 | "cdnnetworks.com": "CDNetworks",
52 | "cdnsun.net": "CDNsun",
53 | "cdntip.com": "QCloud",
54 | "cdnudns.com": "PowerLeader",
55 | "cdnvideo.ru": "CDNvideo",
56 | "cdnzz.net": "SuZhi",
57 | "chinacache.net": "ChinaCache",
58 | "chinaidns.net": "LineFuture",
59 | "chinanetcenter.com": "ChinaNetCenter",
60 | "cloudcdn.net": "CnKuai",
61 | "cloudfront.net": "Amazon Cloudfront",
62 | "customcdn.cn": "ChinaCache",
63 | "customcdn.com": "ChinaCache",
64 | "dnion.com": "Dnion",
65 | "dnspao.com": "",
66 | "edgecastcdn.net": "EdgeCast",
67 | "edgesuite.net": "Akamai",
68 | "ewcache.com": "Dnion",
69 | "fastcache.com": "FastCache",
70 | "fastcdn.cn": "Dnion",
71 | "fastly.net": "Fastly",
72 | "fastweb.com": "CnKuai",
73 | "fastwebcdn.com": "CnKuai",
74 | "footprint.net": "Level3",
75 | "fpbns.net": "Level3",
76 | "fwcdn.com": "CnKuai",
77 | "fwdns.net": "CnKuai",
78 | "globalcdn.cn": "Dnion",
79 | "hacdn.net": "CnKuai",
80 | "hadns.net": "CnKuai",
81 | "hichina.com": "WWW",
82 | "hichina.net": "WWW",
83 | "hwcdn.net": "Highwinds",
84 | "incapdns.net": "Incapsula",
85 | "internapcdn.net": "Internap",
86 | "jiashule.com": "Jiasule",
87 | "kunlun.com": "ALiyun",
88 | "kunlunar.com": "ALiyun",
89 | "kunlunca.com": "ALiyun",
90 | "kxcdn.com": "KeyCDN",
91 | "lswcdn.net": "Leaseweb",
92 | "lxcdn.com": "ChinaCache",
93 | "mwcloudcdn.com": "QUANTIL",
94 | "netdna-cdn.com": "MaxCDN",
95 | "okcdn.com": "21Vianet",
96 | "okglb.com": "21Vianet",
97 | "ourwebcdn.net": "ChinaCache",
98 | "ourwebpic.com": "ChinaCache",
99 | "presscdn.com": "Presscdn",
100 | "qingcdn.com": "",
101 | "qiniudns.com": "QiNiu",
102 | "skyparkcdn.net": "",
103 | "speedcdns.com": "QUANTIL",
104 | "sprycdn.com": "PowerLeader",
105 | "tlgslb.com": "Dnion",
106 | "txcdn.cn": "CDNetworks",
107 | "txnetworks.cn": "CDNetworks",
108 | "ucloud.cn": "UCloud",
109 | "unicache.com": "LineFuture",
110 | "verygslb.com": "VeryCloud",
111 | "vo.llnwd.net": "Limelight",
112 | "wscdns.com": "ChinaNetCenter",
113 | "wscloudcdn.com": "ChinaNetCenter",
114 | "xgslb.net": "Webluker",
115 | "ytcdn.net": "Akamai",
116 | "yunjiasu-cdn": "Baiduyun",
117 | "cloudfront": "CloudFront",
118 | "kunlun.com": "Alibaba Cloud",
119 | "ccgslb": "ChinaCache",
120 | "edgekey": "Akamai",
121 | "fastly": "Fastly",
122 | "chinacache": "ChinaCache",
123 | "akamai": "Akamai",
124 | "edgecast": "EdgeCast",
125 | "azioncdn": "Azion",
126 | "cachefly": "CacheFly",
127 | "cdn77": "CDN77",
128 | "cdnetworks": "CDNetworks",
129 | "cdnify": "CDNify",
130 | "wscloudcdn": "ChinaNetCenter",
131 | "speedcdns": "ChinaNetCenter/Quantil",
132 | "mwcloudcdn": "ChinaNetCenter/Quantil",
133 | "cloudflare": "CloudFlare",
134 | "hwcdn": "HighWinds",
135 | "kxcdn": "KeyCDN",
136 | "fpbns": "Level3",
137 | "footprint": "Level3",
138 | "llnwd": "LimeLight",
139 | "netdna": "MaxCDN",
140 | "bitgravity": "Tata CDN",
141 | "azureedge": "Azure CDN",
142 | "anankecdn": "Ananke CDN",
143 | "presscdn": "Press CDN",
144 | "telefonica": "Telefonica CDN",
145 | "dnsv1": "Tencent CDN",
146 | "cdntip": "Tencent CDN",
147 | "skyparkcdn": "Sky Park CDN",
148 | "ngenix": "Ngenix",
149 | "lswcdn": "LeaseWeb",
150 | "internapcdn": "Internap",
151 | "incapdns": "Incapsula",
152 | "cdnsun": "CDN SUN",
153 | "cdnvideo": "CDN Video",
154 | "clients.turbobytes.net": "TurboBytes",
155 | "turbobytes-cdn.com": "TurboBytes",
156 | "afxcdn.net": "afxcdn.net",
157 | "akamaiedge.net": "Akamai",
158 | "akamaitechnologies.com": "Akamai",
159 | "gslb.tbcache.com": "Alimama",
160 | "att-dsa.net": "AT&T",
161 | "belugacdn.com": "BelugaCDN",
162 | "bluehatnetwork.com": "Blue Hat Network",
163 | "systemcdn.net": "EdgeCast",
164 | "panthercdn.com": "CDNetworks",
165 | "cdngc.net": "CDNetworks",
166 | "gccdn.net": "CDNetworks",
167 | "gccdn.cn": "CDNetworks",
168 | "c3cache.net": "ChinaCache",
169 | "cncssr.chinacache.net": "ChinaCache",
170 | "c3cdn.net": "ChinaCache",
171 | "lxdns.com": "ChinaNetCenter",
172 | "speedcdns.com": "QUANTIL/ChinaNetCenter",
173 | "mwcloudcdn.com": "QUANTIL/ChinaNetCenter",
174 | "cloudflare.com": "Cloudflare",
175 | "cloudflare.net": "Cloudflare",
176 | "adn.": "EdgeCast",
177 | "wac.": "EdgeCast",
178 | "wpc.": "EdgeCast",
179 | "fastlylb.net": "Fastly",
180 | "google.": "Google",
181 | "googlesyndication.": "Google",
182 | "youtube.": "Google",
183 | "googleusercontent.com": "Google",
184 | "l.doubleclick.net": "Google",
185 | "hiberniacdn.com": "Hibernia",
186 | "inscname.net": "Instartlogic",
187 | "insnw.net": "Instartlogic",
188 | "lswcdn.net": "LeaseWeb CDN",
189 | "llnwd.net": "Limelight",
190 | "lldns.net": "Limelight",
191 | "netdna-ssl.com": "MaxCDN",
192 | "netdna.com": "MaxCDN",
193 | "stackpathdns.com": "StackPath",
194 | "mncdn.com": "Medianova",
195 | "instacontent.net": "Mirror Image",
196 | "mirror-image.net": "Mirror Image",
197 | "cap-mii.net": "Mirror Image",
198 | "rncdn1.com": "Reflected Networks",
199 | "simplecdn.net": "Simple CDN",
200 | "swiftcdn1.com": "SwiftCDN",
201 | "swiftserve.com": "SwiftServe",
202 | "gslb.taobao.com": "Taobao",
203 | "cdn.bitgravity.com": "Tata communications",
204 | "cdn.telefonica.com": "Telefonica",
205 | "vo.msecnd.net": "Windows Azure",
206 | "ay1.b.yahoo.com": "Yahoo",
207 | "yimg.": "Yahoo",
208 | "zenedge.net": "Zenedge",
209 | "cdnsun.net.": "CDNsun",
210 | "pilidns.com": "QiNiu",
211 | "cdngslb.com": "AliCDN Global",
212 | "ialicdn.com": "AliCDN",
213 | "alivecdn.com": "AliCDN",
214 | "myalicdn.com": "AliCDN"
215 | }
216 |
--------------------------------------------------------------------------------
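The map above mixes full CNAME suffixes with short generic keywords ("cdn", "cache"), so a lookup has to scan for substrings and should try the more specific keys first, otherwise the generic entries shadow everything. A minimal sketch with an inline subset; the function name is illustrative:

```python
import json

# Inline subset of cdn_cname_keywords.json (keyword -> provider).
CNAME_KEYWORDS = json.loads(
    '{"cloudfront.net": "Amazon Cloudfront", "akamaized.net": "Akamai", "cdn": "cdn"}'
)

def match_cdn_cname(cname: str):
    """Return the provider whose keyword occurs in the CNAME, longest keyword first."""
    cname = cname.lower().rstrip(".")  # DNS answers often end with a trailing dot
    for keyword in sorted(CNAME_KEYWORDS, key=len, reverse=True):
        if keyword in cname:
            return CNAME_KEYWORDS[keyword]
    return None

print(match_cdn_cname("d111111abcdef8.cloudfront.net."))  # Amazon Cloudfront
print(match_cdn_cname("www.example.com"))                 # None
```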
/library/cdn_header_keys.json:
--------------------------------------------------------------------------------
1 | [
2 | "xcs",
3 | "via",
4 | "x-via",
5 | "x-cdn",
6 | "x-cdn-forward",
7 | "x-ser",
8 | "x-cf1",
9 | "cache",
10 | "x-cache",
11 | "x-cached",
12 | "x-cacheable",
13 | "x-hit-cache",
14 | "x-cache-status",
15 | "x-cache-hits",
16 | "x-cache-lookup",
17 | "cc_cache",
18 | "webcache",
19 | "chinacache",
20 | "x-req-id",
21 | "x-requestid",
22 | "cf-request-id",
23 | "x-github-request-id",
24 | "x-sucuri-id",
25 | "x-amz-cf-id",
26 | "x-airee-node",
27 | "x-cdn-provider",
28 | "x-fastly",
29 | "x-iinfo",
30 | "x-llid",
31 | "sozu-id",
32 | "x-cf-tsc",
33 | "x-ws-request-id",
34 | "fss-cache",
35 | "powered-by-chinacache",
36 | "verycdn",
37 | "yunjiasu",
38 | "skyparkcdn",
39 | "x-beluga-cache-status",
40 | "x-content-type-options",
41 | "x-download-options",
42 | "x-proxy-node",
43 | "access-control-max-age",
44 | "age",
45 | "etag",
46 | "expires",
47 | "pragma",
48 | "cache-control",
49 | "last-modified"
50 | ]
51 |
--------------------------------------------------------------------------------
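Header fingerprinting against the list above amounts to a case-insensitive intersection with the response headers. Note that several of these keys (age, etag, expires, cache-control) are ordinary caching headers any origin may send, so header hits alone are weak evidence of a CDN. A sketch with an inline subset; names are illustrative:

```python
# Inline subset of cdn_header_keys.json.
CDN_HEADER_KEYS = {"via", "x-cache", "cf-request-id", "x-amz-cf-id"}

def cdn_header_hits(headers: dict) -> list:
    """Names of CDN-indicative headers present in a response, matched case-insensitively."""
    return sorted(k.lower() for k in headers if k.lower() in CDN_HEADER_KEYS)

resp = {"Content-Type": "text/html", "Via": "1.1 varnish", "X-Cache": "HIT"}
print(cdn_header_hits(resp))  # ['via', 'x-cache']
```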
/library/cdn_ip_cidr.json:
--------------------------------------------------------------------------------
1 | [
2 | "223.99.255.0/24",
3 | "71.152.0.0/17",
4 | "219.153.73.0/24",
5 | "125.39.46.0/24",
6 | "190.93.240.0/20",
7 | "14.0.113.0/24",
8 | "14.0.47.0/24",
9 | "113.20.148.0/22",
10 | "103.75.201.0/24",
11 | "1.32.239.0/24",
12 | "101.79.239.0/24",
13 | "52.46.0.0/18",
14 | "125.88.189.0/24",
15 | "150.138.248.0/24",
16 | "180.153.235.0/24",
17 | "205.251.252.0/23",
18 | "103.1.65.0/24",
19 | "115.127.227.0/24",
20 | "14.0.42.0/24",
21 | "109.199.58.0/24",
22 | "116.211.155.0/24",
23 | "112.253.3.0/24",
24 | "14.0.58.0/24",
25 | "223.112.227.0/24",
26 | "113.20.150.0/23",
27 | "61.182.141.0/24",
28 | "34.216.51.0/25",
29 | "124.95.188.0/24",
30 | "42.51.25.0/24",
31 | "183.136.133.0/24",
32 | "52.220.191.0/26",
33 | "119.84.93.0/24",
34 | "182.118.38.0/24",
35 | "13.59.250.0/26",
36 | "54.178.75.0/24",
37 | "119.84.92.0/24",
38 | "183.131.62.0/24",
39 | "111.32.136.0/24",
40 | "13.124.199.0/24",
41 | "111.47.227.0/24",
42 | "104.37.177.0/24",
43 | "14.0.50.0/24",
44 | "183.230.70.0/24",
45 | "114.111.59.0/24",
46 | "220.181.135.0/24",
47 | "112.140.32.0/19",
48 | "101.79.230.0/24",
49 | "14.0.115.0/24",
50 | "103.28.248.0/22",
51 | "117.34.72.0/24",
52 | "109.199.57.0/24",
53 | "101.79.149.0/24",
54 | "116.128.128.0/24",
55 | "115.231.186.0/24",
56 | "103.22.200.0/22",
57 | "61.155.165.0/24",
58 | "113.20.148.0/23",
59 | "185.254.242.0/24",
60 | "59.36.120.0/24",
61 | "70.132.0.0/18",
62 | "116.31.126.0/24",
63 | "119.147.134.0/24",
64 | "115.127.246.0/24",
65 | "52.47.139.0/24",
66 | "118.107.175.0/24",
67 | "52.78.247.128/26",
68 | "110.93.176.0/20",
69 | "54.240.128.0/18",
70 | "46.51.216.0/21",
71 | "119.31.251.0/24",
72 | "125.39.18.0/24",
73 | "108.175.33.0/24",
74 | "1.31.128.0/24",
75 | "61.151.163.0/24",
76 | "103.95.132.0/24",
77 | "58.215.118.0/24",
78 | "54.233.255.128/26",
79 | "120.52.113.0/24",
80 | "118.107.174.0/24",
81 | "1.32.242.0/24",
82 | "221.195.34.0/24",
83 | "101.79.228.0/24",
84 | "205.251.249.0/24",
85 | "113.200.91.0/24",
86 | "101.79.146.0/24",
87 | "221.238.22.0/24",
88 | "134.19.183.0/24",
89 | "110.93.160.0/20",
90 | "180.97.158.0/24",
91 | "115.127.251.0/24",
92 | "119.167.147.0/24",
93 | "115.127.238.0/24",
94 | "115.127.240.0/22",
95 | "14.0.48.0/24",
96 | "115.127.240.0/24",
97 | "113.7.183.0/24",
98 | "112.140.128.0/20",
99 | "115.127.255.0/24",
100 | "114.31.36.0/22",
101 | "101.79.232.0/24",
102 | "218.98.44.0/24",
103 | "106.119.182.0/24",
104 | "101.79.167.0/24",
105 | "125.39.5.0/24",
106 | "58.49.105.0/24",
107 | "124.202.164.0/24",
108 | "111.177.6.0/24",
109 | "61.133.127.0/24",
110 | "185.11.124.0/22",
111 | "150.138.150.0/24",
112 | "115.127.248.0/24",
113 | "103.74.80.0/22",
114 | "101.79.166.0/24",
115 | "101.71.55.0/24",
116 | "198.41.128.0/17",
117 | "117.21.219.0/24",
118 | "103.231.170.0/24",
119 | "221.204.202.0/24",
120 | "101.79.224.0/24",
121 | "112.25.16.0/24",
122 | "111.177.3.0/24",
123 | "204.246.168.0/22",
124 | "103.40.7.0/24",
125 | "134.226.0.0/16",
126 | "52.15.127.128/26",
127 | "122.190.2.0/24",
128 | "101.203.192.0/18",
129 | "1.32.238.0/24",
130 | "101.79.144.0/24",
131 | "176.34.28.0/24",
132 | "119.84.15.0/24",
133 | "18.216.170.128/25",
134 | "222.88.94.0/24",
135 | "101.79.150.0/24",
136 | "114.111.48.0/21",
137 | "124.95.168.0/24",
138 | "114.111.48.0/20",
139 | "110.93.176.0/21",
140 | "223.111.127.0/24",
141 | "117.23.61.0/24",
142 | "140.207.120.0/24",
143 | "157.255.26.0/24",
144 | "221.204.14.0/24",
145 | "183.222.96.0/24",
146 | "104.37.180.0/24",
147 | "42.236.93.0/24",
148 | "111.63.51.0/24",
149 | "114.31.32.0/20",
150 | "118.180.50.0/24",
151 | "222.240.184.0/24",
152 | "205.251.192.0/19",
153 | "101.79.225.0/24",
154 | "115.127.228.0/24",
155 | "113.20.148.0/24",
156 | "61.213.176.0/24",
157 | "112.65.75.0/24",
158 | "111.13.147.0/24",
159 | "113.20.145.0/24",
160 | "103.253.132.0/24",
161 | "52.222.128.0/17",
162 | "183.203.7.0/24",
163 | "27.221.27.0/24",
164 | "103.79.134.0/24",
165 | "123.150.187.0/24",
166 | "103.15.194.0/24",
167 | "162.158.0.0/15",
168 | "61.163.30.0/24",
169 | "182.140.227.0/24",
170 | "112.25.60.0/24",
171 | "117.148.161.0/24",
172 | "61.182.136.0/24",
173 | "114.31.56.0/22",
174 | "64.252.128.0/18",
175 | "183.61.185.0/24",
176 | "115.127.250.0/24",
177 | "150.138.138.0/24",
178 | "13.210.67.128/26",
179 | "211.162.64.0/24",
180 | "61.174.9.0/24",
181 | "14.0.112.0/24",
182 | "52.52.191.128/26",
183 | "27.221.124.0/24",
184 | "103.4.203.0/24",
185 | "103.14.10.0/24",
186 | "34.232.163.208/29",
187 | "114.31.48.0/20",
188 | "59.51.81.0/24",
189 | "183.60.235.0/24",
190 | "101.227.206.0/24",
191 | "125.39.174.0/24",
192 | "119.167.246.0/24",
193 | "118.107.160.0/21",
194 | "223.166.151.0/24",
195 | "110.93.160.0/19",
196 | "204.246.172.0/23",
197 | "119.31.253.0/24",
198 | "143.204.0.0/16",
199 | "14.0.60.0/24",
200 | "123.151.76.0/24",
201 | "116.193.80.0/24",
202 | "120.241.102.0/24",
203 | "180.96.20.0/24",
204 | "216.137.32.0/19",
205 | "223.94.95.0/24",
206 | "103.4.201.0/24",
207 | "14.0.56.0/24",
208 | "115.127.234.0/24",
209 | "113.20.144.0/23",
210 | "103.248.104.0/24",
211 | "122.143.15.0/24",
212 | "101.79.229.0/24",
213 | "101.79.163.0/24",
214 | "104.37.112.0/22",
215 | "115.127.253.0/24",
216 | "141.101.64.0/18",
217 | "113.20.144.0/22",
218 | "101.79.155.0/24",
219 | "117.148.160.0/24",
220 | "124.193.166.0/24",
221 | "109.94.168.0/24",
222 | "203.90.247.0/24",
223 | "101.79.208.0/21",
224 | "182.118.12.0/24",
225 | "114.31.58.0/23",
226 | "202.162.109.0/24",
227 | "101.79.164.0/24",
228 | "58.216.2.0/24",
229 | "222.216.190.0/24",
230 | "101.79.165.0/24",
231 | "111.6.191.0/24",
232 | "1.255.100.0/24",
233 | "52.84.0.0/15",
234 | "112.65.74.0/24",
235 | "183.250.179.0/24",
236 | "101.79.236.0/24",
237 | "119.31.252.0/24",
238 | "113.20.150.0/24",
239 | "60.12.166.0/24",
240 | "101.79.234.0/24",
241 | "113.17.174.0/24",
242 | "101.79.237.0/24",
243 | "61.54.46.0/24",
244 | "118.212.233.0/24",
245 | "183.110.242.0/24",
246 | "150.138.149.0/24",
247 | "117.34.13.0/24",
248 | "115.127.245.0/24",
249 | "14.0.102.0/24",
250 | "14.0.109.0/24",
251 | "61.130.28.0/24",
252 | "113.20.151.0/24",
253 | "219.159.84.0/24",
254 | "114.111.62.0/24",
255 | "172.64.0.0/13",
256 | "61.155.222.0/24",
257 | "120.52.29.0/24",
258 | "115.127.231.0/24",
259 | "14.0.49.0/24",
260 | "113.202.0.0/16",
261 | "103.248.104.0/22",
262 | "205.251.250.0/23",
263 | "103.216.136.0/22",
264 | "118.107.160.0/20",
265 | "109.87.0.0/21",
266 | "54.239.128.0/18",
267 | "115.127.224.0/19",
268 | "111.202.98.0/24",
269 | "109.94.169.0/24",
270 | "59.38.112.0/24",
271 | "204.246.176.0/20",
272 | "123.133.84.0/24",
273 | "103.4.200.0/24",
274 | "111.161.109.0/24",
275 | "112.84.34.0/24",
276 | "103.82.129.0/24",
277 | "183.3.254.0/24",
278 | "112.137.184.0/21",
279 | "122.227.237.0/24",
280 | "36.42.75.0/24",
281 | "13.35.0.0/16",
282 | "101.226.4.0/24",
283 | "116.140.35.0/24",
284 | "58.250.143.0/24",
285 | "13.54.63.128/26",
286 | "205.251.254.0/24",
287 | "173.245.48.0/20",
288 | "183.61.177.0/24",
289 | "113.20.144.0/24",
290 | "104.37.183.0/24",
291 | "35.158.136.0/24",
292 | "116.211.121.0/24",
293 | "42.236.94.0/24",
294 | "117.34.91.0/24",
295 | "123.6.13.0/24",
296 | "13.224.0.0/14",
297 | "113.20.146.0/24",
298 | "58.58.81.0/24",
299 | "52.124.128.0/17",
300 | "122.228.198.0/24",
301 | "197.234.240.0/22",
302 | "99.86.0.0/16",
303 | "144.220.0.0/16",
304 | "119.188.97.0/24",
305 | "36.27.212.0/24",
306 | "104.37.178.0/24",
307 | "114.31.52.0/22",
308 | "218.65.212.0/24",
309 | "1.255.41.0/24",
310 | "14.0.45.0/24",
311 | "1.32.243.0/24",
312 | "220.170.185.0/24",
313 | "122.190.3.0/24",
314 | "103.79.133.0/24",
315 | "220.181.55.0/24",
316 | "125.39.191.0/24",
317 | "115.127.226.0/24",
318 | "125.39.32.0/24",
319 | "61.120.154.0/24",
320 | "103.4.202.0/24",
321 | "103.79.134.0/23",
322 | "115.127.224.0/24",
323 | "113.20.147.0/24",
324 | "61.156.149.0/24",
325 | "210.209.122.0/24",
326 | "115.127.249.0/24",
327 | "104.37.179.0/24",
328 | "120.52.18.0/24",
329 | "54.192.0.0/16",
330 | "14.0.55.0/24",
331 | "61.160.224.0/24",
332 | "113.207.101.0/24",
333 | "101.79.157.0/24",
334 | "110.93.128.0/20",
335 | "58.251.121.0/24",
336 | "61.240.149.0/24",
337 | "130.176.0.0/16",
338 | "113.107.238.0/24",
339 | "112.65.73.0/24",
340 | "103.75.200.0/23",
341 | "199.83.128.0/21",
342 | "123.129.220.0/24",
343 | "54.230.0.0/16",
344 | "114.111.60.0/24",
345 | "199.27.128.0/21",
346 | "14.0.118.0/24",
347 | "101.79.158.0/24",
348 | "119.31.248.0/21",
349 | "54.182.0.0/16",
350 | "113.31.27.0/24",
351 | "14.17.69.0/24",
352 | "101.79.145.0/24",
353 | "113.20.144.0/21",
354 | "180.163.22.0/24",
355 | "104.37.176.0/21",
356 | "117.25.156.0/24",
357 | "115.127.252.0/24",
358 | "115.127.244.0/23",
359 | "14.0.46.0/24",
360 | "113.207.102.0/24",
361 | "52.199.127.192/26",
362 | "13.113.203.0/24",
363 | "64.252.64.0/18",
364 | "1.32.240.0/24",
365 | "123.129.232.0/24",
366 | "1.32.241.0/24",
367 | "180.163.189.0/24",
368 | "157.255.25.0/24",
369 | "1.32.244.0/24",
370 | "103.248.106.0/24",
371 | "121.48.95.0/24",
372 | "54.239.192.0/19",
373 | "113.20.146.0/23",
374 | "61.136.173.0/24",
375 | "35.162.63.192/26",
376 | "117.34.14.0/24",
377 | "183.232.29.0/24",
378 | "42.81.93.0/24",
379 | "122.228.238.0/24",
380 | "183.61.190.0/24",
381 | "125.39.239.0/24",
382 | "115.127.230.0/24",
383 | "103.140.200.0/23",
384 | "202.102.85.0/24",
385 | "14.0.32.0/21",
386 | "14.0.57.0/24",
387 | "112.25.90.0/24",
388 | "58.211.137.0/24",
389 | "210.22.63.0/24",
390 | "34.226.14.0/24",
391 | "13.32.0.0/15",
392 | "101.79.156.0/24",
393 | "103.89.176.0/24",
394 | "14.0.116.0/24",
395 | "106.42.25.0/24",
396 | "101.79.233.0/24",
397 | "101.79.231.0/24",
398 | "103.75.200.0/24",
399 | "119.188.9.0/24",
400 | "183.232.51.0/24",
401 | "149.126.72.0/21",
402 | "103.21.244.0/22",
403 | "115.127.233.0/24",
404 | "27.221.20.0/24",
405 | "198.143.32.0/19",
406 | "103.248.107.0/24",
407 | "101.79.227.0/24",
408 | "115.127.242.0/24",
409 | "119.31.250.0/24",
410 | "103.82.130.0/24",
411 | "99.84.0.0/16",
412 | "222.73.144.0/24",
413 | "103.79.132.0/22",
414 | "101.79.208.0/20",
415 | "104.37.182.0/24",
416 | "101.79.152.0/24",
417 | "36.99.18.0/24",
418 | "101.71.56.0/24",
419 | "36.250.5.0/24",
420 | "61.158.240.0/24",
421 | "119.188.14.0/24",
422 | "13.249.0.0/16",
423 | "183.214.156.0/24",
424 | "60.221.236.0/24",
425 | "58.30.212.0/24",
426 | "115.127.254.0/24",
427 | "188.114.96.0/20",
428 | "115.127.241.0/24",
429 | "103.4.200.0/22",
430 | "115.127.239.0/24",
431 | "115.127.243.0/24",
432 | "111.32.135.0/24",
433 | "120.221.29.0/24",
434 | "115.127.232.0/24",
435 | "14.0.43.0/24",
436 | "14.0.59.0/24",
437 | "183.61.236.0/24",
438 | "34.223.12.224/27",
439 | "103.24.120.0/24",
440 | "52.57.254.0/24",
441 | "113.207.100.0/24",
442 | "222.186.19.0/24",
443 | "113.20.149.0/24",
444 | "150.138.151.0/24",
445 | "115.231.110.0/24",
446 | "52.56.127.0/25",
447 | "104.37.176.0/24",
448 | "163.177.8.0/24",
449 | "163.53.89.0/24",
450 | "52.82.128.0/19",
451 | "114.111.63.0/24",
452 | "108.162.192.0/18",
453 | "14.136.130.0/24",
454 | "115.127.229.0/24",
455 | "14.17.71.0/24",
456 | "52.212.248.0/26",
457 | "180.163.188.0/24",
458 | "61.182.137.0/24",
459 | "119.161.224.0/21",
460 | "14.0.41.0/24",
461 | "202.162.108.0/24",
462 | "106.122.248.0/24",
463 | "52.66.194.128/26",
464 | "115.127.237.0/24",
465 | "220.170.186.0/24",
466 | "14.0.32.0/19",
467 | "14.0.114.0/24",
468 | "112.90.216.0/24",
469 | "115.127.236.0/24",
470 | "116.193.84.0/24",
471 | "113.207.76.0/24",
472 | "101.79.235.0/24",
473 | "101.79.224.0/20",
474 | "61.155.149.0/24",
475 | "101.79.148.0/24",
476 | "180.163.224.0/24",
477 | "204.246.174.0/23",
478 | "183.60.136.0/24",
479 | "101.227.207.0/24",
480 | "103.248.105.0/24",
481 | "119.188.35.0/24",
482 | "42.236.7.0/24",
483 | "116.193.88.0/21",
484 | "116.193.83.0/24",
485 | "120.199.69.0/24",
486 | "122.226.182.0/24",
487 | "58.20.204.0/24",
488 | "110.93.128.0/21",
489 | "115.231.187.0/24",
490 | "69.28.58.0/24",
491 | "114.31.32.0/19",
492 | "112.25.91.0/24",
493 | "59.52.28.0/24",
494 | "117.27.149.0/24",
495 | "61.147.92.0/24",
496 | "14.0.117.0/24",
497 | "14.0.40.0/24",
498 | "119.97.151.0/24",
499 | "103.199.228.0/22",
500 | "122.70.134.0/24",
501 | "115.127.244.0/24",
502 | "223.112.198.0/24",
503 | "115.127.225.0/24",
504 | "104.16.0.0/12",
505 | "121.12.98.0/24",
506 | "103.31.4.0/22",
507 | "204.246.164.0/22",
508 | "223.94.66.0/24",
509 | "35.167.191.128/26",
510 | "116.31.127.0/24",
511 | "101.79.226.0/24",
512 | "34.195.252.0/24",
513 | "115.127.247.0/24",
514 | "61.240.144.0/24",
515 | "108.175.32.0/20",
516 | "120.197.85.0/24",
517 | "183.232.53.0/24",
518 | "111.161.66.0/24",
519 | "117.34.28.0/24",
520 | "45.64.64.0/22",
521 | "14.0.44.0/24",
522 | "109.86.0.0/15",
523 | "182.23.211.0/24",
524 | "58.211.2.0/24",
525 | "119.36.164.0/24",
526 | "116.55.250.0/24",
527 | "101.227.163.0/24",
528 | "13.228.69.0/24",
529 | "120.221.136.0/24",
530 | "119.188.132.0/24",
531 | "115.127.235.0/24",
532 | "42.236.6.0/24",
533 | "125.88.190.0/24",
534 | "61.54.47.0/24",
535 | "103.27.12.0/22",
536 | "116.193.80.0/21",
537 | "101.79.159.0/24",
538 | "123.155.158.0/24",
539 | "111.47.226.0/24",
540 | "131.0.72.0/22",
541 | "192.230.64.0/18"
542 | ]
--------------------------------------------------------------------------------
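The CIDR list above can be checked with the standard-library ipaddress module; parsing the networks once up front keeps per-IP lookups cheap. A minimal sketch with an inline subset of the file; the function name is illustrative:

```python
import ipaddress

# Inline subset of cdn_ip_cidr.json; strict=False tolerates entries whose host bits aren't zeroed.
CDN_NETS = [ipaddress.ip_network(c, strict=False)
            for c in ["104.16.0.0/12", "13.32.0.0/15", "205.251.192.0/19"]]

def ip_in_cdn_range(ip: str) -> bool:
    """True when the address falls inside any known CDN CIDR block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CDN_NETS)

print(ip_in_cdn_range("104.16.132.229"))  # inside 104.16.0.0/12 -> True
print(ip_in_cdn_range("8.8.8.8"))         # -> False
```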
/library/data/GeoLite2-ASN.mmdb:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/guchangan1/SyntaxFinger/575a3145fbbbf0c0facf83113f0286deebef7450/library/data/GeoLite2-ASN.mmdb
--------------------------------------------------------------------------------
/library/data/README.MD:
--------------------------------------------------------------------------------
1 | Current database version: 202202; the "area" field now holds the continent.
2 |
--------------------------------------------------------------------------------
/library/data/ip2region.db:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/guchangan1/SyntaxFinger/575a3145fbbbf0c0facf83113f0286deebef7450/library/data/ip2region.db
--------------------------------------------------------------------------------
/library/nameservers_list.json:
--------------------------------------------------------------------------------
1 | [
2 | "8.8.4.4",
3 | "8.8.8.8",
4 | "114.114.114.114",
5 | "1.1.1.1",
6 | "119.29.29.29",
7 | "223.5.5.5",
8 | "180.76.76.76",
9 | "117.50.10.10",
10 | "208.67.222.222",
11 | "223.6.6.6",
12 | "4.2.2.1",
13 | "168.95.1.1",
14 | "202.14.67.4",
15 | "202.14.67.14",
16 | "168.95.192.1",
17 | "168.95.1.1",
18 | "202.86.191.50",
19 | "202.175.45.2",
20 | "202.248.20.133",
21 | "211.129.155.175",
22 | "101.110.50.105",
23 | "212.66.129.108",
24 | "104.152.211.99",
25 | "9.9.9.9",
26 | "82.127.173.122",
27 | "61.19.42.5",
28 | "210.23.129.34",
29 | "210.80.58.3",
30 | "62.122.101.59",
31 | "80.66.158.118",
32 | "101.226.4.6",
33 | "218.30.118.6",
34 | "123.125.81.6",
35 | "140.207.198.6",
36 | "80.80.80.80",
37 | "61.132.163.68",
38 | "202.102.213.68",
39 | "202.98.192.67",
40 | "202.98.198.167",
41 | "210.22.70.3",
42 | "123.123.123.123",
43 | "210.22.84.3",
44 | "221.7.1.20",
45 | "221.7.1.21",
46 | "202.116.128.1",
47 | "202.192.18.1",
48 | "211.136.112.50",
49 | "211.138.30.66"
50 | ]
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | urllib3>=1.25.2,<2.0.0
2 | requests==2.31.0
3 | geoip2==4.0.2
4 | tldextract~=5.1.2
5 | openpyxl~=3.0.7
6 | mmh3~=3.0.0
7 | beautifulsoup4~=4.10.0
8 | argparse~=1.4.0
9 | dnspython~=2.6.1
10 | lxml~=5.2.1
11 | colorama~=0.4.4
12 | fake-useragent==1.5.1 #https://pypi.org/project/fake-useragent/
--------------------------------------------------------------------------------
/target.txt:
--------------------------------------------------------------------------------
1 | 百度公司 baidu.com
2 |
--------------------------------------------------------------------------------