├── LICENSE
├── README.md
├── __init__.py
├── base.py
├── bin
│   ├── README.md
│   ├── cacheServer_v0.3.0-beta-Centos6.tar.gz
│   ├── cacheServer_v0.3.0-beta-Centos7.tar.gz
│   └── cacheServer_v0.3.0-beta-Ubunut.tar.gz
├── cache
│   ├── __init__.py
│   ├── cache.py
│   └── cacheserver.py
├── conf
│   ├── rules
│   │   ├── syn.yaml
│   │   ├── tcp.yaml
│   │   └── udp.yaml
│   └── scoutd.conf
├── doc
│   ├── 2384F272-01BD-4081-BD0C-2993592A5C94.png
│   ├── 6563C7A9-A76A-4851-BF53-91D6CF08CE4F.png
│   ├── 6F7268C1-9277-4516-B5D7-2D95477EF22C.png
│   ├── 7CE72B62-09B9-427C-9CD3-9E09CCACAF8A.png
│   ├── EEAF5357-D03F-41F8-B574-CEF4ECC570F2.png
│   ├── FD2AF693-B35A-4B67-A07C-6E5B29FC666A.png
│   ├── README.md
│   └── Scout_ObServer.png
├── notice.py
├── pcap
│   ├── __init__.py
│   ├── dstat.py
│   ├── pkts.py
│   └── queue.py
├── plugin
│   ├── __init__.py
│   ├── images.py
│   └── jsonserver.py
├── rule.py
├── scoutd.py
└── util.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 ywjt
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Scout v0.3.0-beta
2 |
3 | Scout is an attack-detection tool that can effectively identify denial-of-service attacks launched by CC attacks or stress-testing tools and raise alerts in real time. It can block attackers through the firewall, or invoke a custom script for other handling. Scout uses the libpcap packet-capture library: all traffic passing through the adapter is sampled into a cache service and then analyzed in real time. Filter policies are configurable, flexible, and fast. It suits small web clusters, and is best deployed on the upstream load-balancing tier.
4 |
5 | **Changelog**
6 | ```
7 | 2020-03-13
8 | * Fixed the sampling thread dying under heavy concurrent attacks
9 | * Added a standalone sampling process; concurrent sampling is 50x faster
10 | * Reduced the system overhead of sampling, lowering CPU usage
11 | * Integrated an HTTP API for Grafana dashboards
12 | ```
13 |
14 | ### Deployment architecture
15 |
16 | PS: Deploying Scout on the same machine as the backend (origin) servers is not recommended. The best approach is to run Scout as a standalone node on the load-balancing tier above the origin servers, intercepting attacks upstream.
17 |
18 | ### Requirements
19 | * Supports CentOS 6.x/7.x/8.x and Ubuntu 14.04/16.04
20 | * Runs with root privileges
21 | * Download the build matching your distribution
22 |
23 |
24 | ### There are two kinds of configuration files:
25 |
26 | * Startup configuration: /etc/scout.d/scoutd.conf
27 | * Rule configuration (YAML or JSON) under /etc/scout.d/rules/. The syntax must be valid; little validation is performed for now.
28 |
29 |
30 | ### Installing Scout
31 |
32 | 1) Extract to the target directory
33 | * Centos6.x:
34 | ```shell
35 | wget https://github.com/ywjt/Scout/releases/download/v0.3.0-beta-Centos6/scout_v0.3.0-beta-Centos6.tar.gz
36 | tar zxvf scout_v0.3.0-beta-Centos6.tar.gz -C /usr/local/
37 | ```
38 |
39 | * Centos7.x\8.x:
40 | ```shell
41 | wget https://github.com/ywjt/Scout/releases/download/v0.3.0-beta-Centos7/scout_v0.3.0-beta-Centos7.tar.gz
42 | tar zxvf scout_v0.3.0-beta-Centos7.tar.gz -C /usr/local/
43 | ```
44 |
45 | * Ubuntu14.04\16.04:
46 | ```shell
47 | wget https://github.com/ywjt/Scout/releases/download/v0.3.0-beta-Ubuntu/scout_v0.3.0-beta-Ubuntu.tar.gz
48 | tar zxvf scout_v0.3.0-beta-Ubuntu.tar.gz -C /usr/local/
49 | ```
50 |
51 | 2) Create symlinks
52 | ```shell
53 | ln -s /usr/local/scout/conf /etc/scout.d
54 | ln -s /usr/local/scout/bin/* /usr/local/bin/
55 | ```
56 |
57 | On Centos7\8, also:
58 | ```shell
59 | ln -s /usr/lib64/libsasl2.so.3.0.0 /usr/lib64/libsasl2.so.2
60 | ```
61 |
62 | 3) Initialize the cache directory
63 | ```shell
64 | scoutd init
65 | ```
66 |
67 | This step is required on a fresh install, and again whenever storage_type (the cache backend) is changed in the global config. Re-initializing clears all cached data.
68 | In /etc/scout.d/scoutd.conf, set:
69 | ```
70 | listen_ip =""
71 | motr_interface =""
72 | motr_port = ""
73 | ```
74 | In tcp.yaml and udp.yaml under /etc/scout.d/rules/, set the trustIps field to this machine's own IPs,
75 | ```
76 | trustIps:
77 | - 127.0.0.1
78 | - 10.10.0.4
79 | - 114.114.114.114
80 | ```
81 | About QPS:
82 | As a rule of thumb, more than 100 short-lived connections per second from a single IP can be considered abnormal, i.e. 100 QPS. Given the formula QPS = noOfConnections / timeDelta, a rule file can be configured as:
83 | ```
84 | timeDelta: 1
85 | noOfConnections: 100
86 | ```
87 | or
88 | ```
89 | timeDelta: 3
90 | noOfConnections: 300
91 | ```
92 | Because an attack ramps up over time, the second form tolerates fewer abnormal connections, i.e. it is more sensitive. If you set a high QPS threshold, prefer the second form.
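The arithmetic behind the two rule styles is just the formula above; a small sketch (`qps_threshold` is an illustrative helper, not part of Scout; the argument names mirror the rule fields):

```python
def qps_threshold(no_of_connections, time_delta):
    # Effective rate implied by a rule: QPS = noOfConnections / timeDelta
    return no_of_connections / float(time_delta)

# Both configurations imply the same 100 QPS threshold...
assert qps_threshold(100, 1) == 100.0
assert qps_threshold(300, 3) == 100.0
# ...but the 3-second window needs 300 connections before it fires,
# matching the ramp-up pattern of a real attack.
```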
93 |
94 |
95 | 4) Start Scout
96 | ```shell
97 | scoutd start
98 | scoutd version
99 | ```
100 | **PS: Make sure iptables is installed on your system; Scout uses iptables by default, and without it blocking cannot work. You can also disable blocking in the rule files. On Ubuntu, install iptables support and disable UFW.**
101 |
102 |
103 | 5) Check the running status
104 | ```shell
105 | scoutd status
106 | scoutd dstat
107 | ```
108 |
109 | 6) Follow the log output
110 | ```shell
111 | scoutd watch
112 | ```
113 |
114 | 7) Other help
115 | ```shell
116 | scoutd help
117 | ```
118 |
119 | ```shell
120 | Usage: Scoutd
121 |
122 | Options:
123 | init create and initialize a new cache partition.
124 | start start all services.
125 | stop stop the main process; the cache server keeps running.
126 | restart restart the main process; the cache server keeps running.
127 | reload same as restart.
128 | forcestop stop all services, including the cache server and the main process.
129 | reservice restart the cache server and the main process.
130 | status show main process run information.
131 | dstat show system resource statistics.
132 | view show block/unblock information.
133 | watch same as tailf; watch the log output.
134 | help show this usage information.
135 | version show version information.
136 | ```
137 |
138 |
139 | ### Global configuration
140 | ```shell
141 | # Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL
142 | log_level = "INFO"
143 |
144 | # Local listen addresses: list every IP this host communicates on; do not use 0.0.0.0
145 | listen_ip = "10.10.0.4,114.114.114.114"
146 |
147 | # Trusted IP list, CIDR supported
148 | trust_ip = "172.16.0.0/16"
149 |
150 | # Capture interface; with multiple NICs use 'any', otherwise 'eth0|eth1|em0|em1...'
151 | motr_interface = "eth0"
152 |
153 | # Ports to monitor, comma-separated, e.g. "443,80,8080"
154 | # Rule files can only filter ports listed here; a port not listed here does not exist for rules
155 | motr_port = "80,443,53"
156 |
157 | # Maximum bytes per captured packet, i.e. the buffer within one buffer_timeout
158 | max_bytes = 65536
159 |
160 | # Whether the adapter must enter promiscuous mode
161 | # In promiscuous mode every packet passing the NIC is captured, which produces a lot of noise
162 | # To capture only traffic destined for this host from outside, False is recommended
163 | promiscuous = False
164 |
165 | # Buffer timeout in milliseconds; 1000 ms is usually enough
166 | # The capture loop returns one batch of data per timeout period
167 | buffer_timeout = 1000
168 |
169 | # Time-to-live of cached records before automatic deletion, in seconds
170 | # Default: 86400 (1 day)
171 | expire_after_seconds = 86400
172 |
173 | # Cache backend: 'Memory' or 'Disk'
174 | # Memory: data is reset when the service stops; fast and accurate detection
175 | # Disk: data is persisted; detection is slower, so raise rule thresholds to compensate
176 | # Not switchable at runtime: after changing it, re-initialize the cache with scoutd init
177 | storage_type = 'Memory'
178 |
179 | # Memory limit in GB, minimum 1, integers only
180 | # Defaults to half of available system memory when unset
181 | storage_size = 1
182 |
183 | # HTTP API settings
184 | # http_host: bind address, default 'localhost'
185 | # http_port: bind port, default 6667
186 | http_host = 'localhost'
187 | http_port = 6667
188 | ```
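A sketch of how such quoted values can be parsed (this mirrors the quote/space stripping Scout's loader applies; Python 3's `configparser` is used here for illustration, while the project itself runs on Python 2's `ConfigParser`):

```python
import configparser

# A minimal fragment of scoutd.conf (values keep their quotes in the raw file)
sample = """
[main]
listen_ip = "10.10.0.4,114.114.114.114"
motr_port = "80,443,53"
"""

cf = configparser.ConfigParser()
cf.read_string(sample)

def fmt(value):
    # Strip surrounding quotes and spaces, as Scout's loader does
    return value.strip("'").strip('"').replace(" ", "")

ports = fmt(cf.get("main", "motr_port")).split(",")
assert ports == ["80", "443", "53"]
```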
189 |
190 |
191 | ### Rule configuration
192 |
193 | **Bolt Fields**
194 |
195 | | Field | Desc | Bolt |
| --- | --- | --- |
196 | | mac_src | Source MAC | TCP,UDP |
197 | | mac_dst | Destination MAC | TCP,UDP |
198 | | src | Source IP | TCP,UDP |
199 | | dst | Destination IP | TCP,UDP |
200 | | sport | Source port | TCP,UDP |
201 | | dport | Destination port | TCP,UDP |
202 | | proto | Protocol | TCP,UDP |
203 | | ttl | Source TTL | TCP,UDP |
204 | | flags | TCP connection state | TCP |
205 | | time | Record timestamp | TCP,UDP |
206 |
207 |
208 | The fields listed above can be used when writing rule files; constructing the query that returns the data you want is up to you. A rule's filter module always executes as a SQL-like aggregate query:
209 |
210 | ```shell
211 | SELECT count({Field}) AS total , {Field}
212 | FROM TCP
213 | WHERE (time >= 1573110114 AND time <= 1573110144)
214 | AND ...
215 | GROUP BY {Field}
216 | HAVING count({Field}) > 100
217 | ```
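Since the cache is MongoDB-backed (see cache/cache.py), that SQL-like query corresponds to an aggregation pipeline. A hedged sketch using the example timestamps; the pipeline Scout actually builds internally may differ:

```python
# The SQL-like query above as a MongoDB aggregation pipeline.
t_start, t_end = 1573110114, 1573110144

pipeline = [
    {"$match": {"time": {"$gte": t_start, "$lte": t_end}}},  # WHERE time BETWEEN ...
    {"$group": {"_id": "$src", "total": {"$sum": 1}}},       # GROUP BY src, count(src) AS total
    {"$match": {"total": {"$gt": 100}}},                     # HAVING count(src) > 100
]
# Executed with something like: db["TCP"].aggregate(pipeline)
```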
218 |
219 | **Example rule file**
220 | ```yaml
221 | name: "CC attack check" # rule name
222 | desc: "" # short description
223 | ctime: "Thu Oct 24 17:48:11 CST 2019" # creation time
224 |
225 | # Cache table (the data source to analyze); currently only TCP and UDP
226 | bolt: "TCP"
227 |
228 | # Filter, similar to a SQL query
229 | # ```
230 | # SELECT count(src) AS total , src
231 | # FROM TCP
232 | # WHERE dport IN (80, 443)
233 | # AND (time >= 1573110114 AND time <= 1573110144)
234 | # AND src NOT IN ('127.0.0.1', '10.10.0.4', '114.114.114.114')
235 | # GROUP BY src
236 | # HAVING count(src) > 100
237 | # ```
238 | # Returns: {u'total': 121, u'_id': u'115.115.115.115'}
239 | #
240 | filter:
241 | timeDelta: 1 # time window in seconds. QPS = noOfConnections / timeDelta
242 | trustIps: # src whitelist, list
243 | - 127.0.0.1
244 | - 10.10.0.4
245 | - 114.114.114.114
246 | motrPort: # ports to filter, list
247 | - 80
248 | - 443
249 | motrProto: "TCP" # protocol to filter, TCP or UDP (finer protocols such as http, https, ssh, ftp, dns are not distinguished yet)
250 | flags: "" # TCP handshake state (commonly syn, ack, psh, fin)
251 | noOfConnections: 100 # aggregation threshold, used with noOfCondition|returnFiled, e.g. group by src having count(src) $gte 1000
252 | noOfCondition: "$gte" # threshold condition: $ge\$gt\$gte\$lt\$lte
253 | returnFiled: "src" # field to aggregate on; must exist in the bolt table
254 |
255 | # Action module
256 | block:
257 | action: true # whether to block
258 | expire: 300 # block duration, seconds
259 | iptables: true # block via the firewall by default; set false to use a custom script instead. When true, leave blkcmd/ubkcmd empty (they are ignored anyway)
260 | blkcmd: "" # run on block; receives the returnFiled values as arguments (extend with your own script; mind execute permissions)
261 | ubkcmd: "" # run on unblock; receives the returnFiled values as arguments (extend with your own script; mind execute permissions)
262 |
263 | # Notification module
264 | notice:
265 | send: true # whether to send
266 | email:
267 | - 350311204@qq.com # recipient addresses, list
268 |
269 | ```
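Rules can also be written as JSON (set file_type accordingly in the global config). A minimal sketch of loading one and reading its effective threshold; the field names follow the example above:

```python
import json

# A cut-down rule in JSON form (same fields as the YAML example)
rule_text = """
{
  "name": "CC attack check",
  "bolt": "TCP",
  "filter": {
    "timeDelta": 1,
    "noOfConnections": 100,
    "noOfCondition": "$gte",
    "returnFiled": "src",
    "motrPort": [80, 443]
  },
  "block": {"action": true, "expire": 300, "iptables": true}
}
"""

rule = json.loads(rule_text)
flt = rule["filter"]
# Effective threshold: QPS = noOfConnections / timeDelta
assert flt["noOfConnections"] / flt["timeDelta"] == 100
assert rule["block"]["expire"] == 300  # blocked sources are released after 300 s
```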
270 |
271 |
272 |
273 | ### Installing Scout web
274 | Scout ships with an API for Grafana dashboards; just install grafana server and import the JSON template.
275 |
276 | **Install grafana server 6.4.4**
277 | * Ubuntu & Debian
278 | ```shell
279 | wget https://dl.grafana.com/oss/release/grafana_6.4.4_amd64.deb
280 | sudo dpkg -i grafana_6.4.4_amd64.deb
281 | ```
282 |
283 | * Redhat & Centos
284 | ```shell
285 | wget https://dl.grafana.com/oss/release/grafana-6.4.4-1.x86_64.rpm
286 | sudo yum install initscripts urw-fonts
287 | sudo rpm -Uvh grafana-6.4.4-1.x86_64.rpm
288 | ```
289 |
290 | * Start grafana server
291 | ```shell
292 | service grafana-server start
293 | ```
294 |
295 | * Open the web UI at http://IP:3000/
296 | ```shell
297 | user admin
298 | password admin
299 | ```
300 |
301 | **Import the template**
302 | * Install the grafana-simple-json-datasource plugin
303 | ```shell
304 | sudo grafana-cli plugins install simpod-json-datasource
305 | sudo service grafana-server restart
306 | ```
307 |
308 | * Configure simple-json in the admin UI
309 | 1. Add a datasource
310 |
311 |
312 | 2. Select the JSON engine
313 |
314 |
315 | 3. Configure the JSON engine endpoint
316 | ```shell
317 | Just set the URL to http://localhost:6667. The plugin only allows local communication; port 6667 is fixed and cannot be changed.
318 | ```
319 |
320 |
321 | 4. Import the JSON template
322 | ```shell
323 | Scout_plugin_for_grafana_server.json
324 | ```
325 |
326 |
327 |
328 |
329 |
330 |
331 | ### Simulated attack test
332 | The tests below launch attacks with hping3 (install it yourself); hping3 is a comprehensive network stress-testing tool.
333 |
334 | **Send SYN half-open requests to port 80**
335 | ```shell
336 | hping3 -I eth0 -S <target IP> -p 80 --faster
337 | ```
338 |
339 | **Send a UDP flood to port 53**
340 | ```shell
341 | hping3 -2 -I eth0 -S <target IP> -p 53 --faster
342 | ```
343 |
344 | **Watch the Scout output**
345 | ```shell
346 | [root@~]# scoutd watch
347 | logging output ......
348 | 2019-11-07 16:11:27 WARNING [LOCK] syn has been blocked, It has 606 packets transmitted to server.
349 | 2019-11-07 16:11:28 ERROR [MAIL] Send mail failed to: [Errno -2] Name or service not known
350 | 2019-11-07 16:11:29 WARNING [syn.yaml] {u'total': 606, u'_id': u'syn', 'block': 1}
351 | 2019-11-07 16:11:30 WARNING [LOCK] 117.*.*.22 has been blocked, It has 861 packets transmitted to server.
352 | 2019-11-07 16:11:32 ERROR [MAIL] Send mail failed to: [Errno -2] Name or service not known
353 | 2019-11-07 16:11:32 WARNING [tcp.yaml] {u'total': 861, u'_id': u'117.*.*.22'}
354 | 2019-11-07 16:11:36 WARNING [LOCK] 117.*.*.25 has been blocked, It has 904 packets transmitted to server.
355 | 2019-11-07 16:11:38 ERROR [MAIL] Send mail failed to: [Errno -2] Name or service not known
356 | 2019-11-07 16:11:39 WARNING [udp.yaml] {u'total': 904, u'_id': u'117.*.*.25'}
357 | 2019-11-07 16:11:39 WARNING [syn.yaml] {u'total': 1765, u'_id': u'syn', 'block': 1}
358 | 2019-11-07 16:11:40 WARNING [tcp.yaml] {u'total': 1817, u'_id': u'117.*.*.22'}
359 | 2019-11-07 16:11:43 WARNING [udp.yaml] {u'total': 1806, u'_id': u'117.*.*.25'}
360 | ```
361 |
362 | All rule files fired and reached their alert thresholds. Now check the block records.
363 |
364 | ```shell
365 | [root@~]# scoutd view
366 | +------------+----------+-------+------------------------------------------------------------+---------------------+
367 | | _ID | ConfName | Total | Command | Time |
368 | +------------+----------+-------+------------------------------------------------------------+---------------------+
369 | | syn | syn | 371 | /opt/notice.sh {u'total': 371, u'_id': u'syn', 'block': 1} | 2019-11-07 16:12:06 |
370 | | 117.*.*.22 | tcp | 371 | /sbin/iptables -I INPUT -s 117.*.*.22 -j DROP | 2019-11-07 16:12:06 |
371 | | 117.*.*.25 | udp | 604 | /sbin/iptables -I INPUT -s 117.*.*.25 -j DROP | 2019-11-07 16:12:09 |
372 | +------------+----------+-------+------------------------------------------------------------+---------------------+
373 | ```
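The Command column shows that blocks are ordinary iptables rules, so a source can also be unblocked by hand with the inverse command (a sketch; `block_cmd`/`unblock_cmd` are illustrative helpers, not part of Scout):

```python
def block_cmd(ip):
    # The rule Scout inserts, as shown in the scoutd view output above
    return "/sbin/iptables -I INPUT -s %s -j DROP" % ip

def unblock_cmd(ip):
    # -D deletes the matching rule, undoing the block
    return "/sbin/iptables -D INPUT -s %s -j DROP" % ip

assert unblock_cmd("10.0.0.9") == "/sbin/iptables -D INPUT -s 10.0.0.9 -j DROP"
```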
374 |
375 | ```shell
376 | [root@~]# scoutd dstat
377 | +---------------------+------+------+-------+------+--------------+-----------+-----------+
378 | | Time | 1min | 5min | 15min | %CPU | MemFree(MiB) | Recv(MiB) | Send(MiB) |
379 | +---------------------+------+------+-------+------+--------------+-----------+-----------+
380 | | 2019-11-07 16:29:33 | 0.00 | 0.04 | 0.05 | 0.5 | 4307 | 0.002 | 0.002 |
381 | | 2019-11-07 16:30:36 | 0.00 | 0.03 | 0.05 | 0.5 | 4258 | 0.000 | 0.000 |
382 | | 2019-11-07 16:31:40 | 0.77 | 0.21 | 0.11 | 43.9 | 4291 | 3.754 | 0.001 |
383 | | 2019-11-07 16:32:43 | 0.67 | 0.33 | 0.16 | 0.2 | 4300 | 0.000 | 0.000 |
384 | +---------------------+------+------+-------+------+--------------+-----------+-----------+
385 | ```
386 |
387 |
388 | ## About
389 |
390 | **Original Author:** YWJT http://www.ywjt.org/ (Copyright (C) 2020)
391 | **Maintainer:** ZhiQiang Koo <350311204@qq.com>
392 |
393 |
394 |
395 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 | __VERSIONS__ = '0.3.0-beta'
--------------------------------------------------------------------------------
/base.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 |
16 | import sys
17 | import os
18 | sys.path.append(".")
19 | import json
20 | import ConfigParser
21 | import functools
22 | import threading
23 | import logging
24 | import yaml
25 | import datetime
26 | import time
27 | from time import sleep
28 | from signal import SIGTERM
29 | from __init__ import __VERSIONS__
30 | try:
31 | import cPickle as pickle
32 | except ImportError:
33 | import pickle
34 |
35 | CONF_DIR = '/etc/scout.d/'
36 | LOGS_DIR = '/var/log/scout/'
37 | CACHE_DIR = '/var/cache/'
38 | PROC_DIR = '/usr/local/scout/'
39 | PLUGIN_DIR = PROC_DIR+'plugin/'
40 | PCAP_ID_FILE = '%s/.__pcap.id' % LOGS_DIR
41 |
42 | if not os.path.exists(PROC_DIR):
43 | print("run path failed: %s \n" % PROC_DIR)
44 | sys.exit(1)
45 |
46 | if not os.path.exists(LOGS_DIR):
47 | os.system('mkdir -p %s' %LOGS_DIR)
48 | os.chmod(LOGS_DIR, 0o775)
49 |
50 |
51 | def save_pid(bytes_text):
52 | try:
53 | f = open(PCAP_ID_FILE, 'wb+')
54 | pickle.dump(bytes_text, f)
55 | f.close()
56 | except IOError as e:
57 | Loger().ERROR('%s not found!' % PCAP_ID_FILE)
58 |
59 | def del_pid():
60 | try:
61 | f = open(PCAP_ID_FILE, 'rb')
62 | pids = pickle.load(f)
63 | for pid in pids:
64 | os.kill(int(pid), SIGTERM)
65 | f.close()
66 | os.remove(PCAP_ID_FILE)
67 | except IOError as e:
68 | print("del pid #1 failed: %d (%s)\n" % (e.errno, e.strerror))
69 | sys.exit(1)
70 |
71 | def cacheserver_running_alive():
72 | cache = os.popen('ps -fe|grep "cacheServer"|grep -v "grep"|wc -l').read().strip()
73 | if (cache == '0'):
74 | return None
75 | else:
76 | return True
77 |
78 | def scoutd_running_alive():
79 | try:
80 | f = open(LOGS_DIR + "scoutd.pid", 'r')
81 | PID = f.read()
82 | f.close()
83 | return True
84 | except IOError as e:
85 | Loger().STDOUT()
86 | Loger().ERROR('Scoutd not running... you must be start it first!')
87 | sys.exit(1)
88 |
89 |
90 | """
91 | #==================================
92 | # Async decorator
93 | #==================================
94 | """
95 | def async(func):
96 | @functools.wraps(func)
97 | def wrapper(*args, **kwargs):
98 | my_thread = threading.Thread(target=func, args=args, kwargs=kwargs)
99 | my_thread.start()
100 | return wrapper
101 |
102 |
103 | """
104 | #==================================
105 | # Load configuration file contents
106 | #==================================
107 | """
108 | class LoadConfig(object):
109 | cf = ''
110 | filepath = CONF_DIR + "scoutd.conf"
111 |
112 | def __init__(self):
113 | try:
114 | f = open(self.filepath, 'r')
115 | except IOError as e:
116 | print("\"%s\" Config file not found." % (self.filepath))
117 | sys.exit(1)
118 | f.close()
119 |
120 | self.cf = ConfigParser.ConfigParser()
121 | self.cf.read(self.filepath)
122 |
123 | def getSectionValue(self, section, key):
124 | return self.getFormat(self.cf.get(section, key))
125 | def getSectionOptions(self, section):
126 | return self.cf.options(section)
127 | def getSectionItems(self, section):
128 | return self.cf.items(section)
129 | def getFormat(self, string):
130 | return string.strip("'").strip('"').replace(" ","")
131 |
132 |
133 | """
134 | #==================================
135 | # Initialize base-class parameters for subclasses
136 | #==================================
137 | """
138 | class ScoutBase(object):
139 | avr = {}
140 | avr['version'] = __VERSIONS__ if __VERSIONS__ else 'none'
141 |
142 | def __init__(self):
143 | self.avr['log_level'] = LoadConfig().getSectionValue('system', 'log_level')
144 | self.avr['log_file'] = LoadConfig().getSectionValue('system', 'log_file')
145 | self.avr['listen_ip'] = LoadConfig().getSectionValue('main','listen_ip')
146 | self.avr['trust_ip'] = LoadConfig().getSectionValue('main','trust_ip')
147 | self.avr['motr_port'] = LoadConfig().getSectionValue('main','motr_port')
148 | self.avr['motr_interface'] = LoadConfig().getSectionValue('main','motr_interface')
149 | self.avr['max_bytes'] = int(LoadConfig().getSectionValue('main','max_bytes'))
150 | self.avr['promiscuous'] = LoadConfig().getSectionValue('main','promiscuous').lower() == 'true' # bool() of any non-empty string is True
151 | self.avr['buffer_timeout'] = int(LoadConfig().getSectionValue('main','buffer_timeout'))
152 | self.avr['expire_after_seconds'] = int(LoadConfig().getSectionValue('cache','expire_after_seconds'))
153 | self.avr['storage_type'] = LoadConfig().getSectionValue('cache','storage_type')
154 | self.avr['storage_size'] = int(LoadConfig().getSectionValue('cache','storage_size'))
155 | self.avr['file_path'] = LoadConfig().getSectionValue('rules','file_path')
156 | self.avr['file_type'] = LoadConfig().getSectionValue('rules','file_type')
157 | self.avr['admin_email'] = LoadConfig().getSectionValue('alert','admin_email')
158 | self.avr['smtp_server'] = LoadConfig().getSectionValue('alert','smtp_server')
160 | self.avr['smtp_user'] = LoadConfig().getSectionValue('alert','smtp_user')
161 | self.avr['smtp_passwd'] = LoadConfig().getSectionValue('alert','smtp_passwd')
162 | self.avr['smtp_ssl'] = LoadConfig().getSectionValue('alert','smtp_ssl')
163 | self.avr['http_host'] = LoadConfig().getSectionValue('http','http_host')
164 | self.avr['http_port'] = LoadConfig().getSectionValue('http','http_port')
165 |
166 | def getValue(self):
167 | return self.avr
168 |
169 |
170 | def cidr(self):
171 | s = {}
172 | s['wip']=''
173 | s['port']=''
174 | s['lip']=''
175 | for locip in self.avr['listen_ip'].split(","):
176 | if locip:
177 | s['lip'] += 'dst host {0} or '.format(locip)
178 | for port in self.avr['motr_port'].split(","):
179 | if port:
180 | s['port'] += 'dst port {0} or '.format(port)
181 | for wip in self.avr['trust_ip'].split(","):
182 | if wip:
183 | if wip.find("~")>0:
184 | lstart = int(wip.split("~")[0].split(".")[-1])
185 | lend = int(wip.split("~")[1].split(".")[-1])+1
186 | ldun = ".".join(wip.split("~")[1].split(".")[0:3])
187 | for wli in xrange(lstart, lend):
188 | s['wip'] += '! dst net {0} and '.format("%s.%d" % (ldun,wli))
189 | elif wip.find("-")>0:
190 | lstart = int(wip.split("-")[0].split(".")[-1])
191 | lend = int(wip.split("-")[1].split(".")[-1])+1
192 | ldun = ".".join(wip.split("-")[1].split(".")[0:3])
193 | for wli in xrange(lstart, lend):
194 | s['wip'] += '! dst net {0} and '.format("%s.%d" % (ldun,wli))
195 | else:
196 | s['wip'] += '! dst net {0} and '.format(wip)
197 | s['lip'] = s['lip'].strip('or ')
198 | s['port']= s['port'].strip('or ')
199 | s['wip'] = s['wip'].strip('and ')
200 | return s
201 |
202 |
203 | """
204 | #==================================
205 | # Load rule file contents
206 | #==================================
207 | """
208 | class Rules(ScoutBase):
209 | def __init__(self, basename):
210 | ScoutBase.__init__(self)
211 |
212 | if self.avr['file_path']:
213 | filepath = '%s/%s' %(self.avr['file_path'], basename)
214 | else:
215 | filepath = '%s/%s' %(CONF_DIR+"/rules", basename)
216 |
217 | if self.avr['file_type'] == 'yaml':
218 | self.parse = self.YAML(self.FILE(filepath, self.avr['file_type']))
219 | elif self.avr['file_type'] == '' or self.avr['file_type'] == 'json':
220 | self.parse = self.JSON(self.FILE(filepath, 'json'))
221 | else:
222 | Loger().ERROR("rules file type not supported!")
223 | raise ValueError("rules file type not supported!")
224 |
225 | def echo(self):
226 | return self.parse
227 |
228 | def FILE(self, filepath, filetype):
229 | return "%s.%s" %(filepath, filetype)
230 |
231 | def YAML(self, filename):
232 | with open(filename, 'r') as f:
233 | yam = yaml.safe_load(f.read())
234 | return yam
235 |
236 | def JSON(self, filename):
237 | with open(filename, 'r') as f:
238 | js = json.loads(f.read())
239 | return js
240 |
241 |
242 |
243 | """
244 | #==================================
245 | # Logging helper
246 | #==================================
247 | """
248 | class Loger(ScoutBase):
249 | def __init__(self):
250 | ScoutBase.__init__(self)
251 |
252 | if self.avr['log_file'].find("/") == -1:
253 | self.log_file = LOGS_DIR + self.avr['log_file']
254 | else:
255 | self.log_file = self.avr['log_file']
256 |
257 | if self.avr['log_level'].upper()=="DEBUG":
258 | logging_level = logging.DEBUG
259 | elif self.avr['log_level'].upper()=="INFO":
260 | logging_level = logging.INFO
261 | elif self.avr['log_level'].upper()=="WARNING":
262 | logging_level = logging.WARNING
263 | elif self.avr['log_level'].upper()=="ERROR":
264 | logging_level = logging.ERROR
265 | elif self.avr['log_level'].upper()=="CRITICAL":
266 | logging_level = logging.CRITICAL
267 | else:
268 | logging_level = logging.WARNING
269 |
270 | logging.basicConfig(level=logging_level,
271 | format='%(asctime)s %(levelname)s %(message)s', # log format
272 | datefmt='%Y-%m-%d %H:%M:%S', # timestamp format, e.g. 2018-11-12 23:50:21
273 | filename=self.log_file, # log output path
274 | filemode='a') # append mode
275 |
276 | def STDOUT(self):
277 | logger = logging.getLogger()
278 | ch = logging.StreamHandler()
279 | logger.addHandler(ch)
280 | def DEBUG(self, data):
281 | return logging.debug(data)
282 | def INFO(self, data):
283 | return logging.info(data)
284 | def WARNING(self, data):
285 | return logging.warning(data)
286 | def ERROR(self, data):
287 | return logging.error(data)
288 | def CRITICAL(self, data):
289 | return logging.critical(data)
290 |
291 |
292 | """
293 | #==================================
294 | # Follow log output (tail -f)
295 | #==================================
296 | """
297 | class Tailf(ScoutBase):
298 | def __init__(self):
299 | ScoutBase.__init__(self)
300 | __logfilename = LOGS_DIR + self.avr['log_file']
301 | self.check_file_validity(__logfilename)
302 | self.tailed_file = __logfilename
303 | self.callback = sys.stdout.write
304 |
305 | def follow(self, s=1):
306 | print("logging output ......")
307 | with open(self.tailed_file) as file_:
308 | file_.seek(0, 2)
309 | while True:
310 | curr_position = file_.tell()
311 | line = file_.readline()
312 | if not line:
313 | file_.seek(curr_position)
314 | else:
315 | self.callback(line)
316 | time.sleep(s)
317 |
318 | def register_callback(self, func):
319 | self.callback = func
320 |
321 | def check_file_validity(self, file_):
322 | if not os.access(file_, os.F_OK):
323 | raise TailError("File '%s' does not exist" % (file_))
324 | if not os.access(file_, os.R_OK):
325 | raise TailError("File '%s' not readable" % (file_))
326 | if os.path.isdir(file_):
327 | raise TailError("File '%s' is a directory" % (file_))
328 |
329 | class TailError(Exception):
330 | def __init__(self, msg):
331 | self.message = msg
332 | def __str__(self):
333 | return self.message
334 |
335 |
336 | """
337 | #==================================
338 | # Log record types
339 | #==================================
340 | # [RECORD] recorded, not blocked
341 | # [LOCK] blocked
342 | # [UNLOCK] unblocked
343 | # [REBL] reload the block list
344 | # [MAIL] send mail
345 | #==================================
346 | """
347 | Notes = {
348 | "RECORD" : "[RECORD] %s has been Unusual, It has %s packets transmitted to server.",
349 | "LOCK" : "[LOCK] %s has been blocked, It has %s packets transmitted to server.",
350 | "UNLOCK": "[UNLOCK] %s has been unblocked.",
351 | "REBL": "[REBL] %s reload in iptables Success.",
352 | "MAIL": "[MAIL] %s record on %s,It has %s packets transmitted to server. Attention please!"
353 | }
354 |
355 |
356 |
357 |
358 |
359 |
360 |
--------------------------------------------------------------------------------
/bin/README.md:
--------------------------------------------------------------------------------
1 |
2 | # cacheServer for scout v0.3.0-beta
3 |
4 | scout bundles a caching service (cacheServer). If you want to run scout directly from source, install cacheServer first.
5 | ```
6 | ## Centos 6.x
7 | tar -zxvf cacheServer_v0.3.0-beta-Centos6.tar.gz -C /usr/local/bin/
8 |
9 |
10 | ## Centos 7.x
11 | tar -zxvf cacheServer_v0.3.0-beta-Centos7.tar.gz -C /usr/local/bin/
12 |
13 |
14 | ## Ubuntu >14.04
15 | tar zxvf cacheServer_v0.3.0-beta-Ubunut.tar.gz -C /usr/local/bin/
16 | ```
17 |
--------------------------------------------------------------------------------
/bin/cacheServer_v0.3.0-beta-Centos6.tar.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/bin/cacheServer_v0.3.0-beta-Centos6.tar.gz
--------------------------------------------------------------------------------
/bin/cacheServer_v0.3.0-beta-Centos7.tar.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/bin/cacheServer_v0.3.0-beta-Centos7.tar.gz
--------------------------------------------------------------------------------
/bin/cacheServer_v0.3.0-beta-Ubunut.tar.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/bin/cacheServer_v0.3.0-beta-Ubunut.tar.gz
--------------------------------------------------------------------------------
/cache/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/cache/cache.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 |
16 | import sys
17 | sys.path.append(".")
18 | import os
20 | import pprint
21 | import pymongo
22 | from pymongo import MongoClient
23 |
24 |
25 | class CacheServer(object):
26 | """
27 | cacheUtil
28 | """
29 |
30 | def __init__(self):
31 | """
32 | Initialize connection defaults
33 | """
34 | self._host = 'localhost'
35 | self._port = '6666'
36 | self._database = 'Scout'
37 | pass
38 |
39 | def create_or_connect_cache(self):
40 | """
41 | Return a database instance (synchronous client)
42 | :return:
43 | """
44 | try:
45 | uri = "mongodb://{0}:{1}/?authSource={2}".format(self._host, self._port, self._database)
46 | client = MongoClient(uri)
47 | return client[self._database]
48 | except Exception as e:
49 | print(e)
50 | raise
51 |
52 |
53 | def create_index(self, collection, key, expire_time=None):
54 | """
55 | Create an index on key; with expire_time set, documents expire after that TTL (seconds)
56 | :param collection:
57 | :param key:
58 | :param expire_time:
59 | """
60 | if expire_time:
61 | return collection.create_index([(key, pymongo.ASCENDING)], expireAfterSeconds=expire_time)
62 | else:
63 | return collection.create_index([(key, pymongo.ASCENDING)])
64 |
65 |
66 | def get_collection(self, dbmobj, collection_name):
67 | """
68 | Check whether the collection already exists
69 | Return the given name if present, else None
70 | :param collection_name:
71 | :return:
72 | """
73 | collist = dbmobj.list_collection_names()
74 | if collection_name in collist:
75 | try:
76 | return collection_name
77 | except Exception as e:
78 | raise
79 | else:
80 | return None
81 |
82 | def find_aggregate(self, collection, pipeline=[]):
83 | """
84 | MongoDB aggregation works like a pipeline built from stages: filter, project, group, sort, limit, skip
85 | """
86 | return collection.aggregate(pipeline)
87 |
88 | def find_one(self, collection, **kwargs):
89 | """
90 | Find a single doc by condition; returns None when nothing matches
91 | :param collection:
92 | :param kwargs:
93 | :return:
94 | """
95 | result_obj = collection.find_one(kwargs)
96 | return result_obj
97 |
98 | def find_all(self, collection, limit=0, skip=0): # limit=0 means no limit; .limit(None) would raise
99 | """
100 | Return all documents in the collection
101 | :return:
102 | """
103 | cursor = collection.find()
104 | cursor.skip(skip).limit(limit)
105 | return cursor
106 |
107 | def find_conditions(self, collection, limit=0, **kwargs):
108 | """
109 | Query by condition, limiting the number of returned documents
110 | :param collection:
111 | :param limit:
112 | :param kwargs:
113 | :return:
114 | """
115 | # return collection.find(kwargs).limit(limit)
116 | if limit == 0:
117 | # cursor = collection.find(kwargs).sort('i').skip(0)
118 | cursor = collection.find(kwargs).skip(0)
119 | else:
120 | cursor = collection.find(kwargs).sort('time',pymongo.DESCENDING).limit(limit).skip(0)
121 | return cursor
122 |
123 | def count(self, collection, kwargs={}):
124 | """
125 | Return the number of matching documents
126 | :param collection:
127 | :param kwargs:
128 | :return:
129 | """
130 | n = collection.count_documents(kwargs)
131 | # n = db.test_collection.count_documents({'i': {'$gt': 1000}})
132 | print('%s documents in collection' % n)
133 | return n
134 |
135 | def replace_id(self, collection, condition={}, new_doc={}):
136 | """
137 | Update by _id; insert the document if no match exists
138 | :param collection:
139 | :param condition:
140 | :param new_doc:
141 | :return:
142 | """
143 | _id = condition['_id']
144 | old_document = collection.find_one(condition)
145 | if old_document:
146 | collection.replace_one({'_id': _id}, new_doc)
147 | return 0
148 | else:
149 | collection.insert_one(new_doc)
150 | return 1
151 |
152 | def update(self, collection, condition={}, new_part={}):
153 | """
154 | Update part of a document via $set
155 | :param collection:
156 | :param condition:
157 | :param new_part:
158 | :return:
159 | """
160 | result = collection.update_one(condition, {'$set': new_part})
161 | print('updated %s document' % result.modified_count)
162 | new_document = collection.find_one(condition)
163 | print('document is now %s' % pprint.pformat(new_document))
164 |
165 | def replace(self, collection, condition={}, new_doc={}):
166 | """
167 | Replace a whole document matched by condition, step by step (looked up via _id)
168 | :param collection:
169 | :param condition:
170 | :param new_doc:
171 | :return:
172 | """
173 | old_document = collection.find_one(condition)
174 | _id = old_document['_id']
175 | result = collection.replace_one({'_id': _id}, new_doc)
176 | print('replaced %s document' % result.modified_count)
177 | new_document = collection.find_one({'_id': _id})
178 | print('document is now %s' % pprint.pformat(new_document))
179 |
180 | def update_many(self, collection, condition={}, new_part={}):
181 | """
182 | Bulk update via $set
183 | :param collection:
184 | :param condition:
185 | :param new_part:
186 | :return:
187 | """
188 | # result4 = collection.update_many({'i': {'$gt': 100}}, {'$set': {'key': 'value'}})
189 | result = collection.update_many(condition, {'$set': new_part})
190 | print('updated %s document' % result.modified_count)
191 |
192 | def insert_one(self, collection, new_doc={}):
193 | """
194 | Insert a single document
195 | :param collection:
196 | :param new_doc:
197 | :return:
198 | """
199 | try:
200 | result = collection.insert_one(new_doc)
201 | #print('inserted_id %s' % repr(result.inserted_id))
202 | return result
203 | except Exception as e:
204 | return str(e)
205 |
206 | def insert_many(self, collection, new_doc=[]):
207 | """
208 | Insert many documents at once
209 | :param collection:
210 | :param new_doc:
211 | :return:
212 | """
213 | try:
214 | result = collection.insert_many(new_doc)
215 | #print('inserted %d docs' % (len(result.inserted_ids),))
216 | return 'ok'
217 | except Exception as e:
218 | return str(e)
219 |
220 | def delete_many(self, collection, condition={}):
221 | """
222 | Bulk delete by condition
223 | :param collection:
224 | :param condition:
225 | :return:
226 | """
228 | #n = collection.count_documents({})
229 | #print('%s documents before calling delete_many()' % n)
230 | # result4 = collection.delete_many({'i': {'$gte': 1000}})
231 | result = collection.delete_many(condition)
232 | #n = collection.count_documents({})
233 | #print('%s documents after calling delete_many()' % n)
234 | return result
235 |
236 |
--------------------------------------------------------------------------------
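The `find_aggregate` wrapper above simply forwards a pipeline to `collection.aggregate`. As a rough illustration (the collection and field names here are hypothetical, and actually executing it would need a live cache server), a pipeline that counts recent packets per source IP might look like:

```python
import time

now = int(time.time())
# Hypothetical pipeline: count packets per source IP seen in the last 10
# seconds, keep sources with >= 100 hits, most active first.
pipeline = [
    {'$match': {'time': {'$gte': now - 10}}},          # filter stage
    {'$group': {'_id': '$src', 'hits': {'$sum': 1}}},  # group stage
    {'$match': {'hits': {'$gte': 100}}},               # threshold stage
    {'$sort': {'hits': -1}},                           # sort stage
]

# With a live connection this would run as:
#   CacheServer().find_aggregate(tcp_collection, pipeline)
print(len(pipeline))
```

Each stage feeds its output into the next, which is why the docstring compares aggregation to a pipe.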
/cache/cacheserver.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 | import sys
16 | sys.path.append("..")
17 | import os
18 | import time
19 | import datetime
20 | import shutil
21 | import commands
22 | from base import ScoutBase, Loger
23 | from base import CONF_DIR, LOGS_DIR, CACHE_DIR, PROC_DIR, PLUGIN_DIR
24 | from base import cacheserver_running_alive
25 |
26 |
27 |
28 | class CacheServerd(ScoutBase):
29 | def __init__(self):
30 | ScoutBase.__init__(self)
31 | __dbPath = '%s/.Scoutd/Bolt' % CACHE_DIR
32 | __logPath = '%s/cacheserver.log' % LOGS_DIR
33 | __storagePort = 6666
34 | __storageSize = self.avr['storage_size']
35 | if self.avr['storage_type'] in ['Memory','Disk']:
36 | if self.avr['storage_type']=='Memory':
37 | self.__CacheRunCommand = 'cacheServer \
38 | --port=%d \
39 | --dbpath=%s \
40 | --storageEngine=inMemory \
41 | --inMemorySizeGB=%d \
42 | --logpath=%s \
43 | --logappend \
44 | --fork \
45 | --quiet' % (__storagePort, __dbPath, __storageSize, __logPath)
46 | else:
47 | self.__CacheRunCommand = 'cacheServer \
48 | --port=%d \
49 | --dbpath=%s \
50 | --storageEngine=wiredTiger \
51 | --wiredTigerCacheSizeGB=%d \
52 | --logpath=%s \
53 | --logappend \
54 | --fork \
55 | --quiet' % (__storagePort, __dbPath, __storageSize, __logPath)
56 | self.__CacheStopCommand = 'cacheServer --dbpath=%s --shutdown' % __dbPath
57 | else:
58 | Loger().CRITICAL("'storage_type' value does not match! options: 'Memory' or 'Disk'")
59 | raise ValueError("invalid 'storage_type': expected 'Memory' or 'Disk'")
60 |
61 | def start(self):
62 | try:
63 | if not cacheserver_running_alive():
64 | os.chdir(PROC_DIR)
65 | status, output = commands.getstatusoutput(self.__CacheRunCommand)
66 | if status==0:
67 | Loger().INFO('CacheServer started with pid {}\n'.format(os.getpid()))
68 | time.sleep(2)
69 | else:
70 | Loger().ERROR('CacheServer start failed: %s' % output)
71 | else:
72 | Loger().INFO('CacheServer Daemon Alive!')
73 | except Exception as e:
74 | sys.stdout.write(str(e)+'\n')
75 | pass
76 |
77 | def stop(self):
78 | try:
79 | if cacheserver_running_alive():
80 | os.chdir(PROC_DIR)
81 | status, output = commands.getstatusoutput(self.__CacheStopCommand)
82 | if status==0:
83 | Loger().WARNING('CacheServer stopped successfully. {}\n'.format(time.ctime()))
84 | time.sleep(2)
85 | else:
86 | Loger().ERROR('CacheServer stop failed! %s' % output)
87 | raise RuntimeError(output)
88 | except Exception as e:
89 | pass
90 |
91 | def restart(self):
92 | self.stop()
93 | self.start()
94 |
95 |
96 |
97 | def initCache():
98 | __dbPath = '%s/.Scoutd/Bolt' % CACHE_DIR
99 | if os.path.exists(__dbPath):
100 | shutil.rmtree(__dbPath)
101 | os.system('mkdir -p %s' % __dbPath)
102 | os.chmod(__dbPath, 0o777)
--------------------------------------------------------------------------------
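`cacheserver.py` relies on the Python 2-only `commands` module. If this were ported to Python 3, the same status/output pattern could be sketched with `subprocess.getstatusoutput` (the command below is a stand-in, not the real `cacheServer` invocation):

```python
import subprocess

def run_daemon_command(cmd):
    """Run a shell command and return (status, output),
    mirroring Python 2's commands.getstatusoutput."""
    status, output = subprocess.getstatusoutput(cmd)
    return status, output

# Illustrative command only; the real command string is assembled
# in CacheServerd.__init__ from the [cache] config section.
status, output = run_daemon_command('echo cacheServer started')
print(status, output)
```

`subprocess.getstatusoutput` keeps the shell semantics the daemon start/stop code depends on (string command, combined stdout/stderr), so the surrounding logic would not need to change.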
/conf/rules/syn.yaml:
--------------------------------------------------------------------------------
1 | name: "syn flood check."
2 | desc: "every 10 seconds to check syn flood state."
3 | ctime: "Thu Oct 24 17:48:11 CST 2019"
4 |
5 | bolt: "TCP"
6 |
7 | filter:
8 | timeDelta: 3
9 | trustIps: ""
10 | motrPort: ""
11 | motrProto: "TCP"
12 | flags:
13 | - "syn"
14 | noOfConnections: 100
15 | noOfCondition: "$gte"
16 | returnFiled: "flags"
17 |
18 | block:
19 | action: true
20 | expire: 300
21 | iptables: false
22 | blkcmd: "/opt/notice.sh %s"
23 | ubkcmd: ""
24 |
25 | notice:
26 | send: true
27 | email:
28 | - 350311204@qq.com
29 |
--------------------------------------------------------------------------------
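A rule's `filter` block is ultimately matched against cached packet documents. A rough, hypothetical sketch of how `syn.yaml`'s filter could translate into a Mongo-style query (the real translation lives in `rule.py`, which is not shown in this chunk):

```python
import time

# Fields copied from syn.yaml's filter block
rule_filter = {
    'timeDelta': 3,
    'motrProto': 'TCP',
    'flags': ['syn'],
    'noOfConnections': 100,
    'noOfCondition': '$gte',
    'returnFiled': 'flags',
}

now = int(time.time())
# Match recent packets of the monitored protocol carrying one of the rule's flags
query = {
    'time': {'$gte': now - rule_filter['timeDelta']},
    'proto': rule_filter['motrProto'],
    'flags': {'$in': rule_filter['flags']},
}
# The block would then fire when the match count satisfies
# noOfCondition/noOfConnections, e.g. count >= 100 for '$gte'.
print(sorted(query))
```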
/conf/rules/tcp.yaml:
--------------------------------------------------------------------------------
1 | name: "CC attack check."
2 | desc: "every 10 seconds to check src ip QPS >= 100."
3 | ctime: "Thu Oct 24 17:48:11 CST 2019"
4 |
5 | bolt: "TCP"
6 |
7 | filter:
8 | timeDelta: 3
9 | trustIps:
10 | - 127.0.0.1
11 | - 10.10.0.4
12 | - 114.114.114.114
13 | motrPort:
14 | - 80
15 | - 443
16 | motrProto: "TCP"
17 | flags: ""
18 | noOfConnections: 300
19 | noOfCondition: "$gte"
20 | returnFiled: "src"
21 |
22 | block:
23 | action: true
24 | expire: 300
25 | iptables: true
26 | blkcmd: ""
27 | ubkcmd: ""
28 |
29 | notice:
30 | send: true
31 | email:
32 | - 350311204@qq.com
33 |
--------------------------------------------------------------------------------
/conf/rules/udp.yaml:
--------------------------------------------------------------------------------
1 | name: "UDP flood check."
2 | desc: "every 10 seconds to check for 'DNS reflection attack' in UDP protocol."
3 | ctime: "Thu Oct 24 17:48:11 CST 2019"
4 |
5 | bolt: "UDP"
6 |
7 | filter:
8 | timeDelta: 3
9 | trustIps:
10 | - 127.0.0.1
11 | - 10.10.0.4
12 | - 114.114.114.114
13 | motrPort:
14 | - 53
15 | motrProto: "UDP"
16 | flags: ""
17 | noOfConnections: 300
18 | noOfCondition: "$gte"
19 | returnFiled: "src"
20 |
21 | block:
22 | action: true
23 | expire: 300
24 | iptables: true
25 | blkcmd: ""
26 | ubkcmd: ""
27 |
28 | notice:
29 | send: true
30 | email:
31 | - 350311204@qq.com
32 |
--------------------------------------------------------------------------------
/conf/scoutd.conf:
--------------------------------------------------------------------------------
1 |
2 | # @ Scout for Python
3 | ##############################################################################
4 | # Author: YWJT / ZhiQiang Koo #
5 | # Modify: 2019-11-06 #
6 | ##############################################################################
7 | # This program is distributed under the "Artistic License" Agreement #
8 | # The LICENSE file is located in the same directory as this program. Please #
9 | # read the LICENSE file before you make copies or distribute this program #
10 | ##############################################################################
11 |
12 |
13 | [system]
14 | # log_level:
15 | #   options: DEBUG, INFO, WARNING, ERROR, CRITICAL
16 | #   default: WARNING
17 | #
18 | # log_file:
19 | #   log_file = "scoutd.log" is stored under /var/log/scout/ by default
20 | #   an absolute path also works:
21 | #   log_file = "/var/log/scoutd.log"
22 | #
23 |
24 | log_level = "INFO"
25 | log_file = "scoutd.log"
26 |
27 | [main]
28 | # All local IPs used for communication; do not use 0.0.0.0
29 | listen_ip = "10.10.0.114,114.114.114.114"
30 |
31 | # Trusted IP list; CIDR notation is supported
32 | trust_ip = "127.0.0.1"
33 |
34 | # Interface to listen on; use 'any' for multiple NICs, otherwise 'eth0|eth1|em0|em1...'
35 | motr_interface = "eth0"
36 |
37 | # Ports to listen on; multiple ports allowed, e.g. "443,80,8080"
38 | # Rule files can only filter ports listed here; unlisted ports do not exist for the rules
39 | motr_port = "80,443,53"
40 |
41 | # Maximum bytes captured per packet; acts as the buffer within buffer_timeout
42 | max_bytes = 65536
43 |
44 | # Whether the interface must be put into promiscuous mode
45 | # In promiscuous mode every packet passing the NIC is captured, producing a lot of noise
46 | # To capture precisely the traffic flowing in from outside, set this to False
47 | promiscuous = False
48 |
49 | # Buffer timeout in milliseconds; 1000ms is usually fine
50 | # The capture routine returns one batch of data per timeout period
51 | buffer_timeout = 1000
52 |
53 | # =======================================================================
54 | # About the cache service:
55 | # The cache service runs separately from the Scout main program but is managed by
56 | # Scoutd; once started it rarely needs maintenance. In Memory mode, a restart clears all records.
57 | # =======================================================================
58 | [cache]
59 | # TTL after which cached records are deleted automatically, in seconds
60 | # default: 86400 (1 day)
61 | expire_after_seconds = 86400
62 |
63 | # Cache storage type: 'Memory' or 'Disk'
64 | #   Memory: in-memory; data is reset on shutdown; fast, accurate detection
65 | #   Disk: persistent; slower detection, so raise rule thresholds to compensate
66 | # Switching types on the fly is not supported; after changing it, re-initialize the cache service with 'Scoutd init'
67 | storage_type = 'Memory'
68 |
69 |
70 | # Memory usage limit, minimum 1G
71 | # Defaults to half of available system memory; must be an integer (no decimals)
72 | storage_size = 1
73 |
74 |
75 | [http]
76 | # HTTP API parameters
77 | # # http_host: bind address, default 'localhost'
78 | # # http_port: bind port, default 6667
79 | http_host = 'localhost'
80 | http_port = 6667
81 |
82 |
83 | [rules]
84 | # Rule file directory, default: /conf/rules/
85 | # Rule file names are arbitrary but must not clash; a duplicate name overrides the previous file
86 | file_path = ""
87 |
88 | # Rule file type
89 | # supported: yaml|json, default: json
90 | file_type="yaml"
91 |
92 | [alert]
93 | # SMTP server for alerts
94 | # Only email is supported for now; other channels can be wired up via 'blkcmd/ubkcmd' in the rule files
95 | smtp_user = ""
96 | smtp_passwd = ""
97 | smtp_server = ""
98 | smtp_ssl = False
99 | admin_email = "admin@ScoutBase.net"
100 |
--------------------------------------------------------------------------------
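`scoutd.conf` is INI-style with quoted string values. The repo's own config loader (in `base.py`) is not shown in this chunk, but reading these values can be sketched with Python's stdlib `configparser`, stripping the surrounding quotes by hand:

```python
import configparser
from io import StringIO

# A fragment in the same format as scoutd.conf's [main] section
sample = StringIO("""
[main]
listen_ip = "10.10.0.114,114.114.114.114"
motr_port = "80,443,53"
max_bytes = 65536
promiscuous = False
""")

cp = configparser.ConfigParser()
cp.read_file(sample)

def get_str(section, key):
    # Values in scoutd.conf are quoted; strip the surrounding quotes
    return cp.get(section, key).strip('"\'')

ports = [int(p) for p in get_str('main', 'motr_port').split(',')]
print(ports)
```

Numeric and boolean values (`max_bytes`, `promiscuous`) are unquoted, so `cp.getint` / `cp.getboolean` work on them directly.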
/doc/2384F272-01BD-4081-BD0C-2993592A5C94.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/doc/2384F272-01BD-4081-BD0C-2993592A5C94.png
--------------------------------------------------------------------------------
/doc/6563C7A9-A76A-4851-BF53-91D6CF08CE4F.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/doc/6563C7A9-A76A-4851-BF53-91D6CF08CE4F.png
--------------------------------------------------------------------------------
/doc/6F7268C1-9277-4516-B5D7-2D95477EF22C.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/doc/6F7268C1-9277-4516-B5D7-2D95477EF22C.png
--------------------------------------------------------------------------------
/doc/7CE72B62-09B9-427C-9CD3-9E09CCACAF8A.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/doc/7CE72B62-09B9-427C-9CD3-9E09CCACAF8A.png
--------------------------------------------------------------------------------
/doc/EEAF5357-D03F-41F8-B574-CEF4ECC570F2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/doc/EEAF5357-D03F-41F8-B574-CEF4ECC570F2.png
--------------------------------------------------------------------------------
/doc/FD2AF693-B35A-4B67-A07C-6E5B29FC666A.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/doc/FD2AF693-B35A-4B67-A07C-6E5B29FC666A.png
--------------------------------------------------------------------------------
/doc/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/doc/Scout_ObServer.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ywjt/Scout/a2bd57afb6b6cdfa1dbd94c1494c13fa8b320768/doc/Scout_ObServer.png
--------------------------------------------------------------------------------
/notice.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 | import sys
16 | sys.path.append(".")
17 | import smtplib
18 | from email.header import Header
19 | from email.mime.text import MIMEText
20 | from base import ScoutBase, Loger
21 |
22 |
23 | class PyEmail(ScoutBase):
24 | sender = '' # sender address
25 |
26 | """
27 | @name: Constructor
28 | @desc: read the config file and initialize variables
29 | """
30 | def __init__(self):
31 | ScoutBase.__init__(self)
32 |
33 | if '@' not in self.avr['admin_email']:
34 | self.sender = self.avr['smtp_server'].replace(self.avr['smtp_server'].split('.')[0]+'.',self.avr['admin_email']+'@')
35 | else:
36 | self.sender = self.avr['admin_email']
37 |
38 |
39 | """
40 | @name: Plain send mode
41 | @desc: no SSL authentication required
42 | """
43 | def nonsend(self,subject, msg, receiver):
44 | msg = MIMEText(msg,'plain','utf-8') # 'utf-8' is needed for non-ASCII text; single-byte text doesn't require it
45 | msg['Subject'] = subject
46 | smtp = smtplib.SMTP()
47 | smtp.connect(self.avr['smtp_server'])
48 | smtp.login(self.avr['smtp_user'], self.avr['smtp_passwd'])
49 | smtp.sendmail(self.sender, receiver, msg.as_string())
50 | smtp.quit()
51 |
52 | """
53 | @name: SSL send mode
54 | @desc: supports Gmail
55 | """
56 | def sslsend(self,subject, msg, receiver):
57 | msg = MIMEText(msg,'plain','utf-8') # 'utf-8' is needed for non-ASCII text; single-byte text doesn't require it
58 | msg['Subject'] = Header(subject, 'utf-8')
59 | smtp = smtplib.SMTP()
60 | smtp.connect(self.avr['smtp_server'])
61 | smtp.ehlo()
62 | smtp.starttls()
63 | smtp.ehlo()
64 | smtp.set_debuglevel(1)
65 | smtp.login(self.avr['smtp_user'], self.avr['smtp_passwd'])
66 | smtp.sendmail(self.sender, receiver, msg.as_string())
67 | smtp.quit()
68 |
69 | """
70 | @name: Send the mail
71 | """
72 | def sendto(self,subject, msg, receiver):
73 | if self.avr['smtp_ssl']:
74 | try:
75 | self.sslsend(subject, msg, receiver)
76 | Loger().WARNING('[MAIL] Send mail Success.')
77 | except Exception as e:
78 | Loger().ERROR('[MAIL] Send mail failed to: %s' % e)
79 | else:
80 | try:
81 | self.nonsend(subject, msg, receiver)
82 | Loger().WARNING('[MAIL] Send mail Success.')
83 | except Exception as e:
84 | Loger().ERROR('[MAIL] Send mail failed to: %s' % e)
85 |
86 |
--------------------------------------------------------------------------------
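Both send paths in `notice.py` build the payload with `MIMEText` before talking to the SMTP server. That construction step can be exercised on its own, with placeholder addresses, without any mail server:

```python
from email.header import Header
from email.mime.text import MIMEText

# Build the same kind of message nonsend()/sslsend() construct;
# no SMTP connection is made here, so this runs standalone.
subject = 'Scout alert: syn flood detected'
body = 'src 1.2.3.4 exceeded the syn threshold'

msg = MIMEText(body, 'plain', 'utf-8')   # 'utf-8' is needed for non-ASCII text
msg['Subject'] = Header(subject, 'utf-8')
msg['From'] = 'admin@example.com'        # placeholder sender
msg['To'] = 'ops@example.com'            # placeholder receiver

wire = msg.as_string()                   # what smtp.sendmail() would transmit
print('Subject' in wire)
```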
/pcap/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/pcap/dstat.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 | import sys
16 | sys.path.append("..")
17 | import os
18 | import re
19 | import time
20 | import datetime
21 | import psutil
22 | from time import sleep
23 | from base import ScoutBase
24 | from cache.cache import CacheServer
25 | from base import async, Loger
26 | from prettytable import PrettyTable
27 |
28 |
29 | class Dstat(ScoutBase):
30 |
31 | def __init__(self):
32 | ScoutBase.__init__(self)
33 |
34 | """Instantiate a CacheServer
35 | """
36 | self.Cache = CacheServer().create_or_connect_cache()
37 | self.Dcol = self.Cache["DSTAT"]
38 | CacheServer().create_index(self.Dcol, "exptime", self.avr['expire_after_seconds'])
39 |
40 |
41 | def load_cache(self, collection_obj, stdout):
42 | CacheServer().insert_one(collection_obj, stdout)
43 |
44 |
45 | def show(self):
46 | __lineRows = False
47 | table = PrettyTable(['Time','1min','5min','15min','%CPU','MemFree(MiB)','Recv(MiB)','Send(MiB)'])
48 | kwargs={'time': {'$gte': int(time.time()-900)}}
49 | for item in CacheServer().find_conditions(self.Dcol, **kwargs):
50 | table.add_row([time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(item['time'])), item['1m'], item['5m'], item['15m'], item['cpu_percent'], item['mem_free'], item['recv'], item['send']])
51 | __lineRows = True
52 | if __lineRows: table.sortby = 'Time'
53 | table.reversesort=False
54 | table.border = 1
55 | print(table)
56 |
57 |
58 | def net_io_counters(self):
59 | net_io = psutil.net_io_counters(pernic=True)
60 | recv = net_io[self.avr['motr_interface']].bytes_recv
61 | send = net_io[self.avr['motr_interface']].bytes_sent
62 | return (float(recv), float(send))
63 |
64 | def cpu_count(self, logical=False):
65 | """
66 | logical: count logical CPUs when True, physical cores when False
67 | """
68 | return psutil.cpu_count(logical)
69 |
70 | def cpu_times_idle(self, interval=1, percpu=False):
71 | idle=[]
72 | for c in psutil.cpu_times_percent(interval, percpu):
73 | idle.append(c.idle)
74 | return idle
75 |
76 | def cpu_percent(self, interval=1, percpu=False):
77 | """
78 | interval: time interval over which CPU usage is computed
79 | percpu: overall usage (False) or per-CPU usage (True)
80 | """
81 | return psutil.cpu_percent(interval, percpu)
82 |
83 | def memory_info(self):
84 | mem = psutil.virtual_memory()
85 | total = "%d" %(mem.total/1024/1024)
86 | free = "%d" %(mem.available/1024/1024)
87 | return (total, free)
88 |
89 | def process(self, pid):
90 | proc_info={}
91 | p = psutil.Process(pid)
92 | proc_info["name"]=p.name() # process name
93 | proc_info["exe"]=p.exe() # path to the binary
94 | proc_info["cwd"]=p.cwd() # absolute path of the working directory
95 | proc_info["status"]=p.status() # process status
96 | proc_info["create_time"]=p.create_time() # process creation time
97 | proc_info["running_time"]='%d Seconds' % int((time.time()-proc_info["create_time"]))
98 | proc_info["uids"]=p.uids() # uid info
99 | proc_info["gids"]=p.gids() # gid info
100 | proc_info["cpu_times"]=p.cpu_times() # CPU time info (user and system)
101 | proc_info["cpu_affinity"]=p.cpu_affinity() # get CPU affinity; pass CPU numbers to set it
102 | proc_info["memory_percent"]=p.memory_percent() # memory utilization
103 | proc_info["memory_info"]=p.memory_info() # rss/vms memory info
104 | proc_info["io_counters"]=p.io_counters() # IO counters (reads/writes and byte counts)
105 | proc_info["connections"]=p.connections() # open connections
106 | proc_info["num_threads"]=p.num_threads() # number of threads
107 | return proc_info
108 |
109 |
110 | def net(self):
111 | net = {}
112 | (recv, send) = self.net_io_counters()
113 | while True:
114 | time.sleep(2)
115 | (new_recv, new_send) = self.net_io_counters()
116 | net['recv'] = "%.3f" %((new_recv - recv)/1024/1024)
117 | net['send'] = "%.3f" %((new_send - send)/1024/1024)
118 | return net
119 |
120 |
121 | def loadavg(self):
122 | loadavg = {}
123 | f = open("/proc/loadavg","r")
124 | con = f.read().split()
125 | f.close()
126 | loadavg['1m'] = con[0]
127 | loadavg['5m'] = con[1]
128 | loadavg['15m'] = con[2]
129 | return loadavg
130 |
131 | def LOOP(self, keeprunning=True, timeout=60):
132 | data ={}
133 | while keeprunning:
134 | data = dict(self.net(), **self.loadavg())
135 | #data["time"]=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
136 | data["time"]=int(time.time())
137 | data["exptime"]=datetime.datetime.utcnow()
138 | data["cpu_percent"]=self.cpu_percent()
139 | data["mem_total"],data["mem_free"] =self.memory_info()
140 | data["cpu_physics"] = self.cpu_count()
141 | data["cpu_logical"] = self.cpu_count(1)
142 |
143 | #Loger().INFO(data)
144 | self.load_cache(self.Dcol, data)
145 | sleep(timeout)
146 |
147 |
148 |
149 |
150 |
151 |
152 |
153 |
154 |
155 |
--------------------------------------------------------------------------------
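`loadavg()` above reads `/proc/loadavg` and keeps the first three whitespace-separated fields. The parsing step, extracted so it runs on any platform against a sample line:

```python
def parse_loadavg(text):
    """Parse a /proc/loadavg line into the dict shape loadavg() returns."""
    con = text.split()
    return {'1m': con[0], '5m': con[1], '15m': con[2]}

# Sample line in the format of /proc/loadavg:
# three load averages, running/total tasks, last pid
sample = "0.01 0.05 0.10 1/234 5678"
print(parse_loadavg(sample))
```

The remaining two fields (task counts and last pid) are simply ignored, matching the original method.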
/pcap/pkts.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 | import sys
15 | sys.path.append("..")
16 | import os
17 | import socket
18 | import dpkt
19 | import binascii
20 | import struct
21 | import uuid
22 | import re
23 | import time
24 | import datetime
25 | import shutil
26 | import pcapy
27 | import threading
28 | from multiprocessing import Queue
29 | from dpkt.compat import compat_ord
30 | from base import async, Loger, save_pid
31 | from base import ScoutBase
32 | from pcap.queue import PQueues
33 |
34 |
35 |
36 |
37 | class Pcapy(ScoutBase):
38 |
39 | def __init__(self):
40 | ScoutBase.__init__(self)
41 | cidr=self.cidr()
42 | self.interface = self.avr['motr_interface'] if self.avr['motr_interface'] else 'any'
43 | self.filters = '{0} and {1} and {2}'.format(cidr['lip'], cidr['port'], cidr['wip'])
44 | self.__max_bytes = self.avr['max_bytes']
45 | self.__promiscuous = self.avr['promiscuous']
46 | self.__buffer_timeout = self.avr['buffer_timeout']
47 | self.PQ = PQueues()
48 | self.PQ.Qset(Queue())
49 |
50 |
51 | def mac_addr(self, address):
52 | """Convert a MAC address to a readable/printable string
53 |
54 | Args:
55 | address (str): a MAC address in hex form (e.g. '\x01\x02\x03\x04\x05\x06')
56 | Returns:
57 | str: Printable/readable MAC address
58 | """
59 | return ':'.join('%02x' % compat_ord(b) for b in address)
60 |
61 |
62 |
63 | def inet_to_str(self, inet):
64 | """Convert inet object to a string
65 |
66 | Args:
67 | inet (inet struct): inet network address
68 | Returns:
69 | str: Printable/readable IP address
70 | """
71 | # First try ipv4 and then ipv6
72 | try:
73 | return socket.inet_ntop(socket.AF_INET, inet)
74 | except ValueError:
75 | return socket.inet_ntop(socket.AF_INET6, inet)
76 |
77 |
78 | '''
79 | # Parse the Ethernet header
80 | A packet has three layers: Ethernet -> TCP/UDP/ICMP/ARP -> RAW
81 | Layer 1 is the link layer: source/destination MAC and the IP protocol number
82 | Layer 2 is the protocol layer: source/destination IP (except ARP and ICMP)
83 | Layer 3 is the payload: port numbers and e.g. the HTTP message
84 | '''
85 | def ether(self, hdr, buf):
86 | try:
87 | # Unpack the Ethernet frame (mac src/dst, ethertype)
88 | eth = dpkt.ethernet.Ethernet(buf)
89 |
90 | # Make sure the Ethernet data contains an IP packet
91 | if not isinstance(eth.data, dpkt.ip.IP):
92 | Loger().CRITICAL('IP Packet type not supported %s' % eth.data.__class__.__name__)
93 | return
94 |
95 | # Treat the IP datagram part of the Ethernet frame as a new object, packet
96 | packet=eth.data
97 | stdout={}
98 | stdout["time"]=int(time.time())
99 | stdout["exptime"]=datetime.datetime.utcnow()
100 | stdout["mac_src"]=self.mac_addr(eth.src)
101 | stdout["mac_dst"]=self.mac_addr(eth.dst)
102 | stdout["eth_type"]=eth.type
103 | stdout["length"]=packet.len
104 | stdout["ttl"]=packet.ttl
105 |
106 | """protocol number
107 | packet.p:
108 | 1 : ICMP Packets
109 | 6 : TCP protocol
110 | 8 : IP protocol
111 | 17 : UDP protocol
112 | """
113 | try:
114 | # Dispatch on the protocol number via getattr, avoiding a long if/else chain
115 | exec_func = getattr(self, str("recv_%s" % int(packet.p)))
116 | exec_func(packet, stdout)
117 | except Exception as e:
118 | Loger().WARNING('Protocol type not supported %s' % str(e))
119 | pass
120 |
121 | except Exception as e:
122 | raise
123 |
124 |
125 | def recv_http(self, ip):
126 | # Make sure the object is at the transport layer
127 | if isinstance(ip.data, dpkt.tcp.TCP):
128 | '''
129 | # Parse out the HTTP request data
130 | It contains a record such as:
131 | Request(body='', uri=u'/okokoko', headers=OrderedDict([(u'host', u'1.1.1.1'),
132 | (u'user-agent', u'curl/7.54.0'), (u'accept', u'*/*')]), version=u'1.1', data='', method=u'GET')
133 | '''
134 | tcp = ip.data
135 | try:
136 | request=dpkt.http.Request(tcp.data)
137 | req_dict={}
138 | req_dict["url"]=request.uri
139 | req_dict["method"]=request.method
140 | req_dict["version"]=request.version
141 | req_dict["headers"]={
142 | "host": request.headers["host"],
143 | "user-agent": request.headers["user-agent"],
144 | }
145 | return req_dict
146 | except (dpkt.dpkt.NeedData, dpkt.dpkt.UnpackError):
147 | pass
148 |
149 |
150 |
151 | def recv_6(self, packet, stdout):
152 | if True:
153 | try:
154 | tcp=packet.data
155 | stdout["proto"]='TCP'
156 |
157 | '''
158 | Five flags that matter for day-to-day analysis:
159 | FIN  close a connection
160 | SYN  open a connection
161 | RST  reset a connection
162 | ACK  acknowledgement
163 | PSH  the segment carries DATA
164 |
165 | ACK may be combined with SYN, FIN, etc.; SYN+ACK is the response during connection setup,
166 | while a lone SYN only requests a connection. TCP's handshakes show up through these ACKs.
167 | SYN and FIN are never both 1, since one opens and the other closes a connection.
168 | RST normally appears only after a FIN, meaning the connection was reset. In general, a FIN
169 | or RST packet means client and server have disconnected, while SYN and SYN+ACK packets
170 | mean a connection has been established.
171 | PSH=1 normally appears only in packets whose DATA part is non-empty, i.e. real TCP payload
172 | is being delivered. Both connection setup and teardown follow a request-response pattern.
173 |
174 | Three-way handshake to establish a connection:
175 | The flag bits are:
176 | SYN (synchronize)
177 | ACK (acknowledgement)
178 | PSH (push)
179 | FIN (finish)
180 | RST (reset)
181 | URG (urgent)
182 | Sequence number
183 | Acknowledge number
184 | Step 1: host A sends syn=1 with a random seq number (e.g. 1234567); B sees SYN=1 and knows A wants to connect.
185 | Step 2: host B confirms with ack number=(A's seq+1), syn=1, ack=1, and its own random seq (e.g. 7654321).
186 | Step 3: host A checks the ack number (its seq+1) and ack=1, then replies with ack number=(B's seq+1), ack=1; once B verifies these, the connection is established and data transfer begins.
187 | '''
187 | if tcp.flags&dpkt.tcp.TH_FIN: flags='fin'
188 | elif tcp.flags&dpkt.tcp.TH_SYN: flags='syn'
189 | elif tcp.flags&dpkt.tcp.TH_RST: flags='rst'
190 | elif tcp.flags&dpkt.tcp.TH_PUSH: flags='psh'
191 | elif tcp.flags&dpkt.tcp.TH_ACK: flags='ack'
192 | elif tcp.flags&dpkt.tcp.TH_URG: flags='urg'
193 | elif tcp.flags&dpkt.tcp.TH_ECE: flags='ece'
194 | elif tcp.flags&dpkt.tcp.TH_CWR: flags='cwr'
195 | else: flags='.'
196 |
197 | stdout["src"]=self.inet_to_str(packet.src)
198 | stdout["dst"]=self.inet_to_str(packet.dst)
199 | stdout["sport"]=tcp.sport
200 | stdout["dport"]=tcp.dport
201 | stdout["flags"]=flags
202 | stdout["seq"]=tcp.seq
203 | stdout["ack"]=tcp.ack
204 | stdout["sum"]=tcp.sum
205 |
206 | except Exception as e:
207 | pass
208 |
209 | # Try to extract an HTTP payload from the packet
210 | stdout["RAW"]=self.recv_http(packet)
211 | # Push onto the process queue
212 | self.PQ.Qpush(stdout)
213 |
214 |
215 | def recv_17(self, packet, stdout):
216 | if True:
217 | try:
218 | udp=packet.data
219 | stdout["proto"]='UDP'
220 | stdout["src"]=self.inet_to_str(packet.src)
221 | stdout["dst"]=self.inet_to_str(packet.dst)
222 | stdout["sport"]=udp.sport
223 | stdout["dport"]=udp.dport
224 | stdout["sum"]=udp.sum
225 | stdout["RAW"]={}
226 |
227 | if udp.dport==53:
228 | dns = dpkt.dns.DNS(udp.data)
229 | try:
230 | r_ip = socket.inet_ntoa(dns.an[1].ip) # IP from the DNS answer
231 | except:
232 | r_ip = "" # a DNS query carries no answer IP
233 | stdout["RAW"]={'id':dns.id, 'qd':dns.qd[0].name, 'ip':r_ip}
234 |
235 | except Exception as e:
236 | pass
237 |
238 | # Push onto the process queue
239 | self.PQ.Qpush(stdout)
240 |
241 |
242 | '''
243 | def recv_1(self, packet):
244 | if True:
245 | try:
246 | icmp=packet.data
247 | if isinstance(icmp, dpkt.icmp.ICMP):
248 | stdout["src"]=self.inet_to_str(packet.src)
249 | stdout["dst"]=self.inet_to_str(packet.dst)
250 | stdout["type"]=icmp.type
251 | stdout["code"]=icmp.code
252 | stdout["sum"]=icmp.sum
253 | stdout["RAW"]=icmp.data
254 | except Exception as e:
255 | print e
256 | raise
257 | print stdout
258 | '''
259 |
260 | def data_link_str(self, link_type):
261 | if link_type==1:
262 | return 'Ethernet (10Mb, 100Mb, 1000Mb, and up); the 10MB in the DLT_ name is historical.'
263 | elif link_type==6:
264 | return 'IEEE 802.5 Token Ring; the IEEE802, without _5, in the DLT_ name is historical.'
265 | elif link_type==105:
266 | return 'IEEE 802.11 wireless LAN.'
267 | else:
268 | return 'unknown link type.'
269 |
270 | '''
271 | max_bytes
272 | Maximum number of bytes captured per packet, at most 65535
273 |
274 | promiscuous
275 | Promiscuous mode captures every packet passing the NIC, including ones not addressed to this host (no MAC check)
276 | In normal mode the NIC only passes packets addressed to this host (including broadcasts) up the stack and drops the rest
277 |
278 | read_timeout
279 | Capture interval, in milliseconds
280 | After this time the packet-reading function returns immediately
281 | 0 means wait until a packet arrives; errbuf holds the error message
282 |
283 | packet_limit
284 | Number of captures; <= 0 means unlimited
285 | '''
286 | def dump(self):
287 | pcapy.findalldevs()
288 | p = pcapy.open_live(self.interface, self.__max_bytes, self.__promiscuous, self.__buffer_timeout)
289 | p.setfilter(self.filters)
290 | packet_limit= -1
291 | print("Listen: %s, net=%s, mask=%s, linktype=[%d, %s] \n\n" % (self.interface, p.getnet(), p.getmask(), p.datalink(), self.data_link_str(p.datalink())))
292 | p.loop(packet_limit, self.ether)
293 |
294 | def save(self):
295 | self.PQ.Qsave()
296 |
297 | def LOOP(self):
298 | dh = self.PQ.createProcess(self.dump)
299 | sh = self.PQ.createProcess(self.save)
300 | save_pid([dh.pid, sh.pid])
301 |
--------------------------------------------------------------------------------
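The long comment in `recv_6` explains the TCP flags, and the if/elif chain then reports only the first matching flag. A dpkt-free sketch of that same decoding (the `TH_*` bit masks are duplicated here so the snippet runs standalone; the values match `dpkt.tcp`):

```python
# Single-bit TCP flag masks, same values as dpkt.tcp's TH_* constants
TH_FIN, TH_SYN, TH_RST, TH_PUSH, TH_ACK, TH_URG, TH_ECE, TH_CWR = (
    0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80)

def first_flag(flags):
    """Return the first matching flag name, like the if/elif chain in recv_6."""
    for mask, name in ((TH_FIN, 'fin'), (TH_SYN, 'syn'), (TH_RST, 'rst'),
                       (TH_PUSH, 'psh'), (TH_ACK, 'ack'), (TH_URG, 'urg'),
                       (TH_ECE, 'ece'), (TH_CWR, 'cwr')):
        if flags & mask:
            return name
    return '.'

print(first_flag(TH_SYN | TH_ACK))  # a SYN+ACK reports 'syn', as in recv_6
```

Because the chain stops at the first match, a SYN+ACK segment is recorded as `'syn'`; this is what lets `syn.yaml` count handshake traffic from the `flags` field alone.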
/pcap/queue.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 | import sys
16 | sys.path.append("..")
17 | import os
18 | import time
19 | import psutil
21 | from threading import Thread
22 | from multiprocessing import Process
23 | from multiprocessing import Queue
24 | from collections import deque
25 | from cache.cache import CacheServer
26 | from base import Loger
27 | from base import ScoutBase
28 |
29 |
30 | class PQueues(ScoutBase):
31 |
32 | def __init__(self):
33 | ScoutBase.__init__(self)
34 | self.TCP_DQ = deque(maxlen=500)
35 | self.UDP_DQ = deque(maxlen=500)
36 |
37 | """Instant a CacheServer
38 | exptime:
39 | expireAfterSeconds: Used to create an expiring (TTL) collection.
40 | MongoDB will automatically delete documents from this collection after seconds.
41 | The indexed field must be a UTC datetime or the data will not expire.
42 | """
43 | __Cache=CacheServer().create_or_connect_cache()
44 | self.TCP=__Cache["TCP"]
45 | self.UDP=__Cache["UDP"]
46 | CacheServer().create_index(self.TCP, "exptime", self.avr['expire_after_seconds'])
47 | CacheServer().create_index(self.UDP, "exptime", self.avr['expire_after_seconds'])
48 |
49 | def Qset(self, q=None):
50 | self.q = q
51 |
52 | def saveCache(self, bolt, stdout):
53 | try:
54 | obj = getattr(self, str(bolt))
55 | if type(stdout)==list:
56 | CacheServer().insert_many(obj, stdout)
57 | else:
58 | CacheServer().insert_one(obj, stdout)
59 | except Exception as e:
60 | Loger().ERROR("no collection named %s, Error: %s" % (str(bolt), str(e)))
61 | pass
62 |
63 |
64 | def Qpush(self, value):
65 | self.q.put(value)
66 |
67 | def Qdeque(self, dq, collection, value):
68 | if len(dq) == dq.maxlen:
69 | self.saveCache(collection, list(dq))
70 | dq.clear()
71 | time.sleep(0.1)
72 | dq.append(value)
73 |
74 | def Qsave(self):
75 | while 1:
76 | DQ=self.q.get()
77 | if DQ:
78 | _dq_handle = getattr(self, "%s_DQ" % str(DQ["proto"]))
79 | self.Qdeque(_dq_handle, DQ["proto"], DQ)
80 | else:
81 | time.sleep(1)
82 |
83 |
84 | def createThread(self, func, *args):
85 | t = Thread(target=func, args=args)
86 | t.start()
87 | return t
88 |
89 | def createProcess(self, func, *args):
90 | p = Process(target=func, args=args)
91 | p.start()
92 | return p
93 |
94 |
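The Qdeque pattern above (append until the bounded deque is full, then flush the whole batch to the cache) can be sketched independently of CacheServer. A minimal, self-contained illustration, with a plain list standing in for the cache writes:

```python
from collections import deque

def flush_batches(items, maxlen=3):
    """Mirror PQueues.Qdeque: buffer items in a bounded deque and
    flush the whole batch whenever the deque fills up. Batches are
    collected into a list here instead of written to the cache."""
    dq = deque(maxlen=maxlen)
    batches = []
    for value in items:
        if len(dq) == dq.maxlen:
            batches.append(list(dq))  # deque is full: flush one batch
            dq.clear()
        dq.append(value)
    if dq:
        batches.append(list(dq))      # flush whatever remains
    return batches

print(flush_batches(range(7)))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Batching amortizes the per-write overhead (insert_many vs. insert_one in saveCache) at the cost of holding up to maxlen records in memory.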
--------------------------------------------------------------------------------
/plugin/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/plugin/images.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2019-11-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 |
16 | import sys
17 | sys.path.append("..")
18 | import os
19 | import time
20 | import psutil
21 | from calendar import timegm
22 | from datetime import datetime
23 | from base import ScoutBase
24 | from cache.cache import CacheServer
25 |
26 | _LISTEN_IP = ScoutBase().avr['listen_ip'].split(',')
27 |
28 |
29 | def convert_to_time_ms(timestamp):
30 | return 1000 * timegm(datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%fZ').timetuple())
31 |
32 | def convert_time_ms_agg(timestamp):
33 | return timegm(datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%fZ').timetuple())
34 |
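Both helpers parse Grafana's ISO-8601 UTC range timestamps (`req['range']['from']`/`['to']`). A self-contained check of the millisecond variant; note that timetuple() silently drops the fractional seconds matched by %f:

```python
from calendar import timegm
from datetime import datetime

def convert_to_time_ms(timestamp):
    # Same shape as above: UTC struct_time -> epoch seconds -> milliseconds.
    return 1000 * timegm(datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%fZ').timetuple())

print(convert_to_time_ms('2020-03-13T00:00:00.000Z'))  # 1584057600000
print(convert_to_time_ms('2020-03-13T00:00:00.500Z'))  # also 1584057600000: the .500 is dropped
```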
35 |
36 | class GetSeries(object):
37 | def __init__(self):
38 | self.Cache = CacheServer().create_or_connect_cache()
39 | self.TCPcol=self.Cache["TCP"]
40 | self.UDPcol=self.Cache["UDP"]
41 | self.Dcol = self.Cache["DSTAT"]
42 | self.Bcol = self.Cache["BLOCK"]
43 |
44 | def data_series_uptime(self,req, pid):
45 | """
46 | Uptime of the main process.
47 | """
48 | p = psutil.Process(pid)
49 | times = int(time.time()-p.create_time())
50 | return [{
51 | "target": "uptime",
52 | "datapoints": [
53 | [times, convert_to_time_ms(req['range']['from'])],
54 | [times, convert_to_time_ms(req['range']['to'])]
55 | ]
56 | }]
57 |
58 | def data_series_cpu_percent(self,req):
59 | """
60 | CPU usage percentage.
61 | """
62 | kw = {"time": {"$gte": convert_time_ms_agg(req['range']['from']), "$lte": convert_time_ms_agg(req['range']['to'])} }
63 | get_series = CacheServer().find_conditions(self.Dcol, limit=500, **kw)
64 | _cpu = []
65 | for res in get_series:
66 | if not res is None: _cpu.append([float(res['cpu_percent']), int(res["time"]*1000)])
67 | return [{ "target": "cpu_percent", "datapoints": _cpu}]
68 |
69 | def data_series_load_average(self,req):
70 | """
71 | Current load average.
72 | """
73 | kw = {"time": {"$gte": convert_time_ms_agg(req['range']['from']), "$lte": convert_time_ms_agg(req['range']['to'])} }
74 | get_series = CacheServer().find_conditions(self.Dcol, limit=500, **kw)
75 | _1m=[]
76 | _5m=[]
77 | _15m=[]
78 | for res in get_series:
79 | if not res is None:
80 | _1m.append([res['1m'], int(res["time"]*1000)])
81 | _5m.append([res['5m'], int(res["time"]*1000)])
82 | _15m.append([res['15m'], int(res["time"]*1000)])
83 | return [{ "target": '1m', "datapoints": _1m},
84 | { "target": '5m', "datapoints": _5m},
85 | { "target": '15m', "datapoints": _15m}]
86 |
87 | def data_series_mem_free(self,req):
88 | """
89 | Free memory.
90 | """
91 | kw = {"time": {"$gte": convert_time_ms_agg(req['range']['from']), "$lte": convert_time_ms_agg(req['range']['to'])} }
92 | get_series = CacheServer().find_conditions(self.Dcol, limit=500, **kw)
93 | _mem_free = []
94 | for res in get_series:
95 | if not res is None: _mem_free.append([res['mem_free'], int(res["time"]*1000)])
96 | return [{"target": "mem_free", "datapoints": _mem_free}]
97 |
98 | def data_series_netflow(self, req):
99 | """
100 | Network traffic.
101 | """
102 | kw = {"time": {"$gte": convert_time_ms_agg(req['range']['from']), "$lte": convert_time_ms_agg(req['range']['to'])} }
103 | get_series = CacheServer().find_conditions(self.Dcol, limit=500, **kw)
104 | _recv_list=[]
105 | _send_list=[]
106 | for res in get_series:
107 | if not res is None:
108 | _recv_list.append([float(res['recv']), int(res["time"]*1000)])
109 | _send_list.append([float(res['send']), int(res["time"]*1000)])
110 | return [{ "target": 'recv', "datapoints": _recv_list},
111 | { "target": 'send', "datapoints": _send_list}]
112 |
113 | def data_series_exception_packet(self,req):
114 | """
115 | Abnormal (blocked) packets.
116 | """
117 | get_series = CacheServer().find_aggregate(self.Bcol, [{"$group": {"_id": "$time", "total": { "$sum": "$total" }}}])
118 | _packet = []
119 | for res in get_series:
120 | if not res is None:
121 | _packet.append([res['total'], int(res["_id"]*1000)])
122 | else:
123 | _packet.append([0, int(time.time()*1000)])
124 | return [{ "target": 'excep_packet', "datapoints": _packet}]
125 |
126 | def data_table_bolt_tcp(self, req):
127 | """
128 | TCP data records.
129 | """
130 | kw = { "src": {"$nin": _LISTEN_IP}, "time": {"$gte": convert_time_ms_agg(req['range']['from']), "$lte": convert_time_ms_agg(req['range']['to'])} }
131 | get_series = CacheServer().find_conditions(self.TCPcol, limit=500, **kw)
132 | _list = []
133 | for res in get_series:
134 | if res is None: continue
135 | _list.append([res["proto"], res["src"], res["sport"], res["dst"], res["dport"], res["ttl"], res["flags"], (res["time"]*1000)])
136 | return [{
137 | "columns":[
138 | {"text":"proto", "type":"string"},
139 | {"text":"src", "type":"string"},
140 | {"text":"sport", "type":"string"},
141 | {"text":"dst", "type":"string"},
142 | {"text":"dport", "type":"string"},
143 | {"text":"ttl", "type":"number"},
144 | {"text":"flags", "type":"string"},
145 | {"text":"time", "type":"time"}],
146 | "rows": _list ,
147 | "type":"table"
148 | }]
149 |
150 | def data_table_bolt_udp(self, req):
151 | """
152 | UDP data records.
153 | """
154 | kw = {"src": {"$nin": _LISTEN_IP}, "time": {"$gte": convert_time_ms_agg(req['range']['from']), "$lte": convert_time_ms_agg(req['range']['to'])} }
155 | get_series = CacheServer().find_conditions(self.UDPcol, limit=500, **kw)
156 | _list = []
157 | for res in get_series:
158 | if res is None: continue
159 | _list.append([res["proto"], res["src"], res["sport"], res["dst"], res["dport"], res["ttl"], (res["time"]*1000)])
160 | return [{
161 | "columns":[
162 | {"text":"proto", "type":"string"},
163 | {"text":"src", "type":"string"},
164 | {"text":"sport", "type":"string"},
165 | {"text":"dst", "type":"string"},
166 | {"text":"dport", "type":"string"},
167 | {"text":"ttl", "type":"number"},
168 | {"text":"time", "type":"time"}],
169 | "rows": _list ,
170 | "type":"table"
171 | }]
172 |
173 | def data_table_active_table(self, req):
174 | """
175 | Block-action execution list.
176 | """
177 | kw = {"time": {"$gte": convert_time_ms_agg(req['range']['from']), "$lte": convert_time_ms_agg(req['range']['to'])} }
178 | get_series = CacheServer().find_conditions(self.Bcol, limit=500, **kw)
179 | _list = []
180 | for res in get_series:
181 | if not res is None: _list.append([res["_id"], (res["time"]*1000), res["confname"], res["total"], res["command"]])
182 | return [{
183 | "columns":[
184 | {"text":"_ID", "type":"string"},
185 | {"text":"Time", "type":"time"},
186 | {"text":"ConfName", "type":"string"},
187 | {"text":"Total", "type":"number"},
188 | {"text":"Command", "type":"string"}],
189 | "rows": _list ,
190 | "type":"table"
191 | }]
192 |
193 |
--------------------------------------------------------------------------------
/plugin/jsonserver.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 | import sys, os
15 | sys.path.append("..")
16 | import _strptime
17 | import logging
18 | from calendar import timegm
19 | from datetime import datetime
20 | from images import CacheServer, GetSeries
21 | from flask import Flask, request, jsonify
22 | from base import ScoutBase
23 | from base import CONF_DIR, LOGS_DIR
24 |
25 | log = logging.getLogger('werkzeug')
26 | log.setLevel(logging.WARNING)
27 | app = Flask("Scout plugin for grafana server")
28 |
29 |
30 | # keys returned by the '/search' endpoint
31 | __SCOUT_KEYS= ['cpu_percent', 'loadavg', 'mem_free', 'netflow', 'uptime', 'bolt_tcp', 'bolt_udp', 'active_table', 'excep_packet']
32 |
33 |
34 | def get_main_pid():
35 | try:
36 | f = open(LOGS_DIR + "scoutd.pid", 'r')
37 | PID = f.read()
38 | f.close()
39 | return PID
40 | except IOError as e:
41 | return 0
42 |
43 |
44 | def convert_to_time_ms(timestamp):
45 | return 1000 * timegm(datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%fZ').timetuple())
46 |
47 |
48 | @app.route('/')
49 | def health_check():
50 | return 'This datasource is healthy.'
51 |
52 |
53 | @app.route('/search', methods=['POST'])
54 | def search():
55 | return jsonify(__SCOUT_KEYS)
56 |
57 |
58 | @app.route('/query', methods=['POST'])
59 | def query():
60 | req = request.get_json()
61 | for item in req['targets']:
62 | if item['target'] == 'uptime':
63 | data = GetSeries().data_series_uptime(req, int(get_main_pid()))
64 | elif item['target'] == 'netflow':
65 | data = GetSeries().data_series_netflow(req)
66 | elif item['target'] == 'cpu_percent':
67 | data = GetSeries().data_series_cpu_percent(req)
68 | elif item['target']== 'mem_free':
69 | data = GetSeries().data_series_mem_free(req)
70 | elif item['target'] == 'loadavg':
71 | data = GetSeries().data_series_load_average(req)
72 | elif item['target'] == 'excep_packet':
73 | data = GetSeries().data_series_exception_packet(req)
74 | elif item['target'] in ['active_table']:
75 | data = GetSeries().data_table_active_table(req)
76 | elif item['target'] in ['bolt_tcp']:
77 | data = GetSeries().data_table_bolt_tcp(req)
78 | elif item['target'] in ['bolt_udp']:
79 | data = GetSeries().data_table_bolt_udp(req)
80 | else:
81 | data = []
82 | return jsonify(data)
83 |
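The if/elif chain in query() maps each entry of __SCOUT_KEYS to a GetSeries method; the same routing can also be expressed as a dispatch table, which keeps /search and /query in sync from one place. A hypothetical sketch, with stub handlers standing in for the GetSeries methods:

```python
def dispatch(target, handlers, req):
    """Route a Grafana target name to its handler; unknown targets fall
    back to an empty series, like the final else branch in query()."""
    handler = handlers.get(target)
    return handler(req) if handler else []

# Stub handlers standing in for the GetSeries methods (hypothetical data).
handlers = {
    'cpu_percent': lambda req: [{'target': 'cpu_percent', 'datapoints': []}],
    'mem_free':    lambda req: [{'target': 'mem_free', 'datapoints': []}],
}

print(dispatch('cpu_percent', handlers, {}))  # [{'target': 'cpu_percent', 'datapoints': []}]
print(dispatch('unknown', handlers, {}))      # []
```

With this layout, `jsonify(list(handlers))` could serve /search directly, so adding a metric is a one-line change.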
84 |
85 | @app.route('/annotations', methods=['POST'])
86 | def annotations():
87 | req = request.get_json()
88 | data = [
89 | {
90 | "annotation": 'This is the annotation',
91 | "time": (convert_to_time_ms(req['range']['from']) +
92 | convert_to_time_ms(req['range']['to'])) / 2,
93 | "title": 'Deployment notes',
94 | "tags": ['tag1', 'tag2'],
95 | "text": 'Hm, something went wrong...'
96 | }
97 | ]
98 | return jsonify(data)
99 |
100 |
101 | @app.route('/tag-keys', methods=['POST'])
102 | def tag_keys():
103 | data = [
104 | {"type": "string", "text": "TCP"},
105 | {"type": "string", "text": "DSTAT"}
106 | ]
107 | return jsonify(data)
108 |
109 |
110 | @app.route('/tag-values', methods=['POST'])
111 | def tag_values():
112 | req = request.get_json()
113 | if req['key'] == 'TCP':
114 | return jsonify([
115 | {'text': 'syn'},
116 | {'text': 'ack'},
117 | {'text': 'fin'}
118 | ])
119 | elif req['key'] == 'DSTAT':
120 | return jsonify([
121 | {'text': '1m'},
122 | {'text': '5m'},
123 | {'text': '15m'},
124 | {'text': 'mem_free'},
125 | {'text': 'cpu_percent'},
126 | {'text': 'recv'},
127 | {'text': 'send'}
128 | ])
129 | return jsonify([])  # unknown keys get an empty list instead of a None response
130 |
131 |
132 | def app_run():
133 | __HTTP_HOST = ScoutBase().avr['http_host']
134 | __HTTP_PORT = ScoutBase().avr['http_port']
135 | app.run(host=__HTTP_HOST, port=__HTTP_PORT, debug=False)
136 |
--------------------------------------------------------------------------------
/rule.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 | import sys
16 | sys.path.append(".")
17 | import os
18 | import re
19 | import string
20 | import datetime
21 | import time
22 | import subprocess
23 | from threading import Thread
24 | from prettytable import PrettyTable
25 | from time import sleep
26 | from notice import PyEmail
28 | from cache.cache import CacheServer
29 | from base import ScoutBase, Loger, Notes, Rules, async
30 | from base import CONF_DIR, LOGS_DIR, CACHE_DIR
31 |
32 |
33 | class Rule(ScoutBase):
34 |
35 | def __init__(self):
36 | ScoutBase.__init__(self)
37 | self.filepath = self.avr['file_path'] if self.avr['file_path'] else CONF_DIR+"/rules"
38 | self.filetype = self.avr['file_type'] if self.avr['file_type']=='yaml' else 'json'
39 | self.S = {}
40 |
41 | """Instant a CacheServer
42 | """
43 | self.Cache = CacheServer().create_or_connect_cache()
44 | self.Bcol = self.Cache["BLOCK"]
45 | CacheServer().create_index(self.Bcol, "_id")
46 |
47 |
48 | def cache_connect(self, bolt):
49 | return self.Cache[CacheServer().get_collection(self.Cache, bolt)]
50 |
51 |
52 | def load_cache(self, collection, condition, stdout):
53 | return CacheServer().replace_id(collection, condition, stdout)
54 |
55 |
56 | """
57 | Store the key ==> value pairs parsed from the rule config files,
58 | to avoid the I/O of reopening the config files on every pass of the loop.
59 | Only a class-level dict is used, so config changes are not picked up live; restart the process to reload.
60 | """
61 | def rule_key_value(self, basename):
62 | if basename:
63 | self.S[basename] = Rules(basename).echo()
64 |
65 |
66 | """
67 | Block action.
68 | confname:
69 | name of the config file; data records log which config triggered them
70 | parse:
71 | dict parsed from the config file
72 | data:
73 | result set returned by the filter query
74 | """
75 | def rule_block(self, confname, parse={}, data={}):
76 | # Log the record that matched the filter conditions
77 | Loger().WARNING("[%s] %s" % (confname, data))
78 |
79 | # Parse the block section; none of these keys may be missing from the config file
80 | try:
81 | action = parse['block']['action']
82 | expire = parse['block']['expire']
83 | command = parse['block']['blkcmd']
84 | iptables = parse['block']['iptables']
85 | except Exception as e:
86 | Loger().ERROR("block rule format error.")
87 | raise
88 |
89 | block_data={}
90 | block_data['_id'] = data['_id']
91 | block_data['total'] = data['total']
92 | block_data["time"]=int(time.time())
93 | block_data["exptime"]=int(time.time()) + expire
94 | block_data["confname"]=str(confname)
95 | block_data["command"]=''
96 |
97 | if action:
98 | if iptables:
99 | block_data["command"] = ("/sbin/iptables -I INPUT -s %s -j DROP" % data['_id'])
100 | else:
101 | if command.find(' %s')>0:
102 | data['block']=1
103 | block_data["command"] = (command % data)
104 |
105 | state=self.load_cache(self.Bcol, {'_id': data['_id']}, block_data)
106 | if state:
107 | subprocess.call(block_data["command"], shell=True)
108 | Loger().WARNING(Notes['LOCK'] % (data['_id'], data['total']))
109 | # Send an email notification
110 | self.rule_notice(confname, parse, data)
111 | else:
112 | Loger().WARNING(Notes['RECORD'] % (data['_id'], data['total']))
113 |
114 |
115 | """
116 | Unblock action.
117 | confname:
118 | name of the config file; data records log which config triggered them
119 | parse:
120 | dict parsed from the config file
121 | """
122 | @async
123 | def rule_unblock(self, confname, parse={}):
124 | # Parse the block section; none of these keys may be missing from the config file
125 | try:
126 | action = parse['block']['action']
127 | expire = parse['block']['expire']
128 | command = parse['block']['ubkcmd']
129 | iptables = parse['block']['iptables']
130 | except Exception as e:
131 | Loger().ERROR("block rule format error.")
132 | raise
133 |
134 | if action:
135 | # Unblock expired records
136 | call_cmd=''
137 | kwargs={'exptime': {'$lt': int(time.time())}, 'confname': confname}
138 | for item in CacheServer().find_conditions(self.Bcol, **kwargs):
139 | if iptables:
140 | call_cmd=("/sbin/iptables -D INPUT -s %s -j DROP" % item['_id'])
141 | else:
142 | if command.find(' %s')>0:
143 | temp={}
144 | temp['_id']=item['_id']
145 | temp['total']=item['total']
146 | temp['unblock']=1
147 | call_cmd=(command % temp)
148 |
149 | subprocess.call(call_cmd, shell=True)
150 | Loger().WARNING(Notes['UNLOCK'] % item['_id'])
151 |
152 | CacheServer().delete_many(self.Bcol, kwargs)
153 |
154 |
155 | def rule_notice(self, confname, parse={}, data={}):
156 | try:
157 | send = parse['notice']['send']
158 | email = parse['notice']['email']
159 | except Exception as e:
160 | Loger().ERROR("notice rule format error.")
161 | raise
162 |
163 | subject = "Scout email server"
164 | if send:
165 | for receiver in email:
166 | PyEmail().sendto(subject, Notes['MAIL'] %(data['_id'], confname, data['total']), receiver)
167 |
168 |
169 | """
170 | Main parsing function.
171 | Parses the filter section of a config and runs the query.
172 |
173 | return:
174 | result set, e.g. {u'total': 101, u'_id': u'1.1.1.1'}
175 | basename:
176 | name of the config file
177 | parse:
178 | previously parsed dict (cached data)
179 |
180 | """
181 | def rule_filter(self, parse={}):
182 |
183 | if parse['bolt'] in ["TCP", "UDP"]:
184 | col = self.cache_connect(parse['bolt'])
185 | else:
186 | Loger().ERROR("Bolt value must be 'TCP' or 'UDP'!")
187 | raise ValueError("bolt must be 'TCP' or 'UDP'")
188 |
189 | # Parse the filter section; none of these keys may be missing from the config file
190 | try:
191 | timeDelta = parse['filter']['timeDelta'] # time window, in seconds
192 | trustIps = parse['filter']['trustIps'] # whitelist of src IPs to exclude
193 | motrPort = parse['filter']['motrPort'] # ports to filter on
194 | motrProto = parse['filter']['motrProto'] # protocols to filter on
195 | flags = parse['filter']['flags'] # connection flags
196 | noOfConnections = parse['filter']['noOfConnections'] # threshold
197 | noOfCondition = parse['filter']['noOfCondition'] # threshold condition, e.g. $ge\$gt\$gte\$lt\$lte
198 | returnFiled = parse['filter']['returnFiled'] # field name returned by the filter; must exist in the bolt collection
199 | except Exception as e:
200 | Loger().ERROR("filter rule format error.")
201 | raise
202 |
203 | # Build the aggregation query
204 | aggs=[]
205 | lte_time = int(time.time())
206 | gte_time = (lte_time - timeDelta)
207 | if timeDelta: aggs.append({'$match': {'time' : {'$gte' : gte_time, '$lte' : lte_time}}})
208 | if flags: aggs.append({'$match': {'flags': {'$in': flags}}})
209 | if motrPort: aggs.append({'$match': {'dport': {'$in': motrPort}}})
210 | if trustIps: aggs.append({'$match': {'src': {'$nin': trustIps}}})
211 | aggs.append({'$group': {'_id': '$%s' %returnFiled, 'total': {'$sum': 1}}})
212 | aggs.append({'$match': {'total': {noOfCondition: noOfConnections}}})
213 |
214 | #Loger().DEBUG(aggs)
215 | return CacheServer().find_aggregate(col, aggs)
216 |
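rule_filter() assembles a standard MongoDB aggregation pipeline: a chain of $match stages, a $group count keyed on returnFiled, and a final threshold $match. A self-contained sketch of the same construction with illustrative filter values (no live MongoDB needed to build the stage list; note that motrProto is read but, as in the original, never used in the pipeline):

```python
import time

def build_pipeline(timeDelta, trustIps, motrPort, motrProto, flags,
                   noOfConnections, noOfCondition, returnFiled):
    """Assemble the aggregation stages the way rule_filter() does.
    motrProto is accepted but unused, matching the original code."""
    aggs = []
    lte_time = int(time.time())
    gte_time = lte_time - timeDelta
    if timeDelta:
        aggs.append({'$match': {'time': {'$gte': gte_time, '$lte': lte_time}}})
    if flags:
        aggs.append({'$match': {'flags': {'$in': flags}}})
    if motrPort:
        aggs.append({'$match': {'dport': {'$in': motrPort}}})
    if trustIps:
        aggs.append({'$match': {'src': {'$nin': trustIps}}})
    # Count rows per returnFiled value, then keep only groups past the threshold.
    aggs.append({'$group': {'_id': '$%s' % returnFiled, 'total': {'$sum': 1}}})
    aggs.append({'$match': {'total': {noOfCondition: noOfConnections}}})
    return aggs

# Illustrative values only; the real ones come from the yaml rule files.
pipeline = build_pipeline(60, ['127.0.0.1'], [80, 443], ['TCP'], ['S'], 100, '$gte', 'src')
print(len(pipeline))  # 6 stages
```

Each group in the result then looks like `{u'_id': u'1.1.1.1', u'total': 101}`, which is exactly the shape rule_block() consumes.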
217 | """
218 | View the block records.
219 | """
220 | def view(self):
221 | __lineRows = False
222 | table = PrettyTable(['_ID','ConfName','Total','Command','Time'])
223 | kwargs={'exptime': {'$gte': int(time.time())}}
224 | for item in CacheServer().find_conditions(self.Bcol, **kwargs):
225 | table.add_row([item['_id'], item['confname'], item['total'], item['command'], time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(item['time']))])
226 | __lineRows = True
227 |
228 | if __lineRows: table.sortby = 'Time'
229 | table.reversesort=False
230 | table.border = 1
231 | print(table)
232 |
233 |
234 | def LOOP(self, keeprunning=True, timeout=1):
235 | while keeprunning:
236 | """If the cache already holds key ==> value pairs,
237 | use the cached data; otherwise reload the config files.
238 | """
239 | if self.S:
240 | for k in self.S.keys():
241 | # Run unblock
242 | self.rule_unblock(k, self.S[k])
243 | # Run block
244 | for res in self.rule_filter(self.S[k]):
245 | self.rule_block(k, self.S[k], res)
246 | Loger().WARNING("[%s.%s] %s" % (k, self.filetype, res))
247 | else:
248 | ptn = re.compile('.*\.%s' % self.filetype)
249 | for f in os.listdir(self.filepath):
250 | ff = ptn.match(f)
251 | if not ff is None:
252 | tp = ff.group().split('.')[0]
253 | self.rule_key_value(tp)
254 |
255 | sleep(timeout)
256 |
257 |
258 |
259 |
--------------------------------------------------------------------------------
/scoutd.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 | import sys
16 | sys.path.append(".")
17 | import os
18 | import time
19 | import atexit
20 | import datetime
21 | import commands
22 | import shutil
23 | import platform
24 | import signal
25 | from signal import SIGTERM
26 | from base import ScoutBase, Loger, del_pid
27 | from base import CONF_DIR, LOGS_DIR, CACHE_DIR, PROC_DIR, PLUGIN_DIR
28 | from base import cacheserver_running_alive, scoutd_running_alive
29 | from util import Scout
30 | from pcap.dstat import Dstat
31 | from cache.cacheserver import CacheServerd, initCache
32 |
33 |
34 | class Daemon:
35 | """
36 | daemon class.
37 | Usage: subclass the Daemon class and override the _run() method
38 | """
39 | def __init__(self, pidfile='/tmp/daemon.pid', stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
40 | self.stdin = stdin
41 | self.stdout = stdout
42 | self.stderr = stderr
43 | self.pidfile = LOGS_DIR + pidfile
44 | self.chdir = PROC_DIR
45 |
46 | def _daemonize(self):
47 | # Detach from the parent process
48 | try:
49 | pid = os.fork()
50 | if pid > 0:
51 | sys.exit(0)
52 | except OSError as e:
53 | Loger().ERROR("Scoutd fork #1 failed:"+str(e.strerror))
54 | sys.exit(1)
55 | os.setsid()
56 | os.chdir(self.chdir)
57 | os.umask(0)
58 |
59 | # Second fork: prevent the process from acquiring a controlling terminal again
60 | try:
61 | pid = os.fork()
62 | if pid > 0:
63 | sys.exit(0)
64 | except OSError as e:
65 | Loger().ERROR("Scoutd fork #2 failed:"+str(e.strerror))
66 | sys.exit(1)
67 |
68 | sys.stdout.flush()
69 | sys.stderr.flush()
70 | si = file(self.stdin, 'r')
71 | so = file(self.stdout, 'a+')
72 | se = file(self.stderr, 'a+', 0)
73 | os.dup2(si.fileno(), sys.stdin.fileno())
74 | os.dup2(so.fileno(), sys.stdout.fileno())
75 | os.dup2(se.fileno(), sys.stderr.fileno())
76 | atexit.register(self.delpid)
77 | pid = str(os.getpid())
78 | file(self.pidfile,'w+').write("%s\n" % pid)
79 |
80 | def delpid(self):
81 | os.remove(self.pidfile)
82 |
83 | def start(self):
84 | """
85 | Start the daemon
86 | """
87 | try:
88 | pf = file(self.pidfile,'r')
89 | pid = int(pf.read().strip())
90 | pf.close()
91 | except IOError as e:
92 | pid = None
93 |
94 | if pid:
95 | message = "Start error: pidfile %s already exists. Scoutd already running?"
96 | Loger().ERROR(message % self.pidfile)
97 | sys.exit(1)
98 |
99 | self._daemonize()
100 | self._run()
101 |
102 | def stop(self):
103 | """
104 | Stop the daemon
105 | """
106 | self._stop_first()
107 |
108 | try:
109 | pf = file(self.pidfile,'r')
110 | pid = int(pf.read().strip())
111 | pf.close()
112 | except IOError as e:
113 | pid = None
114 |
115 | if not pid:
116 | message = "pidfile %s does not exist. Scoutd not running?"
117 | Loger().ERROR(message % self.pidfile)
118 | return
119 |
120 | try:
121 | while 1:
122 | os.kill(pid, SIGTERM)
123 | time.sleep(0.1)
124 | except OSError as err:
125 | err = str(err)
126 | if err.find("No such process") > 0:
127 | if os.path.exists(self.pidfile):
128 | os.remove(self.pidfile)
129 | Loger().WARNING("Scoutd stopped successfully.")
130 | else:
131 | Loger().ERROR("stop error: "+str(err))
132 | sys.exit(1)
133 |
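start() and stop() coordinate through the pidfile; the read-check-write cycle can be sketched on its own (tempfile paths are used here so the sketch stays clear of LOGS_DIR):

```python
import os
import tempfile

def write_pidfile(path, pid):
    # Record the daemon's pid, one line, trailing newline as in _daemonize().
    with open(path, 'w') as f:
        f.write('%s\n' % pid)

def read_pidfile(path):
    """Return the recorded pid, or None when no daemon is registered."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (IOError, ValueError):
        return None

pidfile = os.path.join(tempfile.mkdtemp(), 'scoutd.pid')
print(read_pidfile(pidfile))      # None: a start() would proceed
write_pidfile(pidfile, 4242)
print(read_pidfile(pidfile))      # 4242: a second start() would refuse to run
```

stop() then kills that pid in a loop until os.kill raises "No such process", and finally removes the file so the next start() succeeds.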
134 |
135 | def restart(self):
136 | self.stop()
137 | self.start()
138 |
139 |
140 | class Scoutd(Daemon):
141 | def status(self):
142 | try:
143 | pf = file(self.pidfile, 'r')
144 | pid = int(pf.read().strip())
145 | pf.close()
146 | Scout().status(pid)
147 | except IOError as e:
148 | message = "No such process running.\n"
149 | sys.stderr.write(message)
150 |
151 |
152 | def _run(self):
153 | if not cacheserver_running_alive():
154 | Loger().ERROR('CacheServer not running... you must be start it first!')
155 | sys.exit(1)
156 | Loger().INFO('Scoutd %s ' % ScoutBase().avr['version'])
157 | Loger().INFO('Copyright (C) 2011-2019, YWJT.org.')
158 | Loger().INFO('Scoutd started with pid %d' % os.getpid())
159 | Loger().INFO('Scoutd started with %s' % datetime.datetime.now().strftime("%m/%d/%Y %H:%M"))
160 | Scout().run()
161 |
162 | def _stop_first(self):
163 | del_pid()
164 |
165 |
166 |
167 |
168 | def help():
169 | __MAN = 'Usage: %s \n \
170 | \n \
171 | Options: \n \
172 | init create and initialize a new cache partition. \n \
173 | start start all services. \n \
174 | stop stop the main process; cacheserver keeps running. \n \
175 | restart restart the main process; cacheserver keeps running. \n \
176 | reload same as restart. \n \
177 | forcestop stop all services, including cacheserver and the main process. \n \
178 | reservice restart cacheserver and the main process. \n \
179 | status show main process run information. \n \
180 | dstat show system resource statistics. \n \
181 | view show block/unblock information. \n \
182 | watch same as tailf, watching log output. \n \
183 | help show this usage information. \n \
184 | version show version information. \n \
185 | '
186 | print(__MAN % sys.argv[0])
187 |
188 |
189 | if __name__ == '__main__':
190 | cached = CacheServerd()
191 | scoutd = Scoutd(pidfile='scoutd.pid')
192 |
193 | if len(sys.argv) > 1:
194 | if 'START' == (sys.argv[1]).upper():
195 | cached.start()
196 | scoutd.start()
197 | elif 'STOP' == (sys.argv[1]).upper():
198 | scoutd.stop()
199 | elif 'FORCESTOP' == (sys.argv[1]).upper():
200 | scoutd.stop()
201 | cached.stop()
202 | elif 'RESERVICE' == (sys.argv[1]).upper():
203 | print("Are you sure you want to force restart the Scoutd service?\n \
204 | When the cacheServer is restarted, \n \
205 | * If you set storage_type = 'Memory', the data records will be cleared.\n \
206 | * If you set storage_type = 'Disk', the data records will not be cleared.")
207 | if raw_input("Enter ('yes|y|Y'):") in ['yes', 'Y', 'y']:
208 | cached.restart()
209 | scoutd.restart()
210 | elif 'RESTART' == sys.argv[1].upper():
211 | scoutd.restart()
212 | elif 'RELOAD' == sys.argv[1].upper():
213 | scoutd.restart()
214 | elif 'STATUS' == (sys.argv[1]).upper():
215 | scoutd.status()
216 | elif 'HELP' == (sys.argv[1]).upper():
217 | help()
218 | elif 'WATCH' == (sys.argv[1]).upper():
219 | Scout().tailf()
220 | elif 'VERSION' == (sys.argv[1]).upper():
221 | print(ScoutBase().avr['version'])
222 | elif 'VIEW' == (sys.argv[1]).upper():
223 | Scout().view()
224 | elif 'INIT' == (sys.argv[1]).upper():
225 | print("Are you sure you want to initialize the Scoutd service?\n The cache data will be cleared.")
226 | if raw_input("Enter ('yes|y|Y'):") in ['yes', 'Y', 'y']:
227 | cached.stop()
228 | initCache()
229 | cached.start()
230 | elif 'DSTAT' == (sys.argv[1]).upper():
231 | Scout().dstat()
232 | else:
233 | print("Unknown command!")
234 | help()
235 | sys.exit(1)
236 | else:
237 | print(ScoutBase().avr['version'])
238 | help()
239 | sys.exit(1)
240 |
--------------------------------------------------------------------------------
/util.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #-*- coding: utf-8 -*-
3 | """
4 | * @ Scout for Python
5 | ##############################################################################
6 | # Author: YWJT / ZhiQiang Koo #
7 | # Modify: 2020-03-13 #
8 | ##############################################################################
9 | # This program is distributed under the "Artistic License" Agreement #
10 | # The LICENSE file is located in the same directory as this program. Please #
11 | # read the LICENSE file before you make copies or distribute this program #
12 | ##############################################################################
13 | """
14 |
15 | import sys
16 | sys.path.append(".")
17 | import os
18 | import re
19 | import string
20 | import datetime
21 | import time
22 | from multiprocessing import Process
23 | from base import async
24 | from base import ScoutBase, Loger, Notes, Rules, Tailf
25 | from base import scoutd_running_alive
26 | from rule import Rule
27 | from notice import PyEmail
28 | from pcap.dstat import Dstat
29 | from pcap.pkts import Pcapy
30 | from plugin.jsonserver import app_run
31 |
32 |
33 | class Scout(ScoutBase):
34 | def __init__(self):
35 | ScoutBase.__init__(self)
36 | cidr=self.cidr()
37 | self.kwargs={
38 | 'interface': self.avr['motr_interface'],
39 | 'filters': '{0} and {1} and {2}'.format(cidr['lip'], cidr['port'], cidr['wip']),
40 | 'max_bytes': self.avr['max_bytes'],
41 | 'promiscuous': self.avr['promiscuous'],
42 | 'buffer_timeout': self.avr['buffer_timeout'],
43 | 'expire_after_seconds': self.avr['expire_after_seconds'],
44 | 'filepath':self.avr['file_path'],
45 | 'filetype': self.avr['file_type']
46 | }
47 |
48 | def echo(self, active='running', pid=os.getpid()):
49 | print('Scout v%s' % self.avr['version'])
50 | print('Active: \033[32m active (%s) \033[0m, PID: %d (Scoutd), Since %s' % (active, pid, time.strftime("%a %b %d %H:%M:%S CST", time.localtime())))
51 | print('Filter: (%s)' % self.kwargs['filters'])
52 |
53 | def status(self, pid):
54 | info=Dstat().process(pid)
55 | self.echo("running" if info["status"]=="sleeping" else "stopping", pid)
56 | info=sorted(info.iteritems(), key=lambda d:d[0], reverse=False)
57 | for k, v in info:
58 | if k!="connections":
59 | print('%s : %s ' % (k, str(v)))
60 |
61 | def dstat(self):
62 | if scoutd_running_alive():
63 | Dstat().show()
64 |
65 | def view(self):
66 | if scoutd_running_alive():
67 | Rule().view()
68 |
69 | def tailf(self):
70 | if scoutd_running_alive():
71 | Tailf().follow()
72 |
73 | @async
74 | def async_dump(self):
75 | try:
76 | Pcapy().LOOP()
77 | except Exception as e:
78 | Loger().WARNING("Scout dump Exception: %s" %(e))
79 | raise
80 |
81 |
82 | @async
83 | def async_dstat(self):
84 | try:
85 | Dstat().LOOP()
86 | except Exception as e:
87 | Loger().WARNING("Scout dstat Exception: %s" %(e))
88 | pass
89 |
90 |
91 | @async
92 | def async_rule(self):
93 | try:
94 | Rule().LOOP()
95 | except Exception as e:
96 | Loger().WARNING("Scout rule Exception: %s" %(e))
97 | pass
98 |
99 | @async
100 | def async_plugin(self):
101 | try:
102 | app_run()
103 | except Exception as e:
104 | Loger().WARNING("ScoutHttpApi Exception: %s" %(e))
105 | pass
106 |
107 |
108 |
109 | def run(self):
110 | self.async_dump()
111 | self.async_rule()
112 | self.async_dstat()
113 | self.async_plugin()
114 | self.echo()
115 |
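base.py is not included in this section, so the exact @async implementation is unknown; judging by how run() fires all four loops and returns immediately, it presumably wraps the call in a background thread. A minimal sketch under that assumption (renamed async_call here, since async is a reserved word in Python 3):

```python
import threading
from functools import wraps

def async_call(func):
    """Run the decorated function on a daemon thread and return the
    Thread handle so callers may join() when they need completion.
    Sketch of the behaviour @async from base.py appears to provide."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        t = threading.Thread(target=func, args=args, kwargs=kwargs)
        t.daemon = True   # don't keep the interpreter alive for this thread
        t.start()
        return t
    return wrapper

results = []

@async_call
def worker(value):
    results.append(value)

worker(1).join()   # the returned handle lets callers wait if they choose
print(results)  # [1]
```

Daemon threads mean the loops die with the main process, which matches how scoutd.py tears everything down with a single SIGTERM to the main pid.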
116 |
117 |
118 |
--------------------------------------------------------------------------------