├── README.md
├── docs
│   ├── elkraa.md
│   └── rule.txt
├── lib
│   ├── .DS_Store
│   ├── RedisAdd.py
│   ├── __init__.py
│   ├── config.py
│   ├── daemon.py
│   ├── log.py
│   ├── mail.py
│   └── redis
│       ├── __init__.py
│       ├── _compat.py
│       ├── client.py
│       ├── connection.py
│       ├── exceptions.py
│       ├── lock.py
│       ├── sentinel.py
│       └── utils.py
└── main.py
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# AuditdPy

![][Python 2.7]

> A helper script for monitoring command execution on Linux servers, built on ElasticSearch + Logstash + Kibana + Redis + Auditd (elkra for short). Within the elkra stack, this program's job is to read Auditd's logs and forward them to Redis.

> It exists to simplify deployment and avoid wasting resources: running Logstash on every server is expensive. My coding and regex skills are modest, so anyone stronger is welcome to improve the code and the patterns.

elkraa deployment guide
----
[elkraa deployment guide][2]

Directory layout
----
```
├── README.md
├── docs
│   ├── elkraa.md      deployment guide
│   └── rule.txt       auditd rules
├── lib
│   ├── RedisAdd.py    redis operations
│   ├── __init__.py
│   ├── config.py      configuration
│   ├── daemon.py      daemon
│   ├── log.py         debug log
│   ├── mail.py        send mail
│   └── redis          python redis package
└── main.py            entry point
```

Configuration
----
#### Redis / mail settings: the ./lib/config.py file. The Redis settings are required; the mail settings are optional, since mail sending is disabled in the code by default.
```
#-*- coding:utf-8 -*-

# redis config
# redis host
redis_host = 'localhost'
# redis password
redis_pass = 'Mosuan'
# redis db (optional, the default is fine)
redis_db = 0
# redis port
redis_port = 6379
# redis key (effectively the channel name)
redis_key = 'logstash:redis'

# mail config
# mail server host
mail_host = 'smtp.126.com'
# mail server port
mail_port = 25
# mail account
mail_user = 'mosuan_6c6c6c@126.com'
# mail password
mail_pass = 'xxxxxxxxx'
# recipient
mail_receivers = 'mosuansb@gmail.com'
# mail subject
mail_title = 'Auditd Error Message'
```

#### Command whitelist: the self.command_white variable in ./main.py
```
# Regular expressions are supported. Anchor what each rule starts and ends with,
# otherwise false positives and false negatives will be severe.
# Example:
self.command_white = [
    "^(sudo ausearch) -",
    "^grep [a-zA-Z0-9]{1,20}",
    "^ifconfig -a",
]
```

#### Mail toggle: the mail_status variable in ./main.py
```
# False disables mail, True enables it
mail_status = False
```

Running
----
#### Start the program
```
sudo python main.py start
```

#### Stop the program
```
sudo python main.py stop
```


[Python 2.7]: https://img.shields.io/badge/python-2.7-brightgreen.svg
[2]: https://github.com/Mosuan/AuditdPy/blob/master/docs/elkraa.md
--------------------------------------------------------------------------------
/docs/elkraa.md:
--------------------------------------------------------------------------------
# elkraa deployment guide
![][Python 2.7] ![][redis]

What is elkraa
----
* elasticsearch
* logstash
* kibana
* redis
* auditd
* auditdPy

elkraa is just an abbreviation built from the initials of the programs above; typing the full names every time gets tiresome.

### All examples below assume Ubuntu

Server requirements
----
```
N app servers        referred to as the Auditd machines
1 elk + redis host   referred to as the elk machine
```

Workflow
----
![elkraa][1]

* Deploy Auditd + AuditdPy on every server.
* Deploy ElasticSearch + Logstash + Kibana + Redis on the elk machine.

##### Pipeline:
1. Auditd monitors command execution on the server and writes it to the log.
2. AuditdPy extracts entries from the log and rpushes them to Redis.
3. Redis simply plays the role of a queue in this architecture.
4. Logstash watches the Redis key and ships new entries to ElasticSearch.
5.
Finally, Kibana reads the data from ElasticSearch and presents it in the frontend.


elk machine setup
----

##### Set a password on Redis

##### Edit the ElasticSearch config `sudo vim /etc/elasticsearch/elasticsearch.yml`, find `network.host`, and set:
```
network.host: localhost
```

##### Add a Logstash config `sudo vim /etc/logstash/conf.d/config.conf`
```
input {
    redis {
        host => '127.0.0.1'
        password => 'Mosuan'
        data_type => 'list'
        key => 'logstash:redis'
    }
}
output {
    elasticsearch { hosts => ["localhost"] }
    stdout { codec => rubydebug }
}
```

##### Run Logstash
```
sudo nohup /opt/logstash/bin/logstash -f /etc/logstash/conf.d/ &
```

##### Edit the Kibana config `sudo vim /opt/kibana/config/kibana.yml`, find `server.host`, and set:
```
server.host: "0.0.0.0"
```

##### Restart each service after changing its configuration.


Auditd machine setup
----

##### Install auditd (bundled with CentOS 6.x)
```
sudo apt-get install auditd
```

##### AuditdPy
```
git clone https://github.com/Mosuan/AuditdPy
```

##### View and copy the auditd rules
```
cat ./AuditdPy/docs/rule.txt
```

##### Paste the rules into the rules file, replacing its contents
```
sudo vim /etc/audit/audit.rules
```

##### Restart auditd
```
sudo /etc/init.d/auditd restart
```

##### Check that the auditd rules are loaded
```
sudo auditctl -l
```
##### What a successful load looks like
![auditctl][2]

#### AuditdPy configuration

##### Redis / mail settings: the ./lib/config.py file. The Redis settings are required; the mail settings are optional, since mail sending is disabled in the code by default.
```
#-*- coding:utf-8 -*-

# redis config
# redis host
redis_host = 'localhost'
# redis password
redis_pass = 'Mosuan'
# redis db (optional, the default is fine)
redis_db = 0
# redis port
redis_port = 6379
# redis key (effectively the channel name)
redis_key = 'logstash:redis'

# mail config
# mail server host
mail_host = 'smtp.126.com'
# mail server port
mail_port = 25
# mail account
mail_user = 'mosuan_6c6c6c@126.com'
# mail password
mail_pass = 'xxxxxxxxx'
# recipient
mail_receivers = 'mosuansb@gmail.com'
# mail subject
mail_title = 'Auditd Error Message'
```

##### Command whitelist: the self.command_white variable in ./main.py
```
# Regular expressions are supported. Anchor what each rule starts and ends with,
# otherwise false positives and false negatives will be severe.
# Example:
self.command_white = [
    "^(sudo ausearch) -",
    "^grep [a-zA-Z0-9]{1,20}",
    "^ifconfig -a",
]
```

##### Mail toggle: the mail_status variable in ./main.py
```
# False disables mail, True enables it
mail_status = False
```

##### Start the program
```
sudo python main.py start
```

##### Stop the program
```
sudo python main.py stop
```

##### Error log
```
cat /tmp/auditd_py_error.log
```

Result
----
![kibana][6]

References
----
[Python redis library docs][3]
[Logstash Redis docs][4]
[Auditd docs][5]



[1]: http://chuantu.biz/t6/93/1507805942x1033161390.jpg
[2]: http://chuantu.biz/t6/93/1507805758x1033161390.jpg
[Python 2.7]: https://img.shields.io/badge/python-2.7-brightgreen.svg
[redis]: https://img.shields.io/badge/redis-4.0.1-red.svg
[3]: http://redis-py.readthedocs.io/en/latest/
[4]: https://kibana.logstash.es/content/logstash/scale/redis.html
[5]: https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/7/html/security_guide/sec-understanding_audit_log_files
[6]: http://chuantu.biz/t6/93/1507805878x1033161390.png
--------------------------------------------------------------------------------
/docs/rule.txt:
--------------------------------------------------------------------------------
1 | -D 2 | 3 | -w /usr/bin/ssh -p warx -k security_audit 4 | -w /usr/bin/sudo -p warx -k security_audit 5 | -w /usr/bin/whoami -p warx -k security_audit 6 | -w /usr/bin/curl -p warx -k security_audit 7 | 
-w /usr/bin/git -p warx -k security_audit 8 | -w /usr/bin/wget -p warx -k security_audit 9 | -w /usr/bin/who -p warx -k security_audit 10 | -w /usr/bin/arch -p warx -k security_audit 11 | -w /usr/bin/clear -p warx -k security_audit 12 | -w /usr/bin/cut -p warx -k security_audit 13 | -w /usr/bin/cvt -p warx -k security_audit 14 | -w /usr/bin/dig -p warx -k security_audit 15 | -w /usr/bin/diff -p warx -k security_audit 16 | -w /usr/bin/diff3 -p warx -k security_audit 17 | -w /usr/bin/dirmngr -p warx -k security_audit 18 | -w /usr/bin/dirname -p warx -k security_audit 19 | -w /usr/bin/dpkg -p warx -k security_audit 20 | -w /usr/bin/env -p warx -k security_audit 21 | -w /usr/bin/file -p warx -k security_audit 22 | -w /usr/bin/find -p warx -k security_audit 23 | -w /usr/bin/g++ -p warx -k security_audit 24 | -w /usr/bin/gcc -p warx -k security_audit 25 | -w /usr/bin/gdbserver -p warx -k security_audit 26 | -w /usr/bin/gio -p warx -k security_audit 27 | -w /usr/bin/gpg -p warx -k security_audit 28 | -w /usr/bin/gpasswd -p warx -k security_audit 29 | -w /usr/bin/gs -p warx -k security_audit 30 | -w /usr/bin/h2xs -p warx -k security_audit 31 | -w /usr/bin/head -p warx -k security_audit 32 | -w /usr/bin/hd -p warx -k security_audit 33 | -w /usr/bin/ico -p warx -k security_audit 34 | -w /usr/bin/htpasswd -p warx -k security_audit 35 | -w /usr/bin/id -p warx -k security_audit 36 | -w /usr/bin/info -p warx -k security_audit 37 | -w /usr/bin/infocmp -p warx -k security_audit 38 | -w /usr/bin/install -p warx -k security_audit 39 | -w /usr/bin/install-info -p warx -k security_audit 40 | -w /usr/bin/ischroot -p warx -k security_audit 41 | -w /usr/bin/ipcs -p warx -k security_audit 42 | -w /usr/bin/interdiff -p warx -k security_audit 43 | -w /usr/bin/isodump -p warx -k security_audit 44 | -w /usr/bin/isoinfo -p warx -k security_audit 45 | -w /usr/bin/join -p warx -k security_audit 46 | -w /usr/bin/jpgicc -p warx -k security_audit 47 | -w /usr/bin/kbdinfo -p warx -k security_audit 
48 | -w /usr/bin/less -p warx -k security_audit 49 | -w /usr/bin/logger -p warx -k security_audit 50 | -w /usr/bin/look -p warx -k security_audit 51 | -w /usr/bin/loweb -p warx -k security_audit 52 | -w /usr/bin/lowriter -p warx -k security_audit 53 | -w /usr/bin/ltrace -p warx -k security_audit 54 | -w /usr/bin/lsusb -p warx -k security_audit 55 | -w /usr/bin/make -p warx -k security_audit 56 | -w /usr/bin/man -p warx -k security_audit 57 | -w /usr/bin/mandb -p warx -k security_audit 58 | -w /usr/bin/manpath -p warx -k security_audit 59 | -w /usr/bin/mcomp -p warx -k security_audit 60 | -w /usr/bin/mcookie -p warx -k security_audit 61 | -w /usr/bin/mcopy -p warx -k security_audit 62 | -w /usr/bin/md5sum -p warx -k security_audit 63 | -w /usr/bin/mdel -p warx -k security_audit 64 | -w /usr/bin/mkfifo -p warx -k security_audit 65 | -w /usr/bin/mtools -p warx -k security_audit 66 | -w /usr/bin/mtr -p warx -k security_audit 67 | -w /usr/bin/mysql -p warx -k security_audit 68 | -w /usr/bin/mtrace -p warx -k security_audit 69 | -w /usr/bin/namei -p warx -k security_audit 70 | -w /usr/bin/nice -p warx -k security_audit 71 | -w /usr/bin/nl -p warx -k security_audit 72 | -w /usr/bin/nm -p warx -k security_audit 73 | -w /usr/bin/nproc -p warx -k security_audit 74 | -w /usr/bin/nohup -p warx -k security_audit 75 | -w /usr/bin/nroff -p warx -k security_audit 76 | -w /usr/bin/nslookup -p warx -k security_audit 77 | -w /usr/bin/nstat -p warx -k security_audit 78 | -w /usr/bin/nsupdate -p warx -k security_audit 79 | -w /usr/bin/objcopy -p warx -k security_audit 80 | -w /usr/bin/objdump -p warx -k security_audit 81 | -w /usr/bin/oclock -p warx -k security_audit 82 | -w /usr/bin/od -p warx -k security_audit 83 | -w /usr/bin/onboard -p warx -k security_audit 84 | -w /usr/bin/openssl -p warx -k security_audit 85 | -w /usr/bin/orca -p warx -k security_audit 86 | -w /usr/bin/pacat -p warx -k security_audit 87 | -w /usr/bin/pamfile -p warx -k security_audit 88 | -w /usr/bin/pamdice -p 
warx -k security_audit 89 | -w /usr/bin/passwd -p warx -k security_audit 90 | -w /usr/bin/partx -p warx -k security_audit 91 | -w /usr/bin/paste -p warx -k security_audit 92 | -w /usr/bin/perl -p warx -k security_audit 93 | -w /usr/bin/pg -p warx -k security_audit 94 | -w /usr/bin/pgrep -p warx -k security_audit 95 | -w /usr/bin/pic -p warx -k security_audit 96 | -w /usr/bin/pip -p warx -k security_audit 97 | -w /usr/bin/pip2 -p warx -k security_audit 98 | -w /usr/bin/pldd -p warx -k security_audit 99 | -w /usr/bin/pmap -p warx -k security_audit 100 | -w /usr/bin/pon -p warx -k security_audit 101 | -w /usr/bin/pr -p warx -k security_audit 102 | -w /usr/bin/resize -p warx -k security_audit 103 | -w /usr/bin/rev -p warx -k security_audit 104 | -w /usr/bin/redis-benchmark -p warx -k security_audit 105 | -w /usr/bin/redis-check-aof -p warx -k security_audit 106 | -w /usr/bin/redis-check-rdb -p warx -k security_audit 107 | -w /usr/bin/redis-cli -p warx -k security_audit 108 | -w /usr/bin/rgrep -p warx -k security_audit 109 | -w /usr/bin/rsync -p warx -k security_audit 110 | -w /usr/bin/scp -p warx -k security_audit 111 | -w /usr/bin/script -p warx -k security_audit 112 | -w /usr/bin/sdiff -p warx -k security_audit 113 | -w /usr/bin/sftp -p warx -k security_audit 114 | -w /usr/bin/sg -p warx -k security_audit 115 | -w /usr/bin/sha1sum -p warx -k security_audit 116 | -w /usr/bin/shasum -p warx -k security_audit 117 | -w /usr/bin/skill -p warx -k security_audit 118 | -w /usr/bin/slogin -p warx -k security_audit 119 | -w /usr/bin/split -p warx -k security_audit 120 | -w /usr/bin/splitdiff -p warx -k security_audit 121 | -w /usr/bin/ssh-add -p warx -k security_audit 122 | -w /usr/bin/ssh-agent -p warx -k security_audit 123 | -w /usr/bin/ssh-argv0 -p warx -k security_audit 124 | -w /usr/bin/ssh-copy-id -p warx -k security_audit 125 | -w /usr/bin/ssh-import-id -p warx -k security_audit 126 | -w /usr/bin/ssh-import-id-gh -p warx -k security_audit 127 | -w 
/usr/bin/ssh-import-id-lp -p warx -k security_audit 128 | -w /usr/bin/ssh-keygen -p warx -k security_audit 129 | -w /usr/bin/ssh-keyscan -p warx -k security_audit 130 | -w /usr/bin/stat -p warx -k security_audit 131 | -w /usr/bin/subl -p warx -k security_audit 132 | -w /usr/bin/sum -p warx -k security_audit 133 | -w /usr/bin/symcryptrun -p warx -k security_audit 134 | -w /usr/bin/synclient -p warx -k security_audit 135 | -w /usr/bin/syndaemon -p warx -k security_audit 136 | -w /usr/bin/syslinux -p warx -k security_audit 137 | -w /usr/bin/syslinux-legacy -p warx -k security_audit 138 | -w /usr/bin/system-config-printer -p warx -k security_audit 139 | -w /usr/bin/system-config-printer-applet -p warx -k security_audit 140 | -w /usr/bin/systemd-analyze -p warx -k security_audit 141 | -w /usr/bin/systemd-cat -p warx -k security_audit 142 | -w /usr/bin/systemd-cgls -p warx -k security_audit 143 | -w /usr/bin/systemd-resolve -p warx -k security_audit 144 | -w /usr/bin/vim.basic -p warx -k security_audit 145 | -w /usr/bin/tail -p warx -k security_audit 146 | -w /usr/bin/tee -p warx -k security_audit 147 | -w /usr/bin/test -p warx -k security_audit 148 | -w /usr/bin/tgz -p warx -k security_audit 149 | -w /usr/bin/time -p warx -k security_audit 150 | -w /usr/bin/timeout -p warx -k security_audit 151 | -w /usr/bin/top -p warx -k security_audit 152 | -w /usr/bin/tty -p warx -k security_audit 153 | -w /usr/bin/which -p warx -k security_audit 154 | -w /usr/bin/unlink -p warx -k security_audit 155 | -w /usr/bin/unzip -p warx -k security_audit 156 | -w /usr/bin/uptime -p warx -k security_audit 157 | -w /usr/bin/users -p warx -k security_audit 158 | 159 | -w /sbin/acpi_available -p warx -k security_audit 160 | -w /sbin/agetty -p warx -k security_audit 161 | -w /sbin/alsa -p warx -k security_audit 162 | -w /sbin/apm_available -p warx -k security_audit 163 | -w /sbin/apparmor_parser -p warx -k security_audit 164 | -w /sbin/auditctl -p warx -k security_audit 165 | -w /sbin/auditd -p 
warx -k security_audit 166 | -w /sbin/augenrules -p warx -k security_audit 167 | -w /sbin/chcpu -p warx -k security_audit 168 | -w /sbin/ctrlaltdel -p warx -k security_audit 169 | -w /sbin/debugfs -p warx -k security_audit 170 | -w /sbin/ethtool -p warx -k security_audit 171 | -w /sbin/fdisk -p warx -k security_audit 172 | -w /sbin/findfs -p warx -k security_audit 173 | -w /sbin/fsfreeze -p warx -k security_audit 174 | -w /sbin/gdisk -p warx -k security_audit 175 | -w /sbin/getcap -p warx -k security_audit 176 | -w /sbin/installkernel -p warx -k security_audit 177 | -w /sbin/ipmaddr -p warx -k security_audit 178 | -w /sbin/iwconfig -p warx -k security_audit 179 | -w /sbin/iw -p warx -k security_audit 180 | -w /sbin/rarp -p warx -k security_audit 181 | -w /sbin/raw -p warx -k security_audit 182 | -w /sbin/regdbdump -p warx -k security_audit 183 | -w /sbin/resize2fs -p warx -k security_audit 184 | -w /sbin/resolvconf -p warx -k security_audit 185 | -w /sbin/route -p warx -k security_audit 186 | -w /sbin/shadowconfig -p warx -k security_audit 187 | -w /sbin/sulogin -p warx -k security_audit 188 | -w /sbin/switch_root -p warx -k security_audit 189 | -w /sbin/sysctl -p warx -k security_audit 190 | -w /sbin/tc -p warx -k security_audit 191 | -w /sbin/upstart -p warx -k security_audit 192 | -w /sbin/upstart-dbus-bridge -p warx -k security_audit 193 | -w /sbin/upstart-event-bridge -p warx -k security_audit 194 | -w /sbin/upstart-file-bridge -p warx -k security_audit 195 | -w /sbin/upstart-local-bridge -p warx -k security_audit 196 | -w /sbin/upstart-socket-bridge -p warx -k security_audit 197 | -w /sbin/upstart-udev-bridge -p warx -k security_audit 198 | -w /sbin/ureadahead -p warx -k security_audit 199 | -w /sbin/wipefs -p warx -k security_audit 200 | -w /sbin/wpa_action -p warx -k security_audit 201 | -w /sbin/wpa_cli -p warx -k security_audit 202 | -w /sbin/wpa_supplicant -p warx -k security_audit 203 | -w /sbin/xtables-multi -p warx -k security_audit 204 | -w 
/sbin/zramctl -p warx -k security_audit 205 | 206 | -w /bin/bunzip2 -p warx -k security_audit 207 | -w /bin/busybox -p warx -k security_audit 208 | -w /bin/bzcat -p warx -k security_audit 209 | -w /bin/bzdiff -p warx -k security_audit 210 | -w /bin/bzexe -p warx -k security_audit 211 | -w /bin/bzgrep -p warx -k security_audit 212 | -w /bin/bzip2 -p warx -k security_audit 213 | -w /bin/bzip2recover -p warx -k security_audit 214 | -w /bin/bzmore -p warx -k security_audit 215 | -w /bin/cat -p warx -k security_audit 216 | -w /bin/chacl -p warx -k security_audit 217 | -w /bin/mkdir -p warx -k security_audit 218 | -w /bin/rm -p warx -k security_audit 219 | -w /bin/chgrp -p warx -k security_audit 220 | -w /bin/chmod -p warx -k security_audit 221 | -w /bin/chown -p warx -k security_audit 222 | -w /bin/chvt -p warx -k security_audit 223 | -w /bin/cp -p warx -k security_audit 224 | -w /bin/cpio -p warx -k security_audit 225 | -w /bin/date -p warx -k security_audit 226 | -w /bin/dd -p warx -k security_audit 227 | -w /bin/df -p warx -k security_audit 228 | -w /bin/dir -p warx -k security_audit 229 | -w /bin/dmesg -p warx -k security_audit 230 | -w /bin/dumpkeys -p warx -k security_audit 231 | -w /bin/echo -p warx -k security_audit 232 | -w /bin/ed -p warx -k security_audit 233 | -w /bin/efibootmgr -p warx -k security_audit 234 | -w /bin/egrep -p warx -k security_audit 235 | -w /bin/false -p warx -k security_audit 236 | -w /bin/fgconsole -p warx -k security_audit 237 | -w /bin/findmnt -p warx -k security_audit 238 | -w /bin/fuser -p warx -k security_audit 239 | -w /bin/fusermount -p warx -k security_audit 240 | -w /bin/getfacl -p warx -k security_audit 241 | -w /bin/gunzip -p warx -k security_audit 242 | -w /bin/gzip -p warx -k security_audit 243 | -w /bin/hciconfig -p warx -k security_audit 244 | -w /bin/hostname -p warx -k security_audit 245 | -w /bin/ip -p warx -k security_audit 246 | -w /bin/journalctl -p warx -k security_audit 247 | -w /bin/kbd_mode -p warx -k 
security_audit 248 | -w /bin/kill -p warx -k security_audit 249 | -w /bin/kmod -p warx -k security_audit 250 | -w /bin/less -p warx -k security_audit 251 | -w /bin/lessecho -p warx -k security_audit 252 | -w /bin/lesskey -p warx -k security_audit 253 | -w /bin/lesspipe -p warx -k security_audit 254 | -w /bin/ln -p warx -k security_audit 255 | -w /bin/loadkeys -p warx -k security_audit 256 | -w /bin/login -p warx -k security_audit 257 | -w /bin/loginctl -p warx -k security_audit 258 | -w /bin/mknod -p warx -k security_audit 259 | -w /bin/mktemp -p warx -k security_audit 260 | -w /bin/more -p warx -k security_audit 261 | -w /bin/mount -p warx -k security_audit 262 | -w /bin/mountpoint -p warx -k security_audit 263 | -w /bin/mv -p warx -k security_audit 264 | -w /bin/nano -p warx -k security_audit 265 | -w /bin/nc.openbsd -p warx -k security_audit 266 | -w /bin/netstat -p warx -k security_audit 267 | -w /bin/networkctl -p warx -k security_audit 268 | -w /bin/ntfs-3g -p warx -k security_audit 269 | -w /bin/ntfsmove -p warx -k security_audit 270 | -w /bin/openvt -p warx -k security_audit 271 | -w /bin/ping -p warx -k security_audit 272 | -w /bin/ping6 -p warx -k security_audit 273 | -w /bin/plymouth -p warx -k security_audit 274 | -w /bin/ps -p warx -k security_audit 275 | -w /bin/pwd -p warx -k security_audit 276 | -w /bin/readlink -p warx -k security_audit 277 | -w /bin/red -p warx -k security_audit 278 | -w /bin/rmdir -p warx -k security_audit 279 | -w /bin/sed -p warx -k security_audit 280 | -w /bin/setfacl -p warx -k security_audit 281 | -w /bin/setfont -p warx -k security_audit 282 | -w /bin/setupcon -p warx -k security_audit 283 | -w /bin/sleep -p warx -k security_audit 284 | -w /bin/ss -p warx -k security_audit 285 | -w /bin/stty -p warx -k security_audit 286 | -w /bin/su -p warx -k security_audit 287 | -w /bin/sync -p warx -k security_audit 288 | -w /bin/systemctl -p warx -k security_audit 289 | -w /bin/tailf -p warx -k security_audit 290 | -w /bin/tar -p warx 
-k security_audit 291 | -w /bin/tempfile -p warx -k security_audit 292 | -w /bin/touch -p warx -k security_audit 293 | -w /bin/true -p warx -k security_audit 294 | -w /bin/umount -p warx -k security_audit 295 | -w /bin/ulockmgr_server -p warx -k security_audit 296 | -w /bin/udevadm -p warx -k security_audit 297 | -w /bin/vdir -p warx -k security_audit 298 | -w /bin/which -p warx -k security_audit 299 | -w /bin/whiptail -p warx -k security_audit 300 | -w /bin/zcat -p warx -k security_audit 301 | -w /bin/zcmp -p warx -k security_audit 302 | -w /bin/zdiff -p warx -k security_audit 303 | -w /bin/zegrep -p warx -k security_audit 304 | -w /bin/zfgrep -p warx -k security_audit 305 | -w /bin/zforce -p warx -k security_audit 306 | -w /bin/zgrep -p warx -k security_audit 307 | -w /bin/zless -p warx -k security_audit 308 | -w /bin/zmore -p warx -k security_audit 309 | -w /bin/znew -p warx -k security_audit 310 | -w /bin/zsh -p warx -k security_audit 311 | -w /bin/zsh5 -p warx -k security_audit 312 | 313 | -w /usr/sbin/aa-exec -p warx -k security_audit 314 | -w /usr/sbin/aa-status -p warx -k security_audit 315 | -w /usr/sbin/accessdb -p warx -k security_audit 316 | -w /usr/sbin/acpid -p warx -k security_audit 317 | -w /usr/sbin/addgnupghome -p warx -k security_audit 318 | -w /usr/sbin/add-shell -p warx -k security_audit 319 | -w /usr/sbin/adduser -p warx -k security_audit 320 | -w /usr/sbin/alsactl -p warx -k security_audit 321 | -w /usr/sbin/alsa-info.sh -p warx -k security_audit 322 | -w /usr/sbin/anacron -p warx -k security_audit 323 | -w /usr/sbin/applygnupgdefaults -p warx -k security_audit 324 | -w /usr/sbin/aptd -p warx -k security_audit 325 | -w /usr/sbin/arp -p warx -k security_audit 326 | -w /usr/sbin/arpd -p warx -k security_audit 327 | -w /usr/sbin/aspell-autobuildhash -p warx -k security_audit 328 | -w /usr/sbin/avahi-autoipd -p warx -k security_audit 329 | -w /usr/sbin/avahi-daemon -p warx -k security_audit 330 | -w /usr/sbin/chat -p warx -k security_audit 331 | 
-w /usr/sbin/chgpasswd -p warx -k security_audit 332 | -w /usr/sbin/chpasswd -p warx -k security_audit 333 | -w /usr/sbin/chroot -p warx -k security_audit 334 | -w /usr/sbin/cppw -p warx -k security_audit 335 | -w /usr/sbin/cracklib-check -p warx -k security_audit 336 | -w /usr/sbin/cracklib-format -p warx -k security_audit 337 | -w /usr/sbin/cracklib-packer -p warx -k security_audit 338 | -w /usr/sbin/cracklib-unpacker -p warx -k security_audit 339 | -w /usr/sbin/create-cracklib-dict -p warx -k security_audit 340 | -w /usr/sbin/cron -p warx -k security_audit 341 | -w /usr/sbin/cupsaccept -p warx -k security_audit 342 | -w /usr/sbin/cupsaddsmb -p warx -k security_audit 343 | -w /usr/sbin/cups-browsed -p warx -k security_audit 344 | -w /usr/sbin/cupsctl -p warx -k security_audit 345 | -w /usr/sbin/cupsd -p warx -k security_audit 346 | -w /usr/sbin/cupsfilter -p warx -k security_audit 347 | -w /usr/sbin/cups-genppdupdate -p warx -k security_audit 348 | -w /usr/sbin/deluser -p warx -k security_audit 349 | -w /usr/sbin/dmidecode -p warx -k security_audit 350 | -w /usr/sbin/dnsmasq -p warx -k security_audit 351 | -w /usr/sbin/fdformat -p warx -k security_audit 352 | -w /usr/sbin/getweb -p warx -k security_audit 353 | -w /usr/sbin/groupadd -p warx -k security_audit 354 | -w /usr/sbin/groupdel -p warx -k security_audit 355 | -w /usr/sbin/groupmod -p warx -k security_audit 356 | -w /usr/sbin/grpck -p warx -k security_audit 357 | -w /usr/sbin/grpconv -p warx -k security_audit 358 | -w /usr/sbin/grub-install -p warx -k security_audit 359 | -w /usr/sbin/grub-macbless -p warx -k security_audit 360 | -w /usr/sbin/grub-mkconfig -p warx -k security_audit 361 | -w /usr/sbin/grub-mkdevicemap -p warx -k security_audit 362 | -w /usr/sbin/grub-probe -p warx -k security_audit 363 | -w /usr/sbin/grub-reboot -p warx -k security_audit 364 | -w /usr/sbin/iptables-apply -p warx -k security_audit 365 | -w /usr/sbin/kerneloops -p warx -k security_audit 366 | -w /usr/sbin/lpadmin -p warx -k 
security_audit 367 | -w /usr/sbin/NetworkManager -p warx -k security_audit 368 | -w /usr/sbin/newusers -p warx -k security_audit 369 | -w /usr/sbin/nginx -p warx -k security_audit 370 | -w /usr/sbin/nologin -p warx -k security_audit 371 | -w /usr/sbin/remove-shell -p warx -k security_audit 372 | -w /usr/sbin/rfkill -p warx -k security_audit 373 | -w /usr/sbin/rsyslogd -p warx -k security_audit 374 | -w /usr/sbin/safe_finger -p warx -k security_audit 375 | -w /usr/sbin/service -p warx -k security_audit 376 | -w /usr/sbin/sshd -p warx -k security_audit 377 | -w /usr/sbin/tarcat -p warx -k security_audit 378 | -w /usr/sbin/tcpdump -p warx -k security_audit 379 | -w /usr/sbin/useradd -p warx -k security_audit 380 | -w /usr/sbin/userdel -p warx -k security_audit 381 | -w /usr/sbin/usermod -p warx -k security_audit 382 | -w /usr/sbin/uuidd -p warx -k security_audit 383 | -w /usr/sbin/validlocale -p warx -k security_audit 384 | -w /usr/sbin/vbetool -p warx -k security_audit 385 | -w /usr/sbin/vcstime -p warx -k security_audit 386 | -w /usr/sbin/vipw -p warx -k security_audit 387 | -w /usr/sbin/visudo -p warx -k security_audit 388 | -w /usr/sbin/vpddecode -p warx -k security_audit 389 | -w /usr/sbin/zic -p warx -k security_audit -------------------------------------------------------------------------------- /lib/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Mosuan/AuditdPy/384764701f6df499d7627c44e6c56810095507c7/lib/.DS_Store -------------------------------------------------------------------------------- /lib/RedisAdd.py: -------------------------------------------------------------------------------- 1 | #-*- coding:utf-8 -*- 2 | 3 | import time 4 | import redis 5 | 6 | from log import Log 7 | from config import * 8 | 9 | class Redis(object): 10 | 11 | def push_msg(self, msg): 12 | try: 13 | r = redis.StrictRedis(host=redis_host, port=redis_port, db=redis_db, password=redis_pass) 14 | 
            r.rpush(redis_key, msg)
            time.sleep(0.01)
        except Exception as e:
            Log().debug("RedisAdd.py push_msg error: %s" % (str(e)))


if __name__ == '__main__':
    obj = Redis()
    obj.push_msg('{"Mosuan":"unravel"}')
--------------------------------------------------------------------------------
/lib/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Mosuan/AuditdPy/384764701f6df499d7627c44e6c56810095507c7/lib/__init__.py
--------------------------------------------------------------------------------
/lib/config.py:
--------------------------------------------------------------------------------
#-*- coding:utf-8 -*-

# redis config
redis_host = '10.102.5.119'
redis_pass = 'Mosuan'
redis_db = 0
redis_port = 6379
redis_key = 'logstash:redis'

# mail config
mail_host = 'smtp.126.com'
mail_port = 25
mail_user = 'mosuan_6c6c6c@126.com'
mail_pass = '*******'
mail_receivers = 'mosuansb@gmail.com'
mail_title = 'Auditd Error Message'
# send mail?
mail_status = False
--------------------------------------------------------------------------------
/lib/daemon.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python2.7
#-*- coding:utf-8 -*-
import sys
import os
import time
import atexit
from signal import SIGTERM


class Daemon:
    """
    A generic daemon class.
13 | 14 | Usage: subclass the Daemon class and override the run() method 15 | """ 16 | def __init__(self, pidfile, stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'): 17 | self.stdin = stdin 18 | self.stdout = stdout 19 | self.stderr = stderr 20 | self.pidfile = pidfile 21 | 22 | def daemonize(self): 23 | """ 24 | do the UNIX double-fork magic, see Stevens' "Advanced 25 | Programming in the UNIX Environment" for details (ISBN 0201563177) 26 | http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16 27 | """ 28 | try: 29 | pid = os.fork() 30 | if pid > 0: 31 | # exit first parent 32 | sys.exit(0) 33 | except OSError, e: 34 | sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror)) 35 | sys.exit(1) 36 | 37 | # decouple from parent environment 38 | os.chdir("/") 39 | os.setsid() 40 | os.umask(0) 41 | 42 | # do second fork 43 | try: 44 | pid = os.fork() 45 | if pid > 0: 46 | # exit from second parent 47 | sys.exit(0) 48 | except OSError, e: 49 | sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror)) 50 | sys.exit(1) 51 | 52 | # redirect standard file descriptors 53 | sys.stdout.flush() 54 | sys.stderr.flush() 55 | si = file(self.stdin, 'r') 56 | so = file(self.stdout, 'a+') 57 | se = file(self.stderr, 'a+', 0) 58 | os.dup2(si.fileno(), sys.stdin.fileno()) 59 | os.dup2(so.fileno(), sys.stdout.fileno()) 60 | os.dup2(se.fileno(), sys.stderr.fileno()) 61 | 62 | # write pidfile 63 | atexit.register(self.delpid) 64 | pid = str(os.getpid()) 65 | file(self.pidfile,'w+').write("%s\n" % pid) 66 | 67 | def delpid(self): 68 | os.remove(self.pidfile) 69 | 70 | def start(self): 71 | """ 72 | Start the daemon 73 | """ 74 | # Check for a pidfile to see if the daemon already runs 75 | try: 76 | pf = file(self.pidfile,'r') 77 | pid = int(pf.read().strip()) 78 | pf.close() 79 | except IOError: 80 | pid = None 81 | 82 | if pid: 83 | message = "pidfile %s already exist. 
Daemon already running?\n" 84 | sys.stderr.write(message % self.pidfile) 85 | sys.exit(1) 86 | 87 | # Start the daemon 88 | self.daemonize() 89 | self.run() 90 | 91 | def stop(self): 92 | """ 93 | Stop the daemon 94 | """ 95 | # Get the pid from the pidfile 96 | try: 97 | pf = file(self.pidfile,'r') 98 | pid = int(pf.read().strip()) 99 | pf.close() 100 | except IOError: 101 | pid = None 102 | 103 | if not pid: 104 | message = "pidfile %s does not exist. Daemon not running?\n" 105 | sys.stderr.write(message % self.pidfile) 106 | return # not an error in a restart 107 | 108 | # Try killing the daemon process 109 | try: 110 | while 1: 111 | os.kill(pid, SIGTERM) 112 | time.sleep(0.1) 113 | except OSError, err: 114 | err = str(err) 115 | if err.find("No such process") > 0: 116 | if os.path.exists(self.pidfile): 117 | os.remove(self.pidfile) 118 | else: 119 | print str(err) 120 | sys.exit(1) 121 | 122 | def restart(self): 123 | """ 124 | Restart the daemon 125 | """ 126 | self.stop() 127 | self.start() 128 | 129 | def run(self): 130 | """ 131 | You should override this method when you subclass Daemon. It will be called after the process has been 132 | daemonized by start() or restart(). 
        """


class MyDaemon(Daemon):
    def run(self):
        while True:
            time.sleep(60)
            print('daemon running')


if __name__ == "__main__":
    daemon = MyDaemon("/tmp/auditd_daemon.pid")
    if len(sys.argv) >= 2:
        if 'start' == sys.argv[1]:
            daemon.start()
        elif 'stop' == sys.argv[1]:
            daemon.stop()
        elif 'restart' == sys.argv[1]:
            daemon.restart()
        else:
            print("Unknown command")
            sys.exit(2)
        sys.exit(0)
    else:
        print("usage: %s start|stop|restart" % sys.argv[0])
        sys.exit(2)
--------------------------------------------------------------------------------
/lib/log.py:
--------------------------------------------------------------------------------
#-*- coding:utf-8 -*-

import os
import time

class Log(object):

    def __init__(self):
        self.error_log = '/tmp/auditd_py_error.log'

    def _time(self):
        return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))

    def _ip(self):
        return os.popen("ifconfig -a|grep inet|grep -v 127.0.0.1|grep -v inet6 | awk '{print $2}' | tr -d 'addr:'").read()

    def _file(self, filename, msg=''):
        '''
        Append msg to filename, creating the file first if needed.
        '''
        if not os.path.exists(filename):
            objs = open(filename, "w+")
            objs.close()
        file_obj = open(filename, "a")
        file_obj.write(msg)
        file_obj.close()

    def debug(self, msg):
        now = self._time()
        ip = str(self._ip()).replace("\n", " ")
        error = str("time=%s msg=%s" % (now, str(msg)))
        msg = "{\"level\":\"debug\", \"error\": \"%s\", \"time\": \"%s\", \"ip\": \"%s\"}" % (msg, now, ip)
        self._file(self.error_log, error + "\n")
        if 'Redis' not in msg:
            from RedisAdd import Redis
            Redis().push_msg(str(msg))

if __name__ == '__main__':
    Log().debug("debug log test!")
--------------------------------------------------------------------------------
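Note that log.py's `debug()` assembles its JSON payload with plain `%` formatting, so any `msg` containing a double quote produces invalid JSON for Logstash to parse later. A minimal sketch of a safer construction using `json.dumps`, assuming the same field names as `debug()` (the helper name `build_debug_payload` is illustrative, not part of the project):

```python
import json
import time

def build_debug_payload(msg, ip):
    # json.dumps escapes quotes and backslashes inside msg, so the payload
    # stays valid JSON no matter what the audited command line contains.
    return json.dumps({
        "level": "debug",
        "error": msg,
        "time": time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())),
        "ip": ip,
    })
```

The resulting string could be handed to `Redis().push_msg()` unchanged.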
/lib/mail.py: -------------------------------------------------------------------------------- 1 | #-*- coding:utf-8 -*- 2 | 3 | import smtplib 4 | from email.mime.text import MIMEText 5 | from email.header import Header 6 | 7 | from config import * 8 | from log import Log 9 | 10 | class Mail(object): 11 | 12 | def __init__(self): 13 | self.mail_host = mail_host 14 | self.mail_port = mail_port 15 | self.mail_user = mail_user 16 | self.mail_pass = mail_pass 17 | self.receivers = mail_receivers 18 | 19 | 20 | def content(self, msg): 21 | msg = """ 22 | {} 23 | """.format(msg) 24 | message = MIMEText(msg, 'html') 25 | message['From'] = self.mail_user 26 | message['To'] = self.receivers 27 | 28 | message['Subject'] = Header(mail_title, 'utf-8') 29 | return message 30 | 31 | def sendmail(self, msg): 32 | try: 33 | message = self.content(msg) 34 | objs = smtplib.SMTP(self.mail_host, self.mail_port) 35 | objs.login(self.mail_user, self.mail_pass) 36 | objs.sendmail(self.mail_user, self.receivers, message.as_string()) 37 | objs.quit() 38 | return 'done' 39 | except smtplib.SMTPException, e: 40 | # objs may be unbound here if SMTP() itself failed, so skip quit() 41 | Log().debug("mail.py sendmail error: %s" % (str(e))) 42 | return 'fail' 43 | 44 | 45 | if __name__ == '__main__': 46 | 47 | obj = Mail() 48 | print(obj.sendmail(""" 49 | 50 | auditdPy 51 | 52 | auditd log py 53 | ip 54 | 55 | ifconfig -a|grep inet|grep -v 127.0.0.1|grep -v inet6 | awk '{print $2}' | tr -d 'addr:' 56 | """)) -------------------------------------------------------------------------------- /lib/redis/__init__.py: -------------------------------------------------------------------------------- 1 | from client import Redis, StrictRedis 2 | from connection import ( 3 | BlockingConnectionPool, 4 | ConnectionPool, 5 | Connection, 6 | SSLConnection, 7 | UnixDomainSocketConnection 8 | ) 9 | from utils import from_url 10 | from exceptions import ( 11 | AuthenticationError, 12 | BusyLoadingError, 13 | ConnectionError, 14 | DataError, 15 | InvalidResponse, 16 | PubSubError,
17 | ReadOnlyError, 18 | RedisError, 19 | ResponseError, 20 | TimeoutError, 21 | WatchError 22 | ) 23 | 24 | 25 | __version__ = '2.10.6' 26 | VERSION = tuple(map(int, __version__.split('.'))) 27 | 28 | __all__ = [ 29 | 'Redis', 'StrictRedis', 'ConnectionPool', 'BlockingConnectionPool', 30 | 'Connection', 'SSLConnection', 'UnixDomainSocketConnection', 'from_url', 31 | 'AuthenticationError', 'BusyLoadingError', 'ConnectionError', 'DataError', 32 | 'InvalidResponse', 'PubSubError', 'ReadOnlyError', 'RedisError', 33 | 'ResponseError', 'TimeoutError', 'WatchError' 34 | ] 35 | -------------------------------------------------------------------------------- /lib/redis/_compat.py: -------------------------------------------------------------------------------- 1 | """Internal module for Python 2 backwards compatibility.""" 2 | import errno 3 | import sys 4 | 5 | try: 6 | InterruptedError = InterruptedError 7 | except: 8 | InterruptedError = OSError 9 | 10 | # For Python older than 3.5, retry EINTR. 11 | if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and 12 | sys.version_info[1] < 5): 13 | # Adapted from https://bugs.python.org/review/23863/patch/14532/54418 14 | import socket 15 | import time 16 | import errno 17 | 18 | from select import select as _select 19 | 20 | def select(rlist, wlist, xlist, timeout): 21 | while True: 22 | try: 23 | return _select(rlist, wlist, xlist, timeout) 24 | except InterruptedError as e: 25 | # Python 2 does not define InterruptedError, instead 26 | # try to catch an OSError with errno == EINTR == 4. 27 | if getattr(e, 'errno', None) == getattr(errno, 'EINTR', 4): 28 | continue 29 | raise 30 | 31 | # Wrapper for handling interruptable system calls. 32 | def _retryable_call(s, func, *args, **kwargs): 33 | # Some modules (SSL) use the _fileobject wrapper directly and 34 | # implement a smaller portion of the socket interface, thus we 35 | # need to let them continue to do so. 
36 | timeout, deadline = None, 0.0 37 | attempted = False 38 | try: 39 | timeout = s.gettimeout() 40 | except AttributeError: 41 | pass 42 | 43 | if timeout: 44 | deadline = time.time() + timeout 45 | 46 | try: 47 | while True: 48 | if attempted and timeout: 49 | now = time.time() 50 | if now >= deadline: 51 | raise socket.error(errno.EWOULDBLOCK, "timed out") 52 | else: 53 | # Overwrite the timeout on the socket object 54 | # to take into account elapsed time. 55 | s.settimeout(deadline - now) 56 | try: 57 | attempted = True 58 | return func(*args, **kwargs) 59 | except socket.error as e: 60 | if e.args[0] == errno.EINTR: 61 | continue 62 | raise 63 | finally: 64 | # Set the existing timeout back for future 65 | # calls. 66 | if timeout: 67 | s.settimeout(timeout) 68 | 69 | def recv(sock, *args, **kwargs): 70 | return _retryable_call(sock, sock.recv, *args, **kwargs) 71 | 72 | def recv_into(sock, *args, **kwargs): 73 | return _retryable_call(sock, sock.recv_into, *args, **kwargs) 74 | 75 | else: # Python 3.5 and above automatically retry EINTR 76 | from select import select 77 | 78 | def recv(sock, *args, **kwargs): 79 | return sock.recv(*args, **kwargs) 80 | 81 | def recv_into(sock, *args, **kwargs): 82 | return sock.recv_into(*args, **kwargs) 83 | 84 | if sys.version_info[0] < 3: 85 | from urllib import unquote 86 | from urlparse import parse_qs, urlparse 87 | from itertools import imap, izip 88 | from string import letters as ascii_letters 89 | from Queue import Queue 90 | try: 91 | from cStringIO import StringIO as BytesIO 92 | except ImportError: 93 | from StringIO import StringIO as BytesIO 94 | 95 | # special unicode handling for python2 to avoid UnicodeDecodeError 96 | def safe_unicode(obj, *args): 97 | """ return the unicode representation of obj """ 98 | try: 99 | return unicode(obj, *args) 100 | except UnicodeDecodeError: 101 | # obj is byte string 102 | ascii_text = str(obj).encode('string_escape') 103 | return unicode(ascii_text) 104 | 105 | def 
iteritems(x): 106 | return x.iteritems() 107 | 108 | def iterkeys(x): 109 | return x.iterkeys() 110 | 111 | def itervalues(x): 112 | return x.itervalues() 113 | 114 | def nativestr(x): 115 | return x if isinstance(x, str) else x.encode('utf-8', 'replace') 116 | 117 | def u(x): 118 | return x.decode() 119 | 120 | def b(x): 121 | return x 122 | 123 | def next(x): 124 | return x.next() 125 | 126 | def byte_to_chr(x): 127 | return x 128 | 129 | unichr = unichr 130 | xrange = xrange 131 | basestring = basestring 132 | unicode = unicode 133 | bytes = str 134 | long = long 135 | else: 136 | from urllib.parse import parse_qs, unquote, urlparse 137 | from io import BytesIO 138 | from string import ascii_letters 139 | from queue import Queue 140 | 141 | def iteritems(x): 142 | return iter(x.items()) 143 | 144 | def iterkeys(x): 145 | return iter(x.keys()) 146 | 147 | def itervalues(x): 148 | return iter(x.values()) 149 | 150 | def byte_to_chr(x): 151 | return chr(x) 152 | 153 | def nativestr(x): 154 | return x if isinstance(x, str) else x.decode('utf-8', 'replace') 155 | 156 | def u(x): 157 | return x 158 | 159 | def b(x): 160 | return x.encode('latin-1') if not isinstance(x, bytes) else x 161 | 162 | next = next 163 | unichr = chr 164 | imap = map 165 | izip = zip 166 | xrange = range 167 | basestring = str 168 | unicode = str 169 | safe_unicode = str 170 | bytes = bytes 171 | long = int 172 | 173 | try: # Python 3 174 | from queue import LifoQueue, Empty, Full 175 | except ImportError: 176 | from Queue import Empty, Full 177 | try: # Python 2.6 - 2.7 178 | from Queue import LifoQueue 179 | except ImportError: # Python 2.5 180 | from Queue import Queue 181 | # From the Python 2.7 lib. Python 2.5 already extracted the core 182 | # methods to aid implementating different queue organisations. 183 | 184 | class LifoQueue(Queue): 185 | "Override queue methods to implement a last-in first-out queue." 
186 | 187 | def _init(self, maxsize): 188 | self.maxsize = maxsize 189 | self.queue = [] 190 | 191 | def _qsize(self, len=len): 192 | return len(self.queue) 193 | 194 | def _put(self, item): 195 | self.queue.append(item) 196 | 197 | def _get(self): 198 | return self.queue.pop() 199 | -------------------------------------------------------------------------------- /lib/redis/connection.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | from distutils.version import StrictVersion 3 | from itertools import chain 4 | import os 5 | import socket 6 | import sys 7 | import threading 8 | import warnings 9 | 10 | try: 11 | import ssl 12 | ssl_available = True 13 | except ImportError: 14 | ssl_available = False 15 | 16 | from _compat import (b, xrange, imap, byte_to_chr, unicode, bytes, long, 17 | BytesIO, nativestr, basestring, iteritems, 18 | LifoQueue, Empty, Full, urlparse, parse_qs, 19 | recv, recv_into, select, unquote) 20 | from exceptions import ( 21 | RedisError, 22 | ConnectionError, 23 | TimeoutError, 24 | BusyLoadingError, 25 | ResponseError, 26 | InvalidResponse, 27 | AuthenticationError, 28 | NoScriptError, 29 | ExecAbortError, 30 | ReadOnlyError 31 | ) 32 | from utils import HIREDIS_AVAILABLE 33 | if HIREDIS_AVAILABLE: 34 | import hiredis 35 | 36 | hiredis_version = StrictVersion(hiredis.__version__) 37 | HIREDIS_SUPPORTS_CALLABLE_ERRORS = \ 38 | hiredis_version >= StrictVersion('0.1.3') 39 | HIREDIS_SUPPORTS_BYTE_BUFFER = \ 40 | hiredis_version >= StrictVersion('0.1.4') 41 | 42 | if not HIREDIS_SUPPORTS_BYTE_BUFFER: 43 | msg = ("redis-py works best with hiredis >= 0.1.4. You're running " 44 | "hiredis %s. Please consider upgrading." 
% hiredis.__version__) 45 | warnings.warn(msg) 46 | 47 | HIREDIS_USE_BYTE_BUFFER = True 48 | # only use byte buffer if hiredis supports it and the Python version 49 | # is >= 2.7 50 | if not HIREDIS_SUPPORTS_BYTE_BUFFER or ( 51 | sys.version_info[0] == 2 and sys.version_info[1] < 7): 52 | HIREDIS_USE_BYTE_BUFFER = False 53 | 54 | SYM_STAR = b('*') 55 | SYM_DOLLAR = b('$') 56 | SYM_CRLF = b('\r\n') 57 | SYM_EMPTY = b('') 58 | 59 | SERVER_CLOSED_CONNECTION_ERROR = "Connection closed by server." 60 | 61 | 62 | class Token(object): 63 | """ 64 | Literal strings in Redis commands, such as the command names and any 65 | hard-coded arguments are wrapped in this class so we know not to apply 66 | and encoding rules on them. 67 | """ 68 | 69 | _cache = {} 70 | 71 | @classmethod 72 | def get_token(cls, value): 73 | "Gets a cached token object or creates a new one if not already cached" 74 | 75 | # Use try/except because after running for a short time most tokens 76 | # should already be cached 77 | try: 78 | return cls._cache[value] 79 | except KeyError: 80 | token = Token(value) 81 | cls._cache[value] = token 82 | return token 83 | 84 | def __init__(self, value): 85 | if isinstance(value, Token): 86 | value = value.value 87 | self.value = value 88 | self.encoded_value = b(value) 89 | 90 | def __repr__(self): 91 | return self.value 92 | 93 | def __str__(self): 94 | return self.value 95 | 96 | 97 | class Encoder(object): 98 | "Encode strings to bytes and decode bytes to strings" 99 | 100 | def __init__(self, encoding, encoding_errors, decode_responses): 101 | self.encoding = encoding 102 | self.encoding_errors = encoding_errors 103 | self.decode_responses = decode_responses 104 | 105 | def encode(self, value): 106 | "Return a bytestring representation of the value" 107 | if isinstance(value, Token): 108 | return value.encoded_value 109 | elif isinstance(value, bytes): 110 | return value 111 | elif isinstance(value, (int, long)): 112 | value = b(str(value)) 113 | elif 
isinstance(value, float): 114 | value = b(repr(value)) 115 | elif not isinstance(value, basestring): 116 | # an object we don't know how to deal with. default to unicode() 117 | value = unicode(value) 118 | if isinstance(value, unicode): 119 | value = value.encode(self.encoding, self.encoding_errors) 120 | return value 121 | 122 | def decode(self, value, force=False): 123 | "Return a unicode string from the byte representation" 124 | if (self.decode_responses or force) and isinstance(value, bytes): 125 | value = value.decode(self.encoding, self.encoding_errors) 126 | return value 127 | 128 | 129 | class BaseParser(object): 130 | EXCEPTION_CLASSES = { 131 | 'ERR': { 132 | 'max number of clients reached': ConnectionError 133 | }, 134 | 'EXECABORT': ExecAbortError, 135 | 'LOADING': BusyLoadingError, 136 | 'NOSCRIPT': NoScriptError, 137 | 'READONLY': ReadOnlyError, 138 | } 139 | 140 | def parse_error(self, response): 141 | "Parse an error response" 142 | error_code = response.split(' ')[0] 143 | if error_code in self.EXCEPTION_CLASSES: 144 | response = response[len(error_code) + 1:] 145 | exception_class = self.EXCEPTION_CLASSES[error_code] 146 | if isinstance(exception_class, dict): 147 | exception_class = exception_class.get(response, ResponseError) 148 | return exception_class(response) 149 | return ResponseError(response) 150 | 151 | 152 | class SocketBuffer(object): 153 | def __init__(self, socket, socket_read_size): 154 | self._sock = socket 155 | self.socket_read_size = socket_read_size 156 | self._buffer = BytesIO() 157 | # number of bytes written to the buffer from the socket 158 | self.bytes_written = 0 159 | # number of bytes read from the buffer 160 | self.bytes_read = 0 161 | 162 | @property 163 | def length(self): 164 | return self.bytes_written - self.bytes_read 165 | 166 | def _read_from_socket(self, length=None): 167 | socket_read_size = self.socket_read_size 168 | buf = self._buffer 169 | buf.seek(self.bytes_written) 170 | marker = 0 171 | 172 | try: 
173 | while True: 174 | data = recv(self._sock, socket_read_size) 175 | # an empty string indicates the server shutdown the socket 176 | if isinstance(data, bytes) and len(data) == 0: 177 | raise socket.error(SERVER_CLOSED_CONNECTION_ERROR) 178 | buf.write(data) 179 | data_length = len(data) 180 | self.bytes_written += data_length 181 | marker += data_length 182 | 183 | if length is not None and length > marker: 184 | continue 185 | break 186 | except socket.timeout: 187 | raise TimeoutError("Timeout reading from socket") 188 | except socket.error: 189 | e = sys.exc_info()[1] 190 | raise ConnectionError("Error while reading from socket: %s" % 191 | (e.args,)) 192 | 193 | def read(self, length): 194 | length = length + 2 # make sure to read the \r\n terminator 195 | # make sure we've read enough data from the socket 196 | if length > self.length: 197 | self._read_from_socket(length - self.length) 198 | 199 | self._buffer.seek(self.bytes_read) 200 | data = self._buffer.read(length) 201 | self.bytes_read += len(data) 202 | 203 | # purge the buffer when we've consumed it all so it doesn't 204 | # grow forever 205 | if self.bytes_read == self.bytes_written: 206 | self.purge() 207 | 208 | return data[:-2] 209 | 210 | def readline(self): 211 | buf = self._buffer 212 | buf.seek(self.bytes_read) 213 | data = buf.readline() 214 | while not data.endswith(SYM_CRLF): 215 | # there's more data in the socket that we need 216 | self._read_from_socket() 217 | buf.seek(self.bytes_read) 218 | data = buf.readline() 219 | 220 | self.bytes_read += len(data) 221 | 222 | # purge the buffer when we've consumed it all so it doesn't 223 | # grow forever 224 | if self.bytes_read == self.bytes_written: 225 | self.purge() 226 | 227 | return data[:-2] 228 | 229 | def purge(self): 230 | self._buffer.seek(0) 231 | self._buffer.truncate() 232 | self.bytes_written = 0 233 | self.bytes_read = 0 234 | 235 | def close(self): 236 | try: 237 | self.purge() 238 | self._buffer.close() 239 | except: 240 | # 
issue #633 suggests the purge/close somehow raised a 241 | # BadFileDescriptor error. Perhaps the client ran out of 242 | # memory or something else? It's probably OK to ignore 243 | # any error being raised from purge/close since we're 244 | # removing the reference to the instance below. 245 | pass 246 | self._buffer = None 247 | self._sock = None 248 | 249 | 250 | class PythonParser(BaseParser): 251 | "Plain Python parsing class" 252 | def __init__(self, socket_read_size): 253 | self.socket_read_size = socket_read_size 254 | self.encoder = None 255 | self._sock = None 256 | self._buffer = None 257 | 258 | def __del__(self): 259 | try: 260 | self.on_disconnect() 261 | except Exception: 262 | pass 263 | 264 | def on_connect(self, connection): 265 | "Called when the socket connects" 266 | self._sock = connection._sock 267 | self._buffer = SocketBuffer(self._sock, self.socket_read_size) 268 | self.encoder = connection.encoder 269 | 270 | def on_disconnect(self): 271 | "Called when the socket disconnects" 272 | if self._sock is not None: 273 | self._sock.close() 274 | self._sock = None 275 | if self._buffer is not None: 276 | self._buffer.close() 277 | self._buffer = None 278 | self.encoder = None 279 | 280 | def can_read(self): 281 | return self._buffer and bool(self._buffer.length) 282 | 283 | def read_response(self): 284 | response = self._buffer.readline() 285 | if not response: 286 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) 287 | 288 | byte, response = byte_to_chr(response[0]), response[1:] 289 | 290 | if byte not in ('-', '+', ':', '$', '*'): 291 | raise InvalidResponse("Protocol Error: %s, %s" % 292 | (str(byte), str(response))) 293 | 294 | # server returned an error 295 | if byte == '-': 296 | response = nativestr(response) 297 | error = self.parse_error(response) 298 | # if the error is a ConnectionError, raise immediately so the user 299 | # is notified 300 | if isinstance(error, ConnectionError): 301 | raise error 302 | # otherwise, we're 
dealing with a ResponseError that might belong 303 | # inside a pipeline response. the connection's read_response() 304 | # and/or the pipeline's execute() will raise this error if 305 | # necessary, so just return the exception instance here. 306 | return error 307 | # single value 308 | elif byte == '+': 309 | pass 310 | # int value 311 | elif byte == ':': 312 | response = long(response) 313 | # bulk response 314 | elif byte == '$': 315 | length = int(response) 316 | if length == -1: 317 | return None 318 | response = self._buffer.read(length) 319 | # multi-bulk response 320 | elif byte == '*': 321 | length = int(response) 322 | if length == -1: 323 | return None 324 | response = [self.read_response() for i in xrange(length)] 325 | if isinstance(response, bytes): 326 | response = self.encoder.decode(response) 327 | return response 328 | 329 | 330 | class HiredisParser(BaseParser): 331 | "Parser class for connections using Hiredis" 332 | def __init__(self, socket_read_size): 333 | if not HIREDIS_AVAILABLE: 334 | raise RedisError("Hiredis is not installed") 335 | self.socket_read_size = socket_read_size 336 | 337 | if HIREDIS_USE_BYTE_BUFFER: 338 | self._buffer = bytearray(socket_read_size) 339 | 340 | def __del__(self): 341 | try: 342 | self.on_disconnect() 343 | except Exception: 344 | pass 345 | 346 | def on_connect(self, connection): 347 | self._sock = connection._sock 348 | kwargs = { 349 | 'protocolError': InvalidResponse, 350 | 'replyError': self.parse_error, 351 | } 352 | 353 | # hiredis < 0.1.3 doesn't support functions that create exceptions 354 | if not HIREDIS_SUPPORTS_CALLABLE_ERRORS: 355 | kwargs['replyError'] = ResponseError 356 | 357 | if connection.encoder.decode_responses: 358 | kwargs['encoding'] = connection.encoder.encoding 359 | self._reader = hiredis.Reader(**kwargs) 360 | self._next_response = False 361 | 362 | def on_disconnect(self): 363 | self._sock = None 364 | self._reader = None 365 | self._next_response = False 366 | 367 | def 
can_read(self): 368 | if not self._reader: 369 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) 370 | 371 | if self._next_response is False: 372 | self._next_response = self._reader.gets() 373 | return self._next_response is not False 374 | 375 | def read_response(self): 376 | if not self._reader: 377 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) 378 | 379 | # _next_response might be cached from a can_read() call 380 | if self._next_response is not False: 381 | response = self._next_response 382 | self._next_response = False 383 | return response 384 | 385 | response = self._reader.gets() 386 | socket_read_size = self.socket_read_size 387 | while response is False: 388 | try: 389 | if HIREDIS_USE_BYTE_BUFFER: 390 | bufflen = recv_into(self._sock, self._buffer) 391 | if bufflen == 0: 392 | raise socket.error(SERVER_CLOSED_CONNECTION_ERROR) 393 | else: 394 | buffer = recv(self._sock, socket_read_size) 395 | # an empty string indicates the server shutdown the socket 396 | if not isinstance(buffer, bytes) or len(buffer) == 0: 397 | raise socket.error(SERVER_CLOSED_CONNECTION_ERROR) 398 | except socket.timeout: 399 | raise TimeoutError("Timeout reading from socket") 400 | except socket.error: 401 | e = sys.exc_info()[1] 402 | raise ConnectionError("Error while reading from socket: %s" % 403 | (e.args,)) 404 | if HIREDIS_USE_BYTE_BUFFER: 405 | self._reader.feed(self._buffer, 0, bufflen) 406 | else: 407 | self._reader.feed(buffer) 408 | response = self._reader.gets() 409 | # if an older version of hiredis is installed, we need to attempt 410 | # to convert ResponseErrors to their appropriate types. 
411 | if not HIREDIS_SUPPORTS_CALLABLE_ERRORS: 412 | if isinstance(response, ResponseError): 413 | response = self.parse_error(response.args[0]) 414 | elif isinstance(response, list) and response and \ 415 | isinstance(response[0], ResponseError): 416 | response[0] = self.parse_error(response[0].args[0]) 417 | # if the response is a ConnectionError or the response is a list and 418 | # the first item is a ConnectionError, raise it as something bad 419 | # happened 420 | if isinstance(response, ConnectionError): 421 | raise response 422 | elif isinstance(response, list) and response and \ 423 | isinstance(response[0], ConnectionError): 424 | raise response[0] 425 | return response 426 | 427 | 428 | if HIREDIS_AVAILABLE: 429 | DefaultParser = HiredisParser 430 | else: 431 | DefaultParser = PythonParser 432 | 433 | 434 | class Connection(object): 435 | "Manages TCP communication to and from a Redis server" 436 | description_format = "Connection" 437 | 438 | def __init__(self, host='localhost', port=6379, db=0, password=None, 439 | socket_timeout=None, socket_connect_timeout=None, 440 | socket_keepalive=False, socket_keepalive_options=None, 441 | retry_on_timeout=False, encoding='utf-8', 442 | encoding_errors='strict', decode_responses=False, 443 | parser_class=DefaultParser, socket_read_size=65536): 444 | self.pid = os.getpid() 445 | self.host = host 446 | self.port = int(port) 447 | self.db = db 448 | self.password = password 449 | self.socket_timeout = socket_timeout 450 | self.socket_connect_timeout = socket_connect_timeout or socket_timeout 451 | self.socket_keepalive = socket_keepalive 452 | self.socket_keepalive_options = socket_keepalive_options or {} 453 | self.retry_on_timeout = retry_on_timeout 454 | self.encoder = Encoder(encoding, encoding_errors, decode_responses) 455 | self._sock = None 456 | self._parser = parser_class(socket_read_size=socket_read_size) 457 | self._description_args = { 458 | 'host': self.host, 459 | 'port': self.port, 460 | 'db': 
self.db, 461 | } 462 | self._connect_callbacks = [] 463 | 464 | def __repr__(self): 465 | return self.description_format % self._description_args 466 | 467 | def __del__(self): 468 | try: 469 | self.disconnect() 470 | except Exception: 471 | pass 472 | 473 | def register_connect_callback(self, callback): 474 | self._connect_callbacks.append(callback) 475 | 476 | def clear_connect_callbacks(self): 477 | self._connect_callbacks = [] 478 | 479 | def connect(self): 480 | "Connects to the Redis server if not already connected" 481 | if self._sock: 482 | return 483 | try: 484 | sock = self._connect() 485 | except socket.timeout: 486 | raise TimeoutError("Timeout connecting to server") 487 | except socket.error: 488 | e = sys.exc_info()[1] 489 | raise ConnectionError(self._error_message(e)) 490 | 491 | self._sock = sock 492 | try: 493 | self.on_connect() 494 | except RedisError: 495 | # clean up after any error in on_connect 496 | self.disconnect() 497 | raise 498 | 499 | # run any user callbacks. 
right now the only internal callback 500 | # is for pubsub channel/pattern resubscription 501 | for callback in self._connect_callbacks: 502 | callback(self) 503 | 504 | def _connect(self): 505 | "Create a TCP socket connection" 506 | # we want to mimic what socket.create_connection does to support 507 | # ipv4/ipv6, but we want to set options prior to calling 508 | # socket.connect() 509 | err = None 510 | for res in socket.getaddrinfo(self.host, self.port, 0, 511 | socket.SOCK_STREAM): 512 | family, socktype, proto, canonname, socket_address = res 513 | sock = None 514 | try: 515 | sock = socket.socket(family, socktype, proto) 516 | # TCP_NODELAY 517 | sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) 518 | 519 | # TCP_KEEPALIVE 520 | if self.socket_keepalive: 521 | sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) 522 | for k, v in iteritems(self.socket_keepalive_options): 523 | sock.setsockopt(socket.SOL_TCP, k, v) 524 | 525 | # set the socket_connect_timeout before we connect 526 | sock.settimeout(self.socket_connect_timeout) 527 | 528 | # connect 529 | sock.connect(socket_address) 530 | 531 | # set the socket_timeout now that we're connected 532 | sock.settimeout(self.socket_timeout) 533 | return sock 534 | 535 | except socket.error as _: 536 | err = _ 537 | if sock is not None: 538 | sock.close() 539 | 540 | if err is not None: 541 | raise err 542 | raise socket.error("socket.getaddrinfo returned an empty list") 543 | 544 | def _error_message(self, exception): 545 | # args for socket.error can either be (errno, "message") 546 | # or just "message" 547 | if len(exception.args) == 1: 548 | return "Error connecting to %s:%s. %s." % \ 549 | (self.host, self.port, exception.args[0]) 550 | else: 551 | return "Error %s connecting to %s:%s. %s." 
% \ 552 | (exception.args[0], self.host, self.port, exception.args[1]) 553 | 554 | def on_connect(self): 555 | "Initialize the connection, authenticate and select a database" 556 | self._parser.on_connect(self) 557 | 558 | # if a password is specified, authenticate 559 | if self.password: 560 | self.send_command('AUTH', self.password) 561 | if nativestr(self.read_response()) != 'OK': 562 | raise AuthenticationError('Invalid Password') 563 | 564 | # if a database is specified, switch to it 565 | if self.db: 566 | self.send_command('SELECT', self.db) 567 | if nativestr(self.read_response()) != 'OK': 568 | raise ConnectionError('Invalid Database') 569 | 570 | def disconnect(self): 571 | "Disconnects from the Redis server" 572 | self._parser.on_disconnect() 573 | if self._sock is None: 574 | return 575 | try: 576 | self._sock.shutdown(socket.SHUT_RDWR) 577 | self._sock.close() 578 | except socket.error: 579 | pass 580 | self._sock = None 581 | 582 | def send_packed_command(self, command): 583 | "Send an already packed command to the Redis server" 584 | if not self._sock: 585 | self.connect() 586 | try: 587 | if isinstance(command, str): 588 | command = [command] 589 | for item in command: 590 | self._sock.sendall(item) 591 | except socket.timeout: 592 | self.disconnect() 593 | raise TimeoutError("Timeout writing to socket") 594 | except socket.error: 595 | e = sys.exc_info()[1] 596 | self.disconnect() 597 | if len(e.args) == 1: 598 | errno, errmsg = 'UNKNOWN', e.args[0] 599 | else: 600 | errno = e.args[0] 601 | errmsg = e.args[1] 602 | raise ConnectionError("Error %s while writing to socket. %s." % 603 | (errno, errmsg)) 604 | except: 605 | self.disconnect() 606 | raise 607 | 608 | def send_command(self, *args): 609 | "Pack and send a command to the Redis server" 610 | self.send_packed_command(self.pack_command(*args)) 611 | 612 | def can_read(self, timeout=0): 613 | "Poll the socket to see if there's data that can be read." 
614 | sock = self._sock 615 | if not sock: 616 | self.connect() 617 | sock = self._sock 618 | return self._parser.can_read() or \ 619 | bool(select([sock], [], [], timeout)[0]) 620 | 621 | def read_response(self): 622 | "Read the response from a previously sent command" 623 | try: 624 | response = self._parser.read_response() 625 | except: 626 | self.disconnect() 627 | raise 628 | if isinstance(response, ResponseError): 629 | raise response 630 | return response 631 | 632 | def pack_command(self, *args): 633 | "Pack a series of arguments into the Redis protocol" 634 | output = [] 635 | # the client might have included 1 or more literal arguments in 636 | # the command name, e.g., 'CONFIG GET'. The Redis server expects these 637 | # arguments to be sent separately, so split the first argument 638 | # manually. All of these arguements get wrapped in the Token class 639 | # to prevent them from being encoded. 640 | command = args[0] 641 | if ' ' in command: 642 | args = tuple([Token.get_token(s) 643 | for s in command.split()]) + args[1:] 644 | else: 645 | args = (Token.get_token(command),) + args[1:] 646 | 647 | buff = SYM_EMPTY.join( 648 | (SYM_STAR, b(str(len(args))), SYM_CRLF)) 649 | 650 | for arg in imap(self.encoder.encode, args): 651 | # to avoid large string mallocs, chunk the command into the 652 | # output list if we're sending large values 653 | if len(buff) > 6000 or len(arg) > 6000: 654 | buff = SYM_EMPTY.join( 655 | (buff, SYM_DOLLAR, b(str(len(arg))), SYM_CRLF)) 656 | output.append(buff) 657 | output.append(arg) 658 | buff = SYM_CRLF 659 | else: 660 | buff = SYM_EMPTY.join((buff, SYM_DOLLAR, b(str(len(arg))), 661 | SYM_CRLF, arg, SYM_CRLF)) 662 | output.append(buff) 663 | return output 664 | 665 | def pack_commands(self, commands): 666 | "Pack multiple commands into the Redis protocol" 667 | output = [] 668 | pieces = [] 669 | buffer_length = 0 670 | 671 | for cmd in commands: 672 | for chunk in self.pack_command(*cmd): 673 | pieces.append(chunk) 674 | 
buffer_length += len(chunk) 675 | 676 | if buffer_length > 6000: 677 | output.append(SYM_EMPTY.join(pieces)) 678 | buffer_length = 0 679 | pieces = [] 680 | 681 | if pieces: 682 | output.append(SYM_EMPTY.join(pieces)) 683 | return output 684 | 685 | 686 | class SSLConnection(Connection): 687 | description_format = "SSLConnection" 688 | 689 | def __init__(self, ssl_keyfile=None, ssl_certfile=None, ssl_cert_reqs=None, 690 | ssl_ca_certs=None, **kwargs): 691 | if not ssl_available: 692 | raise RedisError("Python wasn't built with SSL support") 693 | 694 | super(SSLConnection, self).__init__(**kwargs) 695 | 696 | self.keyfile = ssl_keyfile 697 | self.certfile = ssl_certfile 698 | if ssl_cert_reqs is None: 699 | ssl_cert_reqs = ssl.CERT_NONE 700 | elif isinstance(ssl_cert_reqs, basestring): 701 | CERT_REQS = { 702 | 'none': ssl.CERT_NONE, 703 | 'optional': ssl.CERT_OPTIONAL, 704 | 'required': ssl.CERT_REQUIRED 705 | } 706 | if ssl_cert_reqs not in CERT_REQS: 707 | raise RedisError( 708 | "Invalid SSL Certificate Requirements Flag: %s" % 709 | ssl_cert_reqs) 710 | ssl_cert_reqs = CERT_REQS[ssl_cert_reqs] 711 | self.cert_reqs = ssl_cert_reqs 712 | self.ca_certs = ssl_ca_certs 713 | 714 | def _connect(self): 715 | "Wrap the socket with SSL support" 716 | sock = super(SSLConnection, self)._connect() 717 | sock = ssl.wrap_socket(sock, 718 | cert_reqs=self.cert_reqs, 719 | keyfile=self.keyfile, 720 | certfile=self.certfile, 721 | ca_certs=self.ca_certs) 722 | return sock 723 | 724 | 725 | class UnixDomainSocketConnection(Connection): 726 | description_format = "UnixDomainSocketConnection" 727 | 728 | def __init__(self, path='', db=0, password=None, 729 | socket_timeout=None, encoding='utf-8', 730 | encoding_errors='strict', decode_responses=False, 731 | retry_on_timeout=False, 732 | parser_class=DefaultParser, socket_read_size=65536): 733 | self.pid = os.getpid() 734 | self.path = path 735 | self.db = db 736 | self.password = password 737 | self.socket_timeout = 
socket_timeout 738 | self.retry_on_timeout = retry_on_timeout 739 | self.encoder = Encoder(encoding, encoding_errors, decode_responses) 740 | self._sock = None 741 | self._parser = parser_class(socket_read_size=socket_read_size) 742 | self._description_args = { 743 | 'path': self.path, 744 | 'db': self.db, 745 | } 746 | self._connect_callbacks = [] 747 | 748 | def _connect(self): 749 | "Create a Unix domain socket connection" 750 | sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 751 | sock.settimeout(self.socket_timeout) 752 | sock.connect(self.path) 753 | return sock 754 | 755 | def _error_message(self, exception): 756 | # args for socket.error can either be (errno, "message") 757 | # or just "message" 758 | if len(exception.args) == 1: 759 | return "Error connecting to unix socket: %s. %s." % \ 760 | (self.path, exception.args[0]) 761 | else: 762 | return "Error %s connecting to unix socket: %s. %s." % \ 763 | (exception.args[0], self.path, exception.args[1]) 764 | 765 | 766 | FALSE_STRINGS = ('0', 'F', 'FALSE', 'N', 'NO') 767 | 768 | 769 | def to_bool(value): 770 | if value is None or value == '': 771 | return None 772 | if isinstance(value, basestring) and value.upper() in FALSE_STRINGS: 773 | return False 774 | return bool(value) 775 | 776 | 777 | URL_QUERY_ARGUMENT_PARSERS = { 778 | 'socket_timeout': float, 779 | 'socket_connect_timeout': float, 780 | 'socket_keepalive': to_bool, 781 | 'retry_on_timeout': to_bool 782 | } 783 | 784 | 785 | class ConnectionPool(object): 786 | "Generic connection pool" 787 | @classmethod 788 | def from_url(cls, url, db=None, decode_components=False, **kwargs): 789 | """ 790 | Return a connection pool configured from the given URL. 
791 | 792 | For example:: 793 | 794 | redis://[:password]@localhost:6379/0 795 | rediss://[:password]@localhost:6379/0 796 | unix://[:password]@/path/to/socket.sock?db=0 797 | 798 | Three URL schemes are supported: 799 | 800 | - ``redis://`` 801 | <http://www.iana.org/assignments/uri-schemes/prov/redis> creates a 802 | normal TCP socket connection 803 | - ``rediss://`` 804 | <http://www.iana.org/assignments/uri-schemes/prov/rediss> creates an 805 | SSL wrapped TCP socket connection 806 | - ``unix://`` creates a Unix Domain Socket connection 807 | 808 | There are several ways to specify a database number. The parse function 809 | will return the first specified option: 810 | 1. A ``db`` querystring option, e.g. redis://localhost?db=0 811 | 2. If using the redis:// scheme, the path argument of the url, e.g. 812 | redis://localhost/0 813 | 3. The ``db`` argument to this function. 814 | 815 | If none of these options are specified, db=0 is used. 816 | 817 | The ``decode_components`` argument allows this function to work with 818 | percent-encoded URLs. If this argument is set to ``True`` all ``%xx`` 819 | escapes will be replaced by their single-character equivalents after 820 | the URL has been parsed. This only applies to the ``hostname``, 821 | ``path``, and ``password`` components. 822 | 823 | Any additional querystring arguments and keyword arguments will be 824 | passed along to the ConnectionPool class's initializer. The querystring 825 | arguments ``socket_connect_timeout`` and ``socket_timeout`` if supplied 826 | are parsed as float values. The arguments ``socket_keepalive`` and 827 | ``retry_on_timeout`` are parsed to boolean values that accept 828 | True/False, Yes/No values to indicate state. Invalid types cause a 829 | ``UserWarning`` to be raised. In the case of conflicting arguments, 830 | querystring arguments always win. 831 | """ 832 | url_string = url 833 | url = urlparse(url) 834 | qs = '' 835 | 836 | # in python2.6, custom URL schemes don't recognize querystring values 837 | # they're left as part of the url.path. 838 | if '?'
in url.path and not url.query: 839 | # chop the querystring including the ? off the end of the url 840 | # and reparse it. 841 | qs = url.path.split('?', 1)[1] 842 | url = urlparse(url_string[:-(len(qs) + 1)]) 843 | else: 844 | qs = url.query 845 | 846 | url_options = {} 847 | 848 | for name, value in iteritems(parse_qs(qs)): 849 | if value and len(value) > 0: 850 | parser = URL_QUERY_ARGUMENT_PARSERS.get(name) 851 | if parser: 852 | try: 853 | url_options[name] = parser(value[0]) 854 | except (TypeError, ValueError): 855 | warnings.warn(UserWarning( 856 | "Invalid value for `%s` in connection URL." % name 857 | )) 858 | else: 859 | url_options[name] = value[0] 860 | 861 | if decode_components: 862 | password = unquote(url.password) if url.password else None 863 | path = unquote(url.path) if url.path else None 864 | hostname = unquote(url.hostname) if url.hostname else None 865 | else: 866 | password = url.password 867 | path = url.path 868 | hostname = url.hostname 869 | 870 | # We only support redis:// and unix:// schemes. 
871 | if url.scheme == 'unix': 872 | url_options.update({ 873 | 'password': password, 874 | 'path': path, 875 | 'connection_class': UnixDomainSocketConnection, 876 | }) 877 | 878 | else: 879 | url_options.update({ 880 | 'host': hostname, 881 | 'port': int(url.port or 6379), 882 | 'password': password, 883 | }) 884 | 885 | # If there's a path argument, use it as the db argument if a 886 | # querystring value wasn't specified 887 | if 'db' not in url_options and path: 888 | try: 889 | url_options['db'] = int(path.replace('/', '')) 890 | except (AttributeError, ValueError): 891 | pass 892 | 893 | if url.scheme == 'rediss': 894 | url_options['connection_class'] = SSLConnection 895 | 896 | # last shot at the db value 897 | url_options['db'] = int(url_options.get('db', db or 0)) 898 | 899 | # update the arguments from the URL values 900 | kwargs.update(url_options) 901 | 902 | # backwards compatibility 903 | if 'charset' in kwargs: 904 | warnings.warn(DeprecationWarning( 905 | '"charset" is deprecated. Use "encoding" instead')) 906 | kwargs['encoding'] = kwargs.pop('charset') 907 | if 'errors' in kwargs: 908 | warnings.warn(DeprecationWarning( 909 | '"errors" is deprecated. Use "encoding_errors" instead')) 910 | kwargs['encoding_errors'] = kwargs.pop('errors') 911 | 912 | return cls(**kwargs) 913 | 914 | def __init__(self, connection_class=Connection, max_connections=None, 915 | **connection_kwargs): 916 | """ 917 | Create a connection pool. If max_connections is set, then this 918 | object raises redis.ConnectionError when the pool's limit is reached. 919 | 920 | By default, TCP connections are created unless connection_class is 921 | specified. Use redis.UnixDomainSocketConnection for unix sockets. 922 | 923 | Any additional keyword arguments are passed to the constructor of 924 | connection_class.
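The URL-to-options mapping performed by ``from_url`` above can be sketched with the standard library alone. ``url_to_options`` is a hypothetical helper written for illustration, covering only the TCP (``redis://``) branch, not part of redis-py:

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

def url_to_options(url):
    # Hypothetical helper mirroring the TCP branch of from_url above:
    # host/port/password come from the netloc, db from the path.
    parsed = urlparse(url)
    options = {
        'host': parsed.hostname,
        'port': int(parsed.port or 6379),
        'password': parsed.password,
    }
    # The path component doubles as the db number for redis:// URLs;
    # "last shot at the db value" falls back to db 0.
    path = (parsed.path or '').replace('/', '')
    options['db'] = int(path) if path.isdigit() else 0
    return options

print(url_to_options('redis://:secret@example.com:6380/2'))
```

A querystring ``db`` option and the ``decode_components`` handling are omitted here; the real method also resolves those, with querystring arguments taking precedence.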
925 | """ 926 | max_connections = max_connections or 2 ** 31 927 | if not isinstance(max_connections, (int, long)) or max_connections < 0: 928 | raise ValueError('"max_connections" must be a positive integer') 929 | 930 | self.connection_class = connection_class 931 | self.connection_kwargs = connection_kwargs 932 | self.max_connections = max_connections 933 | 934 | self.reset() 935 | 936 | def __repr__(self): 937 | return "%s<%s>" % ( 938 | type(self).__name__, 939 | self.connection_class.description_format % self.connection_kwargs, 940 | ) 941 | 942 | def reset(self): 943 | self.pid = os.getpid() 944 | self._created_connections = 0 945 | self._available_connections = [] 946 | self._in_use_connections = set() 947 | self._check_lock = threading.Lock() 948 | 949 | def _checkpid(self): 950 | if self.pid != os.getpid(): 951 | with self._check_lock: 952 | if self.pid == os.getpid(): 953 | # another thread already did the work while we waited 954 | # on the lock. 955 | return 956 | self.disconnect() 957 | self.reset() 958 | 959 | def get_connection(self, command_name, *keys, **options): 960 | "Get a connection from the pool" 961 | self._checkpid() 962 | try: 963 | connection = self._available_connections.pop() 964 | except IndexError: 965 | connection = self.make_connection() 966 | self._in_use_connections.add(connection) 967 | return connection 968 | 969 | def get_encoder(self): 970 | "Return an encoder based on encoding settings" 971 | kwargs = self.connection_kwargs 972 | return Encoder( 973 | encoding=kwargs.get('encoding', 'utf-8'), 974 | encoding_errors=kwargs.get('encoding_errors', 'strict'), 975 | decode_responses=kwargs.get('decode_responses', False) 976 | ) 977 | 978 | def make_connection(self): 979 | "Create a new connection" 980 | if self._created_connections >= self.max_connections: 981 | raise ConnectionError("Too many connections") 982 | self._created_connections += 1 983 | return self.connection_class(**self.connection_kwargs) 984 | 985 | def 
release(self, connection): 986 | "Releases the connection back to the pool" 987 | self._checkpid() 988 | if connection.pid != self.pid: 989 | return 990 | self._in_use_connections.remove(connection) 991 | self._available_connections.append(connection) 992 | 993 | def disconnect(self): 994 | "Disconnects all connections in the pool" 995 | all_conns = chain(self._available_connections, 996 | self._in_use_connections) 997 | for connection in all_conns: 998 | connection.disconnect() 999 | 1000 | 1001 | class BlockingConnectionPool(ConnectionPool): 1002 | """ 1003 | Thread-safe blocking connection pool:: 1004 | 1005 | >>> from redis.client import Redis 1006 | >>> client = Redis(connection_pool=BlockingConnectionPool()) 1007 | 1008 | It performs the same function as the default 1009 | :py:class:`~redis.connection.ConnectionPool` implementation, in that 1010 | it maintains a pool of reusable connections that can be shared by 1011 | multiple redis clients (safely across threads if required). 1012 | 1013 | The difference is that, in the event that a client tries to get a 1014 | connection from the pool when all connections are in use, rather than 1015 | raising a :py:class:`~redis.exceptions.ConnectionError` (as the default 1016 | :py:class:`~redis.connection.ConnectionPool` implementation does), it 1017 | makes the client wait ("blocks") for a specified number of seconds until 1018 | a connection becomes available. 1019 | 1020 | Use ``max_connections`` to increase / decrease the pool size:: 1021 | 1022 | >>> pool = BlockingConnectionPool(max_connections=10) 1023 | 1024 | Use ``timeout`` to tell it either how many seconds to wait for a connection 1025 | to become available, or to block forever: 1026 | 1027 | # Block forever. 1028 | >>> pool = BlockingConnectionPool(timeout=None) 1029 | 1030 | # Raise a ``ConnectionError`` after five seconds if a connection is 1031 | # not available.
1032 | >>> pool = BlockingConnectionPool(timeout=5) 1033 | """ 1034 | def __init__(self, max_connections=50, timeout=20, 1035 | connection_class=Connection, queue_class=LifoQueue, 1036 | **connection_kwargs): 1037 | 1038 | self.queue_class = queue_class 1039 | self.timeout = timeout 1040 | super(BlockingConnectionPool, self).__init__( 1041 | connection_class=connection_class, 1042 | max_connections=max_connections, 1043 | **connection_kwargs) 1044 | 1045 | def reset(self): 1046 | self.pid = os.getpid() 1047 | self._check_lock = threading.Lock() 1048 | 1049 | # Create and fill up a thread safe queue with ``None`` values. 1050 | self.pool = self.queue_class(self.max_connections) 1051 | while True: 1052 | try: 1053 | self.pool.put_nowait(None) 1054 | except Full: 1055 | break 1056 | 1057 | # Keep a list of actual connection instances so that we can 1058 | # disconnect them later. 1059 | self._connections = [] 1060 | 1061 | def make_connection(self): 1062 | "Make a fresh connection." 1063 | connection = self.connection_class(**self.connection_kwargs) 1064 | self._connections.append(connection) 1065 | return connection 1066 | 1067 | def get_connection(self, command_name, *keys, **options): 1068 | """ 1069 | Get a connection, blocking for ``self.timeout`` until a connection 1070 | is available from the pool. 1071 | 1072 | If the connection returned is ``None`` then creates a new connection. 1073 | Because we use a last-in first-out queue, the existing connections 1074 | (having been returned to the pool after the initial ``None`` values 1075 | were added) will be returned before ``None`` values. This means we only 1076 | create new connections when we need to, i.e.: the actual number of 1077 | connections will only increase in response to demand. 1078 | """ 1079 | # Make sure we haven't changed process. 1080 | self._checkpid() 1081 | 1082 | # Try and get a connection from the pool. If one isn't available within 1083 | # self.timeout then raise a ``ConnectionError``. 
1084 | connection = None 1085 | try: 1086 | connection = self.pool.get(block=True, timeout=self.timeout) 1087 | except Empty: 1088 | # Note that this is not caught by the redis client and will be 1089 | # raised unless handled by application code. 1090 | raise ConnectionError("No connection available.") 1091 | 1092 | # If the ``connection`` is actually ``None`` then that's a cue to make 1093 | # a new connection to add to the pool. 1094 | if connection is None: 1095 | connection = self.make_connection() 1096 | 1097 | return connection 1098 | 1099 | def release(self, connection): 1100 | "Releases the connection back to the pool." 1101 | # Make sure we haven't changed process. 1102 | self._checkpid() 1103 | if connection.pid != self.pid: 1104 | return 1105 | 1106 | # Put the connection back into the pool. 1107 | try: 1108 | self.pool.put_nowait(connection) 1109 | except Full: 1110 | # perhaps the pool has been reset() after a fork? regardless, 1111 | # we don't want this connection 1112 | pass 1113 | 1114 | def disconnect(self): 1115 | "Disconnects all connections in the pool." 1116 | for connection in self._connections: 1117 | connection.disconnect() 1118 | -------------------------------------------------------------------------------- /lib/redis/exceptions.py: -------------------------------------------------------------------------------- 1 | "Core exceptions raised by the Redis client" 2 | from _compat import unicode 3 | 4 | 5 | class RedisError(Exception): 6 | pass 7 | 8 | 9 | # python 2.5 doesn't implement Exception.__unicode__.
Add it here to all 10 | # our exception types 11 | if not hasattr(RedisError, '__unicode__'): 12 | def __unicode__(self): 13 | if isinstance(self.args[0], unicode): 14 | return self.args[0] 15 | return unicode(self.args[0]) 16 | RedisError.__unicode__ = __unicode__ 17 | 18 | 19 | class AuthenticationError(RedisError): 20 | pass 21 | 22 | 23 | class ConnectionError(RedisError): 24 | pass 25 | 26 | 27 | class TimeoutError(RedisError): 28 | pass 29 | 30 | 31 | class BusyLoadingError(ConnectionError): 32 | pass 33 | 34 | 35 | class InvalidResponse(RedisError): 36 | pass 37 | 38 | 39 | class ResponseError(RedisError): 40 | pass 41 | 42 | 43 | class DataError(RedisError): 44 | pass 45 | 46 | 47 | class PubSubError(RedisError): 48 | pass 49 | 50 | 51 | class WatchError(RedisError): 52 | pass 53 | 54 | 55 | class NoScriptError(ResponseError): 56 | pass 57 | 58 | 59 | class ExecAbortError(ResponseError): 60 | pass 61 | 62 | 63 | class ReadOnlyError(ResponseError): 64 | pass 65 | 66 | 67 | class LockError(RedisError, ValueError): 68 | "Errors acquiring or releasing a lock" 69 | # NOTE: For backwards compatibility, this class derives from ValueError. 70 | # This was originally chosen to behave like threading.Lock. 71 | pass 72 | -------------------------------------------------------------------------------- /lib/redis/lock.py: -------------------------------------------------------------------------------- 1 | import threading 2 | import time as mod_time 3 | import uuid 4 | from exceptions import LockError, WatchError 5 | from utils import dummy 6 | from _compat import b 7 | 8 | 9 | class Lock(object): 10 | """ 11 | A shared, distributed Lock. Using Redis for locking allows the Lock 12 | to be shared across processes and/or machines. 13 | 14 | It's left to the user to resolve deadlock issues and make sure 15 | multiple clients play nicely together.
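The locking contract sketched in this class (SETNX to acquire, compare-then-delete to release) can be exercised against an in-memory stand-in. ``FakeRedis`` below is a hypothetical dict-backed stub written for illustration, not a real client:

```python
import uuid

class FakeRedis(object):
    """Hypothetical in-memory stand-in for the two commands this sketch needs."""
    def __init__(self):
        self.store = {}
    def setnx(self, name, value):
        # SETNX semantics: set the key only if it does not already exist.
        if name in self.store:
            return False
        self.store[name] = value
        return True
    def get(self, name):
        return self.store.get(name)
    def delete(self, name):
        self.store.pop(name, None)

r = FakeRedis()
token = uuid.uuid1().hex
assert r.setnx('my-lock', token)        # first caller acquires the lock
assert not r.setnx('my-lock', 'other')  # a second caller is refused
if r.get('my-lock') == token:           # release only if we still own it
    r.delete('my-lock')
assert r.get('my-lock') is None
```

Against a real server the compare-then-delete must be atomic (the Lock class uses WATCH/MULTI, LuaLock uses a script); the dict version elides that.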
16 | """ 17 | def __init__(self, redis, name, timeout=None, sleep=0.1, 18 | blocking=True, blocking_timeout=None, thread_local=True): 19 | """ 20 | Create a new Lock instance named ``name`` using the Redis client 21 | supplied by ``redis``. 22 | 23 | ``timeout`` indicates a maximum life for the lock. 24 | By default, it will remain locked until release() is called. 25 | ``timeout`` can be specified as a float or integer, both representing 26 | the number of seconds to wait. 27 | 28 | ``sleep`` indicates the amount of time to sleep per loop iteration 29 | when the lock is in blocking mode and another client is currently 30 | holding the lock. 31 | 32 | ``blocking`` indicates whether calling ``acquire`` should block until 33 | the lock has been acquired or to fail immediately, causing ``acquire`` 34 | to return False and the lock not being acquired. Defaults to True. 35 | Note this value can be overridden by passing a ``blocking`` 36 | argument to ``acquire``. 37 | 38 | ``blocking_timeout`` indicates the maximum amount of time in seconds to 39 | spend trying to acquire the lock. A value of ``None`` indicates 40 | continue trying forever. ``blocking_timeout`` can be specified as a 41 | float or integer, both representing the number of seconds to wait. 42 | 43 | ``thread_local`` indicates whether the lock token is placed in 44 | thread-local storage. By default, the token is placed in thread local 45 | storage so that a thread only sees its token, not a token set by 46 | another thread. Consider the following timeline: 47 | 48 | time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds. 49 | thread-1 sets the token to "abc" 50 | time: 1, thread-2 blocks trying to acquire `my-lock` using the 51 | Lock instance. 52 | time: 5, thread-1 has not yet completed. redis expires the lock 53 | key. 54 | time: 5, thread-2 acquired `my-lock` now that it's available. 55 | thread-2 sets the token to "xyz" 56 | time: 6, thread-1 finishes its work and calls release(). 
if the 57 | token is *not* stored in thread local storage, then 58 | thread-1 would see the token value as "xyz" and would be 59 | able to successfully release the thread-2's lock. 60 | 61 | In some use cases it's necessary to disable thread local storage. For 62 | example, if you have code where one thread acquires a lock and passes 63 | that lock instance to a worker thread to release later. If thread 64 | local storage isn't disabled in this case, the worker thread won't see 65 | the token set by the thread that acquired the lock. Our assumption 66 | is that these cases aren't common and as such default to using 67 | thread local storage. 68 | """ 69 | self.redis = redis 70 | self.name = name 71 | self.timeout = timeout 72 | self.sleep = sleep 73 | self.blocking = blocking 74 | self.blocking_timeout = blocking_timeout 75 | self.thread_local = bool(thread_local) 76 | self.local = threading.local() if self.thread_local else dummy() 77 | self.local.token = None 78 | if self.timeout and self.sleep > self.timeout: 79 | raise LockError("'sleep' must be less than 'timeout'") 80 | 81 | def __enter__(self): 82 | # force blocking, as otherwise the user would have to check whether 83 | # the lock was actually acquired or not. 84 | self.acquire(blocking=True) 85 | return self 86 | 87 | def __exit__(self, exc_type, exc_value, traceback): 88 | self.release() 89 | 90 | def acquire(self, blocking=None, blocking_timeout=None): 91 | """ 92 | Use Redis to hold a shared, distributed lock named ``name``. 93 | Returns True once the lock is acquired. 94 | 95 | If ``blocking`` is False, always return immediately. If the lock 96 | was acquired, return True, otherwise return False. 97 | 98 | ``blocking_timeout`` specifies the maximum number of seconds to 99 | wait trying to acquire the lock. 
100 | """ 101 | sleep = self.sleep 102 | token = b(uuid.uuid1().hex) 103 | if blocking is None: 104 | blocking = self.blocking 105 | if blocking_timeout is None: 106 | blocking_timeout = self.blocking_timeout 107 | stop_trying_at = None 108 | if blocking_timeout is not None: 109 | stop_trying_at = mod_time.time() + blocking_timeout 110 | while 1: 111 | if self.do_acquire(token): 112 | self.local.token = token 113 | return True 114 | if not blocking: 115 | return False 116 | if stop_trying_at is not None and mod_time.time() > stop_trying_at: 117 | return False 118 | mod_time.sleep(sleep) 119 | 120 | def do_acquire(self, token): 121 | if self.redis.setnx(self.name, token): 122 | if self.timeout: 123 | # convert to milliseconds 124 | timeout = int(self.timeout * 1000) 125 | self.redis.pexpire(self.name, timeout) 126 | return True 127 | return False 128 | 129 | def release(self): 130 | "Releases the already acquired lock" 131 | expected_token = self.local.token 132 | if expected_token is None: 133 | raise LockError("Cannot release an unlocked lock") 134 | self.local.token = None 135 | self.do_release(expected_token) 136 | 137 | def do_release(self, expected_token): 138 | name = self.name 139 | 140 | def execute_release(pipe): 141 | lock_value = pipe.get(name) 142 | if lock_value != expected_token: 143 | raise LockError("Cannot release a lock that's no longer owned") 144 | pipe.delete(name) 145 | 146 | self.redis.transaction(execute_release, name) 147 | 148 | def extend(self, additional_time): 149 | """ 150 | Adds more time to an already acquired lock. 151 | 152 | ``additional_time`` can be specified as an integer or a float, both 153 | representing the number of seconds to add. 
154 | """ 155 | if self.local.token is None: 156 | raise LockError("Cannot extend an unlocked lock") 157 | if self.timeout is None: 158 | raise LockError("Cannot extend a lock with no timeout") 159 | return self.do_extend(additional_time) 160 | 161 | def do_extend(self, additional_time): 162 | pipe = self.redis.pipeline() 163 | pipe.watch(self.name) 164 | lock_value = pipe.get(self.name) 165 | if lock_value != self.local.token: 166 | raise LockError("Cannot extend a lock that's no longer owned") 167 | expiration = pipe.pttl(self.name) 168 | if expiration is None or expiration < 0: 169 | # Redis evicted the lock key between the previous get() and now 170 | # we'll handle this when we call pexpire() 171 | expiration = 0 172 | pipe.multi() 173 | pipe.pexpire(self.name, expiration + int(additional_time * 1000)) 174 | 175 | try: 176 | response = pipe.execute() 177 | except WatchError: 178 | # someone else acquired the lock 179 | raise LockError("Cannot extend a lock that's no longer owned") 180 | if not response[0]: 181 | # pexpire returns False if the key doesn't exist 182 | raise LockError("Cannot extend a lock that's no longer owned") 183 | return True 184 | 185 | 186 | class LuaLock(Lock): 187 | """ 188 | A lock implementation that uses Lua scripts rather than pipelines 189 | and watches. 
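The compare-and-delete that the release script below performs atomically on the server can be mirrored in plain Python. ``release_semantics`` and the dict store are illustrative stand-ins, not part of the library:

```python
def release_semantics(store, name, token):
    # Mirrors LUA_RELEASE_SCRIPT: delete the key only if it still holds
    # our token; Redis runs the real script atomically server-side.
    current = store.get(name)
    if current is None or current != token:
        return 0  # lock gone or owned by someone else
    del store[name]
    return 1

store = {'my-lock': 'abc'}
assert release_semantics(store, 'my-lock', 'xyz') == 0  # wrong token: refused
assert release_semantics(store, 'my-lock', 'abc') == 1  # owner releases
assert 'my-lock' not in store
```

The point of pushing this into Lua is that get, compare, and delete happen in one server-side step, so no other client can slip in between them.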
190 | """ 191 | lua_acquire = None 192 | lua_release = None 193 | lua_extend = None 194 | 195 | # KEYS[1] - lock name 196 | # ARGV[1] - token 197 | # ARGV[2] - timeout in milliseconds 198 | # return 1 if lock was acquired, otherwise 0 199 | LUA_ACQUIRE_SCRIPT = """ 200 | if redis.call('setnx', KEYS[1], ARGV[1]) == 1 then 201 | if ARGV[2] ~= '' then 202 | redis.call('pexpire', KEYS[1], ARGV[2]) 203 | end 204 | return 1 205 | end 206 | return 0 207 | """ 208 | 209 | # KEYS[1] - lock name 210 | # ARGS[1] - token 211 | # return 1 if the lock was released, otherwise 0 212 | LUA_RELEASE_SCRIPT = """ 213 | local token = redis.call('get', KEYS[1]) 214 | if not token or token ~= ARGV[1] then 215 | return 0 216 | end 217 | redis.call('del', KEYS[1]) 218 | return 1 219 | """ 220 | 221 | # KEYS[1] - lock name 222 | # ARGS[1] - token 223 | # ARGS[2] - additional milliseconds 224 | # return 1 if the locks time was extended, otherwise 0 225 | LUA_EXTEND_SCRIPT = """ 226 | local token = redis.call('get', KEYS[1]) 227 | if not token or token ~= ARGV[1] then 228 | return 0 229 | end 230 | local expiration = redis.call('pttl', KEYS[1]) 231 | if not expiration then 232 | expiration = 0 233 | end 234 | if expiration < 0 then 235 | return 0 236 | end 237 | redis.call('pexpire', KEYS[1], expiration + ARGV[2]) 238 | return 1 239 | """ 240 | 241 | def __init__(self, *args, **kwargs): 242 | super(LuaLock, self).__init__(*args, **kwargs) 243 | LuaLock.register_scripts(self.redis) 244 | 245 | @classmethod 246 | def register_scripts(cls, redis): 247 | if cls.lua_acquire is None: 248 | cls.lua_acquire = redis.register_script(cls.LUA_ACQUIRE_SCRIPT) 249 | if cls.lua_release is None: 250 | cls.lua_release = redis.register_script(cls.LUA_RELEASE_SCRIPT) 251 | if cls.lua_extend is None: 252 | cls.lua_extend = redis.register_script(cls.LUA_EXTEND_SCRIPT) 253 | 254 | def do_acquire(self, token): 255 | timeout = self.timeout and int(self.timeout * 1000) or '' 256 | return 
bool(self.lua_acquire(keys=[self.name], 257 | args=[token, timeout], 258 | client=self.redis)) 259 | 260 | def do_release(self, expected_token): 261 | if not bool(self.lua_release(keys=[self.name], 262 | args=[expected_token], 263 | client=self.redis)): 264 | raise LockError("Cannot release a lock that's no longer owned") 265 | 266 | def do_extend(self, additional_time): 267 | additional_time = int(additional_time * 1000) 268 | if not bool(self.lua_extend(keys=[self.name], 269 | args=[self.local.token, additional_time], 270 | client=self.redis)): 271 | raise LockError("Cannot extend a lock that's no longer owned") 272 | return True 273 | -------------------------------------------------------------------------------- /lib/redis/sentinel.py: -------------------------------------------------------------------------------- 1 | import os 2 | import random 3 | import weakref 4 | 5 | from client import StrictRedis 6 | from connection import ConnectionPool, Connection 7 | from exceptions import (ConnectionError, ResponseError, ReadOnlyError, 8 | TimeoutError) 9 | from _compat import iteritems, nativestr, xrange 10 | 11 | 12 | class MasterNotFoundError(ConnectionError): 13 | pass 14 | 15 | 16 | class SlaveNotFoundError(ConnectionError): 17 | pass 18 | 19 | 20 | class SentinelManagedConnection(Connection): 21 | def __init__(self, **kwargs): 22 | self.connection_pool = kwargs.pop('connection_pool') 23 | super(SentinelManagedConnection, self).__init__(**kwargs) 24 | 25 | def __repr__(self): 26 | pool = self.connection_pool 27 | s = '%s<service=%s%%s>' % (type(self).__name__, pool.service_name) 28 | if self.host: 29 | host_info = ',host=%s,port=%s' % (self.host, self.port) 30 | s = s % host_info 31 | return s 32 | 33 | def connect_to(self, address): 34 | self.host, self.port = address 35 | super(SentinelManagedConnection, self).connect() 36 | if self.connection_pool.check_connection: 37 | self.send_command('PING') 38 | if nativestr(self.read_response()) != 'PONG': 39 | raise
ConnectionError('PING failed') 40 | 41 | def connect(self): 42 | if self._sock: 43 | return # already connected 44 | if self.connection_pool.is_master: 45 | self.connect_to(self.connection_pool.get_master_address()) 46 | else: 47 | for slave in self.connection_pool.rotate_slaves(): 48 | try: 49 | return self.connect_to(slave) 50 | except ConnectionError: 51 | continue 52 | raise SlaveNotFoundError # should never be reached 53 | 54 | def read_response(self): 55 | try: 56 | return super(SentinelManagedConnection, self).read_response() 57 | except ReadOnlyError: 58 | if self.connection_pool.is_master: 59 | # When talking to a master, a ReadOnlyError likely 60 | # indicates that the previous master that we're still connected 61 | # to has been demoted to a slave and there's a new master. 62 | # calling disconnect will force the connection to re-query 63 | # sentinel during the next connect() attempt. 64 | self.disconnect() 65 | raise ConnectionError('The previous master is now a slave') 66 | raise 67 | 68 | 69 | class SentinelConnectionPool(ConnectionPool): 70 | """ 71 | Sentinel backed connection pool. 72 | 73 | If the ``check_connection`` flag is set to True, SentinelManagedConnection 74 | sends a PING command right after establishing the connection.
75 | """ 76 | 77 | def __init__(self, service_name, sentinel_manager, **kwargs): 78 | kwargs['connection_class'] = kwargs.get( 79 | 'connection_class', SentinelManagedConnection) 80 | self.is_master = kwargs.pop('is_master', True) 81 | self.check_connection = kwargs.pop('check_connection', False) 82 | super(SentinelConnectionPool, self).__init__(**kwargs) 83 | self.connection_kwargs['connection_pool'] = weakref.proxy(self) 84 | self.service_name = service_name 85 | self.sentinel_manager = sentinel_manager 86 | 87 | def __repr__(self): 88 | return "%s<service=%s(%s)>" % ( 89 | type(self).__name__, 90 | self.service_name, 91 | self.is_master and 'master' or 'slave', 92 | ) 93 | 94 | def reset(self): 95 | super(SentinelConnectionPool, self).reset() 96 | self.master_address = None 97 | self.slave_rr_counter = None 98 | 99 | def get_master_address(self): 100 | master_address = self.sentinel_manager.discover_master( 101 | self.service_name) 102 | if self.is_master: 103 | if self.master_address is None: 104 | self.master_address = master_address 105 | elif master_address != self.master_address: 106 | # Master address changed, disconnect all clients in this pool 107 | self.disconnect() 108 | return master_address 109 | 110 | def rotate_slaves(self): 111 | "Round-robin slave balancer" 112 | slaves = self.sentinel_manager.discover_slaves(self.service_name) 113 | if slaves: 114 | if self.slave_rr_counter is None: 115 | self.slave_rr_counter = random.randint(0, len(slaves) - 1) 116 | for _ in xrange(len(slaves)): 117 | self.slave_rr_counter = ( 118 | self.slave_rr_counter + 1) % len(slaves) 119 | slave = slaves[self.slave_rr_counter] 120 | yield slave 121 | # Fallback to the master connection 122 | try: 123 | yield self.get_master_address() 124 | except MasterNotFoundError: 125 | pass 126 | raise SlaveNotFoundError('No slave found for %r' % (self.service_name)) 127 | 128 | 129 | class Sentinel(object): 130 | """ 131 | Redis Sentinel cluster client 132 | 144 | >>> from redis.sentinel import Sentinel 145 | >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1) 146 | >>> master = sentinel.master_for('mymaster', socket_timeout=0.1) 147 | >>> master.set('foo', 'bar') 148 | >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1) 149 | >>> slave.get('foo') 150 | 'bar' 151 | 152 | ``sentinels`` is a list of sentinel nodes. Each node is represented by 153 | a pair (hostname, port). 154 | 155 | ``min_other_sentinels`` defines a minimum number of peers for a sentinel. 156 | When querying a sentinel, if it doesn't meet this threshold, responses 157 | from that sentinel won't be considered valid. 158 | 159 | ``sentinel_kwargs`` is a dictionary of connection arguments used when 160 | connecting to sentinel instances. Any argument that can be passed to 161 | a normal Redis connection can be specified here. If ``sentinel_kwargs`` is 162 | not specified, any socket_timeout and socket_keepalive options specified 163 | in ``connection_kwargs`` will be used. 164 | 165 | ``connection_kwargs`` are keyword arguments that will be used when 166 | establishing a connection to a Redis server.
167 | """ 168 | 169 | def __init__(self, sentinels, min_other_sentinels=0, sentinel_kwargs=None, 170 | **connection_kwargs): 171 | # if sentinel_kwargs isn't defined, use the socket_* options from 172 | # connection_kwargs 173 | if sentinel_kwargs is None: 174 | sentinel_kwargs = dict([(k, v) 175 | for k, v in iteritems(connection_kwargs) 176 | if k.startswith('socket_') 177 | ]) 178 | self.sentinel_kwargs = sentinel_kwargs 179 | 180 | self.sentinels = [StrictRedis(hostname, port, **self.sentinel_kwargs) 181 | for hostname, port in sentinels] 182 | self.min_other_sentinels = min_other_sentinels 183 | self.connection_kwargs = connection_kwargs 184 | 185 | def __repr__(self): 186 | sentinel_addresses = [] 187 | for sentinel in self.sentinels: 188 | sentinel_addresses.append('%s:%s' % ( 189 | sentinel.connection_pool.connection_kwargs['host'], 190 | sentinel.connection_pool.connection_kwargs['port'], 191 | )) 192 | return '%s<sentinels=[%s]>' % ( 193 | type(self).__name__, 194 | ','.join(sentinel_addresses)) 195 | 196 | def check_master_state(self, state, service_name): 197 | if not state['is_master'] or state['is_sdown'] or state['is_odown']: 198 | return False 199 | # Check if our sentinel doesn't see other nodes 200 | if state['num-other-sentinels'] < self.min_other_sentinels: 201 | return False 202 | return True 203 | 204 | def discover_master(self, service_name): 205 | """ 206 | Asks sentinel servers for the Redis master's address corresponding 207 | to the service labeled ``service_name``. 208 | 209 | Returns a pair (address, port) or raises MasterNotFoundError if no 210 | master is found.
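The three health predicates in ``check_master_state`` above can be exercised standalone. The function below is a free-standing copy for illustration, and the ``state`` dicts are hypothetical sentinel responses:

```python
def check_master_state(state, min_other_sentinels=0):
    # Standalone version of the method above: a master is usable only if
    # it is flagged as master, is not subjectively/objectively down, and
    # enough peer sentinels can see it.
    if not state['is_master'] or state['is_sdown'] or state['is_odown']:
        return False
    return state['num-other-sentinels'] >= min_other_sentinels

healthy = {'is_master': True, 'is_sdown': False, 'is_odown': False,
           'num-other-sentinels': 2}
assert check_master_state(healthy)
assert not check_master_state(dict(healthy, is_sdown=True))
assert not check_master_state(healthy, min_other_sentinels=3)
```

The ``min_other_sentinels`` threshold is what guards against trusting a sentinel that has been partitioned away from its peers.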
211 | """ 212 | for sentinel_no, sentinel in enumerate(self.sentinels): 213 | try: 214 | masters = sentinel.sentinel_masters() 215 | except (ConnectionError, TimeoutError): 216 | continue 217 | state = masters.get(service_name) 218 | if state and self.check_master_state(state, service_name): 219 | # Put this sentinel at the top of the list 220 | self.sentinels[0], self.sentinels[sentinel_no] = ( 221 | sentinel, self.sentinels[0]) 222 | return state['ip'], state['port'] 223 | raise MasterNotFoundError("No master found for %r" % (service_name,)) 224 | 225 | def filter_slaves(self, slaves): 226 | "Remove slaves that are in an ODOWN or SDOWN state" 227 | slaves_alive = [] 228 | for slave in slaves: 229 | if slave['is_odown'] or slave['is_sdown']: 230 | continue 231 | slaves_alive.append((slave['ip'], slave['port'])) 232 | return slaves_alive 233 | 234 | def discover_slaves(self, service_name): 235 | "Returns a list of alive slaves for service ``service_name``" 236 | for sentinel in self.sentinels: 237 | try: 238 | slaves = sentinel.sentinel_slaves(service_name) 239 | except (ConnectionError, ResponseError, TimeoutError): 240 | continue 241 | slaves = self.filter_slaves(slaves) 242 | if slaves: 243 | return slaves 244 | return [] 245 | 246 | def master_for(self, service_name, redis_class=StrictRedis, 247 | connection_pool_class=SentinelConnectionPool, **kwargs): 248 | """ 249 | Returns a redis client instance for the ``service_name`` master. 250 | 251 | A SentinelConnectionPool class is used to retrieve the master's 252 | address before establishing a new connection. 253 | 254 | NOTE: If the master's address has changed, any cached connections to 255 | the old master are closed. 256 | 257 | By default clients will be a redis.StrictRedis instance. Specify a 258 | different class to the ``redis_class`` argument if you desire 259 | something different. 260 | 261 | The ``connection_pool_class`` specifies the connection pool to use.
262 | The SentinelConnectionPool will be used by default. 263 | 264 | All other keyword arguments are merged with any connection_kwargs 265 | passed to this class and passed to the connection pool as keyword 266 | arguments to be used to initialize Redis connections. 267 | """ 268 | kwargs['is_master'] = True 269 | connection_kwargs = dict(self.connection_kwargs) 270 | connection_kwargs.update(kwargs) 271 | return redis_class(connection_pool=connection_pool_class( 272 | service_name, self, **connection_kwargs)) 273 | 274 | def slave_for(self, service_name, redis_class=StrictRedis, 275 | connection_pool_class=SentinelConnectionPool, **kwargs): 276 | """ 277 | Returns a redis client instance for the ``service_name`` slave(s). 278 | 279 | A SentinelConnectionPool class is used to retrieve the slave's 280 | address before establishing a new connection. 281 | 282 | By default clients will be a redis.StrictRedis instance. Specify a 283 | different class to the ``redis_class`` argument if you desire 284 | something different. 285 | 286 | The ``connection_pool_class`` specifies the connection pool to use. 287 | The SentinelConnectionPool will be used by default. 288 | 289 | All other keyword arguments are merged with any connection_kwargs 290 | passed to this class and passed to the connection pool as keyword 291 | arguments to be used to initialize Redis connections.
292 | """ 293 | kwargs['is_master'] = False 294 | connection_kwargs = dict(self.connection_kwargs) 295 | connection_kwargs.update(kwargs) 296 | return redis_class(connection_pool=connection_pool_class( 297 | service_name, self, **connection_kwargs)) 298 | -------------------------------------------------------------------------------- /lib/redis/utils.py: -------------------------------------------------------------------------------- 1 | from contextlib import contextmanager 2 | 3 | 4 | try: 5 | import hiredis 6 | HIREDIS_AVAILABLE = True 7 | except ImportError: 8 | HIREDIS_AVAILABLE = False 9 | 10 | 11 | def from_url(url, db=None, **kwargs): 12 | """ 13 | Returns an active Redis client generated from the given database URL. 14 | 15 | Will attempt to extract the database id from the path url fragment, if 16 | none is provided. 17 | """ 18 | from redis.client import Redis 19 | return Redis.from_url(url, db, **kwargs) 20 | 21 | 22 | @contextmanager 23 | def pipeline(redis_obj): 24 | p = redis_obj.pipeline() 25 | yield p 26 | p.execute() 27 | 28 | 29 | class dummy(object): 30 | """ 31 | Instances of this class can be used as an attribute container. 
32 |     """
33 |     pass
34 | 
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
 1 | #!/usr/bin/env python
 2 | #-*- coding:utf-8 -*-
 3 | # Email: Mosuansb@gmail.com
 4 | # apikey = "mosuans34bt6est-dksavbxckd-fasdasfafa-sadhrkkias-fiuluilasf"
 5 | 
 6 | import os
 7 | import re
 8 | import sys
 9 | import time
10 | import random
11 | 
12 | from lib.log import Log
13 | from lib.mail import Mail
14 | from lib.daemon import Daemon
15 | from lib.RedisAdd import Redis
16 | 
17 | class Auditd(object):
18 | 
19 |     def __init__(self):
20 |         self._command_reg = "=[\"](.*?)[\"]"
21 |         self._command_int_reg = "=[\"]?(.*)"  # greedy: a trailing lazy (.*?) would always capture an empty string
22 |         self._log_split_reg = "time->((.*[\\n]){8})"
23 |         # command whitelist regular expressions
24 |         self.command_white = [
25 |             "^(sudo ausearch) -",
26 |             "^grep [a-zA-Z1-9]{1,20}",
27 |             "^ifconfig -a",
28 |             "^sh -c\s+$",
29 |         ]
30 | 
31 |         self.time_list_log = "/tmp/auditd_time_list.log"
32 |         self.ip = str(self._ip()).replace("\n", " ")
33 | 
34 |     def _time(self, msg):
35 |         """
36 |         Extract the epoch timestamp from an audit record, e.g. audit(1479718120.123:456)
37 |         """
38 |         return msg.split("(")[1].split(")")[0].split(".")[0]
39 | 
40 |     def _command(self, msg):
41 |         command = ""
42 |         for num, item in enumerate(msg.split(" ")):
43 |             if num > 2:
44 |                 # quoted string argument or bare (numeric) value?
 45 |                 if len(item.split('"')) > 1:
 46 |                     command += re.findall(self._command_reg, item)[0] + " "
 47 |                 else:
 48 |                     command += re.findall(self._command_int_reg, item)[0] + " "
 49 |         return str(command.replace("\\", "\\\\\\\\"))
 50 | 
 51 |     def _ip(self):
 52 |         return os.popen("ifconfig -a|grep inet|grep -v 127.0.0.1|grep -v inet6 | awk '{print $2}' | tr -d 'addr:'").read()
 53 | 
 54 |     def _file(self, filename, msg='', mode='read'):
 55 |         '''
 56 |         Read ``filename``, or overwrite it with ``msg`` (mode: read/write)
 57 |         '''
 58 |         try:
 59 |             if mode == 'read':
 60 |                 if not os.path.exists(filename):
 61 |                     objs = open(filename, "w+")
 62 |                     objs.close()
 63 | 
 64 |                 file_obj = open(filename, "r")
 65 |                 content = file_obj.read()
 66 |                 file_obj.close()
 67 |                 return content
 68 |             elif mode == 'write':
 69 |                 file_obj = open(filename, "w")
 70 |                 content = file_obj.write(msg)
 71 |                 file_obj.close()
 72 |                 return content
 73 | 
 74 |         except Exception as e:
 75 |             Log().debug("main.py _file error: %s" % (str(e)))
 76 | 
 77 | 
 78 |     def _user(self, uid):
 79 |         try:
 80 |             result = os.popen("getent passwd {}".format(uid)).read()
 81 |             return result.split(":")[0]
 82 |         except Exception:
 83 |             pass
 84 | 
 85 |     def _data(self, cmd, status=1):
 86 |         """
 87 |         Parse the auditd log, push new records to Redis, optionally mail them
 88 |         """
 89 |         times = (self._file(self.time_list_log).replace("\n", "")).split(",")
 90 |         _time_list = []
 91 |         result = ""
 92 |         content = "[auditdPy]: \n"
 93 |         content_html = "[auditdPy]: <br>"
 94 |         # catch and log any error
 95 |         try:
 96 |             result = os.popen(cmd).read()
 97 | 
 98 |             log_list = re.findall(self._log_split_reg, result)
 99 | 
100 |             for item in log_list:
101 |                 if not item[0]:
102 |                     continue
103 |                 # userid
104 |                 uid = re.findall(" uid=(.*?) ", item[0])
105 |                 if len(uid) > 0: uid = uid[0]
106 |                 user = self._user(uid)
107 |                 # command
108 |                 execve = re.findall("type=EXECVE(.*?)\\n", item[0])
109 |                 if len(execve) > 0:
110 |                     execve = execve[0]
111 |                     _time = int(self._time(execve))
112 |                     # skip records whose timestamp was already processed
113 |                     if not str(_time) in times:
114 |                         _command_str = self._command(execve)
115 |                         # whitelist hit flag
116 |                         _is = False
117 |                         # is the command whitelisted?
118 |                         for _white in self.command_white:
119 |                             if len(re.findall(_white, _command_str)) > 0:
120 |                                 _is = True
121 |                         if not _is:
122 |                             _time_list.append(_time)
123 |                             _date = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(_time))
124 |                             content += "user:{} ip:{} time:{} command:{} \n".format(user, self.ip, _date, _command_str)
125 |                             content_html += "user:{}&nbsp; ip:{}&nbsp; time:{}&nbsp; command:{} <br>".format(user, self.ip, _date, _command_str)
126 |                             # add redis msg
127 |                             redis_msg = "{\"level\":\"info\", \"command\": \"%s\", \"time\": \"%s\", \"user\": \"%s\", \"ip\": \"%s\"}" % (_command_str, _date, user, self.ip)
128 |                             Redis().push_msg(str(redis_msg))
129 |             # send mail
130 |             if status:
131 |                 Mail().sendmail(content_html)
132 | 
133 |             # remember the timestamps that were just processed
134 |             times.extend(list(set(_time_list)))
135 |             times_str = ','.join(str(v) for v in times)
136 |             # persist them so records are not reported twice
137 |             self._file(self.time_list_log, times_str, 'write')
138 | 
139 |             # log
140 |             if len(_time_list) > 0:
141 |                 print(content)
142 |             else:
143 |                 print("[not log]")
144 |         except Exception as e:
145 |             Log().debug("main.py _data error: %s" % (str(e)))
146 | 
147 |     def main(self):
148 |         # send mail?
149 |         mail_status = False
150 |         cmd = "sudo ausearch -ts today -k security_audit"
151 |         self._data(cmd, mail_status)
152 |         #time.sleep(10)
153 | 
154 | class Auditd_daemon(Daemon):
155 |     def run(self):
156 |         while True:
157 |             #file("/tmp/111.txt","w+").write(str(random.random()))
158 |             Auditd().main()
159 |             time.sleep(10)
160 | 
161 | 
162 | if __name__ == '__main__':
163 |     daemon = Auditd_daemon("/tmp/auditd_daemon.pid")
164 |     if len(sys.argv) >= 2:
165 |         if 'start' == sys.argv[1]:
166 |             daemon.start()
167 |         elif 'stop' == sys.argv[1]:
168 |             daemon.stop()
169 |         else:
170 |             print("Unknown command")
171 |             sys.exit(2)
172 |         sys.exit(0)
173 |     else:
174 |         print("usage: %s start|stop" % sys.argv[0])
175 |         sys.exit(2)
176 | 
--------------------------------------------------------------------------------