├── README.md
├── pybackup.conf
├── pybackup.png
├── pybackup.py
└── unfinished
└── pybackup.py
/README.md:
--------------------------------------------------------------------------------
1 | [-blue.svg?style=flat)](http://fuxkdb.com/)
2 |
3 | # pybackup使用文档
4 | pybackup源自于对线上备份脚本的改进和对备份情况的监控需求.
5 |
6 | 原本生产库的备份是通过shell脚本调用mydumper,之后再将备份通过rsync传输到备份机.
7 |
8 | 想要获取备份状态,时间,rsync传输时间等信息只能通过解析日志.
9 |
10 | pybackup由python编写,调用mydumper和rsync,将备份信息存入数据库中,后期可以通过grafana图形化展示和监控备份
11 |
12 | 目前不支持2.6,仅在2.7.14做过测试
13 | ## 参数说明
14 | 帮助信息
15 | ```
16 | Usage:
17 | pybackup.py mydumper ARG_WITH_NO_--... (([--no-rsync] [--no-history]) | [--only-backup])
18 | pybackup.py only-rsync [--backup-dir=
] [--bk-id=] [--log-file=]
19 | pybackup.py mark-del --backup-dir=
20 | pybackup.py validate-backup --log-file=
21 | pybackup.py -h | --help
22 | pybackup.py --version
23 |
24 | Options:
25 | -h --help Show help information.
26 | --version Show version.
27 | --no-rsync Do not use rsync.
28 | --no-history Do not record backup history information.
29 | --only-backup Equal to use both --no-rsync and --no-history.
30 | --only-rsync When you backup complete, but rsync failed, use this option to rsync your backup.
31 | --backup-dir= The directory where the backuped files are located. [default: ./]
32 | --bk-id= bk-id in table user_backup.
33 | --log-file= log file [default: ./pybackup_default.log]
34 |
35 | more help information in:
36 | https://github.com/Fanduzi
37 | ```
38 |
39 | ### pybackup.py mydumper
40 | ```
41 | pybackup.py mydumper ARG_WITH_NO_--... (([--no-rsync] [--no-history]) | [--only-backup])
42 | ```
43 | 除了最后三个参数,使用的所有参数和mydumper -h中列出的参数相同. 只不过目前只支持长选项,并且不带'--'
44 |
45 | 例:
46 | ```
47 | ./pybackup.py mydumper password=fanboshi database=fandb outputdir=/data4/recover/pybackup/2017-11-12 logfile=/data4/recover/pybackup/bak.log verbose=3
48 | ```
49 | 可以使用`./pybackup.py mydumper help`查看mydumper帮助信息
50 |
51 | --no-rsync
52 |
53 | 不使用rsync传输
54 |
55 | --no-history
56 |
57 | 不记录备份信息到数据库
58 |
59 | --only-backup
60 |
61 | 等价于同时使用--no-rsync和--no-history . 不能与--no-rsync或--no-history同时使用
62 |
63 | ```
64 | pybackup.py only-rsync [--backup-dir=] [--bk-id=] [--log-file=]
65 | ```
66 | 当备份成功rsync失败时可以使用only-rsync来同步备份成功的文件
67 |
68 | --backup-dir
69 |
70 | 需要使用rsync同步的备份文件路径,如果不指定,则默认为./
71 |
72 | --bk-id
73 |
74 | user_backup表中记录的备份bk_id,如果指定,则会在rsync同步完成后更新指定bk_id行的,传输起始时间,耗时,是否成功等信息.如果不指定则不更新user_backup表
75 |
76 | --log-file
77 |
78 | 本次rsync指定的日志,如果不指定,则默认为当前目录rsync.log文件
79 |
80 | #### 配置文件说明
81 | 配置文件为pbackup.conf
82 | ```
83 | [root@localhost pybackup]# less pybackup.conf
84 | [CATALOG] --存储备份信息的数据库配置
85 | db_host=localhost
86 | db_port=3306
87 | db_user=root
88 | db_passwd=fanboshi
89 | db_use=catalogdb
90 |
91 | [TDB] --需要备份的数据库配置
92 | db_host=localhost
93 | db_port=3306
94 | db_user=root
95 | db_passwd=fanboshi
96 | db_use=information_schema
97 | db_consistency=True --0.7.0新增option,可以不写,不写则为False,后面会对这个option进行说明
98 | db_list=test,fandb,union_log_ad_% --指定需要备份的数据库,可以使用mysql支持的通配符. 如果想要备份所有数据库则填写%
99 |
100 | [rsync]
101 | password_file=/data4/recover/pybackup/rsync.sec --等同于--password-file
102 | dest=platform@182.92.83.238/db_backup/106.3.130.84 --传输到哪个目录
103 | address= --使用的网卡.可以为空不指定
104 | ```
105 | 注意
106 | ```
107 | [TDB]
108 | db_list=fan,bo,shi 代表备份fan,bo,shi三个数据库
109 | db_list=!fan,!bo,!shi 代表不备份fan,bo,shi三个数据库
110 | db_list=% 代表备份所有数据库
111 | db_list=!fan,bo 不支持
112 | ```
113 | 还有一点需要注意,即便在配置文件中定义了db_list参数,也可以在命令行强制指定database=xx / regex / tables-list,例如
114 | ```
115 | pybackup.py mydumper password="xx" user=root socket=/data/mysql/mysql.sock outputdir=/data/backup_db/ verbose=3 compress threads=8 triggers events routines use-savepoints logfile=/data/backup_db/pybackup.log database=yourdb
116 | ```
117 | 此时会只备份`yourdb`而忽略配置文件中的定义
118 |
119 | 备份信息示例
120 | ```
121 | *************************** 4. row ***************************
122 | id: 4
123 | bk_id: bcd36dc6-c9e7-11e7-9e30-005056b15d9c
124 | bk_server: 106.3.130.84
125 | start_time: 2017-11-15 17:31:20
126 | end_time: 2017-11-15 17:32:07
127 | elapsed_time: 47
128 | backuped_db: fandb,test,union_log_ad_201710_db,union_log_ad_201711_db
129 | is_complete: Y,Y,Y,Y
130 | bk_size: 480M
131 | bk_dir: /data4/recover/pybackup/2017-11-15
132 | transfer_start: 2017-11-15 17:32:07
133 | transfer_end: 2017-11-15 17:33:36
134 | transfer_elapsed: 89
135 | transfer_complete: Y
136 | remote_dest: platform@182.92.83.238/db_backup/106.3.130.84/
137 | master_status: mysql-bin.000036,61286,
138 | slave_status: Not a slave
139 | tool_version: mydumper 0.9.2, built against MySQL 5.5.53
140 | server_version: 5.7.18-log
141 | bk_command: mydumper --password=supersecrect --outputdir=/data4/recover/pybackup/2017-11-15 --verbose=3 --compress --triggers --events --routines --use-savepoints database=fandb,test,union_log_ad_201710_db,union_log_ad_201711_db
142 | ```
143 | #### 建库建表语句
144 | ```
145 | create database catalogdb;
146 | root@localhost 11:09: [catalogdb]> show create table user_backup\G
147 | *************************** 1. row ***************************
148 | Table: user_backup
149 | Create Table: CREATE TABLE `user_backup` (
150 | `id` int(11) NOT NULL AUTO_INCREMENT,
151 | `bk_id` varchar(36) NOT NULL,
152 | `bk_server` varchar(15) NOT NULL,
153 | `start_time` datetime NOT NULL,
154 | `end_time` datetime NOT NULL,
155 | `elapsed_time` int(11) NOT NULL,
156 | `backuped_db` varchar(2048) DEFAULT NULL,
157 | `is_complete` varchar(200) DEFAULT NULL,
158 | `bk_size` varchar(10) NOT NULL,
159 | `bk_dir` varchar(200) NOT NULL,
160 | `transfer_start` datetime DEFAULT NULL,
161 | `transfer_end` datetime DEFAULT NULL,
162 | `transfer_elapsed` int(11) DEFAULT NULL,
163 | `transfer_complete` varchar(20) NOT NULL,
164 | `remote_dest` varchar(200) NOT NULL,
165 | `master_status` varchar(200) NOT NULL,
166 | `slave_status` varchar(200) NOT NULL,
167 | `tool_version` varchar(200) NOT NULL,
168 | `server_version` varchar(200) NOT NULL,
169 | `pybackup_version` varchar(200) DEFAULT NULL,
170 | `bk_command` varchar(2048) DEFAULT NULL,
171 | `tag` varchar(200) NOT NULL DEFAULT 'N/A',
172 | `is_deleted` char(1) NOT NULL DEFAULT 'N',
173 | `validate_status` varchar(20) NOT NULL DEFAULT 'N/A',
174 | PRIMARY KEY (`id`),
175 | UNIQUE KEY `bk_id` (`bk_id`),
176 | KEY `idx_start_time` (`start_time`),
177 | KEY `idx_transfer_start` (`transfer_start`)
178 | ) ENGINE=InnoDB AUTO_INCREMENT=1481 DEFAULT CHARSET=utf8
179 | 1 row in set (0.00 sec)
180 | ```
181 |
182 | #### 关于db_consistency
183 | ```
184 | ./pybackup.py mydumper password=fanboshi database=fandb outputdir=/data4/recover/pybackup/2017-11-12 logfile=/data4/recover/pybackup/bak.log verbose=3
185 | ```
186 | 以上面命令为例,默认脚本逻辑对于db_list指定的库通过for循环逐一使用mydumper --database=xx 备份
187 | 如果指定了db_consistency=True则会替换为使用 --regex备份db_list中指定的所有数据库, 保证了数据库之间的一致性
188 | ```
189 | 备份完成后会将每个备份单独存放到对应的库名文件夹,并且每个文件夹都有一份相同的metedata
190 | # pwd
191 | /data/backup_db/2018-03-29/9a84bd44-32c2-11e8-a5ec-00163f00254b
192 | # ls
193 | dbe8je6i4c3gjd50 metadata mysql www_xz8_db
194 | # diff dbe8je6i4c3gjd50/metadata metadata
195 | # diff mysql/metadata metadata
196 | # diff www_xz8_db/metadata metadata
197 | ```
198 | #### 备份脚本示例
199 | ```
200 | #!/bin/sh
201 | DSERVENDAY=`date +%Y-%m-%d --date='2 day ago'`
202 | DTODAY=`date +%Y-%m-%d`
203 |
204 | cd /data/backup_db/
205 | rm -rf $DTODAY
206 | rm -rf $DSERVENDAY --不再建议这样删除备份了,建议使用 rm成功后使用 pybackup.py mark-del --bk-id将对应备份信息更新为已删除
207 | mkdir $DTODAY
208 | source ~/.bash_profile
209 | python /data/backup_db/pybackup.py mydumper password="papapa" user=root socket=/data/mysql/mysql.sock outputdir=/data/backup_db/$DTODAY verbose=3 compress threads=8 triggers events routines use-savepoints logfile=/data/backup_db/pybackup.log
210 | ```
211 |
212 | crontab
213 | ```
214 | 0 4 * * * /data/backup_db/pybackup.sh>> /data/backup_db/pybackup_sh.log 2>&1
215 | ```
216 |
217 | logroatate脚本
218 | ```
219 | /data/backup_db/pybackup.log {
220 | daily
221 | rotate 7
222 | missingok
223 | compress
224 | delaycompress
225 | copytruncate
226 | }
227 |
228 | /data/backup_db/pybackup_sh.log {
229 | daily
230 | rotate 7
231 | missingok
232 | compress
233 | delaycompress
234 | copytruncate
235 | }
236 | ```
237 |
238 | ### pybackup.py only-rsync
239 | 当备份传输失败时,此命令用于手工传输备份,指定bk-id会更新user_backup表
240 | ```
241 | nohup python pybackup.py only-rsync --backup-dir=/data/backup_db/2017-12-15 --bk-id=94b64244-e13b-11e7-9eaa-00163f00254b --log-file=/data/scripts/log/rsync.log &
242 | ```
243 |
244 | ### pybackup.py mark-del
245 | 建议使用pybackup备份的备份集先使用此命令更新user_backup.is_deleted列后再物理删除
246 | ```
247 | obsolete_dir2=`find /data2/backup/db_backup/101.47.124.133 -name "201*" ! -name "*-01" -type d -mtime +31`
248 | if [ "x$obsolete_dir2" == 'x' ];then
249 | echo "no obsolete in XXXDB"
250 | else
251 | python /data/scripts/bin/pybackup.py mark-del --backup-dir=$obsolete_dir2
252 | find /data2/backup/db_backup/101.47.124.133 -name "2017*" ! -name "*-01" -type d -mtime +31 -exec rm -r {} \;
253 | fi
254 | ```
255 |
256 | 逻辑是通过找到指定目录下的目录名(目录名就是bk_id),根据此目录名作为bk_id更新user_backup.is_deleted列
257 | ```
258 | [root@localhost 2017-12-14]# tree
259 | .
260 | └── 0883fd06-e033-11e7-88ad-00163e0e2396
261 | └── day_hour
262 | ├── dbe8je6i4c3gjd50.day-schema.sql.gz
263 | ├── dbe8je6i4c3gjd50.day.sql.gz
264 | ├── dbe8je6i4c3gjd50.hour-schema.sql.gz
265 | ├── dbe8je6i4c3gjd50.hour.sql.gz
266 | └── metadata
267 | ```
268 |
269 | ### pybackup.py validate-backup
270 | 用于测试备份的可恢复性, 通过查询catalogdb库获取未进行恢复测试且未删除的备份进行恢复(可能是鸡肋的功能,有人说逻辑备份只要成功了不就都能恢复么,无言以对,貌似失败也就只有字符集不对的情况?)
271 | >2018.04.27
272 | >想了下可能应该恢复完成后, 首先去做源库的从库, 同步之后看有没有报错, 再完善点, 则是要做pt-table-checksum校验, 最近没时间, 后面再做吧
273 |
274 | 需要手工填写user_backup_path表
275 | ```
276 | root@localhost 23:12: [catalogdb]> select * from user_backup_path;
277 | +----+----------------+----------------+-----------------------------------------+--------------------------------+
278 | | id | bk_server | remote_server | real_path | tag |
279 | +----+----------------+----------------+-----------------------------------------+--------------------------------+
280 | | 1 | 120.27.138.23 | 106.3.10.8 | /data1/backup/db_backup/120.27.138.23/ | 国内平台从1 |
281 | | 2 | 101.37.174.13 | 106.3.10.9 | /data2/backup/db_backup/101.37.174.13/ | 国内平台主2 |
282 | +----+----------------+----------------+-----------------------------------------+--------------------------------+
283 | ```
284 | 上例中表示
285 |
286 | 120.27.138.23(国内平台从1)的备份 存放在 106.3.10.8 的 `/data1/backup/db_backup/120.27.138.23/`中
287 | 101.37.174.13(国内平台主2)的备份 存放在 106.3.10.9 的 `/data2/backup/db_backup/101.37.174.13/`中
288 |
289 |
290 | 建议在存放备份的机器安装好MySQL然后通过定时任务不停地查询catalogdb获取需要进行恢复测试的备份集进行恢复,恢复成功后会更新user_backup.validate_status列, 并会在user_recover_info插入记录
291 | ```
292 | #备份可恢复性测试
293 | */15 * * * * /data/scripts/bin/validate_backup.sh >> /data/scripts/log/validate_backup.log 2>&1
294 | [root@localhost 2017-12-14]# less /data/scripts/bin/validate_backup.sh
295 | #!/bin/bash
296 | source ~/.bash_profile
297 | num_validate=`ps -ef | grep pybackup | grep -v grep | grep validate|wc -l`
298 | if [ "$num_validate" == 0 ];then
299 | python /data/scripts/bin/pybackup.py validate-backup --log-file=/data/scripts/log/validate.log
300 | fi
301 | ```
302 |
303 | 建表语句
304 | ```
305 | root@localhost 11:08: [catalogdb]> show create table user_backup_path\G
306 | *************************** 1. row ***************************
307 | Table: user_backup_path
308 | Create Table: CREATE TABLE `user_backup_path` (
309 | `id` int(11) NOT NULL AUTO_INCREMENT,
310 | `bk_server` varchar(15) NOT NULL,
311 | `remote_server` varchar(15) NOT NULL,
312 | `real_path` varchar(200) NOT NULL,
313 | `tag` varchar(200) NOT NULL,
314 | `is_offline` char(1) NOT NULL DEFAULT 'N',
315 | PRIMARY KEY (`id`)
316 | ) ENGINE=InnoDB AUTO_INCREMENT=21 DEFAULT CHARSET=utf8
317 | 1 row in set (0.00 sec)
318 |
319 | root@localhost 11:09: [catalogdb]> show create table user_recover_info\G
320 | *************************** 1. row ***************************
321 | Table: user_recover_info
322 | Create Table: CREATE TABLE `user_recover_info` (
323 | `id` int(11) NOT NULL AUTO_INCREMENT,
324 | `bk_id` varchar(36) NOT NULL,
325 | `tag` varchar(200) NOT NULL DEFAULT 'N/A',
326 | `backup_path` varchar(2000) NOT NULL,
327 | `db` varchar(200) NOT NULL,
328 | `start_time` datetime NOT NULL,
329 | `end_time` datetime NOT NULL,
330 | `elapsed_time` int(11) NOT NULL,
331 | `recover_status` varchar(20) DEFAULT NULL,
332 | `validate_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
333 | PRIMARY KEY (`id`)
334 | ) ENGINE=InnoDB AUTO_INCREMENT=7347 DEFAULT CHARSET=utf8
335 | 1 row in set (0.00 sec)
336 | ```
337 |
338 | ## 逻辑图(点击看大图)
339 | 
340 |
341 |
--------------------------------------------------------------------------------
/pybackup.conf:
--------------------------------------------------------------------------------
1 | [pybackup]
2 | tag=某某业务数据库
3 |
4 | [CATALOG] --存储备份信息的数据库配置
5 | db_host=localhost
6 | db_port=3306
7 | db_user=root
8 | db_passwd=fanboshi
9 | db_use=catalogdb
10 |
11 | [TDB] --需要备份的数据库配置
12 | db_host=localhost
13 | db_port=3306
14 | db_user=root
15 | db_passwd=fanboshi
16 | db_use=information_schema
17 | db_list=test,fandb,union_log_ad_% --指定需要备份的数据库,可以使用mysql支持的通配符. 如果想要备份所有数据库则填写%
18 |
19 | [rsync]
20 | password_file=/data4/recover/pybackup/rsync.sec --等同于--password-file
21 | dest=platform@182.92.83.238/db_backup/106.3.130.84 --传输到哪个目录
22 | address= --使用的网卡.可以为空不指定
23 |
--------------------------------------------------------------------------------
/pybackup.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Fanduzi/pybackup/31d7cf855bedc3c1241d1251ca50ba2d80e34be2/pybackup.png
--------------------------------------------------------------------------------
/pybackup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf8 -*-
3 | """
4 | Usage:
5 | pybackup.py mydumper ARG_WITH_NO_--... (([--no-rsync] [--no-history]) | [--only-backup])
6 | pybackup.py only-rsync [--backup-dir=] [--bk-id=] [--log-file=]
7 | pybackup.py mark-del --backup-dir=
8 | pybackup.py validate-backup --log-file= [--bk_id=]
9 | pybackup.py -h | --help
10 | pybackup.py --version
11 |
12 | Options:
13 | -h --help Show help information.
14 | --version Show version.
15 | --no-rsync Do not use rsync.
16 | --no-history Do not record backup history information.
17 | --only-backup Equal to use both --no-rsync and --no-history.
18 | --only-rsync When you backup complete, but rsync failed, use this option to rsync your backup.
19 | --backup-dir= The directory where the backuped files are located. [default: ./]
20 | --bk-id= bk-id in table user_backup.
21 | --log-file= log file [default: ./pybackup_default.log]
22 |
23 | more help information in:
24 | https://github.com/Fanduzi
25 | """
26 |
27 | #示例
28 | """
29 | python pybackup.py only-rsync --backup-dir=/data/backup_db/2017-11-28 --bk-id=9fc4b0ba-d3e6-11e7-9fd7-00163f001c40 --log-file=rsync.log
30 | --backup-dir 最后日期不带/ 否则将传到rsync://platform@106.3.130.84/db_backup2/120.27.143.36/目录下而不是rsync://platform@106.3.130.84/db_backup2/120.27.143.36/2017-11-28目录下
31 | python /data/backup_db/pybackup.py mydumper password=xx user=root socket=/data/mysql/mysql.sock outputdir=/data/backup_db/2017-11-28 verbose=3 compress threads=8 triggers events routines use-savepoints logfile=/data/backup_db/pybackup.log
32 | """
33 |
34 | import os
35 | import sys
36 | import subprocess
37 | import datetime
38 | import logging
39 | import pymysql
40 | import uuid
41 | import copy
42 | import ConfigParser
43 |
44 | from docopt import docopt
45 |
46 |
47 |
48 | def confLog():
49 | '''日志配置'''
50 | if arguments['only-rsync'] or arguments['validate-backup']:
51 | log = arguments['--log-file']
52 | else:
53 | log_file = [x for x in arguments['ARG_WITH_NO_--'] if 'logfile' in x]
54 | if not log_file:
55 | print('必须指定--logfile选项')
56 | sys.exit(1)
57 | else:
58 | log = log_file[0].split('=')[1]
59 | arguments['ARG_WITH_NO_--'].remove(log_file[0])
60 | logging.basicConfig(level=logging.DEBUG,
61 | format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
62 | #datefmt='%a, %d %b %Y %H:%M:%S',
63 | datefmt='%Y-%m-%d %H:%M:%S',
64 | filename=log,
65 | filemode='a')
66 |
67 |
68 | def getMdumperCmd(*args):
69 | '''拼接mydumper命令'''
70 | cmd = 'mydumper '
71 | for i in range(0, len(args)):
72 | if i == len(args) - 1:
73 | cmd += str(args[i])
74 | else:
75 | cmd += str(args[i]) + ' '
76 | return(cmd)
77 |
78 |
79 | def getDBS(targetdb):
80 | '''获取查询数据库的语句'''
81 | if tdb_list:
82 | sql = 'select SCHEMA_NAME from schemata where 1=1 '
83 | if tdb_list != '%':
84 | dbs = tdb_list.split(',')
85 | for i in range(0, len(dbs)):
86 | if dbs[i][0] != '!':
87 | if len(dbs) == 1:
88 | sql += "and (SCHEMA_NAME like '" + dbs[0] + "')"
89 | else:
90 | if i == 0:
91 | sql += "and (SCHEMA_NAME like '" + dbs[i] + "'"
92 | elif i == len(dbs) - 1:
93 | sql += " or SCHEMA_NAME like '" + dbs[i] + "')"
94 | else:
95 | sql += " or SCHEMA_NAME like '" + dbs[i] + "'"
96 | elif dbs[i][0] == '!':
97 | if len(dbs) == 1:
98 | sql += "and (SCHEMA_NAME not like '" + dbs[0][1:] + "')"
99 | else:
100 | if i == 0:
101 | sql += "and (SCHEMA_NAME not like '" + dbs[i][1:] + "'"
102 | elif i == len(dbs) - 1:
103 | sql += " and SCHEMA_NAME not like '" + dbs[i][1:] + "')"
104 | else:
105 | sql += " and SCHEMA_NAME not like '" + dbs[i][1:] + "'"
106 | elif tdb_list == '%':
107 | dbs = ['%']
108 | sql = "select SCHEMA_NAME from schemata where SCHEMA_NAME like '%'"
109 | print('getDBS: ' + sql)
110 | bdb = targetdb.dql(sql)
111 | bdb_list = []
112 | for i in range(0, len(bdb)):
113 | bdb_list += bdb[i]
114 | return bdb_list
115 | else:
116 | return None
117 |
118 |
119 | class Fandb:
120 | '''定义pymysql类'''
121 |
122 | def __init__(self, host, port, user, password, db, charset='utf8mb4'):
123 | self.host = host
124 | self.port = int(port)
125 | self.user = user
126 | self.password = password
127 | self.db = db
128 | self.charset = charset
129 | try:
130 | self.conn = pymysql.connect(host=self.host, port=self.port, user=self.user,
131 | password=self.password, db=self.db, charset=self.charset)
132 | self.cursor = self.conn.cursor()
133 | self.diccursor = self.conn.cursor(pymysql.cursors.DictCursor)
134 | except Exception, e:
135 | logging.error('connect error', exc_info=True)
136 |
137 | def dml(self, sql, val=None):
138 | self.cursor.execute(sql, val)
139 |
140 | def version(self):
141 | self.cursor.execute('select version()')
142 | return self.cursor.fetchone()
143 |
144 | def dql(self, sql):
145 | self.cursor.execute(sql)
146 | return self.cursor.fetchall()
147 |
148 | def commit(self):
149 | self.conn.commit()
150 |
151 | def close(self):
152 | self.cursor.close()
153 | self.diccursor.close()
154 | self.conn.close()
155 |
156 |
157 | def runBackup(targetdb):
158 | '''执行备份'''
159 | # 是否指定了--database参数
160 | isDatabase_arg = [ x for x in arguments['ARG_WITH_NO_--'] if 'database' in x ]
161 | isTables_list = [ x for x in arguments['ARG_WITH_NO_--'] if 'tables-list' in x ]
162 | isRegex = [ x for x in arguments['ARG_WITH_NO_--'] if 'regex' in x ]
163 | #备份的数据库 字符串
164 | start_time = datetime.datetime.now()
165 | logging.info('Begin Backup')
166 | print(str(start_time) + ' Begin Backup')
167 | # 指定了--database参数,则为备份单个数据库,即使配置文件中指定了也忽略
168 |
169 | if isTables_list:
170 | targetdb.close()
171 | print(mydumper_args)
172 | cmd = getMdumperCmd(*mydumper_args)
173 | cmd_list = cmd.split(' ')
174 | passwd = [x.split('=')[1] for x in cmd_list if 'password' in x][0]
175 | cmd = cmd.replace(passwd, '"'+passwd+'"')
176 | backup_dest = [x.split('=')[1] for x in cmd_list if 'outputdir' in x][0]
177 | if backup_dest[-1] != '/':
178 | uuid_dir = backup_dest + '/' + bk_id + '/'
179 | else:
180 | uuid_dir = backup_dest + bk_id + '/'
181 |
182 | if not os.path.isdir(uuid_dir):
183 | os.makedirs(uuid_dir)
184 | cmd = cmd.replace(backup_dest, uuid_dir)
185 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
186 | while child.poll() == None:
187 | stdout_line = child.stdout.readline().strip()
188 | if stdout_line:
189 | logging.info(stdout_line)
190 | logging.info(child.stdout.read().strip())
191 | state = child.returncode
192 | logging.info('backup state:'+str(state))
193 | # 检查备份是否成功
194 | if state != 0:
195 | logging.critical(' Backup Failed!')
196 | is_complete = 'N'
197 | end_time = datetime.datetime.now()
198 | print(str(end_time) + ' Backup Failed')
199 | elif state == 0:
200 | end_time = datetime.datetime.now()
201 | logging.info('End Backup')
202 | is_complete = 'Y'
203 | print(str(end_time) + ' Backup Complete')
204 | elapsed_time = (end_time - start_time).total_seconds()
205 | bdb = [ x.split('=')[1] for x in cmd_list if 'tables-list' in x ][0]
206 | return start_time, end_time, elapsed_time, is_complete, cmd, bdb, uuid_dir, 'tables-list'
207 | elif isRegex:
208 | targetdb.close()
209 | print(mydumper_args)
210 | cmd = getMdumperCmd(*mydumper_args)
211 | cmd_list = cmd.split(' ')
212 | passwd = [x.split('=')[1] for x in cmd_list if 'password' in x][0]
213 | cmd = cmd.replace(passwd, '"'+passwd+'"')
214 | regex_expression = [x.split('=')[1] for x in cmd_list if 'regex' in x][0]
215 | cmd = cmd.replace(regex_expression, "'" + regex_expression + "'")
216 | backup_dest = [x.split('=')[1] for x in cmd_list if 'outputdir' in x][0]
217 | if backup_dest[-1] != '/':
218 | uuid_dir = backup_dest + '/' + bk_id + '/'
219 | else:
220 | uuid_dir = backup_dest + bk_id + '/'
221 |
222 | if not os.path.isdir(uuid_dir):
223 | os.makedirs(uuid_dir)
224 | cmd = cmd.replace(backup_dest, uuid_dir)
225 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
226 | while child.poll() == None:
227 | stdout_line = child.stdout.readline().strip()
228 | if stdout_line:
229 | logging.info(stdout_line)
230 | logging.info(child.stdout.read().strip())
231 | state = child.returncode
232 | logging.info('backup state:'+str(state))
233 | # 检查备份是否成功
234 | if state != 0:
235 | logging.critical(' Backup Failed!')
236 | is_complete = 'N'
237 | end_time = datetime.datetime.now()
238 | print(str(end_time) + ' Backup Failed')
239 | elif state == 0:
240 | end_time = datetime.datetime.now()
241 | logging.info('End Backup')
242 | is_complete = 'Y'
243 | print(str(end_time) + ' Backup Complete')
244 | elapsed_time = (end_time - start_time).total_seconds()
245 | bdb = [ x.split('=')[1] for x in cmd_list if 'regex' in x ][0]
246 | return start_time, end_time, elapsed_time, is_complete, cmd, bdb, uuid_dir, 'regex'
247 | elif isDatabase_arg:
248 | targetdb.close()
249 | print(mydumper_args)
250 | bdb = isDatabase_arg[0].split('=')[1]
251 | # 生成备份命令
252 | database = [ x.split('=')[1] for x in mydumper_args if 'database' in x ][0]
253 | outputdir_arg = [ x for x in mydumper_args if 'outputdir' in x ]
254 | temp_mydumper_args = copy.deepcopy(mydumper_args)
255 | if outputdir_arg[0][-1] != '/':
256 | temp_mydumper_args.remove(outputdir_arg[0])
257 | temp_mydumper_args.append(outputdir_arg[0]+'/' + bk_id + '/' + database)
258 | last_outputdir = (outputdir_arg[0] + '/' + bk_id + '/' + database).split('=')[1]
259 | else:
260 | temp_mydumper_args.remove(outputdir_arg[0])
261 | temp_mydumper_args.append(outputdir_arg[0] + bk_id + '/' + database)
262 | last_outputdir = (outputdir_arg[0] + bk_id + '/' + database).split('=')[1]
263 | if not os.path.isdir(last_outputdir):
264 | os.makedirs(last_outputdir)
265 | cmd = getMdumperCmd(*temp_mydumper_args)
266 | #密码中可能有带'#'或括号的,处理一下用引号包起来
267 | cmd_list = cmd.split(' ')
268 | passwd = [x.split('=')[1] for x in cmd_list if 'password' in x][0]
269 | cmd = cmd.replace(passwd, '"'+passwd+'"')
270 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
271 | while child.poll() == None:
272 | stdout_line = child.stdout.readline().strip()
273 | if stdout_line:
274 | logging.info(stdout_line)
275 | logging.info(child.stdout.read().strip())
276 | state = child.returncode
277 | logging.info('backup state:'+str(state))
278 | # 检查备份是否成功
279 | if state != 0:
280 | logging.critical(' Backup Failed!')
281 | is_complete = 'N'
282 | end_time = datetime.datetime.now()
283 | print(str(end_time) + ' Backup Failed')
284 | elif state == 0:
285 | end_time = datetime.datetime.now()
286 | logging.info('End Backup')
287 | is_complete = 'Y'
288 | print(str(end_time) + ' Backup Complete')
289 | elapsed_time = (end_time - start_time).total_seconds()
290 | return start_time, end_time, elapsed_time, is_complete, cmd, bdb, last_outputdir, 'database'
291 | # 没有指定--database参数
292 | elif not isDatabase_arg:
293 | # 获取需要备份的数据库的列表
294 | bdb_list = getDBS(targetdb)
295 | targetdb.close()
296 | print(bdb_list)
297 | bdb = ','.join(bdb_list)
298 | # 如果列表为空,报错
299 | if not bdb_list:
300 | logging.critical('必须指定--database或在配置文件中指定需要备份的数据库')
301 | sys.exit(1)
302 |
303 | if db_consistency.upper() == 'TRUE':
304 | regex = ' --regex="^(' + '\.|'.join(bdb_list) + '\.' + ')"'
305 | print(mydumper_args)
306 | cmd = getMdumperCmd(*mydumper_args)
307 | cmd_list = cmd.split(' ')
308 | passwd = [x.split('=')[1] for x in cmd_list if 'password' in x][0]
309 | cmd = cmd.replace(passwd, '"'+passwd+'"')
310 | backup_dest = [x.split('=')[1] for x in cmd_list if 'outputdir' in x][0]
311 | if backup_dest[-1] != '/':
312 | uuid_dir = backup_dest + '/' + bk_id + '/'
313 | else:
314 | uuid_dir = backup_dest + bk_id + '/'
315 | if not os.path.isdir(uuid_dir):
316 | os.makedirs(uuid_dir)
317 | cmd = cmd.replace(backup_dest, uuid_dir)
318 | cmd = cmd + regex
319 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
320 | while child.poll() == None:
321 | stdout_line = child.stdout.readline().strip()
322 | if stdout_line:
323 | logging.info(stdout_line)
324 | logging.info(child.stdout.read().strip())
325 | state = child.returncode
326 | logging.info('backup state:'+str(state))
327 | # 检查备份是否成功
328 | if state != 0:
329 | logging.critical(' Backup Failed!')
330 | is_complete = 'N'
331 | end_time = datetime.datetime.now()
332 | print(str(end_time) + ' Backup Failed')
333 | elif state == 0:
334 | end_time = datetime.datetime.now()
335 | logging.info('End Backup')
336 | is_complete = 'Y'
337 | print(str(end_time) + ' Backup Complete')
338 | for db in bdb_list:
339 | os.makedirs(uuid_dir + db)
340 | os.chdir(uuid_dir)
341 | mv_cmd = 'mv `ls ' + uuid_dir + '|grep -v "^' + db + '$"|grep -E "' + db + '\.|' + db + '-' + '"` ' + uuid_dir + db + '/'
342 | print(mv_cmd)
343 | child = subprocess.Popen(mv_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
344 | while child.poll() == None:
345 | stdout_line = child.stdout.readline().strip()
346 | if stdout_line:
347 | logging.info(stdout_line)
348 | logging.info(child.stdout.read().strip())
349 | state = child.returncode
350 | logging.info('mv state:'+str(state))
351 | if state != 0:
352 | logging.critical(' mv Failed!')
353 | print('mv Failed')
354 | elif state == 0:
355 | logging.info('mv Complete')
356 | print('mv Complete')
357 | cp_metadata = 'cp ' + uuid_dir + 'metadata ' + uuid_dir + db + '/'
358 | subprocess.call(cp_metadata, shell=True)
359 | elapsed_time = (end_time - start_time).total_seconds()
360 | return start_time, end_time, elapsed_time, is_complete, cmd, bdb, uuid_dir, 'db_consistency'
361 | else:
362 | # 多个备份,每个备份都要有成功与否状态标记
363 | is_complete = ''
364 | # 在备份列表中循环
365 | for i in bdb_list:
366 | print(i)
367 | comm = []
368 | # 一次备份一个数据库,下次循环将comm置空
369 | outputdir_arg = [ x for x in mydumper_args if 'outputdir' in x ]
370 | temp_mydumper_args = copy.deepcopy(mydumper_args)
371 | if outputdir_arg[0][-1] != '/':
372 | temp_mydumper_args.remove(outputdir_arg[0])
373 | temp_mydumper_args.append(outputdir_arg[0] + '/' + bk_id + '/' + i)
374 | last_outputdir = (outputdir_arg[0] +'/' + bk_id + '/' + i).split('=')[1]
375 | else:
376 | temp_mydumper_args.remove(outputdir_arg[0])
377 | temp_mydumper_args.append(outputdir_arg[0] + bk_id + '/' + i)
378 | last_outputdir = (outputdir_arg[0] + bk_id + '/' + i).split('=')[1]
379 | if not os.path.isdir(last_outputdir):
380 | os.makedirs(last_outputdir)
381 | comm = temp_mydumper_args + ['--database=' + i]
382 | # 生成备份命令
383 | cmd = getMdumperCmd(*comm)
384 | #密码中可能有带'#'或括号的,处理一下用引号包起来
385 | cmd_list = cmd.split(' ')
386 | passwd = [x.split('=')[1] for x in cmd_list if 'password' in x][0]
387 | cmd = cmd.replace(passwd, '"'+passwd+'"')
388 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
389 | while child.poll() == None:
390 | stdout_line = child.stdout.readline().strip()
391 | if stdout_line:
392 | logging.info(stdout_line)
393 | logging.info(child.stdout.read().strip())
394 | state = child.returncode
395 | logging.info('backup state:'+str(state))
396 | if state != 0:
397 | logging.critical(i + ' Backup Failed!')
398 | # Y,N,Y,Y
399 | if is_complete:
400 | is_complete += ',N'
401 | else:
402 | is_complete += 'N'
403 | end_time = datetime.datetime.now()
404 | print(str(end_time) + ' ' + i + ' Backup Failed')
405 | elif state == 0:
406 | if is_complete:
407 | is_complete += ',Y'
408 | else:
409 | is_complete += 'Y'
410 | end_time = datetime.datetime.now()
411 | logging.info(i + ' End Backup')
412 | print(str(end_time) + ' ' + i + ' Backup Complete')
413 | end_time = datetime.datetime.now()
414 | elapsed_time = (end_time - start_time).total_seconds()
415 | full_comm = 'mydumper ' + \
416 | ' '.join(mydumper_args) + ' database=' + ','.join(bdb_list)
417 | return start_time, end_time, elapsed_time, is_complete, full_comm, bdb, last_outputdir, 'for database'
418 |
419 |
420 | def getIP():
421 | '''获取ip地址'''
422 | # 过滤内网IP
423 | cmd = "/sbin/ifconfig | /bin/grep 'inet addr:' | /bin/grep -v '127.0.0.1' | /bin/grep -v '192\.168' | /bin/grep -v '10\.'| /bin/cut -d: -f2 | /usr/bin/head -1 | /bin/awk '{print $1}'"
424 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
425 | child.wait()
426 | ipaddress = child.communicate()[0].strip()
427 | return ipaddress
428 |
429 |
430 | def getBackupSize(outputdir):
431 | '''获取备份集大小'''
432 | cmd = 'du -sh ' + os.path.abspath(os.path.join(outputdir,'..'))
433 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
434 | child.wait()
435 | backup_size = child.communicate()[0].strip().split('\t')[0]
436 | return backup_size
437 |
438 |
439 | def getMetadata(outputdir):
440 | '''从metadata中获取 SHOW MASTER STATUS / SHOW SLAVE STATUS 信息'''
441 | if outputdir[-1] != '/':
442 | metadata = outputdir + '/metadata'
443 | else:
444 | metadata = outputdir + 'metadata'
445 | with open(metadata, 'r') as file:
446 | content = file.readlines()
447 |
448 | separate_pos = content.index('\n')
449 |
450 | master_status = content[:separate_pos]
451 | master_log = [x.split(':')[1].strip() for x in master_status if 'Log' in x]
452 | master_pos = [x.split(':')[1].strip() for x in master_status if 'Pos' in x]
453 | master_GTID = [''.join([x.strip() for x in master_status[4::]]).replace('GTID:','')]
454 | master_info = ','.join(master_log + master_pos + master_GTID)
455 |
456 | slave_status = content[separate_pos + 1:]
457 | if not 'Finished' in slave_status[0]:
458 | slave_log = [x.split(':')[1].strip() for x in slave_status if 'Log' in x]
459 | slave_pos = [x.split(':')[1].strip() for x in slave_status if 'Pos' in x]
460 | slave_GTID = [''.join([x.strip() for x in slave_status[4:-1]]).replace('GTID:','')]
461 | slave_info = ','.join(slave_log + slave_pos + slave_GTID)
462 | return master_info, slave_info
463 | else:
464 | return master_info, 'Not a slave'
465 |
466 |
467 | def safeCommand(cmd):
468 | '''移除bk_command中的密码'''
469 | cmd_list = cmd.split(' ')
470 | passwd = [x.split('=')[1] for x in cmd_list if 'password' in x][0]
471 | safe_command = cmd.replace(passwd, 'supersecrect')
472 | return safe_command
473 |
474 |
475 | def getVersion(db):
476 | '''获取mydumper 版本和 mysql版本'''
477 | child = subprocess.Popen('mydumper --version',
478 | shell=True, stdout=subprocess.PIPE)
479 | child.wait()
480 | mydumper_version = child.communicate()[0].strip()
481 | mysql_version = db.version()
482 | return mydumper_version, mysql_version
483 |
484 |
485 | def rsync(bk_dir, address):
486 | '''rsync, bk_dir为备份所在目录,address为使用的网卡'''
487 | if not address:
488 | cmd = 'rsync -auv ' + bk_dir + ' --password-file=' + \
489 | password_file + ' rsync://' + dest
490 | else:
491 | cmd = 'rsync -auv ' + bk_dir + ' --address=' + address + \
492 | ' --password-file=' + password_file + ' rsync://' + dest
493 | start_time = datetime.datetime.now()
494 | logging.info('Start rsync')
495 | print(str(start_time) + ' Start rsync')
496 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
497 | while child.poll() == None:
498 | stdout_line = child.stdout.readline().strip()
499 | if stdout_line:
500 | logging.info(stdout_line)
501 | logging.info(child.stdout.read().strip())
502 | state = child.returncode
503 | logging.info('rsync state:'+str(state))
504 | if state != 0:
505 | end_time = datetime.datetime.now()
506 | logging.critical('Rsync Failed!')
507 | print(str(end_time) + ' Rsync Failed!')
508 | is_complete = 'N'
509 | else:
510 | end_time = datetime.datetime.now()
511 | logging.info('Rsync complete')
512 | print(str(end_time) + ' Rsync complete')
513 | is_complete = 'Y'
514 | elapsed_time = (end_time - start_time).total_seconds()
515 | return start_time, end_time, elapsed_time, is_complete
516 |
517 |
518 | def markDel(backup_dir,targetdb):
519 | backup_list = os.listdir(backup_dir)
520 | sql = "update user_backup set is_deleted='Y' where bk_id in (" + "'" + "','".join(backup_list) + "')"
521 | print('markDel:' + sql)
522 | targetdb.dml(sql)
523 | targetdb.commit()
524 | targetdb.close()
525 |
526 |
527 | def validateBackup(bk_id=None):
528 | sql = (
529 | "select a.id, a.bk_id, a.tag, date(start_time), real_path"
530 | " from user_backup a,user_backup_path b"
531 | " where a.tag = b.tag"
532 | " and is_complete not like '%N%'"
533 | " and is_deleted != 'Y'"
534 | " and transfer_complete = 'Y'"
535 | " and a.tag = '{}'"
536 | " and validate_status != 'passed'"
537 | " and start_time >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH)"
538 | " order by rand() limit 1"
539 | )
540 |
541 | sql2 = (
542 | "select a.id, a.bk_id, a.tag, date(start_time), real_path"
543 | " from user_backup a,user_backup_path b"
544 | " where a.tag = b.tag"
545 | " and a.bk_id = '{}'"
546 | )
547 |
548 | start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags = [], [], [], [], [], [], []
549 | for tag in bk_list:
550 | print(datetime.datetime.now())
551 | print(tag)
552 | logging.info('-='*20)
553 | logging.info('开始恢复: ' + tag)
554 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
555 | if bk_id:
556 | dql_res = catalogdb.dql(sql2.format(bk_id))
557 | else:
558 | dql_res = catalogdb.dql(sql.format(tag))
559 | result = dql_res[0] if dql_res else None
560 | if result:
561 | res_bk_id, res_tag, res_start_time, real_path = result[1], result[2], result[3], result[4]
562 | catalogdb.close()
563 | backup_path = real_path + str(res_start_time) + '/' + res_bk_id + '/'
564 | logging.info('Backup path: '+ backup_path )
565 | dbs = [ directory for directory in os.listdir(backup_path) if os.path.isdir(backup_path+directory) and directory != 'mysql' ]
566 | if dbs:
567 | for db in dbs:
568 | '''
569 | ([datetime.datetime(2017, 12, 25, 15, 11, 36, 480263), datetime.datetime(2017, 12, 25, 15, 33, 17, 292924), datetime.datetime(2017, 12, 25, 17, 10, 38, 226598), datetime.datetime(2017, 12, 25, 17, 10, 39, 374409)], [datetime.datetime(2017, 12, 25, 15, 33, 17, 292734), datetime.datetime(2017, 12, 25, 17, 10, 38, 226447), datetime.datetime(2017, 12, 25, 17, 10, 38, 855657), datetime.datetime(2017, 12, 25, 17, 10, 39, 776067)], [0, 0, 0, 0], [u'dadian', u'sdkv2', u'dopack', u'catalogdb'], [u'/data2/backup/db_backup/120.55.74.93/2017-12-23/b22694c4-e752-11e7-9370-00163e0007f1/', u'/data2/backup/db_backup/106.3.130.84/2017-12-16/12cb7486-e229-11e7-b172-005056b15d9c/'], [u'b22694c4-e752-11e7-9370-00163e0007f1', u'12cb7486-e229-11e7-b172-005056b15d9c'], ['\xe5\x9b\xbd\xe5\x86\x85sdk\xe4\xbb\x8e1', '\xe6\x96\xb0\xe5\xa4\x87\xe4\xbb\xbd\xe6\x9c\xba'])
570 | insert into user_recover_info(tag, bk_id, backup_path, db, start_time, end_time, elapsed_time, recover_status) values (国内sdk从1,b22694c4-e752-11e7-9370-00163e0007f1,/data2/backup/db_backup/120.55.74.93/2017-12-23/b22694c4-e752-11e7-9370-00163e0007f1/,dadian,2017-12-25 15:11:36.480263,2017-12-25 15:33:17.292734,1300.812471,sucess)
571 | insert into user_recover_info(tag, bk_id, backup_path, db, start_time, end_time, elapsed_time, recover_status) values (新备份机,12cb7486-e229-11e7-b172-005056b15d9c,/data2/backup/db_backup/106.3.130.84/2017-12-16/12cb7486-e229-11e7-b172-005056b15d9c/,sdkv2,2017-12-25 15:33:17.292924,2017-12-25 17:10:38.226447,5840.933523,sucess)
572 |
573 | 1 个 bk_id 对应3个备份,1 个 bk_id 对应1个备份 ,但是tag只append 了俩, 应该内个库append一次,或者改成字典
574 | '''
575 | tags.append(tag)
576 | backup_paths.append(backup_path)
577 | bk_ids.append(res_bk_id)
578 | db_list.append(db)
579 | full_backup_path = backup_path + db + '/'
580 | #print(full_backup_path)
581 | load_cmd = 'myloader -d {} --user=root --password=fanboshi --overwrite-tables --verbose=3 --threads=3'.format(full_backup_path)
582 | print(load_cmd)
583 | start_time.append(datetime.datetime.now())
584 | logging.info('Start recover '+ db )
585 | child = subprocess.Popen(load_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
586 | while child.poll() == None:
587 | stdout_line = child.stdout.readline().strip()
588 | if stdout_line:
589 | logging.info(stdout_line)
590 | logging.info(child.stdout.read().strip())
591 | state = child.returncode
592 | recover_status.append(state)
593 | logging.info('Recover state:'+str(state))
594 | end_time.append(datetime.datetime.now())
595 | if state != 0:
596 | logging.info('Recover {} Failed'.format(db))
597 | elif state == 0:
598 | logging.info('Recover {} complete'.format(db))
599 | else:
600 | load_cmd = 'myloader -d {} --user=root --password=fanboshi --overwrite-tables --verbose=3 --threads=3'.format(backup_path)
601 | print(load_cmd)
602 | tags.append(tag)
603 | start_time.append(datetime.datetime.now())
604 | logging.info('Start recover')
605 | child = subprocess.Popen(load_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
606 | while child.poll() == None:
607 | stdout_line = child.stdout.readline().strip()
608 | if stdout_line:
609 | logging.info(stdout_line)
610 | logging.info(child.stdout.read().strip())
611 | state = child.returncode
612 | recover_status.append(state)
613 | logging.info('Recover state:'+str(state))
614 | end_time.append(datetime.datetime.now())
615 | if state != 0:
616 | logging.info('Recover Failed')
617 | elif state == 0:
618 | logging.info('Recover complete')
619 | db_list.append('N/A')
620 | backup_paths.append(backup_path)
621 | bk_ids.append(res_bk_id)
622 | return start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags
623 |
624 |
625 | if __name__ == '__main__':
626 | '''
627 | 参数解析
628 | '''
629 | pybackup_version = 'pybackup 0.10.13.0'
630 | arguments = docopt(__doc__, version=pybackup_version)
631 | print(arguments)
632 |
633 | '''
634 | 解析配置文件获取参数
635 | '''
636 | cf = ConfigParser.ConfigParser()
637 | cf.read(os.path.split(os.path.realpath(__file__))[0] + '/pybackup.conf')
638 | # print(os.getcwd())
639 | # print(os.path.split(os.path.realpath(__file__))[0])
640 | section_name = 'CATALOG'
641 | cata_host = cf.get(section_name, "db_host")
642 | cata_port = cf.get(section_name, "db_port")
643 | cata_user = cf.get(section_name, "db_user")
644 | cata_passwd = cf.get(section_name, "db_passwd")
645 | cata_use = cf.get(section_name, "db_use")
646 |
647 | if not arguments['validate-backup'] and not arguments['mark-del']:
648 | section_name = 'TDB'
649 | tdb_host = cf.get(section_name, "db_host")
650 | tdb_port = cf.get(section_name, "db_port")
651 | tdb_user = cf.get(section_name, "db_user")
652 | tdb_passwd = cf.get(section_name, "db_passwd")
653 | tdb_use = cf.get(section_name, "db_use")
654 | tdb_list = cf.get(section_name, "db_list")
655 | try:
656 | global db_consistency
657 | db_consistency = cf.get(section_name, "db_consistency")
658 | except ConfigParser.NoOptionError,e:
659 | db_consistency = 'False'
660 | print('没有指定db_consistency参数,默认采用--database循环备份db_list中指定的数据库,数据库之间不保证一致性')
661 |
662 | if cf.has_section('rsync'):
663 | section_name = 'rsync'
664 | password_file = cf.get(section_name, "password_file")
665 | dest = cf.get(section_name, "dest")
666 | address = cf.get(section_name, "address")
667 | if dest[-1] != '/':
668 | dest += '/'
669 | rsync_enable = True
670 | else:
671 | rsync_enable = False
672 | print("没有在配置文件中指定rsync区块,备份后不传输")
673 |
674 | section_name = 'pybackup'
675 | tag = cf.get(section_name, "tag")
676 | elif arguments['validate-backup']:
677 | section_name = 'Validate'
678 | if arguments['--bk_id']:
679 | bk_list=list(arguments['--bk_id'])
680 | else:
681 | bk_list = cf.get(section_name, "bk_list").split(',')
682 |
683 | if arguments['mydumper'] and ('help' in arguments['ARG_WITH_NO_--'][0]):
684 | subprocess.call('mydumper --help', shell=True)
685 | elif arguments['only-rsync']:
686 | confLog()
687 | backup_dir = arguments['--backup-dir']
688 | if arguments['--bk-id']:
689 | transfer_start, transfer_end, transfer_elapsed, transfer_complete = rsync(backup_dir, address)
690 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
691 | sql = 'update user_backup set transfer_start=%s, transfer_end=%s, transfer_elapsed=%s, transfer_complete=%s where bk_id=%s'
692 | catalogdb.dml(sql, (transfer_start, transfer_end, transfer_elapsed, transfer_complete, arguments['--bk-id']))
693 | catalogdb.commit()
694 | catalogdb.close()
695 | else:
696 | rsync(backup_dir,address)
697 | elif arguments['mark-del']:
698 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
699 | markDel(arguments['--backup-dir'],catalogdb)
700 | elif arguments['validate-backup']:
701 | confLog()
702 | if arguments['--bk_id']:
703 | start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags = validateBackup(arguments['--bk_id'])
704 | else:
705 | start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags = validateBackup()
706 | print(start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags)
707 | if bk_ids:
708 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
709 | sql1 = "insert into user_recover_info(tag, bk_id, backup_path, db, start_time, end_time, elapsed_time, recover_status) values (%s,%s,%s,%s,%s,%s,%s,%s)"
710 | sql2 = "update user_backup set validate_status=%s where bk_id=%s"
711 | logging.info(zip(start_time, end_time, recover_status, db_list))
712 | for stime, etime, rstatus, db ,backup_path, bk_id, tag in zip(start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags):
713 | if rstatus == 0:
714 | status = 'sucess'
715 | failed_flag = False
716 | else:
717 | status = 'failed'
718 | failed_flag = True
719 | # print(sql1 % (tag.decode('utf-8'), bk_id, backup_path, db, stime, etime, (etime - stime).total_seconds(), status))
720 | logging.info(sql1 % (tag.decode('utf-8'), bk_id, backup_path, db, stime, etime, (etime - stime).total_seconds(), status))
721 | catalogdb.dml(sql1,(tag, bk_id, backup_path, db, stime, etime, (etime - stime).total_seconds(), status))
722 | if not failed_flag:
723 | catalogdb.dml(sql2,('passed', bk_id))
724 | catalogdb.commit()
725 | catalogdb.close()
726 | logging.info('恢复完成')
727 | else:
728 | logging.info('没有可用备份')
729 | print('没有可用备份')
730 | else:
731 | confLog()
732 | bk_id = str(uuid.uuid1())
733 | if arguments['mydumper']:
734 | mydumper_args = ['--' + x for x in arguments['ARG_WITH_NO_--']]
735 | is_rsync = True
736 | is_history = True
737 | if arguments['--no-rsync']:
738 | is_rsync = False
739 | if arguments['--no-history']:
740 | is_history = False
741 | if arguments['--only-backup']:
742 | is_history = False
743 | is_rsync = False
744 | print('is_rsync,is_history: ',is_rsync,is_history)
745 | bk_dir = [x for x in arguments['ARG_WITH_NO_--'] if 'outputdir' in x][0].split('=')[1]
746 | os.chdir(bk_dir)
747 | targetdb = Fandb(tdb_host, tdb_port, tdb_user, tdb_passwd, tdb_use)
748 | mydumper_version, mysql_version = getVersion(targetdb)
749 | start_time, end_time, elapsed_time, is_complete, bk_command, backuped_db, last_outputdir, backup_type = runBackup(
750 | targetdb)
751 |
752 | safe_command = safeCommand(bk_command)
753 |
754 | if 'N' not in is_complete:
755 | bk_size = getBackupSize(last_outputdir)
756 | master_info, slave_info = getMetadata(last_outputdir)
757 | if rsync_enable:
758 | if is_rsync:
759 | transfer_start, transfer_end, transfer_elapsed, transfer_complete_temp = rsync(bk_dir, address)
760 | transfer_complete = transfer_complete_temp
761 | transfer_count = 0
762 | while transfer_complete_temp != 'Y' and transfer_count < 3:
763 | transfer_start_temp, transfer_end, transfer_elapsed_temp, transfer_complete_temp = rsync(bk_dir, address)
764 | transfer_complete = transfer_complete + ',' + transfer_complete_temp
765 | transfer_count += 1
766 | transfer_elapsed = ( transfer_end - transfer_start ).total_seconds()
767 |
768 | else:
769 | transfer_start, transfer_end, transfer_elapsed, transfer_complete = None,None,None,'N/A (local backup)'
770 | dest = 'N/A (local backup)'
771 | else:
772 | bk_size = 'N/A'
773 | master_info, slave_info = 'N/A', 'N/A'
774 | transfer_start, transfer_end, transfer_elapsed, transfer_complete = None,None,None,'Backup failed'
775 | dest = 'Backup failed'
776 |
777 |
778 | if is_history:
779 | bk_server = getIP()
780 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
781 | sql = 'insert into user_backup(bk_id,bk_server,start_time,end_time,elapsed_time,backuped_db,is_complete,bk_size,bk_dir,transfer_start,transfer_end,transfer_elapsed,transfer_complete,remote_dest,master_status,slave_status,tool_version,server_version,pybackup_version,bk_command,tag) values(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)'
782 | if backup_type == 'for database':
783 | last_outputdir = os.path.abspath(os.path.join(last_outputdir,'..'))
784 | print(bk_id, bk_server, start_time, end_time, elapsed_time, backuped_db, is_complete, bk_size, last_outputdir, transfer_start, transfer_end,transfer_elapsed, transfer_complete, dest, master_info, slave_info, mydumper_version, mysql_version, pybackup_version, safe_command)
785 | catalogdb.dml(sql, (bk_id, bk_server, start_time, end_time, elapsed_time, backuped_db, is_complete, bk_size, last_outputdir, transfer_start, transfer_end,
786 | transfer_elapsed, transfer_complete, dest, master_info, slave_info, mydumper_version, mysql_version, pybackup_version, safe_command, tag))
787 | catalogdb.commit()
788 | catalogdb.close()
789 |
790 |
--------------------------------------------------------------------------------
/unfinished/pybackup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf8 -*-
3 | """
4 | Usage:
5 | pybackup.py mydumper ARG_WITHOUT_--... (([--no-rsync] [--no-history]) | [--only-backup])
6 | pybackup.py only-rsync [--backup-dir=] [--bk-id=] [--log-file=]
7 | pybackup.py mark-del --backup-dir=
8 | pybackup.py validate-backup --log-file= [--bk_id=]
9 | pybackup.py -h | --help
10 | pybackup.py --version
11 |
12 | Options:
13 | -h --help Show help information.
14 | --version Show version.
15 | --no-rsync Do not use rsync.
16 | --no-history Do not record backup history information.
17 | --only-backup Equal to use both --no-rsync and --no-history.
18 | --only-rsync When you backup complete, but rsync failed, use this option to rsync your backup.
19 | --backup-dir= The directory where the backuped files are located. [default: ./]
20 | --bk-id= bk-id in table user_backup.
21 | --log-file= log file [default: ./pybackup_default.log]
22 |
23 | more help information in:
24 | https://github.com/Fanduzi
25 | """
26 |
27 | #示例
28 | """
29 | python pybackup.py only-rsync --backup-dir=/data/backup_db/2017-11-28 --bk-id=9fc4b0ba-d3e6-11e7-9fd7-00163f001c40 --log-file=rsync.log
30 | --backup-dir 最后日期不带/ 否则将传到rsync://platform@106.3.130.84/db_backup2/120.27.143.36/目录下而不是rsync://platform@106.3.130.84/db_backup2/120.27.143.36/2017-11-28目录下
31 | python /data/backup_db/pybackup.py mydumper password=xx user=root socket=/data/mysql/mysql.sock outputdir=/data/backup_db/2017-11-28 verbose=3 compress threads=8 triggers events routines use-savepoints logfile=/data/backup_db/pybackup.log
32 | """
33 |
34 | import os
35 | import sys
36 | import subprocess
37 | import datetime
38 | import logging
39 | import pymysql
40 | import uuid
41 | import copy
42 | import ConfigParser
43 |
44 | from docopt import docopt
45 |
46 |
47 |
48 | def confLog():
49 | '''日志配置'''
50 | if arguments['only-rsync'] or arguments['validate-backup']:
51 | log = arguments['--log-file']
52 | else:
53 | log_file = [x for x in arguments['ARG_WITHOUT_--'] if 'logfile' in x]
54 | if not log_file:
55 | print('必须指定--logfile选项')
56 | sys.exit(1)
57 | else:
58 | log = log_file[0].split('=')[1]
59 | arguments['ARG_WITHOUT_--'].remove(log_file[0])
60 | logging.basicConfig(level=logging.DEBUG,
61 | format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
62 | #datefmt='%a, %d %b %Y %H:%M:%S',
63 | datefmt='%Y-%m-%d %H:%M:%S',
64 | filename=log,
65 | filemode='a')
66 |
67 |
68 | def getMdumperCmd(*args):
69 | '''拼接mydumper命令'''
70 | cmd = 'mydumper '
71 | for i in range(0, len(args)):
72 | if i == len(args) - 1:
73 | cmd += str(args[i])
74 | else:
75 | cmd += str(args[i]) + ' '
76 | return(cmd)
77 |
78 |
79 | def getDBS(targetdb):
80 | '''获取查询数据库的语句'''
81 | if tdb_list:
82 | sql = 'select SCHEMA_NAME from schemata where 1=1 '
83 | if tdb_list != '%':
84 | dbs = tdb_list.split(',')
85 | for i in range(0, len(dbs)):
86 | if dbs[i][0] != '!':
87 | if len(dbs) == 1:
88 | sql += "and (SCHEMA_NAME like '" + dbs[0] + "')"
89 | else:
90 | if i == 0:
91 | sql += "and (SCHEMA_NAME like '" + dbs[i] + "'"
92 | elif i == len(dbs) - 1:
93 | sql += " or SCHEMA_NAME like '" + dbs[i] + "')"
94 | else:
95 | sql += " or SCHEMA_NAME like '" + dbs[i] + "'"
96 | elif dbs[i][0] == '!':
97 | if len(dbs) == 1:
98 | sql += "and (SCHEMA_NAME not like '" + dbs[0][1:] + "')"
99 | else:
100 | if i == 0:
101 | sql += "and (SCHEMA_NAME not like '" + dbs[i][1:] + "'"
102 | elif i == len(dbs) - 1:
103 | sql += " and SCHEMA_NAME not like '" + dbs[i][1:] + "')"
104 | else:
105 | sql += " and SCHEMA_NAME not like '" + dbs[i][1:] + "'"
106 | elif tdb_list == '%':
107 | dbs = ['%']
108 | sql = "select SCHEMA_NAME from schemata where SCHEMA_NAME like '%'"
109 | print('getDBS: ' + sql)
110 | bdb = targetdb.dql(sql)
111 | bdb_list = []
112 | for i in range(0, len(bdb)):
113 | bdb_list += bdb[i]
114 | return bdb_list
115 | else:
116 | return None
117 |
118 |
119 | class Fandb:
120 | '''定义pymysql类'''
121 |
122 | def __init__(self, host, port, user, password, db, charset='utf8mb4'):
123 | self.host = host
124 | self.port = int(port)
125 | self.user = user
126 | self.password = password
127 | self.db = db
128 | self.charset = charset
129 | try:
130 | self.conn = pymysql.connect(host=self.host, port=self.port, user=self.user,
131 | password=self.password, db=self.db, charset=self.charset)
132 | self.cursor = self.conn.cursor()
133 | self.diccursor = self.conn.cursor(pymysql.cursors.DictCursor)
134 | except Exception, e:
135 | logging.error('connect error', exc_info=True)
136 |
137 | def dml(self, sql, val=None):
138 | self.cursor.execute(sql, val)
139 |
140 | def version(self):
141 | self.cursor.execute('select version()')
142 | return self.cursor.fetchone()
143 |
144 | def dql(self, sql):
145 | self.cursor.execute(sql)
146 | return self.cursor.fetchall()
147 |
148 | def commit(self):
149 | self.conn.commit()
150 |
151 | def close(self):
152 | self.cursor.close()
153 | self.diccursor.close()
154 | self.conn.close()
155 |
156 | def getOption(cmd,option):
157 | cmd_list = cmd.split(' ')
158 | return [ x.split('=')[1] for x in cmd_list if option in x ][0]
159 |
160 | def enclosePass(cmd):
161 | '''密码中可能有带'#'或括号的,处理一下用引号包起来'''
162 | cmd_list = cmd.split(' ')
163 | cmd = cmd.replace(passwd, '"'+passwd+'"')
164 | return cmd
165 |
166 | def runProcess(cmd):
167 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
168 | while child.poll() == None:
169 | stdout_line = child.stdout.readline().strip()
170 | if stdout_line:
171 | logging.info(stdout_line)
172 | logging.info(child.stdout.read().strip())
173 | state = child.returncode
174 | return state
175 |
176 | def judgeStatus(state):
177 | if state != 0:
178 | logging.critical(' Backup Failed!')
179 | is_complete = 'N'
180 | end_time = datetime.datetime.now()
181 | print(str(end_time) + ' Backup Failed')
182 | elif state == 0:
183 | end_time = datetime.datetime.now()
184 | logging.info('End Backup')
185 | is_complete = 'Y'
186 | print(str(end_time) + ' Backup Complete')
187 | return end_time
188 |
189 | def runBackup(targetdb):
190 | '''执行备份'''
191 | # 是否指定了--database参数
192 | isDatabase = [ x for x in arguments['ARG_WITHOUT_--'] if 'database' in x ]
193 | isTables_list = [ x for x in arguments['ARG_WITHOUT_--'] if 'tables-list' in x ]
194 | isRegex = [ x for x in arguments['ARG_WITHOUT_--'] if 'regex' in x ]
195 | # 备份的数据库 字符串
196 | start_time = datetime.datetime.now()
197 | logging.info('Begin Backup')
198 | print(str(start_time) + ' Begin Backup')
199 | # 指定了--database参数,则为备份单个数据库,即使配置文件中指定了也忽略
200 |
201 | # 如果指定了--tables-list参数
202 | if isTables_list:
203 | targetdb.close()
204 | print(mydumper_args)
205 | cmd = getMdumperCmd(*mydumper_args)
206 | #密码中可能有带'#'或括号的,处理一下用引号包起来
207 | enclosePass(cmd)
208 | backup_dest = getOption(cmd,'outputdir')
209 | if backup_dest[-1] != '/':
210 | uuid_dir = backup_dest + '/' + bk_id + '/'
211 | else:
212 | uuid_dir = backup_dest + bk_id + '/'
213 |
214 | # 创建目录
215 | if not os.path.isdir(uuid_dir):
216 | os.makedirs(uuid_dir)
217 | # 将命令中的备份目录替换为带uuid的目录
218 | cmd = cmd.replace(backup_dest, uuid_dir)
219 | # 执行备份
220 | state = runProcess(cmd)
221 | logging.info('backup state:'+str(state))
222 | # 检查备份是否成功
223 | end_time = judgeStatus(state)
224 | elapsed_time = (end_time - start_time).total_seconds()
225 | bdb = getOption(cmd,'tables-list')
226 | return start_time, end_time, elapsed_time, is_complete, cmd, bdb, uuid_dir, 'tables-list'
227 | # 如果指定了--regex参数
228 | elif isRegex:
229 | targetdb.close()
230 | print(mydumper_args)
231 | cmd = getMdumperCmd(*mydumper_args)
232 | # 密码中可能有带'#'或括号的,处理一下用引号包起来
233 | enclosePass(cmd)
234 | regex_expression = getOption(cmd,'regex')
235 | cmd = cmd.replace(regex_expression, "'" + regex_expression + "'")
236 | backup_dest = getOption(cmd,'outputdir')
237 | if backup_dest[-1] != '/':
238 | uuid_dir = backup_dest + '/' + bk_id + '/'
239 | else:
240 | uuid_dir = backup_dest + bk_id + '/'
241 |
242 | if not os.path.isdir(uuid_dir):
243 | os.makedirs(uuid_dir)
244 | cmd = cmd.replace(backup_dest, uuid_dir)
245 | state = runProcess(cmd)
246 | logging.info('backup state:'+str(state))
247 | # 检查备份是否成功
248 | end_time = judgeStatus(state)
249 | elapsed_time = (end_time - start_time).total_seconds()
250 | bdb = getOption(cmd,'regex')
251 | return start_time, end_time, elapsed_time, is_complete, cmd, bdb, uuid_dir, 'regex'
252 | # 如过指定了--database参数
253 | elif isDatabase:
254 | targetdb.close()
255 | print(mydumper_args)
256 | bdb = isDatabase[0].split('=')[1]
257 | # 生成备份命令
258 | database = [ x.split('=')[1] for x in mydumper_args if 'database' in x ][0]
259 | outputdir_arg = [ x for x in mydumper_args if 'outputdir' in x ]
260 | temp_mydumper_args = copy.deepcopy(mydumper_args)
261 | if outputdir_arg[0][-1] != '/':
262 | temp_mydumper_args.remove(outputdir_arg[0])
263 | temp_mydumper_args.append(outputdir_arg[0]+'/' + bk_id + '/' + database)
264 | last_outputdir = (outputdir_arg[0] + '/' + bk_id + '/' + database).split('=')[1]
265 | else:
266 | temp_mydumper_args.remove(outputdir_arg[0])
267 | temp_mydumper_args.append(outputdir_arg[0] + bk_id + '/' + database)
268 | last_outputdir = (outputdir_arg[0] + bk_id + '/' + database).split('=')[1]
269 | if not os.path.isdir(last_outputdir):
270 | os.makedirs(last_outputdir)
271 | cmd = getMdumperCmd(*temp_mydumper_args)
272 | # 密码中可能有带'#'或括号的,处理一下用引号包起来
273 | enclosePass(cmd)
274 | state = runProcess(cmd)
275 | logging.info('backup state:'+str(state))
276 | # 检查备份是否成功
277 | end_time = judgeStatus(state)
278 | elapsed_time = (end_time - start_time).total_seconds()
279 | return start_time, end_time, elapsed_time, is_complete, cmd, bdb, last_outputdir, 'database'
280 | # 没有指定--database参数
281 | elif not isDatabase:
282 | # 获取需要备份的数据库的列表
283 | bdb_list = getDBS(targetdb)
284 | targetdb.close()
285 | print(bdb_list)
286 | bdb = ','.join(bdb_list)
287 | # 如果列表为空,报错
288 | if not bdb_list:
289 | logging.critical('必须指定--database或在配置文件中指定需要备份的数据库')
290 | sys.exit(1)
291 |
292 | if db_consistency.upper() == 'TRUE':
293 | # 如果在pybackup.conf中设置db_consistency为true,则使用--regex来备份, 保证一致性
294 | regex = ' --regex="^(' + '\.|'.join(bdb_list) + '\.' + ')"'
295 | print(mydumper_args)
296 | cmd = getMdumperCmd(*mydumper_args)
297 | cmd = enclosePass(cmd)
298 | #backup_dest = [x.split('=')[1] for x in cmd_list if 'outputdir' in x][0]
299 | backup_dest = getOption(cmd,'outputdir')
300 | if backup_dest[-1] != '/':
301 | uuid_dir = backup_dest + '/' + bk_id + '/'
302 | else:
303 | uuid_dir = backup_dest + bk_id + '/'
304 | if not os.path.isdir(uuid_dir):
305 | os.makedirs(uuid_dir)
306 | cmd = cmd.replace(backup_dest, uuid_dir)
307 | cmd = cmd + regex
308 | state = runProcess(cmd)
309 | logging.info('backup state:'+str(state))
310 | # 检查备份是否成功
311 | end_time = judgeStatus(state)
312 | # --regex的备份都在同一个目录下, 这里按db创建目录, 将备份文件移动到对应的文件夹中, 再将metedata拷贝进去
313 | for db in bdb_list:
314 | os.makedirs(uuid_dir + db)
315 | os.chdir(uuid_dir)
316 | mv_cmd = 'mv `ls ' + uuid_dir + '|grep -v "^' + db + '$"|grep "' + db + '\.|' + db + '-"` ' + uuid_dir + db + '/'
317 | print(mv_cmd)
318 | state = runProcess(mv_cmd)
319 | logging.info('mv state:'+str(state))
320 | if state != 0:
321 | logging.critical(' mv Failed!')
322 | print('mv Failed')
323 | elif state == 0:
324 | logging.info('mv Complete')
325 | print('mv Complete')
326 | cp_metadata = 'cp ' + uuid_dir + 'metadata ' + uuid_dir + db + '/'
327 | subprocess.call(cp_metadata, shell=True)
328 | elapsed_time = (end_time - start_time).total_seconds()
329 | return start_time, end_time, elapsed_time, is_complete, cmd, bdb, uuid_dir, 'db_consistency'
330 | else:
331 | # 否则, 则多次执行mydumper创建多个备份, 每个备份都要有成功与否状态标记
332 | is_complete = ''
333 | # 在备份列表中循环
334 | for i in bdb_list:
335 | print(i)
336 | comm = []
337 | # 一次备份一个数据库,下次循环将comm置空
338 | outputdir_arg = [ x for x in mydumper_args if 'outputdir' in x ]
339 | temp_mydumper_args = copy.deepcopy(mydumper_args)
340 | if outputdir_arg[0][-1] != '/':
341 | temp_mydumper_args.remove(outputdir_arg[0])
342 | temp_mydumper_args.append(outputdir_arg[0] + '/' + bk_id + '/' + i)
343 | last_outputdir = (outputdir_arg[0] +'/' + bk_id + '/' + i).split('=')[1]
344 | else:
345 | temp_mydumper_args.remove(outputdir_arg[0])
346 | temp_mydumper_args.append(outputdir_arg[0] + bk_id + '/' + i)
347 | last_outputdir = (outputdir_arg[0] + bk_id + '/' + i).split('=')[1]
348 | if not os.path.isdir(last_outputdir):
349 | os.makedirs(last_outputdir)
350 | comm = temp_mydumper_args + ['--database=' + i]
351 | # 生成备份命令
352 | cmd = getMdumperCmd(*comm)
353 | #密码中可能有带'#'或括号的,处理一下用引号包起来
354 | cmd = enclosePass(cmd)
355 | state = runProcess(cmd)
356 | logging.info('backup state:'+str(state))
357 | if state != 0:
358 | logging.critical(i + ' Backup Failed!')
359 | # Y,N,Y,Y
360 | if is_complete:
361 | is_complete += ',N'
362 | else:
363 | is_complete += 'N'
364 | end_time = datetime.datetime.now()
365 | print(str(end_time) + ' ' + i + ' Backup Failed')
366 | elif state == 0:
367 | if is_complete:
368 | is_complete += ',Y'
369 | else:
370 | is_complete += 'Y'
371 | end_time = datetime.datetime.now()
372 | logging.info(i + ' End Backup')
373 | print(str(end_time) + ' ' + i + ' Backup Complete')
374 | end_time = datetime.datetime.now()
375 | elapsed_time = (end_time - start_time).total_seconds()
376 | full_comm = 'mydumper ' + \
377 | ' '.join(mydumper_args) + ' database=' + ','.join(bdb_list)
378 | return start_time, end_time, elapsed_time, is_complete, full_comm, bdb, last_outputdir, 'for database'
379 |
380 |
381 | def getIP():
382 | '''获取ip地址'''
383 | # 过滤内网IP
384 | cmd = "/sbin/ifconfig | /bin/grep 'inet addr:' | /bin/grep -v '127.0.0.1' | /bin/grep -v '192\.168' | /bin/grep -v '10\.'| /bin/cut -d: -f2 | /usr/bin/head -1 | /bin/awk '{print $1}'"
385 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
386 | child.wait()
387 | ipaddress = child.communicate()[0].strip()
388 | return ipaddress
389 |
390 |
391 | def getBackupSize(outputdir):
392 | '''获取备份集大小'''
393 | cmd = 'du -sh ' + os.path.abspath(os.path.join(outputdir,'..'))
394 | child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
395 | child.wait()
396 | backup_size = child.communicate()[0].strip().split('\t')[0]
397 | return backup_size
398 |
399 |
400 | def getMetadata(outputdir):
401 | '''从metadata中获取 SHOW MASTER STATUS / SHOW SLAVE STATUS 信息'''
402 | if outputdir[-1] != '/':
403 | metadata = outputdir + '/metadata'
404 | else:
405 | metadata = outputdir + 'metadata'
406 | with open(metadata, 'r') as file:
407 | content = file.readlines()
408 |
409 | separate_pos = content.index('\n')
410 |
411 | master_status = content[:separate_pos]
412 | master_log = [x.split(':')[1].strip() for x in master_status if 'Log' in x]
413 | master_pos = [x.split(':')[1].strip() for x in master_status if 'Pos' in x]
414 | master_GTID = [x.split(':')[1].strip()
415 | for x in master_status if 'GTID' in x]
416 | master_info = ','.join(master_log + master_pos + master_GTID)
417 |
418 | slave_status = content[separate_pos + 1:]
419 | if not 'Finished' in slave_status[0]:
420 | slave_log = [x.split(':')[1].strip() for x in slave_status if 'Log' in x]
421 | slave_pos = [x.split(':')[1].strip() for x in slave_status if 'Pos' in x]
422 | slave_GTID = [x.split(':')[1].strip() for x in slave_status if 'GTID' in x]
423 | slave_info = ','.join(slave_log + slave_pos + slave_GTID)
424 | return master_info, slave_info
425 | else:
426 | return master_info, 'Not a slave'
427 |
428 |
429 | def safeCommand(cmd):
430 | '''移除user_backup表中, bk_command列中的密码'''
431 | cmd_list = cmd.split(' ')
432 | passwd = getOption(cmd,'password')
433 | safe_command = cmd.replace(passwd, 'supersecrect')
434 | return safe_command
435 |
436 |
437 | def getVersion(db):
438 | '''获取mydumper 版本和 MySQL版本'''
439 | child = subprocess.Popen('mydumper --version',
440 | shell=True, stdout=subprocess.PIPE)
441 | child.wait()
442 | mydumper_version = child.communicate()[0].strip()
443 | mysql_version = db.version()
444 | return mydumper_version, mysql_version
445 |
446 |
447 | def rsync(bk_dir, address):
448 | '''rsync, bk_dir为备份所在目录,address为使用的网卡'''
449 | if not address:
450 | cmd = 'rsync -auv ' + bk_dir + ' --password-file=' + \
451 | password_file + ' rsync://' + dest
452 | else:
453 | cmd = 'rsync -auv ' + bk_dir + ' --address=' + address + \
454 | ' --password-file=' + password_file + ' rsync://' + dest
455 | start_time = datetime.datetime.now()
456 | logging.info('Start rsync')
457 | print(str(start_time) + ' Start rsync')
458 | state = runProcess(cmd)
459 | logging.info('rsync state:'+str(state))
460 | if state != 0:
461 | end_time = datetime.datetime.now()
462 | logging.critical('Rsync Failed!')
463 | print(str(end_time) + ' Rsync Failed!')
464 | is_complete = 'N'
465 | else:
466 | end_time = datetime.datetime.now()
467 | logging.info('Rsync complete')
468 | print(str(end_time) + ' Rsync complete')
469 | is_complete = 'Y'
470 | elapsed_time = (end_time - start_time).total_seconds()
471 | return start_time, end_time, elapsed_time, is_complete
472 |
473 | # mark-del命令, 用于将备份标记为删除
474 | def markDel(backup_dir,targetdb):
475 | backup_list = os.listdir(backup_dir)
476 | sql = "update user_backup set is_deleted='Y' where bk_id in (" + "'" + "','".join(backup_list) + "')"
477 | print('markDel:' + sql)
478 | targetdb.dml(sql)
479 | targetdb.commit()
480 | targetdb.close()
481 |
482 |
483 | def validateBackup(bk_id=None):
484 | sql = (
485 | "select a.id, a.bk_id, a.tag, date(start_time), real_path"
486 | " from user_backup a,user_backup_path b"
487 | " where a.tag = b.tag"
488 | " and is_complete not like '%N%'"
489 | " and is_deleted != 'Y'"
490 | " and transfer_complete = 'Y'"
491 | " and a.tag = '{}'"
492 | " and validate_status != 'passed'"
493 | " and start_time >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH)"
494 | " order by rand() limit 1"
495 | )
496 |
497 | sql2 = (
498 | "select a.id, a.bk_id, a.tag, date(start_time), real_path"
499 | " from user_backup a,user_backup_path b"
500 | " where a.tag = b.tag"
501 | " and a.bk_id = '{}'"
502 | )
503 |
504 | start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags = [], [], [], [], [], [], []
505 | for tag in bk_list:
506 | print(datetime.datetime.now())
507 | print(tag)
508 | logging.info('-='*20)
509 | logging.info('开始恢复: ' + tag)
510 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
511 | if bk_id:
512 | dql_res = catalogdb.dql(sql2.format(bk_id))
513 | else:
514 | dql_res = catalogdb.dql(sql.format(tag))
515 | result = dql_res[0] if dql_res else None
516 | if result:
517 | res_bk_id, res_tag, res_start_time, real_path = result[1], result[2], result[3], result[4]
518 | catalogdb.close()
519 | backup_path = real_path + str(res_start_time) + '/' + res_bk_id + '/'
520 | logging.info('Backup path: '+ backup_path )
521 | dbs = [ directory for directory in os.listdir(backup_path) if os.path.isdir(backup_path+directory) and directory != 'mysql' ]
522 | if dbs:
523 | for db in dbs:
524 | '''
525 | ([datetime.datetime(2017, 12, 25, 15, 11, 36, 480263), datetime.datetime(2017, 12, 25, 15, 33, 17, 292924), datetime.datetime(2017, 12, 25, 17, 10, 38, 226598), datetime.datetime(2017, 12, 25, 17, 10, 39, 374409)], [datetime.datetime(2017, 12, 25, 15, 33, 17, 292734), datetime.datetime(2017, 12, 25, 17, 10, 38, 226447), datetime.datetime(2017, 12, 25, 17, 10, 38, 855657), datetime.datetime(2017, 12, 25, 17, 10, 39, 776067)], [0, 0, 0, 0], [u'dadian', u'sdkv2', u'dopack', u'catalogdb'], [u'/data2/backup/db_backup/120.55.74.93/2017-12-23/b22694c4-e752-11e7-9370-00163e0007f1/', u'/data2/backup/db_backup/106.3.130.84/2017-12-16/12cb7486-e229-11e7-b172-005056b15d9c/'], [u'b22694c4-e752-11e7-9370-00163e0007f1', u'12cb7486-e229-11e7-b172-005056b15d9c'], ['\xe5\x9b\xbd\xe5\x86\x85sdk\xe4\xbb\x8e1', '\xe6\x96\xb0\xe5\xa4\x87\xe4\xbb\xbd\xe6\x9c\xba'])
526 | insert into user_recover_info(tag, bk_id, backup_path, db, start_time, end_time, elapsed_time, recover_status) values (国内sdk从1,b22694c4-e752-11e7-9370-00163e0007f1,/data2/backup/db_backup/120.55.74.93/2017-12-23/b22694c4-e752-11e7-9370-00163e0007f1/,dadian,2017-12-25 15:11:36.480263,2017-12-25 15:33:17.292734,1300.812471,sucess)
527 | insert into user_recover_info(tag, bk_id, backup_path, db, start_time, end_time, elapsed_time, recover_status) values (新备份机,12cb7486-e229-11e7-b172-005056b15d9c,/data2/backup/db_backup/106.3.130.84/2017-12-16/12cb7486-e229-11e7-b172-005056b15d9c/,sdkv2,2017-12-25 15:33:17.292924,2017-12-25 17:10:38.226447,5840.933523,sucess)
528 |
529 | 1 个 bk_id 对应3个备份,1 个 bk_id 对应1个备份 ,但是tag只append 了俩, 应该每个库append一次,或者改成字典
530 | '''
531 | tags.append(tag)
532 | backup_paths.append(backup_path)
533 | bk_ids.append(res_bk_id)
534 | db_list.append(db)
535 | full_backup_path = backup_path + db + '/'
536 | #print(full_backup_path)
537 | load_cmd = 'myloader -d {} --user=root --password=fanboshi --overwrite-tables --verbose=3 --threads=3'.format(full_backup_path)
538 | print(load_cmd)
539 | start_time.append(datetime.datetime.now())
540 | logging.info('Start recover '+ db )
541 | state = runProcess(load_cmd)
542 | recover_status.append(state)
543 | logging.info('Recover state:'+str(state))
544 | end_time.append(datetime.datetime.now())
545 | if state != 0:
546 | logging.info('Recover {} Failed'.format(db))
547 | elif state == 0:
548 | logging.info('Recover {} complete'.format(db))
549 | else:
550 | load_cmd = 'myloader -d {} --user=root --password=fanboshi --overwrite-tables --verbose=3 --threads=3'.format(backup_path)
551 | print(load_cmd)
552 | tags.append(tag)
553 | start_time.append(datetime.datetime.now())
554 | logging.info('Start recover')
555 | state = runProcess(load_cmd)
556 | recover_status.append(state)
557 | logging.info('Recover state:'+str(state))
558 | end_time.append(datetime.datetime.now())
559 | if state != 0:
560 | logging.info('Recover Failed')
561 | elif state == 0:
562 | logging.info('Recover complete')
563 | db_list.append('N/A')
564 | backup_paths.append(backup_path)
565 | bk_ids.append(res_bk_id)
566 | return start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags
567 |
568 |
569 | if __name__ == '__main__':
570 | '''
571 | 参数解析
572 | '''
573 | pybackup_version = 'pybackup 0.10.13.0'
574 | arguments = docopt(__doc__, version=pybackup_version)
575 | print(arguments)
576 |
577 | '''
578 | 解析配置文件获取参数
579 | '''
580 | cf = ConfigParser.ConfigParser()
581 | cf.read(os.path.split(os.path.realpath(__file__))[0] + '/pybackup.conf')
582 | section_name = 'CATALOG'
583 | cata_host = cf.get(section_name, "db_host")
584 | cata_port = cf.get(section_name, "db_port")
585 | cata_user = cf.get(section_name, "db_user")
586 | cata_passwd = cf.get(section_name, "db_passwd")
587 | cata_use = cf.get(section_name, "db_use")
588 |
589 | if not arguments['validate-backup'] and not arguments['mark-del']:
590 | section_name = 'TDB'
591 | tdb_host = cf.get(section_name, "db_host")
592 | tdb_port = cf.get(section_name, "db_port")
593 | tdb_user = cf.get(section_name, "db_user")
594 | tdb_passwd = cf.get(section_name, "db_passwd")
595 | tdb_use = cf.get(section_name, "db_use")
596 | tdb_list = cf.get(section_name, "db_list")
597 | try:
598 | global db_consistency
599 | db_consistency = cf.get(section_name, "db_consistency")
600 | except ConfigParser.NoOptionError,e:
601 | db_consistency = 'False'
602 | print('没有指定db_consistency参数,默认采用--database循环备份db_list中指定的数据库,数据库之间不保证一致性')
603 |
604 | if cf.has_section('rsync'):
605 | section_name = 'rsync'
606 | password_file = cf.get(section_name, "password_file")
607 | dest = cf.get(section_name, "dest")
608 | address = cf.get(section_name, "address")
609 | if dest[-1] != '/':
610 | dest += '/'
611 | rsync_enable = True
612 | else:
613 | rsync_enable = False
614 | print("没有在配置文件中指定rsync区块,备份后不传输")
615 |
616 | section_name = 'pybackup'
617 | tag = cf.get(section_name, "tag")
618 | elif arguments['validate-backup']:
619 | section_name = 'Validate'
620 | if arguments['--bk_id']:
621 | bk_list=list(arguments['--bk_id'])
622 | else:
623 | bk_list = cf.get(section_name, "bk_list").split(',')
624 |
625 | if arguments['mydumper'] and ('help' in arguments['ARG_WITHOUT_--'][0]):
626 | subprocess.call('mydumper --help', shell=True)
627 | elif arguments['only-rsync']:
628 | confLog()
629 | backup_dir = arguments['--backup-dir']
630 | if arguments['--bk-id']:
631 | transfer_start, transfer_end, transfer_elapsed, transfer_complete = rsync(backup_dir, address)
632 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
633 | sql = 'update user_backup set transfer_start=%s, transfer_end=%s, transfer_elapsed=%s, transfer_complete=%s where bk_id=%s'
634 | catalogdb.dml(sql, (transfer_start, transfer_end, transfer_elapsed, transfer_complete, arguments['--bk-id']))
635 | catalogdb.commit()
636 | catalogdb.close()
637 | else:
638 | rsync(backup_dir,address)
639 | elif arguments['mark-del']:
640 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
641 | markDel(arguments['--backup-dir'],catalogdb)
642 | elif arguments['validate-backup']:
643 | confLog()
644 | if arguments['--bk_id']:
645 | start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags = validateBackup(arguments['--bk_id'])
646 | else:
647 | start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags = validateBackup()
648 | print(start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags)
649 | if bk_ids:
650 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
651 | sql1 = "insert into user_recover_info(tag, bk_id, backup_path, db, start_time, end_time, elapsed_time, recover_status) values (%s,%s,%s,%s,%s,%s,%s,%s)"
652 | sql2 = "update user_backup set validate_status=%s where bk_id=%s"
653 | logging.info(zip(start_time, end_time, recover_status, db_list))
654 | for stime, etime, rstatus, db ,backup_path, bk_id, tag in zip(start_time, end_time, recover_status, db_list, backup_paths, bk_ids, tags):
655 | if rstatus == 0:
656 | status = 'sucess'
657 | failed_flag = False
658 | else:
659 | status = 'failed'
660 | failed_flag = True
661 | # print(sql1 % (tag.decode('utf-8'), bk_id, backup_path, db, stime, etime, (etime - stime).total_seconds(), status))
662 | logging.info(sql1 % (tag.decode('utf-8'), bk_id, backup_path, db, stime, etime, (etime - stime).total_seconds(), status))
663 | catalogdb.dml(sql1,(tag, bk_id, backup_path, db, stime, etime, (etime - stime).total_seconds(), status))
664 | if not failed_flag:
665 | catalogdb.dml(sql2,('passed', bk_id))
666 | catalogdb.commit()
667 | catalogdb.close()
668 | logging.info('恢复完成')
669 | else:
670 | logging.info('没有可用备份')
671 | print('没有可用备份')
672 | else:
673 | confLog()
674 | bk_id = str(uuid.uuid1())
675 | if arguments['mydumper']:
676 | mydumper_args = ['--' + x for x in arguments['ARG_WITHOUT_--']]
677 | is_rsync = True
678 | is_history = True
679 | if arguments['--no-rsync']:
680 | is_rsync = False
681 | if arguments['--no-history']:
682 | is_history = False
683 | if arguments['--only-backup']:
684 | is_history = False
685 | is_rsync = False
686 | print('is_rsync,is_history: ',is_rsync,is_history)
687 | bk_dir = [x for x in arguments['ARG_WITHOUT_--'] if 'outputdir' in x][0].split('=')[1]
688 | os.chdir(bk_dir)
689 | targetdb = Fandb(tdb_host, tdb_port, tdb_user, tdb_passwd, tdb_use)
690 | mydumper_version, mysql_version = getVersion(targetdb)
691 | start_time, end_time, elapsed_time, is_complete, bk_command, backuped_db, last_outputdir, backup_type = runBackup(
692 | targetdb)
693 |
694 | safe_command = safeCommand(bk_command)
695 |
696 | if 'N' not in is_complete:
697 | bk_size = getBackupSize(last_outputdir)
698 | master_info, slave_info = getMetadata(last_outputdir)
699 | if rsync_enable:
700 | if is_rsync:
701 | transfer_start, transfer_end, transfer_elapsed, transfer_complete_temp = rsync(bk_dir, address)
702 | transfer_complete = transfer_complete_temp
703 | transfer_count = 0
704 | while transfer_complete_temp != 'Y' and transfer_count < 3:
705 | transfer_start_temp, transfer_end, transfer_elapsed_temp, transfer_complete_temp = rsync(bk_dir, address)
706 | transfer_complete = transfer_complete + ',' + transfer_complete_temp
707 | transfer_count += 1
708 | transfer_elapsed = ( transfer_end - transfer_start ).total_seconds()
709 |
710 | else:
711 | transfer_start, transfer_end, transfer_elapsed, transfer_complete = None,None,None,'N/A (local backup)'
712 | dest = 'N/A (local backup)'
713 | else:
714 | bk_size = 'N/A'
715 | master_info, slave_info = 'N/A', 'N/A'
716 | transfer_start, transfer_end, transfer_elapsed, transfer_complete = None,None,None,'Backup failed'
717 | dest = 'Backup failed'
718 |
719 |
720 | if is_history:
721 | bk_server = getIP()
722 | catalogdb = Fandb(cata_host, cata_port, cata_user, cata_passwd, cata_use)
723 | sql = 'insert into user_backup(bk_id,bk_server,start_time,end_time,elapsed_time,backuped_db,is_complete,bk_size,bk_dir,transfer_start,transfer_end,transfer_elapsed,transfer_complete,remote_dest,master_status,slave_status,tool_version,server_version,pybackup_version,bk_command,tag) values(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)'
724 | if backup_type == 'for database':
725 | last_outputdir = os.path.abspath(os.path.join(last_outputdir,'..'))
726 | print(bk_id, bk_server, start_time, end_time, elapsed_time, backuped_db, is_complete, bk_size, last_outputdir, transfer_start, transfer_end,transfer_elapsed, transfer_complete, dest, master_info, slave_info, mydumper_version, mysql_version, pybackup_version, safe_command)
727 | catalogdb.dml(sql, (bk_id, bk_server, start_time, end_time, elapsed_time, backuped_db, is_complete, bk_size, last_outputdir, transfer_start, transfer_end,
728 | transfer_elapsed, transfer_complete, dest, master_info, slave_info, mydumper_version, mysql_version, pybackup_version, safe_command, tag))
729 | catalogdb.commit()
730 | catalogdb.close()
731 |
732 |
--------------------------------------------------------------------------------