├── .travis.yml.disabled ├── LICENSE.txt ├── README.rst ├── db_smart_backup.sh ├── elasticsearch.conf.sample ├── mongod.conf.sample ├── mysql.conf.sample ├── postgresql.conf.sample ├── redis.conf.sample ├── run_dbsmartbackup.sh ├── run_dbsmartbackups.sh ├── slapd.conf.sample └── test.py /.travis.yml.disabled: -------------------------------------------------------------------------------- 1 | language: python 2 | 3 | python: 4 | - 2.7 5 | 6 | # matrix: 7 | 8 | install: 9 | - sudo apt-get install postgresql-9.1 mysql-server 10 | 11 | # before_script: 12 | # - export DISPLAY=:99.0 13 | # - sh -e /etc/init.d/xvfb start 14 | 15 | script: sudo python test.py 16 | 17 | 18 | 19 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | Copyright (c) 2000-2024 2 | Makina Corpus 3 | Makina Corpus 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are 8 | met: 9 | 10 | 1. Redistributions of source code must retain the above copyright 11 | notice, this list of conditions and the following disclaimer. 12 | 13 | 2. Redistributions in binary form must reproduce the above copyright 14 | notice, this list of conditions and the following disclaimer in 15 | the documentation and/or other materials provided with the 16 | distribution. 17 | 18 | 3. Neither the name of copyright holders nor the names of its contributors may 19 | be used to endorse or promote products derived from this software 20 | without specific prior written permission. 21 | 22 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 23 | "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 24 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 25 | A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL COPYRIGHT HOLDERS OR
26 | CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
27 | EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
28 | PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
29 | PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
30 | LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
31 | NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
32 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
33 | 
34 | 
-------------------------------------------------------------------------------- /README.rst: --------------------------------------------------------------------------------
1 | =====================================================
2 | Backup Script for various databases:
3 | =====================================================
4 | Simple dump-based backup with intelligent rotation and hooks.
5 | Batteries-included support for mysql, mongodb, slapd & postgresql.
6 | 
7 | 
8 | DISCLAIMER - ABANDONED/UNMAINTAINED CODE / DO NOT USE
9 | =======================================================
10 | minitage project was terminated in 2013. Consequently, this repository and all associated resources (including related projects, code, documentation, and distributed packages such as Docker images, PyPI packages, etc.) are now explicitly declared **unmaintained** and **abandoned**.
11 | 
12 | I would like to remind everyone that this project’s free license has always been based on the principle that the software is provided "AS-IS", without any warranty or expectation of liability or maintenance from the maintainer.
13 | As such, it is used solely at the user's own risk, with no warranty or liability from the maintainer, including but not limited to any damages arising from its use.
14 | 
15 | Due to the enactment of the Cyber Resilience Act (EU Regulation 2024/2847), which significantly alters the regulatory framework, including penalties of up to €15M, combined with its demands for **unpaid** and **indefinite** liability, it has become untenable for me to continue maintaining all my Open Source Projects as a natural person.
16 | The new regulations impose personal liability risks and create an unacceptable burden, regardless of my personal situation now or in the future, particularly when the work is done voluntarily and without compensation.
17 | 
18 | **No further technical support, updates (including security patches), or maintenance, of any kind, will be provided.**
19 | 
20 | These resources may remain online, but solely for public archiving, documentation, and educational purposes.
21 | 
22 | Users are strongly advised not to use these resources in any active or production-related projects, and to seek alternative solutions that comply with the new legal requirements (EU CRA).
23 | 
24 | **Using these resources outside of these contexts is strictly prohibited and is done at your own risk.**
25 | 
26 | This project has been transferred to Makina Corpus ( https://makina-corpus.com ). This project and its associated resources, including published resources related to this project (e.g., from PyPI, Docker Hub, GitHub, etc.), may be removed starting **March 15, 2025**, especially if the CRA’s risks remain disproportionate.
27 | 
28 | .. contents::
29 | 
30 | 
31 | Badges
32 | ------
33 | 
34 | .. image:: https://travis-ci.org/kiorky/db_smart_backup.png
35 |    :target: http://travis-ci.org/kiorky/db_smart_backup
36 | 
37 | Supported databases
38 | -------------------
39 | - MongoDB
40 | - PostgreSQL
41 | - Redis
42 | - Elasticsearch
43 | - MySQL
44 | - slapd (OpenLDAP)
45 | 
46 | Why another tool?
47 | --------------------
48 | - There are great tools out there, but they did not fit our needs and
49 |   knowledge, and some of them did not have many tests, sorry.
50 | - We just wanted a simple bash script using **dumps** (even in custom format
51 |   for postgres), not snapshots. So, for example, PostgreSQL PITR WALs were not an
52 |   option, which eliminated, btw, *barman* & *pg_rman*. All the other shell scripts,
53 |   including *automysqlbackup*/*autopostgresqlbackup*, did not fit exactly all the features we
54 |   wanted, and some were just too bash-complicated for our own little brains.
55 | - We wanted hooks to react at each backup stage; those hooks can be in another
56 |   language, this is up to the user (very useful for monitoring stuff).
57 | - We wanted a generic script for any database, provided you add support for
58 |   it; this consists of just writing a 'global' and a 'dump' function. For more
59 |   information, read the source, Luke.
60 | 
61 | - **WARNING**
62 |   DO NOT PUT ANY DATA UNDER THE DATADIR OTHER THAN WHAT DBSMARTBACKUP MANAGES
63 | 
64 | So the main features/requirements are:
65 | 
66 | - POSIX shell compliant (a goal, but not that tested; the really tested shell
67 |   is bash in POSIX mode)
68 | - **PostgreSQL / MySQL support** for simple database and privileges
69 |   dumps
70 | - Reasonably unit **tested**
71 | - XZ **compression** if available
72 | - Easily **extensible** to add another backup type / generic backup methods
73 | - **Optional hooks** at each stage of the process, addable via configuration
74 |   (bash functions to uncomment)
75 | - **Keep a fixed number of dumps**: recent ones, old ones, and in a smart way.
76 |   More on that later in this document. For example, the default is to keep
77 |   the last 24 dumps, then 14 days (1 per day), 8 weeks (1 per week) and 12
78 |   months (1 per month).
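The smart retention described above relies on hard links: each retention directory (daily, weekly, monthly, lastsnapshots) links to the real file in ``dumps/``, and a dump becomes prunable once nothing links to it anymore. The following standalone sketch (illustrative only: made-up file names in a throwaway temp directory, not code taken from db_smart_backup.sh) shows how such orphans can be detected via the hard-link count:

```shell
# Illustrative sketch, not part of db_smart_backup.sh; file names are made up.
tmp=$(mktemp -d)
mkdir -p "$tmp/dumps" "$tmp/weekly"
echo data > "$tmp/dumps/db_20240101.sql"
echo data > "$tmp/dumps/db_20240102.sql"
# retention directories keep hard links, not copies: link only the first dump
ln "$tmp/dumps/db_20240101.sql" "$tmp/weekly/db_2024_01.sql"
# a dump whose link count fell back to 1 is an orphan, i.e. prunable
orphans=$(find "$tmp/dumps" -type f -links 1)
echo "$orphans"
rm -rf "$tmp"
```

Here only ``db_20240102.sql`` is reported: the other dump still carries a second link from ``weekly/``, so the pruning pass would leave it alone.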
79 | 
80 | 
81 | Installation
82 | ------------
83 | ::
84 | 
85 |     curl -OJLs https://raw.githubusercontent.com/kiorky/db_smart_backup/master/db_smart_backup.sh
86 |     curl -OJLs https://raw.githubusercontent.com/kiorky/db_smart_backup/master/run_dbsmartbackups.sh
87 |     chmod +x db_smart_backup.sh run_dbsmartbackups.sh
88 | 
89 | Generate a config file::
90 | 
91 |     ./db_smart_backup.sh --gen-config /path/to/config
92 |     vim /path/to/config
93 | 
94 | Backup::
95 | 
96 |     ./db_smart_backup.sh /path/to/config
97 | 
98 | 
99 | 
100 | Backup all found databases in cron
101 | -----------------------------------
102 | We also bundle a script named **run_dbsmartbackups.sh** which searches /etc/dbsmartbackup for any database configuration:
103 | 
104 | - pg: /etc/dbsmartbackup/postgresql.conf
105 | - mysql: /etc/dbsmartbackup/mysql.conf
106 | - mongodb: /etc/dbsmartbackup/mongod.conf
107 | - slapd: /etc/dbsmartbackup/slapd.conf
108 | - redis: /etc/dbsmartbackup/redis.conf
109 | - elasticsearch: /etc/dbsmartbackup/elasticsearch.conf
110 | 
111 | Be sure to have the scripts in your PATH::
112 | 
113 |     curl -OJLs https://raw.githubusercontent.com/kiorky/db_smart_backup/master/db_smart_backup.sh
114 |     curl -OJLs https://raw.githubusercontent.com/kiorky/db_smart_backup/master/run_dbsmartbackups.sh
115 |     chmod +x db_smart_backup.sh run_dbsmartbackups.sh
116 |     mkdir /etc/dbsmartbackup
117 | 
118 | In /etc/dbsmartbackup, generate a config file (one of: mysql.conf, mongod.conf, slapd.conf, postgresql.conf)::
119 | 
120 |     ./db_smart_backup.sh --gen-config /etc/dbsmartbackup/.conf
121 |     vim /etc/dbsmartbackup/.conf
122 | 
123 | Testing the backup::
124 | 
125 |     ./db_smart_backup.sh /etc/dbsmartbackup/.conf
126 | 
127 | Only execute the pruning policy::
128 | 
129 |     ./db_smart_backup.sh -p /etc/dbsmartbackup/.conf
130 | 
131 | Test the cron job that searches for all possible things to back up::
132 | 
133 |     run_dbsmartbackups.sh
134 | 
135 | Add it to cron::
136 | 
137 |     0 0 * * * root /usr/bin/run_dbsmartbackups.sh --no-colors
--quiet
138 | 
139 | 
140 | For postgresql, you can configure the path(s) to your postgresql.conf file(s) by
141 | exporting "PG_CONFS", a space-separated list of absolute paths to
142 | postgresql.conf files.
143 | Note that on Red Hat or Debian based systems, the default PG_CONFS should be OK.
144 | 
145 | Changelog
146 | ----------
147 | 
148 | Credits
149 | -------------
150 | - by Makina Corpus / freesoftware@makina-corpus.com
151 | - inspired by automysqlbackup/autopostgresqlbackup
152 | 
153 | The great things
154 | -----------------
155 | - Hook support for each stage; hooks are bash functions acting as entry points
156 |   for you to customize the backup and react to what happens during execution
157 | - Smart yet simple retention policies.
158 |   The idea is to have a directory with all the dumps for all days of the year,
159 |   and then hard links in subdirs to those files, for easy access
160 |   but also to triage what to rotate and what to prune::
161 | 
162 |     POSTGRESQL/
163 |       DBNAME/
164 |         dumps/
165 |           DBNAME_20xx0101_01-01-01.sql.compressed <- 01/01/20xx
166 |           DBNAME_20xx0102_01-01-01.sql.compressed
167 |           DBNAME_20xx0103_01-01-01.sql.compressed
168 |           DBNAME_20xx0107_01-01-01.sql.compressed
169 |           DBNAME_20xx0108_01-01-01.sql.compressed
170 |           DBNAME_20xx0131_01-01-01.sql.compressed
171 |           DBNAME_20xx0202_01-01-01.sql.compressed
172 |         lastsnapshots/
173 |           DBNAME_20xx0101_01-01-01.sql.compressed
174 |           DBNAME_20xx0102_01-01-01.sql.compressed
175 |           DBNAME_20xx0202_01-01-01.sql.compressed
176 |         monthly/
177 |           20xx_01_DBNAME_20xx0101.sql.compressed -> /fullpath/DBNAME/dumps/DBNAME_20xx0101.sql.compressed
178 |           20xx_02_DBNAME_20xx0201.sql.compressed -> /fullpath/DBNAME/dumps/DBNAME_20xx0202.sql.compressed
179 |           20xx_03_DBNAME_20xx0301.sql.compressed -> /fullpath/DBNAME/dumps/DBNAME_20xx0202.sql.compressed
180 |         weekly/
181 |           20xx_01_DBNAME_20xx0101.sql.compressed -> /fullpath/DBNAME/dumps/DBNAME_20xx0101.sql.compressed
182 |           20xx_02_DBNAME_20xx0108.sql.compressed ->
/fullpath/DBNAME/dumps/DBNAME_20xx0108.sql.compressed
183 |         daily/
184 |           20xx_01_01_DBNAME_20xx0101.sql.compressed -> /fullpath/DBNAME/dumps/DBNAME_20xx0101.sql.compressed
185 |           20xx_02_01_DBNAME_20xx0108.sql.compressed -> /fullpath/DBNAME/dumps/DBNAME_20xx0108.sql.compressed
186 | 
187 | - Indeed:
188 | 
189 |   - The first thing to do after a backup is to check whether a folder has more than the
190 |     configured number of backups for each type of rotation (month, week, day, snapshot)
191 |     and clean the oldest first.
192 |   - Then we just have to prune dumps whose hard link count is strictly inferior to 2,
193 |     meaning that none of the retention policies links this backup anymore. This
194 |     is what we call an orphan, and it is ready to be pruned.
195 |   - Indeed, this means that **our backups are only in the dumps folder**.
196 | 
197 | - How can I see that the other directories contain only hard links to files in the dumps directory?
198 | 
199 |   - You can see the hard links with ls in two ways: use `ls -i` to get the
200 |     real inode number in the first column, or `ls -l` to get the hard link counters.
201 |     ::
202 | 
203 |       # ls -il /var/backup/postgresql/localhost/foobar/dumps/
204 |       total 13332
205 |       14044 -rw-r----- 5 root root 1237208 22 mars 16:19 foobar_2014-03-22_16-19-34.sql
206 |       14049 -rw-r----- 2 root root 1237208 22 mars 16:25 foobar_2014-02-22_11-25-53.sql
207 |       14054 -rw-r----- 2 root root 1237208 22 mars 16:27 foobar_2014-01-22_15-27-22.sql
208 |       (...)
209 |       # ls -il /var/backup/postgresql/localhost/foobar/weekly/
210 |       total 1212
211 |       14044 -rw-r----- 5 root root 1237208 22 mars 16:19 foobar_2014_12.sql
212 |       ___^ inode
213 |       _________________^ here we see the hard link counter on this file
214 | 
215 | 
216 | 
217 | Backup types
218 | -------------
219 | PostgreSQL & MySQL specificities
220 | ++++++++++++++++++++++++++++++++++++++++
221 | - We use the traditional PostgreSQL environment variables to set the host, port, password and user at backup
222 |   time
223 | 
224 | - For PostgreSQL, you will certainly only have to set BACKUP_TYPE to
225 |   postgresql
226 | - For MySQL, you may only have to input the password
227 | 
228 | Add another backup type
229 | ++++++++++++++++++++++++
230 | You first need to read the implementations for **mysql** and **postgresql**; those are
231 | really simple. Then follow the next guide (you do not need to make the script
232 | call your functions, they are introspected):
233 | 
234 | - Add a function **yourtype_set_connection_vars** to set any necessary extra global variables needed
235 |   at the connect phase to your service
236 | - Add a function **yourtype_check_connectivity** that dies in error if the
237 |   connection is not possible (use the **die_in_error**
238 |   function)
239 | - Add a function **yourtype_set_vars** to set any necessary extra global variables needed
240 |   to handle your service
241 | - Add a function **yourtype_get_all_databases** that returns a space-separated
242 |   list of your databases.
243 | - Add a function **yourtype_dump** that will dump a database to a file, or a
244 |   stub returning 0 as $? (call **/bin/true**) if it is not relevant for your
245 |   backup type.
246 | - Add a function **yourtype_dumpall**; even if one of them
247 |   is just an empty stub, the script will then introspect itself to find
248 |   them.
Those functions must set the **LAST_BACKUP_STATUS** either to **""**
249 |   on success or to **"failure"** if the backup failed.
250 | - Add what is needed to load the configuration in the default configuration
251 |   file in the **generate_configuration_file** method
252 | - Hack the defaults and variables in **set_vars**, the same way, if
253 |   necessary.
254 | 
255 | Hooks
256 | ---------
257 | - We provide a hook mechanism to let you run custom code at each stage of
258 |   the backup program. For this, you just need to uncomment the relevant part in
259 |   your configuration file and implement whatever code you want; you can even call
260 |   another script in another language.
261 | 
262 |   - after the backup program starts: **pre_backup_hook**
263 |   - after the global backup: **post_global_backup_hook**
264 |   - after the global backup (failure): **post_global_backup_failure_hook**
265 |   - after a specific db backup: **post_db_backup_hook**
266 |   - after a specific db backup (failure): **post_db_backup_failure_hook**
267 |   - after the backups rotation: **post_rotate_hook**
268 |   - after the backup orphans cleanup: **post_cleanup_hook**
269 |   - at backup end: **post_backup_hook**
270 | 
271 | - Note that in the hook's environment you will have access to all the variables
272 |   defined and exported by the script.
273 |   Just read the source to see what to test and how.
274 | 
275 | Options
276 | -----------
277 | - Read the script header to know what each option can do
278 | - You'll need to tweak at least:
279 | 
280 |   - The database identifiers
281 |   - The backup root location (/var/backup/ by default)
282 |   - Which type of backup to do (maybe only postgresql)
283 |   - The retention policy (there's a default one)
284 | 
285 | 
286 | Backup rotation
287 | ------------------
288 | We use hard links to achieve this, but be aware that filesystems have limits:
289 | 
290 | - the number of hard links per file (2^32 on modern filesystems) is more than enough:
291 |   count at most something like **366x2+57+12** links for a year and a db;
292 | - all subdirs should be on the same mount point as the **dumps** directory.
293 | 
294 | Default policy
295 | ++++++++++++++
296 | - We keep the last **24** dumps
297 | - We keep 1 dump per day for the last **14** days
298 | - We keep 1 backup per week for the last **8** weeks
299 | - We keep 1 backup per month for the last **12** months
300 | 
301 | Please Note!!
302 | --------------
303 | I take no responsibility for any data loss or corruption when using this script.
304 | This script will not help in the event of a hard drive crash if a
305 | copy of the backup has not been stored offline or on another PC.
306 | You should copy your backups offline regularly for best protection.
307 | Happy backing up...
-------------------------------------------------------------------------------- /db_smart_backup.sh: --------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | # LICENSE: BSD 3 clause
3 | # Author: Makina Corpus / freesoftware@makina-corpus.com
4 | 
5 | __NAME__="db_smart_backup"
6 | 
7 | # even if it is not really tested, we are trying to get full posix compatibility
8 | # and to run on another shell than bash
9 | #if [ x"$SHELL" = "x/bin/bash" ];then
10 | #    set -o posix &> /dev/null
11 | #fi
12 | 
13 | generate_configuration_file() {
14 |     cat > ${DSB_CONF_FILE} << EOF
15 | 
16 | # A script can run only for one database type and a specific host
17 | # at a time (mysql, postgresql)
18 | # But you can run it with multiple configuration files
19 | # You can obviously share the same base backup directory.
20 | 
21 | # set to 1 to deactivate colors (cron)
22 | #NO_COLOR=""
23 | 
24 | # Choose Compression type.
(gzip or bzip2 or xz or zstd)
25 | #COMP=bzip2
26 | 
27 | # User to run dump binaries as, defaults to the logged-in user
28 | #RUNAS=postgres
29 | # DB user to connect to the database with, defaults to \$RUNAS
30 | #DBUSER=postgres
31 | 
32 | ######## Backup settings
33 | # one of: postgresql mysql
34 | #BACKUP_TYPE=postgresql
35 | #BACKUP_TYPE=mysql
36 | #BACKUP_TYPE=mongodb
37 | #BACKUP_TYPE=slapd
38 | #BACKUP_TYPE=redis
39 | #BACKUP_TYPE=es
40 | 
41 | # Backup directory location e.g /backups
42 | #TOP_BACKUPDIR="/var/db_smart_backup"
43 | 
44 | # also do a global backup (used by postgresql to save roles/groups, and only that)
45 | #DO_GLOBAL_BACKUP="1"
46 | 
47 | # HOW MANY BACKUPS TO KEEP & LEVEL
48 | # How many snapshots to keep (lastlog for dump)
49 | # How many per day
50 | #KEEP_LASTS=24
51 | #KEEP_DAYS=14
52 | #KEEP_WEEKS=8
53 | #KEEP_MONTHES=12
54 | #KEEP_LOGS=60
55 | 
56 | # directories permission
57 | #DPERM="750"
58 | 
59 | # files permission
60 | #FPERM="640"
61 | 
62 | # OWNER/GROUP
63 | #OWNER=root
64 | #GROUP=root
65 | 
66 | ######## Database connection settings
67 | # host defaults to localhost
68 | # and without port we use a connection via socket
69 | #HOST=""
70 | #PORT=""
71 | 
72 | # defaults to postgres on postgresql backup
73 | # as ident is used by default on many installs, we certainly
74 | # do not need either a password
75 | #PASSWORD=""
76 | 
77 | # List of DBNAMES for Daily/Weekly Backup e.g.
"DB1 DB2 DB3" 78 | #DBNAMES="all" 79 | 80 | # List of DBNAMES to EXLUCDE if DBNAMES are set to all (must be in " quotes) 81 | #DBEXCLUDE="" 82 | 83 | ######### Elasticsearch 84 | # ES_URI="http://localhost:9200" 85 | # ES_USER="user" 86 | # ES_PASSWORD="secret" 87 | # path to snapshots (have to be added to path.repo in elasticsearch.yml) 88 | # ES_SNAPSHOTS_DIR="\${ES_SNAPSHOTS_DIR:-\${ES_TMP}/snapshots}" 89 | # elasticsearch daemon user 90 | 91 | ######### Postgresql 92 | # binaries path 93 | #PSQL="" 94 | #PG_DUMP="" 95 | #PG_DUMPALL="" 96 | 97 | ######## slapd 98 | # SLAPCAT_ARGS="\${SLAPCAT_ARGS:-""}" 99 | # SLAPD_DIR="\${SLAPD_DIR:-/var/lib/ldap}" 100 | 101 | # OPT string for use with pg_dump ( see man pg_dump ) 102 | #OPT="--create -Fc" 103 | 104 | # OPT string for use with pg_dumpall ( see man pg_dumpall ) 105 | #OPTALL="--globals-only" 106 | 107 | ######## MYSQL 108 | #MYSQL_SOCK_PATHS="" 109 | #MYSQL="" 110 | #MYSQLDUMP="" 111 | # do we disable mysqldump --single-transaction0 112 | #MYSQLDUMP_NO_SINGLE_TRANSACTION="" 113 | # disable to enable autocommit 114 | #MYSQLDUMP_AUTOCOMMIT="1" 115 | # set to enable complete inserts (true by default, disabling enable extended inserts) 116 | #MYSQLDUMP_COMPLETEINSERTS="1" 117 | # do we disable mysqldump --lock-tables=false 118 | #MYSQLDUMP_LOCKTABLES="" 119 | # set to add extra dumps info 120 | #MYSQLDUMP_DEBUG="" 121 | # set to disable dump routines 122 | #MYSQLDUMP_NOROUTINES="" 123 | # do we use ssl to connect 124 | #MYSQL_USE_SSL="" 125 | 126 | ######## mongodb 127 | # MONGODB_PATH="\${MONGODB_PATH:-"/var/lib/mongodb"}" 128 | # MONGODB_USER="\${MONGODB_USER:-""}" 129 | # MONGODB_PASSWORD="\${MONGODB_PASSWORD:-"\${PASSWORD}"}" 130 | # MONGODB_ARGS="\${MONGODB_ARGS:-""}" 131 | 132 | ######## Redis 133 | # REDIS_PATH="\${REDIS_PATH:-"/var/lib/redis"}" 134 | 135 | ######## Hooks (optionnal) 136 | # functions names which point to functions defined in your 137 | # configuration file 138 | # Pay attention not to make 
function names colliding with functions in the script
139 | 
140 | #
141 | # All those hooks can call external programs (eg: python scripts)
142 | # Look inside the shell script to know which variables you'll have
143 | # set in the context; you'll have useful information available at
144 | # each stage, like the dbname, etc.
145 | #
146 | 
147 | # Function to run before backups (uncomment to use)
148 | #pre_backup_hook() {
149 | #}
150 | 
151 | # Function to run after global backup (uncomment to use)
152 | #post_global_backup_hook() {
153 | #}
154 | 
155 | # Function to run after a global backup failure (uncomment to use)
156 | #post_global_backup_failure_hook() {
157 | #}
158 | 
159 | # Function run after each database backup if the backup failed
160 | #post_db_backup_failure_hook() {
161 | #}
162 | 
163 | # Function to run after each database backup (uncomment to use)
164 | #post_db_backup_hook() {
165 | #}
166 | 
167 | # Function to run after backup rotation
168 | #post_rotate_hook() {
169 | #}
170 | 
171 | # Function to run after backup orphan cleanup
172 | #post_cleanup_hook() {
173 | #}
174 | 
175 | # Function run after backups (uncomment to use)
176 | #post_backup_hook="mycompany_postbackup"
177 | 
178 | # Function to run on failure, after the recap mail emission
179 | #failure_hook() {
180 | #}
181 | # vim:set ft=sh:
182 | EOF
183 |     chmod 640 "${DSB_CONF_FILE}"
184 | }
185 | 
186 | fn_exists() {
187 |     echo $(LC_ALL=C;LANG=C;type ${1} 2>&1 | head -n1 | grep -q "is a function";echo $?)
188 | }
189 | 
190 | 
191 | print_name() {
192 |     echo -e "[${__NAME__}]"
193 | }
194 | 
195 | log() {
196 |     echo -e "${RED}$(print_name) ${@}${NORMAL}" 1>&2
197 | }
198 | 
199 | cyan_log() {
200 |     echo -e "${CYAN}${@}${NORMAL}" 1>&2
201 | }
202 | 
203 | die_() {
204 |     ret="${1}"
205 |     shift
206 |     cyan_log "ABRUPT PROGRAM TERMINATION: ${@}"
207 |     do_hook "FAILURE command output" "failure_hook"
208 |     exit ${ret}
209 | }
210 | 
211 | die() {
212 |     die_ 1 "${@}"
213 | }
214 | 
215 | die_in_error_() {
216 |     ret="${1}"
217 |     shift
218 |     msg="${@:-"${ERROR_MSG}"}"
219 |     if [ x"${ret}" != "x0" ];then
220 |         die_ "${ret}" "${msg}"
221 |     fi
222 | }
223 | 
224 | die_in_error() {
225 |     die_in_error_ "$?" "${@}"
226 | }
227 | 
228 | yellow_log(){
229 |     echo -e "${YELLOW}$(print_name) ${@}${NORMAL}" 1>&2
230 | }
231 | 
232 | readable_date() {
233 |     date +"%Y-%m-%d %H:%M:%S.%N"
234 | }
235 | 
236 | debug() {
237 |     if [ x"${DSB_DEBUG}" != "x" ];then
238 |         yellow_log "DEBUG $(readable_date): $@"
239 |     fi
240 | }
241 | 
242 | remove_files() {
243 |     for i in $@;do if [ -e $i ];then rm -f "$i";fi;done
244 | }
245 | 
246 | usage() {
247 |     cyan_log "- Backup your databases with ease"
248 |     yellow_log "  $0"
249 |     yellow_log "    /path/to/config"
250 |     yellow_log "      alias to --backup"
251 |     yellow_log "    -b|--backup /path/to/config:"
252 |     yellow_log "      backup databases"
253 |     yellow_log "    --gen-config [/path/to/config (default: ${DSB_CONF_FILE}_DEFAULT)]"
254 |     yellow_log "      generate a new config file"
255 | }
256 | 
257 | runas() {
258 |     echo "${RUNAS:-"$(whoami)"}"
259 | }
260 | 
261 | quote_all() {
262 |     cmd=""
263 |     for i in "${@}";do
264 |         cmd="${cmd} \"$(echo "${i}"|sed "s/\"/\"\'\"/g")\""
265 |     done
266 |     echo "${cmd}"
267 | }
268 | 
269 | runcmd_as() {
270 |     cd "${RUNAS_DIR:-/}"
271 |     bin="${1}"
272 |     shift
273 |     args=$(quote_all "${@}")
274 |     if [ x"$(runas)" = "x" ] || [ x"$(runas)" = "x$(whoami)" ];then
275 |         ${bin} "${@}"
276 |     else
277 |         su ${RUNAS} -c "${bin} ${args}"
278 |     fi
279 | }
280 | 
get_compressed_name() { 282 | if [ x"${COMP}" = "xxz" ];then 283 | echo "${1}.xz"; 284 | elif [ x"${COMP}" = "xgz" ] || [ x"${COMP}" = "xgzip" ];then 285 | echo "${1}.gz"; 286 | elif [ x"${COMP}" = "xbzip2" ] || [ x"${COMP}" = "xbz2" ];then 287 | echo "${1}.bz2"; 288 | elif [ x"${COMP}" = "xzstd" ] || [ x"${COMP}" = "xzst" ];then 289 | echo "${1}.zst"; 290 | else 291 | echo "${1}"; 292 | fi 293 | } 294 | 295 | set_compressor() { 296 | for comp in ${COMP} ${COMPS};do 297 | c="" 298 | if [ x"${comp}" = "xxz" ];then 299 | XZ="${XZ:-xz}" 300 | c="${XZ}" 301 | elif [ x"${comp}" = "xgz" ] || [ x"${comp}" = "xgzip" ];then 302 | GZIP="${GZIP:-gzip}" 303 | c="${GZIP}" 304 | elif [ x"${comp}" = "xbzip2" ] || [ x"${comp}" = "xbz2" ];then 305 | BZIP2="${BZIP2:-bzip2}" 306 | c="${BZIP2}" 307 | elif [ x"${comp}" = "xzstd" ] || [ x"${comp}" = "xzst" ];then 308 | ZSTD="${ZSTD:-zstd}" 309 | c="${ZSTD}" 310 | else 311 | c="nocomp" 312 | fi 313 | # test that the binary is present 314 | if [ x"$c" != "xnocomp" ] && [ -e "$(which "$c")" ];then 315 | COMP=$c 316 | break 317 | else 318 | COMP="nocomp" 319 | fi 320 | done 321 | export COMP=$COMP 322 | } 323 | 324 | comp_msg() { 325 | sz="";s1="";s2="";ratio="" 326 | if [ -e "$zname" ];then 327 | sz="$(du -sh "$zname"|awk '{print $1}') " 328 | s1="$(du -sb "$zname"|awk '{print $1}')" 329 | fi 330 | if [ -e "$name" ];then 331 | s2="$(du -sb "$name"|awk '{print $1}')" 332 | fi 333 | if [ -e "$zname" ] && [ -e "$name" ];then 334 | ratio=$(echo "$s1" "${s2}" | awk '{printf "%.2f \n", $1/$2}') 335 | fi 336 | log "${RED}${NORMAL}${YELLOW} ${COMP}${NORMAL}${RED} -> ${YELLOW}${zname}${NORMAL} ${RED}(${NORMAL}${YELLOW} ${sz} ${NORMAL}${RED}/${NORMAL} ${YELLOW}${ratio}${NORMAL}${RED})${NORMAL}" 337 | } 338 | 339 | 340 | cleanup_uncompressed_dump_if_ok() { 341 | comp_msg 342 | dumpfiles="" 343 | if [ x"$?" != x"0" ];then 344 | dumpfiles="${zname}" 345 | fi 346 | if [ x"$?" 
= x"0" ];then 347 | if [ "${PIPED_BACKUP_COMPRESSION}" != "x1" ];then 348 | dumpfiles="${name}" 349 | fi 350 | fi 351 | remove_files "$dumpfiles" 352 | } 353 | 354 | do_compression() { 355 | COMPRESSED_NAME="" 356 | name="${1}" 357 | zname="${2:-$(get_compressed_name ${1})}" 358 | if [ x"${COMP}" = "xxz" ];then 359 | if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then 360 | "${XZ}" --stdout -f -k "${name}" > "${zname}" 361 | else 362 | "${XZ}" --stdout -f -k > "${zname}" 363 | fi 364 | cleanup_uncompressed_dump_if_ok 365 | elif [ x"${COMP}" = "xgz" ] || [ x"${COMP}" = "xgzip" ];then 366 | if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then 367 | "${GZIP}" -f -c "${name}" > "${zname}" 368 | else 369 | "${GZIP}" -f -c > "${zname}" 370 | fi 371 | cleanup_uncompressed_dump_if_ok 372 | elif [ x"${COMP}" = "xbzip2" ] || [ x"${COMP}" = "xbz2" ];then 373 | if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then 374 | "${BZIP2}" -f -k -c "${name}" > "${zname}" 375 | else 376 | "${BZIP2}" -f -k -c > "${zname}" 377 | fi 378 | cleanup_uncompressed_dump_if_ok 379 | elif [ x"${COMP}" = "xzstd" ] || [ x"${COMP}" = "xzst" ];then 380 | if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then 381 | "${ZSTD}" -f -c -q "${name}" > "${zname}" 382 | else 383 | "${ZSTD}" -f -c -q > "${zname}" 384 | fi 385 | cleanup_uncompressed_dump_if_ok 386 | else 387 | /bin/true # noop 388 | fi 389 | if ( [ -e "${zname}" ] && [ x"${zname}" != "x${name}" ] ) || [ "${PIPED_BACKUP_COMPRESSION}" = "x1" ];then 390 | COMPRESSED_NAME="${zname}" 391 | else 392 | if [ -e "${name}" ];then 393 | log "No compressor found, no compression done" 394 | COMPRESSED_NAME="${name}" 395 | else 396 | log "Compression error" 397 | fi 398 | fi 399 | if [ x"${COMPRESSED_NAME}" != "x" ];then 400 | fix_perm "${fic}" 401 | fi 402 | } 403 | 404 | get_logsdir() { 405 | dir="${TOP_BACKUPDIR}/logs" 406 | echo "$dir" 407 | } 408 | 409 | get_logfile() { 410 | filen="$(get_logsdir)/${__NAME__}_${FULL_FDATE}.log" 411 | echo ${filen} 412 | } 413 | 
414 | get_backupdir() { 415 | dir="${TOP_BACKUPDIR}/${BACKUP_TYPE:-}" 416 | if [ x"${BACKUP_TYPE}" = "xpostgresql" ];then 417 | host="${HOST}" 418 | if [ x"${HOST}" = "x" ] || [ x"${PGHOST}" = "x" ];then 419 | host="localhost" 420 | fi 421 | if [ -e $host ];then 422 | host="localhost" 423 | fi 424 | dir="$dir/$host" 425 | fi 426 | echo "$dir" 427 | } 428 | 429 | create_db_directories() { 430 | db="${1}" 431 | dbdir="$(get_backupdir)/${db}" 432 | created="0" 433 | for d in\ 434 | "$dbdir"\ 435 | "$dbdir/weekly"\ 436 | "$dbdir/monthly"\ 437 | "$dbdir/dumps"\ 438 | "$dbdir/daily"\ 439 | "$dbdir/lastsnapshots"\ 440 | ;do 441 | if [ ! -e "$d" ];then 442 | mkdir -p "$d" 443 | created="1" 444 | fi 445 | done 446 | if [ x"${created}" = "x1" ];then 447 | fix_perms 448 | fi 449 | } 450 | 451 | link_into_dirs() { 452 | db="${1}" 453 | real_filename="${2}" 454 | real_zfilename="$(get_compressed_name "${real_filename}")" 455 | daily_filename="$(get_backupdir)/${db}/daily/${db}_${YEAR}_${DOY}_${DATE}.${BACKUP_EXT}" 456 | lastsnapshots_filename="$(get_backupdir)/${db}/lastsnapshots/${db}_${YEAR}_${DOY}_${FDATE}.${BACKUP_EXT}" 457 | weekly_filename="$(get_backupdir)/${db}/weekly/${db}_${YEAR}_${W}.${BACKUP_EXT}" 458 | monthly_filename="$(get_backupdir)/${db}/monthly/${db}_${YEAR}_${MNUM}.${BACKUP_EXT}" 459 | lastsnapshots_zfilename="$(get_compressed_name "${lastsnapshots_filename}")" 460 | daily_zfilename="$(get_compressed_name "${daily_filename}")" 461 | weekly_zfilename="$(get_compressed_name "$weekly_filename")" 462 | monthly_zfilename="$(get_compressed_name "${monthly_filename}")" 463 | if [ ! -e "${daily_zfilename}" ];then 464 | ln "${real_zfilename}" "${daily_zfilename}" 465 | fi 466 | if [ ! -e "${weekly_zfilename}" ];then 467 | ln "${real_zfilename}" "${weekly_zfilename}" 468 | fi 469 | if [ ! -e "${monthly_zfilename}" ];then 470 | ln "${real_zfilename}" "${monthly_zfilename}" 471 | fi 472 | if [ ! 
-e "${lastsnapshots_zfilename}" ];then 473 | ln "${real_zfilename}" "${lastsnapshots_zfilename}" 474 | fi 475 | } 476 | 477 | dummy_callee_for_tests() { 478 | echo "here" 479 | } 480 | 481 | dummy_for_tests() { 482 | dummy_callee_for_tests 483 | } 484 | 485 | remove_backup_status_files() { 486 | remove_files "$statusfile" "$statusfile.c" 487 | } 488 | 489 | do_db_backup_() { 490 | LAST_BACKUP_STATUS="" 491 | db="${1}" 492 | fun_="${2}" 493 | create_db_directories "${db}" 494 | real_filename="$(get_backupdir)/${db}/dumps/${db}_${FDATE}.${BACKUP_EXT}" 495 | zreal_filename="$(get_compressed_name "${real_filename}")" 496 | statusfile=$(mktemp) 497 | remove_backup_status_files 498 | adb="${YELLOW}${db}${NORMAL} " 499 | if [ x"${db}" = x"${GLOBAL_SUBDIR}" ];then 500 | adb="" 501 | fi 502 | log "Dumping database ${adb}${RED}to maybe uncompressed dump: ${YELLOW}${real_filename}${NORMAL}" 503 | if [ "x${PIPED_BACKUP_COMPRESSION}" = "x1" ];then 504 | # ensure backup + compression is atomic with the absence of +o pipefail in old posix shells 505 | ( $fun_ "${db}" "${real_filename}" && touch "$statusfile" ) \ 506 | | ( do_compression "${real_filename}" "${zreal_filename}" && touch "$statusfile.c" ) 507 | 508 | if [ "x$?" != "x0" ] || ! ( [ -e "$statusfile" ] && [ -e "$statusfile.c" ] );then 509 | remove_backup_status_files 510 | LAST_BACKUP_STATUS="failure" 511 | log "${CYAN} Backup of ${db} failed !!!${NORMAL}" 512 | fi 513 | else 514 | $fun_ "${db}" "${real_filename}" 515 | fi 516 | if [ x"$?" 
!= "x0" ];then
517 |         LAST_BACKUP_STATUS="failure"
518 |         log "${CYAN} Backup of ${db} failed !!!${NORMAL}"
519 |     else
520 |         if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then
521 |             do_compression "${real_filename}" "${zreal_filename}"
522 |         fi
523 |         link_into_dirs "${db}" "${real_filename}"
524 |     fi
525 | }
526 | 
527 | do_db_backup() {
528 |     db="`echo ${1} | sed 's/%/ /g'`"
529 |     fun_="${BACKUP_TYPE}_dump"
530 |     do_db_backup_ "${db}" "$fun_"
531 | }
532 | 
533 | do_global_backup() {
534 |     db="$GLOBAL_SUBDIR"
535 |     fun_="${BACKUP_TYPE}_dumpall"
536 |     log_rule
537 |     log "GLOBAL BACKUP"
538 |     log_rule
539 |     do_db_backup_ "${db}" "$fun_"
540 | }
541 | 
542 | activate_IO_redirection() {
543 |     if [ x"${DSB_ACTITED_RIO}" = x"" ];then
544 |         DSB_ACTITED_RIO="1"
545 |         logdir="$(dirname $(get_logfile))"
546 |         if [ ! -e "${logdir}" ];then
547 |             mkdir -p "${logdir}"
548 |         fi
549 |         touch "$(get_logfile)"
550 |         exec 1> >(tee -a "$(get_logfile)") 2>&1
551 |     fi
552 | }
553 | 
554 | 
555 | deactivate_IO_redirection() {
556 |     if [ x"${DSB_ACTITED_RIO}" != x"" ];then
557 |         DSB_ACTITED_RIO=""
558 |         exec 1>&1 # Restore stdout.
559 |         exec 2>&2 # Restore stderr.
560 |     fi
561 | }
562 | 
563 | do_pre_backup() {
564 |     debug "do_pre_backup"
565 |     # IO redirection for logging.
566 | if [ x"$COMP" = "xnocomp" ];then 567 | comp_msg="No compression" 568 | else 569 | comp_msg="${COMP}" 570 | fi 571 | # If backing up all DBs on the server 572 | log_rule 573 | log "DB_SMART_BACKUP by freesoftware@makina-corpus.com / http://www.makina-corpus.com" 574 | log "Conf: ${YELLOW}'${DSB_CONF_FILE}'" 575 | log "Log: ${YELLOW}'$(get_logfile)'" 576 | log "Backup Start Time: ${YELLOW}$(readable_date)${NORMAL}" 577 | log "Backup of database compression://type@server: ${YELLOW}${comp_msg}://${BACKUP_TYPE}@${HOST}${NORMAL}" 578 | log_rule 579 | } 580 | 581 | 582 | fix_perm() { 583 | fic="${1}" 584 | if [ -e "${fic}" ];then 585 | if [ -d "${fic}" ];then 586 | perm="${DPERM:-750}" 587 | elif [ -f "${fic}" ];then 588 | perm="${FPERM:-640}" 589 | fi 590 | chown ${OWNER:-"root"}:${GROUP:-"root"} "${fic}" 591 | chmod -f $perm "${fic}" 592 | fi 593 | } 594 | 595 | fix_perms() { 596 | debug "fix_perms" 597 | find "${TOP_BACKUPDIR}" -type d -print|\ 598 | while read fic 599 | do 600 | fix_perm "${fic}" 601 | done 602 | find "${TOP_BACKUPDIR}" -type f -print|\ 603 | while read fic 604 | do 605 | fix_perm "${fic}" 606 | done 607 | } 608 | 609 | 610 | wrap_log() { 611 | echo -e "$("$@"|sed "s/^/$(echo -e "${NORMAL}${RED}")$(print_name) $(echo -e "${NORMAL}${YELLOW}")/g"|sed "s/\t/ /g"|sed "s/ +/ /g")${NORMAL}" 612 | } 613 | 614 | do_post_backup() { 615 | # Run command when we're done 616 | log_rule 617 | debug "do_post_backup" 618 | log "Total disk space used for backup storage.." 
619 |     log " Size - Location:"
620 |     wrap_log du -shc "$(get_backupdir)"/*
621 |     log_rule
622 |     log "Backup end time: ${YELLOW}$(readable_date)${NORMAL}"
623 |     log_rule
624 |     deactivate_IO_redirection
625 |     sanitize_log
626 | }
627 | 
628 | sanitize_log() {
629 |     sed -i -e "s/\x1B\[[0-9;]*[JKmsu]//g" "$(get_logfile)"
630 | }
631 | 
632 | get_sorted_files() {
633 |     files="$(ls -1 "${1}" 2>/dev/null)"
634 |     sep="____----____----____"
635 |     echo -e "${files}"|while read fic;do
636 |         key=""
637 |         oldkey="${fic}"
638 |         while true;do
639 |             key="$(echo "${oldkey}"|sed -e "s/_\([0-9][^0-9]\)/_0\1/g")"
640 |             if [ x"${key}" != x"${oldkey}" ];then
641 |                 oldkey="${key}"
642 |             else
643 |                 break
644 |             fi
645 |         done
646 |         echo "${key}${sep}${fic}"
647 |     done | sort -n -r | awk -F"$sep" '{print $2}'
648 | }
649 | 
650 | do_rotate() {
651 |     log_rule
652 |     debug "rotate"
653 |     log "Execute backup rotation policy, keep:"
654 |     log " - logs : ${YELLOW}${KEEP_LOGS}${NORMAL}"
655 |     log " - last snapshots : ${YELLOW}${KEEP_LASTS}${NORMAL}"
656 |     log " - daily dumps : ${YELLOW}${KEEP_DAYS}${NORMAL}"
657 |     log " - weekly dumps : ${YELLOW}${KEEP_WEEKS}${NORMAL}"
658 |     log " - monthly dumps : ${YELLOW}${KEEP_MONTHES}${NORMAL}"
659 |     # ./TOPDIR/POSTGRESQL/HOSTNAME
660 |     # or ./TOPDIR/logs for logs
661 |     ls -1d "${TOP_BACKUPDIR}" "$(get_backupdir)"/*|while read nsubdir;do
662 |         # ./TOPDIR/HOSTNAME/DBNAME/${monthly,weekly,daily,dumps}
663 |         suf=""
664 |         if [ x"$nsubdir" = "x${TOP_BACKUPDIR}" ];then
665 |             subdirs="logs"
666 |             suf="/logs"
667 |         else
668 |             subdirs="monthly weekly daily lastsnapshots"
669 |         fi
670 |         log " - Operating in: ${YELLOW}'${nsubdir}${suf}'${NORMAL}"
671 |         for chronodir in ${subdirs};do
672 |             subdir="${nsubdir}/${chronodir}"
673 |             if [ -d "${subdir}" ];then
674 |                 if [ x"${chronodir}" = "xlogs" ];then
675 |                     to_keep=${KEEP_LOGS:-60}
676 |                 elif [ x"${chronodir}" = "xweekly" ];then
677 |                     to_keep=${KEEP_WEEKS:-2}
678 |                 elif [ x"${chronodir}" = "xmonthly" ];then
679 | 
to_keep=${KEEP_MONTHES:-2}
680 |                 elif [ x"${chronodir}" = "xdaily" ];then
681 |                     to_keep=${KEEP_DAYS:-2}
682 |                 elif [ x"${chronodir}" = "xlastsnapshots" ];then
683 |                     to_keep=${KEEP_LASTS:-2}
684 |                 else
685 |                     to_keep="65535" # int limit
686 |                 fi
687 |                 i=0
688 |                 get_sorted_files "${subdir}" | while read nfic;do
689 |                     dfic="${subdir}/${nfic}"
690 |                     i="$((${i}+1))"
691 |                     if [ "${i}" -gt "${to_keep}" ] &&\
692 |                        [ -e "${dfic}" ] &&\
693 |                        [ ! -d "${dfic}" ];then
694 |                         log " * Unlinking ${YELLOW}${dfic}${NORMAL}"
695 |                         remove_files "${dfic}"
696 |                     fi
697 |                 done
698 |             fi
699 |         done
700 |     done
701 | }
702 | 
703 | log_rule() {
704 |     log "======================================================================"
705 | }
706 | 
707 | handle_hook_error() {
708 |     debug "handle_hook_error"
709 |     log "Unexpected exit of the ${HOOK_CMD} hook; never call exit from inside a hook"
710 |     log_rule
711 |     DSB_RETURN_CODE="1"
712 |     handle_exit
713 | }
714 | 
715 | do_prune() {
716 |     do_rotate
717 |     do_hook "Postrotate command output" "post_rotate_hook"
718 |     do_cleanup_orphans
719 |     do_hook "Postcleanup command output" "post_cleanup_hook"
720 |     fix_perms
721 |     do_post_backup
722 |     do_hook "Postbackup command output" "post_backup_hook"
723 | }
724 | 
725 | handle_exit() {
726 |     DSB_RETURN_CODE="${DSB_RETURN_CODE:-$?}"
727 |     if [ x"${DSB_BACKUP_STARTED}" != "x" ];then
728 |         debug "handle_exit"
729 |         DSB_HOOK_NO_TRAP="1"
730 |         do_prune
731 |         if [ x"$DSB_RETURN_CODE" != "x0" ];then
732 |             log "WARNING, this script did not behave correctly; check the log: $(get_logfile)"
733 |         fi
734 |         if [ x"${DSB_GLOBAL_BACKUP_IN_FAILURE}" != x"" ];then
735 |             cyan_log "Global backup failed, check the log: $(get_logfile)"
736 |             DSB_RETURN_CODE="${DSB_GLOBAL_BACKUP_FAILED}"
737 |         fi
738 |         if [ x"${DSB_BACKUP_IN_FAILURE}" != x"" ];then
739 |             cyan_log "One of the database backups failed, check the log: $(get_logfile)"
740 |             DSB_RETURN_CODE="${DSB_BACKUP_FAILED}"
741 |         fi
742 |     fi
743 |     exit "${DSB_RETURN_CODE}"
744 | }
745 | 
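# NOTE (added illustration, not part of the script's API): the retention
# scheme above works by hardlinking each dump into the daily/weekly/monthly/
# lastsnapshots directories; do_cleanup_orphans() below then prunes any dump
# whose link count fell back to 1. A minimal self-contained sketch of the
# idea, with made-up paths:

```shell
#!/bin/sh
# Hardlink-based retention sketch: a dump stays on disk as long as at
# least one rotation directory still hardlinks to it.
workdir=$(mktemp -d)
mkdir -p "$workdir/dumps" "$workdir/daily"
echo "dump data" > "$workdir/dumps/db_2024.sql"
# Rotation: a second hardlink, costing no extra disk space for the data.
ln "$workdir/dumps/db_2024.sql" "$workdir/daily/db_2024.sql"
find "$workdir" -type f -links 1    # prints nothing: the dump is referenced
# The retention policy drops the daily link.
rm "$workdir/daily/db_2024.sql"
find "$workdir" -type f -links 1    # now lists the orphaned dump
rm -rf "$workdir"
```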
746 | do_trap() {
747 |     debug "do_trap"
748 |     trap handle_exit EXIT SIGHUP SIGINT SIGQUIT SIGTERM
749 | }
750 | 
751 | do_cleanup_orphans() {
752 |     log_rule
753 |     debug "do_cleanup_orphans"
754 |     log "Cleaning orphaned dumps:"
755 |     # prune all files in the dumps dirs which no longer have any
756 |     # hardlinks in the dated directories (weekly, monthly, daily)
757 |     find "$(get_backupdir)" -type f -links 1 -print 2>/dev/null|\
758 |     while read fic
759 |     do
760 |         log " * Pruning ${YELLOW}${fic}${NORMAL}"
761 |         remove_files "${fic}"
762 |     done
763 | }
764 | 
765 | do_hook() {
766 |     HOOK_HEADER="${1}"
767 |     HOOK_CMD="${2}"
768 |     if [ x"${DSB_HOOK_NO_TRAP}" = "x" ];then
769 |         trap handle_hook_error EXIT SIGHUP SIGINT SIGQUIT SIGTERM
770 |     fi
771 |     if [ x"$(fn_exists ${HOOK_CMD})" = "x0" ];then
772 |         debug "do_hook ${HOOK_CMD}"
773 |         log_rule
774 |         log "HOOK: ${YELLOW} ${HOOK_HEADER}"
775 |         "${HOOK_CMD}"
776 |         log_rule
777 |         log ""
778 |     fi
779 |     if [ x"${DSB_HOOK_NO_TRAP}" = "x" ];then
780 |         trap handle_exit EXIT SIGHUP SIGINT SIGQUIT SIGTERM
781 |     fi
782 | }
783 | 
784 | do_backup() {
785 |     debug "do_backup"
786 |     if [ x"${BACKUP_TYPE}" = "x" ];then
787 |         die "No backup type, choose between mysql,postgresql,redis,mongodb,slapd,es"
788 |     fi
789 |     # if either the source failed or we do not have a configuration file, bail out
790 |     die_in_error "Invalid configuration file: ${DSB_CONF_FILE}"
791 |     DSB_BACKUP_STARTED="y"
792 |     do_pre_backup
793 |     do_hook "Prebackup command output" "pre_backup_hook"
794 |     if [ x"${DO_GLOBAL_BACKUP}" != "x" ];then
795 |         do_global_backup
796 |         if [ x"${LAST_BACKUP_STATUS}" = "xfailure" ];then
797 |             do_hook "Postglobalbackup(failure) command output" "post_global_backup_failure_hook"
798 |             DSB_GLOBAL_BACKUP_IN_FAILURE="y"
799 |         else
800 |             do_hook "Postglobalbackup command output" "post_global_backup_hook"
801 |         fi
802 |     fi
803 |     if [ "x${BACKUP_DB_NAMES}" != "x" ];then
804 |         log_rule
805 |         log "DATABASES BACKUP"
806 |         log_rule
807 |         for db in 
${BACKUP_DB_NAMES};do
808 |             do_db_backup $db
809 |             if [ x"${LAST_BACKUP_STATUS}" = "xfailure" ];then
810 |                 do_hook "Postdbbackup: ${db}(failure) command output" "post_db_backup_failure_hook"
811 |                 DSB_BACKUP_IN_FAILURE="y"
812 |             else
813 |                 do_hook "Postdbbackup: ${db} command output" "post_db_backup_hook"
814 |             fi
815 |         done
816 |     fi
817 | }
818 | 
819 | mark_run_rotate() {
820 |     DSB_CONF_FILE="${1}"
821 |     DO_PRUNE="1"
822 | }
823 | 
824 | mark_run_backup() {
825 |     DSB_CONF_FILE="${1}"
826 |     DO_BACKUP="1"
827 | }
828 | 
829 | verify_backup_type() {
830 |     for typ_ in _dump _dumpall;do
831 |         if [ x"$(fn_exists ${BACKUP_TYPE}${typ_})" != "x0" ];then
832 |             die "Please provide a ${BACKUP_TYPE}${typ_} export function"
833 |         fi
834 |     done
835 | }
836 | 
837 | db_user() {
838 |     echo "${DBUSER:-${RUNAS:-$(whoami)}}"
839 | }
840 | 
841 | set_colors() {
842 |     YELLOW="\e[1;33m"
843 |     RED="\\033[31m"
844 |     CYAN="\\033[36m"
845 |     NORMAL="\\033[0m"
846 |     if [ x"$NO_COLOR" != "x" ] || [ x"$NOCOLOR" != "x" ] || [ x"$NO_COLORS" != "x" ] || [ x"$NOCOLORS" != "x" ];then
847 |         YELLOW=""
848 |         RED=""
849 |         CYAN=""
850 |         NORMAL=""
851 |     fi
852 | }
853 | 
854 | set_vars() {
855 |     debug "set_vars"
856 |     args=${@}
857 |     set_colors
858 |     PARAM=""
859 |     DSB_CONF_FILE_DEFAULT="/etc/db_smartbackup.conf.sh"
860 |     parsable_args="$(echo "${@}"|sed "s/^--//g")"
861 |     if [ x"${parsable_args}" = "x" ];then
862 |         USAGE="1"
863 |     fi
864 |     if [ -e "${parsable_args}" ];then
865 |         mark_run_backup ${1}
866 |     else
867 |         while true
868 |         do
869 |             sh="1"
870 |             if [ x"${1}" = "x$PARAM" ];then
871 |                 break
872 |             fi
873 |             if [ x"${1}" = "x--gen-config" ];then
874 |                 DSB_GENERATE_CONFIG="1"
875 |                 DSB_CONF_FILE="${2:-${DSB_CONF_FILE_DEFAULT}}"
876 |                 sh="2"
877 |             elif [ x"${1}" = "x-p" ] || [ x"${1}" = "x--prune" ];then
878 |                 mark_run_rotate ${2};sh="2"
879 |             elif [ x"${1}" = "x-b" ] || [ x"${1}" = "x--backup" ];then
880 |                 mark_run_backup ${2};sh="2"
881 |             else
882 |                 if [ x"${DB_SMART_BACKUP_AS_FUNCS}" = "x" ];then
883 | 
usage 884 | die "Invalid invocation" 885 | fi 886 | fi 887 | PARAM="${1}" 888 | OLD_ARG="${1}" 889 | for i in $(seq $sh);do 890 | shift 891 | if [ x"${1}" = "x${OLD_ARG}" ];then 892 | break 893 | fi 894 | done 895 | if [ x"${1}" = "x" ];then 896 | break 897 | fi 898 | done 899 | fi 900 | 901 | ######## Backup settings 902 | NO_COLOR="${NO_COLOR:-}" 903 | COMP=${COMP:-xz} 904 | BACKUP_TYPE=${BACKUP_TYPE:-} 905 | TOP_BACKUPDIR="${TOP_BACKUPDIR:-/var/db_smart_backup}" 906 | DEFAULT_DO_GLOBAL_BACKUP="1" 907 | if [ "x${BACKUP_TYPE}" = "xes" ];then 908 | DEFAULT_DO_GLOBAL_BACKUP="" 909 | fi 910 | DO_GLOBAL_BACKUP="${DO_GLOBAL_BACKUP-${DEFAULT_DO_GLOBAL_BACKUP}}" 911 | KEEP_LASTS="${KEEP_LASTS:-24}" 912 | KEEP_DAYS="${KEEP_DAYS:-14}" 913 | KEEP_WEEKS="${KEEP_WEEKS:-8}" 914 | KEEP_MONTHES="${KEEP_MONTHES:-12}" 915 | KEEP_LOGS="${KEEP_LOGS:-60}" 916 | DPERM="${DPERM:-"750"}" 917 | FPERM="${FPERM:-"640"}" 918 | OWNER="${OWNER:-"root"}" 919 | GROUP="${GROUP:-"root"}" 920 | 921 | ######## Database connection settings 922 | HOST="${HOST:-localhost}" 923 | PORT="${PORT:-}" 924 | RUNAS="" # see runas function 925 | DBUSER="" # see db_user function 926 | PASSWORD="${PASSWORD:-}" 927 | DBNAMES="${DBNAMES:-all}" 928 | DBEXCLUDE="${DBEXCLUDE:-}" 929 | 930 | ######## hostname 931 | GET_HOSTNAME=`hostname -f` 932 | if [ x"${GET_HOSTNAME}" = x"" ]; then 933 | GET_HOSTNAME=`hostname -s` 934 | fi 935 | 936 | ######## Mail setup 937 | MAILCONTENT="${MAILCONTENT:-stdout}" 938 | MAXATTSIZE="${MAXATTSIZE:-4000}" 939 | MAILADDR="${MAILADDR:-root@localhost}" 940 | 941 | MAIL_SERVERNAME="${MAIL_SERVERNAME:-${GET_HOSTNAME}}" 942 | 943 | ######### Postgresql 944 | PSQL="${PSQL:-"$(which psql 2>/dev/null)"}" 945 | PG_DUMP="${PG_DUMP:-"$(which pg_dump 2>/dev/null)"}" 946 | PG_DUMPALL="${PG_DUMPALL:-"$(which pg_dumpall 2>/dev/null)"}" 947 | OPT="${OPT:-"--create -Fc -Z0"}" 948 | OPTALL="${OPTALL:-"--globals-only"}" 949 | 950 | ######### MYSQL 951 | MYSQL_USE_SSL="${MYSQL_USE_SSL:-}" 952 | 
MYSQL_SOCK_PATHS="${MYSQL_SOCK_PATHS:-"/var/run/mysqld/mysqld.sock"}" 953 | MYSQL="${MYSQL:-$(which mysql 2>/dev/null)}" 954 | MYSQLDUMP="${MYSQLDUMP:-$(which mysqldump 2>/dev/null)}" 955 | MYSQLDUMP_NO_SINGLE_TRANSACTION="${MYSQLDUMP_NO_SINGLE_TRANSACTION:-}" 956 | MYSQLDUMP_AUTOCOMMIT="${MYSQLDUMP_AUTOCOMMIT:-1}" 957 | MYSQLDUMP_COMPLETEINSERTS="${MYSQLDUMP_COMPLETEINSERTS:-1}" 958 | MYSQLDUMP_LOCKTABLES="${MYSQLDUMP_LOCKTABLES:-}" 959 | MYSQLDUMP_DEBUG="${MYSQLDUMP_DEBUG:-}" 960 | MYSQLDUMP_NOROUTINES="${MYSQLDUMP_NOROUTINES:-}" 961 | # mongodb 962 | MONGODB_PATH="${MONGODB_PATH:-"/var/lib/mongodb"}" 963 | MONGODB_USER="${MONGODB_USER:-"${DBUSER}"}" 964 | MONGODB_PASSWORD="${MONGODB_PASSWORD:-"${PASSWORD}"}" 965 | MONGODB_ARGS="${MONGODB_ARGS:-""}" 966 | # slapd 967 | SLAPCAT_ARGS="${SLAPCAT_ARGS:-""}" 968 | SLAPD_DIR="${SLAPD_DIR:-/var/lib/ldap}" 969 | 970 | ######## Hooks 971 | pre_backup_hook="${pre_backup_hook:-}" 972 | post_global_backup_hook="${post_global_backup_hook-}" 973 | post_db_backup_hook="${post_db_backup_hook-}" 974 | post_backup_hook="${post_backup_hook-}" 975 | 976 | ######## Advanced options 977 | COMPS="${COMPS-"xz zstd bz2 gzip nocomp"}" 978 | COMP="${COMP-"${COMPS}"}" 979 | GLOBAL_SUBDIR="__GLOBAL__" 980 | PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 981 | DATE=`date +%Y-%m-%d` # Datestamp e.g 2002-09-21 982 | FDATE=`date +%Y-%m-%d_%H-%M-%S` # Datestamp e.g 2002-09-21 983 | FULL_FDATE=`date +%Y-%m-%d_%H-%M-%S.%N` # Datestamp e.g 2002-09-21 984 | DOY=`date +%j` # Day of the YEAR 0..366 985 | DOW=`date +%A` # Day of the week e.g. Monday 986 | DNOW=`date +%u` # Day number of the week 1 to 7 where 1 represents Monday 987 | DOM=`date +%d` # Date of the Month e.g. 
27
988 |     M=`date +%B` # Month e.g. January
989 |     YEAR=`date +%Y` # Year e.g. 2002
990 |     MNUM=`date +%m` # Month number e.g. 09
991 |     W=`date +%V` # Week number e.g. 37
992 |     DSB_BACKUPFILES="" # thh: added for later mailing
993 |     DSB_RETURN_CODE=""
994 |     DSB_GLOBAL_BACKUP_FAILED="3"
995 |     DSB_BACKUP_FAILED="4"
996 |     # source the conf file if any
997 |     if [ -e "${DSB_CONF_FILE}" ];then
998 |         . "${DSB_CONF_FILE}"
999 |     fi
1000 | 
1001 |     DEFAULT_PIPED_BACKUP_COMPRESSION="$(if ( echo $BACKUP_TYPE|grep -Eq "^(ldap|slapd|mysql|post)" );then echo 1;fi)"
1002 |     export PIPED_BACKUP_COMPRESSION="${PIPED_BACKUP_COMPRESSION-${DEFAULT_PIPED_BACKUP_COMPRESSION}}"
1003 | 
1004 |     activate_IO_redirection
1005 |     set_compressor
1006 | 
1007 |     if [ x"${BACKUP_TYPE}" != "x" ];then
1008 |         verify_backup_type
1009 |         if [ x"$(fn_exists "${BACKUP_TYPE}_set_connection_vars")" = "x0" ];then
1010 |             "${BACKUP_TYPE}_set_connection_vars"
1011 |         fi
1012 |         "${BACKUP_TYPE}_check_connectivity"
1013 |         if [ x"$(fn_exists "${BACKUP_TYPE}_get_all_databases")" = "x0" ];then
1014 |             ALL_DBNAMES="$(${BACKUP_TYPE}_get_all_databases)"
1015 |         fi
1016 |         if [ x"$(fn_exists "${BACKUP_TYPE}_set_vars")" = "x0" ];then
1017 |             "${BACKUP_TYPE}_set_vars"
1018 |         fi
1019 |     fi
1020 |     if [ "x${BACKUP_TYPE}" = "xmongodb" ]\
1021 |     || [ "x${BACKUP_TYPE}" = "xes" ]\
1022 |     || [ "x${BACKUP_TYPE}" = "xredis" ];then
1023 |         BACKUP_EXT="tar"
1024 |     elif [ "x${BACKUP_TYPE}" = "xslapd" ];then
1025 |         BACKUP_EXT="ldif"
1026 |     else
1027 |         BACKUP_EXT="sql"
1028 |     fi
1029 | 
1030 |     BACKUP_DB_NAMES="${DBNAMES}"
1031 |     # Re-source to re-override any variable overridden by the core defaults
1032 |     if [ -e "${DSB_CONF_FILE}" ];then
1033 |         . 
"${DSB_CONF_FILE}" 1034 | fi 1035 | # ensure lower KEEP_LASTS than period retention is enforced 1036 | if [ "$KEEP_LASTS" -lt "$KEEP_DAYS" ];then 1037 | export KEEP_DAYS="$KEEP_LASTS" 1038 | fi 1039 | ### 1040 | # for monthes and week we are dictated by their values 1041 | # this may change, and we could activate the following code 1042 | # KEEP_LASTS_MONTHES=$(expr $KEEP_LASTS % 28) 1043 | # KEEP_LASTS_WEEKS=$(expr $KEEP_LASTS % 7) 1044 | # if [ $KEEP_LASTS_WEEKS = 0 ];then 1045 | # export KEEP_LASTS_WEEKS=1 1046 | # fi 1047 | # if [ $KEEP_LASTS_MONTHES = 0 ];then 1048 | # export KEEP_LASTS_MONTHS=1 1049 | # fi 1050 | # set -x 1051 | # if [ "$KEEP_LASTS_MONTHES" -lt "$KEEP_MONTHES" ];then 1052 | # export KEEP_MONTHES="$KEEP_LASTS_MONTHES" 1053 | # fi 1054 | # if [ "$KEEP_LASTS_WEEKS" -lt "$KEEP_WEEKS" ];then 1055 | # export KEEP_DAYS="$KEEP_LASTS_WEEKS" 1056 | # fi 1057 | ### 1058 | } 1059 | 1060 | do_main() { 1061 | if [ x"${1#--/}" = "x" ];then 1062 | set_colors 1063 | usage 1064 | exit 0 1065 | else 1066 | do_trap 1067 | set_vars "${@}" 1068 | if [ x"${1#--/}" = "x" ];then 1069 | usage 1070 | exit 0 1071 | elif [ x"${DSB_GENERATE_CONFIG}" != "x" ];then 1072 | generate_configuration_file 1073 | die_in_error "end_of_scripts" 1074 | elif [ "x${DO_BACKUP}" != "x" ] || [ "x${DO_PRUNE}" != "x" ] ;then 1075 | if [ -e "${DSB_CONF_FILE}" ];then 1076 | if [ "x${DO_PRUNE}" != "x" ];then 1077 | func=do_prune 1078 | else 1079 | func=do_backup 1080 | fi 1081 | ${func} 1082 | die_in_error "end_of_scripts" 1083 | else 1084 | cyan_log "Missing or invalid configuration file: ${DSB_CONF_FILE}" 1085 | exit 1 1086 | fi 1087 | fi 1088 | fi 1089 | } 1090 | 1091 | #################### POSTGRESQL 1092 | pg_dumpall_() { 1093 | runcmd_as "${PG_DUMPALL}" "${@}" 1094 | } 1095 | 1096 | pg_dump_() { 1097 | runcmd_as "${PG_DUMP}" "${@}" 1098 | } 1099 | 1100 | psql_() { 1101 | runcmd_as "${PSQL}" -w "${@}" 1102 | } 1103 | 1104 | # REAL API IS HERE 1105 | postgresql_set_connection_vars() { 1106 
|     export RUNAS="${RUNAS:-postgres}"
1107 |     export PGHOST="${HOST}"
1108 |     export PGPORT="${PORT}"
1109 |     export PGUSER="$(db_user)"
1110 |     export PGPASSWORD="${PASSWORD}"
1111 |     if [ x"${PGHOST}" = "xlocalhost" ]; then
1112 |         PGHOST=
1113 |     fi
1114 | }
1115 | 
1116 | postgresql_set_vars() {
1117 |     if [ x"${DBNAMES}" = "xall" ]; then
1118 |         DBNAMES=${ALL_DBNAMES}
1119 |         if [ " ${DBEXCLUDE#*" template0 "*} " != " $DBEXCLUDE " ];then
1120 |             DBEXCLUDE="${DBEXCLUDE} template0"
1121 |         fi
1122 |         for exclude in ${DBEXCLUDE};do
1123 |             DBNAMES=$(echo ${DBNAMES} | sed "s/\b${exclude}\b//g")
1124 |         done
1125 |     fi
1126 |     if [ x"${DBNAMES}" = "xall" ]; then
1127 |         die "${BACKUP_TYPE}: could not get all databases"
1128 |     fi
1129 |     for i in "psql::${PSQL}" "pg_dumpall::${PG_DUMPALL}" "pg_dump::${PG_DUMP}";do
1130 |         var="$(echo ${i}|awk -F:: '{print $1}')"
1131 |         bin="$(echo ${i}|awk -F:: '{print $2}')"
1132 |         if [ ! -e "${bin}" ];then
1133 |             die "missing ${var}"
1134 |         fi
1135 |     done
1136 | }
1137 | 
1138 | postgresql_check_connectivity() {
1139 |     who="$(whoami)"
1140 |     pgu="$(db_user)"
1141 |     psql_ --username="$(db_user)" -c "select * from pg_roles" -d postgres >/dev/null
1142 |     die_in_error "Can't connect to the postgresql server with ${pgu} as ${who}, did you configure \$RUNAS("$(runas)") in $DSB_CONF_FILE?"
1143 | }
1144 | 
1145 | postgresql_get_all_databases() {
1146 |     LANG=C LC_ALL=C psql_ --username="$(db_user)" -l -A -F: | sed -ne "/:/ { /Name:Owner/d; /template0/d; s/:.*$//; p }"
1147 | }
1148 | 
1149 | postgresql_dumpall() {
1150 |     if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then
1151 |         pg_dumpall_ --username="$(db_user)" $OPTALL > "${2}"
1152 |     else
1153 |         pg_dumpall_ --username="$(db_user)" $OPTALL
1154 |     fi
1155 | }
1156 | 
1157 | postgresql_dump() {
1158 |     if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then
1159 |         pg_dump_ --username="$(db_user)" $OPT "${1}" > "${2}"
1160 |     else
1161 |         pg_dump_ --username="$(db_user)" $OPT "${1}"
1162 |     fi
1163 | }
1164 | 
1165 | 
#################### MYSQL 1166 | # REAL API IS HERE 1167 | mysql__() { 1168 | runcmd_as "${MYSQL}" $(mysql_common_args) "${@}" 1169 | } 1170 | 1171 | mysqldump__() { 1172 | runcmd_as "${MYSQLDUMP}" $(mysql_common_args) "${@}" 1173 | } 1174 | 1175 | mysqldump_() { 1176 | mysqldump__ "-u$(db_user)" "$@" 1177 | } 1178 | 1179 | mysql_() { 1180 | mysql__ "-u$(db_user)" "$@" 1181 | } 1182 | 1183 | mysql_set_connection_vars() { 1184 | export MYSQL_HOST="${HOST:-localhost}" 1185 | export MYSQL_TCP_PORT="${PORT:-3306}" 1186 | export MYSQL_PWD="${PASSWORD}" 1187 | if [ x"${MYSQL_HOST}" = "xlocalhost" ];then 1188 | while read path;do 1189 | if [ "x${path}" != "x" ]; then 1190 | export MYSQL_HOST="127.0.0.1" 1191 | export MYSQL_UNIX_PORT="${path}" 1192 | fi 1193 | done < <(printf "${MYSQL_SOCK_PATHS}\n\n") 1194 | fi 1195 | if [ -e "${MYSQL_UNIX_PORT}" ];then 1196 | log "Using mysql socket: ${path}" 1197 | else 1198 | MYSQL_UNIX_PORT= 1199 | fi 1200 | } 1201 | 1202 | mysql_set_vars() { 1203 | if [ x"${MYSQLDUMP_AUTOCOMMIT}" = x"" ];then 1204 | MYSQLDUMP_OPTS_COMMON="${MYSQLDUMP_OPTS_COMMON} --no-autocommit" 1205 | fi 1206 | if [ x"${MYSQLDUMP_NO_SINGLE_TRANSACTION}" = x"" ];then 1207 | MYSQLDUMP_OPTS_COMMON="${MYSQLDUMP_OPTS_COMMON} --single-transaction" 1208 | fi 1209 | if [ x"${MYSQLDUMP_COMPLETEINSERTS}" != x"" ];then 1210 | MYSQLDUMP_OPTS_COMMON="${MYSQLDUMP_OPTS_COMMON} --complete-insert" 1211 | else 1212 | MYSQLDUMP_OPTS_COMMON="${MYSQLDUMP_OPTS_COMMON} --extended-insert" 1213 | fi 1214 | if [ x"${MYSQLDUMP_LOCKTABLES}" = x"" ];then 1215 | MYSQLDUMP_OPTS_COMMON="${MYSQLDUMP_OPTS_COMMON} --lock-tables=false" 1216 | fi 1217 | if [ x"${MYSQLDUMP_DEBUG}" != x"" ];then 1218 | MYSQLDUMP_OPTS_COMMON="${MYSQLDUMP_OPTS_COMMON} --debug-info" 1219 | fi 1220 | if [ x"${MYSQLDUMP_NOROUTINES}" = x"" ];then 1221 | MYSQLDUMP_OPTS_COMMON="${MYSQLDUMP_OPTS_COMMON} --routines" 1222 | fi 1223 | MYSQLDUMP_OPTS_COMMON="${MYSQLDUMP_OPTS_COMMON} --quote-names --opt" 1224 | 
MYSQLDUMP_OPTS="${MYSQLDUMP_OPTS:-"${MYSQLDUMP_OPTS_COMMON}"}"
1225 |     MYSQLDUMP_ALL_OPTS="${MYSQLDUMP_ALL_OPTS:-"${MYSQLDUMP_OPTS_COMMON} --all-databases --no-data"}"
1226 |     if [ x"${DBNAMES}" = "xall" ]; then
1227 |         DBNAMES=${ALL_DBNAMES}
1228 |         for exclude in ${DBEXCLUDE};do
1229 |             DBNAMES=$(echo ${DBNAMES} | sed "s/\b${exclude}\b//g")
1230 |         done
1231 |     fi
1232 |     if [ x"${DBNAMES}" = "xall" ]; then
1233 |         die "${BACKUP_TYPE}: could not get all databases"
1234 |     fi
1235 |     for i in "mysql::${MYSQL}" "mysqldump::${MYSQLDUMP}";do
1236 |         var="$(echo ${i}|awk -F:: '{print $1}')"
1237 |         bin="$(echo ${i}|awk -F:: '{print $2}')"
1238 |         if [ ! -e "${bin}" ];then
1239 |             die "missing ${var}"
1240 |         fi
1241 |     done
1242 | }
1243 | 
1244 | mysql_common_args() {
1245 |     args=""
1246 |     if [ x"${MYSQL_USE_SSL}" != "x" ];then
1247 |         args="${args} --ssl"
1248 |     fi
1249 |     if [ x"${MYSQL_UNIX_PORT}" = "x" ];then
1250 |         args="${args} --host=$MYSQL_HOST --port=$MYSQL_TCP_PORT"
1251 |     fi
1252 |     echo "${args}"
1253 | }
1254 | 
1255 | mysql_check_connectivity() {
1256 |     who="$(whoami)"
1257 |     mysqlu="$(db_user)"
1258 |     echo "select 1"|mysql_ information_schema >/dev/null 2>&1
1259 |     die_in_error "Can't connect to the mysql server with ${mysqlu} as ${who}, did you configure \$RUNAS \$PASSWORD \$DBUSER in $DSB_CONF_FILE?"
1260 | }
1261 | 
1262 | mysql_get_all_databases() {
1263 |     echo "select schema_name from SCHEMATA;"|mysql_ -N information_schema 2>/dev/null \
1264 |         | grep -v performance_schema \
1265 |         | grep -v information_schema
1266 |     die_in_error "Could not get mysql databases"
1267 | }
1268 | 
1269 | mysql_dumpall() {
1270 |     if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then
1271 |         mysqldump_ ${MYSQLDUMP_ALL_OPTS} 2>&1 > "${2}"
1272 |     else
1273 |         mysqldump_ ${MYSQLDUMP_ALL_OPTS}
1274 |     fi
1275 | }
1276 | 
1277 | mysql_dump() {
1278 |     if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then
1279 |         mysqldump_ ${MYSQLDUMP_OPTS} -B "${1}" > "${2}"
1280 |     else
1281 |         mysqldump_ ${MYSQLDUMP_OPTS} -B "${1}"
1282 
| fi
1283 | }
1284 | 
1285 | 
1286 | #################### MONGODB
1287 | # REAL API IS HERE
1288 | mongodb_set_connection_vars() {
1289 |     /bin/true
1290 | }
1291 | 
1292 | mongodb_set_vars() {
1293 |     DBNAMES=""
1294 | }
1295 | 
1296 | mongodb_check_connectivity() {
1297 |     test -d "${MONGODB_PATH}/journal"
1298 |     die_in_error "no mongodb"
1299 | }
1300 | 
1301 | mongodb_get_all_databases() {
1302 |     /bin/true
1303 | }
1304 | 
1305 | mongodb_dumpall() {
1306 |     DUMPDIR="${2}.dir"
1307 |     if [ ! -e "${DUMPDIR}" ];then
1308 |         mkdir -p "${DUMPDIR}"
1309 |     fi
1310 |     if [ "x${MONGODB_PASSWORD}" != "x" ];then
1311 |         MONGODB_ARGS="$MONGODB_ARGS -p $MONGODB_PASSWORD"
1312 |     fi
1313 |     if [ "x${MONGODB_USER}" != "x" ];then
1314 |         MONGODB_ARGS="$MONGODB_ARGS -u $MONGODB_USER"
1315 |     fi
1316 |     mongodump ${MONGODB_ARGS} --out "${DUMPDIR}"
1317 |     die_in_error "mongodb dump failed"
1318 |     cd "${DUMPDIR}" && tar cf "${2}" .
1319 |     die_in_error "mongodb tar failed"
1320 |     rm -rf "${DUMPDIR}"
1321 | }
1322 | 
1323 | mongodb_dump() {
1324 |     /bin/true
1325 | }
1326 | 
1327 | #################### redis
1328 | # REAL API IS HERE
1329 | redis_set_connection_vars() {
1330 |     /bin/true
1331 | }
1332 | 
1333 | redis_set_vars() {
1334 |     DBNAMES=""
1335 |     export REDIS_PATH="${REDIS_PATH:-"/var/lib/redis"}"
1336 | }
1337 | 
1338 | redis_check_connectivity() {
1339 |     if [ ! -e "${REDIS_PATH}" ];then
1340 |         die "no redis dir"
1341 |     fi
1342 |     if [ "x${REDIS_PATH}" = "x" ];then
1343 |         die "redis dir is not set"
1344 |     fi
1345 |     if [ "x$(ls -1 "${REDIS_PATH}"|wc -l|sed -e"s/ //g")" = "x0" ];then
1346 |         die "no redis rdbs in ${REDIS_PATH}"
1347 |     fi
1348 | }
1349 | 
1350 | redis_get_all_databases() {
1351 |     /bin/true
1352 | }
1353 | 
1354 | redis_dumpall() {
1355 |     BCK_DIR="$(dirname ${2})"
1356 |     if [ ! -e "${BCK_DIR}" ];then
1357 |         mkdir -p "${BCK_DIR}"
1358 |     fi
1359 |     c="${PWD}"
1360 |     cd "${REDIS_PATH}" && tar cf "${2}" . 
&& cd "${c}"
1361 |     die_in_error "redis $2 dump failed"
1362 | }
1363 | 
1364 | redis_dump() {
1365 |     /bin/true
1366 | }
1367 | 
1368 | #################### slapd
1369 | # REAL API IS HERE
1370 | slapd_set_connection_vars() {
1371 |     /bin/true
1372 | }
1373 | 
1374 | slapd_set_vars() {
1375 |     DBNAMES=""
1376 | }
1377 | 
1378 | slapd_check_connectivity() {
1379 |     if [ ! -e "${SLAPD_DIR}" ];then
1380 |         die "no slapd dir"
1381 |     fi
1382 |     if [ "x$(ls -1 "${SLAPD_DIR}"|wc -l|sed -e"s/ //g")" = "x0" ];then
1383 |         die "no slapd db in ${SLAPD_DIR}"
1384 |     fi
1385 | }
1386 | 
1387 | slapd_get_all_databases() {
1388 |     /bin/true
1389 | }
1390 | 
1391 | slapd_dumpall() {
1392 |     BCK_DIR="$(dirname ${2})"
1393 |     if [ ! -e "${BCK_DIR}" ];then
1394 |         mkdir -p "${BCK_DIR}"
1395 |     fi
1396 |     if [ "x${PIPED_BACKUP_COMPRESSION}" != "x1" ];then
1397 |         slapcat ${SLAPCAT_ARGS} > "${2}"
1398 |     else
1399 |         slapcat ${SLAPCAT_ARGS}
1400 |     fi
1401 |     die_in_error "slapd $2 dump failed"
1402 | }
1403 | 
1404 | slapd_dump() {
1405 |     /bin/true
1406 | }
1407 | 
1408 | # ELASTICSEARCH
1409 | es_set_connection_vars() {
1410 |     if [ "x${ES_URI}" = "x" ];then
1411 |         export ES_URI="http://localhost:9200"
1412 |     fi
1413 |     export ES_USER="${ES_USER:-${DBUSER}}"
1414 |     export ES_PASSWORD="${ES_PASSWORD:-${PASSWORD}}"
1415 | }
1416 | 
1417 | es_set_vars() {
1418 |     export BACKUP_DB_NAMES="${BACKUP_DB_NAMES:-${DBNAMES}}"
1419 |     if [ x"${DBNAMES}" = "xall" ]; then
1420 |         DBNAMES=${ALL_DBNAMES}
1421 |         for exclude in ${DBEXCLUDE};do
1422 |             DBNAMES=$(echo ${DBNAMES} | sed "s/\b${exclude}\b//g")
1423 |         done
1424 |     fi
1425 |     if [ x"${DBNAMES}" = "xall" ]; then
1426 |         die "${BACKUP_TYPE}: could not get all databases"
1427 |     fi
1428 | }
1429 | 
1430 | curl_es() {
1431 |     path="${1}"
1432 |     shift
1433 |     es_args="${ES_EXTRA_ARGS-}"
1434 |     curl="$(which curl 2>/dev/null)"
1435 |     jq="$(which jq 2>/dev/null)"
1436 |     if [ ! -f "${curl}" ];then
1437 |         die "install curl"
1438 |     fi
1439 |     if [ ! 
-f "${jq}" ];then
1440 |         die "install jq"
1441 |     fi
1442 |     if [ "x${ES_USER}" != "x" ];then
1443 |         es_args="${es_args} -u ${ES_USER}:${ES_PASSWORD}"
1444 |     fi
1445 |     curl -H "Content-Type: application/json" -s "${@}" $es_args "${ES_URI}/${path}"
1446 | }
1447 | 
1448 | es_check_connectivity() {
1449 |     curl_es 1>/dev/null || die_in_error "$ES_URI unreachable"
1450 |     if [ "x$ES_SNAPSHOTS_DIR" = "x" ];then
1451 |         ES_TMP=$(curl_es "_nodes/_local?pretty"|grep '"work" :'|awk '{print $3}'|sed -e 's/\(^[^"]*"\)\|\("[^"]*$\)//g')
1452 |         if [ "x${ES_TMP}" = "x" ];then ES_SNAPSHOTS_DIR="${TOP_BACKUPDIR}/tmp";fi
1453 |         if [ ! -e "${ES_SNAPSHOTS_DIR}" ];then mkdir -p "${ES_SNAPSHOTS_DIR}";fi
1454 |     fi
1455 |     ES_SNAPSHOTS_DIR="${ES_SNAPSHOTS_DIR:-${ES_TMP}/snapshots}"
1456 |     export ES_SNAPSHOTS_DIR
1457 |     # set backup repository
1458 | }
1459 | 
1460 | es_get_all_databases() {
1461 |     curl_es _cat/indices|awk '{print $3}'
1462 | }
1463 | 
1464 | es_getreponame() {
1465 |     name="dsb_${1}"
1466 |     echo "${name}"
1467 | }
1468 | 
1469 | es_getworkdir() {
1470 |     name="${1}"
1471 |     # THIS HAS TO BE ADDED TO path.repo IN THE ES CONF!
1472 |     echo "${ES_SNAPSHOTS_DIR}/${name}"
1473 | }
1474 | 
1475 | es_preparerepo() {
1476 |     name="${1}"
1477 |     directory="$(es_getworkdir ${name})"
1478 |     esname="$(es_getreponame ${name})"
1479 |     if [ ! 
-e "${ES_SNAPSHOTS_DIR}" ];then 1480 | die "Invalid es dir" 1481 | fi 1482 | for i in $(seq 3);do 1483 | ret=$(curl_es "_snapshot/${esname}"|jq '.["'"${esname}"'"]["settings"]["location"]') 1484 | if [ "x${ret}" != 'x"'"${directory}"'"' ];then 1485 | sleep 1 1486 | else 1487 | break 1488 | fi 1489 | done 1490 | if [ "x${ret}" = 'x"'"${directory}"'"' ];then 1491 | sleep 1 1492 | fi 1493 | curl_es "_snapshot/${esname}" -XDELETE >/dev/null 2>&1 1494 | die_in_error "Directory API link removal problem for ${name} / ${esname} / ${directory}" 1495 | ret=$(curl_es "_snapshot/${esname}" -XPUT\ 1496 | -d '{"type": "fs", "settings": {"location": "'"${directory}"'", "compress": false}}') 1497 | if [ "x${ret}" != 'x{"acknowledged":true}' ];then 1498 | echo "${ret}" >&2 1499 | /bin/false 1500 | die "Cannot create repo ${esname} for ${name} (${directory})" 1501 | fi 1502 | for i in $(seq 10);do 1503 | ret=$(curl_es "_snapshot/${esname}"|jq -rc '.["'"${esname}"'"]["settings"]["location"]') 1504 | if [ "x$(basename ${ret})" != "x$(basename ${directory})" ];then 1505 | sleep 1 1506 | else 1507 | break 1508 | fi 1509 | done 1510 | if [ "x$(basename ${ret})" != "x$(basename ${directory})" ];then 1511 | echo $ret >&2;/bin/false 1512 | die "Directory snapshot metadata problem for ${name} / ${directory}" 1513 | fi 1514 | } 1515 | 1516 | 1517 | es_dumpall() { 1518 | cwd="${PWD}" 1519 | name="$(basename $(dirname $(dirname ${2})))" 1520 | esname="$(es_getreponame ${name})" 1521 | es_preparerepo "${name}" 1522 | ret=$(curl_es "_snapshot/${esname}/dump" -XDELETE) 1523 | ret=$(curl_es "_snapshot/${esname}/dump?wait_for_completion=true" -XPUT) 1524 | if [ "x$(echo "${ret}"|grep -q '"state":"SUCCESS"';echo ${?})" = "x0" ];then 1525 | directory=$(es_getworkdir ${name}) 1526 | if [ -e "${directory}" ];then 1527 | cd "${directory}" 1528 | tar cf "${2}" .\ 1529 | && curl_es "_snapshot/${esname}/dump"\ 1530 | && cd "${cwd}"\ 1531 | && echo 1532 | die_in_error "ES tar: ${2} / ${name} / ${esname} 
failed"
1533 |         else
1534 |             die_in_error "ES tar: ${2} / ${name} / ${esname} backup workdir ${directory} problem"
1535 |         fi
1536 |     else
1537 |         echo ${ret} >&2;/bin/false
1538 |         die_in_error "ES tar: ${2} / ${name} / ${esname} backup failed"
1539 |     fi
1540 | }
1541 | 
1542 | es_dump() {
1543 |     cwd="${PWD}"
1544 |     name="$(basename $(dirname $(dirname ${2})))"
1545 |     esname="$(es_getreponame ${name})"
1546 |     es_preparerepo "${name}"
1547 |     ret=$(curl_es "_snapshot/${esname}/dump" -XDELETE)
1548 |     ret=$(curl_es "_snapshot/${esname}/dump?wait_for_completion=true" -XPUT -d '{
1549 |     "indices": "'"${name}"'",
1550 |     "ignore_unavailable": "true",
1551 |     "include_global_state": false
1552 |     }')
1553 |     if [ "x$(echo "${ret}"|grep -q '"state":"SUCCESS"';echo ${?})" = "x0" ];then
1554 |         directory=$(es_getworkdir ${name})
1555 |         if [ -e "${directory}" ];then
1556 |             cd "${directory}"
1557 |             tar cf "${2}" .\
1558 |             && curl_es "_snapshot/${esname}/dump"\
1559 |             && cd "${cwd}"\
1560 |             && echo
1561 |             die_in_error "ES tar: ${2} / ${name} / ${esname} failed"
1562 |         else
1563 |             die_in_error "ES tar: ${2} / ${name} / ${esname} backup workdir ${directory} problem"
1564 |         fi
1565 |     else
1566 |         echo ${ret} >&2;/bin/false
1567 |         die_in_error "ES tar: ${2} / ${name} / ${esname} backup failed"
1568 |     fi
1569 | }
1570 | 
1571 | #################### MAIN
1572 | if [ x"${DB_SMART_BACKUP_AS_FUNCS}" = "x" ];then
1573 |     do_main "${@}"
1574 | fi
1575 | 
1576 | # vim:set ft=sh sts=4 ts=4 tw=0 ai et:
1577 | 
--------------------------------------------------------------------------------
/elasticsearch.conf.sample:
--------------------------------------------------------------------------------
1 | BACKUP_TYPE=es
2 | # ES_EXTRA_ARGS=-k
3 | # ES_USER=admin
4 | # ES_PASSWORD=admin
5 | 
--------------------------------------------------------------------------------
/mongod.conf.sample:
--------------------------------------------------------------------------------
1 | BACKUP_TYPE=mongodb
2 | MONGODB_USER=foo
3 | 
MONGODB_PASSWORD=secret 4 | -------------------------------------------------------------------------------- /mysql.conf.sample: -------------------------------------------------------------------------------- 1 | BACKUP_TYPE=mysql 2 | PASSWORD="root" 3 | DBUSER="root" 4 | -------------------------------------------------------------------------------- /postgresql.conf.sample: -------------------------------------------------------------------------------- 1 | BACKUP_TYPE=postgresql 2 | -------------------------------------------------------------------------------- /redis.conf.sample: -------------------------------------------------------------------------------- 1 | # A script can run only for one database type and a specific host 2 | 3 | # at a time (mongod, postgresql) 4 | # But you can run it with multiple configuration files 5 | # You can obviously share the same base backup directory. 6 | 7 | 8 | 9 | # set to 1 to deactivate colors (cron) 10 | #NO_COLOR="" 11 | 12 | # Choose Compression type. 
(gzip or bzip2 or xz) 13 | #COMP=bzip2 14 | 15 | #User to run dump binaries as, defaults to logged in user 16 | #RUNAS=postgres 17 | #DB user to connect to the database with, defaults to 18 | #DBUSER=postgres 19 | 20 | ######## Backup settings 21 | # one of: postgresql mongod 22 | #BACKUP_TYPE=postgresql 23 | BACKUP_TYPE="redis" 24 | 25 | # Backup directory location e.g. /backups 26 | TOP_BACKUPDIR="/srv/backups/redis" 27 | 28 | # also do a global backup (used by postgresql to save roles/groups, and only that) 29 | DO_GLOBAL_BACKUP="1" 30 | 31 | # HOW MANY BACKUPS TO KEEP & LEVEL 32 | # How many snapshots to keep (lastlog for dump) 33 | # How many per day 34 | KEEP_LASTS=24 35 | KEEP_DAYS=14 36 | KEEP_WEEKS=88 37 | KEEP_MONTHES=12 38 | KEEP_LOGS=60 39 | 40 | # directories permission 41 | #DPERM="750" 42 | 43 | # files permission 44 | #FPERM="640" 45 | 46 | # OWNER/GROUP 47 | OWNER=root 48 | GROUP=root 49 | 50 | ######## Database connection settings 51 | # host defaults to localhost 52 | # and without port we use a connection via socket 53 | #HOST="" 54 | #PORT="" 55 | 56 | # defaults to postgres on postgresql backup 57 | # as ident is used by default on many installs, we certainly 58 | # do not need a password either 59 | PASSWORD="" 60 | 61 | # List of DBNAMES for Daily/Weekly Backup e.g. "DB1 DB2 DB3" 62 | DBNAMES="all" 63 | 64 | # List of DBNAMES to EXCLUDE if DBNAMES is set to all (must be in " quotes) 65 | DBEXCLUDE="" 66 | 67 | ######## Mail setup 68 | # Email Address to send mail to? 
(user@domain.com) 69 | MAILADDR="root@localhost" 70 | 71 | # this server nickname 72 | MAIL_THISSERVERNAME="$(hostname -f)" 73 | 74 | # set to disable mail 75 | DISABLE_MAIL="" 76 | 77 | ######### Postgresql 78 | # binaries path 79 | #PSQL="" 80 | #PG_DUMP="" 81 | #PG_DUMPALL="" 82 | 83 | # OPT string for use with pg_dump ( see man pg_dump ) 84 | #OPT="--create -Fc" 85 | 86 | # OPT string for use with pg_dumpall ( see man pg_dumpall ) 87 | #OPTALL="--globals-only" 88 | 89 | ######## Hooks (optional) 90 | # function names which point to functions defined in your 91 | # configuration file 92 | # Pay attention not to make function names collide with functions in the script 93 | 94 | # 95 | # All those hooks can call external programs (eg: python scripts) 96 | # Look inside the shell script to know which variables you'll have 97 | # set in the context, but you'll have useful information available at 98 | # each stage like the dbname, etc. 99 | # 100 | 101 | # Function to run before backups (uncomment to use) 102 | #pre_backup_hook() { 103 | #} 104 | 105 | # Function to run after global backup (uncomment to use) 106 | #post_global_backup_hook() { 107 | #} 108 | 109 | # Function to run after a failed global backup (uncomment to use) 110 | #post_global_backup_failure_hook() { 111 | #} 112 | 113 | # Function to run after each database backup if the backup failed 114 | #post_db_backup_failure_hook() { 115 | #} 116 | 117 | # Function to run after each database backup (uncomment to use) 118 | #post_db_backup_hook() { 119 | #} 120 | 121 | # Function to run after backup rotation 122 | #post_rotate_hook() { 123 | #} 124 | 125 | # Function to run after backup orphan cleanup 126 | #post_cleanup_hook() { 127 | #} 128 | 129 | # Function to run after backups (uncomment to use) 130 | #post_backup_hook="mycompany_postbackup" 131 | 132 | # Function to run after the recap mail emission 133 | #post_mail_hook() { 134 | #} 135 | 136 | # Function to run on failure (uncomment to use) 137 | 
#failure_hook() { 138 | #} 139 | if [ -f /etc/dbsmartbackup/redis.conf.local ];then 140 | . /etc/dbsmartbackup/redis.conf.local 141 | fi 142 | 143 | # vim:set ft=sh: 144 | -------------------------------------------------------------------------------- /run_dbsmartbackup.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # 3 | # Wrapper to embed dbs in a cron 4 | # * * * * * root /path/to/run_dbsmartbackup.sh /path/to/conf 5 | 6 | if [ -f /etc/db_smart_backup_deactivated ];then 7 | exit 0 8 | fi 9 | 10 | QUIET="${QUIET:-}" 11 | RET=0 12 | for i in ${@};do 13 | if [ "x${i}" = "x--no-colors" ];then 14 | export NO_COLORS="1" 15 | shift 16 | elif [ "x${i}" = "x--quiet" ];then 17 | QUIET="1" 18 | shift 19 | elif [ "x${i}" = "x--help" ] || \ 20 | [ "x${i}" = "x--h" ] \ 21 | ;then 22 | HELP="1" 23 | shift 24 | fi 25 | done 26 | __NAME__="RUN_DB_SMARTBACKUP" 27 | if [ "x${HELP}" != "x" ];then 28 | echo "${0} [--quiet] [--no-colors] [conf]" 29 | echo "Run all found db_smart_backups configurations" 30 | exit 1 31 | fi 32 | if [ x"${DEBUG}" != "x" ];then 33 | set -x 34 | fi 35 | 36 | is_container() { 37 | echo "$(cat -e /proc/1/environ |grep container=|wc -l|sed -e "s/ //g")" 38 | } 39 | 40 | filter_host_pids() { 41 | pids="" 42 | if [ "x$(is_container)" != "x0" ];then 43 | pids="${pids} $(echo "${@}")" 44 | else 45 | for pid in ${@};do 46 | if [ "x$(grep -q /lxc/ /proc/${pid}/cgroup 2>/dev/null;echo "${?}")" != "x0" ];then 47 | pids="${pids} $(echo "${pid}")" 48 | fi 49 | done 50 | fi 51 | echo "${pids}" | sed -e "s/\(^ \+\)\|\( \+$\)//g" 52 | } 53 | 54 | go_run_db_smart_backup() { 55 | conf="${1}" 56 | if [ "x${QUIET}" != "x" ];then 57 | db_smart_backup.sh "${conf}" 2>&1 1>> "${LOG}" 58 | if [ "x${?}" != "x0" ];then 59 | RET=1 60 | fi 61 | else 62 | db_smart_backup.sh "${conf}" 63 | if [ "x${?}" != "x0" ];then 64 | RET=1 65 | fi 66 | fi 67 | } 68 | # a running socket in the standard debian location 69 | 
CONF="${DB_SMARTBACKUPS_CONF-${1}}" 70 | LOG="${LOG:-/var/log/run_dbsmartbackup-${CONF//\//_}.log}" 71 | if [ ! -e $CONF ];then 72 | echo "invalid $CONF" > $LOG 73 | RET=1 74 | fi 75 | go_run_db_smart_backup "${CONF}" 76 | if [ x"${DEBUG}" != "x" ];then 77 | set +x 78 | fi 79 | if [ "x${QUIET}" != "x" ] && [ "x${RET}" != "x0" ];then 80 | cat "${LOG}" 81 | fi 82 | if [ -f $LOG ];then 83 | rm -f $LOG 84 | fi 85 | exit $RET 86 | # vim:set et sts=4 ts=4 tw=00: 87 | -------------------------------------------------------------------------------- /run_dbsmartbackups.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # 3 | # Search in /etc/dbsmartbackup for any database configuration 4 | # Run db_smart_backup.sh whenever it is applicable on those configurations 5 | # 6 | # pg: /etc/dbsmartbackup/postgresql.conf 7 | # mysql: /etc/dbsmartbackup/mysql.conf 8 | # mongodb: /etc/dbsmartbackup/mongod.conf 9 | # slapd: /etc/dbsmartbackup/slapd.conf 10 | # redis: /etc/dbsmartbackup/redis.conf 11 | # 12 | if [ -f /etc/db_smart_backup_deactivated ];then 13 | exit 0 14 | fi 15 | 16 | LOG="${LOG:-/var/log/run_dbsmartbackup.log}" 17 | QUIET="${QUIET:-}" 18 | RET=0 19 | for i in ${@};do 20 | if [ "x${i}" = "x--no-colors" ];then 21 | export NO_COLORS="1" 22 | fi 23 | if [ "x${i}" = "x--quiet" ];then 24 | QUIET="1" 25 | fi 26 | if [ "x${i}" = "x--help" ] || \ 27 | [ "x${i}" = "x--h" ] \ 28 | ;then 29 | HELP="1" 30 | fi 31 | done 32 | __NAME__="RUN_DB_SMARTBACKUPS" 33 | if [ "x${HELP}" != "x" ];then 34 | echo "${0} [--quiet] [--no-colors]" 35 | echo "Run all found db_smart_backups configurations" 36 | exit 1 37 | fi 38 | if [ x"${DEBUG}" != "x" ];then 39 | set -x 40 | fi 41 | 42 | is_container() { 43 | echo "$(cat -e /proc/1/environ |grep container=|wc -l|sed -e "s/ //g")" 44 | } 45 | 46 | filter_host_pids() { 47 | pids="" 48 | if [ "x$(is_container)" != "x0" ];then 49 | pids="${pids} $(echo "${@}")" 50 | else 51 | for pid in 
${@};do 52 | if [ "x$(grep -q /lxc/ /proc/${pid}/cgroup 2>/dev/null;echo "${?}")" != "x0" ];then 53 | pids="${pids} $(echo "${pid}")" 54 | fi 55 | done 56 | fi 57 | echo "${pids}" | sed -e "s/\(^ \+\)\|\( \+$\)//g" 58 | } 59 | 60 | go_run_db_smart_backup() { 61 | conf="${1}" 62 | if [ "x${QUIET}" != "x" ];then 63 | db_smart_backup.sh "${conf}" 2>&1 1>> "${LOG}" 64 | if [ "x${?}" != "x0" ];then 65 | RET=1 66 | fi 67 | else 68 | db_smart_backup.sh "${conf}" 69 | if [ "x${?}" != "x0" ];then 70 | RET=1 71 | fi 72 | fi 73 | } 74 | if [ "x${PG_CONFS}" = "x" ];then 75 | # /etc/postgresql matches debian 76 | # /var/lib/pgsql matches redhat 77 | PG_CONFS=$(find /etc/postgresql /var/lib/pgsql -name postgresql.conf 2>/dev/null) 78 | fi 79 | if [ "x${PG_CONFS}" = "x" ];then 80 | PG_CONFS=/etc/postgresql.conf 81 | fi 82 | PORTS=$(egrep -h "^port\s=\s" ${PG_CONFS} 2>/dev/null|awk -F= '{print $2}'|awk '{print $1}'|sort -u) 83 | DB_SMARTBACKUPS_CONFS="${DB_SMARTBACKUPS_CONFS:-"/etc/dbsmartbackup"}" 84 | # try to run postgresql backup for any postgresql version if we found 85 | # a running socket in the standard debian location 86 | CONF="${DB_SMARTBACKUPS_CONFS}/postgresql.conf" 87 | for port in ${PORTS};do 88 | socket_path="/var/run/postgresql/.s.PGSQL.$port" 89 | if [ -e "${socket_path}" ];then 90 | # search which config the port comes from 91 | for i in /etc/postgresql/*/*/post*.conf;do 92 | if [ x"${port}" = x"$(egrep -h "^port\s=\s" "$i"|awk -F= '{print $2}'|awk '{print $1}')" ];then 93 | # search the postgres version to export binaries 94 | export PGVER="$(basename $(dirname $(dirname ${i})))" 95 | export PGVER="${PGVER:-9.3}" 96 | break 97 | fi 98 | done 99 | if [ -e "${CONF}" ];then 100 | export PGHOST="/var/run/postgresql" 101 | export HOST="${PGHOST}" 102 | export PGPORT="$port" 103 | export PORT="${PGPORT}" 104 | export PATH="/usr/lib/postgresql/${PGVER}/bin:${PATH}" 105 | if [ "x${QUIET}" = "x" ];then 106 | echo "$__NAME__: Running backup for postgresql 
${socket_path}: ${PGVER} (${CONF} $(which psql))" 107 | fi 108 | go_run_db_smart_backup "${CONF}" 109 | unset PGHOST HOST PGPORT PORT 110 | fi 111 | fi 112 | done 113 | # try to run mysql backups if the config file is present 114 | # and we found a mysqld process 115 | CONF="${DB_SMARTBACKUPS_CONFS}/mysql.conf" 116 | if [ "x$(which mysql 2>/dev/null)" != "x" ] && [ x"$(filter_host_pids $(ps aux|grep mysqld|grep -v grep)|wc -w)" != "x0" ] && [ -e "${CONF}" ];then 117 | if [ "x${QUIET}" = "x" ];then 118 | echo "$__NAME__: Running backup for mysql: $(mysql --version) (${CONF} $(which mysql))" 119 | fi 120 | go_run_db_smart_backup "${CONF}" 121 | fi 122 | if [ x"${DEBUG}" != "x" ];then 123 | set +x 124 | fi 125 | # try to run redis backups if the config file is present 126 | CONF="${DB_SMARTBACKUPS_CONFS}/redis.conf" 127 | if [ "x$(which redis-server 2>/dev/null)" != "x" ] && [ x"$(filter_host_pids $(ps aux|grep redis-server|grep -v grep)|wc -w)" != "x0" ] && [ -e "${CONF}" ];then 128 | if [ "x${QUIET}" = "x" ];then 129 | echo "$__NAME__: Running backup for redis: $(redis-server --version|head -n1) (${CONF} $(which redis-server))" 130 | fi 131 | go_run_db_smart_backup "${CONF}" 132 | fi 133 | # try to run mongodb backups if the config file is present 134 | CONF="${DB_SMARTBACKUPS_CONFS}/mongod.conf" 135 | if [ "x$(which mongod 2>/dev/null )" != "x" ] && [ x"$(filter_host_pids $(ps aux|grep mongod|grep -v grep)|wc -w)" != "x0" ] && [ -e "${CONF}" ];then 136 | if [ "x${QUIET}" = "x" ];then 137 | echo "$__NAME__: Running backup for mongod: $(mongod --version|head -n1) (${CONF} $(which mongod))" 138 | fi 139 | go_run_db_smart_backup "${CONF}" 140 | fi 141 | # try to run slapd backups if the config file is present 142 | # and we found a slapd process 143 | CONF="${DB_SMARTBACKUPS_CONFS}/slapd.conf" 144 | if [ x"$(filter_host_pids $(ps aux|grep slapd|grep -v grep|awk '{print $2}')|wc -w)" != "x0" ] && [ -e "${CONF}" ];then 145 | if [ "x${QUIET}" = "x" ];then 146 | echo "$__NAME__: Running 
backup for slapd" 147 | fi 148 | go_run_db_smart_backup "${CONF}" 149 | fi 150 | # try to run ES backups if the config file is present 151 | # and we found an elasticsearch process 152 | CONF="${DB_SMARTBACKUPS_CONFS}/elasticsearch.conf" 153 | if [ x"$(filter_host_pids $(ps aux|grep org.elasticsearch.bootstrap.Elasticsearch|grep -v grep|awk '{print $2}')|wc -w)" != "x0" ] && [ -e "${CONF}" ];then 154 | if [ "x${QUIET}" = "x" ];then 155 | echo "$__NAME__: Running backup for elasticsearch" 156 | fi 157 | go_run_db_smart_backup "${CONF}" 158 | fi 159 | if [ x"${DEBUG}" != "x" ];then 160 | set +x 161 | fi 162 | if [ "x${QUIET}" != "x" ] && [ "x${RET}" != "x0" ];then 163 | cat "${LOG}" 164 | fi 165 | exit $RET 166 | # vim:set et sts=4 ts=4 tw=00: 167 | -------------------------------------------------------------------------------- /slapd.conf.sample: -------------------------------------------------------------------------------- 1 | BACKUP_TYPE=slapd 2 | -------------------------------------------------------------------------------- /test.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # THIS TEST MUST BE RUN AS ROOT 5 | # 6 | import unittest 7 | import os 8 | import tempfile 9 | import shutil 10 | import re 11 | from subprocess import ( 12 | Popen, 13 | PIPE, 14 | check_output, 15 | CalledProcessError, 16 | STDOUT) 17 | 18 | 19 | J = os.path.join 20 | D = os.path.dirname 21 | CWD = os.path.abspath(D(__file__)) 22 | 23 | COMMON = u'''#!/usr/bin/env bash 24 | chown -R postgres "{dir}" 25 | cp $0 /fooo 26 | export DB_SMART_BACKUP_AS_FUNCS=1 27 | CWD={CWD} 28 | TOP_BACKUPDIR={TOP_BACKUPDIR} 29 | GLOBAL_SUBDIR={GLOBAL_SUBDIR} 30 | BACKUP_TYPE={BACKUP_TYPE} 31 | comp=bz2 32 | SOURCE={SOURCE} 33 | if [[ -n $SOURCE ]];then 34 | . 
"{cwd}/db_smart_backup.sh" 35 | fi 36 | dofic() {{ 37 | COMP=${{COMP:-xz}} 38 | db="${{1:-${{DB:-"foomoobar"}}}}" 39 | create_db_directories "${{db}}" 40 | local fn="$(get_backupdir)/${{db}}/dumps/${{db}}_${{DATE}}.sql" 41 | touch "$fn" "$(get_compressed_name "$fn")" 42 | link_into_dirs "${{db}}" "$fn" 43 | }} 44 | 45 | ''' 46 | NO_COMPRESS = u''' 47 | do_compression() {{ 48 | echo "do_compression $@" 49 | }} 50 | ''' 51 | NO_DB = u''' 52 | do_hook() {{ 53 | echo "do_hook $@" 54 | }} 55 | do_db_backup() {{ 56 | echo "do_db_backup $@" 57 | }} 58 | ''' 59 | NO_ROTATE = u''' 60 | do_rotate() {{ 61 | echo "do_rotate $@" 62 | }} 63 | ''' 64 | NO_ORPHAN = u''' 65 | cleanup_orphans() {{ 66 | echo "cleanup_orphans $@" 67 | }} 68 | ''' 69 | NO_MAIL = u''' 70 | do_sendmail() {{ 71 | echo "do_sendmail $@" 72 | }} 73 | ''' 74 | 75 | 76 | class TestCase(unittest.TestCase): 77 | 78 | def exec_script(self, 79 | string, 80 | common=True, 81 | no_compress=True, 82 | no_orphan=True, 83 | no_rotate=True, 84 | no_db=True, 85 | no_mail=True, 86 | source=True, 87 | stderr=None): 88 | pt = os.path.join(self.dir, 'w.sh') 89 | if not isinstance(string, unicode): 90 | string = string.decode('utf-8') 91 | script = u'' 92 | if common: 93 | script += COMMON 94 | if no_rotate: 95 | script += NO_ROTATE 96 | if no_orphan: 97 | script += NO_ORPHAN 98 | if no_compress: 99 | script += NO_COMPRESS 100 | if no_db: 101 | script += NO_DB 102 | if no_mail: 103 | script += NO_MAIL 104 | script += u'\ncd "{0}"\n'.format(self.dir) 105 | script += string 106 | script += u'\ncd "{0}"\n'.format(self.dir) 107 | with open(pt, 'w') as fic: 108 | opts = self.opts.copy() 109 | opts.update({ 110 | 'SOURCE': (source) and u'1' or u'', 111 | }) 112 | sscript = script.format(**opts).encode('utf-8') 113 | fic.write(sscript) 114 | os.chmod(pt, 0755) 115 | try: 116 | if stderr: 117 | stderr = STDOUT 118 | ret = check_output([pt], shell=True, stderr=stderr) 119 | except CalledProcessError, ex: 120 | ret = ex.output 121 | 
return ret 122 | 123 | def setUp(self): 124 | self.env = os.environ.copy() 125 | self.dir = tempfile.mkdtemp() 126 | self.opts = { 127 | 'cwd': unicode(CWD), 128 | 'CWD': unicode(CWD), 129 | 'dir': unicode(self.dir), 130 | 'TOP_BACKUPDIR': unicode(os.path.join(self.dir, "pgbackups")), 131 | 'BACKUP_TYPE': u"postgresql", 132 | 'GLOBAL_SUBDIR': u"__GLOBAL__", 133 | } 134 | 135 | def tearDown(self): 136 | if os.path.exists(self.dir): 137 | shutil.rmtree(self.dir) 138 | os.environ.update(self.env) 139 | 140 | def test_Compressor(self): 141 | TEST = ''' 142 | COMP="" COMPS="xz nocomp" XZ="/null";set_compressor;echo comp:$COMP; 143 | COMP="xz" COMPS="xz" XZ="" ;set_compressor;echo comp:$COMP; 144 | ''' 145 | ret = self.exec_script(TEST, stderr=True) 146 | self.assertEqual( 147 | 'comp:nocomp\n' 148 | 'comp:xz\n', ret) 149 | TEST = ''' 150 | COMP="" COMPS="bzip2 nocomp" BZIP2="/null";set_compressor;echo comp:$COMP; 151 | COMP="bzip2" COMPS="bzip2" BZIP2="" ;set_compressor;echo comp:$COMP; 152 | ''' 153 | ret = self.exec_script(TEST, stderr=True) 154 | self.assertEqual( 155 | 'comp:nocomp\n' 156 | 'comp:bzip2\n', ret) 157 | TEST = ''' 158 | COMP="" COMPS="gzip nocomp" GZIP="/null";set_compressor;echo comp:$COMP; 159 | COMP="gzip" COMPS="gzip" GZIP="" ;set_compressor;echo comp:$COMP; 160 | ''' 161 | ret = self.exec_script(TEST, stderr=True) 162 | self.assertEqual( 163 | 'comp:nocomp\n' 164 | 'comp:gzip\n', ret) 165 | TEST = ''' 166 | COMP="xz" 167 | COMPS="gzip bzip2 xz nocomp" 168 | GZIP="/null"; XZ="/null"; BZIP2="/null" 169 | set_compressor;echo comp:$COMP; 170 | ''' 171 | ret = self.exec_script(TEST, stderr=True) 172 | self.assertEqual( 173 | 'comp:nocomp\n', ret) 174 | 175 | def test_Compression(self): 176 | TEST = ''' 177 | XZ=/nonexist 178 | BZ2=/nonexist 179 | COMP=gzip 180 | echo "$(get_compressed_name a)" 181 | COMP=bzip2 182 | echo "$(get_compressed_name a)" 183 | COMP=foo 184 | echo "$(get_compressed_name a)" 185 | ''' 186 | ret = self.exec_script(TEST) 187 | 
self.assertEqual( 188 | 'a.gz\n' 189 | 'a.bz2\n' 190 | 'a\n', ret) 191 | TEST = ''' 192 | COMP="xz";COMPS="$COMP";set_compressor 193 | echo azerty>thiscompress 194 | do_compression thiscompress 195 | echo $COMPRESSED_NAME 196 | ''' 197 | ret = self.exec_script(TEST, stderr=True, no_compress=False) 198 | for i in [ 199 | '^thiscompress.xz.*$', 200 | ]: 201 | self.assertTrue(re.search(i, ret, re.M | re.U), i) 202 | TEST = ''' 203 | COMP="gzip";COMPS="$COMP";set_compressor 204 | echo azerty>thiscompress 205 | do_compression thiscompress 206 | echo $COMPRESSED_NAME 207 | ''' 208 | ret = self.exec_script(TEST, stderr=True, no_compress=False) 209 | for i in [ 210 | '^thiscompress.gz.*$', 211 | ]: 212 | self.assertTrue(re.search(i, ret, re.M | re.U), i) 213 | TEST = ''' 214 | COMP="bzip2";COMPS="$COMP";set_compressor 215 | echo azerty>thiscompress 216 | do_compression thiscompress 217 | echo $COMPRESSED_NAME 218 | ''' 219 | ret = self.exec_script(TEST, stderr=True, no_compress=False) 220 | for i in [ 221 | '^thiscompress.bz2.*$', 222 | ]: 223 | self.assertTrue(re.search(i, ret, re.M | re.U), i) 224 | TEST = ''' 225 | COMP="nocomp";COMPS="$COMP";set_compressor 226 | echo azerty>thiscompress 227 | do_compression thiscompress 228 | echo $COMPRESSED_NAME 229 | ''' 230 | ret = self.exec_script(TEST, stderr=True, no_compress=False) 231 | for i in [ 232 | '\[db_smart_backup\] No compressor found, no compression done', 233 | 'thiscompress', 234 | ]: 235 | self.assertTrue(re.search(i, ret, re.M | re.U), i) 236 | 237 | def test_arunas(self): 238 | TEST = ''' 239 | outputDir="{dir}/spaces" 240 | export DB_SMART_BACKUP_AS_FUNCS="1" RUNAS="postgres" PSQL="psql" 241 | mkdir -p "$outputDir/WITH QUOTES" 242 | touch "$outputDir/WITH QUOTES/"{{a,b,e}} 243 | touch "$outputDir/WITH QUOTES/"{{cc,dd,ff}} 244 | chown -Rf postgres "$outputDir/WITH QUOTES" 245 | export RUNAS="postgres" 246 | runcmd_as whoami 247 | runcmd_as ls "$outputDir/WITH QUOTES" 248 | export RUNAS="root" 249 | runcmd_as 
whoami 250 | runcmd_as ls "$outputDir/WITH QUOTES" 251 | ''' 252 | ret = self.exec_script(TEST) 253 | self.assertEqual( 254 | 'postgres\na\nb\ncc\ndd\ne\nff\n' 255 | 'root\na\nb\ncc\ndd\ne\nff\n', ret) 256 | 257 | def test_monkeypatch(self): 258 | TEST = ''' 259 | export DB_SMART_BACKUP_AS_FUNCS="1" RUNAS="postgres" PSQL="psql" 260 | . "$CWD/db_smart_backup.sh" 261 | dummy_for_tests 262 | dummy_callee_for_tests() {{ 263 | echo "foo" 264 | }} 265 | dummy_for_tests 266 | ''' 267 | ret = self.exec_script(TEST) 268 | self.assertEqual( 269 | ret, 270 | 'here\nfoo\n' 271 | ) 272 | 273 | def test_pgbins_run_as(self): 274 | TEST = ''' 275 | export DB_SMART_BACKUP_AS_FUNCS="1" RUNAS="postgres" PSQL="psql" 276 | . "$CWD/db_smart_backup.sh" 277 | export RUNAS="postgres" 278 | psql_ -c "select * from pg_roles" 279 | echo $ret 280 | ''' 281 | ret = self.exec_script(TEST) 282 | self.assertTrue('rolnam' in ret) 283 | 284 | def test_createdirs(self): 285 | TEST = ''' 286 | create_db_directories "WITH QUOTES ET utf8 éà" 287 | create_db_directories $GLOBAL_SUBDIR 288 | create_db_directories foo 289 | ''' 290 | self.exec_script(TEST) 291 | for i in[ 292 | 'postgresql/localhost/WITH QUOTES ET utf8 éà/daily', 293 | 'postgresql/localhost/WITH QUOTES ET utf8 éà/dumps', 294 | 'postgresql/localhost/WITH QUOTES ET utf8 éà/monthly', 295 | 'postgresql/localhost/WITH QUOTES ET utf8 éà/weekly', 296 | 'postgresql/localhost/foo/daily', 297 | 'postgresql/localhost/foo/dumps', 298 | 'postgresql/localhost/foo/monthly', 299 | 'postgresql/localhost/foo/weekly', 300 | 'postgresql/localhost/__GLOBAL__/daily', 301 | 'postgresql/localhost/__GLOBAL__/dumps', 302 | 'postgresql/localhost/__GLOBAL__/monthly', 303 | 'postgresql/localhost/__GLOBAL__/weekly', 304 | 'postgresql/localhost/__GLOBAL__', 305 | ]: 306 | self.assertTrue( 307 | os.path.isdir(J(self.dir, "pgbackups", i)), i) 308 | 309 | def test_backdir(self): 310 | TEST = ''' 311 | echo $(TOP_BACKUPDIR="foo" BACKUP_TYPE=bar HOST="" PGHOST="" 
get_backupdir) 312 | echo $(TOP_BACKUPDIR="foo" BACKUP_TYPE=postgresql PGHOST="" get_backupdir) 313 | echo $(TOP_BACKUPDIR="foo" BACKUP_TYPE=postgresql HOST="bar" PGHOST="foo" get_backupdir) 314 | ''' 315 | ret = self.exec_script(TEST) 316 | self.assertEqual( 317 | ret, 318 | 'foo/bar\n' 319 | 'foo/postgresql/localhost\n' 320 | 'foo/postgresql/bar\n') 321 | 322 | def test_link_into_dirs(self): 323 | TEST = ''' 324 | DB=foo;YEAR="2002";MNUM="01";DOM="08";DATE="$YEAR-$MNUM-$DOM";DOY="0$DOM";W="2" dofic 325 | ''' 326 | self.exec_script(TEST) 327 | for i in [ 328 | "postgresql/localhost/foo/weekly/foo_2002_2.sql.xz", 329 | "postgresql/localhost/foo/monthly/foo_2002_01.sql.xz", 330 | "postgresql/localhost/foo/daily/foo_2002_008_2002-01-08.sql.xz", 331 | ]: 332 | self.assertTrue(os.path.exists(J(self.dir, 'pgbackups', i)), i) 333 | 334 | def test_cleanup_orphans_1(self): 335 | common = u''' 336 | BACKUPDIR="$outputDir/pgbackups" 337 | KEEP_MONTHES=1 338 | KEEP_WEEKS=3 339 | KEEP_DAYS=9 340 | DB=foo 341 | YEAR=2002 342 | ''' 343 | RTEST1 = common + u''' 344 | DB="WITH QUOTES é utf8" 345 | find "$(get_backupdir)/$DB/"{{daily,monthly,lastsnapshots,weekly}}/ -type f|\ 346 | while read fic;do rm -f "${{fic}}";done 347 | do_cleanup_orphans 2>&1 348 | ''' 349 | TEST = common + u''' 350 | DB="WITH QUOTES é utf8" 351 | MNUM="01";DOM="08";DATE="$YEAR-$MNUM-$DOM";DOY="0$DOM"; 352 | DOY="0$DOM";W="2" 353 | FDATE="${{DATE}}_01-02-03";dofic 354 | ''' 355 | # removing files from all subdirs except the dumps dir 356 | self.exec_script(TEST) 357 | ret = self.exec_script(RTEST1) 358 | self.assertTrue( 359 | re.search( 360 | 'Pruning .*/pgbackups/postgresql/localhost/' 361 | 'WITH QUOTES é utf8/dumps/' 362 | 'WITH QUOTES é utf8_2002-01-08.sql', 363 | ret 364 | ) 365 | ) 366 | 367 | def test_cleanup_orphans_2(self): 368 | common = u''' 369 | BACKUPDIR="$outputDir/pgbackups" 370 | KEEP_LOGS=1 371 | KEEP_LASTS=24 372 | KEEP_MONTHES=1 373 | KEEP_WEEKS=3 374 | KEEP_DAYS=9 375 | DB=foo 376 | 
YEAR=2002 377 | ''' 378 | RTEST2 = common + u''' 379 | DB="WITH QUOTES é utf8" 380 | find "$(get_backupdir)/$DB/"{{dumps,daily,monthly,weekly}} -type f|\ 381 | while read fic;do rm -f "${{fic}}";done 382 | do_cleanup_orphans 2>&1 383 | ''' 384 | TEST = common + u''' 385 | DB="WITH QUOTES é utf8" 386 | MNUM="01";DOM="08";DATE="$YEAR-$MNUM-$DOM";DOY="0$DOM"; 387 | DOY="0$DOM";W="2" 388 | FDATE="${{DATE}}_01-02-03";dofic 389 | ''' 390 | # removing files from dumps + all-in-one dir 391 | # and test that orphan cleanup removes the last bits 392 | self.exec_script(TEST) 393 | ret = self.exec_script(RTEST2) 394 | 395 | self.assertTrue( 396 | re.search( 397 | 'Pruning .*WITH QUOTES', 398 | ret 399 | ) 400 | ) 401 | 402 | def test_rotate(self): 403 | rotatec = u''' 404 | BACKUPDIR="$outputDir/pgbackups" 405 | KEEP_MONTHES=1 406 | KEEP_WEEKS=3 407 | KEEP_DAYS=9 408 | DB=foo 409 | YEAR=2002 410 | ''' 411 | RTEST = rotatec + u''' 412 | do_rotate 2>&1 413 | ''' 414 | TEST = rotatec + u''' 415 | DB="WITH QUOTES é utf8" 416 | MNUM="01";DOM="08";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 417 | MNUM="01";DOM="09";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 418 | MNUM="01";DOM="10";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 419 | MNUM="01";DOM="11";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 420 | MNUM="01";DOM="12";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 421 | MNUM="01";DOM="13";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 422 | MNUM="01";DOM="14";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 423 | MNUM="01";DOM="15";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="3" dofic 424 | MNUM="01";DOM="16";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="3" dofic 425 | 
MNUM="01";DOM="17";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="5" dofic 426 | MNUM="01";DOM="17";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="5" dofic 427 | MNUM="03";DOM="01";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="60" ;W="8" dofic 428 | DB=__GLOBAL__ 429 | MNUM="01";DOM="08";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 430 | MNUM="01";DOM="09";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 431 | MNUM="01";DOM="10";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 432 | MNUM="01";DOM="11";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 433 | MNUM="01";DOM="12";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 434 | MNUM="01";DOM="13";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 435 | MNUM="01";DOM="14";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 436 | MNUM="01";DOM="15";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="3" dofic 437 | MNUM="01";DOM="16";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="3" dofic 438 | MNUM="01";DOM="17";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="5" dofic 439 | MNUM="01";DOM="17";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="5" dofic 440 | MNUM="03";DOM="01";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="60" ;W="8" dofic 441 | DB=bar 442 | MNUM="01";DOM="08";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 443 | MNUM="01";DOM="09";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 444 | MNUM="01";DOM="10";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 445 | MNUM="01";DOM="11";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 446 | 
MNUM="01";DOM="12";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 447 | MNUM="01";DOM="13";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 448 | MNUM="01";DOM="14";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="2" dofic 449 | MNUM="01";DOM="15";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="3" dofic 450 | MNUM="01";DOM="16";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="3" dofic 451 | MNUM="01";DOM="17";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="5" dofic 452 | MNUM="01";DOM="17";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="0$DOM";W="5" dofic 453 | MNUM="03";DOM="01";DATE="$YEAR-$MNUM-$DOM";FDATE="${{DATE}}_01-01-01";DOY="60" ;W="8" dofic 454 | ''' 455 | self.exec_script(TEST) 456 | counters = { 457 | '': 50, 458 | 'daily': 11, 459 | 'monthly': 2, 460 | 'weekly': 4, 461 | 'dumps': 22, 462 | 'lastsnapshots': 11, 463 | 464 | } 465 | for i in ['__GLOBAL__', 'bar', u"WITH QUOTES é utf8"]: 466 | for j in ['', 'daily', 'weekly', 'monthly', 467 | 'dumps', 'lastsnapshots']: 468 | cmd = ( 469 | u"find '{0}/pgbackups/postgresql/localhost/{1}/{2}' " 470 | u"-type f 2>/dev/null" 471 | u"|wc -l".format(self.dir, i, j)) 472 | ret = self.exec_script(cmd) 473 | counter = counters.get(j) 474 | self.assertTrue(int(ret.strip()) == counter, 475 | u"{3} {0}: {2} != {1}".format( 476 | j, ret, counter, i)) 477 | ret = self.exec_script(RTEST, no_rotate=False) 478 | counters = { 479 | '': 37, 480 | 'daily': 9, 481 | 'monthly': 1, 482 | 'weekly': 3, 483 | 'dumps': 22, 484 | 'lastsnapshots': 2, 485 | } 486 | for i in ['__GLOBAL__', 'bar', u"WITH QUOTES é utf8"]: 487 | for j in ['', 'daily', 'weekly', 'monthly', 488 | 'dumps', 'lastsnapshots']: 489 | cmd = ( 490 | u"find '{0}/pgbackups/postgresql/localhost/{1}/{2}' " 491 | u"-type f 2>/dev/null" 492 | u"|wc -l".format(self.dir, i, j)) 493 | ret = self.exec_script(cmd) 494 | counter = counters.get(j) 
495 | self.assertTrue(int(ret.strip()) == counter, 496 | u"{3} {0}: {2} != {1}".format( 497 | j, ret, counter, i)) 498 | 499 | 500 | 501 | def test_sort1(self): 502 | TEST = u''' 503 | # weekly 504 | cd "{dir}" 505 | rm -rf testtest;mkdir testtest 506 | touch testtest/foo_2001_55.sql 507 | touch testtest/foo_2001_02.sql 508 | touch testtest/foo_2001_3.sql 509 | touch testtest/foo_2001_1.sql 510 | touch testtest/foo_2001_44.sql 511 | get_sorted_files testtest 512 | ''' 513 | ret = self.exec_script(TEST) 514 | self.assertEqual( 515 | ret, 516 | ( 517 | 'foo_2001_55.sql\n' 518 | 'foo_2001_44.sql\n' 519 | 'foo_2001_3.sql\n' 520 | 'foo_2001_02.sql\n' 521 | 'foo_2001_1.sql\n' 522 | ) 523 | ) 524 | 525 | def test_sort2(self): 526 | TEST = u''' 527 | # daily 528 | cd "{dir}" 529 | rm -rf testtest;mkdir testtest 530 | touch testtest/foo_2001_034_2001_02_01.sql 531 | touch testtest/foo_2002_035_2002_02_02.sql 532 | touch testtest/foo_2001_001_2001_01_01.sql 533 | touch testtest/foo_2001_002_2001_01_02.sql 534 | touch testtest/foo_2001_003_2001_1_3.sql 535 | touch testtest/foo_2001_001_2001_01_01.sql 536 | touch testtest/foo_2001_067_2001_04_02.sql 537 | touch testtest/foo_2001_066_2001_04_01.sql 538 | touch testtest/foo_2002_001_2002_1_1.sql 539 | get_sorted_files testtest 540 | ''' 541 | ret = self.exec_script(TEST) 542 | self.assertEqual( 543 | ret, 544 | ( 545 | 'foo_2002_035_2002_02_02.sql\n' 546 | 'foo_2002_001_2002_1_1.sql\n' 547 | 'foo_2001_067_2001_04_02.sql\n' 548 | 'foo_2001_066_2001_04_01.sql\n' 549 | 'foo_2001_034_2001_02_01.sql\n' 550 | 'foo_2001_003_2001_1_3.sql\n' 551 | 'foo_2001_002_2001_01_02.sql\n' 552 | 'foo_2001_001_2001_01_01.sql\n' 553 | ) 554 | ) 555 | 556 | def test_sort3(self): 557 | TEST = u''' 558 | # dumps 559 | cd "{dir}" 560 | rm -rf testtest;mkdir testtest 561 | touch testtest/foo_2001_9_16.sql 562 | touch testtest/foo_2001_02_01.sql 563 | touch testtest/foo_2002_02_02.sql 564 | touch testtest/foo_2001_01_1.sql 565 | touch 
testtest/foo_2001_01_03.sql 566 | touch testtest/foo_2001_1_2.sql 567 | touch testtest/foo_2001_01_1.sql 568 | touch testtest/foo_2001_04_02.sql 569 | touch testtest/foo_2001_04_01.sql 570 | touch testtest/foo_2003_4_1.sql 571 | get_sorted_files testtest 572 | ''' 573 | ret = self.exec_script(TEST) 574 | self.assertEqual( 575 | ret, 576 | ( 577 | 'foo_2003_4_1.sql\n' 578 | 'foo_2002_02_02.sql\n' 579 | 'foo_2001_9_16.sql\n' 580 | 'foo_2001_04_02.sql\n' 581 | 'foo_2001_04_01.sql\n' 582 | 'foo_2001_02_01.sql\n' 583 | 'foo_2001_01_03.sql\n' 584 | 'foo_2001_1_2.sql\n' 585 | 'foo_2001_01_1.sql\n' 586 | 587 | ) 588 | ) 589 | 590 | def test_asanitize(self): 591 | TEST = u''' 592 | # monthly 593 | cd "{dir}" 594 | set_vars 595 | activate_IO_redirection 596 | cyan_log "foo" 597 | yellow_log "foo" 598 | log "foo" 599 | deactivate_IO_redirection 600 | sanitize_log 601 | cat -e "$DSB_LOGFILE" 602 | ''' 603 | ret = self.exec_script(TEST) 604 | self.assertEqual( 605 | ret.splitlines()[-3:], 606 | ['foo$', 607 | '[db_smart_backup] foo$', 608 | '[db_smart_backup] foo$'] 609 | ) 610 | 611 | def test_sort4(self): 612 | TEST = u''' 613 | # monthly 614 | cd "{dir}" 615 | rm -rf testtest;mkdir testtest 616 | touch testtest/foo_2002_02.sql 617 | touch testtest/foo_2001_02.sql 618 | touch testtest/foo_2003_2.sql 619 | touch testtest/foo_2001_01.sql 620 | touch testtest/foo_2001_04.sql 621 | touch testtest/foo_2002_2.sql 622 | get_sorted_files testtest 623 | ''' 624 | ret = self.exec_script(TEST) 625 | self.assertEqual( 626 | ret, 627 | ( 628 | 'foo_2003_2.sql\n' 629 | 'foo_2002_2.sql\n' 630 | 'foo_2002_02.sql\n' 631 | 'foo_2001_04.sql\n' 632 | 'foo_2001_02.sql\n' 633 | 'foo_2001_01.sql\n' 634 | ) 635 | ) 636 | 637 | 638 | if __name__ == '__main__': 639 | unittest.main() 640 | 641 | # vim:et:ft=python:sts=4:sw=4 642 | --------------------------------------------------------------------------------