├── CHANGELOG.md
├── Dockerfile
├── LICENSE
├── README.md
├── examples
│   └── docker-compose.yml
└── install
    └── s6
        └── services
            └── 10-mariadb-backup
                └── run

--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
## 3.0 - 2017-04-13
* Rewrite with S6.d process manager


## 2.2 - 2017-04-13
* Rebase - Alpine:edge
* Zabbix included


## 2.0.1 - 2017-01-23
* Bugfix for MD5

## 2.0 - 2017-01-18 - dave at tiredofit dot ca
* New environment variables (COMPRESSION, MD5, SPLIT_DB)
* Cleaned up


## 1.0 - 2017-01-01 - dave at tiredofit dot ca
* Initial commit

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
FROM tiredofit/alpine:edge
LABEL maintainer="Dave Conroy (dave at tiredofit dot ca)"

### Set Environment Variables
ENV ENABLE_SMTP=FALSE

### Dependencies
RUN apk update && \
    apk add \
        mysql-client \
        bzip2 \
        xz && \
    rm -rf /var/cache/apk/*

### S6 Setup
ADD install/s6 /etc/s6

### Entrypoint Configuration
ENTRYPOINT ["/init"]

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
The MIT License (MIT)

Copyright (c) 2016 Dave Conroy

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# tiredofit/mariadb-backup
## Unmaintained - Active development now resides at [tiredofit/db-backup](https://github.com/tiredofit/docker-db-backup)

[![Build Status](https://img.shields.io/docker/build/tiredofit/mariadb-backup.svg)](https://hub.docker.com/r/tiredofit/mariadb-backup)
[![Docker Pulls](https://img.shields.io/docker/pulls/tiredofit/mariadb-backup.svg)](https://hub.docker.com/r/tiredofit/mariadb-backup)
[![Docker Stars](https://img.shields.io/docker/stars/tiredofit/mariadb-backup.svg)](https://hub.docker.com/r/tiredofit/mariadb-backup)
[![Docker Layers](https://images.microbadger.com/badges/image/tiredofit/mariadb-backup.svg)](https://microbadger.com/images/tiredofit/mariadb-backup)
[![Image Size](https://img.shields.io/microbadger/image-size/tiredofit/mariadb-backup.svg)](https://microbadger.com/images/tiredofit/mariadb-backup)

# Introduction

This will build a container for backing up MySQL/MariaDB containers.

* Dump to the local filesystem
* Select database user and password
* Back up all databases
* Optionally generate an MD5 sum after each backup for verification
* Delete old backups after a specific amount of time
* Choose compression type (none, gz, bz2, xz)
* Connect to any container running on the same system
* Select how often to run a dump
* Select when to start the first dump, either a time of day or relative to container start time

This container uses Alpine:edge as a base.


[Changelog](CHANGELOG.md)

# Authors

- [Dave Conroy](https://github.com/tiredofit)

# Table of Contents

- [Introduction](#introduction)
- [Changelog](CHANGELOG.md)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [Data Volumes](#data-volumes)
- [Environment Variables](#environment-variables)
- [Maintenance](#maintenance)
- [Shell Access](#shell-access)

# Prerequisites

You must have a working MySQL or MariaDB server available for this to work properly; this image does not provide server functionality itself!


# Installation

Automated builds of the image are available on [Docker Hub](https://hub.docker.com/r/tiredofit/mariadb-backup), and this is the recommended method of installation.


```bash
docker pull tiredofit/mariadb-backup
```

# Quick Start

* The quickest way to get started is using [docker-compose](https://docs.docker.com/compose/). See the examples folder for a working [docker-compose.yml](examples/docker-compose.yml) that can be modified for development or production use; a bare `docker run` sketch is also shown below.
* Set the various [environment variables](#environment-variables) to understand the capabilities of this image.
* Map [persistent storage](#data-volumes) for access to configuration and data files for backup.

> **NOTE**: If you are using this with a docker-compose file along with a separate SQL container, take care not to set the variables to back up immediately; instead, delay execution for a minute or so (e.g. `DB_DUMP_BEGIN=+1`), otherwise the first backup will fail because the database is not yet ready to accept connections.
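Alternatively, the container can be started with plain `docker run`. The sketch below assumes a linked database container named `example-db`; the container name, paths, and credentials are illustrative only:

```bash
docker run -d \
  --name mariadb-backup \
  --link example-db:example-db \
  -v $(pwd)/backups:/backups \
  -e DB_SERVER=example-db \
  -e DB_NAME=example \
  -e DB_USER=example \
  -e DB_PASSWORD=examplepassword \
  -e DB_DUMP_FREQ=1440 \
  -e DB_DUMP_BEGIN=+1 \
  -e COMPRESSION=GZ \
  -e MD5=TRUE \
  tiredofit/mariadb-backup
```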
# Configuration

## Data Volumes

The following directories are used for configuration and can be mapped for persistent storage.

| Directory  | Description |
|------------|-------------|
| `/backups` | SQL backups |


## Environment Variables

Along with the environment variables from the [base image](https://hub.docker.com/r/tiredofit/alpine), below is the complete list of available options that can be used to customize your installation.


| Parameter | Description |
|-----------|-------------|
| `DB_SERVER` | Server (host) name of the database to back up. |
| `DB_NAME` | Database name. |
| `DB_USER` | Username for the database - use `root` to back up all of them. |
| `DB_PASSWORD` | Password for the database. |
| `DB_DUMP_FREQ` | How often to do a dump, in minutes. Defaults to `1440` minutes, or once per day. |
| `DB_DUMP_BEGIN` | What time to do the first dump. Defaults to immediate. Must be in one of two formats: |
| | Absolute: `HHMM`, e.g. `2330` or `0415` |
| | Relative: `+MM`, i.e. how many minutes after starting the container, e.g. `+0` (immediate), `+10` (in 10 minutes), or `+90` (in an hour and a half) |
| `DB_DUMP_DEBUG` | If set to any non-empty value, print copious shell script messages to the container log. Otherwise only basic messages are printed. |
| `DB_DUMP_TARGET` | Where to put the dump files; should be a directory. Defaults to `/backups`. Currently only a local path is handled: if the value starts with a `/` character, dumps are written to that path, which should be volume-mounted. |
| `DB_CLEANUP_TIME` | Value in minutes after which old backups are deleted (only evaluated when the dump frequency fires). `1440` would delete anything older than 1 day. Leave this variable unset if you want to hold onto everything. |
| `COMPRESSION` | Use either Gzip (`GZ`), Bzip2 (`BZ`), XZ (`XZ`), or none (`NONE`). Default `GZ`. |
| `MD5` | Generate an MD5 sum of the dump in the same directory, `TRUE` or `FALSE`. Default `TRUE`. |
| `SPLIT_DB` | If using `root` as the username with multiple databases on the system, set to `TRUE` to create separate backups per database instead of one combined dump. Default `FALSE`. |


## Maintenance
#### Shell Access

For debugging and maintenance purposes you may want to access the container's shell, replacing `mariadb-backup` with the name of your container:

```bash
docker exec -it mariadb-backup bash
```
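From the host you can also verify a backup against its MD5 sum. Dumps are written as `db_<database>_<server>_<timestamp>.sql` plus the compression extension, and the `.md5` file is generated from the uncompressed dump before compression, so decompress first. The filenames below are illustrative:

```bash
cd backups
gunzip db_example_example-db_20170413-020000.sql.gz
md5sum -c db_example_example-db_20170413-020000.sql.md5
```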

--------------------------------------------------------------------------------
/examples/docker-compose.yml:
--------------------------------------------------------------------------------
version: '2'

services:
  example-db:
    container_name: example-db
    image: mariadb:latest
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=examplerootpassword
      - MYSQL_DATABASE=example
      - MYSQL_USER=example
      - MYSQL_PASSWORD=examplepassword
    restart: always

  example-db-backup:
    container_name: example-db-backup
    image: tiredofit/mariadb-backup
    links:
      - example-db
    volumes:
      - ./backups:/backups
    environment:
      - DB_SERVER=example-db
      - DB_NAME=example
      - DB_USER=example
      - DB_PASSWORD=examplepassword
      - DB_DUMP_FREQ=1440
      - DB_DUMP_BEGIN=0000
      - DB_CLEANUP_TIME=8640
      - MD5=TRUE
      - COMPRESSION=XZ
      - SPLIT_DB=FALSE
    restart: always

--------------------------------------------------------------------------------
/install/s6/services/10-mariadb-backup/run:
--------------------------------------------------------------------------------
#!/usr/bin/with-contenv bash

date >/dev/null
sleep 10
if [[ -n "$DB_DUMP_DEBUG" ]]; then
  set -x
fi

# set our defaults

# DB_DUMP_FREQ = how often to run a backup, in minutes, i.e. how long to wait from the most recently completed backup to the next
DB_DUMP_FREQ=${DB_DUMP_FREQ:-1440}

# DB_DUMP_BEGIN = what time to start the first backup upon execution. If it starts with '+' it means "in x minutes"
DB_DUMP_BEGIN=${DB_DUMP_BEGIN:-+0}

# DB_DUMP_TARGET = where to place the backup file
DB_DUMP_TARGET=${DB_DUMP_TARGET:-/backups}

# login credentials
DBUSER=${DB_USER}
DBPASS=${DB_PASSWORD}

# database server
DBSERVER=${DB_SERVER}

# database name
DBNAME=${DB_NAME}

# temporary dump dir
TMPDIR=/tmp/backups
TMPRESTORE=/tmp/restorefile

# compression
COMPRESSION=${COMPRESSION:-GZ}

# Should we split DBs?
SPLIT_DB=${SPLIT_DB:-FALSE}

# MD5 SUM
MD5=${MD5:-TRUE}

# this is global, so has to be set outside
declare -A uri

#
# URI parsing function
#
# The function creates global variables with the parsed results.
# It returns 0 if parsing was successful or non-zero otherwise.
#
# [schema://][user[:password]@]host[:port][/path][?[arg1=val1]...][#fragment]
#
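# Example (illustrative): uri_parser "smb://backupuser:secret@nas/backups/db" yields
#   uri[schema]=smb  uri[user]=backupuser  uri[password]=secret  uri[host]=nas
#   uri[path]=/backups/db  uri[share]=backups  uri[sharepath]=/db
# A bare local path such as "/backups" is rewritten to "file://localhost/backups",
# giving uri[schema]=file, uri[host]=localhost and uri[path]=/backups.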
function uri_parser() {
  uri=()

  # uri capture
  full="$*"

  # safe escaping
  full="${full//\`/%60}"
  full="${full//\"/%22}"

  # URL that begins with '/' is like 'file:///'
  if [[ "${full:0:1}" == "/" ]]; then
    full="file://localhost${full}"
  fi
  # file:/// should be file://localhost/
  if [[ "${full:0:8}" == "file:///" ]]; then
    full="${full/file:\/\/\//file://localhost/}"
  fi

  # top level parsing
  pattern='^(([a-z0-9]{2,5})://)?((([^:\/]+)(:([^@\/]*))?@)?([^:\/?]+)(:([0-9]+))?)(\/[^?]*)?(\?[^#]*)?(#.*)?$'
  [[ "$full" =~ $pattern ]] || return 1

  # component extraction
  full=${BASH_REMATCH[0]}
  uri[uri]="$full"
  uri[schema]=${BASH_REMATCH[2]}
  uri[address]=${BASH_REMATCH[3]}
  uri[user]=${BASH_REMATCH[5]}
  uri[password]=${BASH_REMATCH[7]}
  uri[host]=${BASH_REMATCH[8]}
  uri[port]=${BASH_REMATCH[10]}
  uri[path]=${BASH_REMATCH[11]}
  uri[query]=${BASH_REMATCH[12]}
  uri[fragment]=${BASH_REMATCH[13]}
  if [[ ${uri[schema]} == "smb" && ${uri[path]} =~ ^/([^/]*)(/?.*)$ ]]; then
    uri[share]=${BASH_REMATCH[1]}
    uri[sharepath]=${BASH_REMATCH[2]}
  fi

  # does the user have a domain?
  if [[ -n ${uri[user]} && ${uri[user]} =~ ^([^\;]+)\;(.+)$ ]]; then
    uri[userdomain]=${BASH_REMATCH[1]}
    uri[user]=${BASH_REMATCH[2]}
  fi
  return 0
}

if [[ -n "$DB_RESTORE_TARGET" ]]; then
  uri_parser ${DB_RESTORE_TARGET}
  if [[ "${uri[schema]}" == "file" ]]; then
    cp $DB_RESTORE_TARGET $TMPRESTORE 2>/dev/null
  elif [[ "${uri[schema]}" == "s3" ]]; then
    aws s3 cp $DB_RESTORE_TARGET $TMPRESTORE
  elif [[ "${uri[schema]}" == "smb" ]]; then
    if [[ -n "${uri[user]}" ]]; then
      UPASS="-U ${uri[user]}%${uri[password]}"
    else
      UPASS=
    fi
    if [[ -n "${uri[userdomain]}" ]]; then
      UDOM="-W ${uri[userdomain]}"
    else
      UDOM=
    fi
    smbclient -N //${uri[host]}/${uri[share]} ${UPASS} ${UDOM} -c "get ${uri[sharepath]} ${TMPRESTORE}"
  fi
  # did we get a file?
  if [[ -f "$TMPRESTORE" ]]; then
    gunzip <$TMPRESTORE | mysql -h $DBSERVER -u $DBUSER -p$DBPASS
    /bin/rm -f $TMPRESTORE
    exit 0
  else
    echo "Could not find restore file $DB_RESTORE_TARGET"
    exit 1
  fi
else
  # determine target proto
  uri_parser ${DB_DUMP_TARGET}

  # wait for the next time to start a backup
  # for debugging
  echo "Starting at $(date)"
  current_time=$(date +"%s")
  # get the begin time on our date
  # REMEMBER: we are using the basic date package in alpine
  today=$(date +"%Y%m%d")
  # could be a delay in minutes or an absolute time of day
  if [[ $DB_DUMP_BEGIN =~ ^\+(.*)$ ]]; then
    waittime=$((${BASH_REMATCH[1]} * 60))
  else
    target_time=$(date --date="${today}${DB_DUMP_BEGIN}" +"%s")

    # if that time has already passed today, start at the same time tomorrow
    if [[ "$target_time" -lt "$current_time" ]]; then
      target_time=$(($target_time + 24 * 60 * 60))
    fi

    waittime=$(($target_time - $current_time))
  fi

  sleep $waittime

  # enter the loop
  while true; do
    # make sure the directory exists
    mkdir -p $TMPDIR

    # what is the name of our target?
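    # (illustrative) the resulting name looks like db_example_example-db_20170413-020000.sql,
    # with .gz/.bz2/.xz appended after compression below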
    now=$(date +"%Y%m%d-%H%M%S")
    TARGET=db_${DBNAME}_${DBSERVER}_${now}.sql

    if [ "$SPLIT_DB" = "TRUE" ]; then
      DATABASES=$(mysql -h $DBSERVER -u$DBUSER -p$DBPASS --batch -e "SHOW DATABASES;" | grep -v Database | grep -v schema)
      for db in $DATABASES; do
        if [[ "$db" != "information_schema" ]] && [[ "$db" != _* ]]; then
          echo "Dumping database: $db"
          TARGET=db_${db}_${DBSERVER}_${now}.sql

          mysqldump --max-allowed-packet=512M -h $DBSERVER -u$DBUSER -p$DBPASS --databases $db >${TMPDIR}/${TARGET}

          # take md5 sum (of the uncompressed dump)
          if [ "$MD5" = "TRUE" ]; then
            cd $TMPDIR
            md5sum ${TARGET} >${TARGET}.md5
          fi

          # do the compression
          case "$COMPRESSION" in
            "GZ" | "gz")
              gzip ${TMPDIR}/${TARGET}
              TARGET=${TARGET}.gz
              ;;
            "BZ" | "bz")
              bzip2 ${TMPDIR}/${TARGET}
              TARGET=${TARGET}.bz2
              ;;
            "XZ" | "xz")
              xz ${TMPDIR}/${TARGET}
              TARGET=${TARGET}.xz
              ;;
            "NONE" | "none") ;;
          esac

          # what kind of target do we have? Plain filesystem? smb?
          case "${uri[schema]}" in
            "file")
              mkdir -p ${uri[path]}
              mv ${TMPDIR}/*.md5 ${uri[path]}/
              mv ${TMPDIR}/${TARGET} ${uri[path]}/${TARGET}
              ;;
          esac
        fi
      done

    else

      # make the dump (default: all databases in a single file)
      mysqldump --max-allowed-packet=512M -A -h $DBSERVER -u$DBUSER -p$DBPASS >${TMPDIR}/${TARGET}

      # take md5 sum (of the uncompressed dump)
      if [ "$MD5" = "TRUE" ]; then
        cd $TMPDIR
        md5sum ${TARGET} >${TARGET}.md5
      fi

      # do the compression
      case "$COMPRESSION" in
        "GZ" | "gz")
          gzip ${TMPDIR}/${TARGET}
          TARGET=${TARGET}.gz
          ;;
        "BZ" | "bz")
          bzip2 ${TMPDIR}/${TARGET}
          TARGET=${TARGET}.bz2
          ;;
        "XZ" | "xz")
          xz ${TMPDIR}/${TARGET}
          TARGET=${TARGET}.xz
          ;;
        "NONE" | "none") ;;
      esac

      # what kind of target do we have? Plain filesystem? smb?
      case "${uri[schema]}" in
        "file")
          mkdir -p ${uri[path]}
          mv ${TMPDIR}/*.md5 ${uri[path]}/
          mv ${TMPDIR}/${TARGET} ${uri[path]}/${TARGET}
          ;;
      esac

    fi

    # Perform automatic cleanup
    if [[ -n "$DB_CLEANUP_TIME" ]]; then
      find ${DB_DUMP_TARGET} -mmin +$DB_CLEANUP_TIME -iname "db_${DBNAME}_*.*" -exec rm {} \;
    fi

    # wait
    sleep $(($DB_DUMP_FREQ * 60))

  done
fi

--------------------------------------------------------------------------------