├── .gitignore ├── Dockerfile ├── LICENSE ├── README.md ├── doc ├── backup-dashboard-sample.png ├── docker-hub-build.png ├── s3-lifecycle.png └── s3-versioning.png ├── src ├── backup.sh └── entrypoint.sh └── test ├── backing-up-check-host └── docker-compose.yml ├── backing-up-locally └── docker-compose.yml ├── backing-up-to-glacier └── docker-compose.yml ├── backing-up-to-s3 └── docker-compose.yml ├── backing-up-via-scp └── docker-compose.yml ├── backing-up-with-encryptation └── docker-compose.yml ├── backing-up-with-uid └── docker-compose.yml ├── kitchen-sink └── docker-compose.yml ├── pre-post-backup-command └── docker-compose.yml ├── pre-post-backup-exec └── docker-compose.yml ├── pre-post-scp-command └── docker-compose.yml ├── stopping-containers-while-backing-up └── docker-compose.yml └── triggering-rotate-backups └── docker-compose.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | # General 2 | node_modules/ 3 | npm-debug.log 4 | .env 5 | .DS_Store 6 | .history 7 | .vscode 8 | 9 | # Project 10 | # n/a 11 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:18.04 2 | 3 | RUN apt-get update \ 4 | && apt-get install -y --no-install-recommends curl cron ca-certificates openssh-client iputils-ping unzip \ 5 | && apt-get clean && rm -rf /var/lib/apt/lists/* 6 | 7 | # Install awscliv2 https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html 8 | # ...but only for architectures that support it (see https://github.com/futurice/docker-volume-backup/issues/29) 9 | RUN if [ $(uname -m) = "aarch64" ] || [ $(uname -m) = "x86_64" ] ; then curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-$(uname -m).zip" -o "awscliv2.zip" && unzip -q awscliv2.zip && ./aws/install -i /usr/bin -b /usr/bin && rm -rf ./aws awscliv2.zip && aws --version ; fi 10 | 11 | # Install Docker binary 12 | # a) get.docker.com 13 | # https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#install-using-the-convenience-script 14 | # RUN curl -fsSL get.docker.com | sh 15 | # b) Borrow it from Official Docker container 16 | COPY --from=docker:latest /usr/local/bin/docker /usr/local/bin/ 17 | 18 | COPY ./src/entrypoint.sh ./src/backup.sh /root/ 19 | RUN chmod a+x /root/entrypoint.sh /root/backup.sh 20 | 21 | WORKDIR /root 22 | CMD [ "/root/entrypoint.sh" ] 23 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Futurice 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # docker-volume-backup 2 | 3 | Docker image for performing simple backups of Docker volumes. Main features: 4 | 5 | - Mount volumes into the container, and they'll get backed up 6 | - Use full `cron` expressions for scheduling the backups 7 | - Backs up to local disk, to a remote host available via `scp`, to [AWS S3](https://aws.amazon.com/s3/), or to all of them 8 | - Allows triggering a backup manually if needed 9 | - Optionally stops containers for the duration of the backup, and starts them again afterward, to ensure consistent backups 10 | - Optionally `docker exec`s commands before/after backing up a container, to allow easy integration with database backup tools, for example 11 | - Optionally executes commands before/after backing up, inside the `docker-volume-backup` container and/or on the remote host 12 | - Optionally ships backup metrics to [InfluxDB](https://docs.influxdata.com/influxdb/), for monitoring 13 | - Optionally encrypts backups with `gpg` before uploading 14 | 15 | ## Examples 16 | 17 | ### Backing up locally 18 | 19 | Say you're running some dashboards with [Grafana](https://grafana.com/) and want to back them up: 20 | 21 | ```yml 22 | version: "3" 23 | 24 | services: 25 | 26 | dashboard: 27 | image: grafana/grafana:7.4.5 28 | volumes: 29 | - grafana-data:/var/lib/grafana # This is where Grafana keeps its data 30 | 31 | backup: 32 | image: jareware/docker-volume-backup 33 | volumes: 34 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 35 | - ./backups:/archive # Mount a local folder as the backup archive 36 | 37 | volumes: 38 | grafana-data: 39 | ``` 40 | 41 | This will back up the Grafana data volume, once per day, and write it to `./backups` with a filename like `backup-2018-11-27T16-51-56.tar.gz`. 42 | 43 | ### Backing up to S3 44 | 45 | Off-site backups are better, though: 46 | 47 | ```yml 48 | version: "3" 49 | 50 | services: 51 | 52 | dashboard: 53 | image: grafana/grafana:7.4.5 54 | volumes: 55 | - grafana-data:/var/lib/grafana # This is where Grafana keeps its data 56 | 57 | backup: 58 | image: jareware/docker-volume-backup 59 | environment: 60 | AWS_S3_BUCKET_NAME: my-backup-bucket # S3 bucket which you own, and already exists 61 | AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} # Read AWS secrets from environment (or a .env file) 62 | AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} 63 | volumes: 64 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 65 | 66 | volumes: 67 | grafana-data: 68 | ``` 69 | 70 | This configuration will back up to AWS S3 instead. See below for additional tips about [S3 bucket setup](#rotation-for-s3-backups). 71 | 72 | ### Restoring from S3 73 | 74 | Downloading backups from S3 can be done however you usually interact with S3, e.g. via the `aws s3` CLI or the AWS Web Console.
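For instance, with the AWS CLI you could list the bucket contents and download a specific backup (the bucket name and filename below are illustrative):

```
$ aws s3 ls s3://my-backup-bucket/
$ aws s3 cp s3://my-backup-bucket/backup-2018-11-27T16-51-56.tar.gz .
```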
75 | 76 | However, if you're on the host that's running this image, you can also download the latest backup with: 77 | 78 | ``` 79 | $ docker-compose exec -T backup bash -c 'aws s3 cp s3://$AWS_S3_BUCKET_NAME/$BACKUP_FILENAME -' > restore.tar.gz 80 | ``` 81 | 82 | From here on out the restore process will depend on a variety of things, like whether you've encrypted the backups, how your volumes are configured, and what application it is exactly that you're restoring. 83 | 84 | But for the sake of example, to finish the restore for the above Grafana setup, you would: 85 | 86 | 1. Extract the contents of the backup, with e.g. `tar -xf restore.tar.gz`. This would leave you with a new directory called `backup` in the current dir. 87 | 1. Figure out the mount point of the `grafana-data` volume, with e.g. `docker volume ls` and then `docker volume inspect`. Let's say it ends up being `/var/lib/docker/volumes/bla-bla/_data`. This is where your live Grafana keeps its data on the host file system. 88 | 1. Stop Grafana, with `docker-compose stop dashboard`. 89 | 1. Move any existing data aside, with e.g. `sudo mv /var/lib/docker/volumes/bla-bla/_data{,_replaced_during_restore}`. You can also just remove it, if you like to live dangerously. 90 | 1. Move the backed up data to where the live Grafana can find it, with e.g. `sudo cp -r backup/grafana-data /var/lib/docker/volumes/bla-bla/_data`. 91 | 1. Depending on the Grafana version, [you may need to set some permissions manually](http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later), e.g. `sudo chown -R 472:472 /var/lib/docker/volumes/bla-bla/_data`. 92 | 1. Start Grafana back up, with `docker-compose start dashboard`. Your Grafana instance should now have travelled back in time to its latest backup. 93 | 94 | ### Backing up to remote host by means of SCP 95 | 96 | You can also upload your backups to a remote host by means of secure copy (SCP) based on SSH. To do so, [create an SSH key pair if you do not have one yet and copy the public key to the remote host where your backups should be stored](https://foofunc.com/how-to-create-and-add-ssh-key-in-remote-ssh-server/). Then, start the backup container by setting the variables `SCP_HOST`, `SCP_USER`, `SCP_DIRECTORY`, and provide the private SSH key by mounting it into `/ssh/id_rsa`. 97 | 98 | In the example, we store the backups in the remote host folder `/home/pi/backups` and use the default SSH key located at `~/.ssh/id_rsa`: 99 | 100 | ```yml 101 | version: "3" 102 | 103 | services: 104 | 105 | dashboard: 106 | image: grafana/grafana:7.4.5 107 | volumes: 108 | - grafana-data:/var/lib/grafana # This is where Grafana keeps its data 109 | 110 | backup: 111 | image: jareware/docker-volume-backup 112 | environment: 113 | SCP_HOST: 192.168.0.42 # Remote host IP address 114 | SCP_USER: pi # Remote host user to log in 115 | SCP_DIRECTORY: /home/pi/backups # Remote host directory 116 | volumes: 117 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 118 | - ~/.ssh/id_rsa:/ssh/id_rsa:ro # Mount the SSH private key (as read-only) 119 | 120 | volumes: 121 | grafana-data: 122 | ``` 123 | 124 | ### Triggering a backup manually 125 | 126 | Sometimes it's useful to trigger a backup manually, e.g. right before making some big changes.
127 | 128 | This is as simple as: 129 | 130 | ``` 131 | $ docker-compose exec backup ./backup.sh 132 | 133 | [INFO] Backup starting 134 | 135 | 8 containers running on host in total 136 | 1 containers marked to be stopped during backup 137 | 138 | ... 139 | ... 140 | ... 141 | 142 | [INFO] Backup finished 143 | 144 | Will wait for next scheduled backup 145 | ``` 146 | 147 | If you **only** want to back up manually (i.e. not on a schedule), you should either: 148 | 149 | 1. Run the image without `docker-compose`, override the image entrypoint to `/root/backup.sh`, and ensure you match your env-vars with what the default `src/entrypoint.sh` would normally set up for you, or 150 | 1. Just use `BACKUP_CRON_EXPRESSION="#"` (to ensure scheduled backup never runs) and execute `docker-compose exec backup ./backup.sh` whenever you want to run a backup 151 | 152 | ### Stopping containers while backing up 153 | 154 | It's not generally safe to read files to which other processes might be writing. You may end up with corrupted copies. 155 | 156 | You can give the backup container access to the Docker socket, and label any containers that need to be stopped while the backup runs: 157 | 158 | ```yml 159 | version: "3" 160 | 161 | services: 162 | 163 | dashboard: 164 | image: grafana/grafana:7.4.5 165 | volumes: 166 | - grafana-data:/var/lib/grafana # This is where Grafana keeps its data 167 | labels: 168 | # Adding this label means this container should be stopped while it's being backed up: 169 | - "docker-volume-backup.stop-during-backup=true" 170 | 171 | backup: 172 | image: jareware/docker-volume-backup 173 | environment: 174 | AWS_S3_BUCKET_NAME: my-backup-bucket # S3 bucket which you own, and already exists 175 | AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} # Read AWS secrets from environment (or a .env file) 176 | AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} 177 | volumes: 178 | - /var/run/docker.sock:/var/run/docker.sock:ro # Allow use of the "stop-during-backup" feature 179 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 180 | 181 | volumes: 182 | grafana-data: 183 | ``` 184 | 185 | This configuration allows you to safely back up things like databases, if you can tolerate a bit of downtime. 
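Since this feature is driven by a plain label filter, you can preview which containers would be stopped by running the same `docker ps` query the backup script uses internally:

```
$ docker ps --filter "label=docker-volume-backup.stop-during-backup=true"
```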
186 | 187 | ### Pre/post backup exec 188 | 189 | If you don't want to stop the container while it's being backed up, and the container comes with a backup utility (this is true for most databases), you can label the container with commands to run before/after backing it up: 190 | 191 | ```yml 192 | version: "3" 193 | 194 | services: 195 | 196 | database: 197 | image: influxdb:1.5.4 198 | volumes: 199 | - influxdb-data:/var/lib/influxdb # This is where InfluxDB keeps its data 200 | - influxdb-temp:/tmp/influxdb # This is our temp space for the backup 201 | labels: 202 | # These commands will be exec'd (in the same container) before/after the backup starts: 203 | - docker-volume-backup.exec-pre-backup=influxd backup -portable /tmp/influxdb 204 | - docker-volume-backup.exec-post-backup=rm -rfv /tmp/influxdb 205 | 206 | backup: 207 | image: jareware/docker-volume-backup 208 | volumes: 209 | - /var/run/docker.sock:/var/run/docker.sock:ro # Allow use of the "pre/post exec" feature 210 | - influxdb-temp:/backup/influxdb:ro # Mount the temp space so it gets backed up 211 | - ./backups:/archive # Mount a local folder as the backup archive 212 | 213 | volumes: 214 | influxdb-data: 215 | influxdb-temp: 216 | ``` 217 | 218 | The above configuration will perform a `docker exec` for the database container with `influxd backup`, right before the backup runs. The resulting DB snapshot is written to a temp volume (`influxdb-temp`), which is then backed up. Note that the main InfluxDB data volume (`influxdb-data`) isn't used at all, as it'd be unsafe to read while the DB process is running. 219 | 220 | Similarly, after the temp volume has been backed up, it's cleaned up with another `docker exec` in the database container, this time just invoking `rm`. 221 | 222 | If you need a more complex script for pre/post exec, consider mounting and invoking a shell script instead. 223 | 224 | ## Configuration 225 | 226 | Variable | Default | Notes 227 | --- | --- | --- 228 | `BACKUP_SOURCES` | `/backup` | Where to read data from. This can be a space-separated list if you need to back up multiple paths, when mounting multiple volumes for example. On the other hand, you can also just mount multiple volumes under `/backup` to have all of them backed up. 229 | `BACKUP_CRON_EXPRESSION` | `@daily` | Standard debian-flavored `cron` expression for when the backup should run. Use e.g. `0 4 * * *` to back up at 4 AM every night. See the [man page](http://man7.org/linux/man-pages/man8/cron.8.html) or [crontab.guru](https://crontab.guru/) for more. 230 | `BACKUP_FILENAME` | `backup-%Y-%m-%dT%H-%M-%S.tar.gz` | File name template for the backup file. It is passed through `date` for formatting. See the [man page](http://man7.org/linux/man-pages/man1/date.1.html) for more. 231 | `BACKUP_ARCHIVE` | `/archive` | When this path is available within the container (i.e. you've mounted a Docker volume there), a finished backup file will get archived there after each run. 232 | `PRE_BACKUP_COMMAND` | | Command that is executed before the backup is created. 233 | `POST_BACKUP_COMMAND` | | Command that is executed after the backup has been transferred. 234 | `BACKUP_UID` | `root (0)` | After the backup file has been moved to the archive location, its user ownership is changed to this UID. 235 | `BACKUP_GID` | `$BACKUP_UID` | After the backup file has been moved to the archive location, its group ownership is changed to this GID.
236 | `BACKUP_WAIT_SECONDS` | `0` | The backup script will sleep this many seconds between re-starting stopped containers, and proceeding with archiving/uploading the backup. This can be useful if you don't want the load/network spike of a large upload immediately after the load/network spike of container startup. 237 | `BACKUP_HOSTNAME` | `$(hostname)` | Name of the host (i.e. Docker container) in which the backup runs. Mostly useful if you want a specific hostname to be associated with backup metrics (see InfluxDB support). 238 | `BACKUP_CUSTOM_LABEL` | | When provided, the [start/stop](#stopping-containers-while-backing-up) and [pre/post exec](#prepost-backup-exec) logic only applies to containers with this custom label. 239 | `CHECK_HOST` | | When provided, the availability of the named host will be checked. The host should be the destination host of the backups. If the host is available, the backup is conducted as normal; otherwise, the backup is skipped. 240 | `AWS_S3_BUCKET_NAME` | | When provided, the resulting backup file will be uploaded to this S3 bucket after the backup has run. You may include slashes after the bucket name if you want to upload into a specific path within the bucket, e.g. `your-bucket-name/backups/daily`. 241 | `AWS_GLACIER_VAULT_NAME` | | When provided, the resulting backup file will be uploaded to this AWS Glacier vault after the backup has run. 242 | `AWS_ACCESS_KEY_ID` | | Required when using `AWS_S3_BUCKET_NAME`. 243 | `AWS_SECRET_ACCESS_KEY` | | Required when using `AWS_S3_BUCKET_NAME`. 244 | `AWS_DEFAULT_REGION` | | Optional when using `AWS_S3_BUCKET_NAME`. Allows you to override the AWS CLI default region. Usually not needed. 245 | `AWS_EXTRA_ARGS` | | Optional additional args for the AWS CLI. Useful for e.g. providing `--endpoint-url` for S3-compatible systems, such as [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/), [MinIO](https://min.io/) and the like. 246 | `SCP_HOST` | | When provided, the resulting backup file will be uploaded by means of `scp` to this host. 247 | `SCP_USER` | | User name to log in to `SCP_HOST`. 248 | `SCP_DIRECTORY` | | Directory on `SCP_HOST` where the backup file is stored. 249 | `PRE_SCP_COMMAND` | | Command that is executed on `SCP_HOST` before the backup is transferred. 250 | `POST_SCP_COMMAND` | | Command that is executed on `SCP_HOST` after the backup has been transferred. 251 | `GPG_PASSPHRASE` | | When provided, the backup will be encrypted with `gpg` using this passphrase. 252 | `INFLUXDB_URL` | | Required when sending metrics to InfluxDB. 253 | `INFLUXDB_MEASUREMENT` | `docker_volume_backup` | Measurement name to use when sending metrics to InfluxDB. 254 | `INFLUXDB_API_TOKEN` | | When provided, backup metrics will be sent to an InfluxDB instance using the API token for authorization. If API tokens are not supported by the InfluxDB version in use, `INFLUXDB_CREDENTIALS` must be provided instead. 255 | `INFLUXDB_ORGANIZATION` | | Required when using `INFLUXDB_API_TOKEN`; e.g. `personal`. 256 | `INFLUXDB_BUCKET` | | Required when using `INFLUXDB_API_TOKEN`; e.g. `backup_metrics`. 257 | `INFLUXDB_CREDENTIALS` | | When provided, backup metrics will be sent to an InfluxDB instance using `user:password` authentication. This is required if `INFLUXDB_API_TOKEN` is not provided. 258 | `INFLUXDB_DB` | | Required when using `INFLUXDB_URL`; e.g. `my_database`. 259 | `TZ` | `UTC` | Which timezone `cron` should use, e.g. `America/New_York` or `Europe/Warsaw`.
See [full list of available time zones](http://manpages.ubuntu.com/manpages/bionic/man3/DateTime::TimeZone::Catalog.3pm.html). 260 | 261 | ## Metrics 262 | 263 | After the backup, the script will collect some metrics from the run. By default, they're just written out as logs. For example: 264 | 265 | ``` 266 | docker_volume_backup 267 | host=my-demo-host 268 | size_compressed_bytes=219984 269 | containers_total=4 270 | containers_stopped=1 271 | time_wall=61.6939337253571 272 | time_total=1.69393372535706 273 | time_compress=0.171068429946899 274 | time_upload=0.56016993522644 275 | ``` 276 | 277 | If so configured, they can also be shipped to an InfluxDB instance. This allows you to set up monitoring and/or alerts for them. Here's a sample visualization on Grafana: 278 | 279 | ![Backup dashboard sample](doc/backup-dashboard-sample.png) 280 | 281 | ## Automatic backup rotation 282 | 283 | You probably don't want to keep all backups forever. A more common strategy is to hold onto a few recent ones, and remove older ones as they become irrelevant. There's no built-in support for this in `docker-volume-backup`, but you can trigger an external Docker container that includes [`rotate-backups`](https://pypi.org/project/rotate-backups/). In the examples, we draw on [docker-rotate-backups](https://github.com/jan-brinkmann/docker-rotate-backups). 284 | 285 | In order to start an external Docker container, access to `docker.sock` has to be granted (as already seen in the section on [stopping containers while backing up](#stopping-containers-while-backing-up)). Then, `docker-rotate-backups` can be run on local directories as well as on remote directories. 286 | 287 | The default rotation scheme implemented in `docker-rotate-backups` preserves seven daily, four weekly, twelve monthly, and all yearly backups. For detailed information on customizing the rotation scheme, we refer to the [documentation](https://github.com/jan-brinkmann/docker-rotate-backups#how-to-customize). 288 | 289 | ### Rotation for local backups 290 | 291 | Let `/home/pi/backups` be the path to your local backups. Then, initialize the environment variable `POST_BACKUP_COMMAND` with the following command. 292 | ``` 293 | environment: 294 | POST_BACKUP_COMMAND: "docker run --rm -e DRY_RUN=false -v /home/pi/backups:/archive ghcr.io/jan-brinkmann/docker-rotate-backups" 295 | volumes: 296 | - /var/run/docker.sock:/var/run/docker.sock:ro 297 | - /home/pi/backups:/archive 298 | ``` 299 | 300 | ### Rotation for backups transferred via SCP 301 | 302 | Here, let `/home/pi/backups` be the backup directory on a remote host. To run `docker-rotate-backups` on that directory, the command in `POST_BACKUP_COMMAND` has to include all necessary information in order to access the remote host by means of SSH. Remember, if you transfer your [backups by means of SCP](#backing-up-to-remote-host-by-means-of-scp), the information in `SSH_USER`, `SSH_HOST`, and `SSH_ARCHIVE`, as well as the SSH key, is already available.
303 | ``` 304 | environment: 305 | SCP_HOST: 192.168.0.42 306 | SCP_USER: pi 307 | SCP_DIRECTORY: /home/pi/backups 308 | POST_BACKUP_COMMAND: "docker run --rm -e DRY_RUN=false -e SSH_USER=pi -e SSH_HOST=192.168.0.42 -e SSH_ARCHIVE=/home/pi/backups -v /home/pi/.ssh/id_rsa:/root/.ssh/id_rsa:ro ghcr.io/jan-brinkmann/docker-rotate-backups" 309 | volumes: 310 | - /var/run/docker.sock:/var/run/docker.sock:ro 311 | - /home/pi/.ssh/id_rsa:/ssh/id_rsa:ro 312 | ``` 313 | 314 | ### Rotation for S3 backups 315 | 316 | Amazon S3 has [Versioning](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) and [Object Lifecycle Management](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html) features that can be useful for backups. 317 | 318 | First, you can enable versioning for your backup bucket: 319 | 320 | ![S3 versioning](doc/s3-versioning.png) 321 | 322 | Then, you can change your backup filename to a static one, for example: 323 | 324 | ```yml 325 | environment: 326 | BACKUP_FILENAME: latest.tar.gz 327 | ``` 328 | 329 | This allows you to retain previous versions of the backup file, but the _most recent_ version is always available with the same filename: 330 | 331 | $ aws s3 cp s3://my-backup-bucket/latest.tar.gz . 332 | download: s3://my-backup-bucket/latest.tar.gz to ./latest.tar.gz 333 | 334 | To make sure your bucket doesn't continue to grow indefinitely, you can enable some lifecycle rules: 335 | 336 | ![S3 lifecycle](doc/s3-lifecycle.png) 337 | 338 | These rules will: 339 | 340 | - Move non-latest backups to a cheaper, long-term storage class ([Glacier](https://aws.amazon.com/glacier/)) 341 | - Permanently remove backups after a year 342 | - Still always keep the latest backup available (even after a year has passed) 343 | 344 | ## Testing 345 | 346 | A bunch of test cases exist under [`test`](test/). To run them: 347 | 348 | cd test/backing-up-locally/ 349 | docker-compose stop && docker-compose rm -f && docker-compose build && docker-compose up 350 | 351 | Some cases may need secrets available in the environment, e.g. for S3 uploads to work. 352 | 353 | ## Releasing 354 | 355 | 1. [Draft a new release on GitHub](https://github.com/jareware/docker-volume-backup/releases/new) 356 | 1. `docker buildx build --platform linux/amd64,linux/arm64 -t jareware/docker-volume-backup:latest --push .` 357 | 1.
`docker buildx build --platform linux/amd64,linux/arm64 -t jareware/docker-volume-backup:x.y.z --push .` 358 | -------------------------------------------------------------------------------- /doc/backup-dashboard-sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jareware/docker-volume-backup/a2d4ac75b980a5f4130a5d43717f6cff3b79202f/doc/backup-dashboard-sample.png -------------------------------------------------------------------------------- /doc/docker-hub-build.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jareware/docker-volume-backup/a2d4ac75b980a5f4130a5d43717f6cff3b79202f/doc/docker-hub-build.png -------------------------------------------------------------------------------- /doc/s3-lifecycle.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jareware/docker-volume-backup/a2d4ac75b980a5f4130a5d43717f6cff3b79202f/doc/s3-lifecycle.png -------------------------------------------------------------------------------- /doc/s3-versioning.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jareware/docker-volume-backup/a2d4ac75b980a5f4130a5d43717f6cff3b79202f/doc/s3-versioning.png -------------------------------------------------------------------------------- /src/backup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Cronjobs don't inherit their env, so load from file 4 | source env.sh 5 | 6 | function info { 7 | bold="\033[1m" 8 | reset="\033[0m" 9 | echo -e "\n$bold[INFO] $1$reset\n" 10 | } 11 | 12 | if [ "$CHECK_HOST" != "false" ]; then 13 | info "Check host availability" 14 | TEMPFILE="$(mktemp)" 15 | ping -c 1 $CHECK_HOST | grep '1 packets transmitted, 1 received' > "$TEMPFILE" 16 | PING_RESULT="$(cat $TEMPFILE)" 17 | if [ ! -z "$PING_RESULT" ]; then 18 | echo "$CHECK_HOST is available." 19 | else 20 | echo "$CHECK_HOST is not available." 21 | info "Backup skipped" 22 | exit 0 23 | fi 24 | fi 25 | 26 | info "Backup starting" 27 | TIME_START="$(date +%s.%N)" 28 | DOCKER_SOCK="/var/run/docker.sock" 29 | 30 | if [ ! 
-z "$BACKUP_CUSTOM_LABEL" ]; then 31 | CUSTOM_LABEL="--filter label=$BACKUP_CUSTOM_LABEL" 32 | fi 33 | 34 | if [ -S "$DOCKER_SOCK" ]; then 35 | TEMPFILE="$(mktemp)" 36 | docker ps --format "{{.ID}}" --filter "label=docker-volume-backup.stop-during-backup=true" $CUSTOM_LABEL > "$TEMPFILE" 37 | CONTAINERS_TO_STOP="$(cat $TEMPFILE | tr '\n' ' ')" 38 | CONTAINERS_TO_STOP_TOTAL="$(cat $TEMPFILE | wc -l)" 39 | CONTAINERS_TOTAL="$(docker ps --format "{{.ID}}" | wc -l)" 40 | rm "$TEMPFILE" 41 | echo "$CONTAINERS_TOTAL containers running on host in total" 42 | echo "$CONTAINERS_TO_STOP_TOTAL containers marked to be stopped during backup" 43 | else 44 | CONTAINERS_TO_STOP_TOTAL="0" 45 | CONTAINERS_TOTAL="0" 46 | echo "Cannot access \"$DOCKER_SOCK\", won't look for containers to stop" 47 | fi 48 | 49 | if [ "$CONTAINERS_TO_STOP_TOTAL" != "0" ]; then 50 | info "Stopping containers" 51 | docker stop $CONTAINERS_TO_STOP 52 | fi 53 | 54 | if [ -S "$DOCKER_SOCK" ]; then 55 | for id in $(docker ps --filter label=docker-volume-backup.exec-pre-backup $CUSTOM_LABEL --format '{{.ID}}'); do 56 | name="$(docker ps --filter id=$id --format '{{.Names}}')" 57 | cmd="$(docker ps --filter id=$id --format '{{.Label "docker-volume-backup.exec-pre-backup"}}')" 58 | info "Pre-exec command for: $name" 59 | echo docker exec $id $cmd # echo the command we're using, for debuggability 60 | eval docker exec $id $cmd 61 | done 62 | fi 63 | 64 | if [ ! -z "$PRE_BACKUP_COMMAND" ]; then 65 | info "Pre-backup command" 66 | echo "$PRE_BACKUP_COMMAND" 67 | eval $PRE_BACKUP_COMMAND 68 | fi 69 | 70 | info "Creating backup" 71 | BACKUP_FILENAME="$(date +"${BACKUP_FILENAME:-backup-%Y-%m-%dT%H-%M-%S.tar.gz}")" 72 | TIME_BACK_UP="$(date +%s.%N)" 73 | tar -czvf "$BACKUP_FILENAME" $BACKUP_SOURCES # allow the var to expand, in case we have multiple sources 74 | BACKUP_SIZE="$(du --bytes $BACKUP_FILENAME | sed 's/\s.*$//')" 75 | TIME_BACKED_UP="$(date +%s.%N)" 76 | 77 | if [ ! -z "$GPG_PASSPHRASE" ]; then 78 | info "Encrypting backup" 79 | gpg --symmetric --cipher-algo aes256 --batch --passphrase "$GPG_PASSPHRASE" -o "${BACKUP_FILENAME}.gpg" $BACKUP_FILENAME 80 | rm $BACKUP_FILENAME 81 | BACKUP_FILENAME="${BACKUP_FILENAME}.gpg" 82 | fi 83 | 84 | if [ -S "$DOCKER_SOCK" ]; then 85 | for id in $(docker ps --filter label=docker-volume-backup.exec-post-backup $CUSTOM_LABEL --format '{{.ID}}'); do 86 | name="$(docker ps --filter id=$id --format '{{.Names}}')" 87 | cmd="$(docker ps --filter id=$id --format '{{.Label "docker-volume-backup.exec-post-backup"}}')" 88 | info "Post-exec command for: $name" 89 | echo docker exec $id $cmd # echo the command we're using, for debuggability 90 | eval docker exec $id $cmd 91 | done 92 | fi 93 | 94 | if [ "$CONTAINERS_TO_STOP_TOTAL" != "0" ]; then 95 | info "Starting containers back up" 96 | docker start $CONTAINERS_TO_STOP 97 | fi 98 | 99 | info "Waiting before processing" 100 | echo "Sleeping $BACKUP_WAIT_SECONDS seconds..." 101 | sleep "$BACKUP_WAIT_SECONDS" 102 | 103 | TIME_UPLOAD="0" 104 | TIME_UPLOADED="0" 105 | if [ ! -z "$AWS_S3_BUCKET_NAME" ]; then 106 | info "Uploading backup to S3" 107 | echo "Will upload to bucket \"$AWS_S3_BUCKET_NAME\"" 108 | TIME_UPLOAD="$(date +%s.%N)" 109 | aws $AWS_EXTRA_ARGS s3 cp --only-show-errors "$BACKUP_FILENAME" "s3://$AWS_S3_BUCKET_NAME/" 110 | echo "Upload finished" 111 | TIME_UPLOADED="$(date +%s.%N)" 112 | fi 113 | if [ ! 
-z "$AWS_GLACIER_VAULT_NAME" ]; then 114 | info "Uploading backup to GLACIER" 115 | echo "Will upload to vault \"$AWS_GLACIER_VAULT_NAME\"" 116 | TIME_UPLOAD="$(date +%s.%N)" 117 | aws $AWS_EXTRA_ARGS glacier upload-archive --account-id - --vault-name "$AWS_GLACIER_VAULT_NAME" --body "$BACKUP_FILENAME" 118 | echo "Upload finished" 119 | TIME_UPLOADED="$(date +%s.%N)" 120 | fi 121 | 122 | if [ ! -z "$SCP_HOST" ]; then 123 | info "Uploading backup by means of SCP" 124 | SSH_CONFIG="-o StrictHostKeyChecking=no -i /ssh/id_rsa" 125 | if [ ! -z "$PRE_SCP_COMMAND" ]; then 126 | echo "Pre-scp command: $PRE_SCP_COMMAND" 127 | ssh $SSH_CONFIG $SCP_USER@$SCP_HOST $PRE_SCP_COMMAND 128 | fi 129 | echo "Will upload to $SCP_HOST:$SCP_DIRECTORY" 130 | TIME_UPLOAD="$(date +%s.%N)" 131 | scp $SSH_CONFIG $BACKUP_FILENAME $SCP_USER@$SCP_HOST:$SCP_DIRECTORY 132 | echo "Upload finished" 133 | TIME_UPLOADED="$(date +%s.%N)" 134 | if [ ! -z "$POST_SCP_COMMAND" ]; then 135 | echo "Post-scp command: $POST_SCP_COMMAND" 136 | ssh $SSH_CONFIG $SCP_USER@$SCP_HOST $POST_SCP_COMMAND 137 | fi 138 | fi 139 | 140 | if [ -d "$BACKUP_ARCHIVE" ]; then 141 | info "Archiving backup" 142 | mv -v "$BACKUP_FILENAME" "$BACKUP_ARCHIVE/$BACKUP_FILENAME" 143 | if (($BACKUP_UID > 0)); then 144 | chown -v $BACKUP_UID:$BACKUP_GID "$BACKUP_ARCHIVE/$BACKUP_FILENAME" 145 | fi 146 | fi 147 | 148 | if [ ! -z "$POST_BACKUP_COMMAND" ]; then 149 | info "Post-backup command" 150 | echo "$POST_BACKUP_COMMAND" 151 | eval $POST_BACKUP_COMMAND 152 | fi 153 | 154 | 155 | if [ -f "$BACKUP_FILENAME" ]; then 156 | info "Cleaning up" 157 | rm -vf "$BACKUP_FILENAME" 158 | fi 159 | 160 | info "Collecting metrics" 161 | TIME_FINISH="$(date +%s.%N)" 162 | INFLUX_LINE="$INFLUXDB_MEASUREMENT\ 163 | ,host=$BACKUP_HOSTNAME\ 164 | \ 165 | size_compressed_bytes=$BACKUP_SIZE\ 166 | ,containers_total=$CONTAINERS_TOTAL\ 167 | ,containers_stopped=$CONTAINERS_TO_STOP_TOTAL\ 168 | ,time_wall=$(perl -E "say $TIME_FINISH - $TIME_START")\ 169 | ,time_total=$(perl -E "say $TIME_FINISH - $TIME_START - $BACKUP_WAIT_SECONDS")\ 170 | ,time_compress=$(perl -E "say $TIME_BACKED_UP - $TIME_BACK_UP")\ 171 | ,time_upload=$(perl -E "say $TIME_UPLOADED - $TIME_UPLOAD")\ 172 | " 173 | echo "$INFLUX_LINE" | sed 's/ /,/g' | tr , '\n' 174 | 175 | if [ ! -z "$INFLUXDB_CREDENTIALS" ]; then 176 | info "Shipping metrics" 177 | curl \ 178 | --silent \ 179 | --include \ 180 | --request POST \ 181 | --user "$INFLUXDB_CREDENTIALS" \ 182 | "$INFLUXDB_URL/write?db=$INFLUXDB_DB" \ 183 | --data-binary "$INFLUX_LINE" 184 | elif [ ! 
-z "$INFLUXDB_API_TOKEN" ]; then 185 | info "Shipping metrics" 186 | curl \ 187 | --silent \ 188 | --include \ 189 | --request POST \ 190 | --header "Authorization: Token $INFLUXDB_API_TOKEN" \ 191 | "$INFLUXDB_URL/api/v2/write?org=$INFLUXDB_ORGANIZATION&bucket=$INFLUXDB_BUCKET" \ 192 | --data-binary "$INFLUX_LINE" 193 | fi 194 | 195 | info "Backup finished" 196 | echo "Will wait for next scheduled backup" 197 | -------------------------------------------------------------------------------- /src/entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Exit immediately on error 4 | set -e 5 | 6 | # Write cronjob env to file, fill in sensible defaults, and read them back in 7 | cat <<EOF > env.sh 8 | BACKUP_SOURCES="${BACKUP_SOURCES:-/backup}" 9 | BACKUP_CRON_EXPRESSION="${BACKUP_CRON_EXPRESSION:-@daily}" 10 | AWS_S3_BUCKET_NAME="${AWS_S3_BUCKET_NAME:-}" 11 | AWS_GLACIER_VAULT_NAME="${AWS_GLACIER_VAULT_NAME:-}" 12 | AWS_EXTRA_ARGS="${AWS_EXTRA_ARGS:-}" 13 | PRE_BACKUP_COMMAND="${PRE_BACKUP_COMMAND:-}" 14 | POST_BACKUP_COMMAND="${POST_BACKUP_COMMAND:-}" 15 | SCP_HOST="${SCP_HOST:-}" 16 | SCP_USER="${SCP_USER:-}" 17 | SCP_DIRECTORY="${SCP_DIRECTORY:-}" 18 | PRE_SCP_COMMAND="${PRE_SCP_COMMAND:-}" 19 | POST_SCP_COMMAND="${POST_SCP_COMMAND:-}" 20 | BACKUP_FILENAME=${BACKUP_FILENAME:-"backup-%Y-%m-%dT%H-%M-%S.tar.gz"} 21 | BACKUP_ARCHIVE="${BACKUP_ARCHIVE:-/archive}" 22 | BACKUP_UID=${BACKUP_UID:-0} 23 | BACKUP_GID=${BACKUP_GID:-$BACKUP_UID} 24 | BACKUP_WAIT_SECONDS="${BACKUP_WAIT_SECONDS:-0}" 25 | BACKUP_HOSTNAME="${BACKUP_HOSTNAME:-$(hostname)}" 26 | GPG_PASSPHRASE="${GPG_PASSPHRASE:-}" 27 | INFLUXDB_URL="${INFLUXDB_URL:-}" 28 | INFLUXDB_DB="${INFLUXDB_DB:-}" 29 | INFLUXDB_CREDENTIALS="${INFLUXDB_CREDENTIALS:-}" 30 | INFLUXDB_ORGANIZATION="${INFLUXDB_ORGANIZATION:-}" 31 | INFLUXDB_BUCKET="${INFLUXDB_BUCKET:-}" 32 | INFLUXDB_API_TOKEN="${INFLUXDB_API_TOKEN:-}" 33 | INFLUXDB_MEASUREMENT="${INFLUXDB_MEASUREMENT:-docker_volume_backup}" 34 | BACKUP_CUSTOM_LABEL="${BACKUP_CUSTOM_LABEL:-}" 35 | CHECK_HOST="${CHECK_HOST:-"false"}" 36 | EOF 37 | chmod a+x env.sh 38 | source env.sh 39 | 40 | # Configure AWS CLI 41 | mkdir -p .aws 42 | cat <<EOF > .aws/credentials 43 | [default] 44 | aws_access_key_id = ${AWS_ACCESS_KEY_ID} 45 | aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY} 46 | EOF 47 | if [ ! -z "$AWS_DEFAULT_REGION" ]; then 48 | cat <<EOF > .aws/config 49 | [default] 50 | region = ${AWS_DEFAULT_REGION} 51 | EOF 52 | fi 53 | 54 | # Add our cron entry, and direct its stdout & stderr to Docker's stdout 55 | echo "Installing cron.d entry: docker-volume-backup" 56 | echo "$BACKUP_CRON_EXPRESSION root /root/backup.sh > /proc/1/fd/1 2>&1" > /etc/cron.d/docker-volume-backup 57 | 58 | # Let cron take the wheel 59 | echo "Starting cron in foreground with expression: $BACKUP_CRON_EXPRESSION" 60 | cron -f 61 | -------------------------------------------------------------------------------- /test/backing-up-check-host/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana 7 | ports: 8 | - "3000:3000" 9 | volumes: 10 | - grafana-data:/var/lib/grafana 11 | 12 | backup: 13 | build: ../.. 14 | environment: 15 | BACKUP_CRON_EXPRESSION: "* * * * *" 16 | CHECK_HOST: "192.168.0.2" # The script sends a ping to 192.168.0.2. If the host answers the ping, the backup starts. Otherwise, it is skipped.
You can also provide a hostname that is resolved by means of DNS. 17 | 18 | volumes: 19 | - grafana-data:/backup/grafana-data:ro 20 | - ./backups:/archive 21 | 22 | volumes: 23 | grafana-data: 24 | -------------------------------------------------------------------------------- /test/backing-up-locally/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana 7 | ports: 8 | - "3000:3000" 9 | volumes: 10 | - grafana-data:/var/lib/grafana 11 | 12 | backup: 13 | build: ../.. 14 | environment: 15 | BACKUP_CRON_EXPRESSION: "* * * * *" 16 | volumes: 17 | - grafana-data:/backup/grafana-data:ro 18 | - ./backups:/archive 19 | 20 | volumes: 21 | grafana-data: 22 | -------------------------------------------------------------------------------- /test/backing-up-to-glacier/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana 7 | ports: 8 | - "3000:3000" 9 | volumes: 10 | - grafana-data:/var/lib/grafana 11 | 12 | backup: 13 | build: ../.. 14 | environment: 15 | BACKUP_CRON_EXPRESSION: "* * * * *" 16 | BACKUP_FILENAME: glacier-test.tar.gz 17 | AWS_GLACIER_VAULT_NAME: Backups 18 | AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} 19 | AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} 20 | AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION} 21 | volumes: 22 | - grafana-data:/backup/grafana-data:ro 23 | 24 | volumes: 25 | grafana-data: 26 | -------------------------------------------------------------------------------- /test/backing-up-to-s3/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana 7 | ports: 8 | - "3000:3000" 9 | volumes: 10 | - grafana-data:/var/lib/grafana 11 | 12 | backup: 13 | build: ../.. 14 | environment: 15 | BACKUP_CRON_EXPRESSION: "* * * * *" 16 | BACKUP_FILENAME: latest.tar.gz 17 | AWS_S3_BUCKET_NAME: docker-volume-backup-test-bucket 18 | AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} 19 | AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} 20 | volumes: 21 | - grafana-data:/backup/grafana-data:ro 22 | 23 | volumes: 24 | grafana-data: 25 | -------------------------------------------------------------------------------- /test/backing-up-via-scp/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana:7.4.5 7 | volumes: 8 | - grafana-data:/var/lib/grafana # This is where Grafana keeps its data 9 | 10 | backup: 11 | build: ../..
12 | environment: 13 | SCP_HOST: 192.168.0.42 # Remote host IP address 14 | SCP_USER: pi # Remote host user to log in 15 | SCP_DIRECTORY: /home/pi/backups # Remote host directory 16 | volumes: 17 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 18 | - ~/.ssh/id_rsa:/ssh/id_rsa:ro # Mount the SSH private key (as read-only) 19 | 20 | volumes: 21 | grafana-data: 22 | -------------------------------------------------------------------------------- /test/backing-up-with-encryptation/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana 7 | ports: 8 | - "3000:3000" 9 | volumes: 10 | - grafana-data:/var/lib/grafana 11 | 12 | backup: 13 | build: ../.. 14 | environment: 15 | BACKUP_CRON_EXPRESSION: "* * * * *" 16 | GPG_PASSPHRASE: changeme 17 | volumes: 18 | - grafana-data:/backup/grafana-data:ro 19 | - ./backups:/archive 20 | 21 | volumes: 22 | grafana-data: 23 | -------------------------------------------------------------------------------- /test/backing-up-with-uid/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana 7 | ports: 8 | - "3000:3000" 9 | volumes: 10 | - grafana-data:/var/lib/grafana 11 | 12 | backup: 13 | build: ../.. 14 | environment: 15 | BACKUP_UID: 1000 16 | BACKUP_GID: 100 17 | BACKUP_CRON_EXPRESSION: "* * * * *" 18 | volumes: 19 | - grafana-data:/backup/grafana-data:ro 20 | - ./backups:/archive 21 | 22 | volumes: 23 | grafana-data: 24 | -------------------------------------------------------------------------------- /test/kitchen-sink/docker-compose.yml: -------------------------------------------------------------------------------- 1 | # To verify variable substitution from environment: 2 | # $ docker-compose config 3 | # If there's a .env file in the same directory, it will be sourced automatically 4 | 5 | version: "3" 6 | 7 | services: 8 | 9 | foo: 10 | image: ubuntu 11 | command: "bash -c 'while true; do echo I am FOO | tee -a /data/log; sleep 5; done'" 12 | volumes: 13 | - foo-data:/data 14 | 15 | bar: 16 | image: ubuntu 17 | command: "bash -c 'while true; do echo I am BAR | tee -a /data/log; sleep 5; done'" 18 | volumes: 19 | - bar-data:/data 20 | labels: 21 | - "docker-volume-backup.stop-during-backup=true" 22 | 23 | backup: 24 | build: . 
25 | environment: 26 | BACKUP_HOSTNAME: docker-volume-backup 27 | BACKUP_CRON_EXPRESSION: "* * * * *" 28 | BACKUP_FILENAME: "backup-%Y-%m-%d-%H-%M-%S.tar.gz" 29 | BACKUP_WAIT_SECONDS: 0 30 | AWS_S3_BUCKET_NAME: docker-volume-backup-test 31 | AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}" 32 | AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}" 33 | AWS_DEFAULT_REGION: "${AWS_DEFAULT_REGION}" 34 | BACKUP_ARCHIVE: /archive 35 | volumes: 36 | - "/var/run/docker.sock:/var/run/docker.sock:ro" # allow Docker commands from within the container 37 | - "foo-data:/backup/foo-backup:ro" 38 | - "bar-data:/backup/bar-backup:ro" 39 | - "./archive:/archive" 40 | 41 | volumes: 42 | foo-data: 43 | bar-data: 44 | -------------------------------------------------------------------------------- /test/pre-post-backup-command/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana:7.4.5 7 | volumes: 8 | - grafana-data:/var/lib/grafana # This is where Grafana keeps its data 9 | 10 | backup: 11 | build: ../.. 12 | environment: 13 | # Command that is executed before the backup is created: 14 | PRE_BACKUP_COMMAND: "ls -la /archive" 15 | # Command that is executed after the backup has been transferred: 16 | # "Delete all files in /archive that are older than seven days." 17 | POST_BACKUP_COMMAND: "rm $$(find /archive/* -mtime +7)" 18 | volumes: 19 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 20 | - /home/pi/backups:/archive # Mount the directory where the backups are being stored 21 | 22 | volumes: 23 | grafana-data: 24 | -------------------------------------------------------------------------------- /test/pre-post-backup-exec/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | database: 5 | image: influxdb:1.5.4 6 | volumes: 7 | - influxdb-data:/var/lib/influxdb 8 | - influxdb-temp:/tmp/influxdb 9 | labels: 10 | - docker-volume-backup.exec-pre-backup=bash -c 'influxd backup -portable /tmp/influxdb' 11 | - docker-volume-backup.exec-post-backup=rm -rfv /tmp/influxdb/* 12 | 13 | backup: 14 | build: ../.. 15 | environment: 16 | BACKUP_CRON_EXPRESSION: "* * * * *" 17 | volumes: 18 | - /var/run/docker.sock:/var/run/docker.sock:ro 19 | - influxdb-temp:/backup/influxdb:ro 20 | - ./backups:/archive 21 | 22 | volumes: 23 | influxdb-data: 24 | influxdb-temp: 25 | -------------------------------------------------------------------------------- /test/pre-post-scp-command/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana:7.4.5 7 | volumes: 8 | - grafana-data:/var/lib/grafana # This is where Grafana keeps its data 9 | 10 | backup: 11 | build: ../..
12 | environment: 13 | SCP_HOST: 192.168.0.42 # Remote host IP address 14 | SCP_USER: pi # Remote host user to log in 15 | SCP_DIRECTORY: /home/pi/backups # Remote host directory 16 | # Command that is executed before the backup is transferred by means of scp: 17 | PRE_SCP_COMMAND: "ls -la /home/pi/backups" 18 | # Command that is executed after the backup has been transferred by means of scp: 19 | POST_SCP_COMMAND: "rotate-backups --daily 7 --weekly 4 --monthly 12 --yearly always /home/pi/backups" 20 | volumes: 21 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 22 | - ~/.ssh/id_rsa:/ssh/id_rsa:ro # Mount the SSH private key (as read-only) 23 | 24 | volumes: 25 | grafana-data: 26 | -------------------------------------------------------------------------------- /test/stopping-containers-while-backing-up/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana 7 | ports: 8 | - "3000:3000" 9 | volumes: 10 | - grafana-data:/var/lib/grafana 11 | labels: 12 | - "docker-volume-backup.stop-during-backup=true" 13 | 14 | backup: 15 | build: ../.. 16 | environment: 17 | BACKUP_CRON_EXPRESSION: "* * * * *" 18 | AWS_S3_BUCKET_NAME: docker-volume-backup-test-bucket 19 | AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} 20 | AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} 21 | volumes: 22 | - /var/run/docker.sock:/var/run/docker.sock:ro 23 | - grafana-data:/backup/grafana-data:ro 24 | 25 | volumes: 26 | grafana-data: 27 | -------------------------------------------------------------------------------- /test/triggering-rotate-backups/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | 5 | dashboard: 6 | image: grafana/grafana:7.4.5 7 | volumes: 8 | - grafana-data:/var/lib/grafana # This is where Grafana keeps its data 9 | 10 | backup-locally: 11 | build: ../.. 12 | environment: 13 | BACKUP_CRON_EXPRESSION: "0 * * * *" 14 | # Command that is executed after the backup has been transferred: 15 | # "Trigger external Docker container that includes rotate-backups and disable dry-run option." 16 | POST_BACKUP_COMMAND: "docker run --rm -e DRY_RUN=false -v /home/pi/backups:/archive ghcr.io/jan-brinkmann/docker-rotate-backups" 17 | volumes: 18 | - /var/run/docker.sock:/var/run/docker.sock:ro 19 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 20 | - /home/pi/backups:/archive # Mount the directory where the backups are being stored 21 | 22 | backup-scp: 23 | build: ../.. 24 | environment: 25 | BACKUP_CRON_EXPRESSION: "30 * * * *" 26 | SCP_HOST: 192.168.0.42 # Remote host IP address 27 | SCP_USER: pi # Remote host user to log in 28 | SCP_DIRECTORY: /home/pi/backups # Remote host directory 29 | # Command that is executed after the backup has been transferred: 30 | # "Trigger external Docker container that includes rotate-backups and disable dry-run option."
31 | POST_BACKUP_COMMAND: "docker run --rm -e DRY_RUN=false -e SSH_USER=pi -e SSH_HOST=192.168.0.42 -e SSH_ARCHIVE=/home/pi/backups -v /home/pi/.ssh/id_rsa:/root/.ssh/id_rsa:ro ghcr.io/jan-brinkmann/docker-rotate-backups" 32 | volumes: 33 | - /var/run/docker.sock:/var/run/docker.sock:ro # Mount the docker.sock file (as read-only) 34 | - /home/pi/.ssh/id_rsa:/ssh/id_rsa:ro # Mount the SSH private key (as read-only) 35 | - grafana-data:/backup/grafana-data:ro # Mount the Grafana data volume (as read-only) 36 | 37 | volumes: 38 | grafana-data: 39 | --------------------------------------------------------------------------------