├── .gitignore ├── COPYING ├── README.md └── arkiv /.gitignore: -------------------------------------------------------------------------------- 1 | *.swp 2 | *.autosave 3 | -------------------------------------------------------------------------------- /COPYING: -------------------------------------------------------------------------------- 1 | Copyright © 2017, Amaury Bouchard 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 8 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Arkiv 2 | ===== 3 | 4 | Easy-to-use backup and archive tool. 5 | 6 | Arkiv is designed to **backup** local files and [MySQL](https://www.mysql.com/) databases, and **archive** them on [Amazon S3](https://aws.amazon.com/s3/) and [Amazon Glacier](https://aws.amazon.com/glacier/). 7 | Backup files are removed (locally and from Amazon S3) after defined delays. 
8 | 9 | Arkiv can back up your data on a **daily** or an **hourly** basis (you can choose the day and/or the hours at which it is launched). 10 | It is written in pure shell, so it can be used on any Unix/Linux machine. 11 | 12 | Arkiv was created by [Amaury Bouchard](http://amaury.net) and is [open-source software](#what-is-arkivs-license). 13 | 14 | 15 | ************************************************************************ 16 | 17 | Table of contents 18 | ----------------- 19 | 20 | 1. [How it works](#1-how-it-works) 21 | 1. [General Idea](#11-general-idea) 22 | 2. [Step-by-step](#12-step-by-step) 23 | 2. [Installation](#2-installation) 24 | 1. [Prerequisites](#21-prerequisites) 25 | 2. [Source installation](#22-source-installation) 26 | 3. [Configuration](#23-configuration) 27 | 3. [Frequently Asked Questions](#3-frequently-asked-questions) 28 | 1. [Cost and license](#31-cost-and-license) 29 | 2. [Configuration](#32-configuration) 30 | 3. [Files backup](#33-files-backup) 31 | 4. [Output and log](#34-output-and-log) 32 | 5. [Database backup](#35-database-backup) 33 | 6. [Crontab](#36-crontab) 34 | 7. [Miscellaneous](#37-miscellaneous) 35 | 36 | 37 | ************************************************************************ 38 | 39 | ## 1. How it works 40 | 41 | ### 1.1 General idea 42 | 43 | - Generate backup data from local files and databases. 44 | - Store data on the local drive for a few days/weeks, in order to be able to restore fresh data very quickly. 45 | - Store data on Amazon S3 for a few weeks/months, if you need to restore them easily. 46 | - Store data on Amazon Glacier forever. It's an incredibly cheap storage service that should be used instead of Amazon S3 for long-term preservation. 47 | 48 | Data are deleted from the local drive and Amazon S3 when the configured delays are reached. 
49 | If your data are backed up multiple times per day (not just every day), it's possible to define a fine-grained purge of the files stored on the local drive and on Amazon S3. 50 | For example, it's possible to: 51 | - remove half the backups after two days 52 | - keep only 2 backups per day after 2 weeks 53 | - keep 1 backup per day after 3 weeks 54 | - remove all files after 2 months 55 | 56 | The same kind of configuration can be defined for Amazon S3 archives. 57 | 58 | ### 1.2 Step-by-step 59 | 60 | **Starting** 61 | 1. Arkiv is launched every day (or every hour) by Crontab. 62 | 2. It creates a directory dedicated to the backups of the day (or the backups of the hour). 63 | 64 | **Backup** 65 | 1. Each configured path is `tar`'ed and compressed, and the result is stored in the dedicated directory. 66 | 2. *If MySQL backups are configured*, the needed databases are dumped and compressed in a sub-directory. 67 | 3. *If encryption is configured*, the backup files are encrypted. 68 | 4. Checksums are computed for all the generated files. These checksums are useful to verify that the files are not corrupted after being transferred over a network. 69 | 70 | **Archiving** 71 | 1. *If Amazon Glacier is configured*, all the generated backup files (not the checksums file) are sent to Amazon Glacier. For each one of them, a JSON file is created with the response's content; these files are important, because they contain the *archiveId* needed to restore the file. 72 | 2. *If Amazon S3 is configured*, the whole directory (backup files + checksums file + Amazon Glacier JSON files) is copied to Amazon S3. 73 | 74 | **Purge** 75 | 1. After a configured delay, backup files are removed from the local disk drive. 76 | 2. *If Amazon S3 is configured*, all backup files are removed from Amazon S3 after a configured delay. 
The checksums file and the Amazon Glacier JSON files are *not* removed, because they are needed to restore data from Amazon Glacier and check their integrity. 77 | 78 | 79 | ************************************************************************ 80 | 81 | ## 2. Installation 82 | 83 | ### 2.1 Prerequisites 84 | 85 | #### 2.1.1 Basic 86 | Several tools are needed by Arkiv to work correctly. They are usually installed by default on most Unix/Linux distributions. 87 | - A not-so-old [`bash`](https://en.wikipedia.org/wiki/Bash_(Unix_shell)) Shell interpreter located at `/bin/bash` (mandatory) 88 | - [`tar`](https://en.wikipedia.org/wiki/Tar_(computing)) for file concatenation (mandatory) 89 | - [`gzip`](https://en.wikipedia.org/wiki/Gzip), [`bzip2`](https://en.wikipedia.org/wiki/Bzip2), [`xz`](https://en.wikipedia.org/wiki/Xz) or [`zstd`](https://en.wikipedia.org/wiki/Zstd) for compression (at least one) 90 | - [`openssl`](https://en.wikipedia.org/wiki/OpenSSL) for encryption (optional) 91 | - [`sha256sum`](https://en.wikipedia.org/wiki/Sha256sum) for checksum computation (mandatory) 92 | - [`tput`](https://en.wikipedia.org/wiki/Tput) for [ANSI text formatting](https://en.wikipedia.org/wiki/ANSI_escape_code) (optional: can be manually deactivated; automatically deactivated if not installed) 93 | 94 | To install these tools on Ubuntu: 95 | ```shell 96 | # apt-get install tar gzip bzip2 xz-utils openssl coreutils ncurses-bin 97 | ``` 98 | 99 | #### 2.1.2 Encryption 100 | If you want to encrypt the generated backup files (stored locally as well as the ones archived on Amazon S3 and Amazon Glacier), you need to create a symmetric encryption key. 
101 | 102 | Use this command to do it (you can adapt the destination path): 103 | ```shell 104 | # openssl rand -out ~/.ssh/symkey.bin 32 105 | ``` 106 | 107 | #### 2.1.3 MySQL 108 | If you want to back up MySQL databases, you have to install [`mysqldump`](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) or [`xtrabackup`](https://www.percona.com/software/mysql-database/percona-xtrabackup). 109 | 110 | To install `mysqldump` on Ubuntu: 111 | ```shell 112 | # apt-get install mysql-client 113 | ``` 114 | 115 | To install `xtrabackup` on Ubuntu (see [documentation](https://www.percona.com/doc/percona-xtrabackup/2.4/installation/apt_repo.html)): 116 | ```shell 117 | # wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb 118 | # dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb 119 | # apt-get update 120 | # apt-get install percona-xtrabackup-24 121 | ``` 122 | 123 | #### 2.1.4 Amazon Web Services 124 | If you want to archive the generated backup files on Amazon S3/Glacier, you have to do the following: 125 | - Create a dedicated bucket on [Amazon S3](https://aws.amazon.com/s3/). 126 | - If you want to archive on [Amazon Glacier](https://aws.amazon.com/glacier/), create a dedicated vault in the same region. 127 | - Create an [IAM](https://aws.amazon.com/iam/) user with read-write access to this bucket and this vault (if needed). 128 | - Install the [AWS-CLI](https://aws.amazon.com/cli/) program and [configure it](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html). 
129 | 130 | Install AWS-CLI on Ubuntu: 131 | ```shell 132 | # apt-get install awscli 133 | ``` 134 | 135 | Configure the program (you will be asked for the AWS user's access key and secret key, and the region to use): 136 | ```shell 137 | # aws configure 138 | ``` 139 | 140 | 141 | ### 2.2 Source Installation 142 | 143 | Get the latest version: 144 | ```shell 145 | # wget https://github.com/Digicreon/Arkiv/archive/refs/tags/1.0.0.zip -O Arkiv-1.0.0.zip 146 | # unzip Arkiv-1.0.0.zip 147 | 148 | or 149 | 150 | # wget https://github.com/Digicreon/Arkiv/archive/refs/tags/1.0.0.tar.gz -O Arkiv-1.0.0.tar.gz 151 | # tar xzf Arkiv-1.0.0.tar.gz 152 | ``` 153 | 154 | 155 | ### 2.3 Configuration 156 | 157 | ```shell 158 | # cd Arkiv-1.0.0 159 | # ./arkiv config 160 | ``` 161 | 162 | Some questions will be asked about: 163 | - If you want a simple installation (one backup per day, every day, at midnight). 164 | - The local machine's name (will be used as a subdirectory of the S3 bucket). 165 | - The compression type to use. 166 | - If you want to encrypt the generated backup files. 167 | - Which files must be backed up. 168 | - Everything about MySQL backup (SQL or binary backup, which databases, host/login/password for the connection). 169 | - Where to store the compressed files resulting from the backup. 170 | - Where to archive data on Amazon S3 and Amazon Glacier (if you want to). 171 | - When to purge files (locally and on Amazon S3). 172 | 173 | Finally, the program will offer to add the Arkiv execution to the user's crontab. 174 | 175 | 176 | ************************************************************************ 177 | 178 | ## 3. Frequently Asked Questions 179 | 180 | ### 3.1 Cost and license 181 | 182 | #### What is Arkiv's license? 183 | Arkiv is licensed under the terms of the [MIT License](https://en.wikipedia.org/wiki/MIT_License), which is a permissive open-source free software license. 184 | 185 | See the `COPYING` file for more details. 
186 | 187 | #### How much will I pay on Amazon S3/Glacier? 188 | You can use the [Amazon Web Services Calculator](https://calculator.s3.amazonaws.com/index.html) to estimate the cost depending on your usage. 189 | 190 | 191 | ### 3.2 Configuration 192 | 193 | #### How to choose the compression type? 194 | You can use one of the four common compression tools (`gzip`, `bzip2`, `xz`, `zstd`). 195 | 196 | Usually, you can follow these guidelines: 197 | - Use `zstd` if you want the best compression and decompression speed. 198 | - Use `xz` if you want the best compression ratio. 199 | - Use `gzip` or `bzip2` if you want the best portability (`xz` and `zstd` are younger and less widespread). 200 | 201 | Here are some helpful links: 202 | - [Gzip vs Bzip2 vs XZ Performance Comparison](https://www.rootusers.com/gzip-vs-bzip2-vs-xz-performance-comparison/) 203 | - [Quick Benchmark: Gzip vs Bzip2 vs LZMA vs XZ vs LZ4 vs LZO](https://catchchallenger.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO) 204 | - [Zstandard presentation and benchmarks](https://facebook.github.io/zstd/) 205 | 206 | The default choice is `zstd`, because it has the best compression/speed ratio. 207 | 208 | #### I chose the simple configuration mode (one backup per day, every day). Why is there a directory called "00:00" in the backup directory of the day? 209 | This directory means that your Arkiv backup process is launched at midnight. 210 | 211 | You may think that the backed up data should have been stored directly in the directory of the day, without a sub-directory for the hour (because there is only one backup per day). But if you later changed the configuration to perform several backups per day, Arkiv would have trouble managing purges. 212 | 213 | #### How to execute Arkiv with different configurations? 214 | You can add the path to the configuration file as a parameter of the program on the command line. 
215 | 216 | To generate the configuration file: 217 | ```shell 218 | # ./arkiv config --config=/path/to/config/file 219 | or 220 | # ./arkiv config -c /path/to/config/file 221 | ``` 222 | 223 | To launch Arkiv: 224 | ```shell 225 | # ./arkiv exec --config=/path/to/config/file 226 | or 227 | # ./arkiv exec -c /path/to/config/file 228 | ``` 229 | 230 | You can modify the Crontab to add the path too. 231 | 232 | #### Is it possible to use a public/private key infrastructure for the encryption functionality? 233 | It is not possible to encrypt data with a public key; OpenSSL's [PKI](https://en.wikipedia.org/wiki/Public_key_infrastructure) isn't designed to encrypt large data. Encryption is done using a 256-bit AES algorithm, which is symmetric. 234 | To ensure that only the owner of a private key is able to decrypt the data, without transferring that key, you have to encrypt the symmetric key using the public key, and then send the encrypted key to the private key's owner. 235 | 236 | Here are the steps to do it (key files are usually located in `~/.ssh/`). 
237 | 238 | Create the symmetric key: 239 | ```shell 240 | # openssl rand -out symkey.bin 32 241 | ``` 242 | 243 | Convert the public and private keys to PEM format (most people have RSA keys that they use with [SSH](https://en.wikipedia.org/wiki/Secure_Shell)): 244 | ```shell 245 | # openssl rsa -in id_rsa -outform pem -out id_rsa.pem 246 | # openssl rsa -in id_rsa -pubout -outform pem -out id_rsa.pub.pem 247 | ``` 248 | 249 | Encrypt the symmetric key with the public key: 250 | ```shell 251 | # openssl rsautl -encrypt -inkey id_rsa.pub.pem -pubin -in symkey.bin -out symkey.bin.encrypt 252 | ``` 253 | 254 | To decrypt the encrypted symmetric key using the private key: 255 | ```shell 256 | # openssl rsautl -decrypt -inkey id_rsa.pem -in symkey.bin.encrypt -out symkey.bin 257 | ``` 258 | 259 | To decrypt the data file: 260 | ```shell 261 | # openssl enc -d -aes-256-cbc -in data.tgz.encrypt -out data.tgz -pass file:symkey.bin 262 | ``` 263 | 264 | #### Why is it not possible to archive on Amazon Glacier without archiving on Amazon S3? 265 | When you send a file to Amazon Glacier, you get back an *archiveId* (the file's unique identifier). Arkiv takes this information and writes it to a file; this file is then copied to Amazon S3. 266 | If the *archiveId* is lost, you will not be able to get the file back from Amazon Glacier. An archived file that you can't restore is useless. Even if it's possible to get the list of archived files from Amazon Glacier, it's a slow process; it's more flexible to store *archive identifiers* in Amazon S3 (and the cost to store them is insignificant). 267 | 268 | 269 | ### 3.3 Files backup 270 | 271 | #### How to exclude files and directories from archives? 272 | Arkiv provides several ways to exclude content from archives. 273 | 274 | First of all, it follows the [CACHEDIR.TAG](https://bford.info/cachedir/) standard. 
If a directory contains a `CACHEDIR.TAG` file, the directory and the `CACHEDIR.TAG` file itself will be added to the archive, but not the directory's other files and subdirectories. 275 | 276 | If you want to exclude the content of a directory in a way similar to the previous one, but you don't want to create a `CACHEDIR.TAG` file (to avoid exclusion of the directory by other programs), you can create an empty `.arkiv-exclude` file in the directory. The directory and the `.arkiv-exclude` file will be added to the archive (to keep track of the folder, along with the information that its content was excluded), but not the other files and subdirectories contained in the given directory. 277 | 278 | If you want to exclude specific files from a directory, you can create a `.arkiv-ignore` file in the directory, and write a list of exclusion patterns into it. These patterns will be used to exclude files and subdirectories directly stored in the given directory. 279 | 280 | If you create a `.arkiv-ignore-recursive` file in a directory, patterns will be read from this file to define recursive exclusions in the given directory and all its subdirectories. 281 | 282 | 283 | ### 3.4 Output and log 284 | 285 | #### Is it possible to execute Arkiv without any output on STDOUT and/or STDERR? 286 | Yes, you just have to add some options on the command line: 287 | - `--no-stdout` (or `-o`) to avoid output on STDOUT 288 | - `--no-stderr` (or `-e`) to avoid output on STDERR 289 | 290 | You can use these options separately or together. 291 | 292 | #### How to write the execution log into a file? 293 | You can use a dedicated parameter: 294 | ```shell 295 | # ./arkiv exec --log=/path/to/log/file 296 | or 297 | # ./arkiv exec -l /path/to/log/file 298 | ``` 299 | 300 | This does not disable output on the terminal. You can use the options `--no-stdout` and `--no-stderr` for that (see previous answer). 301 | 302 | #### How to write the log to syslog? 
303 | Add the option `--syslog` (or `-s`) on the command line or in the Crontab command. 304 | 305 | #### How to get pure text (without ANSI commands) in Arkiv's log file? 306 | Add the option `--no-ansi` (or `-n`) on the command line or in the Crontab command. It acts on terminal output as well as on the log file (see the `--log` option above) and syslog (see the `--syslog` option above). 307 | 308 | #### I open the Arkiv log file with less, and it's full of strange characters 309 | Unlike `more` and `tail`, `less` doesn't interpret ANSI text formatting commands (bold, color, etc.) by default. 310 | To enable their interpretation, use the `-r` or `-R` option. 311 | 312 | 313 | ### 3.5 Database backup 314 | 315 | #### What kind of database backups are available? 316 | Arkiv can generate two kinds of database backups: 317 | - SQL backups created using [`mysqldump`](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html). 318 | - Binary backups using [`xtrabackup`](https://www.percona.com/software/mysql-database/percona-xtrabackup). 319 | 320 | There are two types of binary backups: 321 | - Full backups; the server's files are entirely copied. 322 | - Incremental backups; only the data modified since the last backup (full or incremental) are copied. 323 | 324 | You must do a full backup before performing any incremental backup. 325 | 326 | #### Which databases and table engines can be backed up? 327 | If you choose SQL backups (using `mysqldump`), Arkiv can manage any table engine supported by [MySQL](https://www.mysql.com/), [MariaDB](https://mariadb.org/) and [Percona Server](https://www.percona.com/software/mysql-database/percona-server). 328 | 329 | If you choose binary backups (using `xtrabackup`), Arkiv can handle: 330 | - MySQL (5.1 and above) or MariaDB, with InnoDB, MyISAM and XtraDB tables. 331 | - Percona Server with XtraDB tables. 332 | 333 | Note that MyISAM tables can't be incrementally backed up. They are copied entirely each time an incremental backup is performed. 
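As a sketch, the full and incremental binary backups described above chain together like this with `xtrabackup` (the target directories are hypothetical, and a running MySQL server is assumed):

```shell
# xtrabackup --backup --target-dir=/backups/full
# xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full
# xtrabackup --backup --target-dir=/backups/inc2 --incremental-basedir=/backups/inc1
```

Each incremental backup points its `--incremental-basedir` at the previous backup (full or incremental), so only the pages modified since that backup are copied; the restore procedure for such a chain is described later in this section.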
334 | 335 | #### Are binary backups prepared for restore? 336 | No. Binary backups are done using `xtrabackup --backup`. The `xtrabackup --prepare` step is skipped to save time and space. You will have to do it when you want to restore a database (see below). 337 | 338 | #### How to define a full binary backup once per day and an incremental backup every other hour? 339 | You will have to create two different configuration files and add Arkiv to Crontab twice: once for the full backup (every day at midnight, for example), and once for the incremental backups (every hour except midnight). 340 | 341 | You need both executions to use the same LSN file. It will be written by the full backup, and read and updated by each incremental backup. 342 | 343 | The same process can be used with any other frequency (for example: full backups once a week and incremental backups on the other days). 344 | 345 | #### How to restore a SQL backup? 346 | Arkiv generates one SQL file per database. You have to extract the desired file and load it into your database server: 347 | ```shell 348 | # unxz /path/to/database_sql/database.sql.xz 349 | # mysql -u username -p < /path/to/database_sql/database.sql 350 | ``` 351 | 352 | #### How to restore a full binary backup without subsequent incremental backups? 353 | To restore the database, you first need to extract the data: 354 | ```shell 355 | # tar xJf /path/to/database_data.tar.xz 356 | or 357 | # tar xjf /path/to/database_data.tar.bz2 358 | or 359 | # tar xzf /path/to/database_data.tar.gz 360 | ``` 361 | 362 | Then you must prepare the backup: 363 | ```shell 364 | # xtrabackup --prepare --target-dir=/path/to/database_data 365 | ``` 366 | 367 | Please note that the MySQL server must be shut down, and the 'datadir' directory (usually `/var/lib/mysql`) must be empty. 
On Ubuntu: 368 | ```shell 369 | # service mysql stop 370 | # rm -rf /var/lib/mysql/* 371 | ``` 372 | 373 | Then you can restore the data: 374 | ```shell 375 | # xtrabackup --copy-back --target-dir=/path/to/database_data 376 | ``` 377 | 378 | Files' ownership must be given back to the MySQL user (usually `mysql`): 379 | ```shell 380 | # chown -R mysql:mysql /var/lib/mysql 381 | ``` 382 | 383 | Finally you can restart the MySQL daemon: 384 | ```shell 385 | # service mysql start 386 | ``` 387 | 388 | #### How to restore a full + incremental binary backup? 389 | Let's say you have a full backup (located in `/full/database_data`) and three incremental backups (located in `/inc1/database_data`, `/inc2/database_data` and `/inc3/database_data`), and you have already extracted the backed up files (see previous answer). 390 | 391 | First, you must prepare the full backup with the additional `--apply-log-only` option: 392 | ```shell 393 | # xtrabackup --prepare --apply-log-only --target-dir=/full/database_data 394 | ``` 395 | 396 | Then you apply all incremental backups in their creation order, **except the last one**: 397 | ```shell 398 | # xtrabackup --prepare --apply-log-only --target-dir=/full/database_data --incremental-dir=/inc1/database_data 399 | # xtrabackup --prepare --apply-log-only --target-dir=/full/database_data --incremental-dir=/inc2/database_data 400 | ``` 401 | 402 | Data preparation of the last incremental backup is done without the `--apply-log-only` option: 403 | ```shell 404 | # xtrabackup --prepare --target-dir=/full/database_data --incremental-dir=/inc3/database_data 405 | ``` 406 | 407 | Once all the backups have been merged, the process is the same as for a full backup: 408 | ```shell 409 | # service mysql stop 410 | # rm -rf /var/lib/mysql/* 411 | # xtrabackup --copy-back --target-dir=/full/database_data 412 | # chown -R mysql:mysql /var/lib/mysql 413 | # service mysql start 414 | ``` 415 | 416 | 417 | ### 3.6 Crontab 418 | 419 | 
#### In simple mode (one backup per day, every day at midnight), how to set up Arkiv to be executed at a time other than midnight? 420 | You just have to edit the configuration file of the user's [Cron table](https://en.wikipedia.org/wiki/Cron): 421 | ```shell 422 | # crontab -e 423 | ``` 424 | 425 | #### How to execute pre- and/or post-backup scripts? 426 | See the previous answer. You just have to add these scripts before and/or after the Arkiv program in the Cron table. 427 | 428 | #### Is it possible to back up more often than every hour? 429 | No, it's not possible. 430 | 431 | #### I want to have colors in the Arkiv log file when it's launched from Crontab, as well as when it's launched from the command line 432 | The problem comes from the Crontab environment, which is very minimal. 433 | You have to set the `TERM` environment variable from the Crontab. It is also a good idea to define the `MAILTO` and `PATH` variables. 434 | 435 | Edit the Crontab: 436 | ```shell 437 | # crontab -e 438 | ``` 439 | 440 | And add these three lines at its beginning: 441 | ```shell 442 | TERM=xterm 443 | MAILTO=your.email@domain.com 444 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 445 | ``` 446 | 447 | #### How to receive an email alert when a problem occurs? 448 | Add a `MAILTO` environment variable at the beginning of your Crontab. See the previous answer. 449 | 450 | 451 | ### 3.7 Miscellaneous 452 | 453 | #### How to report bugs? 454 | [Arkiv issues tracker](https://github.com/Digicreon/Arkiv/issues) 455 | 456 | #### Why is Arkiv compatible only with Bash interpreter? 457 | Because the `read` builtin command has a `-s` parameter for silent input (used to read the encryption passphrase and the MySQL password without displaying them), which is unavailable in `dash` (for example). 458 | 459 | #### Arkiv looks like Backup-Manager 460 | Yes indeed. Both of them want to help people back up files and databases, and archive data in a secure place. 
461 | 462 | But Arkiv is different in several ways: 463 | - It can manage hourly backups. 464 | - It can transfer data to Amazon Glacier for long-term archiving. 465 | - It can manage complex purge policies. 466 | - The configuration process is simpler (you simply answer questions). 467 | - Written in pure shell, it doesn't need a Perl interpreter. 468 | 469 | On the other hand, [Backup-Manager](https://github.com/sukria/Backup-Manager) is able to transfer to remote destinations via SCP or FTP, and to burn data onto CD/DVD. 470 | 471 | -------------------------------------------------------------------------------- /arkiv: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # ARKIV 4 | # 5 | # Simple file archiver, designed to back up local files and MySQL databases and archive them on Amazon S3. 6 | # 7 | # Source code and documentation: https://github.com/Amaury/Arkiv 8 | # 9 | # © 2017, Amaury Bouchard 10 | # 11 | # @see Getopts: https://stackoverflow.com/questions/402377/using-getopts-in-bash-shell-script-to-get-long-and-short-command-line-options/7680682#7680682 12 | # @see Encryption: https://superuser.com/questions/724986/how-to-use-password-argument-in-via-command-line-to-openssl-for-decryption 13 | 14 | # ########## GLOBAL VARIABLES ########## 15 | 16 | # Path to the configuration file. 17 | # @see main_usage() 18 | CONFIG_FILE_PATH="~/.arkiv" 19 | 20 | # Path to the log file. 21 | # @see main_usage() 22 | OPT_LOG_PATH="" 23 | 24 | # no-ansi mode 25 | # @see main_usage() 26 | OPT_NOANSI=0 27 | 28 | # no-stdout mode 29 | # @see main_usage() 30 | OPT_NOSTDOUT=0 31 | 32 | # no-stderr mode 33 | # @see main_usage() 34 | OPT_NOSTDERR=0 35 | 36 | # Write to syslog option. 37 | # @see main_usage() 38 | OPT_SYSLOG=0 39 | 40 | 41 | # ########## FUNCTIONS ########## 42 | 43 | # trim() 44 | # Remove spaces at the beginning and at the end of a character string. 45 | # @param string The string to trim. 
46 | trim() { 47 | RESULT=$(echo "$1" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//') 48 | echo "$RESULT" 49 | } 50 | 51 | # filenamize() 52 | # Take a string that contains a file path, and return a string suitable as a file name. 53 | # Replace slashes and spaces by dashes. 54 | # @param string The string to modify. 55 | filenamize() { 56 | RESULT=$(echo "$1" | sed 's/[[:space:]]\+/-/g' | sed 's/\//-/g' | sed -e 's/^-*//' -e 's/-*$//' | sed 's/-\+/-/g') 57 | echo "$RESULT" 58 | } 59 | 60 | # ansi() 61 | # Write ANSI-compatible statements. 62 | # @param string Command: 63 | # - reset: Remove all decoration. 64 | # - bold: Write text in bold. 65 | # - dim: Write faint text. 66 | # - rev: Write text in reverse video. Can take another parameter with the background color. 67 | # - under: Write underlined text. 68 | # - black, red, green, yellow, blue, magenta, cyan, white: Change the text color. 69 | ansi() { 70 | if [ "$TERM" = "" ] || [ "$OPT_NOANSI" = "1" ]; then 71 | return 72 | fi 73 | if ! type tput > /dev/null 2>&1; then 74 | return 75 | fi 76 | case "$1" in 77 | "reset") tput sgr0 78 | ;; 79 | "bold") tput bold 80 | ;; 81 | "dim") tput dim 82 | ;; 83 | "rev") 84 | case "$2" in 85 | "black") tput setab 0 86 | ;; 87 | "red") tput setab 1 88 | ;; 89 | "green") tput setab 2 90 | ;; 91 | "yellow") tput setab 3 92 | ;; 93 | "blue") tput setab 4 94 | ;; 95 | "magenta") tput setab 5 96 | ;; 97 | "cyan") tput setab 6 98 | ;; 99 | "white") tput setab 7 100 | ;; 101 | *) tput rev 102 | esac 103 | ;; 104 | "under") tput smul 105 | ;; 106 | "black") tput setaf 0 107 | ;; 108 | "red") tput setaf 1 109 | ;; 110 | "green") tput setaf 2 111 | ;; 112 | "yellow") tput setaf 3 113 | ;; 114 | "blue") tput setaf 4 115 | ;; 116 | "magenta") tput setaf 5 117 | ;; 118 | "cyan") tput setaf 6 119 | ;; 120 | "white") tput setaf 7 121 | ;; 122 | esac 123 | } 124 | 125 | # log() 126 | # Write a character string with the current date before it to stdout and/or to the log file. 
127 | # @param string The text to write. 128 | log() { 129 | if [ "$1" = "-n" ]; then 130 | STR="$2" 131 | else 132 | STR="$(ansi dim)[$(date +"%Y-%m-%d %H:%M:%S%:z")]$(ansi reset) $1" 133 | fi 134 | # write to stdout unless it is disabled 135 | if [ "$OPT_NOSTDOUT" != "1" ]; then 136 | echo "$STR" 137 | fi 138 | # write to log file 139 | if [ "$OPT_LOG_PATH" != "" ]; then 140 | echo "$STR" >> "$(eval realpath "$OPT_LOG_PATH")" 141 | fi 142 | # write to syslog 143 | if [ "$OPT_SYSLOG" = "1" ]; then 144 | logger --tag arkiv --priority user.warning "$STR" 145 | fi 146 | } 147 | 148 | # err() 149 | # Write a character string with the current date before it to stderr or to the log file. 150 | # @param string The text to write. 151 | err() { 152 | CURDATE=$(date +"%Y-%m-%d %H:%M:%S%:z") 153 | STR="$(ansi dim)[$CURDATE]$(ansi reset) $1" 154 | # write to stderr unless it is disabled 155 | if [ "$OPT_NOSTDERR" != "1" ]; then 156 | (>&2 echo "$STR") 157 | fi 158 | # write to log file 159 | if [ "$OPT_LOG_PATH" != "" ]; then 160 | echo "$STR" >> "$(eval realpath "$OPT_LOG_PATH")" 161 | fi 162 | # write to syslog 163 | if [ "$OPT_SYSLOG" = "1" ]; then 164 | logger --tag arkiv --priority user.warning "$STR" 165 | fi 166 | } 167 | 168 | # main_usage() 169 | # Show help and exit. 170 | main_usage() { 171 | echo 172 | echo "$(ansi bold)NAME$(ansi reset)" 173 | echo "	$(ansi bold)Arkiv$(ansi reset) − Backup and archiving tool" 174 | echo 175 | echo "$(ansi bold)SYNOPSIS$(ansi reset)" 176 | echo "	$(ansi bold)arkiv$(ansi reset) help$(ansi dim)|$(ansi reset)config$(ansi dim)|$(ansi reset)exec [-c|--config$(ansi dim)=/path/to/config/file$(ansi reset)] [-l|--log$(ansi dim)=/path/to/log/file$(ansi reset)] [-o|--no-stdout] [-e|--no-stderr] [-n|--no-ansi] [-s|--syslog]" 177 | echo 178 | echo "$(ansi bold)DESCRIPTION$(ansi reset)" 179 | echo "	$(ansi bold)Arkiv$(ansi reset) is an easy-to-use tool which provides file and MySQL database backup, and archiving to Amazon S3 and Amazon Glacier." 
180 | echo 181 | echo "$(ansi bold)MAIN USAGE$(ansi reset)" 182 | echo "	$(ansi bold)help$(ansi reset)" 183 | echo "		Display this help." 184 | echo "	$(ansi bold)config$(ansi reset)" 185 | echo "		Create Arkiv's configuration file." 186 | echo "	$(ansi bold)exec$(ansi reset)" 187 | echo "		Backup files and databases, archive them and purge old files." 188 | echo 189 | echo "$(ansi bold)PARAMETERS$(ansi reset)" 190 | echo "	$(ansi bold)-c$(ansi reset) $(ansi dim)/path/to/config/file$(ansi reset)" 191 | echo "	$(ansi bold)--config$(ansi reset)$(ansi dim)=/path/to/config/file$(ansi reset)" 192 | echo "		Path to the configuration file to use instead of '~/.arkiv'." 193 | echo 194 | echo "	$(ansi bold)-l$(ansi reset) $(ansi dim)/path/to/log/file$(ansi reset)" 195 | echo "	$(ansi bold)--log$(ansi reset)$(ansi dim)=/path/to/log/file$(ansi reset)" 196 | echo "		Path to the log file to use." 197 | echo 198 | echo "	$(ansi bold)-o, --no-stdout$(ansi reset)" 199 | echo "		Don't write informational log messages to STDOUT. If the log file is defined (option --log), messages will be written there anyway." 200 | echo 201 | echo "	$(ansi bold)-e, --no-stderr$(ansi reset)" 202 | echo "		Don't write error log messages to STDERR. If the log file is defined (option --log), messages will be written there anyway." 203 | echo 204 | echo "	$(ansi bold)-n, --no-ansi$(ansi reset)" 205 | echo "		Don't use ANSI commands to output decorated text (colors, bold, reverse video)." 206 | echo 207 | echo "	$(ansi bold)-s, --syslog$(ansi reset)" 208 | echo "		Write logs also to syslog." 209 | echo 210 | echo "$(ansi bold)SEE ALSO$(ansi reset)" 211 | echo "	More documentation available on GitHub: $(ansi under)https://github.com/Amaury/Arkiv$(ansi reset)" 212 | echo 213 | exit 0 214 | } 215 | 216 | # _create_config() 217 | # Create the content of the configuration file. 218 | # Uses the global variables defined elsewhere ($CONF_LOCAL_HOSTNAME, $CONF_BACKUP_PATH, etc.). 
# @param	string	(optional) Visibility. Replace the MySQL password by an informative message if this parameter is set to "hide".
_create_config() {
	RESULT="CONF_LOCAL_HOSTNAME=\"$CONF_LOCAL_HOSTNAME\"
CONF_Z_TYPE=\"$CONF_Z_TYPE\""
	if [ "$CONF_ENCRYPT" = "key" ] || [ "$CONF_ENCRYPT" = "pass" ]; then
		RESULT="$RESULT
CONF_ENCRYPT=\"$CONF_ENCRYPT\""
		if [ "$CONF_ENCRYPT" = "key" ]; then
			RESULT="$RESULT
CONF_ENCRYPT_KEY=\"$CONF_ENCRYPT_KEY\""
		else
			RESULT="$RESULT
CONF_ENCRYPT_PASS=\"$CONF_ENCRYPT_PASS\""
		fi
	fi
	RESULT="$RESULT
CONF_BACKUP_PATH=\"$CONF_BACKUP_PATH\"
CONF_SRC=\"$CONF_SRC\"
CONF_MYSQL=\"$CONF_MYSQL\""
	if [ "$CONF_MYSQL" != "no" ]; then
		RESULT="$RESULT
CONF_MYSQL_HOST=\"$CONF_MYSQL_HOST\"
CONF_MYSQL_USER=\"$CONF_MYSQL_USER\""
		if [ "$1" = "hide" ]; then
			RESULT="$RESULT
CONF_MYSQL_PWD=___HIDDEN_PASSWORD___"
		else
			RESULT="$RESULT
CONF_MYSQL_PWD=\"$CONF_MYSQL_PWD\""
		fi
		if [ "$CONF_MYSQL" = "mysqldump" ]; then
			RESULT="$RESULT
CONF_MYSQL_ALL_DATABASES=\"$CONF_MYSQL_ALL_DATABASES\""
			if [ "$CONF_MYSQL_ALL_DATABASES" = "no" ]; then
				RESULT="$RESULT
CONF_MYSQL_DATABASES=\"$CONF_MYSQL_DATABASES\""
			fi
		fi
		if [ "$CONF_MYSQL" = "xtrabackup" ]; then
			RESULT="$RESULT
CONF_XTRABACKUP_TYPE=\"$CONF_XTRABACKUP_TYPE\"
CONF_XTRABACKUP_LSN_FILE=\"$CONF_XTRABACKUP_LSN_FILE\""
		fi
	fi
	RESULT="$RESULT
CONF_AWS_S3=\"$CONF_AWS_S3\""
	if [ "$CONF_AWS_S3" = "yes" ]; then
		RESULT="$RESULT
CONF_S3_BUCKET=\"$CONF_S3_BUCKET\""
	fi
	RESULT="$RESULT
CONF_AWS_GLACIER=\"$CONF_AWS_GLACIER\""
	if [ "$CONF_AWS_GLACIER" = "yes" ]; then
		RESULT="$RESULT
CONF_GLACIER_VAULT=\"$CONF_GLACIER_VAULT\""
	fi
	RESULT="$RESULT
CONF_LOCAL_PURGE_DELAY=\"$CONF_LOCAL_PURGE_DELAY\""
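	# Illustrative note (values below are examples, not defaults): the generated
	# configuration file contains plain shell assignments, such as
	#   CONF_LOCAL_HOSTNAME="web01"
	#   CONF_Z_TYPE="zstd"
	#   CONF_BACKUP_PATH="/var/archives"
	#   CONF_SRC="/etc /home"
	#   CONF_MYSQL="no"
	#   CONF_AWS_S3="no"
	#   CONF_AWS_GLACIER="no"
	#   CONF_LOCAL_PURGE_DELAY="3 days"
	# It is read back by sourcing it with the '.' builtin (see main_exec).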
	if [ "$CONF_AWS_S3" = "yes" ]; then
		RESULT="$RESULT
CONF_S3_PURGE_DELAY=\"$CONF_S3_PURGE_DELAY\""
	fi
	echo "$RESULT"
}

# main_config()
# Manage configuration.
main_config() {
	# splashscreen
	echo
	echo " $(ansi rev) $(ansi reset)"
	echo " $(ansi rev) $(ansi reset)$(ansi rev blue) $(ansi reset)$(ansi rev) $(ansi reset)"
	echo " $(ansi rev) $(ansi reset)$(ansi rev blue) $(ansi white)$(ansi bold)Arkiv Configuration$(ansi reset)$(ansi rev blue) $(ansi reset)$(ansi rev) $(ansi reset)"
	echo " $(ansi rev) $(ansi reset)$(ansi rev blue) $(ansi reset)$(ansi rev) $(ansi reset)"
	echo " $(ansi rev) $(ansi reset)"
	echo
	# BASIC
	echo "$(ansi rev yellow) BASIC $(ansi reset)"
	# simple mode
	read -p " $(ansi yellow)Do you want to set up Arkiv's simple mode? (one backup per day, every day) [Y/n]$(ansi reset) " ANSWER
	SIMPLE_MODE=$(trim "$ANSWER")
	if [ "$SIMPLE_MODE" = "" ] || [ "$SIMPLE_MODE" = "y" ] || [ "$SIMPLE_MODE" = "Y" ]; then
		SIMPLE_MODE="yes"
	elif [ "$SIMPLE_MODE" = "n" ] || [ "$SIMPLE_MODE" = "N" ]; then
		SIMPLE_MODE="no"
	else
		echo " $(ansi red)⛔ Bad value. ABORT$(ansi reset)"
		exit 1
	fi
	# local host
	HOST=$(hostname)
	if [ "$HOST" = "" ]; then
		HOST="localhost"
	fi
	read -p " $(ansi yellow)Local host name? [$HOST]$(ansi reset) " ANSWER
	CONF_LOCAL_HOSTNAME=$(trim "$ANSWER")
	if [ "$CONF_LOCAL_HOSTNAME" = "" ]; then
		CONF_LOCAL_HOSTNAME="$HOST"
	fi
	# compression type
	read -p " $(ansi yellow)Compression type? (gzip|bzip2|xz|zstd) [zstd]$(ansi reset) " ANSWER
	CONF_Z_TYPE=$(trim "$ANSWER")
	if [ "$CONF_Z_TYPE" != "gzip" ] && [ "$CONF_Z_TYPE" != "bzip2" ] && [ "$CONF_Z_TYPE" != "xz" ]; then
		if [ "$CONF_Z_TYPE" != "" ] && [ "$CONF_Z_TYPE" != "zstd" ]; then
			echo " $(ansi red)⛔ Bad value. ABORT$(ansi reset)"
			exit 1
		fi
		CONF_Z_TYPE="zstd"
	fi
	# encryption
	read -p " $(ansi yellow)Do you want to encrypt data? [y/N]$(ansi reset) " ANSWER
	if [ "$ANSWER" = "y" ] || [ "$ANSWER" = "Y" ]; then
		# check openssl
		if ! type openssl > /dev/null; then
			echo " $(ansi red)⛔ The 'openssl' program is not installed. ABORT$(ansi reset)"
			exit 1
		fi
		read -p " $(ansi yellow)Do you want to use an encryption key? (otherwise you'll have to give a passphrase) [Y/n]$(ansi reset) " ANSWER
		if [ "$ANSWER" = "" ] || [ "$ANSWER" = "y" ] || [ "$ANSWER" = "Y" ]; then
			CONF_ENCRYPT="key"
			echo " $(ansi dim)Read the documentation to know how to generate an encryption key.$(ansi reset)"
			read -p " $(ansi yellow)Path to your encryption key? [~/.ssh/symkey.bin]$(ansi reset) " ANSWER
			CONF_ENCRYPT_KEY=$(trim "$ANSWER")
			if [ "$CONF_ENCRYPT_KEY" = "" ]; then
				CONF_ENCRYPT_KEY="~/.ssh/symkey.bin"
			fi
			CONF_ENCRYPT_KEY="$(eval realpath "$CONF_ENCRYPT_KEY")"
			if [ ! -r "$CONF_ENCRYPT_KEY" ]; then
				echo " $(ansi red)⛔ Can't find file '$(ansi reset)$CONF_ENCRYPT_KEY$(ansi red)'. ABORT$(ansi reset)"
				exit 1
			fi
		elif [ "$ANSWER" = "n" ] || [ "$ANSWER" = "N" ]; then
			CONF_ENCRYPT="pass"
			read -s -p " $(ansi yellow)Encryption passphrase?$(ansi reset) " ANSWER
			CONF_ENCRYPT_PASS=$(trim "$ANSWER")
			if [ "$CONF_ENCRYPT_PASS" = "" ]; then
				echo " $(ansi red)⛔ Empty passphrase. ABORT$(ansi reset)"
				exit 1
			fi
		else
			echo " $(ansi red)⛔ Bad value. ABORT$(ansi reset)"
			exit 1
		fi
	fi
	# BACKUP
	echo
	echo "$(ansi rev blue) BACKUP $(ansi reset)"
	# paths to backup
	read -p " $(ansi yellow)Paths (directories and/or files) to backup? (separated with spaces) [/etc /home]$(ansi reset) " ANSWER
	CONF_SRC=$(trim "$ANSWER")
	if [ "$CONF_SRC" = "" ]; then
		CONF_SRC="/etc /home"
	fi
	# check paths
	for SRC in $CONF_SRC; do
		if [ "${SRC:0:1}" != "/" ]; then
			echo " $(ansi red)⛔ All paths must begin with a '$(ansi reset)/$(ansi red)'. ABORT$(ansi reset)"
			exit 1
		fi
	done
	# MySQL
	CONF_MYSQL="no"
	read -p " $(ansi yellow)Backup MySQL databases? [Y/n]$(ansi reset) " ANSWER
	ANSWER=$(trim "$ANSWER")
	if [ "$ANSWER" = "" ] || [ "$ANSWER" = "y" ] || [ "$ANSWER" = "Y" ]; then
		read -p " $(ansi yellow)SQL dumps or binary backup? [S/b]$(ansi reset) " ANSWER
		ANSWER=$(trim "$ANSWER")
		if [ "$ANSWER" = "" ] || [ "$ANSWER" = "s" ] || [ "$ANSWER" = "S" ]; then
			if ! type mysqldump > /dev/null; then
				echo " $(ansi red)⛔ The '$(ansi reset)mysqldump$(ansi red)' program is not installed. ABORT$(ansi reset)"
				exit 1
			fi
			CONF_MYSQL="mysqldump"
		elif [ "$ANSWER" = "b" ] || [ "$ANSWER" = "B" ]; then
			if ! type xtrabackup > /dev/null; then
				echo " $(ansi red)⛔ The '$(ansi reset)xtrabackup$(ansi red)' program is not installed. ABORT$(ansi reset)"
				exit 1
			fi
			CONF_MYSQL="xtrabackup"
		else
			echo " $(ansi red)⛔ Bad value. ABORT$(ansi reset)"
			exit 1
		fi
		# MySQL hostname
		read -p " $(ansi yellow)MySQL hostname? [localhost]$(ansi reset) " ANSWER
		CONF_MYSQL_HOST=$(trim "$ANSWER")
		if [ "$CONF_MYSQL_HOST" = "" ]; then
			CONF_MYSQL_HOST="localhost"
		fi
		# MySQL user
		read -p " $(ansi yellow)MySQL user? [root]$(ansi reset) " ANSWER
		CONF_MYSQL_USER=$(trim "$ANSWER")
		if [ "$CONF_MYSQL_USER" = "" ]; then
			CONF_MYSQL_USER="root"
		fi
		# MySQL password
		read -s -p " $(ansi yellow)MySQL password?$(ansi reset) " ANSWER
		CONF_MYSQL_PWD=$(trim "$ANSWER")
		if [ "$CONF_MYSQL_PWD" = "" ]; then
			echo " $(ansi red)⛔ Empty password. ABORT$(ansi reset)"
			exit 1
		fi
		echo
		# mysqldump specific questions
		if [ "$CONF_MYSQL" = "mysqldump" ]; then
			# MySQL databases
			read -p " $(ansi yellow)Do you want to backup all databases readable by this user? (otherwise you will have to give a list of database names) [Y/n]$(ansi reset) " ANSWER
			ANSWER=$(trim "$ANSWER")
			CONF_MYSQL_ALL_DATABASES="yes"
			if [ "$ANSWER" = "n" ] || [ "$ANSWER" = "N" ]; then
				CONF_MYSQL_ALL_DATABASES="no"
				read -p " $(ansi yellow)List of databases? (separated with space characters)$(ansi reset) " ANSWER
				CONF_MYSQL_DATABASES=$(trim "$ANSWER")
				if [ "$CONF_MYSQL_DATABASES" = "" ]; then
					echo " $(ansi red)⛔ No database to backup. ABORT$(ansi reset)"
					exit 1
				fi
			fi
		fi
		# xtrabackup specific questions
		if [ "$CONF_MYSQL" = "xtrabackup" ]; then
			CONF_XTRABACKUP_TYPE="full"
			CONF_XTRABACKUP_LSN_FILE="$CONFIG_FILE_PATH.lsn"
			MSG="Path to the LSN file that will be written?"
			# full or incremental
			read -p " $(ansi yellow)Full or incremental backup? [F/i]$(ansi reset) " ANSWER
			ANSWER=$(trim "$ANSWER")
			if [ "$ANSWER" = "i" ] || [ "$ANSWER" = "I" ]; then
				CONF_XTRABACKUP_TYPE="incremental"
				MSG="Path to the LSN file that will be read and updated?"
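				# Clarifying note: for incremental backups, xtrabackup needs the
				# LSN (log sequence number) reached by the previous run. Arkiv
				# reads it from this file before the backup and rewrites it
				# afterwards (see main_exec), so the file must persist between runs.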
			fi
			# path to the LSN file
			read -p " $(ansi yellow)$MSG (needed for subsequent incremental backups) [$CONF_XTRABACKUP_LSN_FILE]$(ansi reset) " ANSWER
			ANSWER=$(trim "$ANSWER")
			if [ "$ANSWER" != "" ]; then
				CONF_XTRABACKUP_LSN_FILE="$ANSWER"
			fi
		fi
	elif [ "$ANSWER" != "n" ] && [ "$ANSWER" != "N" ]; then
		echo " $(ansi red)⛔ Bad value. ABORT$(ansi reset)"
		exit 1
	fi
	# ARCHIVE
	echo
	echo "$(ansi rev magenta) ARCHIVE $(ansi reset)"
	# local archive path
	read -p " $(ansi yellow)Path to local archives? [/var/archives]$(ansi reset) " ANSWER
	CONF_BACKUP_PATH=$(trim "$ANSWER")
	if [ "$CONF_BACKUP_PATH" = "" ]; then
		CONF_BACKUP_PATH="/var/archives"
	fi
	CONF_BACKUP_PATH="$(eval realpath "$CONF_BACKUP_PATH")"
	if [ ! -d "$CONF_BACKUP_PATH" ]; then
		read -p " $(ansi yellow)Directory '$(ansi reset)$CONF_BACKUP_PATH$(ansi yellow)' doesn't exist. Create it? [Y/n]$(ansi reset) " ANSWER
		if [ "$ANSWER" = "n" ] || [ "$ANSWER" = "N" ]; then
			echo " $(ansi red)⛔ ABORT$(ansi reset)"
			exit 1
		fi
		if ! mkdir -p "$CONF_BACKUP_PATH"; then
			echo " $(ansi red)⛔ Unable to create directory '$(ansi reset)$CONF_BACKUP_PATH$(ansi red)'. ABORT$(ansi reset)"
			exit 1
		fi
		chmod 700 "$CONF_BACKUP_PATH"
		if [ $? -ne 0 ]; then
			echo " $(ansi red)⛔ Unable to change permissions of directory '$(ansi reset)$CONF_BACKUP_PATH$(ansi red)'. ABORT$(ansi reset)"
			exit 1
		fi
	elif [ ! -w "$CONF_BACKUP_PATH" ]; then
		echo " $(ansi red)⛔ Directory '$(ansi reset)$CONF_BACKUP_PATH$(ansi red)' exists but is not writable. ABORT$(ansi reset)"
		exit 1
	fi
	# Amazon S3 + Glacier
	CONF_AWS_S3="no"
	CONF_AWS_GLACIER="no"
	read -p " $(ansi yellow)Archive to Amazon S3? [Y/n]$(ansi reset) " ANSWER
	if [ "$ANSWER" != "n" ] && [ "$ANSWER" != "N" ]; then
		CONF_AWS_S3="yes"
		# check aws-cli
		if ! type aws > /dev/null; then
			echo " $(ansi red)⛔ The 'AWS-CLI' program is not installed. ABORT$(ansi reset)"
			exit 1
		fi
		# Amazon S3
		read -p " $(ansi yellow)S3 Bucket name?$(ansi reset) " ANSWER
		CONF_S3_BUCKET=$(trim "$ANSWER")
		if [ "$CONF_S3_BUCKET" = "" ]; then
			echo " $(ansi red)⛔ Empty bucket name. ABORT$(ansi reset)"
			exit 1
		fi
		# Amazon Glacier
		read -p " $(ansi yellow)Archive to Amazon Glacier? [Y/n]$(ansi reset) " ANSWER
		if [ "$ANSWER" != "n" ] && [ "$ANSWER" != "N" ]; then
			CONF_AWS_GLACIER="yes"
			read -p " $(ansi yellow)Glacier Vault name?$(ansi reset) " ANSWER
			CONF_GLACIER_VAULT=$(trim "$ANSWER")
			if [ "$CONF_GLACIER_VAULT" = "" ]; then
				echo " $(ansi red)⛔ Empty vault name. ABORT$(ansi reset)"
				exit 1
			fi
		fi
	fi
	# PURGE
	echo
	echo "$(ansi rev red) PURGE $(ansi reset)"
	# local purge
	if [ "$SIMPLE_MODE" = "yes" ]; then
		# daily backups
		read -p " $(ansi yellow)Delay for local purge?$(ansi reset) $(ansi dim)(examples: \"$(ansi reset)3 days$(ansi dim)\" \"$(ansi reset)2 weeks$(ansi dim)\" \"$(ansi reset)2 months$(ansi dim)\")$(ansi reset) " ANSWER
	else
		# hourly backups
		echo " $(ansi yellow)Delay for local purge$(ansi reset)"
		echo " $(ansi dim)You can give as many delays as you want, separated by semicolons.$(ansi reset)"
		echo " $(ansi dim)Each delay can be written as:$(ansi reset)"
		echo " $(ansi dim) - A number followed by a duration. Examples: \"3 days\", \"2 weeks\", \"2 months\"$(ansi reset)"
		echo " $(ansi dim) - Same as above, followed by a colon and the hours to purge (separated by commas). Example: \"2 days:02,04,06,08,10;4 days:01,03,05\"$(ansi reset)"
		echo " $(ansi dim)$(ansi bold)Advice:$(ansi reset)$(ansi dim) Even if you purge at different hours, be sure to purge everything after a given delay.$(ansi reset)"
		echo " $(ansi dim) Like \"2 days:01,03,05,07,09,11,13,15,17,19,21,23;1 week:02,04,06,10,12,14,18,20,22;2 weeks:08,16;$(ansi under)1 month$(ansi reset)$(ansi dim)\"$(ansi reset)"
		read ANSWER
	fi
	CONF_LOCAL_PURGE_DELAY=$(trim "$ANSWER")
	if [ "$CONF_LOCAL_PURGE_DELAY" = "" ]; then
		read -p " $(ansi red)⚠ Are you sure you want to never purge any backup file? [y/N]$(ansi reset) " ANSWER
		if [ "$ANSWER" != "y" ] && [ "$ANSWER" != "Y" ]; then
			echo " $(ansi red)⛔ Empty purge delay. ABORT$(ansi reset)"
			exit 1
		fi
	fi
	# S3 purge
	if [ "$CONF_AWS_S3" = "yes" ]; then
		if [ "$SIMPLE_MODE" = "yes" ]; then
			# daily backups
			read -p " $(ansi yellow)Delay for Amazon S3 purge?$(ansi reset) $(ansi dim)(examples: \"$(ansi reset)3 days$(ansi dim)\" \"$(ansi reset)2 weeks$(ansi dim)\" \"$(ansi reset)2 months$(ansi dim)\")$(ansi reset) " ANSWER
		else
			# hourly backups
			echo " $(ansi yellow)Delay for Amazon S3 purge$(ansi reset)"
			echo " $(ansi dim)You can give as many delays as you want, separated by semicolons.$(ansi reset)"
			echo " $(ansi dim)Each delay can be written as:$(ansi reset)"
			echo " $(ansi dim) - A number followed by a duration. Examples: \"3 days\", \"2 weeks\", \"2 months\"$(ansi reset)"
			echo " $(ansi dim) - Same as above, followed by a colon and the hours to purge (separated by commas). Example: \"2 days:02,04,06,08,10;4 days:01,03,05\"$(ansi reset)"
			echo " $(ansi dim)$(ansi bold)Advice:$(ansi reset)$(ansi dim) Even if you purge at different hours, be sure to purge everything after a given delay.$(ansi reset)"
			echo " $(ansi dim) Like \"1 week:01,02,03,05,06,07,09,10,11,13,14,15,17,18,19,21,22,23;2 weeks:00,08,16;1 month:12,20;$(ansi under)2 months$(ansi reset)$(ansi dim)\"$(ansi reset)"
			read ANSWER
		fi
		CONF_S3_PURGE_DELAY=$(trim "$ANSWER")
		if [ "$CONF_S3_PURGE_DELAY" = "" ]; then
			read -p " $(ansi red)⚠ Are you sure you want to never purge any archived file? [y/N]$(ansi reset) " ANSWER
			if [ "$ANSWER" != "y" ] && [ "$ANSWER" != "Y" ]; then
				echo " $(ansi red)⛔ Empty purge delay. ABORT$(ansi reset)"
				exit 1
			fi
		fi
	fi
	# GENERATED CONFIG
	echo
	echo "$(ansi rev green) CONFIG CHECK $(ansi reset)"
	# create result
	RESULT="$(_create_config hide)"
	# display result
	echo
	echo " $(ansi under)HERE IS THE CONTENT THAT WILL BE WRITTEN$(ansi reset)"
	echo "$(ansi dim)$RESULT$(ansi reset)"
	echo
	# write content
	read -p " $(ansi yellow)Ready to erase file '$(ansi reset)$CONFIG_FILE_PATH$(ansi yellow)' and rebuild it? [y/N]$(ansi reset) " ANSWER
	ANSWER=$(trim "$ANSWER")
	if [ "$ANSWER" != "y" ] && [ "$ANSWER" != "Y" ] && [ "$ANSWER" != "yes" ] && [ "$ANSWER" != "YES" ] && [ "$ANSWER" != "Yes" ]; then
		echo
		echo " $(ansi yellow)⚠️ Warning. You will lose the configuration you are editing.$(ansi reset)"
		read -p " Do you really want to $(ansi red)abort$(ansi reset)? [Y/n] " ANSWER
		if [ "$ANSWER" != "n" ] && [ "$ANSWER" != "N" ] && [ "$ANSWER" != "no" ] && [ "$ANSWER" != "NO" ] && [ "$ANSWER" != "No" ]; then
			echo " $(ansi red)⛔ ABORT$(ansi reset)"
			exit 1
		fi
	fi
	RESULT="$(_create_config)"
	echo "$RESULT" > "$(eval realpath "$CONFIG_FILE_PATH")"
	if [ $? -ne 0 ]; then
		echo " $(ansi red)⛔ Unable to create the file '$(ansi reset)$CONFIG_FILE_PATH$(ansi red)'. ABORT$(ansi reset)"
		exit 1
	fi
	chmod 600 "$(eval realpath "$CONFIG_FILE_PATH")"
	if [ $? -ne 0 ]; then
		echo " $(ansi yellow)⚠ Unable to set access rights of the file '$(ansi reset)$CONFIG_FILE_PATH$(ansi yellow)'.$(ansi reset)"
	fi
	echo " $(ansi green)✓ The configuration file '$(ansi reset)$CONFIG_FILE_PATH$(ansi green)' was successfully created.$(ansi reset)"
	# CRONTAB
	echo
	echo "$(ansi rev cyan) CRONTAB $(ansi reset)"
	IN_CRONTAB=$(crontab -l 2>/dev/null | grep arkiv | wc -l)
	if [ "$IN_CRONTAB" != "0" ]; then
		read -p " $(ansi yellow)Arkiv is already defined in Crontab. Do you want to remove it and define it again? [Y/n]$(ansi reset) " ANSWER
	else
		read -p " $(ansi yellow)Do you want to add execution in Crontab? [Y/n]$(ansi reset) " ANSWER
	fi
	ANSWER=$(trim "$ANSWER")
	if [ "$ANSWER" != "" ] && [ "$ANSWER" != "y" ] && [ "$ANSWER" != "Y" ] && [ "$ANSWER" != "n" ] && [ "$ANSWER" != "N" ]; then
		read -p " $(ansi yellow)⚠️ Bad value. Please try again [Y/n]$(ansi reset) " ANSWER
		if [ "$ANSWER" != "" ] && [ "$ANSWER" != "y" ] && [ "$ANSWER" != "Y" ] && [ "$ANSWER" != "n" ] && [ "$ANSWER" != "N" ]; then
			echo " $(ansi red)⛔ Bad value. ABORT$(ansi reset)"
			exit 1
		fi
	fi
	if [ "$ANSWER" != "n" ] && [ "$ANSWER" != "N" ]; then
		if [ "$SIMPLE_MODE" = "yes" ]; then
			CRON_DAYS="*"
			CRON_HOURS="0"
		else
			# execution days
			echo " $(ansi yellow)What days of the month should Arkiv be launched? [*]$(ansi reset)"
			echo " $(ansi dim)Give dates in Crontab format, like \"1,11,21,31\" or \"1-5,11-15,20-25\" or \"*/2\".$(ansi reset)"
			echo " $(ansi dim)Use \"*\" (default value) to launch Arkiv every day.$(ansi reset)"
			read ANSWER
			if [ "$ANSWER" = "*" ]; then
				CRON_DAYS="*"
			else
				CRON_DAYS=$(trim "$ANSWER")
			fi
			if [ "$CRON_DAYS" = "" ]; then
				CRON_DAYS="*"
			fi
			# execution hours
			echo " $(ansi yellow)What hours of the day should Arkiv be launched? [*]$(ansi reset)"
			echo " $(ansi dim)Give hours in Crontab format, like \"0,4,8,12,16,20\" or \"*/4\" or \"0-4,12-16\".$(ansi reset)"
			echo " $(ansi dim)Use \"*\" (default value) to launch Arkiv every hour.$(ansi reset)"
			read ANSWER
			if [ "$ANSWER" = "*" ]; then
				CRON_HOURS="*"
			else
				CRON_HOURS=$(trim "$ANSWER")
			fi
			if [ "$CRON_HOURS" = "" ]; then
				CRON_HOURS="*"
			fi
		fi
		# exec path
		CRON_EXEC_PATH="$PWD"
		read -p " $(ansi yellow)Path to the $(ansi reset)Arkiv$(ansi yellow) executable program? [$CRON_EXEC_PATH]$(ansi reset) " ANSWER
		ANSWER=$(trim "$ANSWER")
		if [ "$ANSWER" != "" ]; then
			CRON_EXEC_PATH="$ANSWER"
		fi
		if [ ! -x "$CRON_EXEC_PATH/arkiv" ]; then
			echo " $(ansi red)⛔ Unable to find the file '$(ansi reset)$CRON_EXEC_PATH/arkiv$(ansi red)'. ABORT$(ansi reset)"
			exit 1
		fi
		# log path
		if [ "$OPT_LOG_PATH" != "" ]; then
			CRON_LOG_PATH="$OPT_LOG_PATH"
		else
			read -p " $(ansi yellow)Path to log file? [/var/log/arkiv/arkiv.log]$(ansi reset) " ANSWER
			CRON_LOG_PATH=$(trim "$ANSWER")
			if [ "$CRON_LOG_PATH" = "" ]; then
				CRON_LOG_PATH="/var/log/arkiv/arkiv.log"
			fi
		fi
		CRON_LOG_DIR=$(dirname "$CRON_LOG_PATH")
		if [ ! -d "$CRON_LOG_DIR" ]; then
			read -p " $(ansi yellow)Directory '$(ansi reset)$CRON_LOG_DIR$(ansi yellow)' doesn't exist. Create it? [Y/n]$(ansi reset) " ANSWER
			if [ "$ANSWER" = "n" ] || [ "$ANSWER" = "N" ]; then
				echo " $(ansi red)⛔ ABORT$(ansi reset)"
				exit 1
			fi
			if ! mkdir -p "$CRON_LOG_DIR"; then
				echo " $(ansi red)⛔ Unable to create directory '$(ansi reset)$CRON_LOG_DIR$(ansi red)'. ABORT$(ansi reset)"
				exit 1
			fi
			chmod 770 "$CRON_LOG_DIR"
		elif [ ! -w "$CRON_LOG_DIR" ]; then
			echo " $(ansi red)⛔ Directory '$(ansi reset)$CRON_LOG_DIR$(ansi red)' exists but is not writable. ABORT$(ansi reset)"
			exit 1
		fi
		# check
		echo
		echo " $(ansi under)HERE IS THE CRONTAB THAT IS ABOUT TO BE ADDED$(ansi reset)"
		echo "$(ansi dim)# ARKIV backup$(ansi reset)"
		echo "$(ansi dim)0 $CRON_HOURS $CRON_DAYS * * $CRON_EXEC_PATH/arkiv exec --config=$CONFIG_FILE_PATH --log=$CRON_LOG_PATH --no-stdout$(ansi reset)"
		echo
		# add to crontab
		read -p " $(ansi yellow)Is it OK for you? [Y/n]$(ansi reset) " ANSWER
		ANSWER=$(trim "$ANSWER")
		if [ "$ANSWER" = "n" ] || [ "$ANSWER" = "N" ]; then
			echo " $(ansi yellow)⚠ No Crontab installation.$(ansi reset)"
		elif [ "$ANSWER" = "" ] || [ "$ANSWER" = "y" ] || [ "$ANSWER" = "Y" ]; then
			(crontab -l 2>/dev/null | grep -v -e "ARKIV" -e "arkiv"; echo; echo "# ARKIV backup"; echo "0 $CRON_HOURS $CRON_DAYS * * $CRON_EXEC_PATH/arkiv exec --config=$CONFIG_FILE_PATH --log=$CRON_LOG_PATH --no-stdout") | crontab -
			echo " $(ansi green)✓ Arkiv was successfully added to the Crontab.$(ansi reset)"
		else
			echo " $(ansi red)⛔ Bad value. ABORT$(ansi reset)"
			exit 1
		fi
	fi
	# END
	echo
	echo "$(ansi rev) END OF CONFIGURATION $(ansi reset)"
}

# main_exec()
# Backup files, archive and purge.
main_exec() {
	# get current hour (used during the purge process)
	CURRENT_HOUR=$(date +"%H")
	# log
	log -n "$(ansi rev)------------------------------------------------------------$(ansi reset)"
	# configuration
	log "$(ansi bold)Start$(ansi reset)"
	log "├ $(ansi dim)Read config file '$(ansi reset)$CONFIG_FILE_PATH$(ansi dim)'.$(ansi reset)"
	if [ ! -r "$(eval realpath "$CONFIG_FILE_PATH")" ]; then
		err "$(ansi red)⛔ Unable to read file '$(ansi reset)$CONFIG_FILE_PATH$(ansi red)'. ABORT$(ansi reset)"
		exit 1
	else
		. "$(eval realpath "$CONFIG_FILE_PATH")"
	fi
	# create destination directory
	CURRENT_PATH=$(date +%Y-%m-%d/%H:00)
	log "├ $(ansi dim)Create output directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH$(ansi dim)'.$(ansi reset)"
	mkdir -p "$CONF_BACKUP_PATH/$CURRENT_PATH"
	if [ $? -ne 0 ]; then
		err "$(ansi red)⛔ Unable to create directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH$(ansi red)'. ABORT$(ansi reset)"
		exit 1
	fi
	chmod -R 700 "$CONF_BACKUP_PATH/$CURRENT_PATH"
	log "└ $(ansi green)Done$(ansi reset)"
	# list of created files
	FILESLIST=""
	# backup files
	if [ "$CONF_SRC" != "" ]; then
		log "$(ansi bold)Backup files$(ansi reset)"
		# loop on the paths to backup
		for SRC in $CONF_SRC; do
			log "├ $(ansi dim)Backup path '$(ansi reset)$SRC$(ansi dim)'.$(ansi reset)"
			FILENAME=$(filenamize "$SRC")
			FILEPATH="$CONF_BACKUP_PATH/$CURRENT_PATH/$FILENAME"
			# remove the leading '/' from the path to save, and call tar with the "-C /" option to avoid the warning "Removing leading `/' from member names"
			SRC="$(echo "$SRC" | sed 's/^\///')"
			# tar the path
			tar cf "$FILEPATH.tar" --exclude-caches --exclude-tag=.arkiv-exclude --exclude-ignore=.arkiv-ignore --exclude-ignore-recursive=.arkiv-ignore-recursive -C / "$SRC"
			if [ $? -ne 0 ]; then
				err "$(ansi red)⛔ Unable to tar path '$(ansi reset)/$SRC$(ansi red)' to '$(ansi reset)$FILEPATH.tar$(ansi red)'. ABORT$(ansi reset)"
				exit 1
			fi
			# compress the file
			if [ "$CONF_Z_TYPE" = "gzip" ]; then
				FILE_EXT="tar.gz"
			elif [ "$CONF_Z_TYPE" = "bzip2" ]; then
				FILE_EXT="tar.bz2"
			elif [ "$CONF_Z_TYPE" = "xz" ]; then
				FILE_EXT="tar.xz"
			else
				FILE_EXT="tar.zst"
			fi
			echo "$FILEPATH.tar" | xargs $CONF_Z_TYPE -q
			if [ $? -eq 0 ]; then
				# remove the source file (the zstd program doesn't remove it itself)
				rm -f "$FILEPATH.tar"
			else
				err "$(ansi yellow)⚠ Unable to compress file '$(ansi reset)$FILEPATH.tar$(ansi yellow)' using '$CONF_Z_TYPE'.$(ansi reset)"
				FILE_EXT="tar"
			fi
			# add to the list
			FILESLIST="$FILESLIST $FILEPATH.$FILE_EXT"
			# change file rights
			chmod 600 "$FILEPATH.$FILE_EXT"
		done
		log "└ $(ansi green)Done$(ansi reset)"
	fi
	# database backup using mysqldump
	if [ "$CONF_MYSQL" = "mysqldump" ]; then
		log "$(ansi bold)SQL database backup$(ansi reset)"
		# fetch the list of databases if needed
		if [ "$CONF_MYSQL_ALL_DATABASES" = "yes" ]; then
			log "├ $(ansi dim)Fetch the list of databases$(ansi reset)"
			CONF_MYSQL_DATABASES="$(MYSQL_PWD="$CONF_MYSQL_PWD" mysql -u $CONF_MYSQL_USER -h $CONF_MYSQL_HOST -e "SHOW DATABASES;" | tr -d "| " | grep -v -e Database -e _schema -e mysql -e sys)"
			if [ $? -ne 0 ]; then
				err "$(ansi yellow)⚠ Unable to fetch the list of databases from MySQL.$(ansi reset)"
			fi
		fi
		CONF_MYSQL_DATABASES=$(trim "$CONF_MYSQL_DATABASES")
		if [ "$CONF_MYSQL_DATABASES" = "" ]; then
			err "$(ansi yellow)⚠ No database to backup.$(ansi reset)"
		else
			# create the destination directory
			log "├ $(ansi dim)Create destination directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_sql$(ansi dim)'$(ansi reset)"
			mkdir "$CONF_BACKUP_PATH/$CURRENT_PATH/database_sql"
			if [ $? -ne 0 ]; then
				err "$(ansi red)⛔ Unable to create directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_sql$(ansi red)'. ABORT$(ansi reset)"
				exit 1
			fi
			chmod 700 "$CONF_BACKUP_PATH/$CURRENT_PATH/database_sql"
			# loop on the databases
			for DB_NAME in $CONF_MYSQL_DATABASES; do
				# dump the MySQL database
				log "├ $(ansi dim)Backup database '$(ansi reset)$DB_NAME$(ansi dim)'$(ansi reset)"
				FILEPATH="$CONF_BACKUP_PATH/$CURRENT_PATH/database_sql/$DB_NAME"
				MYSQL_PWD="$CONF_MYSQL_PWD" mysqldump -u $CONF_MYSQL_USER -h $CONF_MYSQL_HOST --single-transaction --no-tablespaces --skip-lock-tables --routines $DB_NAME > "$FILEPATH.sql"
				if [ $? -ne 0 ]; then
					err "$(ansi yellow)⚠ Unable to dump database '$(ansi reset)$DB_NAME$(ansi yellow)'.$(ansi reset)"
					continue
				fi
				# compress the dumped file
				log "├ $(ansi dim)Compress backup file for '$(ansi reset)$DB_NAME$(ansi dim)'.$(ansi reset)"
				if [ "$CONF_Z_TYPE" = "gzip" ]; then
					FILE_EXT="sql.gz"
				elif [ "$CONF_Z_TYPE" = "bzip2" ]; then
					FILE_EXT="sql.bz2"
				elif [ "$CONF_Z_TYPE" = "xz" ]; then
					FILE_EXT="sql.xz"
				else
					FILE_EXT="sql.zst"
				fi
				echo "$FILEPATH.sql" | xargs $CONF_Z_TYPE -q
				if [ $? -eq 0 ]; then
					# remove the source file (the zstd program doesn't remove it itself)
					rm -f "$FILEPATH.sql"
				else
					err "$(ansi yellow)⚠ Unable to compress file '$(ansi reset)$FILEPATH.sql$(ansi yellow)' using '$CONF_Z_TYPE'.$(ansi reset)"
					FILE_EXT="sql"
				fi
				# add to the list
				FILESLIST="$FILESLIST $FILEPATH.$FILE_EXT"
				# change file rights
				chmod 600 "$FILEPATH.$FILE_EXT"
			done
			log "└ $(ansi green)Done$(ansi reset)"
		fi
	fi
	# database backup using xtrabackup
	if [ "$CONF_MYSQL" = "xtrabackup" ]; then
		log "$(ansi bold)Binary database backup$(ansi reset)"
		# create the destination directory
		log "├ $(ansi dim)Create destination directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_data$(ansi dim)'$(ansi reset)"
		mkdir "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data"
		if [ $? -ne 0 ]; then
			err "$(ansi red)⛔ Unable to create directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_data$(ansi red)'. ABORT$(ansi reset)"
			exit 1
		fi
		chmod 700 "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data"
		# read the LSN file if needed
		if [ "$CONF_XTRABACKUP_TYPE" = "incremental" ]; then
			log "├ $(ansi dim)Read LSN file$(ansi reset)"
			LSN=$(head -1 "$CONF_XTRABACKUP_LSN_FILE")
			LSN=$(trim "$LSN")
			if [ "$LSN" = "" ]; then
				err "$(ansi yellow)⚠ Empty LSN file '$(ansi reset)$CONF_XTRABACKUP_LSN_FILE$(ansi yellow)'. Forcing a full binary backup.$(ansi reset)"
				CONF_XTRABACKUP_TYPE="full"
			fi
		fi
		# backup operation
		if [ "$CONF_XTRABACKUP_TYPE" = "full" ]; then
			# full backup
			log "├ $(ansi dim)Full binary backup$(ansi reset)"
			MYSQL_PWD="$CONF_MYSQL_PWD" xtrabackup --backup --target-dir="$CONF_BACKUP_PATH/$CURRENT_PATH/database_data" -u $CONF_MYSQL_USER -h $CONF_MYSQL_HOST > /dev/null 2> "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data/xtrabackup.log"
			if [ $? -ne 0 ]; then
				err "$(ansi red)⛔ Unable to save the database. See log '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_data/xtrabackup.log$(ansi red)'. ABORT$(ansi reset)"
				exit 1
			fi
		elif [ "$CONF_XTRABACKUP_TYPE" = "incremental" ]; then
			# incremental backup
			log "├ $(ansi dim)Incremental binary backup$(ansi reset)"
			MYSQL_PWD="$CONF_MYSQL_PWD" xtrabackup --incremental --incremental-lsn=$LSN --target-dir="$CONF_BACKUP_PATH/$CURRENT_PATH/database_data" -u $CONF_MYSQL_USER -h $CONF_MYSQL_HOST > /dev/null 2> "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data/xtrabackup.log"
			if [ $? -ne 0 ]; then
				err "$(ansi red)⛔ Unable to save the database. See log '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_data/xtrabackup.log$(ansi red)'. ABORT$(ansi reset)"
				exit 1
			fi
		fi
		# write the LSN file
		log "├ $(ansi dim)Write LSN file$(ansi reset)"
		grep last_lsn "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data/xtrabackup_checkpoints" | cut -d"=" -f2 > "$CONF_XTRABACKUP_LSN_FILE"
		# tar the directory
		log "├ $(ansi dim)Tar the generated files$(ansi reset)"
		tar cf "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data.tar" "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data"
		if [ $? -ne 0 ]; then
			err "$(ansi red)⛔ Unable to tar directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_data$(ansi red)' to '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_data.tar$(ansi red)'.$(ansi reset)"
			exit 1
		fi
		# compress the file
		log "├ $(ansi dim)Compress file$(ansi reset)"
		if [ "$CONF_Z_TYPE" = "gzip" ]; then
			FILE_EXT="tar.gz"
		elif [ "$CONF_Z_TYPE" = "bzip2" ]; then
			FILE_EXT="tar.bz2"
		elif [ "$CONF_Z_TYPE" = "xz" ]; then
			FILE_EXT="tar.xz"
		else
			FILE_EXT="tar.zst"
		fi
		echo "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data.tar" | xargs $CONF_Z_TYPE -q
		if [ $? -ne 0 ]; then
			err "$(ansi yellow)⚠ Unable to compress file '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_data.tar$(ansi yellow)' using '$CONF_Z_TYPE'.$(ansi reset)"
			FILE_EXT="tar"
		fi
		# remove the directory
		rm -rf "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data"
		if [ $? -ne 0 ]; then
			err "$(ansi red)⛔ Unable to delete directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/database_data$(ansi red)'. ABORT$(ansi reset)"
			exit 1
		fi
		# add to the list
		FILESLIST="$FILESLIST $CONF_BACKUP_PATH/$CURRENT_PATH/database_data.$FILE_EXT"
		# change file rights
		chmod 600 "$CONF_BACKUP_PATH/$CURRENT_PATH/database_data.$FILE_EXT"
		log "└ $(ansi green)Done$(ansi reset)"
	fi
	# encryption
	FILESLIST_RESULT=""
	if [ "$CONF_ENCRYPT" = "" ]; then
		FILESLIST_RESULT="$FILESLIST"
	else
		for FILE in $FILESLIST; do
			if [ "$CONF_ENCRYPT" = "key" ]; then
				openssl enc -aes-256-cbc -e -salt -in "$FILE" -out "$FILE.encrypt" -pass file:$CONF_ENCRYPT_KEY
			else
				CRYPT_PWD="$CONF_ENCRYPT_PASS" openssl enc -aes-256-cbc -e -salt -in "$FILE" -out "$FILE.encrypt" -pass env:CRYPT_PWD
			fi
			if [ $? -ne 0 ]; then
				err "$(ansi yellow)⚠ Unable to encrypt file '$(ansi reset)$FILE$(ansi yellow)' using '$(ansi reset)$CONF_ENCRYPT$(ansi yellow)'.$(ansi reset)"
				FILESLIST_RESULT="$FILESLIST_RESULT $FILE"
			else
				FILESLIST_RESULT="$FILESLIST_RESULT $FILE.encrypt"
			fi
		done
	fi
	# compute checksums
	log "$(ansi bold)Compute checksums$(ansi reset)"
	sha256sum $FILESLIST_RESULT > "$CONF_BACKUP_PATH/$CURRENT_PATH/sha256sums"
	if [ $? -ne 0 ]; then
		err "$(ansi yellow)⚠ Unable to create SHA256 checksum file '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/sha256sums$(ansi yellow)'.$(ansi reset)"
	fi
	# change checksum file rights
	chmod 600 "$CONF_BACKUP_PATH/$CURRENT_PATH/sha256sums"
	log "└ $(ansi green)Done$(ansi reset)"
	# archive on Amazon Glacier
	if [ "$CONF_AWS_GLACIER" = "yes" ]; then
		log "$(ansi bold)Archiving on Amazon Glacier$(ansi reset)"
		aws glacier upload-archive --account-id - --vault-name $CONF_GLACIER_VAULT --body "$CONF_BACKUP_PATH/$CURRENT_PATH/sha256sums" > "$CONF_BACKUP_PATH/$CURRENT_PATH/sha256sums.glacier.json"
		if [ $? -ne 0 ]; then
			err "$(ansi yellow)⚠ Unable to send file '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH/sha256sums$(ansi yellow)' to Amazon Glacier.$(ansi reset)"
		fi
		for FILE in $FILESLIST_RESULT; do
			log "├ $(ansi dim)Archive file '$(ansi reset)$FILE$(ansi dim)'.$(ansi reset)"
			aws glacier upload-archive --account-id - --vault-name $CONF_GLACIER_VAULT --body "$FILE" > "$FILE.glacier.json"
			if [ $? -ne 0 ]; then
				err "$(ansi yellow)⚠ Unable to send file '$(ansi reset)$FILE$(ansi yellow)' to Amazon Glacier.$(ansi reset)"
			fi
		done
		log "└ $(ansi green)Done$(ansi reset)"
	fi
	# archive on Amazon S3
	if [ "$CONF_AWS_S3" = "yes" ]; then
		log "$(ansi bold)Archiving on Amazon S3$(ansi reset)"
		aws s3 sync "$CONF_BACKUP_PATH/$CURRENT_PATH" "s3://$CONF_S3_BUCKET/$CONF_LOCAL_HOSTNAME/$CURRENT_PATH" --quiet --storage-class STANDARD_IA
		if [ $? -ne 0 ]; then
			err "$(ansi yellow)⚠ Unable to copy directory '$(ansi reset)$CONF_BACKUP_PATH/$CURRENT_PATH$(ansi yellow)' to Amazon S3 '$(ansi reset)$CONF_S3_BUCKET/$CONF_LOCAL_HOSTNAME/$CURRENT_PATH$(ansi yellow)'.$(ansi reset)"
		fi
		log "└ $(ansi green)Done$(ansi reset)"
	fi
	# purge local files
	if [ "$CONF_LOCAL_PURGE_DELAY" != "" ]; then
		log "$(ansi bold)Purge local files$(ansi reset)"
		# list of delays, each one like "2 days" or "3 weeks:01,03,05"
		DELAYS=$(echo "$CONF_LOCAL_PURGE_DELAY" | tr " " "_" | tr ";" "\n")
		# loop on the delays
		for DELAY in $DELAYS; do
			# extract the delay part, like "2 days" or "3 weeks"
			AGO=$(echo "$DELAY" | cut -d":" -f 1 | tr "_" " ")
			# compute the date of the archive(s) to purge
			PURGE_DATE=$(date --date="$AGO ago" +%Y-%m-%d)
			# check that the directory exists
			if [ ! -d "$CONF_BACKUP_PATH/$PURGE_DATE" ]; then
				continue
			fi
			# check if some hours were specified
			if [[ $DELAY != *:* ]]; then
				# no specified hour, delete the whole day
				log "├ $(ansi dim)Delete '$(ansi reset)$CONF_BACKUP_PATH/$PURGE_DATE$(ansi dim)'.$(ansi reset)"
				rm -rf "$CONF_BACKUP_PATH/$PURGE_DATE"
				if [ $? -ne 0 ]; then
					err "$(ansi yellow)⚠ Unable to purge local directory '$(ansi reset)$CONF_BACKUP_PATH/$PURGE_DATE$(ansi yellow)'.$(ansi reset)"
				fi
			else
				# get the list of hours to delete
				HOURS=$(echo "$DELAY" | cut -d":" -f 2)
				HOURS=$(trim "$HOURS")
				if [ "$HOURS" = "" ]; then
					continue
				fi
				# loop on the hours
				HOURS=$(echo "$HOURS" | tr "," "\n")
				for HOUR in $HOURS; do
					# get the hour on 2 digits
					HOUR=$(printf "%02d" $(trim "$HOUR" | sed 's/^0//'))
					# continue the loop if this hour is not equal to the current hour, or if the directory was already deleted
					if [ "$HOUR" != "$CURRENT_HOUR" ] || [ !
-d "$CONF_BACKUP_PATH/$PURGE_DATE/$HOUR:00" ]; then 1010 | continue; 1011 | fi 1012 | # delete the directory 1013 | log "├ $(ansi dim)Delete '$(ansi reset)$CONF_BACKUP_PATH/$PURGE_DATE/$HOUR:00$(ansi dim)'." 1014 | rm -rf "$CONF_BACKUP_PATH/$PURGE_DATE/$HOUR:00" 1015 | if [ $? -ne 0 ]; then 1016 | err "$(ansi yellow)⚠ Unable to purge local directory '$(ansi reset)$CONF_BACKUP_PATH/$PURGE_DATE/$HOUR:00$(ansi yellow)'.$(ansi reset)" 1017 | fi 1018 | done 1019 | # delete the whole day's directory if it's empty 1020 | NBR=$(ls -l "$CONF_BACKUP_PATH/$PURGE_DATE" | tail -n +2 | wc -l) 1021 | if [ $NBR -eq 0 ]; then 1022 | log "├ $(ansi dim)Delete '$(ansi reset)$CONF_BACKUP_PATH/$PURGE_DATE$(ansi dim)'." 1023 | rm -rf "$CONF_BACKUP_PATH/$PURGE_DATE" 1024 | if [ $? -ne 0 ]; then 1025 | err "$(ansi yellow)⚠ Unable to purge local directory '$(ansi reset)$CONF_BACKUP_PATH/$PURGE_DATE$(ansi yellow)'.$(ansi reset)" 1026 | fi 1027 | fi 1028 | fi 1029 | done 1030 | log "└ $(ansi green)Done$(ansi reset)" 1031 | fi 1032 | # purge files on Amazon S3 1033 | if [ "$CONF_S3_PURGE_DELAY" != "" ]; then 1034 | log "$(ansi bold)Purge files on Amazon S3$(ansi reset)" 1035 | # list of delays, each one like "2 days" or "3 weeks:01,03,05" 1036 | DELAYS=$(echo "$CONF_S3_PURGE_DELAY" | tr " " "_" | tr ";" "\n") 1037 | # loop on the delays 1038 | for DELAY in $DELAYS; do 1039 | # extract the delay part, like "2 days" or "3 weeks" 1040 | AGO=$(echo "$DELAY" | cut -d":" -f 1 | tr "_" " ") 1041 | # compute the date of the archive(s) to purge 1042 | PURGE_DATE=$(date --date="$AGO ago" +%Y-%m-%d) 1043 | # check if there is some specified hours 1044 | if [[ $DELAY != *:* ]]; then 1045 | # no specified hour, delete the whole day 1046 | log "├ $(ansi dim)Delete '$(ansi reset)s3://$CONF_S3_BUCKET/$CONF_LOCAL_HOSTNAME/$PURGE_DATE$(ansi dim)'." 1047 | aws s3 rm s3://$CONF_S3_BUCKET/$CONF_LOCAL_HOSTNAME/$PURGE_DATE/ --recursive --exclude "sha256sums" --exclude "*.glacier.json" 1048 | if [ $? 
-ne 0 ]; then 1049 | err "$(ansi yellow)⚠ Unable to purge S3 directory '$(ansi reset)$CONF_S3_BUCKET/$CONF_LOCAL_HOSTNAME/$PURGE_DATE$(ansi yellow)'.$(ansi reset)" 1050 | fi 1051 | else 1052 | # get the list of hours to delete 1053 | HOURS=$(echo "$DELAY" | cut -d":" -f 2) 1054 | HOURS=$(trim "$HOURS") 1055 | if [ "$HOURS" = "" ]; then 1056 | continue 1057 | fi 1058 | # loop on the hours 1059 | HOURS=$(echo "$HOURS" | tr "," "\n") 1060 | for HOUR in $HOURS; do 1061 | # get the hour on 2 digits 1062 | HOUR=$(printf "%02d" $(trim "$HOUR" | sed 's/^0//')) 1063 | # continue the loop if this hour is not equal to the current hour 1064 | if [ "$HOUR" != "$CURRENT_HOUR" ]; then 1065 | continue; 1066 | fi 1067 | # delete the directory 1068 | log "├ $(ansi dim)Delete '$(ansi reset)s3://$CONF_S3_BUCKET/$CONF_LOCAL_HOSTNAME/$PURGE_DATE/$HOUR:00$(ansi dim)'." 1069 | aws s3 rm s3://$CONF_S3_BUCKET/$CONF_LOCAL_HOSTNAME/$PURGE_DATE/$HOUR:00/ --recursive --exclude "sha256sums" --exclude "*.glacier.json" 1070 | if [ $? 
-ne 0 ]; then 1071 | err "$(ansi yellow)⚠ Unable to purge S3 directory '$(ansi reset)$CONF_S3_BUCKET/$CONF_LOCAL_HOSTNAME/$PURGE_DATE/$HOUR:00$(ansi yellow)'.$(ansi reset)" 1072 | fi 1073 | done 1074 | fi 1075 | done 1076 | log "└ $(ansi green)Done$(ansi reset)" 1077 | fi 1078 | log "$(ansi green)✓ End of processing$(ansi reset)" 1079 | log -n "" 1080 | } 1081 | 1082 | 1083 | # ########## MAIN EXECUTION ########## 1084 | 1085 | # parsing command-line options 1086 | case "$1" in 1087 | "config") 1088 | EXEC_TYPE="config" 1089 | ;; 1090 | "exec") 1091 | EXEC_TYPE="exec" 1092 | ;; 1093 | *) 1094 | main_usage 1095 | ;; 1096 | esac 1097 | shift 1098 | while getopts ":c:l:oen-:" OPTCHAR; do 1099 | case "$OPTCHAR" in 1100 | "-") 1101 | case "$OPTARG" in 1102 | config=*) 1103 | CONFIG_FILE_PATH=${OPTARG#*=} 1104 | ;; 1105 | log=*) 1106 | OPT_LOG_PATH=${OPTARG#*=} 1107 | ;; 1108 | "no-stdout") 1109 | OPT_NOSTDOUT=1 1110 | ;; 1111 | "no-stderr") 1112 | OPT_NOSTDERR=1 1113 | ;; 1114 | "no-ansi") 1115 | OPT_NOANSI=1 1116 | ;; 1117 | "syslog") 1118 | OPT_SYSLOG=1 1119 | ;; 1120 | *) 1121 | log "$(ansi red)⛔ Bad command line option '--$OPTARG'. ABORT$(ansi reset)" 1122 | log "$(ansi dim)Try $(ansi reset)$0 help$(ansi dim) to get help.$(ansi reset)" 1123 | exit 1 1124 | esac 1125 | ;; 1126 | "c") 1127 | CONFIG_FILE_PATH="$OPTARG" 1128 | ;; 1129 | "l") 1130 | OPT_LOG_PATH="$OPTARG" 1131 | ;; 1132 | "o") 1133 | OPT_NOSTDOUT=1 1134 | ;; 1135 | "e") 1136 | OPT_NOSTDERR=1 1137 | ;; 1138 | "n") 1139 | OPT_NOANSI=1 1140 | ;; 1141 | "s") 1142 | OPT_SYSLOG=1 1143 | ;; 1144 | *) 1145 | log "$(ansi red)⛔ Bad command line option '-$OPTARG'. 
ABORT$(ansi reset)" 1146 | log "$(ansi dim)Try $(ansi reset)$0 help$(ansi dim) to get help.$(ansi reset)" 1147 | exit 1 1148 | ;; 1149 | esac 1150 | done 1151 | # main execution 1152 | case "$EXEC_TYPE" in 1153 | "config") 1154 | main_config 1155 | ;; 1156 | "exec") 1157 | main_exec 1158 | ;; 1159 | *) 1160 | main_usage 1161 | ;; 1162 | esac 1163 | 1164 | --------------------------------------------------------------------------------
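The purge-delay format (a `;`-separated list of entries such as `2 days` or `3 weeks:01,03,05`) is expanded into purge targets by a `tr`/`cut` pipeline. A minimal standalone sketch of that expansion, using the same pipeline but a hypothetical helper name (`list_purge_targets` is not part of Arkiv), requires GNU `date`:

```shell
# Sketch: expand a purge-delay spec into the dates/hours to purge.
# "3 days" -> whole-day target; "2 weeks:01,03" -> two hourly targets.
list_purge_targets() {
	SPEC="$1"
	# entries are separated by ";"; spaces are protected as "_" so the
	# for-loop splits on whole entries, not on words
	DELAYS=$(echo "$SPEC" | tr " " "_" | tr ";" "\n")
	for DELAY in $DELAYS; do
		# left of ":" is the age, e.g. "3 days"
		AGO=$(echo "$DELAY" | cut -d":" -f 1 | tr "_" " ")
		# date of the archive(s) to purge (GNU date extension)
		PURGE_DATE=$(date --date="$AGO ago" +%Y-%m-%d)
		case "$DELAY" in
			*:*)
				# right of ":" lists the hours to purge on that day
				HOURS=$(echo "$DELAY" | cut -d":" -f 2 | tr "," " ")
				for HOUR in $HOURS; do
					echo "$PURGE_DATE/$HOUR:00"
				done
				;;
			*)
				echo "$PURGE_DATE"
				;;
		esac
	done
}

list_purge_targets "3 days;2 weeks:01,03"
```

The demo call prints one whole-day target (dated three days ago) followed by two hourly targets (dated two weeks ago).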
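The option loop relies on a bash `getopts` trick: declaring `-:` in the optstring makes `getopts` accept `-` as an option taking an argument, so `--config=FILE` arrives as option `-` with `OPTARG` set to `config=FILE`, which the inner `case` then splits with `${OPTARG#*=}`. A simplified, self-contained illustration (the `parse` function is hypothetical, not part of Arkiv; requires bash):

```shell
# Sketch of the "--long-option" handling via getopts' "-:" trick (bash).
parse() {
	OPTIND=1                # reset getopts state between calls
	CONFIG_FILE_PATH=""
	while getopts ":c:-:" OPTCHAR; do
		case "$OPTCHAR" in
			"-")
				# long option: OPTARG holds "name=value"
				case "$OPTARG" in
					config=*)
						CONFIG_FILE_PATH=${OPTARG#*=}   # strip "config="
						;;
				esac
				;;
			"c")
				CONFIG_FILE_PATH="$OPTARG"
				;;
		esac
	done
	echo "$CONFIG_FILE_PATH"
}

parse --config=/etc/arkiv.conf   # prints /etc/arkiv.conf
parse -c /tmp/test.conf          # prints /tmp/test.conf
```

Resetting `OPTIND` matters only when the parser is called more than once in the same shell; the main script parses its arguments a single time and can skip it.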
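Since the encryption step writes `*.encrypt` files with `openssl enc -aes-256-cbc -e -salt`, restoring one means running the same cipher with `-d` and the same key source. A hedged roundtrip sketch of the key-file variant (all file names below are examples, not paths used by the script; newer OpenSSL versions may print a key-derivation warning on stderr):

```shell
# Sketch: encrypt a sample file the way the "key" branch does, then restore it.
WORK=$(mktemp -d)
head -c 64 /dev/urandom > "$WORK/arkiv.key"      # example key file
printf 'backup payload\n' > "$WORK/sample.tar"   # stand-in for a real archive
# encrypt (mirrors the script's key-file invocation)
openssl enc -aes-256-cbc -e -salt -in "$WORK/sample.tar" -out "$WORK/sample.tar.encrypt" -pass file:"$WORK/arkiv.key"
# restore: identical options, -d instead of -e
openssl enc -aes-256-cbc -d -in "$WORK/sample.tar.encrypt" -out "$WORK/restored.tar" -pass file:"$WORK/arkiv.key"
cmp -s "$WORK/sample.tar" "$WORK/restored.tar" && echo "roundtrip OK"
```

The same pattern applies to the passphrase variant by swapping `-pass file:…` for `-pass env:CRYPT_PWD` on both sides.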