├── .github
│   └── FUNDING.yml
├── Backup.sh
├── LICENSE
├── README.md
└── Sample Template Preview
    ├── Samlple Dark Theme - small - Database.png
    ├── Samlple Dark Theme - small - Directory.png
    ├── Samlple Dark Theme - small - Statistics.png
    ├── Samlple Dark Theme.png
    ├── Sample Light Theme - small - Database.png
    ├── Sample Light Theme - small - Directory.png
    ├── Sample Light Theme - small - Statistics.png
    └── Sample Light Theme.png

/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | buy_me_a_coffee: DartSteven
2 | 
3 | 

--------------------------------------------------------------------------------
/Backup.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Backup Script for Directories with Special Focus on Docker and Its Databases
4 | #
5 | # Description:
6 | # This script is designed to automate the backup process for directories on a Linux-based server,
7 | # with special attention to Docker containers and their associated databases. It was created to provide
8 | # a reliable, flexible, and efficient way to safeguard critical data from directories and Dockerized
9 | # applications, ensuring easy restoration when needed.
10 | #
11 | # Key Features:
12 | # - Backs up important directories and Docker databases, including PostgreSQL, MySQL, Redis, and more.
13 | # - Provides the option to encrypt backups using AES-256 encryption for enhanced data security.
14 | # - Implements the "1-2-3" backup strategy, copying backups to multiple locations for redundancy.
15 | # - Allows for stopping and restarting Docker services to ensure consistent database snapshots.
16 | # - Sends detailed backup reports via email, covering backup status, sizes, durations, and verification results.
17 | # - Utilizes CPU parallelism for efficient compression, optimizing backup times on systems with multiple cores.
18 | # - Offers customizable retention policies to automatically clean up old backups after a specified number of days.
19 | # - Automatically detects the Linux distribution and installs required packages.
20 | # - Supported distributions include Ubuntu, Debian, CentOS, RHEL, Fedora, and Arch Linux.
21 | #
22 | # Why This Script Was Created:
23 | # This script was created to simplify and automate the process of backing up critical directories and Dockerized
24 | # applications, particularly their databases. By handling these tasks automatically, the script minimizes the risk
25 | # of data loss, reduces downtime in case of failures, and ensures that backups are performed consistently and securely.
26 | #
27 | # The script is highly customizable, allowing users to define backup directories, email configurations, and encryption
28 | # preferences. It’s an ideal solution for system administrators who need a powerful tool to protect data and ensure
29 | # disaster recovery, with a special focus on Docker environments.
30 | 
31 | 
32 | 
33 | 
34 | # Variable for naming the backup that will be sent via email
35 | # • Example: BACKUP_NAME="Docker-Compose, /etc, Databases"
36 | # • You can change the name to reflect the content of your backup.
37 | # • If you are backing up specific directories or databases, you can include their names in the BACKUP_NAME.
38 | BACKUP_NAME=""
39 | 
40 | # Variable for the Server name
41 | # • This variable stores the name of the server where the backup script is running.
42 | # • Example: SERVER_NAME="HOME SERVER"
43 | # • This can be customized to any server name that makes sense for your environment, and it will be included in email notifications.
44 | SERVER_NAME=""
45 | 
46 | # Variable to enable backup encryption
47 | # • This variable is used to determine whether to encrypt the backup files.
48 | # • Values: "Y" or "N".
49 | # • If you set ENCRYPT_BACKUP="Y", the backup will be encrypted with AES-256. If set to "N", no encryption will be applied.
50 | ENCRYPT_BACKUP="Y"
51 | 
52 | 
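# NOTE (illustrative): encrypted archives can be restored by hand with the same OpenSSL parameters the
# script uses (AES-256-CBC with PBKDF2 and 10,000 iterations; see encrypt_backup below). A minimal sketch,
# assuming a hypothetical archive name and the generated password recorded in the backup report:
#
#   openssl enc -d -aes-256-cbc -pbkdf2 -iter 10000 -in 20240101-etc.tar.gz.enc -out 20240101-etc.tar.gz -pass pass:'PASSWORD-FROM-REPORT'
#   tar -xzf 20240101-etc.tar.gz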
53 | # Default variables for the source directories
54 | # • This is an array of directories you wish to back up.
55 | # • Example: SOURCE_DIRS=( "/home/JohnDoe" "/etc" )
56 | # • You can add multiple directories by separating each path with a space. Make sure each path points to a valid directory you want to back up.
57 | SOURCE_DIRS=(
58 | "/home/JohnDoe"
59 | "/etc"
60 | )
61 | 
62 | 
63 | # Default variables for excluded directories
64 | # • This is an array of directories you want to exclude from the backup.
65 | # • Example: EXCLUDE_DIRS=( "/home/JohnDoe/Personal" )
66 | # • You can leave this array empty if you do not want to exclude any directories, or add paths you want to skip.
67 | EXCLUDE_DIRS=(
68 | "/home/JohnDoe/Personal"
69 | )
70 | 
71 | # Backup destination directory
72 | # • This is the destination directory where the backups will be stored.
73 | # • Example: BACKUP_DIR="/media/nvme/1TB/BACKUP"
74 | # • This should point to the directory where you have sufficient storage space for backups.
75 | BACKUP_DIR=""
76 | 
77 | # Log file for the backup script
78 | LOG_FILE="$BACKUP_DIR/Compose.log"
79 | 
80 | # Email Configuration Variables
81 | # • EMAIL_RECIPIENT: The email address to which the backup report will be sent. Example: EMAIL_RECIPIENT="JohnDoe@icloud.com"
82 | # • SMTP Configuration Variables (SMTP_HOST, SMTP_PORT, SMTP_FROM, SMTP_USER, SMTP_PASSWORD):
83 | # • These variables define the SMTP configuration needed to send emails.
84 | # • Example:
85 | # • SMTP_HOST="smtp.mail.me.com" (The SMTP server for sending emails)
86 | # • SMTP_PORT="587" (The port for SMTP, typically 587 for TLS)
87 | # • SMTP_FROM="JohnDoe@icloud.com" (The email address from which the email will be sent)
88 | # • SMTP_USER="JohnDoe@icloud.com" (SMTP username, often the same as the sender’s email)
89 | # • SMTP_PASSWORD="your-password" (Password for SMTP authentication, e.g. an app-specific password generated from iCloud)
90 | EMAIL_RECIPIENT=""
91 | SMTP_HOST=""
92 | SMTP_PORT=""
93 | SMTP_FROM=""
94 | SMTP_USER=""
95 | SMTP_PASSWORD=""
96 | 
97 | 
98 | # Variable to define how many days to retain backups. Change the number of days as needed.
99 | # • This defines how many days to retain the backup files before they are automatically deleted.
100 | # • Example: DAYS_TO_KEEP=6
101 | # • You can adjust the number of days to suit your retention policy.
102 | DAYS_TO_KEEP=6
103 | 
104 | # Variable to stop Docker before the backup
105 | # • This variable controls whether Docker containers should be stopped before starting the backup.
106 | # • Values: "Y" or "N".
107 | # • Set it to "Y" if you want Docker services to stop before the backup (useful when you are backing up Docker data).
108 | STOP_DOCKER_BEFORE_BACKUP="Y"
109 | 
110 | # Variable to enable database backup
111 | # • This controls whether Docker databases should be backed up.
112 | # • Values: "Y" or "N".
113 | # • If set to "Y", the script will perform backups of the databases listed in the DATABASES variable.
114 | BACKUP_DOCKER_DATABASE="Y"
115 | 
116 | # Backup destination database directory
117 | # • The destination directory where database backups will be stored.
118 | # • Example: BACKUP_DATABASE_DIR="/media/nvme/1TB/BACKUP/backup_databases"
119 | # • Make sure this points to a directory with enough space to store database backups.
120 | BACKUP_DATABASE_DIR=""
121 | 
122 | # List of databases to back up, using the format DB_TYPE | CONTAINER_NAME | DB_NAME | DB_USER | DB_PASSWORD. Examples follow below.
123 | # • This is an array listing all the databases to be backed up. The format is:
124 | # • DB_TYPE|CONTAINER_NAME|DB_NAME|DB_USER|DB_PASSWORD
125 | # • Example: DATABASES=( "PostgreSQL|Joplin-Postgress|joplindb|joplin|joplin" )
126 | # • You can add multiple databases by separating each entry with a space. Make sure each entry is in the correct format.
127 | # • You can find the database type, container name, database name, database user, and password by running the docker-compose ps command.
128 | # • Make sure to replace the example values with the actual values for your databases.
129 | # • You can leave this array empty if you do not want to back up any databases.
130 | # • Supported databases: PostgreSQL, TimescaleDB, MySQL, MariaDB, MongoDB, Redis, Cassandra, Elasticsearch, SQLite, Neo4j,
131 | # • CockroachDB, InfluxDB, Oracle, RethinkDB and Memcached.
132 | #
133 | # • Example:
134 | # "PostgreSQL|Postgress-Container|postgress-db|postgress-login|postgress-pass"
135 | # "TimescaleDB|Timescale-Container|timescaledb|timescaleuser|timescalepass"
136 | # "MySQL|MySQLContainer|mydatabase|dbuser|password2"
137 | # "MariaDB|MariaDBContainer|mariadb|mariauser|mariapass"
138 | # "MongoDB|MongoContainer|mydb|mongouser|password3"
139 | # "Redis|RedisContainer|myredisdb|redisuser|password4"
140 | # "Cassandra|CassandraContainer|cassandradb|cassandrauser|cassandrapass"
141 | # "Elasticsearch|ElasticContainer|elasticdb|elasticuser|elasticpass"
142 | # "SQLite|SQLiteContainer|sqlitedb|sqliteuser|sqlitepass"
143 | # "Neo4j|Neo4jContainer|neo4jdb|neo4juser|neo4jpass"
144 | # "CockroachDB|CockroachContainer|cockroachdb|roachuser|roachpass"
145 | # "InfluxDB|InfluxContainer|influxdb|influxuser|influxpass"
146 | # "Oracle|OracleContainer|oracledb|oracleuser|oraclepass"
147 | # "RethinkDB|RethinkDBContainer|rethinkdb|rethinkuser|rethinkpass"
148 | # "Memcached|MemcachedContainer|memcacheddb|memcacheduser|memcachedpass"
149 | #
150 | DATABASES=(
151 | "PostgreSQL|Joplin-Postgress|joplindb|joplin|joplin"
152 | "PostgreSQL|Tesla-Postgres|teslamate|teslamate|teslamate"
153 | "PostgreSQL|immich_postgres|immich|postgres|postgres"
154 | "Redis|immich_redis||"
155 | "PostgreSQL|Invidiuous-db|invidious|kemal|kemal"
156 | 
157 | )
158 | 
159 | # Variable for selecting the email template
160 | # • This variable allows you to choose between different email templates.
161 | # • Values: "random" or one of the numbers below:
162 | # • "1" Dark Theme - "2" Crimson Night - "3" Cyberpunk - "4" Steel Gray - "5" Emerald Glow
163 | # • "6" Home Lab Tech - "7" Light - "8" Midnight Blue - "9" Purple Dusk - "10" Retro Neon
164 | 
165 | MAIL_TEMPLATE="random"
166 | 
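# For reference, backup_database (below) splits each DATABASES entry on the "|" separator with cut.
# A minimal sketch of that parsing, using a hypothetical entry:
#
#   db_info="PostgreSQL|my-postgres|mydb|myuser|mypass"
#   db_type=$(echo "$db_info" | cut -d'|' -f1)         # -> "PostgreSQL"
#   container_name=$(echo "$db_info" | cut -d'|' -f2)  # -> "my-postgres"
#   db_name=$(echo "$db_info" | cut -d'|' -f3)         # -> "mydb"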
167 | # Variable to specify the maximum number of CPU cores to use for compression
168 | # • The variable MAX_CPU_CORE allows you to define how many CPU cores should be used for compression during the backup process.
169 | # • If you set this variable to a specific number, the script will use that many cores.
170 | # • However, if you leave this variable unset or empty, the script will automatically detect the number of available CPU cores on your system and use all of them for compression.
171 | MAX_CPU_CORE=20
172 | 
173 | 
174 | # • Controls whether the 1-2-3 backup principle (work in progress) is applied (creating multiple backup copies).
175 | # • Values: "Y" or "N".
176 | # • If enabled ("Y"), backups will be copied to additional storage locations defined by SATA_DISK1 and SATA_DISK2.
177 | # • Example:
178 | # • SATA_DISK1="/media/nvme/1TB/123-1"
179 | # • SATA_DISK2="/media/nvme/1TB/123-2"
180 | BACKUP_123="Y"
181 | SATA_DISK1=""
182 | SATA_DISK2=""
183 | 
184 | # SCP Configuration Variables
185 | SCP_ENABLED="Y" # Set to "Y" to enable SCP backup
186 | SCP_USER=""
187 | SCP_USER_PASSWORD=""
188 | SCP_HOST="" # IP address of the remote host
189 | SCP_DEST_DIR=""
190 | 
191 | # SMB Configuration Variables
192 | SMB_ENABLED="Y" # Set to "Y" to enable SMB backup
193 | SMB_USER=""
194 | SMB_PASSWORD=""
195 | SMB_REMOTE_SERVER="" # IP address of the remote server
196 | SMB_REMOTE_MOUNTPOINT=""
197 | SMB_REMOTE_BACKUP=""
198 | SMB_MOUNT_POINT=""
199 | 
200 | 
201 | 
202 | 
203 | 
204 | 
205 | 
206 | 
207 | 
208 | 
209 | 
210 | 
211 | 
212 | #####################################################################################################################################
213 | #   DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE           #
214 | #   DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE           #
215 | #   DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE           #
216 | #   DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE           #
217 | #   DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE - DO NOT EDIT BELOW THIS LINE           #
218 | #####################################################################################################################################
219 | 
220 | 
221 | # Check if the script is being run as root
222 | if [ "$EUID" -ne 0 ]; then
223 | echo "Error: This script must be run as root user." >&2
224 | exit 1
225 | fi
226 | 
227 | # Function to detect the Linux distribution and install required packages
228 | check_and_install_dependencies() {
229 | # Identify the distribution
230 | if [ -f /etc/os-release ]; then
231 | . /etc/os-release
232 | DISTRO=$ID
233 | echo "Distro found: $DISTRO"
234 | echo
235 | elif [ -f /etc/lsb-release ]; then
236 | . /etc/lsb-release
237 | DISTRO=$DISTRIB_ID
238 | echo "Distro found: $DISTRO"
239 | echo
240 | elif [ -f /etc/debian_version ]; then
241 | DISTRO="debian"
242 | echo "Distro found: $DISTRO"
243 | echo
244 | elif [ -f /etc/redhat-release ]; then
245 | DISTRO="rhel"
246 | echo "Distro found: $DISTRO"
247 | echo
248 | else
249 | DISTRO=$(uname -s)
250 | echo "Distro found: $DISTRO"
251 | echo
252 | fi
253 | 
254 | # Required packages
255 | REQUIRED_PACKAGES="tar bc bar pigz openssl msmtp coreutils cifs-utils openssh-client sshpass smbclient"
256 | 
257 | case "$DISTRO" in
258 | ubuntu|debian)
259 | # Check if nala is installed
260 | if command -v nala > /dev/null 2>&1; then
261 | PACKAGE_MANAGER="nala"
262 | else
263 | PACKAGE_MANAGER="apt"
264 | fi
265 | 
266 | # Check if packages are already installed
267 | for pkg in $REQUIRED_PACKAGES; do
268 | if !
dpkg -s $pkg > /dev/null 2>&1; then 269 | echo "Installing package $pkg on $DISTRO using $PACKAGE_MANAGER..." 270 | $PACKAGE_MANAGER update && $PACKAGE_MANAGER install -y $pkg 271 | else 272 | echo "$pkg is already installed." 273 | fi 274 | done 275 | ;; 276 | centos|rhel|fedora) 277 | # Check if packages are already installed 278 | for pkg in $REQUIRED_PACKAGES; do 279 | if ! rpm -q $pkg > /dev/null 2>&1; then 280 | echo "Installing package $pkg on $DISTRO..." 281 | yum install -y $pkg 282 | else 283 | echo "$pkg is already installed." 284 | fi 285 | done 286 | ;; 287 | arch) 288 | # Check if packages are already installed 289 | for pkg in $REQUIRED_PACKAGES; do 290 | if ! pacman -Qi $pkg > /dev/null 2>&1; then 291 | echo "Installing package $pkg on $DISTRO..." 292 | pacman -Syu --noconfirm $pkg 293 | else 294 | echo "$pkg is already installed." 295 | fi 296 | done 297 | ;; 298 | *) 299 | echo "Unsupported distribution: $DISTRO" >&2 300 | exit 1 301 | ;; 302 | esac 303 | } 304 | 305 | 306 | check_and_install_dependencies 307 | 308 | # Variable to specify the maximum number of CPU cores to use for compression and decompression 309 | # If not set, use the maximum available cores 310 | MAX_CPU_CORE=${MAX_CPU_CORE:-$(nproc)} 311 | 312 | # Get the number of available CPU cores 313 | AVAILABLE_CPU_CORES=$(nproc) 314 | 315 | # Ensure MAX_CPU_CORE does not exceed the available CPU cores 316 | if [ "$MAX_CPU_CORE" -gt "$AVAILABLE_CPU_CORES" ]; then 317 | MAX_CPU_CORE="$AVAILABLE_CPU_CORES" 318 | fi 319 | 320 | # Function to gather host statistics 321 | gather_host_statistics() { 322 | KERNEL_VERSION=$(uname -r) 323 | TOTAL_MEMORY=$(free -h | grep Mem | awk '{print $2}') 324 | USED_MEMORY=$(free -h | grep Mem | awk '{print $3}') 325 | AVAILABLE_MEMORY=$(free -h | grep Mem | awk '{print $7}') 326 | CPU_LOAD=$(uptime | awk -F'load average:' '{ print $2 }' | awk '{ print $1 }') 327 | DISK_USAGE=$(df -h / | grep / | awk '{print $5}') 328 | } 329 | 330 | # Function to generate current date and time 331 | CURRENT_DATE=$(date +%Y%m%d) 332 | 333 | # Variables to accumulate backup reports 334 | SOURCE_DIR_LIST="" 335 | EXCLUDE_DIR_LIST="" 336 | 337 | # Arrays to accumulate individual backup metrics 338 | BACKUP_FILES=() 339 | BACKUP_SIZES=() 340 | MD5_SUMS=() 341 | DISK_SPEEDS=() 342 | BACKUP_DURATIONS=() 343 | BACKUP_STARTS=() 344 | BACKUP_ENDS=() 345 | TEST_STATUSES=() 346 | 347 | # Arrays to separately manage backup files for directories and databases 348 | DIRECTORY_BACKUP_FILES=() 349 | DATABASE_BACKUP_FILES=() 350 | DIRECTORY_PASSWORDS=() 351 | DATABASE_PASSWORDS=() 352 | DATABASE_BACKUP_DETAILS=() 353 | DATABASE_BACKUP_DURATIONS=() 354 | DATABASE_BACKUP_STARTS=() 355 | DATABASE_BACKUP_ENDS=() 356 | DATABASE_TEST_STATUSES=() 357 | 358 | # Array to store generated passwords 359 | BACKUP_PASSWORDS=() 360 | 361 | # Check if SOURCE_DIRS is not empty 362 | if [ ${#SOURCE_DIRS[@]} -eq 0 ]; then 363 | echo "Error: No source directories specified for backup." >&2 364 | exit 1 365 | fi 366 | 367 | # Check if the backup directory does not exist 368 | if [ ! -d "$BACKUP_DIR" ]; then 369 | # Create the backup directory 370 | mkdir -p "$BACKUP_DIR" 371 | fi 372 | 373 | # Check if the backup database directory does not exist 374 | if [ ! 
-d "$BACKUP_DATABASE_DIR" ]; then 375 | # Create the backup directory 376 | mkdir -p "$BACKUP_DATABASE_DIR" 377 | fi 378 | 379 | # Function to check and create backup destination directories 380 | check_and_create_directories() { 381 | local dirs=("$@") 382 | for dir in "${dirs[@]}"; do 383 | if [ ! -d "$dir" ]; then 384 | echo 385 | echo "Directory $dir does not exist. Creating it..." 386 | mkdir -p "$dir" 387 | else 388 | echo 389 | echo "Directory $dir already exists." 390 | echo 391 | fi 392 | done 393 | } 394 | 395 | # Function to check and create remote directory via SCP 396 | check_and_create_scp_directory() { 397 | sshpass -p "$SCP_USER_PASSWORD" ssh "$SCP_USER@$SCP_HOST" "mkdir -p $SCP_DEST_DIR" 398 | } 399 | 400 | # Function to check and create remote directory via SMB 401 | check_and_create_smb_directory() { 402 | # Mount the SMB share 403 | mount_smb_share 404 | 405 | # Construct the full path for the remote backup directory 406 | FULL_REMOTE_PATH="${SMB_MOUNT_POINT}/${SMB_REMOTE_BACKUP}" 407 | 408 | # Check if the remote backup directory exists 409 | if [ ! -d "$FULL_REMOTE_PATH" ]; then 410 | echo "Directory $FULL_REMOTE_PATH does not exist. Creating it..." 411 | mkdir -p "$FULL_REMOTE_PATH" 412 | if [ $? -eq 0 ]; then 413 | echo "Directory created successfully." 414 | else 415 | echo "Failed to create directory." 416 | exit 1 417 | fi 418 | else 419 | echo "Directory $FULL_REMOTE_PATH already exists." 420 | fi 421 | } 422 | 423 | # Function to mount SMB share 424 | mount_smb_share() { 425 | # Check if the mount point exists, otherwise create it 426 | if [ ! -d "$SMB_MOUNT_POINT" ]; then 427 | echo "Creating local mount point at $SMB_MOUNT_POINT" 428 | mkdir -p "$SMB_MOUNT_POINT" 429 | fi 430 | 431 | # Check if the SMB share is already mounted 432 | if mount | grep "$SMB_MOUNT_POINT" > /dev/null; then 433 | echo "SMB share already mounted at $SMB_MOUNT_POINT" 434 | else 435 | echo "Mounting SMB share //${SMB_REMOTE_SERVER}${SMB_REMOTE_MOUNTPOINT} at $SMB_MOUNT_POINT" 436 | mount -t cifs -o username="$SMB_USER",password="$SMB_PASSWORD" "//${SMB_REMOTE_SERVER}${SMB_REMOTE_MOUNTPOINT}" "$SMB_MOUNT_POINT" 437 | 438 | # Check if the mount command was successful 439 | if [ $? -ne 0 ]; then 440 | echo "Error: Failed to mount SMB share //${SMB_REMOTE_SERVER}${SMB_REMOTE_MOUNTPOINT} at $SMB_MOUNT_POINT" >&2 441 | exit 1 442 | fi 443 | echo "SMB share mounted successfully at $SMB_MOUNT_POINT." 444 | fi 445 | } 446 | 447 | # Function to unmount SMB share 448 | unmount_smb_share() { 449 | echo "Unmounting SMB share at $SMB_MOUNT_POINT..." 450 | 451 | # Check if the share is mounted before unmounting it 452 | if mount | grep "$SMB_MOUNT_POINT" > /dev/null; then 453 | umount "$SMB_MOUNT_POINT" 454 | if [ $? -ne 0 ]; then 455 | echo "Warning: Failed to unmount SMB share at $SMB_MOUNT_POINT" >&2 456 | else 457 | echo "SMB share unmounted successfully." 458 | fi 459 | else 460 | echo "SMB share is not mounted at $SMB_MOUNT_POINT." 
461 | fi
462 | }
463 | 
464 | # Check and create backup destination directories for the 1-2-3 principle if BACKUP_123 is enabled
465 | if [ "$BACKUP_123" == "Y" ]; then
466 | check_and_create_directories "$SATA_DISK1" "$SATA_DISK2"
467 | fi
468 | 
469 | # Function to stop Docker based on the distribution
470 | stop_docker() {
471 | echo
472 | echo "##############################################################################################"
473 | echo "# "
474 | echo "# "
475 | echo "# STOPPING DOCKER "
476 | echo "# "
477 | echo "# "
478 | echo "##############################################################################################"
479 | echo
480 | case "$DISTRO" in
481 | ubuntu|debian)
482 | sudo systemctl stop docker.service
483 | sudo systemctl stop docker.socket
484 | sudo systemctl stop containerd.service
485 | ;;
486 | centos|rhel|fedora)
487 | sudo systemctl stop docker
488 | sudo systemctl stop docker.socket
489 | sudo systemctl stop containerd
490 | ;;
491 | arch)
492 | sudo systemctl stop docker
493 | sudo systemctl stop docker.socket
494 | sudo systemctl stop containerd
495 | ;;
496 | *)
497 | echo "Unsupported distribution: $DISTRO" >&2
498 | exit 1
499 | ;;
500 | esac
501 | echo "Docker services stopped." | tee -a "$LOG_FILE"
502 | }
503 | 
504 | # Function to start Docker based on the distribution
505 | start_docker() {
506 | echo
507 | echo "##############################################################################################"
508 | echo "# "
509 | echo "# "
510 | echo "# STARTING DOCKER "
511 | echo "# "
512 | echo "# "
513 | echo "##############################################################################################"
514 | echo
515 | case "$DISTRO" in
516 | ubuntu|debian)
517 | sudo systemctl start docker.service
518 | sudo systemctl start docker.socket
519 | sudo systemctl start containerd.service
520 | ;;
521 | centos|rhel|fedora)
522 | sudo systemctl start docker
523 | sudo systemctl start docker.socket
524 | sudo systemctl start containerd
525 | ;;
526 | arch)
527 | sudo systemctl start docker
528 | sudo systemctl start docker.socket
529 | sudo systemctl start containerd
530 | ;;
531 | *)
532 | echo "Unsupported distribution: $DISTRO" >&2
533 | exit 1
534 | ;;
535 | esac
536 | echo "Docker services started." | tee -a "$LOG_FILE"
537 | }
538 | 
539 | # Function to check if Docker has stopped
540 | check_docker_stopped() {
541 | local retries=5
542 | local wait_time=5
543 | echo
544 | echo "##############################################################################################"
545 | echo "# "
546 | echo "# "
547 | echo "# CHECK IF DOCKER IS STOPPED "
548 | echo "# "
549 | echo "# "
550 | echo "##############################################################################################"
551 | echo
552 | for ((i=1; i<=retries; i++)); do
553 | if ! systemctl is-active --quiet docker; then
554 | echo "Docker has stopped successfully." | tee -a "$LOG_FILE"
555 | return 0
556 | fi
557 | echo "Waiting for Docker to stop... ($i/$retries)" | tee -a "$LOG_FILE"
558 | sleep $wait_time
559 | done
560 | echo "Error: Docker did not stop within the expected time."
| tee -a "$LOG_FILE" 561 | exit 1 562 | } 563 | 564 | # Function to generate a random password 565 | generate_password() { 566 | openssl rand -base64 32 567 | } 568 | 569 | # Function to calculate the MD5 checksum 570 | calculate_md5() { 571 | local file="$1" 572 | md5sum "$file" | awk '{ print $1 }' 573 | } 574 | 575 | # Function to calculate the total size of source directories 576 | calculate_total_source_size() { 577 | local total_size=0 578 | for dir in "${SOURCE_DIRS[@]}"; do 579 | dir_size=$(du -sb "$dir" | awk '{print $1}') 580 | total_size=$((total_size + dir_size)) 581 | done 582 | echo "$total_size" 583 | } 584 | 585 | # Function to calculate the file size in a human-readable format 586 | calculate_file_size_readable() { 587 | local file="$1" 588 | du -sh "$file" | awk '{print $1}' 589 | } 590 | 591 | # Function to verify the backup using pigz for parallel decompression 592 | verify_backup() { 593 | local file="$1" 594 | local password="$2" 595 | TEST_START=$(date +%s) 596 | echo 597 | echo "Begin backup verification for file: $file" | tee -a "$LOG_FILE" 598 | 599 | if [[ "$file" == *.enc ]]; then 600 | # Decrypt and then verify the backup 601 | if openssl enc -d -aes-256-cbc -pbkdf2 -iter 10000 -in "$file" -pass pass:"$password" | pigz -p $MAX_CPU_CORE -dc -9 | bar -s $(stat -c%s "$file") | tar -tf - > /dev/null 2>&1; then 602 | TEST_END=$(date +%s) 603 | TEST_STATUS="Successful" 604 | echo 605 | echo "Verification successful for encrypted file: $file" 606 | else 607 | TEST_END=$(date +%s) 608 | TEST_STATUS="Failed" 609 | echo 610 | echo "Verification failed for encrypted file: $file" >> "$LOG_FILE" 611 | echo "Verification failed for encrypted file: $file" 612 | fi 613 | else 614 | # Verify the backup using pigz for parallel decompression 615 | if pigz -p $MAX_CPU_CORE -dc -9 "$file" | bar -s $(stat -c%s "$file") | tar -tf - > /dev/null 2>&1; then 616 | TEST_END=$(date +%s) 617 | TEST_STATUS="Successful" 618 | echo 619 | echo "Verification successful for non-encrypted file: $file" 620 | else 621 | TEST_END=$(date +%s) 622 | TEST_STATUS="Failed" 623 | echo 624 | echo "Verification failed for non-encrypted file: $file" >> "$LOG_FILE" 625 | echo "Verification failed for non-encrypted file: $file" 626 | fi 627 | fi 628 | 629 | echo 630 | echo "Backup verification completed for file: $file" | tee -a "$LOG_FILE" 631 | 632 | # Calculate the test duration in minutes and seconds 633 | local test_duration=$(( TEST_END - TEST_START )) 634 | local test_minutes=$(( test_duration / 60 )) 635 | local test_seconds=$(( test_duration % 60 )) 636 | TEST_DURATION="${test_minutes} minutes and ${test_seconds} seconds" 637 | 638 | # Convert the timestamps to a readable format 639 | TEST_START_READABLE=$(convert_timestamp_to_date "$TEST_START") 640 | TEST_END_READABLE=$(convert_timestamp_to_date "$TEST_END") 641 | } 642 | 643 | # Function to compress and encrypt the database backup 644 | compress_and_encrypt_backup() { 645 | local file="$1" 646 | local encrypted_file="$2" 647 | local password="$3" 648 | 649 | # Remove the .sql.enc extension from the original file name to get the correct name 650 | local base_file=$(basename "$file" .sql.enc) 651 | local backup_dir=$(dirname "$file") # Get the directory of the file 652 | 653 | # Full path for the compressed file 654 | local compressed_file="${backup_dir}/${base_file}.tar.gz" 655 | 656 | # Compress the backup file in every case 657 | echo 658 | echo "Compressing the backup file $file..." 
659 | tar -P -cf - "$file" | bar -s $(du -sb "$file" | awk '{print $1}') | pigz -p $MAX_CPU_CORE -9 > "$compressed_file" 660 | 661 | if [ $? -ne 0 ]; then 662 | echo "Compression failed for $file" >> "$LOG_FILE" 663 | return 1 664 | else 665 | echo "Compression completed for $file. Compressed file: $compressed_file" >> "$LOG_FILE" 666 | fi 667 | 668 | # Calculate the MD5 before encrypting 669 | local md5_checksum=$(md5sum "$compressed_file" | awk '{ print $1 }') 670 | echo "MD5 checksum: $md5_checksum" 671 | echo 672 | DATABASE_MD5_SUMS+=("$md5_checksum") 673 | 674 | if [ "$ENCRYPT_BACKUP" == "Y" ]; then 675 | echo "Encrypting the compressed file $compressed_file..." 676 | openssl enc -aes-256-cbc -pbkdf2 -iter 10000 -salt -in "$compressed_file" -out "${encrypted_file}.tar.gz.enc" -pass pass:"$password" 677 | 678 | if [ $? -ne 0 ]; then 679 | echo "Encryption failed for $compressed_file" >> "$LOG_FILE" 680 | echo 681 | return 1 682 | else 683 | echo "Encryption completed for $compressed_file" >> "$LOG_FILE" 684 | echo 685 | rm -f "$compressed_file" # Delete the file only after encryption is complete 686 | fi 687 | 688 | else 689 | COMPRESSED_BACKUP_SIZES+=($(stat -c%s "$compressed_file")) 690 | fi 691 | 692 | # Delete the original .sql file after compression and/or encryption 693 | rm -f "$file" 694 | echo "Original file $file deleted." >> "$LOG_FILE" 695 | } 696 | 697 | # Function to encrypt the backup file with AES-256 698 | encrypt_backup() { 699 | local file="$1" 700 | local password="$2" 701 | local encrypted_file="${file}.enc" 702 | 703 | # Encrypt the backup file using AES-256-CBC with PBKDF2 and 10,000 iterations 704 | openssl enc -aes-256-cbc -pbkdf2 -iter 10000 -salt -in "$file" -out "$encrypted_file" -pass pass:"$password" 2>> "$LOG_FILE" 705 | 706 | # Remove the unencrypted file 707 | rm -f "$file" 708 | echo "$encrypted_file" 709 | } 710 | 711 | # Function to convert a UNIX timestamp to a readable date 712 | convert_timestamp_to_date() { 713 | local timestamp=$1 714 | date -d @"$timestamp" +"%a %d %b %Y, %H:%M:%S %Z" 715 | } 716 | 717 | # Function to calculate the backup duration in minutes and seconds 718 | calculate_backup_duration() { 719 | local start_time=$1 720 | local end_time=$2 721 | local duration=$(( end_time - start_time )) 722 | 723 | # If the duration is too short (less than 1 second), set it to 1 second 724 | if [[ $duration -eq 0 ]]; then 725 | duration=1 726 | fi 727 | 728 | # Calculate minutes and seconds 729 | local minutes=$(( duration / 60 )) 730 | local seconds=$(( duration % 60 )) 731 | 732 | # Return the calculated duration 733 | echo "${minutes} minutes and ${seconds} seconds" 734 | } 735 | 736 | # Function to calculate disk speed 737 | calculate_disk_speed() { 738 | local file_path=$1 # The backup file 739 | local start_time=$2 # Backup start timestamp 740 | local end_time=$3 # Backup end timestamp 741 | 742 | # Calculate the backup file size in bytes 743 | local size_bytes=$(stat -c%s "$file_path") 744 | 745 | # Calculate the duration in seconds 746 | local time_seconds=$((end_time - start_time)) 747 | 748 | # Avoid division by zero: if the time is 0, set the duration to 1 second 749 | if [[ $time_seconds -eq 0 ]]; then 750 | time_seconds=1 751 | fi 752 | 753 | # Calculate speed in MB/s (bytes / seconds -> MB/s) 754 | DISK_SPEED=$(echo "scale=2; $size_bytes / $time_seconds / 1024 / 1024" | bc -l) 755 | 756 | # Write the speed to the log for debugging without duplicating "MB/s" 757 | echo "$DISK_SPEED" | tee -a "$LOG_FILE" 758 | } 759 | 760 | # 
Function to perform the database backup
761 | backup_database() {
762 | local db_info="$1"
763 | local db_type=$(echo $db_info | cut -d'|' -f1)
764 | local container_name=$(echo $db_info | cut -d'|' -f2)
765 | local db_name=$(echo $db_info | cut -d'|' -f3)
766 | local db_user=$(echo $db_info | cut -d'|' -f4)
767 | local db_password=$(echo $db_info | cut -d'|' -f5)
768 | local backup_file="$BACKUP_DATABASE_DIR/${db_name}-$(date +%Y%m%d).sql"
769 | local encrypted_file="${backup_file}"
770 | 
771 | # Log the backup start time
772 | local BACKUP_START=$(date +%s)
773 | 
774 | case "$db_type" in
775 | "PostgreSQL"|"TimescaleDB")
776 | echo "##############################################################################################"
777 | echo "# "
778 | echo "# Performing the database backup $db_type... "
779 | echo "# MD5 checksum, Compression and Verification "
780 | echo "# of "$container_name" "
781 | echo "# "
782 | echo "##############################################################################################"
783 | docker exec "$container_name" pg_dump -U "$db_user" "$db_name" > "$backup_file"
784 | ;;
785 | "MySQL"|"MariaDB")
786 | echo "##############################################################################################"
787 | echo "# "
788 | echo "# Performing the database backup $db_type... "
789 | echo "# MD5 checksum, Compression and Verification "
790 | echo "# of "$container_name" "
791 | echo "# "
792 | echo "##############################################################################################"
793 | docker exec "$container_name" mysqldump -u "$db_user" -p"$db_password" "$db_name" > "$backup_file"
794 | ;;
795 | "MongoDB")
796 | echo "##############################################################################################"
797 | echo "# "
798 | echo "# Performing the database backup $db_type... "
799 | echo "# MD5 checksum, Compression and Verification "
800 | echo "# of "$container_name" "
801 | echo "# "
802 | echo "##############################################################################################"
803 | docker exec "$container_name" mongodump --db "$db_name" --out "$BACKUP_DATABASE_DIR/$db_name-$(date +%Y%m%d)"
804 | ;;
805 | "Redis")
806 | echo "##############################################################################################"
807 | echo "# "
808 | echo "# Performing the database backup $db_type... "
809 | echo "# MD5 checksum, Compression and Verification "
810 | echo "# of "$container_name" "
811 | echo "# "
812 | echo "##############################################################################################"
813 | LAST_SAVE=$(docker exec "$container_name" redis-cli LASTSAVE); docker exec "$container_name" redis-cli BGSAVE
814 | # Wait for the Redis backup to complete (LASTSAVE changes once the background save has finished)
815 | while [ "$(docker exec "$container_name" redis-cli LASTSAVE)" == "$LAST_SAVE" ]; do
816 | sleep 1
817 | done
818 | # Copy the dump.rdb file from the Redis container to the backup directory
819 | docker cp "$container_name:/data/dump.rdb" "$BACKUP_DATABASE_DIR/dump-$CURRENT_DATE.rdb"
820 | # Update the backup file name with the correct container name
821 | backup_file="$BACKUP_DATABASE_DIR/dump-$CURRENT_DATE.rdb"
822 | encrypted_file="$BACKUP_DATABASE_DIR/${container_name}_dump-$CURRENT_DATE"
823 | ;;
824 | "Cassandra")
825 | echo "##############################################################################################"
826 | echo "# "
827 | echo "# Performing the database backup $db_type... "
828 | echo "# MD5 checksum, Compression and Verification "
829 | echo "# of "$container_name" "
830 | echo "# "
831 | echo "##############################################################################################"
832 | docker exec "$container_name" nodetool snapshot -t "$(date +%Y%m%d)-snapshot"
833 | ;;
834 | "Elasticsearch")
835 | echo "##############################################################################################"
836 | echo "# "
837 | echo "# Performing the database backup $db_type... "
838 | echo "# MD5 checksum, Compression and Verification "
839 | echo "# of "$container_name" "
840 | echo "# "
841 | echo "##############################################################################################"
842 | docker exec "$container_name" elasticsearch-snapshot --repository backup-repo --snapshot "$(date +%Y%m%d)"
843 | ;;
844 | "SQLite")
845 | echo "##############################################################################################"
846 | echo "# "
847 | echo "# Performing the database backup $db_type... "
848 | echo "# MD5 checksum, Compression and Verification "
849 | echo "# of "$container_name" "
850 | echo "# "
851 | echo "##############################################################################################"
852 | docker cp "$container_name:/path/to/database.sqlite" "$BACKUP_DATABASE_DIR/sqlite-backup-$(date +%Y%m%d).sqlite"
853 | ;;
854 | "Neo4j")
855 | echo "##############################################################################################"
856 | echo "# "
857 | echo "# Performing the database backup $db_type... "
858 | echo "# MD5 checksum, Compression and Verification "
859 | echo "# of "$container_name" "
860 | echo "# "
861 | echo "##############################################################################################"
862 | docker exec "$container_name" neo4j-admin backup --from="$container_name" --backup-dir="$BACKUP_DATABASE_DIR"
863 | ;;
864 | "CockroachDB")
865 | echo "##############################################################################################"
866 | echo "# "
867 | echo "# Performing the database backup $db_type... "
868 | echo "# MD5 checksum, Compression and Verification "
869 | echo "# of "$container_name" "
870 | echo "# "
871 | echo "##############################################################################################"
872 | docker exec "$container_name" cockroach sql --execute="BACKUP TO '$BACKUP_DATABASE_DIR/backup'"
873 | ;;
874 | "InfluxDB")
875 | echo "##############################################################################################"
876 | echo "# "
877 | echo "# Performing the database backup $db_type... "
878 | echo "# MD5 checksum, Compression and Verification "
879 | echo "# of "$container_name" "
880 | echo "# "
881 | echo "##############################################################################################"
882 | docker exec "$container_name" influx backup -portable "$BACKUP_DATABASE_DIR"
883 | ;;
884 | "Oracle")
885 | echo "##############################################################################################"
886 | echo "# "
887 | echo "# Performing the database backup $db_type... "
888 | echo "# MD5 checksum, Compression and Verification "
889 | echo "# of "$container_name" "
890 | echo "# "
891 | echo "##############################################################################################"
892 | docker exec "$container_name" rman target /
963 | DATABASE_BACKUP_DETAILS+=("<tr><td>${db_name}</td><td>${container_name}</td><td>${BACKUP_DATABASE_DIR}</td><td>${original_db_size}</td></tr>")
964 | 
965 | echo "Backup and compression completed for the database $db_name."
966 | echo
967 | }
968 | 
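# A usage sketch for the function above, with a hypothetical DATABASES entry; the script is expected
# to call backup_database once per entry of the DATABASES array:
#
#   backup_database "PostgreSQL|my-postgres|mydb|myuser|mypass"
#   # -> dumps "mydb" to $BACKUP_DATABASE_DIR/mydb-YYYYMMDD.sql, then compresses (and optionally encrypts) it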
docker exec "$container_name" redis-cli LASTSAVE > /dev/null; do 816 | sleep 1 817 | done 818 | # Copy the dump.rdb file from the Redis container to the backup directory 819 | docker cp "$container_name:/data/dump.rdb" "$BACKUP_DATABASE_DIR/dump-$CURRENT_DATE.rdb" 820 | # Update the backup file name with the correct container name 821 | backup_file="$BACKUP_DATABASE_DIR/dump-$CURRENT_DATE.rdb" 822 | encrypted_file="$BACKUP_DATABASE_DIR/${container_name}_dump-$CURRENT_DATE" 823 | ;; 824 | "Cassandra") 825 | echo "##############################################################################################" 826 | echo "# " 827 | echo "# Performing the database backup $db_type..., " 828 | echo "# MD5 checksum, Compressing and Verifivation " 829 | echo "# of "$container_name" " 830 | echo "# " 831 | echo "##############################################################################################" 832 | docker exec "$container_name" nodetool snapshot -t "$(date +%Y%m%d)-snapshot" 833 | ;; 834 | "Elasticsearch") 835 | echo "##############################################################################################" 836 | echo "# " 837 | echo "# Performing the database backup $db_type..., " 838 | echo "# MD5 checksum, Compressing and Verifivation " 839 | echo "# of "$container_name" " 840 | echo "# " 841 | echo "##############################################################################################" 842 | docker exec "$container_name" elasticsearch-snapshot --repository backup-repo --snapshot "$(date +%Y%m%d)" 843 | ;; 844 | "SQLite") 845 | echo "##############################################################################################" 846 | echo "# " 847 | echo "# Performing the database backup $db_type..., " 848 | echo "# MD5 checksum, Compressing and Verifivation " 849 | echo "# of "$container_name" " 850 | echo "# " 851 | echo "##############################################################################################" 852 | docker cp "$container_name:/path/to/database.sqlite" "$BACKUP_DATABASE_DIR/sqlite-backup-$(date +%Y%m%d).sqlite" 853 | ;; 854 | "Neo4j") 855 | echo "##############################################################################################" 856 | echo "# " 857 | echo "# Performing the database backup $db_type..., " 858 | echo "# MD5 checksum, Compressing and Verifivation " 859 | echo "# of "$container_name" " 860 | echo "# " 861 | echo "##############################################################################################" 862 | docker exec "$container_name" neo4j-admin backup --from="$container_name" --backup-dir="$BACKUP_DATABASE_DIR" 863 | ;; 864 | "CockroachDB") 865 | echo "##############################################################################################" 866 | echo "# " 867 | echo "# Performing the database backup $db_type..., " 868 | echo "# MD5 checksum, Compressing and Verifivation " 869 | echo "# of "$container_name" " 870 | echo "# " 871 | echo "##############################################################################################" 872 | docker exec "$container_name" cockroach sql --execute="BACKUP TO '$BACKUP_DATABASE_DIR/backup'" 873 | ;; 874 | "InfluxDB") 875 | echo "##############################################################################################" 876 | echo "# " 877 | echo "# Performing the database backup $db_type..., " 878 | echo "# MD5 checksum, Compressing and Verifivation " 879 | echo "# of "$container_name" " 880 | echo "# " 881 | echo 
"##############################################################################################" 882 | docker exec "$container_name" influx backup -portable "$BACKUP_DATABASE_DIR" 883 | ;; 884 | "Oracle") 885 | echo "##############################################################################################" 886 | echo "# " 887 | echo "# Performing the database backup $db_type..., " 888 | echo "# MD5 checksum, Compressing and Verifivation " 889 | echo "# of "$container_name" " 890 | echo "# " 891 | echo "##############################################################################################" 892 | docker exec "$container_name" rman target / <${db_name}${container_name}${BACKUP_DATABASE_DIR}${original_db_size}") 964 | 965 | echo "Backup and compression completed for the database $db_name." 966 | echo 967 | } 968 | 969 | # Function to perform the database restore 970 | restore_database() { 971 | local db_info="$1" 972 | local db_type=$(echo $db_info | cut -d'|' -f1) 973 | local container_name=$(echo $db_info | cut -d'|' -f2) 974 | local db_name=$(echo $db_info | cut -d'|' -f3) 975 | local db_user=$(echo $db_info | cut -d'|' -f4) 976 | local db_password=$(echo $db_info | cut -d'|' -f5) 977 | local backup_file="$BACKUP_DATABASE_DIR/${db_name}-$(date +%Y%m%d).sql" 978 | 979 | case "$db_type" in 980 | "PostgreSQL"|"TimescaleDB") 981 | echo "Restoring the database $db_type..." 982 | docker exec -i "$container_name" psql -U "$db_user" "$db_name" < "$backup_file" 983 | ;; 984 | "MySQL"|"MariaDB") 985 | echo "Restoring the database MySQL/MariaDB..." 986 | docker exec -i "$container_name" mysql -u "$db_user" -p"$db_password" "$db_name" < "$backup_file" 987 | ;; 988 | "MongoDB") 989 | echo "Restoring the database MongoDB..." 990 | docker exec "$container_name" mongorestore --db "$db_name" "$BACKUP_DATABASE_DIR/$db_name-$(date +%Y%m%d)" 991 | ;; 992 | "Redis") 993 | echo "Restoring the database Redis..." 994 | docker cp "$backup_file" "$container_name:/data/dump.rdb" 995 | docker exec "$container_name" redis-cli shutdown save 996 | docker start "$container_name" 997 | ;; 998 | "Cassandra") 999 | echo "Restoring the database Cassandra..." 1000 | docker exec "$container_name" nodetool refresh -- $db_name $(date +%Y%m%d)-snapshot 1001 | ;; 1002 | "Elasticsearch") 1003 | echo "Restoring the database Elasticsearch..." 1004 | docker exec "$container_name" elasticsearch-snapshot --repository backup-repo --restore --snapshot "$(date +%Y%m%d)" 1005 | ;; 1006 | "SQLite") 1007 | echo "Restoring the database SQLite..." 1008 | docker cp "$backup_file" "$container_name:/path/to/database.sqlite" 1009 | ;; 1010 | "Neo4j") 1011 | echo "Restoring the database Neo4j..." 1012 | docker exec "$container_name" neo4j-admin restore --from="$BACKUP_DATABASE_DIR" 1013 | ;; 1014 | "CockroachDB") 1015 | echo "Restoring the database CockroachDB..." 1016 | docker exec "$container_name" cockroach sql --execute="RESTORE FROM '$BACKUP_DATABASE_DIR/backup'" 1017 | ;; 1018 | "InfluxDB") 1019 | echo "Restoring the database InfluxDB..." 1020 | docker exec "$container_name" influx restore -portable "$BACKUP_DATABASE_DIR" 1021 | ;; 1022 | "Oracle") 1023 | echo "Restoring the database Oracle..." 
1182 | 
1183 | # Cleanup old backups keeping only the most recent ones based on DAYS_TO_KEEP
1184 | cleanup_old_backups() {
1185 | local backup_dir="$1"
1186 | local is_123_backup="$2" # new parameter that identifies whether this is a 1-2-3 backup
1187 | 
1188 | echo "Starting cleanup in directory: $backup_dir"
1189 | 
1190 | if [ "$is_123_backup" == "Y" ]; then
1191 | # For 1-2-3 backups, handle all files in the same directory
1192 | if [ "$ENCRYPT_BACKUP" == "Y" ]; then
1193 | find "$backup_dir" -type f -name "*.tar.gz.enc" | sort -r | awk -v keep="$DAYS_TO_KEEP" '
1194 | NR > keep {
1195 | print "Removing backup: " $0
1196 | system("rm -f " $0)
1197 | }'
1198 | else
1199 | find "$backup_dir" -type f -name "*.tar.gz" | sort -r | awk -v keep="$DAYS_TO_KEEP" '
1200 | NR > keep {
1201 | print "Removing backup: " $0
1202 | system("rm -f " $0)
1203 | }'
1204 | fi
1205 | else
1206 | # For the main backup, handle the directory backups separately
1207 | for DIR in "${SOURCE_DIRS[@]}"; do
1208 | base_dir=$(basename "$DIR")
1209 | echo "Cleaning up old backups for directory: $base_dir"
1210 | if [ "$ENCRYPT_BACKUP" == "Y" ]; then
1211 | find "$backup_dir" -type f -name "*-${base_dir}.tar.gz.enc" | sort -r | awk -v keep="$DAYS_TO_KEEP" '
1212 | NR > keep {
1213 | print "Removing backup: " $0
1214 | system("rm -f " $0)
1215 | }'
1216 | else
1217 | find "$backup_dir" -type f -name "*-${base_dir}.tar.gz" | sort -r | awk -v keep="$DAYS_TO_KEEP" '
1218 | NR > keep {
1219 | print "Removing backup: " $0
1220 | system("rm -f " $0)
1221 | }'
1222 | fi
1223 | done
1224 | 
1225 | # For the main backup, handle the database backups separately
1226 | if [ "$BACKUP_DOCKER_DATABASE" == "Y" ]; then
1227 | for DB in "${DATABASES[@]}"; do
1228 | db_type=$(echo $DB | cut -d'|' -f1)
1229 | container_name=$(echo $DB | cut -d'|' -f2)
1230 | db_name=$(echo $DB | cut -d'|' -f3)
1231 | 
1232 | # Skip if database name is empty (like in Redis case)
1233 | if [ -z "$db_name" ]; then
1234 | db_name="dump"
1235 | fi
1236 | 
1237 | echo "Cleaning up old backups for database: $db_name"
1238 | if [ "$ENCRYPT_BACKUP" == "Y" ]; then
1239 | find "$backup_dir" -type f -name "${db_name}-*.tar.gz.enc" | sort -r | awk -v keep="$DAYS_TO_KEEP" '
1240 | NR > keep {
1241 | print "Removing backup: " $0
1242 | system("rm -f " $0)
1243 | }'
1244 | else
1245 | find "$backup_dir" -type f -name "${db_name}-*.tar.gz" | sort -r | awk -v keep="$DAYS_TO_KEEP" '
1246 | NR > keep {
1247 | print "Removing backup: " $0
1248 | system("rm -f " $0)
1249 | }'
1250 | fi
1251 | done
1252 | fi
1253 | fi
1254 | }
1255 | 
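# A worked example of the retention rule above, assuming DAYS_TO_KEEP=6 and one backup per day:
# after eight days, "find ... | sort -r" lists the eight dated archives newest-first, the awk filter
# keeps rows 1-6, and the two oldest archives are removed.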
1256 | # Clean up main backup directories (sources and databases separately)
1257 | cleanup_old_backups "$BACKUP_DIR" "N"
1258 | cleanup_old_backups "$BACKUP_DATABASE_DIR" "N"
1259 | 
1260 | # If 1-2-3 backup is enabled, clean up those locations
1261 | if [ "$BACKUP_123" == "Y" ]; then
1262 | 
cleanup_old_backups "$SATA_DISK1" "Y" 1263 | cleanup_old_backups "$SATA_DISK2" "Y" 1264 | fi 1265 | 1266 | # Restart Docker if necessary 1267 | if [ "$STOP_DOCKER_BEFORE_BACKUP" == "Y" ]; then 1268 | start_docker 1269 | fi 1270 | 1271 | # Function to count existing backups 1272 | count_existing_backups() { 1273 | local dir="$1" 1274 | local pattern="$2" 1275 | find "$dir" -type f -name "$pattern" | wc -l 1276 | } 1277 | 1278 | calculate_available_space() { 1279 | local dir="$1" 1280 | df -h "$dir" | awk 'NR==2 {print $4}' 1281 | } 1282 | 1283 | # Function to calculate available space in bytes 1284 | calculate_available_space_bytes() { 1285 | local dir="$1" 1286 | df --output=avail -B1 "$dir" | awk 'NR==2 {print $1}' 1287 | } 1288 | 1289 | # Function to copy backups to the 1,2,3 principle destinations 1290 | copy_backups_to_destinations() { 1291 | local backup_files=("$@") 1292 | local total_size=0 1293 | local copy_start_time=$(date +%s) 1294 | 1295 | # Calculates the total size of all backup files. 1296 | for file in "${backup_files[@]}"; do 1297 | total_size=$((total_size + $(stat -c%s "$file"))) 1298 | done 1299 | 1300 | 1301 | 1302 | 1303 | # Copies all files to SATA_DISK1 using bar 1304 | available_space_before_1=$(calculate_available_space "$SATA_DISK1") 1305 | available_space_before_bytes_1=$(calculate_available_space_bytes "$SATA_DISK1") 1306 | local copy_start_time_1=$(date +%s) 1307 | 1308 | echo 1309 | echo "##############################################################################################" 1310 | echo "# " 1311 | echo "# " 1312 | echo "# Copying backups to $SATA_DISK1... " 1313 | echo "# " 1314 | echo "# " 1315 | echo "##############################################################################################" 1316 | echo "Copying backups to $SATA_DISK1..." 1317 | for file in "${backup_files[@]}"; do 1318 | bar -s $(stat -c%s "$file") < "$file" > "$SATA_DISK1/$(basename "$file")" 1319 | done 1320 | local copy_end_time_1=$(date +%s) 1321 | available_space_after_1=$(calculate_available_space "$SATA_DISK1") 1322 | available_space_after_bytes_1=$(calculate_available_space_bytes "$SATA_DISK1") 1323 | TOTAL_COPY_TIME_1=$(calculate_total_copy_time "$copy_start_time_1" "$copy_end_time_1") 1324 | COPY_SPEED_1=$(calculate_copy_speed "$total_size" "$((copy_end_time_1 - copy_start_time_1))") 1325 | TOTAL_SIZE_1=$(human_readable_size "$total_size") 1326 | 1327 | # Copies all files to SATA_DISK2 using bar 1328 | available_space_before_2=$(calculate_available_space "$SATA_DISK2") 1329 | available_space_before_bytes_2=$(calculate_available_space_bytes "$SATA_DISK2") 1330 | local copy_start_time_2=$(date +%s) 1331 | 1332 | echo 1333 | echo "##############################################################################################" 1334 | echo "# " 1335 | echo "# " 1336 | echo "# Copying backups to $SATA_DISK2... 
" 1337 | echo "# " 1338 | echo "# " 1339 | echo "##############################################################################################" 1340 | for file in "${backup_files[@]}"; do 1341 | bar -s $(stat -c%s "$file") < "$file" > "$SATA_DISK2/$(basename "$file")" 1342 | done 1343 | local copy_end_time_2=$(date +%s) 1344 | available_space_after_2=$(calculate_available_space "$SATA_DISK2") 1345 | available_space_after_bytes_2=$(calculate_available_space_bytes "$SATA_DISK2") 1346 | TOTAL_COPY_TIME_2=$(calculate_total_copy_time "$copy_start_time_2" "$copy_end_time_2") 1347 | COPY_SPEED_2=$(calculate_copy_speed "$total_size" "$((copy_end_time_2 - copy_start_time_2))") 1348 | TOTAL_SIZE_2=$(human_readable_size "$total_size") 1349 | 1350 | 1351 | # Copy backups using SCP 1352 | if [ "$BACKUP_123" == "Y" ] && [ "$SCP_ENABLED" == "Y" ]; then 1353 | echo 1354 | echo "##############################################################################################" 1355 | echo "# " 1356 | echo "# " 1357 | echo "# Copying backups via SCP... " 1358 | echo "# " 1359 | echo "# " 1360 | echo "##############################################################################################" 1361 | check_and_create_scp_directory # Check and create SCP directory 1362 | for file in "${backup_files[@]}"; do 1363 | sshpass -p "$SCP_USER_PASSWORD" scp "$file" "$SCP_USER@$SCP_HOST:$SCP_DEST_DIR" 1364 | done 1365 | 1366 | fi 1367 | 1368 | # Copy backups using SMB 1369 | if [ "$BACKUP_123" == "Y" ] && [ "$SMB_ENABLED" == "Y" ]; then 1370 | echo 1371 | echo "##############################################################################################" 1372 | echo "# " 1373 | echo "# " 1374 | echo "# Copying backups to SMB share... " 1375 | echo "# " 1376 | echo "# " 1377 | echo "##############################################################################################" 1378 | mount_smb_share 1379 | check_and_create_smb_directory # Check and create SMB directory 1380 | for file in "${backup_files[@]}"; do 1381 | #cp "$file" "$SMB_MOUNT_POINT" 1382 | cp "$file" "$FULL_REMOTE_PATH" 1383 | if [ $? 
-ne 0 ]; then 1384 | echo "Error: Failed to copy $file to SMB share at $SMB_MOUNT_POINT" >&2 1385 | unmount_smb_share 1386 | exit 1 1387 | fi 1388 | done 1389 | #unmount_smb_share 1390 | fi 1391 | } 1392 | 1393 | # Function to send a combined email for both directory and database backups 1394 | send_email() { 1395 | # Host statistics function 1396 | gather_host_statistics 1397 | # Calculate total number of directories and files backed up 1398 | TOTAL_DIRECTORIES=${#SOURCE_DIRS[@]} 1399 | TOTAL_FILES=$(find "${SOURCE_DIRS[@]}" -type f | wc -l) 1400 | 1401 | # Calculate total backup size 1402 | TOTAL_BACKUP_SIZE=$(du -sh "$BACKUP_DIR" | awk '{print $1}') 1403 | 1404 | # Calculate total and average backup time 1405 | TOTAL_BACKUP_TIME=0 1406 | for duration in "${BACKUP_DURATIONS[@]}"; do 1407 | minutes=$(echo $duration | awk '{print $1}') 1408 | seconds=$(echo $duration | awk '{print $4}') 1409 | TOTAL_BACKUP_TIME=$((TOTAL_BACKUP_TIME + minutes * 60 + seconds)) 1410 | done 1411 | 1412 | if [ "$TOTAL_DIRECTORIES" -ne 0 ]; then 1413 | AVERAGE_BACKUP_TIME=$((TOTAL_BACKUP_TIME / TOTAL_DIRECTORIES)) 1414 | AVERAGE_BACKUP_MINUTES=$((AVERAGE_BACKUP_TIME / 60)) 1415 | AVERAGE_BACKUP_SECONDS=$((AVERAGE_BACKUP_TIME % 60)) 1416 | else 1417 | AVERAGE_BACKUP_TIME=0 1418 | AVERAGE_BACKUP_MINUTES=0 1419 | AVERAGE_BACKUP_SECONDS=0 1420 | fi 1421 | 1422 | # Check if backups are encrypted 1423 | if [ "$ENCRYPT_BACKUP" == "Y" ]; then 1424 | ENCRYPTION_STATUS="Yes" 1425 | else 1426 | ENCRYPTION_STATUS="No" 1427 | fi 1428 | 1429 | # Calculate total number of databases backed up 1430 | TOTAL_DATABASES=${#DATABASES[@]} 1431 | 1432 | # Calculate total database backup size 1433 | TOTAL_DATABASE_BACKUP_SIZE=$(du -sh "$BACKUP_DATABASE_DIR" | awk '{print $1}') 1434 | 1435 | # Count existing backups for directories and databases 1436 | TOTAL_DIRECTORY_BACKUPS=$(count_existing_backups "$BACKUP_DIR" "*.tar.gz*") 1437 | TOTAL_DATABASE_BACKUPS=$(count_existing_backups "$BACKUP_DATABASE_DIR" "*.tar.gz*") 1438 | 1439 | # Styles for email templates 1440 | # 1441 | # word-wrap: break-word; white-space: normal; overflow: hidden; 1442 | # 1443 | # 1444 | # Dark Theme 1445 | STYLE_1=" body { font-family: 'Courier New', Courier, monospace; background-color: #1b1b1b; color: #e0e0e0; } 1446 | .container { max-width: 1200px; margin: 40px auto; background-color: #2b2b2b; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #e0e0e0; } 1447 | h1 { font-size: 28px; color: #77c0e3; margin-bottom: 20px; } h2 { color: #e0e0e0; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } 1448 | li { padding: 8px; background-color: #444444; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } pre { background-color: #77c0e3; color: #1b1b1b; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } 1449 | table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #444444; text-align: left; } th { background-color: #77c0e3; color: #1b1b1b; } td { background-color: #444444; color: #e0e0e0; } 1450 | footer { margin-top: 20px; font-size: 12px; color: #b0b0b0; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden;} .stats-table { border-radius: 8px ; overflow: hidden; border-radius: 8px } 1451 | .coffee-link { color: #77c0e3; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo 
{ width: 20px; vertical-align: middle; margin-right: 5px; } .separator { border-top: 3px solid #77c0e3; margin: 20px 0; border-radius: 5px; } 1452 | .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #121212; font-family: Arial, sans-serif; color: #aab0c6; } .stats_backup { width: 300px auto; height: 250px ; padding: 20px; background-color: #2b2b2b; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #e0e0e0; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1453 | .stats_info_backup { width: 700px auto; height: 250px ; padding: 20px; background-color: #2b2b2b; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #e0e0e0; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1454 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #7f85ff; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1455 | 1456 | # Crimson Night 1457 | STYLE_2="body { font-family: 'Courier New', Courier, monospace; background-color: #2b0a14; color: #f4c7c3; } 1458 | .container { max-width: 1200px; margin: 40px auto; background-color: #400b1d; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #f4c7c3; } 1459 | h1 { font-size: 28px; color: #ff557f; margin-bottom: 20px; } h2 { color: #f4c7c3; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } li { padding: 8px; background-color: #660f2c; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } 1460 | pre { background-color: #ff557f; color: #2b0a14; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #660f2c; text-align: left; } 1461 | th { background-color: #ff557f; color: #2b0a14; } td { background-color: #660f2c; color: #f4c7c3; } footer { margin-top: 20px; font-size: 12px; color: #b0b0b0; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden; } 1462 | .stats-table { border-radius: 8px; overflow: hidden; } .coffee-link { color: #ff557f; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } 1463 | .separator { border-top: 3px solid #ff557f; margin: 20px 0; border-radius: 5px; } .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #400b1d; font-family: Arial, sans-serif; color: #f4c7c3; } 1464 | .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #400b1d; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #f4c7c3; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1465 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #400b1d; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #f4c7c3; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1466 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #ff557f; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1467 | 1468 
| # Cyberpunk 1469 | STYLE_3="body { font-family: 'Courier New', Courier, monospace; background-color: #1b1b1b; color: #ff0077; } .container { max-width: 1200px; margin: 40px auto; background-color: #2b2b2b; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #ff0077; } 1470 | h1 { font-size: 28px; color: #ffcc00; margin-bottom: 20px; } h2 { color: #ff0077; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } 1471 | li { padding: 8px; background-color: #333333; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } pre { background-color: #ffcc00; color: #1b1b1b; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } 1472 | table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #333333; text-align: left; } th { background-color: #ffcc00; color: #1b1b1b; } td { background-color: #333333; color: #ff0077; } 1473 | footer { margin-top: 20px; font-size: 12px; color: #ff99cc; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden;} .stats-table { border-radius: 8px; overflow: hidden; } 1474 | .coffee-link { color: #ffcc00; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } .separator { border-top: 3px solid #ffcc00; margin: 20px 0; border-radius: 5px; } 1475 | .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #121212; font-family: Arial, sans-serif; color: #ff0077; } .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #2b2b2b; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #ff0077; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1476 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #2b2b2b; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #ff0077; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1477 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #ffcc00; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1478 | 1479 | # Steel Gray 1480 | STYLE_4="body { font-family: 'Courier New', Courier, monospace; background-color: #1a1a1a; color: #e0e0e0; } .container { max-width: 1200px; margin: 40px auto; background-color: #333333; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #e0e0e0; } 1481 | h1 { font-size: 28px; color: #a0a0a0; margin-bottom: 20px; } h2 { color: #e0e0e0; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } li { padding: 8px; background-color: #444444; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } 1482 | pre { background-color: #a0a0a0; color: #1a1a1a; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #444444; text-align: left; } 1483 | th { background-color: #a0a0a0; color: #1a1a1a; } td { background-color: #444444; color: #e0e0e0; } footer { margin-top: 20px; font-size: 12px; color: #b0b0b0; text-align: center; } table { width: 
100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden; } 1484 | .stats-table { border-radius: 8px; overflow: hidden; } .coffee-link { color: #a0a0a0; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } 1485 | .separator { border-top: 3px solid #a0a0a0; margin: 20px 0; border-radius: 5px; } .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #333333; font-family: Arial, sans-serif; color: #e0e0e0; } 1486 | .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #333333; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #e0e0e0; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1487 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #333333; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #e0e0e0; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1488 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #a0a0a0; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1489 | 1490 | # Emerald Glow 1491 | STYLE_5="body { font-family: 'Courier New', Courier, monospace; background-color: #0a2e1f; color: #c4f4e1; } .container { max-width: 1200px; margin: 40px auto; background-color: #144f3b; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #c4f4e1; } 1492 | h1 { font-size: 28px; color: #3ce896; margin-bottom: 20px; } h2 { color: #c4f4e1; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } li { padding: 8px; background-color: #236f55; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } 1493 | pre { background-color: #3ce896; color: #0a2e1f; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #236f55; text-align: left; } 1494 | th { background-color: #3ce896; color: #0a2e1f; } td { background-color: #236f55; color: #c4f4e1; } footer { margin-top: 20px; font-size: 12px; color: #b0b0b0; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden; } 1495 | .stats-table { border-radius: 8px; overflow: hidden; } .coffee-link { color: #3ce896; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } 1496 | .separator { border-top: 3px solid #3ce896; margin: 20px 0; border-radius: 5px; } .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #144f3b; font-family: Arial, sans-serif; color: #c4f4e1; } 1497 | .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #144f3b; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #c4f4e1; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1498 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #144f3b; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #c4f4e1; margin: auto; word-wrap: break-word; 
white-space: normal; overflow: hidden;} 1499 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #3ce896; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1500 | 1501 | #Home Lab Tech 1502 | STYLE_6="body { font-family: 'Courier New', Courier, monospace; background-color: #1d1f21; color: #c5c8c6; } .container { max-width: 1200px; margin: 40px auto; background-color: #282a2e; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #c5c8c6; } 1503 | h1 { font-size: 28px; color: #81a2be; margin-bottom: 20px; } h2 { color: #c5c8c6; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } li { padding: 8px; background-color: #373b41; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } 1504 | pre { background-color: #81a2be; color: #1d1f21; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #373b41; text-align: left; } 1505 | th { background-color: #81a2be; color: #1d1f21; } td { background-color: #373b41; color: #c5c8c6; } footer { margin-top: 20px; font-size: 12px; color: #707880; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden;} 1506 | .stats-table { border-radius: 8px; overflow: hidden; } .coffee-link { color: #81a2be; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } 1507 | .separator { border-top: 3px solid #81a2be; margin: 20px 0; border-radius: 5px; } .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #121212; font-family: Arial, sans-serif; color: #c5c8c6; } 1508 | .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #282a2e; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #c5c8c6; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1509 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #282a2e; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #c5c8c6; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1510 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #81a2be; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1511 | 1512 | # Light 1513 | STYLE_7="body { font-family: 'Courier New', Courier, monospace; background-color: #f9f9f9; color: #333333; } .container { max-width: 1200px; margin: 40px auto; background-color: #ffffff; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.1); color: #333333; } 1514 | h1 { font-size: 28px; color: #007acc; margin-bottom: 20px; } h2 { color: #333333; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } li { padding: 8px; background-color: #dddddd; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.1); } 1515 | pre { background-color: #007acc; color: #ffffff; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, 
td { padding: 12px; border: 1px solid #dddddd; text-align: left; } 1516 | th { background-color: #007acc; color: #ffffff; } td { background-color: #f0f0f0; color: #333333; } footer { margin-top: 20px; font-size: 12px; color: #777777; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden;} 1517 | .stats-table { border-radius: 8px; overflow: hidden; } .coffee-link { color: #007acc; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } 1518 | .separator { border-top: 3px solid #007acc; margin: 20px 0; border-radius: 5px; } .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #e0e0e0; font-family: Arial, sans-serif; color: #333333; } 1519 | .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #ffffff; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.1); text-align: center; color: #333333; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1520 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #ffffff; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.1); text-align: left; color: #333333; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1521 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #007acc; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1522 | 1523 | # Midnight Blue 1524 | STYLE_8="body { font-family: 'Courier New', Courier, monospace; background-color: #0a1f44; color: #d6e3f8; } .container { max-width: 1200px; margin: 40px auto; background-color: #122d5e; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #d6e3f8; } 1525 | h1 { font-size: 28px; color: #55b3f3; margin-bottom: 20px; } h2 { color: #d6e3f8; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } li { padding: 8px; background-color: #1e3d6e; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } 1526 | pre { background-color: #55b3f3; color: #0a1f44; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #1e3d6e; text-align: left; } 1527 | th { background-color: #55b3f3; color: #0a1f44; } td { background-color: #1e3d6e; color: #d6e3f8; } footer { margin-top: 20px; font-size: 12px; color: #b0b0b0; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden; } 1528 | .stats-table { border-radius: 8px; overflow: hidden; } .coffee-link { color: #55b3f3; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } 1529 | .separator { border-top: 3px solid #55b3f3; margin: 20px 0; border-radius: 5px; } .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #122d5e; font-family: Arial, sans-serif; color: #d6e3f8; } 1530 | .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #122d5e; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #d6e3f8; margin: auto; word-wrap: break-word; 
white-space: normal; overflow: hidden;} 1531 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #122d5e; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #d6e3f8; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1532 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #55b3f3; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1533 | 1534 | # Purple Dusk 1535 | STYLE_9="body { font-family: 'Courier New', Courier, monospace; background-color: #21024b; color: #e1d0f4; } .container { max-width: 1200px; margin: 40px auto; background-color: #3b0a6b; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #e1d0f4; } 1536 | h1 { font-size: 28px; color: #bb77ff; margin-bottom: 20px; } h2 { color: #e1d0f4; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } li { padding: 8px; background-color: #5e198e; margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } 1537 | pre { background-color: #bb77ff; color: #21024b; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #5e198e; text-align: left; } 1538 | th { background-color: #bb77ff; color: #21024b; } td { background-color: #5e198e; color: #e1d0f4; } footer { margin-top: 20px; font-size: 12px; color: #b0b0b0; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden; } 1539 | .stats-table { border-radius: 8px; overflow: hidden; } .coffee-link { color: #bb77ff; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } 1540 | .separator { border-top: 3px solid #bb77ff; margin: 20px 0; border-radius: 5px; } .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #3b0a6b; font-family: Arial, sans-serif; color: #e1d0f4; } 1541 | .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #3b0a6b; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #e1d0f4; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1542 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #3b0a6b; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #e1d0f4; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1543 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #bb77ff; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1544 | 1545 | # Retro Neon 1546 | STYLE_10="body { font-family: 'Courier New', Courier, monospace; background-color: #1a1a2e; color: #f5f5f5; } .container { max-width: 1200px; margin: 40px auto; background-color: #16213e; padding: 20px; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); color: #f5f5f5; } 1547 | h1 { font-size: 28px; color: #ff007f; margin-bottom: 20px; } h2 { color: #f5f5f5; margin-bottom: 15px; font-size: 22px; } ul { list-style-type: none; padding: 0; } li { padding: 8px; background-color: #0f3460; 
margin-bottom: 8px; border-radius: 4px; box-shadow: 0px 0px 5px rgba(0, 0, 0, 0.5); } 1548 | pre { background-color: #ff007f; color: #f5f5f5; padding: 10px; border-radius: 5px; white-space: pre-wrap; word-wrap: break-word; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; } th, td { padding: 12px; border: 1px solid #0f3460; text-align: left; } 1549 | th { background-color: #ff007f; color: #f5f5f5; } td { background-color: #0f3460; color: #f5f5f5; } footer { margin-top: 20px; font-size: 12px; color: #888888; text-align: center; } table { width: 100%; border-collapse: collapse; margin-bottom: 20px; border-radius: 8px; overflow: hidden;} 1550 | .stats-table { border-radius: 8px; overflow: hidden; } .coffee-link { color: #ff007f; text-decoration: none; } .coffee-link:hover { text-decoration: underline; } .coffee-logo { width: 20px; vertical-align: middle; margin-right: 5px; } 1551 | .separator { border-top: 3px solid #ff007f; margin: 20px 0; border-radius: 5px; } .centered-table th, .centered-table td { text-align: center; } body3 { background-color: #16213e; font-family: Arial, sans-serif; color: #f5f5f5; } 1552 | .stats_backup { width: 300px auto; height: 210px; padding: 20px; background-color: #16213e; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: center; color: #f5f5f5; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1553 | .stats_info_backup { width: 700px auto; height: 210px; padding: 20px; background-color: #16213e; border-radius: 8px; box-shadow: 0px 0px 15px rgba(0, 0, 0, 0.8); text-align: left; color: #f5f5f5; margin: auto; word-wrap: break-word; white-space: normal; overflow: hidden;} 1554 | .backup { font-size: 50px; margin-bottom: 15px; } .enc { font-size: 20px; font-weight: bold; } .size { font-size: 40px; color: #ff007f; font-family: 'Courier New', Courier, monospace; } .status { font-size: 14px; }" 1555 | 1556 | # Select the style based on MAIL_TEMPLATE 1557 | case "$MAIL_TEMPLATE" in 1558 | 1) SELECTED_STYLE="$STYLE_1" ;; 1559 | 2) SELECTED_STYLE="$STYLE_2" ;; 1560 | 3) SELECTED_STYLE="$STYLE_3" ;; 1561 | 4) SELECTED_STYLE="$STYLE_4" ;; 1562 | 5) SELECTED_STYLE="$STYLE_5" ;; 1563 | 6) SELECTED_STYLE="$STYLE_6" ;; 1564 | 7) SELECTED_STYLE="$STYLE_7" ;; 1565 | 8) SELECTED_STYLE="$STYLE_8" ;; 1566 | 9) SELECTED_STYLE="$STYLE_9" ;; 1567 | 10) SELECTED_STYLE="$STYLE_10" ;; 1568 | random|*) 1569 | RANDOM_INDEX=$((RANDOM % 10 + 1)) 1570 | SELECTED_STYLE=$(eval echo "\$STYLE_$RANDOM_INDEX") 1571 | ;; 1572 | esac 1573 | 1574 | # Calculate total and average database backup time 1575 | TOTAL_DATABASE_BACKUP_TIME=0 1576 | for duration in "${DATABASE_BACKUP_DURATIONS[@]}"; do 1577 | minutes=$(echo $duration | awk '{print $1}') 1578 | seconds=$(echo $duration | awk '{print $4}') 1579 | TOTAL_DATABASE_BACKUP_TIME=$((TOTAL_DATABASE_BACKUP_TIME + minutes * 60 + seconds)) 1580 | done 1581 | AVERAGE_DATABASE_BACKUP_TIME=$((TOTAL_DATABASE_BACKUP_TIME / TOTAL_DATABASES)) 1582 | AVERAGE_DATABASE_BACKUP_MINUTES=$((AVERAGE_DATABASE_BACKUP_TIME / 60)) 1583 | AVERAGE_DATABASE_BACKUP_SECONDS=$((AVERAGE_DATABASE_BACKUP_TIME % 60)) 1584 | 1585 | SUBJECT="Backup Report - $SERVER_NAME - $(date +'%d %b %Y, %H:%M:%S')" 1586 | 1587 | MESSAGE=" 1588 | 1589 | 1590 | 1591 | 1592 | 1593 | Backup Report 1594 | 1597 | 1598 | 1599 |
1600 |

$BACKUP_NAME

1601 |

1602 |

Backup successfully completed

1603 |

Server: $SERVER_NAME - Used $MAX_CPU_CORE cores for compression and decompression

1604 |

Backup performed on $(date +'%d %b %Y, %H:%M:%S')

1605 | 1606 |

Exported directories:

1607 | 1608 | 1609 | 1610 | 1611 | 1612 | 1613 | 1614 | 1615 | 1616 | 1617 | 1618 | 1619 | 1620 | 1621 | $SOURCE_DIR_LIST 1622 | 1623 |
<th>Backup Name</th><th>Source Directory</th><th>Destination Directory</th><th>Backup Time</th><th>Start</th><th>Stop</th><th>Speed</th>
" 1624 | 1625 | # Condition to display the ‘Directories Excluded from Backup’ section only if EXCLUDE_DIRS is not empty 1626 | if [ ${#EXCLUDE_DIRS[@]} -ne 0 ]; then 1627 | MESSAGE+=" 1628 |

Directories Excluded from Backup

1629 |
1630 |
    1631 | $EXCLUDE_DIR_LIST 1632 |
1633 |
" 1634 | fi 1635 | 1636 | MESSAGE+=" 1637 |

Detailed Report DIRECTORY Backup:

1638 |
    " 1639 | 1640 | 1641 | 1642 | ########################################################################## 1643 | # 1644 | # START OF DIRECTORY EMAIL SECTION 1645 | # 1646 | ########################################################################## 1647 | 1648 | 1649 | # Add the details of the directory backups 1650 | for i in "${!BACKUP_FILES[@]}"; do 1651 | backup_file="${BACKUP_FILES[$i]}" 1652 | password="${BACKUP_PASSWORDS[$i]}" 1653 | md5_checksum="${MD5_SUMS[$i]}" 1654 | backup_duration="${BACKUP_DURATIONS[$i]}" 1655 | backup_start="${BACKUP_STARTS[$i]}" 1656 | backup_end="${BACKUP_ENDS[$i]}" 1657 | disk_speed="${DISK_SPEEDS[$i]}" 1658 | test_status="${TEST_STATUSES[$i]}" 1659 | dir_name=$(basename "${SOURCE_DIRS[$i]}") # Get the directory name 1660 | backup_size="${BACKUP_SIZES[$i]}" # Get the backup size 1661 | num_files="${NUM_FILES_ARRAY[$i]}" # Get the number of files for this backup 1662 | original_size="${ORIGINAL_SIZES[$i]}" # Get the original size for this backup 1663 | MESSAGE+=" 1664 | 1665 | 1666 | 1685 | 1693 | 1694 | 1695 | 1707 | 1708 |
    1667 | 1668 |
    1669 | 1670 | Directory: ${dir_name}
    1671 | Backup Name: $(basename $backup_file)
    1672 | Destination: ${BACKUP_DIR}
    1673 | Nr. Files: $num_files
    1674 | Original File Size: $original_size
    1675 | File Size Compressed: $backup_size
    1676 | MD5 Checksum: $md5_checksum
    1677 | Verification Status: $test_status
    1678 | Backup Duration: $backup_duration
    1679 | Start: $backup_start
    1680 | Stop: $backup_end
    1681 | Disk Speed: $disk_speed MB/s
    1682 | 1683 |
    1684 |
    1686 |
    1687 |
    ${dir_name}
    1688 |
    $backup_size
    1689 |
    Encrypted: $ENCRYPTION_STATUS
    1690 |
    $test_status
    1691 |
    1692 |
    " 1696 | 1697 | # Command to extract the encrypted and compressed file 1698 | if [ "$ENCRYPT_BACKUP" == "Y" ]; then 1699 | MESSAGE+="Password: $password
    Command to extract the encrypted directory:
    1700 |
    sudo openssl enc -d -aes-256-cbc -pbkdf2 -iter 10000 -in ${backup_file} -pass pass:$password | pigz -p $MAX_CPU_CORE -d | sudo tar --strip-components=1 -xvf - -C $BACKUP_DIR
    " 1701 | else 1702 | MESSAGE+="Password: No encryption
    Command to extract the non-encrypted directory:
    1703 |
    sudo tar --strip-components=1 -xvf ${backup_file} -C $BACKUP_DIR
    " 1704 | fi 1705 | 1706 | MESSAGE+="
    " 1709 | done 1710 | 1711 | MESSAGE+="
" 1712 | 1713 | # Conditionally add the database backup section 1714 | if [ "$BACKUP_DOCKER_DATABASE" == "Y" ]; then 1715 | MESSAGE+=" 1716 |

Exported databases:

1717 | 1718 | 1719 | 1720 | 1721 | 1722 | 1723 | 1724 | 1725 | 1726 | 1727 | 1728 | ${DATABASE_BACKUP_DETAILS[*]} 1729 | 1730 |
<th>DB Name</th><th>Container DB Name</th><th>Destination DB Directory Backup</th><th>Size</th>
1731 |

Detailed Report DATABASE Backup:

1732 |
    " 1733 | 1734 | # Add the details of the database backups 1735 | for i in "${!DATABASE_BACKUP_FILES[@]}"; do 1736 | local backup_file="${DATABASE_BACKUP_FILES[$i]}" 1737 | local password="${DATABASE_PASSWORDS[$i]}" 1738 | local md5_checksum="${DATABASE_MD5_SUMS[$i]}" 1739 | local backup_duration="${DATABASE_BACKUP_DURATIONS[$i]}" 1740 | local backup_start="${DATABASE_BACKUP_STARTS[$i]}" 1741 | local backup_end="${DATABASE_BACKUP_ENDS[$i]}" 1742 | local test_status="${DATABASE_TEST_STATUSES[$i]}" 1743 | local disk_speed=$(calculate_disk_speed "$backup_file" "$BACKUP_START" "$BACKUP_END") 1744 | local db_name=$(echo "${DATABASES[$i]}" | cut -d'|' -f3) # Get the database name 1745 | local db_type=$(echo "${DATABASES[$i]}" | cut -d'|' -f1) # Get the database type 1746 | local container_name=$(echo "${DATABASES[$i]}" | cut -d'|' -f2) # Get the container name 1747 | local db_user=$(echo "${DATABASES[$i]}" | cut -d'|' -f4) # Get the database user 1748 | local backup_size=$(human_readable_size "$(stat -c%s "$backup_file")") # Get the backup size 1749 | local original_db_size="${DATABASE_ORIGINAL_SIZES[$i]}" # Get the original database size 1750 | 1751 | # Determine the restore command based on the database type 1752 | local restore_command="" 1753 | case "$db_type" in 1754 | "PostgreSQL"|"TimescaleDB") 1755 | restore_command="docker exec -i $container_name psql -U $db_user $db_name < $BACKUP_DATABASE_DIR/${db_name}-$(date +%Y%m%d).sql" 1756 | ;; 1757 | "MySQL"|"MariaDB") 1758 | restore_command="docker exec -i $container_name mysql -u $db_user -p$db_password $db_name < $BACKUP_DATABASE_DIR/${db_name}-$(date +%Y%m%d).sql" 1759 | ;; 1760 | "MongoDB") 1761 | restore_command="docker exec $container_name mongorestore --db $db_name $BACKUP_DATABASE_DIR/$db_name-$(date +%Y%m%d)" 1762 | ;; 1763 | "Redis") 1764 | restore_command="docker cp $backup_file $container_name:/data/dump.rdb && docker exec $container_name redis-cli shutdown save && docker start $container_name" 1765 | ;; 1766 | "Cassandra") 1767 | restore_command="docker exec $container_name nodetool refresh -- $db_name $(date +%Y%m%d)-snapshot" 1768 | ;; 1769 | "Elasticsearch") 1770 | restore_command="docker exec $container_name elasticsearch-snapshot --repository backup-repo --restore --snapshot $(date +%Y%m%d)" 1771 | ;; 1772 | "SQLite") 1773 | restore_command="docker cp $backup_file $container_name:/path/to/database.sqlite" 1774 | ;; 1775 | "Neo4j") 1776 | restore_command="docker exec $container_name neo4j-admin restore --from=$BACKUP_DATABASE_DIR" 1777 | ;; 1778 | "CockroachDB") 1779 | restore_command="docker exec $container_name cockroach sql --execute=\"RESTORE FROM '$BACKUP_DATABASE_DIR/backup'\"" 1780 | ;; 1781 | "InfluxDB") 1782 | restore_command="docker exec $container_name influx restore -portable $BACKUP_DATABASE_DIR" 1783 | ;; 1784 | "Oracle") 1785 | restore_command="docker exec $container_name rman target / < 1813 | Database: ${db_name}
    1814 | Backup file location: ${BACKUP_DATABASE_DIR}
    1815 | Original DB File Size: ${original_db_size}
    1816 | File Size: $backup_size
    1817 | MD5 Checksum: $md5_checksum
    1818 | Verification Status: $test_status
    1819 | Backup Duration: $backup_duration
    1820 | Start: $backup_start
    1821 | Stop: $backup_end
    1822 | Disk Speed: $disk_speed
    1823 |
1824 | 1825 | 1826 | 1827 |
1828 |
${db_name}
1829 |
$backup_size
1830 | 1831 |

Encrypted: $ENCRYPTION_STATUS
1832 | 1833 |
$test_status
1834 |
1835 | 1836 | 1837 | 1838 | " 1839 | 1840 | 1841 | 1842 | if [ "$password" != "No encryption" ]; then 1843 | MESSAGE+="Password: $password
Command to extract the encrypted database:
1844 |
sudo openssl enc -d -aes-256-cbc -pbkdf2 -iter 10000 -in ${backup_file} -pass pass:$password | pigz -d | sudo tar -xvf - -C $BACKUP_DATABASE_DIR
" 1845 | MESSAGE+="Command to restore the database:
1846 | Before running this command, double-check against the corresponding project's documentation that it is the correct restore procedure for your setup.
1847 |
${restore_command}
" 1848 | else 1849 | MESSAGE+="Command to extract the non-encrypted database:
1850 |
sudo tar -xvf ${backup_file} -C $BACKUP_DATABASE_DIR
" 1851 | MESSAGE+="Command to restore the database:
1852 | Before running this command, double-check against the corresponding project's documentation that it is the correct restore procedure for your setup.
1853 |
${restore_command}
" 1854 | fi 1855 | 1856 | MESSAGE+=" 1857 | 1858 | " 1859 | done 1860 | fi 1861 | 1862 | ########################################################################## 1863 | # 1864 | # FINAL SECTION 1865 | # 1866 | ########################################################################## 1867 | 1868 | MESSAGE+=" 1869 |
1870 |
1871 |
1872 |

Backup Statistics:

1873 | 1874 | 1875 | 1876 | 1877 | 1878 | 1879 | 1880 | 1881 | 1882 | 1883 | 1884 | 1885 | 1886 | 1887 | 1888 | 1889 | 1890 | 1891 | 1892 | 1893 | 1894 | 1895 | 1896 |
<th>Nr. Directories backed up</th><th>Nr. Files backed up</th><th>Total size backed up</th><th>Total backup time</th><th>Average backup time</th><th>Encrypted</th><th>Nr. of Rotations</th>
<td>$TOTAL_DIRECTORIES</td><td>$TOTAL_FILES</td><td>$TOTAL_BACKUP_SIZE</td><td>$((TOTAL_BACKUP_TIME / 60)) minutes and $((TOTAL_BACKUP_TIME % 60)) seconds</td><td>${AVERAGE_BACKUP_MINUTES} minutes and ${AVERAGE_BACKUP_SECONDS} seconds</td><td>$ENCRYPTION_STATUS</td><td>$TOTAL_DIRECTORY_BACKUPS</td>
1897 | 1898 |

Database Backup Statistics:

1899 | 1900 | 1901 | 1902 | 1903 | 1904 | 1905 | 1906 | 1907 | 1908 | 1909 | 1910 | 1911 | 1912 | 1913 | 1914 | 1915 | 1916 | 1917 | 1918 | 1919 | 1920 |
<th>Nr. Databases backed up</th><th>Total size backed up</th><th>Total backup time</th><th>Average backup time</th><th>Encrypted</th><th>Nr. of Rotations</th>
<td>$TOTAL_DATABASES</td><td>$TOTAL_DATABASE_BACKUP_SIZE</td><td>$((TOTAL_DATABASE_BACKUP_TIME / 60)) minutes and $((TOTAL_DATABASE_BACKUP_TIME % 60)) seconds</td><td>${AVERAGE_DATABASE_BACKUP_MINUTES} minutes and ${AVERAGE_DATABASE_BACKUP_SECONDS} seconds</td><td>$ENCRYPTION_STATUS</td><td>$TOTAL_DATABASE_BACKUPS</td>
1921 | 1922 |

Host Statistics:

1923 | 1924 | 1925 | 1926 | 1927 | 1928 | 1929 | 1930 | 1931 | 1932 | 1933 | 1934 | 1935 | 1936 | 1937 | 1938 | 1939 | 1940 | 1941 | 1942 | 1943 | 1944 |
<th>Kernel Version</th><th>Total Memory</th><th>Used Memory</th><th>Available Memory</th><th>CPU Load</th><th>Disk Usage</th>
<td>$KERNEL_VERSION</td><td>$TOTAL_MEMORY</td><td>$USED_MEMORY</td><td>$AVAILABLE_MEMORY</td><td>$CPU_LOAD</td><td>$DISK_USAGE</td>
" 1945 | 1946 | if [ "$BACKUP_123" == "Y" ]; then 1947 | MESSAGE+=" 1948 | 1949 |

1-2-3 BACKUP:

1950 | 1951 | 1952 | 1962 | 1971 | 1972 |
1953 |

First disk copy,
path: "$SATA_DISK1"

1954 |
    1955 |
  • Total Size: $TOTAL_SIZE_1
  • 1956 |
  • Total copy time: $TOTAL_COPY_TIME_1
  • 1957 |
  • Copy speed: $COPY_SPEED_1
  • 1958 |
  • Available space before copy: $available_space_before_1
  • 1959 |
  • Available space after copy: $available_space_after_1
  • 1960 |
1961 |
1963 |

Second disk copy,
path: "$SATA_DISK2"

1964 |
    1965 |
  • Total Size: $TOTAL_SIZE_2
  • 1966 |
  • Total copy time: $TOTAL_COPY_TIME_2
  • 1967 |
  • Copy speed: $COPY_SPEED_2
  • 1968 |
  • Available space before copy: $available_space_before_2
  • 1969 |
  • Available space after copy: $available_space_after_2
  • 1970 |
1973 | " 1974 | fi 1975 | 1976 | if [ "$BACKUP_123" == "Y" ] && [ "$SCP_ENABLED" == "Y" ]; then 1977 | MESSAGE+="

SCP Backup:

1978 | " 1981 | fi 1982 | 1983 | if [ "$BACKUP_123" == "Y" ] && [ "$SMB_ENABLED" == "Y" ]; then 1984 | MESSAGE+="

SMB Backup:

1985 | " 1988 | fi 1989 | 1990 | MESSAGE+=" 1991 |
1992 |

Generated by your Backup System - $(date +'%d %b %Y, %H:%M:%S')

1993 |

If you're enjoying this script, please 🎉 Buy me a coffee 🎉

1994 |
1995 | 1996 | 1997 | " 1998 | 1999 | # Send the email via msmtp 2000 | echo -e "Subject: $SUBJECT\nTo: $EMAIL_RECIPIENT\nFrom: $SMTP_FROM\nMIME-Version: 1.0\nContent-Type: text/html\n\n$MESSAGE" | \ 2001 | msmtp --host="$SMTP_HOST" --port="$SMTP_PORT" --auth=on --user="$SMTP_USER" --passwordeval="echo $SMTP_PASSWORD" \ 2002 | --tls=on --tls-starttls=on --from="$SMTP_FROM" "$EMAIL_RECIPIENT" 2003 | } 2004 | 2005 | # Function to calculate the total size of a directory in bytes 2006 | calculate_total_size() { 2007 | local dir="$1" 2008 | du -sb "$dir" | awk '{print $1}' 2009 | } 2010 | 2011 | # Calculate backup size in bytes and convert to a readable format 2012 | calculate_backup_size() { 2013 | local file_path=$1 2014 | local size_bytes=$(stat -c%s "$file_path") 2015 | 2016 | # Function to convert size to a readable scale (B, KiB, MiB, GiB, TiB) 2017 | human_readable_size() { 2018 | local size=$1 2019 | local unit="B" 2020 | if (( $(echo "$size >= 1024" | bc -l) )); then 2021 | size=$(echo "scale=2; $size / 1024" | bc) 2022 | unit="KiB" 2023 | fi 2024 | if (( $(echo "$size >= 1024" | bc -l) )); then 2025 | size=$(echo "scale=2; $size / 1024" | bc) 2026 | unit="MiB" 2027 | fi 2028 | if (( $(echo "$size >= 1024" | bc -l) )); then 2029 | size=$(echo "scale=2; $size / 1024" | bc) 2030 | unit="GiB" 2031 | fi 2032 | if (( $(echo "$size >= 1024" | bc -l) )); then 2033 | size=$(echo "scale=2; $size / 1024" | bc) 2034 | unit="TiB" 2035 | fi 2036 | echo "$size $unit" 2037 | } 2038 | 2039 | # Calculate the backup size in a readable format 2040 | BACKUP_SIZE=$(human_readable_size "$size_bytes") 2041 | } 2042 | 2043 | # Calculate the total size of the backup directories (non-compressed) 2044 | TOTAL_BACKUP_DIR_SIZE=$(calculate_total_size "$BACKUP_DIR") 2045 | TOTAL_BACKUP_DATABASE_DIR_SIZE=$(calculate_total_size "$BACKUP_DATABASE_DIR") 2046 | 2047 | # Calculate the total size of the compressed backup files 2048 | if [ "$ENCRYPT_BACKUP" == "Y" ]; then 2049 | TOTAL_COMPRESSED_BACKUP_SIZE=$(find "$BACKUP_DIR" -type f -name "*.tar.gz.enc" -exec stat -c%s {} + | awk '{s+=$1} END {print s}') 2050 | TOTAL_COMPRESSED_DATABASE_BACKUP_SIZE=$(find "$BACKUP_DATABASE_DIR" -type f -name "*.tar.gz.enc" -exec stat -c%s {} + | awk '{s+=$1} END {print s}') 2051 | else 2052 | TOTAL_COMPRESSED_BACKUP_SIZE=$(find "$BACKUP_DIR" -type f -name "*.tar.gz" -exec stat -c%s {} + | awk '{s+=$1} END {print s}') 2053 | TOTAL_COMPRESSED_DATABASE_BACKUP_SIZE=$(find "$BACKUP_DATABASE_DIR" -type f -name "*.tar.gz" -exec stat -c%s {} + | awk '{s+=$1} END {print s}') 2054 | fi 2055 | 2056 | # Convert the sizes to integers to avoid scientific notation issues 2057 | TOTAL_COMPRESSED_BACKUP_SIZE=$(printf "%.0f" "$TOTAL_COMPRESSED_BACKUP_SIZE") 2058 | TOTAL_COMPRESSED_DATABASE_BACKUP_SIZE=$(printf "%.0f" "$TOTAL_COMPRESSED_DATABASE_BACKUP_SIZE") 2059 | 2060 | # List of excluded directories in HTML 2061 | for EXCLUDED_DIR in "${EXCLUDE_DIRS[@]}"; do 2062 | EXCLUDE_DIR_LIST="$EXCLUDE_DIR_LIST
  • ${EXCLUDED_DIR}
  • " 2063 | done 2064 | 2065 | # Cleanup old backups based on the defined retention days 2066 | for DIR in "${SOURCE_DIRS[@]}"; do 2067 | find "$BACKUP_DIR" -type f -name "$(basename "$DIR")*.tar.gz" -mtime +$DAYS_TO_KEEP -exec rm -f {} \; 2068 | done 2069 | 2070 | # Function to calculate the total size of a directory in bytes 2071 | calculate_total_size() { 2072 | local dir="$1" 2073 | du -sb "$dir" | awk '{print $1}' 2074 | } 2075 | 2076 | # Function to calculate the total size of backup files in a directory 2077 | calculate_total_backup_size() { 2078 | local dir="$1" 2079 | find "$dir" -type f -name "*.tar.gz*" -exec stat -c%s {} + | awk '{s+=$1} END {print s}' 2080 | } 2081 | 2082 | # Function to calculate the total time taken for copying backups 2083 | calculate_total_copy_time() { 2084 | local start_time=$1 2085 | local end_time=$2 2086 | local duration=$(( end_time - start_time )) 2087 | 2088 | # Calculate minutes and seconds 2089 | local minutes=$(( duration / 60 )) 2090 | local seconds=$(( duration % 60 )) 2091 | 2092 | echo "${minutes} minutes and ${seconds} seconds" 2093 | } 2094 | 2095 | # Function to calculate the average speed of copying backups 2096 | calculate_copy_speed() { 2097 | local total_size=$1 2098 | local total_time=$2 2099 | 2100 | # Avoid division by zero 2101 | if [[ $total_time -eq 0 ]]; then 2102 | total_time=1 2103 | fi 2104 | 2105 | # Calculate speed in MB/s (bytes / seconds -> MB/s) 2106 | local speed=$(echo "scale=2; $total_size / $total_time / 1024 / 1024" | bc -l) 2107 | echo "$speed MB/s" 2108 | } 2109 | 2110 | # Function to check for an active internet connection 2111 | check_internet_connection() { 2112 | while ! ping -c 1 8.8.8.8 &>/dev/null; do 2113 | echo "No internet connection detected. Retrying in 5 minutes..." 2114 | sleep 60 # Retry every 1 minute 2115 | done 2116 | echo "Internet connection restored. Proceeding to send the backup report email." 2117 | } 2118 | 2119 | # Send the final email with all the backup files and database reports 2120 | if [ "$BACKUP_123" == "Y" ]; then 2121 | copy_backups_to_destinations "${BACKUP_FILES[@]}" "${DATABASE_BACKUP_FILES[@]}" 2122 | fi 2123 | 2124 | check_internet_connection 2125 | send_email 2126 | 2127 | echo 2128 | echo "##############################################################################################" 2129 | echo "# " 2130 | echo "# " 2131 | echo "# ALL COMPLETED HERE, AND MAIL SENT!! " 2132 | echo "# " 2133 | echo "# " 2134 | echo "##############################################################################################" 2135 | # END! :) 2136 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. 
For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 
134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 
193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # Backup Script for Multiple Directories and Databases 3 | 4 | This script automates the process of backing up multiple directories and databases, compressing them, and sending a detailed report via email. It supports encryption, email notifications, Docker service management, and database backups. 5 | 6 | ## Features 7 | 8 | - **Multiple Directories and Databases**: You can specify multiple directories and Docker-based databases to back up. 9 | - **Exclusion of Directories**: Option to exclude certain directories from the backup process. 10 | - **Compression and Optional Encryption**: Backups are compressed using `pigz`, and encryption with AES-256 can be enabled or disabled. 11 | - **Email Reports**: Sends a detailed report after each backup operation, including backup size, MD5 checksum, and disk write speed. 12 | - **Docker Management**: The script can stop Docker services before the backup and restart them afterward. 13 | - **Database Backup**: Supports the backup of PostgreSQL, Redis, and other Docker-managed databases. 14 | - **Customizable**: Fully customizable with variables for backup names, source directories, email settings, database backup options, and retention policies. 15 | - **Backup to Multiple Destinations**: Option to back up files to two separate SATA disks for redundancy (the 1-2-3 principle), as well as to Samba shares and remote hosts via SCP. 16 | 17 | 18 | 19 | ## Theme Comparison: Dark vs Light 20 | 21 |
    22 | 23 | 24 | 25 | 26 | 27 | 28 | 32 | 36 | 37 | 38 | 42 | 46 | 47 | 48 | 52 | 56 | 57 |
    <th>Dark Theme</th><th>Light Theme</th>
    29 | Samlple Dark Theme - small - Directory 30 |
    Directory 31 |
    33 | Sample Light Theme - Directory 34 |
    Directory 35 |
    39 | Samlple Dark Theme - small - Database 40 |
    Database 41 |
    43 | Sample Light Theme - Database 44 |
    Database 45 |
    49 | Samlple Dark Theme - small - Statistics 50 |
    Statistics 51 |
    53 | Sample Light Theme - Statistics 54 |
    Statistics 55 |
    58 |
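The theme of the emailed report is selected in `Backup.sh` via the `MAIL_TEMPLATE` variable: the script's `case` statement maps the values `1` through `10` to the built-in styles (`STYLE_1`–`STYLE_10`; for example, `7` is the Light theme shown above), and any other value, such as `random`, picks one of the ten styles at random on each run. A minimal configuration sketch, assuming the variable names used in the script:

```bash
# Pin the email report to the Light theme (STYLE_7).
# "1".."10" selects a fixed style; anything else (e.g. "random") picks one at random.
MAIL_TEMPLATE="7"
```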
    59 | 60 | 61 | 62 | ## Prerequisites 63 | 64 | Before running the script, ensure that the following packages are installed on your Linux system: 65 | 66 | - `tar` 67 | - `bar` 68 | - `pigz` 69 | - `openssl` 70 | - `msmtp` 71 | - `coreutils` 72 | - `cifs-utils` 73 | - `openssh-client` 74 | - `sshpass` 75 | - `smbclient` 76 | 77 | 78 | 79 | The script will automatically check and install these packages if they are not available on your system. 80 | 81 | ## Configuration 82 | 83 | The script includes several variables that can be customized: 84 | 85 | - **Backup name**: `BACKUP_NAME` – Defines the name of the backup. 86 | - **Server name**: `SERVER_NAME` – Name of the server to include in the email report. 87 | - **Encryption**: `ENCRYPT_BACKUP` – Set to `Y` to enable encryption of backup files, or `N` to disable it. 88 | - **Source directories**: `SOURCE_DIRS` – List of directories to include in the backup. 89 | - **Exclude directories**: `EXCLUDE_DIRS` – List of directories to exclude from the backup. 90 | - **Backup directory**: `BACKUP_DIR` – Where the backups will be saved. 91 | - **Email settings**: Set the recipient, SMTP host, port, and credentials for sending the backup report via email. 92 | - **Database backup**: `BACKUP_DOCKER_DATABASE` – Set to `Y` to enable Docker database backups. 93 | - **Database list**: `DATABASES` – Define which databases to back up, including container name, database name, and credentials. 94 | - **Backup retention**: `DAYS_TO_KEEP` – Number of days to keep old backups. 95 | - **Backup destinations**: `BACKUP_123` – Set to `Y` to enable backup to additional disks. 96 | - **Max CPU cores**: `MAX_CPU_CORE` – Set the number of CPU cores to use for compression. 97 | 98 | Example 99 | An example of the source and excluded directories setup: 100 | 101 | ```bash 102 | SOURCE_DIRS=( 103 | "/home/JohnDoe" 104 | "/etc" 105 | ) 106 | 107 | EXCLUDE_DIRS=( 108 | "/home/JohnDoe/Personal" 109 | ) 110 | ``` 111 | 112 | An example of database backup setup: 113 | 114 | ```bash 115 | DATABASES=( 116 | "PostgreSQL|Joplin-Postgress|joplindb|joplin|password" 117 | "Redis|immich_redis||" 118 | ) 119 | ``` 120 | 121 | ## **Automate with cron** 122 | 123 | You can automate the backup process by adding it to your crontab. For example, to run the backup every day at midnight: 124 | 125 | ```bash 126 | 0 0 * * * /path/to/backup.sh 127 | ``` 128 | 129 | ## Usage 130 | 131 | 1. **Modify the variables**: Edit the script to set the directories and databases to back up, email settings, and other configuration options. 132 | 2. **Run the script**: Make sure you are running the script as root or with `sudo`: 133 | ```bash 134 | sudo ./backup.sh 135 | ``` 136 | 137 | ## iCloud Users: Configure Email Notifications 138 | 139 | To configure the script to send email notifications using iCloud's SMTP server: 140 | 141 | 1. **Generate an App-Specific Password for iCloud**: 142 | - Go to [appleid.apple.com](https://appleid.apple.com) and generate a password for third-party apps. 143 | 144 | 2. 
**Update the Script** with iCloud SMTP settings: 145 | ```bash 146 | EMAIL_RECIPIENT="youraddress@icloud.com" 147 | SMTP_HOST="smtp.mail.me.com" 148 | SMTP_PORT="587" 149 | SMTP_FROM="youraddress@icloud.com" 150 | SMTP_USER="youraddress@icloud.com" 151 | SMTP_PASSWORD="app-specific-password" 152 | ``` 153 | 154 | ## Changelog 155 | 156 | ### [2.1] - 2024-10-28 157 | #### Fixed 158 | - Old-backup cleanup now keeps only the most recent backups, based on `DAYS_TO_KEEP` 159 | 160 | ### [2.1] - 2024-09-26 161 | #### Added 162 | 163 | - Replaced `pv` with `bar` 164 | - The script now supports the `BACKUP_123` feature, which enables backup redundancy by copying backups to different locations such as Samba shares and remote servers via SCP. 165 | - Added 10 different email templates. 166 | - The updated script includes enhancements for handling Docker containers and backing up databases, with improved consistency and performance. 167 | 168 | ### [2.0] - 2024-09-16 169 | #### Added 170 | - Added Docker-managed database backup functionality for PostgreSQL, Redis, and other databases. 171 | - Added option to back up to multiple SATA disks for redundancy (1-2-3 principle). 172 | - Introduced the ability to limit the number of CPU cores for compression. 173 | - Added functionality to select from 10 different email templates. 174 | 175 | #### Changed 176 | - Updated the encryption option to be disabled by default. 177 | 178 | ### [1.0.0] - 2024-09-08 179 | #### Added 180 | - Initial release of the backup script. 181 | -------------------------------------------------------------------------------- /Sample Template Preview/Samlple Dark Theme - small - Database.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DartSteven/Linux-Backup-Script/f06e5f2ff00548bbb91568c1f06c6df4958c8103/Sample Template Preview/Samlple Dark Theme - small - Database.png -------------------------------------------------------------------------------- /Sample Template Preview/Samlple Dark Theme - small - Directory.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DartSteven/Linux-Backup-Script/f06e5f2ff00548bbb91568c1f06c6df4958c8103/Sample Template Preview/Samlple Dark Theme - small - Directory.png -------------------------------------------------------------------------------- /Sample Template Preview/Samlple Dark Theme - small - Statistics.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DartSteven/Linux-Backup-Script/f06e5f2ff00548bbb91568c1f06c6df4958c8103/Sample Template Preview/Samlple Dark Theme - small - Statistics.png -------------------------------------------------------------------------------- /Sample Template Preview/Samlple Dark Theme.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DartSteven/Linux-Backup-Script/f06e5f2ff00548bbb91568c1f06c6df4958c8103/Sample Template Preview/Samlple Dark Theme.png -------------------------------------------------------------------------------- /Sample Template Preview/Sample Light Theme - small - Database.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DartSteven/Linux-Backup-Script/f06e5f2ff00548bbb91568c1f06c6df4958c8103/Sample Template Preview/Sample Light Theme - small - Database.png
-------------------------------------------------------------------------------- /Sample Template Preview/Sample Light Theme - small - Directory.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DartSteven/Linux-Backup-Script/f06e5f2ff00548bbb91568c1f06c6df4958c8103/Sample Template Preview/Sample Light Theme - small - Directory.png -------------------------------------------------------------------------------- /Sample Template Preview/Sample Light Theme - small - Statistics.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DartSteven/Linux-Backup-Script/f06e5f2ff00548bbb91568c1f06c6df4958c8103/Sample Template Preview/Sample Light Theme - small - Statistics.png -------------------------------------------------------------------------------- /Sample Template Preview/Sample Light Theme.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DartSteven/Linux-Backup-Script/f06e5f2ff00548bbb91568c1f06c6df4958c8103/Sample Template Preview/Sample Light Theme.png --------------------------------------------------------------------------------