├── .github ├── ISSUE_TEMPLATE.md └── PULL_REQUEST_TEMPLATE.md ├── .gitignore ├── README.md ├── TODO.md ├── data └── etc │ ├── mysql │ └── conf.d │ │ ├── my-16gb.cnf │ │ ├── my-2gb.cnf │ │ ├── my-32gb.cnf │ │ └── my-4gb.cnf │ └── redis │ └── redis.conf ├── default.env ├── docker-compose.yml ├── docker-vis-full.png ├── docker-vis-novols.png ├── extras └── borgmatic-borg-backup │ └── etc │ ├── borgmatic │ └── config.yaml │ └── cron.d │ └── borgmatic ├── volumes └── .gitignore └── xshok-admin.sh /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 2 | ## Suggestions 3 | 4 | #### Suggestion Summary 5 | What you think your suggestion brings to the project, or what it fixes in the project. 6 | If it's a fix, would it be better suited as a pull request to the repo? 7 | 8 | ## Issues 9 | 10 | #### Issue Summary 11 | A summary of the issue and the environment in which it occurs. 12 | If suitable, include the steps required to reproduce the bug. 13 | Please feel free to include screenshots, screencasts, code examples and log output. 14 | 15 | 16 | #### Steps to Reproduce 17 | 18 | 1. This is the first step 19 | 2. This is the second step 20 | 3. Further steps, etc. 21 | 22 | Any other information you want to share that is relevant to the issue being reported. 23 | Especially, why do you consider this to be a bug? What do you expect to happen instead? 24 | 25 | #### Technical details: 26 | 27 | * Version: master (latest commit: [commit number]) 28 | * Operating System: 29 | * CPU Architecture: 30 | 31 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | ols-docker-env 2 | .xs_password 3 | .env 4 | volumes/* 5 | sites/* 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # docker-webserver 2 | Our optimized production web-server setup, based on docker 3 | * openlitespeed + PHP 7.4 + letsencrypt ssl + mariadb(mysql) + redis + memcached 4 | 5 | ## This setup is used for most of our web servers and has been in use for more than 6 years. 6 | * We have near or perfect scores for all the major webpage and performance tests 7 | * There are literally thousands of sites using this setup, everything from online shops with more than 35 000 active customers to simple blogs and forums. 8 | * In 2020 nginx + php-fpm was replaced with openlitespeed due to the massive performance advantage wordpress has with lscache. 9 | * Everything is optimized and the config values used are derived from years of testing, tweaking and observing real world data.
10 | 11 | ![Full Docker Visualization](docker-vis-full.png) 12 | 13 | ### used dockers: 14 | * [extremeshok/unbound](https://hub.docker.com/repository/docker/extremeshok/unbound) **caching dns** 15 | * [extremeshok/openlitespeed-php](https://hub.docker.com/repository/docker/extremeshok/openlitespeed-php) **optimised openlitespeed with php webserver** 16 | * [extremeshok/acme-http2https](https://hub.docker.com/repository/docker/extremeshok/acme-http2https) **generates letsencrypt certificates and forwards all http to httpS** 17 | * containrrr/watchtower **auto-updates docker containers** 18 | * mariadb:10.5 **mysql, but better** 19 | * tiredofit/db-backup **backs up mysql databases every hour** 20 | * bitnami/phpmyadmin **web-based database admin** 21 | * redis **caching store** 22 | * memcached **caching store** 23 | * robbertkl/ipv6nat **ipv6nat** 24 | 25 | ### Benefits 26 | * optimized 27 | * vhosts (host multiple independent domains) 28 | * hourly mysql database backups 29 | * simple management 30 | * automatic updates for wordpress 31 | * fully integrated 32 | * stable 33 | * quickly backup and restore databases 34 | * webserver file permissions and ownership are corrected on startup (non blocking) 35 | 36 | ### Why? 37 | * administration via a single shell command 38 | * web interfaces are so 2000's 39 | * optimized out of the box 40 | * a user can host multiple websites 41 | * low resource usage 42 | * quick to bootstrap 43 | * ubuntu + docker 44 | 45 | ### Recommended setup: 46 | * VM / VPS (as a rule, always run a vm instead of bare metal, it makes it easy to upgrade and do maintenance) 47 | * Fresh/clean UBUNTU LTS configured with the xshok-ubuntu-docker-host.sh script https://github.com/extremeshok/xshok-docker 48 | * Project run from the /datastore dir. 49 | 50 | ### Notes: 51 | * .env is generated on first install, as the passwords are always randomised. 52 | * there is no need to configure or edit the docker-compose.yml 53 | * all administration is done via xshok-admin.sh 54 | * files are saved into the volumes dir 55 | * when restoring sql files, a temporary filtered sql file is created with the create database, alter database, drop database and use statements removed 56 | 57 | ### Recommended VM: 58 | 2 vcpu, 4GB ram (2GB can be used), NVME storage (webservers need nvme, sata ssd is too slow and hdd is pointless) 59 | 60 | ### Usage / Installation 61 | * Download and place the files into /datastore 62 | * start servers 63 | ``` bash xshok-admin.sh --start ``` 64 | * start servers at boot 65 | ``` bash xshok-admin.sh --boot ``` 66 | * set a password for the litespeed webadmin https://hostname:7080 67 | ``` bash xshok-admin.sh --password ``` 68 | * add a FQDN domain, create a database and generate a letsencrypt ssl 69 | ``` bash xshok-admin.sh --qa fqdn.com ``` 70 | * restart litespeed to apply the changes 71 | ``` bash xshok-admin.sh --restart ``` 72 | 73 | # xshok-admin.sh 74 | used to control and manage the webserver: add domains, databases, ssl, etc.
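For example, migrating an existing site onto this stack could look like the following (a sketch assembled from the options listed below; example.com and the backup path are placeholders, and the actual database name is autogenerated, so read it from --database-list first):
``` bash xshok-admin.sh --quick-add example.com ```
``` bash xshok-admin.sh --database-list example.com ```
``` bash xshok-admin.sh --database-restore [database_name] /your/path/example_com.sql.gz ```
``` bash xshok-admin.sh --restart ```
The full list of options: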
75 | ``` 76 | eXtremeSHOK.com Webserver 77 | WEBSITE OPTIONS 78 | -wl | --website-list 79 | list all websites 80 | -wa | --website-add [domain_name] 81 | add a website 82 | -wd | --website-delete [domain_name] 83 | delete a website 84 | -wp | --website-permissions [domain_name] 85 | fix permissions and ownership of a website 86 | DATABASE OPTIONS 87 | -dl | --database-list [domain_name] 88 | list all databases for domain 89 | -da | --database-add [domain_name] 90 | add a database to domain, database name, user and pass autogenerated 91 | -dd | --database-delete [database_name] 92 | delete a database 93 | -dp | --database-password [database_name] 94 | reset the password for a database 95 | -dr | --database-restore [database_name] [/your/path/file_name] 96 | restore a database backup file to database_name, supports .gz and .sql 97 | BACKUP OPTIONS 98 | -ba | --backup-all [/your/path]*optional* 99 | backup all databases, optional backup path, file will use the default sql/databasename.sql.gz 100 | -bd | --backup-database [database_name] [/your/path/file_name]*optional* 101 | backup a database, optional backup filename, will use the default sql/databasename.sql.gz if not specified 102 | SSL OPTIONS 103 | -sl | --ssl-list 104 | list all ssl 105 | -sa | --ssl-add [domain_name] 106 | add ssl to a website 107 | -sd | --ssl-delete [domain_name] 108 | delete ssl from a website 109 | QUICK OPTIONS 110 | -qa | --quick-add [domain_name] 111 | add website, database, ssl, restart server 112 | ADVANCED OPTIONS 113 | -wc | --warm-cache [domain_name] 114 | loads a website sitemap and visits each page, used to warm the cache 115 | GENERAL OPTIONS 116 | --up | --start | --init 117 | start xshok-webserver (will launch docker-compose.yml) 118 | --down | --stop 119 | stop all dockers and docker-compose 120 | -r | --restart 121 | gracefully restart openlitespeed with zero down time 122 | -b | --boot | --service | --systemd 123 | creates a systemd service to start docker and run docker-compose.yml on boot 124 | -p | --password 125 | generate and set a new web-admin password 126 | -e | --env 127 | generate a new .env from the default.env 128 | -H, --help 129 | Display help and exit. 130 | 131 | ``` 132 | 133 | ![No volumes Docker Visualization](docker-vis-novols.png) 134 | -------------------------------------------------------------------------------- /TODO.md: -------------------------------------------------------------------------------- 1 | # TODO 2 | - automated app installs (wordpress, ghost, flarum, generic_template) 3 | - automatic updates (update of the xshok-admin.sh and docker-compose.yml) 4 | -------------------------------------------------------------------------------- /data/etc/mysql/conf.d/my-16gb.cnf: -------------------------------------------------------------------------------- 1 | ################################################################################ 2 | # This is property of eXtremeSHOK.com 3 | # You are free to use, modify and distribute, however you may not remove this notice. 
4 | # Copyright (c) Adrian Jon Kriel :: admin@extremeshok.com 5 | ################################################################################ 6 | [client] 7 | default-character-set = utf8mb4 8 | socket=/var/run/mysqld/mysqld.sock 9 | 10 | [mysql] 11 | default-character-set = utf8mb4 12 | max_allowed_packet = 256M 13 | 14 | [mysqld] 15 | performance_schema = on 16 | 17 | #legacy specific 18 | character-set-client-handshake = FALSE 19 | character-set-server = utf8mb4 20 | collation-server = utf8mb4_unicode_ci 21 | 22 | # Accept connections from any IP address 23 | bind-address = 0.0.0.0 24 | 25 | local-infile=0 26 | ignore_db_dirs=lost+found 27 | datadir=/var/lib/mysql 28 | socket=/var/run/mysqld/mysqld.sock 29 | 30 | tmpdir=/tmp 31 | 32 | innodb=ON 33 | #skip-federated 34 | #skip-pbxt 35 | #skip-pbxt_statistics 36 | #skip-archive 37 | skip-name-resolve = 1 38 | #old_passwords 39 | back_log = 1024 40 | max_connections = 511 41 | key_buffer_size = 512M 42 | myisam_sort_buffer_size = 512M 43 | myisam_max_sort_file_size = 4096M 44 | join_buffer_size = 4M 45 | read_buffer_size = 4M 46 | sort_buffer_size = 8M 47 | table_definition_cache = 10240 48 | table_open_cache = 20480 49 | thread_cache_size = 384 50 | wait_timeout = 1800 51 | connect_timeout = 10 52 | tmp_table_size = 1024M 53 | max_heap_table_size = 1024M 54 | max_allowed_packet = 256M 55 | #max_seeks_for_key = 4294967295 56 | #group_concat_max_len = 1024 57 | max_length_for_sort_data = 1024 58 | net_buffer_length = 16384 59 | max_connect_errors = 100000 60 | concurrent_insert = 2 61 | read_rnd_buffer_size = 4M 62 | bulk_insert_buffer_size = 8M 63 | # query_cache boost for MariaDB >10.1.2+ 64 | # https://community.centminmod.com/posts/30811/ 65 | query_cache_limit = 1536K 66 | 67 | # mysqltuner: disabled 68 | #query_cache_size = 256M 69 | #query_cache_type = 1 70 | # mysqltuner: recommended 71 | query_cache_size = 0 72 | query_cache_type = 0 73 | 74 | 75 | query_cache_min_res_unit = 2K 76 | query_prealloc_size = 262144 77 | query_alloc_block_size = 65536 78 | transaction_alloc_block_size = 8192 79 | transaction_prealloc_size = 4096 80 | default-storage-engine = InnoDB 81 | 82 | log_warnings=1 83 | long_query_time=1 84 | slow_query_log=0 85 | #slow_query_log_file=/var/lib/mysql/slowq.log 86 | #log-error=/var/log/mysqld.log 87 | 88 | # innodb settings 89 | innodb_purge_threads=1 90 | innodb_file_per_table = 1 91 | innodb_open_files = 10000 92 | innodb_data_file_path= ibdata1:10M:autoextend 93 | innodb_buffer_pool_size = 8192M 94 | 95 | ## https://mariadb.com/kb/en/mariadb/xtradbinnodb-server-system-variables/#innodb_buffer_pool_instances 96 | innodb_buffer_pool_instances=8 97 | 98 | innodb_log_files_in_group = 4 99 | innodb_log_file_size = 512M 100 | innodb_log_buffer_size = 64M 101 | innodb_flush_log_at_trx_commit = 2 102 | innodb_thread_concurrency = 0 103 | innodb_lock_wait_timeout=50 104 | innodb_flush_method = O_DIRECT 105 | 106 | # 200 * # DISKS 107 | #innodb_io_capacity = 400 108 | #innodb_io_capacity_max = 2000 109 | # SSD 110 | innodb_io_capacity=3000 111 | innodb_io_capacity_max=6000 112 | innodb_read_io_threads = 4 113 | innodb_write_io_threads = 2 114 | innodb_flush_neighbors = 1 115 | 116 | # mariadb settings 117 | [mariadb] 118 | #thread-handling = pool-of-threads 119 | #thread-pool-size= 20 120 | #mysql --port=3307 --protocol=tcp 121 | #extra-port=3307 122 | #extra-max-connections=1 123 | 124 | userstat = 0 125 | key_cache_segments = 1 126 | aria_group_commit = none 127 | aria_group_commit_interval = 0 128 | 
aria_log_file_size = 64M 129 | aria_log_purge_type = immediate 130 | aria_pagecache_buffer_size = 128M 131 | aria_sort_buffer_size = 128M 132 | 133 | [mariadb-5.5] 134 | innodb_file_format = Barracuda 135 | innodb_file_per_table = 1 136 | 137 | #ignore_db_dirs= 138 | query_cache_strip_comments=0 139 | 140 | innodb_read_ahead = linear 141 | innodb_adaptive_flushing_method = estimate 142 | innodb_flush_neighbor_pages = 1 143 | innodb_stats_update_need_lock = 0 144 | innodb_log_block_size = 512 145 | 146 | log_slow_filter =admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk 147 | 148 | [mysqld_safe] 149 | socket=/var/run/mysqld/mysqld.sock 150 | #log-error=/var/log/mysqld.log 151 | #nice = -5 152 | open-files-limit = 8192 153 | 154 | [mysqldump] 155 | quick 156 | max_allowed_packet = 256M 157 | 158 | [myisamchk] 159 | tmpdir=/tmp 160 | key_buffer = 1024M 161 | sort_buffer = 256M 162 | read_buffer = 256M 163 | write_buffer = 256M 164 | 165 | [mysqlhotcopy] 166 | interactive-timeout 167 | 168 | [mariadb-10.0] 169 | innodb_file_format = Barracuda 170 | innodb_file_per_table = 1 171 | 172 | # 2 variables needed to switch from XtraDB to InnoDB plugins 173 | #plugin-load=ha_innodb 174 | #ignore_builtin_innodb 175 | 176 | ## MariaDB 10 only save and restore buffer pool pages 177 | ## warm up InnoDB buffer pool on server restarts 178 | innodb_buffer_pool_dump_at_shutdown=1 179 | innodb_buffer_pool_load_at_startup=1 180 | innodb_buffer_pool_populate=0 181 | ## Disabled settings 182 | performance_schema=OFF 183 | innodb_stats_on_metadata=OFF 184 | innodb_sort_buffer_size=16M 185 | innodb_online_alter_log_max_size=128M 186 | query_cache_strip_comments=0 187 | log_slow_filter =admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk 188 | -------------------------------------------------------------------------------- /data/etc/mysql/conf.d/my-2gb.cnf: -------------------------------------------------------------------------------- 1 | ##https://github.com/centminmod/centminmod/blob/master/config/mysql/my-mdb10-8gb.cnf 2 | # docker exec apollo-legacy_mysql-apollo_1 'mysql -u root --password=$MYSQL_ROOT_PASSWORD -e "SHOW VARIABLES;"' 3 | 4 | [client] 5 | default-character-set = utf8mb4 6 | socket=/var/run/mysqld/mysqld.sock 7 | 8 | [mysql] 9 | default-character-set = utf8mb4 10 | max_allowed_packet = 256M 11 | 12 | [mysqld] 13 | performance_schema = on 14 | 15 | #legacy specific 16 | character-set-client-handshake = FALSE 17 | character-set-server = utf8mb4 18 | collation-server = utf8mb4_unicode_ci 19 | 20 | # Accept connections from any IP address 21 | bind-address = 0.0.0.0 22 | 23 | local-infile=0 24 | ignore_db_dirs=lost+found 25 | datadir=/var/lib/mysql 26 | socket=/var/run/mysqld/mysqld.sock 27 | 28 | tmpdir=/tmp 29 | 30 | innodb=ON 31 | #skip-federated 32 | #skip-pbxt 33 | #skip-pbxt_statistics 34 | #skip-archive 35 | skip-name-resolve = 1 36 | #old_passwords 37 | back_log = 256 38 | max_connections = 255 39 | key_buffer_size = 128M 40 | myisam_sort_buffer_size = 128M 41 | myisam_max_sort_file_size = 1024M 42 | join_buffer_size = 4M 43 | read_buffer_size = 4M 44 | sort_buffer_size = 8M 45 | table_definition_cache = 4096 46 | table_open_cache = 4096 47 | thread_cache_size = 128 48 | wait_timeout = 1800 49 | connect_timeout = 10 50 | tmp_table_size = 256M 51 | max_heap_table_size = 256M 52 | max_allowed_packet = 256M 53 | #max_seeks_for_key = 4294967295 54 | #group_concat_max_len = 1024 55 | 
max_length_for_sort_data = 1024 56 | net_buffer_length = 16384 57 | max_connect_errors = 100000 58 | concurrent_insert = 2 59 | read_rnd_buffer_size = 4M 60 | bulk_insert_buffer_size = 8M 61 | # query_cache boost for MariaDB >10.1.2+ 62 | # https://community.centminmod.com/posts/30811/ 63 | query_cache_limit = 1024K 64 | 65 | # mysqltuner: disabled 66 | #query_cache_size = 128M 67 | #query_cache_type = 1 68 | # mysqltuner: recommended 69 | query_cache_size = 0 70 | query_cache_type = 0 71 | 72 | 73 | query_cache_min_res_unit = 2K 74 | query_prealloc_size = 262144 75 | query_alloc_block_size = 65536 76 | transaction_alloc_block_size = 8192 77 | transaction_prealloc_size = 4096 78 | default-storage-engine = InnoDB 79 | 80 | log_warnings=1 81 | long_query_time=1 82 | slow_query_log=0 83 | #slow_query_log_file=/var/lib/mysql/slowq.log 84 | #log-error=/var/log/mysqld.log 85 | 86 | # innodb settings 87 | innodb_purge_threads=1 88 | innodb_file_per_table = 1 89 | innodb_open_files = 6000 90 | innodb_data_file_path= ibdata1:10M:autoextend 91 | innodb_buffer_pool_size = 2048M 92 | 93 | ## https://mariadb.com/kb/en/mariadb/xtradbinnodb-server-system-variables/#innodb_buffer_pool_instances 94 | innodb_buffer_pool_instances=4 95 | 96 | innodb_log_files_in_group = 4 97 | innodb_log_file_size = 128M 98 | innodb_log_buffer_size = 16M 99 | innodb_flush_log_at_trx_commit = 2 100 | innodb_thread_concurrency = 0 101 | innodb_lock_wait_timeout=50 102 | innodb_flush_method = O_DIRECT 103 | 104 | # 200 * # DISKS 105 | #innodb_io_capacity = 400 106 | #innodb_io_capacity_max = 2000 107 | # SSD 108 | innodb_io_capacity=3000 109 | innodb_io_capacity_max=6000 110 | innodb_read_io_threads = 4 111 | innodb_write_io_threads = 2 112 | innodb_flush_neighbors = 1 113 | 114 | # mariadb settings 115 | [mariadb] 116 | #thread-handling = pool-of-threads 117 | #thread-pool-size= 20 118 | #mysql --port=3307 --protocol=tcp 119 | #extra-port=3307 120 | #extra-max-connections=1 121 | 122 | userstat = 0 123 | key_cache_segments = 1 124 | aria_group_commit = none 125 | aria_group_commit_interval = 0 126 | aria_log_file_size = 64M 127 | aria_log_purge_type = immediate 128 | aria_pagecache_buffer_size = 64M 129 | aria_sort_buffer_size = 64M 130 | 131 | [mariadb-5.5] 132 | innodb_file_format = Barracuda 133 | innodb_file_per_table = 1 134 | 135 | #ignore_db_dirs= 136 | query_cache_strip_comments=0 137 | 138 | innodb_read_ahead = linear 139 | innodb_adaptive_flushing_method = estimate 140 | innodb_flush_neighbor_pages = 1 141 | innodb_stats_update_need_lock = 0 142 | innodb_log_block_size = 512 143 | 144 | log_slow_filter =admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk 145 | 146 | [mysqld_safe] 147 | socket=/var/run/mysqld/mysqld.sock 148 | #log-error=/var/log/mysqld.log 149 | #nice = -5 150 | open-files-limit = 8192 151 | 152 | [mysqldump] 153 | quick 154 | max_allowed_packet = 256M 155 | 156 | [myisamchk] 157 | tmpdir=/tmp 158 | key_buffer = 256M 159 | sort_buffer = 64M 160 | read_buffer = 64M 161 | write_buffer = 64M 162 | 163 | [mysqlhotcopy] 164 | interactive-timeout 165 | 166 | [mariadb-10.0] 167 | innodb_file_format = Barracuda 168 | innodb_file_per_table = 1 169 | 170 | # 2 variables needed to switch from XtraDB to InnoDB plugins 171 | #plugin-load=ha_innodb 172 | #ignore_builtin_innodb 173 | 174 | ## MariaDB 10 only save and restore buffer pool pages 175 | ## warm up InnoDB buffer pool on server restarts 176 | innodb_buffer_pool_dump_at_shutdown=1 177 | 
innodb_buffer_pool_load_at_startup=1 178 | innodb_buffer_pool_populate=0 179 | ## Disabled settings 180 | performance_schema=OFF 181 | innodb_stats_on_metadata=OFF 182 | innodb_sort_buffer_size=8M 183 | innodb_online_alter_log_max_size=128M 184 | query_cache_strip_comments=0 185 | log_slow_filter =admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk 186 | -------------------------------------------------------------------------------- /data/etc/mysql/conf.d/my-32gb.cnf: -------------------------------------------------------------------------------- 1 | ################################################################################ 2 | # This is property of eXtremeSHOK.com 3 | # You are free to use, modify and distribute, however you may not remove this notice. 4 | # Copyright (c) Adrian Jon Kriel :: admin@extremeshok.com 5 | ################################################################################ 6 | # Will use a maximum of 20GB 7 | [client] 8 | default-character-set = utf8mb4 9 | socket=/var/run/mysqld/mysqld.sock 10 | 11 | [mysql] 12 | default-character-set = utf8mb4 13 | max_allowed_packet = 256M 14 | 15 | [mysqld] 16 | performance_schema = on 17 | 18 | #legacy specific 19 | character-set-client-handshake = FALSE 20 | character-set-server = utf8mb4 21 | collation-server = utf8mb4_unicode_ci 22 | 23 | # Accept connections from any IP address 24 | bind-address = 0.0.0.0 25 | 26 | local-infile=0 27 | ignore_db_dirs=lost+found 28 | datadir=/var/lib/mysql 29 | socket=/var/run/mysqld/mysqld.sock 30 | 31 | tmpdir=/tmp 32 | 33 | innodb=ON 34 | #skip-federated 35 | #skip-pbxt 36 | #skip-pbxt_statistics 37 | #skip-archive 38 | skip-name-resolve = 1 39 | #old_passwords 40 | back_log = 1024 41 | max_connections = 511 42 | key_buffer_size = 512M 43 | myisam_sort_buffer_size = 512M 44 | myisam_max_sort_file_size = 4096M 45 | join_buffer_size = 4M 46 | read_buffer_size = 4M 47 | sort_buffer_size = 8M 48 | table_definition_cache = 10240 49 | table_open_cache = 20480 50 | thread_cache_size = 384 51 | wait_timeout = 1800 52 | connect_timeout = 10 53 | tmp_table_size = 1024M 54 | max_heap_table_size = 1024M 55 | max_allowed_packet = 256M 56 | #max_seeks_for_key = 4294967295 57 | #group_concat_max_len = 1024 58 | max_length_for_sort_data = 1024 59 | net_buffer_length = 16384 60 | max_connect_errors = 100000 61 | concurrent_insert = 2 62 | read_rnd_buffer_size = 4M 63 | bulk_insert_buffer_size = 8M 64 | # query_cache boost for MariaDB >10.1.2+ 65 | # https://community.centminmod.com/posts/30811/ 66 | query_cache_limit = 1536K 67 | 68 | # mysqltuner: disabled 69 | #query_cache_size = 256M 70 | #query_cache_type = 1 71 | # mysqltuner: recommended 72 | query_cache_size = 0 73 | query_cache_type = 0 74 | 75 | 76 | query_cache_min_res_unit = 2K 77 | query_prealloc_size = 262144 78 | query_alloc_block_size = 65536 79 | transaction_alloc_block_size = 8192 80 | transaction_prealloc_size = 4096 81 | default-storage-engine = InnoDB 82 | 83 | log_warnings=1 84 | long_query_time=1 85 | slow_query_log=0 86 | #slow_query_log_file=/var/lib/mysql/slowq.log 87 | #log-error=/var/log/mysqld.log 88 | 89 | # innodb settings 90 | innodb_purge_threads=1 91 | innodb_file_per_table = 1 92 | innodb_open_files = 10000 93 | innodb_data_file_path= ibdata1:10M:autoextend 94 | innodb_buffer_pool_size = 8192M 95 | 96 | ## https://mariadb.com/kb/en/mariadb/xtradbinnodb-server-system-variables/#innodb_buffer_pool_instances 97 | innodb_buffer_pool_instances=8 98 | 99 | 
innodb_log_files_in_group = 4 100 | innodb_log_file_size = 512M 101 | innodb_log_buffer_size = 64M 102 | innodb_flush_log_at_trx_commit = 2 103 | innodb_thread_concurrency = 0 104 | innodb_lock_wait_timeout=50 105 | innodb_flush_method = O_DIRECT 106 | 107 | # 200 * # DISKS 108 | #innodb_io_capacity = 400 109 | #innodb_io_capacity_max = 2000 110 | # SSD 111 | innodb_io_capacity=3000 112 | innodb_io_capacity_max=6000 113 | innodb_read_io_threads = 4 114 | innodb_write_io_threads = 2 115 | innodb_flush_neighbors = 1 116 | 117 | # mariadb settings 118 | [mariadb] 119 | #thread-handling = pool-of-threads 120 | #thread-pool-size= 20 121 | #mysql --port=3307 --protocol=tcp 122 | #extra-port=3307 123 | #extra-max-connections=1 124 | 125 | userstat = 0 126 | key_cache_segments = 1 127 | aria_group_commit = none 128 | aria_group_commit_interval = 0 129 | aria_log_file_size = 64M 130 | aria_log_purge_type = immediate 131 | aria_pagecache_buffer_size = 128M 132 | aria_sort_buffer_size = 128M 133 | 134 | [mariadb-5.5] 135 | innodb_file_format = Barracuda 136 | innodb_file_per_table = 1 137 | 138 | #ignore_db_dirs= 139 | query_cache_strip_comments=0 140 | 141 | innodb_read_ahead = linear 142 | innodb_adaptive_flushing_method = estimate 143 | innodb_flush_neighbor_pages = 1 144 | innodb_stats_update_need_lock = 0 145 | innodb_log_block_size = 512 146 | 147 | log_slow_filter =admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk 148 | 149 | [mysqld_safe] 150 | socket=/var/run/mysqld/mysqld.sock 151 | #log-error=/var/log/mysqld.log 152 | #nice = -5 153 | open-files-limit = 8192 154 | 155 | [mysqldump] 156 | quick 157 | max_allowed_packet = 256M 158 | 159 | [myisamchk] 160 | tmpdir=/tmp 161 | key_buffer = 1024M 162 | sort_buffer = 256M 163 | read_buffer = 256M 164 | write_buffer = 256M 165 | 166 | [mysqlhotcopy] 167 | interactive-timeout 168 | 169 | [mariadb-10.0] 170 | innodb_file_format = Barracuda 171 | innodb_file_per_table = 1 172 | 173 | # 2 variables needed to switch from XtraDB to InnoDB plugins 174 | #plugin-load=ha_innodb 175 | #ignore_builtin_innodb 176 | 177 | ## MariaDB 10 only save and restore buffer pool pages 178 | ## warm up InnoDB buffer pool on server restarts 179 | innodb_buffer_pool_dump_at_shutdown=1 180 | innodb_buffer_pool_load_at_startup=1 181 | innodb_buffer_pool_populate=0 182 | ## Disabled settings 183 | performance_schema=OFF 184 | innodb_stats_on_metadata=OFF 185 | innodb_sort_buffer_size=16M 186 | innodb_online_alter_log_max_size=128M 187 | query_cache_strip_comments=0 188 | log_slow_filter =admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk 189 | -------------------------------------------------------------------------------- /data/etc/mysql/conf.d/my-4gb.cnf: -------------------------------------------------------------------------------- 1 | ################################################################################ 2 | # This is property of eXtremeSHOK.com 3 | # You are free to use, modify and distribute, however you may not remove this notice. 
4 | # Copyright (c) Adrian Jon Kriel :: admin@extremeshok.com 5 | ################################################################################ 6 | [client] 7 | default-character-set = utf8mb4 8 | socket=/var/run/mysqld/mysqld.sock 9 | 10 | [mysql] 11 | default-character-set = utf8mb4 12 | max_allowed_packet = 256M 13 | 14 | [mysqld] 15 | performance_schema = on 16 | 17 | #legacy specific 18 | character-set-client-handshake = FALSE 19 | character-set-server = utf8mb4 20 | collation-server = utf8mb4_unicode_ci 21 | 22 | # Accept connections from any IP address 23 | bind-address = 0.0.0.0 24 | 25 | local-infile=0 26 | ignore_db_dirs=lost+found 27 | datadir=/var/lib/mysql 28 | socket=/var/run/mysqld/mysqld.sock 29 | 30 | tmpdir=/tmp 31 | 32 | innodb=ON 33 | #skip-federated 34 | #skip-pbxt 35 | #skip-pbxt_statistics 36 | #skip-archive 37 | skip-name-resolve = 1 38 | #old_passwords 39 | back_log = 512 40 | max_connections = 500 41 | key_buffer_size = 128M 42 | myisam_sort_buffer_size = 128M 43 | myisam_max_sort_file_size = 2048M 44 | join_buffer_size = 4M 45 | read_buffer_size = 4M 46 | sort_buffer_size = 4M 47 | table_definition_cache = 8192 48 | table_open_cache = 4096 49 | thread_cache_size = 256 50 | wait_timeout = 1200 51 | connect_timeout = 10 52 | tmp_table_size = 256M 53 | max_heap_table_size = 256M 54 | max_allowed_packet = 256M 55 | #max_seeks_for_key = 4294967295 56 | #group_concat_max_len = 1024 57 | max_length_for_sort_data = 1024 58 | net_buffer_length = 16384 59 | max_connect_errors = 100000 60 | concurrent_insert = 2 61 | read_rnd_buffer_size = 2M 62 | bulk_insert_buffer_size = 8M 63 | # query_cache boost for MariaDB >10.1.2+ 64 | # https://community.centminmod.com/posts/30811/ 65 | query_cache_limit = 1024K 66 | 67 | # mysqltuner: disabled 68 | query_cache_size = 128M 69 | query_cache_type = 1 70 | # mysqltuner: recommended 71 | #query_cache_size = 0 72 | #query_cache_type = 0 73 | 74 | 75 | query_cache_min_res_unit = 2K 76 | query_prealloc_size = 262144 77 | query_alloc_block_size = 65536 78 | transaction_alloc_block_size = 8192 79 | transaction_prealloc_size = 4096 80 | default-storage-engine = InnoDB 81 | 82 | log_warnings=0 83 | long_query_time=1 84 | slow_query_log=0 85 | slow_query_log_file=/var/log/mysql/slow.log 86 | log_queries_not_using_indexes=1 87 | 88 | #log-error=/var/log/mysql/error.log 89 | 90 | # no longer supported 91 | #innodb_large_prefix=1 92 | 93 | # innodb settings 94 | innodb_purge_threads=1 95 | innodb_file_per_table = 1 96 | innodb_open_files = 6000 97 | innodb_data_file_path= ibdata1:10M:autoextend 98 | innodb_buffer_pool_size = 3G 99 | 100 | ## https://mariadb.com/kb/en/mariadb/xtradbinnodb-server-system-variables/#innodb_buffer_pool_instances 101 | innodb_buffer_pool_instances=3 102 | 103 | innodb_log_files_in_group = 4 104 | innodb_log_file_size = 192M 105 | innodb_log_buffer_size = 8M 106 | innodb_flush_log_at_trx_commit = 2 107 | innodb_thread_concurrency = 0 108 | innodb_lock_wait_timeout=50 109 | innodb_flush_method = O_DIRECT 110 | 111 | # no longer supported 112 | #innodb_support_xa=1 113 | 114 | # 200 * # DISKS 115 | #innodb_io_capacity = 400 116 | #innodb_io_capacity_max = 2000 117 | # SSD 118 | innodb_io_capacity=3000 119 | innodb_io_capacity_max=6000 120 | innodb_read_io_threads = 4 121 | innodb_write_io_threads = 2 122 | innodb_flush_neighbors = 1 123 | 124 | # mariadb settings 125 | [mariadb] 126 | #thread-handling = pool-of-threads 127 | #thread-pool-size= 20 128 | #mysql --port=3307 --protocol=tcp 129 | #extra-port=3307 130 |
#extra-max-connections=1 131 | 132 | userstat = 0 133 | key_cache_segments = 1 134 | aria_group_commit = none 135 | aria_group_commit_interval = 0 136 | aria_log_file_size = 64M 137 | aria_log_purge_type = immediate 138 | aria_pagecache_buffer_size = 64M 139 | aria_sort_buffer_size = 64M 140 | 141 | [mariadb-5.5] 142 | innodb_file_format = Barracuda 143 | innodb_file_per_table = 1 144 | 145 | #ignore_db_dirs= 146 | query_cache_strip_comments=0 147 | 148 | innodb_read_ahead = linear 149 | innodb_adaptive_flushing_method = estimate 150 | innodb_flush_neighbor_pages = 1 151 | innodb_stats_update_need_lock = 0 152 | innodb_log_block_size = 512 153 | 154 | log_slow_filter =admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk 155 | 156 | [mysqld_safe] 157 | socket=/var/run/mysqld/mysqld.sock 158 | #log-error=/var/log/mysqld.log 159 | #nice = -5 160 | open-files-limit = 8192 161 | 162 | [mysqldump] 163 | quick 164 | max_allowed_packet = 256M 165 | 166 | [myisamchk] 167 | tmpdir=/tmp 168 | key_buffer = 128M 169 | sort_buffer = 64M 170 | read_buffer = 64M 171 | write_buffer = 64M 172 | 173 | [mysqlhotcopy] 174 | interactive-timeout 175 | 176 | [mariadb-10.0] 177 | innodb_file_format = Barracuda 178 | innodb_file_per_table = 1 179 | 180 | # 2 variables needed to switch from XtraDB to InnoDB plugins 181 | #plugin-load=ha_innodb 182 | #ignore_builtin_innodb 183 | 184 | ## MariaDB 10 only save and restore buffer pool pages 185 | ## warm up InnoDB buffer pool on server restarts 186 | innodb_buffer_pool_dump_at_shutdown=1 187 | innodb_buffer_pool_load_at_startup=1 188 | innodb_buffer_pool_populate=0 189 | ## Disabled settings 190 | performance_schema=OFF 191 | innodb_stats_on_metadata=OFF 192 | innodb_sort_buffer_size=8M 193 | innodb_online_alter_log_max_size=128M 194 | query_cache_strip_comments=0 195 | log_slow_filter =admin,filesort,filesort_on_disk,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk 196 | -------------------------------------------------------------------------------- /data/etc/redis/redis.conf: -------------------------------------------------------------------------------- 1 | ################################################################################ 2 | # This is property of eXtremeSHOK.com 3 | # You are free to use, modify and distribute, however you may not remove this notice. 4 | # Copyright (c) Adrian Jon Kriel :: admin@extremeshok.com 5 | ################################################################################ 6 | # Redis configuration file example. 7 | # 8 | # Note that in order to read the configuration file, Redis must be 9 | # started with the file path as first argument: 10 | # 11 | # ./redis-server /path/to/redis.conf 12 | 13 | # Note on units: when memory size is needed, it is possible to specify 14 | # it in the usual form of 1k 5GB 4M and so forth: 15 | # 16 | # 1k => 1000 bytes 17 | # 1kb => 1024 bytes 18 | # 1m => 1000000 bytes 19 | # 1mb => 1024*1024 bytes 20 | # 1g => 1000000000 bytes 21 | # 1gb => 1024*1024*1024 bytes 22 | # 23 | # units are case insensitive so 1GB 1Gb 1gB are all the same. 24 | 25 | ################################## INCLUDES ################################### 26 | 27 | # Include one or more other config files here. This is useful if you 28 | # have a standard template that goes to all Redis servers but also need 29 | # to customize a few per-server settings. Include files can include 30 | # other files, so use this wisely. 
31 | # 32 | # Notice option "include" won't be rewritten by command "CONFIG REWRITE" 33 | # from admin or Redis Sentinel. Since Redis always uses the last processed 34 | # line as value of a configuration directive, you'd better put includes 35 | # at the beginning of this file to avoid overwriting config change at runtime. 36 | # 37 | # If instead you are interested in using includes to override configuration 38 | # options, it is better to use include as the last line. 39 | # 40 | # include /path/to/local.conf 41 | # include /path/to/other.conf 42 | 43 | ################################## MODULES ##################################### 44 | 45 | # Load modules at startup. If the server is not able to load modules 46 | # it will abort. It is possible to use multiple loadmodule directives. 47 | # 48 | # loadmodule /path/to/my_module.so 49 | # loadmodule /path/to/other_module.so 50 | 51 | ################################## NETWORK ##################################### 52 | 53 | # By default, if no "bind" configuration directive is specified, Redis listens 54 | # for connections from all the network interfaces available on the server. 55 | # It is possible to listen to just one or multiple selected interfaces using 56 | # the "bind" configuration directive, followed by one or more IP addresses. 57 | # 58 | # Examples: 59 | # 60 | # bind 192.168.1.100 10.0.0.1 61 | # bind 127.0.0.1 ::1 62 | # 63 | # ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the 64 | # internet, binding to all the interfaces is dangerous and will expose the 65 | # instance to everybody on the internet. So by default we uncomment the 66 | # following bind directive, that will force Redis to listen only into 67 | # the IPv4 loopback interface address (this means Redis will be able to 68 | # accept connections only from clients running into the same computer it 69 | # is running). 70 | # 71 | # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES 72 | # JUST COMMENT THE FOLLOWING LINE. 73 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 74 | bind 0.0.0.0 75 | 76 | # Protected mode is a layer of security protection, in order to avoid that 77 | # Redis instances left open on the internet are accessed and exploited. 78 | # 79 | # When protected mode is on and if: 80 | # 81 | # 1) The server is not binding explicitly to a set of addresses using the 82 | # "bind" directive. 83 | # 2) No password is configured. 84 | # 85 | # The server only accepts connections from clients connecting from the 86 | # IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain 87 | # sockets. 88 | # 89 | # By default protected mode is enabled. You should disable it only if 90 | # you are sure you want clients from other hosts to connect to Redis 91 | # even if no authentication is configured, nor a specific set of interfaces 92 | # are explicitly listed using the "bind" directive. 93 | protected-mode no 94 | 95 | # Accept connections on the specified port, default is 6379 (IANA #815344). 96 | # If port 0 is specified Redis will not listen on a TCP socket. 97 | port 6379 98 | 99 | # TCP listen() backlog. 100 | # 101 | # In high requests-per-second environments you need an high backlog in order 102 | # to avoid slow clients connections issues. Note that the Linux kernel 103 | # will silently truncate it to the value of /proc/sys/net/core/somaxconn so 104 | # make sure to raise both the value of somaxconn and tcp_max_syn_backlog 105 | # in order to get the desired effect. 
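# As an illustration only (the values below are examples, not something this
# repo sets for you), those kernel limits could be raised on the docker host with:
#   sysctl -w net.core.somaxconn=1024
#   sysctl -w net.ipv4.tcp_max_syn_backlog=1024
# and made persistent via an /etc/sysctl.d/ drop-in.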
106 | tcp-backlog 511 107 | 108 | # Unix socket. 109 | # 110 | # Specify the path for the Unix socket that will be used to listen for 111 | # incoming connections. There is no default, so Redis will not listen 112 | # on a unix socket when not specified. 113 | # 114 | #unixsocket /run/redis/redis.sock 115 | #unixsocketperm 770 116 | 117 | # Close the connection after a client is idle for N seconds (0 to disable) 118 | timeout 0 119 | 120 | # TCP keepalive. 121 | # 122 | # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence 123 | # of communication. This is useful for two reasons: 124 | # 125 | # 1) Detect dead peers. 126 | # 2) Take the connection alive from the point of view of network 127 | # equipment in the middle. 128 | # 129 | # On Linux, the specified value (in seconds) is the period used to send ACKs. 130 | # Note that to close the connection the double of the time is needed. 131 | # On other kernels the period depends on the kernel configuration. 132 | # 133 | # A reasonable value for this option is 300 seconds, which is the new 134 | # Redis default starting with Redis 3.2.1. 135 | tcp-keepalive 300 136 | 137 | ################################# GENERAL ##################################### 138 | 139 | # If you run Redis from upstart or systemd, Redis can interact with your 140 | # supervision tree. Options: 141 | # supervised no - no supervision interaction 142 | # supervised upstart - signal upstart by putting Redis into SIGSTOP mode 143 | # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET 144 | # supervised auto - detect upstart or systemd method based on 145 | # UPSTART_JOB or NOTIFY_SOCKET environment variables 146 | # Note: these supervision methods only signal "process is ready." 147 | # They do not enable continuous liveness pings back to your supervisor. 148 | supervised no 149 | 150 | # Specify the server verbosity level. 151 | # This can be one of: 152 | # debug (a lot of information, useful for development/testing) 153 | # verbose (many rarely useful info, but not a mess like the debug level) 154 | # notice (moderately verbose, what you want in production probably) 155 | # warning (only very important / critical messages are logged) 156 | loglevel notice 157 | 158 | # Specify the log file name. Also the empty string can be used to force 159 | # Redis to log on the standard output. Note that if you use standard 160 | # output for logging but daemonize, logs will be sent to /dev/null 161 | #logfile /var/log/redis/redis.log 162 | 163 | # To enable logging to the system logger, just set 'syslog-enabled' to yes, 164 | # and optionally update the other syslog parameters to suit your needs. 165 | # syslog-enabled no 166 | 167 | # Specify the syslog identity. 168 | # syslog-ident redis 169 | 170 | # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. 171 | # syslog-facility local0 172 | 173 | # Set the number of databases. The default database is DB 0, you can select 174 | # a different one on a per-connection basis using SELECT where 175 | # dbid is a number between 0 and 'databases'-1 176 | databases 16 177 | 178 | # By default Redis shows an ASCII art logo only when started to log to the 179 | # standard output and if the standard output is a TTY. Basically this means 180 | # that normally a logo is displayed only in interactive sessions. 181 | # 182 | # However it is possible to force the pre-4.0 behavior and always show a 183 | # ASCII art logo in startup logs by setting the following option to yes. 
184 | always-show-logo no 185 | 186 | ################################ SNAPSHOTTING ################################ 187 | # 188 | # Save the DB on disk: 189 | # 190 | # save 191 | # 192 | # Will save the DB if both the given number of seconds and the given 193 | # number of write operations against the DB occurred. 194 | # 195 | # In the example below the behaviour will be to save: 196 | # after 900 sec (15 min) if at least 1 key changed 197 | # after 300 sec (5 min) if at least 10 keys changed 198 | # after 60 sec if at least 10000 keys changed 199 | # 200 | # Note: you can disable saving completely by commenting out all "save" lines. 201 | # 202 | # It is also possible to remove all the previously configured save 203 | # points by adding a save directive with a single empty string argument 204 | # like in the following example: 205 | # 206 | # save "" 207 | 208 | save 900 1 209 | save 300 10 210 | save 60 10000 211 | 212 | # By default Redis will stop accepting writes if RDB snapshots are enabled 213 | # (at least one save point) and the latest background save failed. 214 | # This will make the user aware (in a hard way) that data is not persisting 215 | # on disk properly, otherwise chances are that no one will notice and some 216 | # disaster will happen. 217 | # 218 | # If the background saving process will start working again Redis will 219 | # automatically allow writes again. 220 | # 221 | # However if you have setup your proper monitoring of the Redis server 222 | # and persistence, you may want to disable this feature so that Redis will 223 | # continue to work as usual even if there are problems with disk, 224 | # permissions, and so forth. 225 | stop-writes-on-bgsave-error yes 226 | 227 | # Compress string objects using LZF when dump .rdb databases? 228 | # For default that's set to 'yes' as it's almost always a win. 229 | # If you want to save some CPU in the saving child set it to 'no' but 230 | # the dataset will likely be bigger if you have compressible values or keys. 231 | rdbcompression yes 232 | 233 | # Since version 5 of RDB a CRC64 checksum is placed at the end of the file. 234 | # This makes the format more resistant to corruption but there is a performance 235 | # hit to pay (around 10%) when saving and loading RDB files, so you can disable it 236 | # for maximum performances. 237 | # 238 | # RDB files created with checksum disabled have a checksum of zero that will 239 | # tell the loading code to skip the check. 240 | rdbchecksum yes 241 | 242 | # The filename where to dump the DB 243 | dbfilename dump.rdb 244 | 245 | # The working directory. 246 | # 247 | # The DB will be written inside this directory, with the filename specified 248 | # above using the 'dbfilename' configuration directive. 249 | # 250 | # The Append Only File will also be created inside this directory. 251 | # 252 | # Note that you must specify a directory here, not a file name. 253 | dir /var/lib/redis 254 | 255 | ################################# REPLICATION ################################# 256 | 257 | # Master-Replica replication. Use replicaof to make a Redis instance a copy of 258 | # another Redis server. A few things to understand ASAP about Redis replication. 
259 | # 260 | # +------------------+ +---------------+ 261 | # | Master | ---> | Replica | 262 | # | (receive writes) | | (exact copy) | 263 | # +------------------+ +---------------+ 264 | # 265 | # 1) Redis replication is asynchronous, but you can configure a master to 266 | # stop accepting writes if it appears to be not connected with at least 267 | # a given number of replicas. 268 | # 2) Redis replicas are able to perform a partial resynchronization with the 269 | # master if the replication link is lost for a relatively small amount of 270 | # time. You may want to configure the replication backlog size (see the next 271 | # sections of this file) with a sensible value depending on your needs. 272 | # 3) Replication is automatic and does not need user intervention. After a 273 | # network partition replicas automatically try to reconnect to masters 274 | # and resynchronize with them. 275 | # 276 | # replicaof 277 | 278 | # If the master is password protected (using the "requirepass" configuration 279 | # directive below) it is possible to tell the replica to authenticate before 280 | # starting the replication synchronization process, otherwise the master will 281 | # refuse the replica request. 282 | # 283 | # masterauth 284 | 285 | # When a replica loses its connection with the master, or when the replication 286 | # is still in progress, the replica can act in two different ways: 287 | # 288 | # 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will 289 | # still reply to client requests, possibly with out of date data, or the 290 | # data set may just be empty if this is the first synchronization. 291 | # 292 | # 2) if replica-serve-stale-data is set to 'no' the replica will reply with 293 | # an error "SYNC with master in progress" to all the kind of commands 294 | # but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG, 295 | # SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, 296 | # COMMAND, POST, HOST: and LATENCY. 297 | # 298 | replica-serve-stale-data yes 299 | 300 | # You can configure a replica instance to accept writes or not. Writing against 301 | # a replica instance may be useful to store some ephemeral data (because data 302 | # written on a replica will be easily deleted after resync with the master) but 303 | # may also cause problems if clients are writing to it because of a 304 | # misconfiguration. 305 | # 306 | # Since Redis 2.6 by default replicas are read-only. 307 | # 308 | # Note: read only replicas are not designed to be exposed to untrusted clients 309 | # on the internet. It's just a protection layer against misuse of the instance. 310 | # Still a read only replica exports by default all the administrative commands 311 | # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve 312 | # security of read only replicas using 'rename-command' to shadow all the 313 | # administrative / dangerous commands. 314 | replica-read-only yes 315 | 316 | # Replication SYNC strategy: disk or socket. 317 | # 318 | # ------------------------------------------------------- 319 | # WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY 320 | # ------------------------------------------------------- 321 | # 322 | # New replicas and reconnecting replicas that are not able to continue the replication 323 | # process just receiving differences, need to do what is called a "full 324 | # synchronization". An RDB file is transmitted from the master to the replicas. 
325 | # The transmission can happen in two different ways: 326 | # 327 | # 1) Disk-backed: The Redis master creates a new process that writes the RDB 328 | # file on disk. Later the file is transferred by the parent 329 | # process to the replicas incrementally. 330 | # 2) Diskless: The Redis master creates a new process that directly writes the 331 | # RDB file to replica sockets, without touching the disk at all. 332 | # 333 | # With disk-backed replication, while the RDB file is generated, more replicas 334 | # can be queued and served with the RDB file as soon as the current child producing 335 | # the RDB file finishes its work. With diskless replication instead once 336 | # the transfer starts, new replicas arriving will be queued and a new transfer 337 | # will start when the current one terminates. 338 | # 339 | # When diskless replication is used, the master waits a configurable amount of 340 | # time (in seconds) before starting the transfer in the hope that multiple replicas 341 | # will arrive and the transfer can be parallelized. 342 | # 343 | # With slow disks and fast (large bandwidth) networks, diskless replication 344 | # works better. 345 | repl-diskless-sync no 346 | 347 | # When diskless replication is enabled, it is possible to configure the delay 348 | # the server waits in order to spawn the child that transfers the RDB via socket 349 | # to the replicas. 350 | # 351 | # This is important since once the transfer starts, it is not possible to serve 352 | # new replicas arriving, that will be queued for the next RDB transfer, so the server 353 | # waits a delay in order to let more replicas arrive. 354 | # 355 | # The delay is specified in seconds, and by default is 5 seconds. To disable 356 | # it entirely just set it to 0 seconds and the transfer will start ASAP. 357 | repl-diskless-sync-delay 5 358 | 359 | # Replicas send PINGs to server in a predefined interval. It's possible to change 360 | # this interval with the repl_ping_replica_period option. The default value is 10 361 | # seconds. 362 | # 363 | # repl-ping-replica-period 10 364 | 365 | # The following option sets the replication timeout for: 366 | # 367 | # 1) Bulk transfer I/O during SYNC, from the point of view of replica. 368 | # 2) Master timeout from the point of view of replicas (data, pings). 369 | # 3) Replica timeout from the point of view of masters (REPLCONF ACK pings). 370 | # 371 | # It is important to make sure that this value is greater than the value 372 | # specified for repl-ping-replica-period otherwise a timeout will be detected 373 | # every time there is low traffic between the master and the replica. 374 | # 375 | # repl-timeout 60 376 | 377 | # Disable TCP_NODELAY on the replica socket after SYNC? 378 | # 379 | # If you select "yes" Redis will use a smaller number of TCP packets and 380 | # less bandwidth to send data to replicas. But this can add a delay for 381 | # the data to appear on the replica side, up to 40 milliseconds with 382 | # Linux kernels using a default configuration. 383 | # 384 | # If you select "no" the delay for data to appear on the replica side will 385 | # be reduced but more bandwidth will be used for replication. 386 | # 387 | # By default we optimize for low latency, but in very high traffic conditions 388 | # or when the master and replicas are many hops away, turning this to "yes" may 389 | # be a good idea. 390 | repl-disable-tcp-nodelay no 391 | 392 | # Set the replication backlog size. 
The backlog is a buffer that accumulates 393 | # replica data when replicas are disconnected for some time, so that when a replica 394 | # wants to reconnect again, often a full resync is not needed, but a partial 395 | # resync is enough, just passing the portion of data the replica missed while 396 | # disconnected. 397 | # 398 | # The bigger the replication backlog, the longer the time the replica can be 399 | # disconnected and later be able to perform a partial resynchronization. 400 | # 401 | # The backlog is only allocated once there is at least a replica connected. 402 | # 403 | # repl-backlog-size 1mb 404 | 405 | # After a master has no longer connected replicas for some time, the backlog 406 | # will be freed. The following option configures the amount of seconds that 407 | # need to elapse, starting from the time the last replica disconnected, for 408 | # the backlog buffer to be freed. 409 | # 410 | # Note that replicas never free the backlog for timeout, since they may be 411 | # promoted to masters later, and should be able to correctly "partially 412 | # resynchronize" with the replicas: hence they should always accumulate backlog. 413 | # 414 | # A value of 0 means to never release the backlog. 415 | # 416 | # repl-backlog-ttl 3600 417 | 418 | # The replica priority is an integer number published by Redis in the INFO output. 419 | # It is used by Redis Sentinel in order to select a replica to promote into a 420 | # master if the master is no longer working correctly. 421 | # 422 | # A replica with a low priority number is considered better for promotion, so 423 | # for instance if there are three replicas with priority 10, 100, 25 Sentinel will 424 | # pick the one with priority 10, that is the lowest. 425 | # 426 | # However a special priority of 0 marks the replica as not able to perform the 427 | # role of master, so a replica with priority of 0 will never be selected by 428 | # Redis Sentinel for promotion. 429 | # 430 | # By default the priority is 100. 431 | replica-priority 100 432 | 433 | # It is possible for a master to stop accepting writes if there are less than 434 | # N replicas connected, having a lag less or equal than M seconds. 435 | # 436 | # The N replicas need to be in "online" state. 437 | # 438 | # The lag in seconds, that must be <= the specified value, is calculated from 439 | # the last ping received from the replica, that is usually sent every second. 440 | # 441 | # This option does not GUARANTEE that N replicas will accept the write, but 442 | # will limit the window of exposure for lost writes in case not enough replicas 443 | # are available, to the specified number of seconds. 444 | # 445 | # For example to require at least 3 replicas with a lag <= 10 seconds use: 446 | # 447 | # min-replicas-to-write 3 448 | # min-replicas-max-lag 10 449 | # 450 | # Setting one or the other to 0 disables the feature. 451 | # 452 | # By default min-replicas-to-write is set to 0 (feature disabled) and 453 | # min-replicas-max-lag is set to 10. 454 | 455 | # A Redis master is able to list the address and port of the attached 456 | # replicas in different ways. For example the "INFO replication" section 457 | # offers this information, which is used, among other tools, by 458 | # Redis Sentinel in order to discover replica instances. 459 | # Another place where this info is available is in the output of the 460 | # "ROLE" command of a master. 
461 | # 462 | # The listed IP and address normally reported by a replica is obtained 463 | # in the following way: 464 | # 465 | # IP: The address is auto detected by checking the peer address 466 | # of the socket used by the replica to connect with the master. 467 | # 468 | # Port: The port is communicated by the replica during the replication 469 | # handshake, and is normally the port that the replica is using to 470 | # listen for connections. 471 | # 472 | # However when port forwarding or Network Address Translation (NAT) is 473 | # used, the replica may be actually reachable via different IP and port 474 | # pairs. The following two options can be used by a replica in order to 475 | # report to its master a specific set of IP and port, so that both INFO 476 | # and ROLE will report those values. 477 | # 478 | # There is no need to use both the options if you need to override just 479 | # the port or the IP address. 480 | # 481 | # replica-announce-ip 5.5.5.5 482 | # replica-announce-port 1234 483 | 484 | ################################## SECURITY ################################### 485 | 486 | # Require clients to issue AUTH before processing any other 487 | # commands. This might be useful in environments in which you do not trust 488 | # others with access to the host running redis-server. 489 | # 490 | # This should stay commented out for backward compatibility and because most 491 | # people do not need auth (e.g. they run their own servers). 492 | # 493 | # Warning: since Redis is pretty fast an outside user can try up to 494 | # 150k passwords per second against a good box. This means that you should 495 | # use a very strong password otherwise it will be very easy to break. 496 | # 497 | # requirepass foobared 498 | 499 | # Command renaming. 500 | # 501 | # It is possible to change the name of dangerous commands in a shared 502 | # environment. For instance the CONFIG command may be renamed into something 503 | # hard to guess so that it will still be available for internal-use tools 504 | # but not available for general clients. 505 | # 506 | # Example: 507 | # 508 | # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 509 | # 510 | # It is also possible to completely kill a command by renaming it into 511 | # an empty string: 512 | # 513 | rename-command CONFIG "" 514 | # 515 | # Please note that changing the name of commands that are logged into the 516 | # AOF file or transmitted to replicas may cause problems. 517 | 518 | ################################### CLIENTS #################################### 519 | 520 | # Set the max number of connected clients at the same time. By default 521 | # this limit is set to 10000 clients, however if the Redis server is not 522 | # able to configure the process file limit to allow for the specified limit 523 | # the max number of allowed clients is set to the current file limit 524 | # minus 32 (as Redis reserves a few file descriptors for internal uses). 525 | # 526 | # Once the limit is reached Redis will close all the new connections sending 527 | # an error 'max number of clients reached'. 528 | # 529 | # maxclients 10000 530 | 531 | ############################## MEMORY MANAGEMENT ################################ 532 | 533 | # Set a memory usage limit to the specified amount of bytes. 534 | # When the memory limit is reached Redis will try to remove keys 535 | # according to the eviction policy selected (see maxmemory-policy). 
536 | # 537 | # If Redis can't remove keys according to the policy, or if the policy is 538 | # set to 'noeviction', Redis will start to reply with errors to commands 539 | # that would use more memory, like SET, LPUSH, and so on, and will continue 540 | # to reply to read-only commands like GET. 541 | # 542 | # This option is usually useful when using Redis as an LRU or LFU cache, or to 543 | # set a hard memory limit for an instance (using the 'noeviction' policy). 544 | # 545 | # WARNING: If you have replicas attached to an instance with maxmemory on, 546 | # the size of the output buffers needed to feed the replicas are subtracted 547 | # from the used memory count, so that network problems / resyncs will 548 | # not trigger a loop where keys are evicted, and in turn the output 549 | # buffer of replicas is full with DELs of keys evicted triggering the deletion 550 | # of more keys, and so forth until the database is completely emptied. 551 | # 552 | # In short... if you have replicas attached it is suggested that you set a lower 553 | # limit for maxmemory so that there is some free RAM on the system for replica 554 | # output buffers (but this is not needed if the policy is 'noeviction'). 555 | # 556 | maxmemory 512M 557 | 558 | # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory 559 | # is reached. You can select among five behaviors: 560 | # 561 | # volatile-lru -> Evict using approximated LRU among the keys with an expire set. 562 | # allkeys-lru -> Evict any key using approximated LRU. 563 | # volatile-lfu -> Evict using approximated LFU among the keys with an expire set. 564 | # allkeys-lfu -> Evict any key using approximated LFU. 565 | # volatile-random -> Remove a random key among the ones with an expire set. 566 | # allkeys-random -> Remove a random key, any key. 567 | # volatile-ttl -> Remove the key with the nearest expire time (minor TTL) 568 | # noeviction -> Don't evict anything, just return an error on write operations. 569 | # 570 | # LRU means Least Recently Used 571 | # LFU means Least Frequently Used 572 | # 573 | # Both LRU, LFU and volatile-ttl are implemented using approximated 574 | # randomized algorithms. 575 | # 576 | # Note: with any of the above policies, Redis will return an error on write 577 | # operations, when there are no suitable keys for eviction. 578 | # 579 | # At the date of writing these commands are: set setnx setex append 580 | # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd 581 | # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby 582 | # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby 583 | # getset mset msetnx exec sort 584 | # 585 | # The default is: 586 | # 587 | # maxmemory-policy noeviction 588 | maxmemory-policy allkeys-lru 589 | 590 | # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated 591 | # algorithms (in order to save memory), so you can tune it for speed or 592 | # accuracy. For default Redis will check five keys and pick the one that was 593 | # used less recently, you can change the sample size using the following 594 | # configuration directive. 595 | # 596 | # The default of 5 produces good enough results. 10 Approximates very closely 597 | # true LRU but costs more CPU. 3 is faster but not very accurate. 598 | # 599 | # maxmemory-samples 5 600 | 601 | # Starting from Redis 5, by default a replica will ignore its maxmemory setting 602 | # (unless it is promoted to master after a failover or manually). 
It means 603 | # that the eviction of keys will be just handled by the master, sending the 604 | # DEL commands to the replica as keys evict in the master side. 605 | # 606 | # This behavior ensures that masters and replicas stay consistent, and is usually 607 | # what you want, however if your replica is writable, or you want the replica to have 608 | # a different memory setting, and you are sure all the writes performed to the 609 | # replica are idempotent, then you may change this default (but be sure to understand 610 | # what you are doing). 611 | # 612 | # Note that since the replica by default does not evict, it may end using more 613 | # memory than the one set via maxmemory (there are certain buffers that may 614 | # be larger on the replica, or data structures may sometimes take more memory and so 615 | # forth). So make sure you monitor your replicas and make sure they have enough 616 | # memory to never hit a real out-of-memory condition before the master hits 617 | # the configured maxmemory setting. 618 | # 619 | # replica-ignore-maxmemory yes 620 | 621 | ############################# LAZY FREEING #################################### 622 | 623 | # Redis has two primitives to delete keys. One is called DEL and is a blocking 624 | # deletion of the object. It means that the server stops processing new commands 625 | # in order to reclaim all the memory associated with an object in a synchronous 626 | # way. If the key deleted is associated with a small object, the time needed 627 | # in order to execute the DEL command is very small and comparable to most other 628 | # O(1) or O(log_N) commands in Redis. However if the key is associated with an 629 | # aggregated value containing millions of elements, the server can block for 630 | # a long time (even seconds) in order to complete the operation. 631 | # 632 | # For the above reasons Redis also offers non blocking deletion primitives 633 | # such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and 634 | # FLUSHDB commands, in order to reclaim memory in background. Those commands 635 | # are executed in constant time. Another thread will incrementally free the 636 | # object in the background as fast as possible. 637 | # 638 | # DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled. 639 | # It's up to the design of the application to understand when it is a good 640 | # idea to use one or the other. However the Redis server sometimes has to 641 | # delete keys or flush the whole database as a side effect of other operations. 642 | # Specifically Redis deletes objects independently of a user call in the 643 | # following scenarios: 644 | # 645 | # 1) On eviction, because of the maxmemory and maxmemory policy configurations, 646 | # in order to make room for new data, without going over the specified 647 | # memory limit. 648 | # 2) Because of expire: when a key with an associated time to live (see the 649 | # EXPIRE command) must be deleted from memory. 650 | # 3) Because of a side effect of a command that stores data on a key that may 651 | # already exist. For example the RENAME command may delete the old key 652 | # content when it is replaced with another one. Similarly SUNIONSTORE 653 | # or SORT with STORE option may delete existing keys. The SET command 654 | # itself removes any old content of the specified key in order to replace 655 | # it with the specified string. 
656 | # 4) During replication, when a replica performs a full resynchronization with 657 | # its master, the content of the whole database is removed in order to 658 | # load the RDB file just transferred. 659 | # 660 | # In all the above cases the default is to delete objects in a blocking way, 661 | # like if DEL was called. However you can configure each case specifically 662 | # in order to instead release memory in a non-blocking way like if UNLINK 663 | # was called, using the following configuration directives: 664 | 665 | lazyfree-lazy-eviction yes 666 | lazyfree-lazy-expire yes 667 | lazyfree-lazy-server-del yes 668 | replica-lazy-flush yes 669 | 670 | ############################## APPEND ONLY MODE ############################### 671 | 672 | # By default Redis asynchronously dumps the dataset on disk. This mode is 673 | # good enough in many applications, but an issue with the Redis process or 674 | # a power outage may result into a few minutes of writes lost (depending on 675 | # the configured save points). 676 | # 677 | # The Append Only File is an alternative persistence mode that provides 678 | # much better durability. For instance using the default data fsync policy 679 | # (see later in the config file) Redis can lose just one second of writes in a 680 | # dramatic event like a server power outage, or a single write if something 681 | # wrong with the Redis process itself happens, but the operating system is 682 | # still running correctly. 683 | # 684 | # AOF and RDB persistence can be enabled at the same time without problems. 685 | # If the AOF is enabled on startup Redis will load the AOF, that is the file 686 | # with the better durability guarantees. 687 | # 688 | # Please check http://redis.io/topics/persistence for more information. 689 | 690 | appendonly no 691 | 692 | # The name of the append only file (default: "appendonly.aof") 693 | 694 | appendfilename "appendonly.aof" 695 | 696 | # The fsync() call tells the Operating System to actually write data on disk 697 | # instead of waiting for more data in the output buffer. Some OS will really flush 698 | # data on disk, some other OS will just try to do it ASAP. 699 | # 700 | # Redis supports three different modes: 701 | # 702 | # no: don't fsync, just let the OS flush the data when it wants. Faster. 703 | # always: fsync after every write to the append only log. Slow, Safest. 704 | # everysec: fsync only one time every second. Compromise. 705 | # 706 | # The default is "everysec", as that's usually the right compromise between 707 | # speed and data safety. It's up to you to understand if you can relax this to 708 | # "no" that will let the operating system flush the output buffer when 709 | # it wants, for better performances (but if you can live with the idea of 710 | # some data loss consider the default persistence mode that's snapshotting), 711 | # or on the contrary, use "always" that's very slow but a bit safer than 712 | # everysec. 713 | # 714 | # More details please check the following article: 715 | # http://antirez.com/post/redis-persistence-demystified.html 716 | # 717 | # If unsure, use "everysec". 718 | 719 | # appendfsync always 720 | appendfsync everysec 721 | # appendfsync no 722 | 723 | # When the AOF fsync policy is set to always or everysec, and a background 724 | # saving process (a background save or AOF log background rewriting) is 725 | # performing a lot of I/O against the disk, in some Linux configurations 726 | # Redis may block too long on the fsync() call. 
Note that there is no fix for 727 | # this currently, as even performing fsync in a different thread will block 728 | # our synchronous write(2) call. 729 | # 730 | # In order to mitigate this problem it's possible to use the following option 731 | # that will prevent fsync() from being called in the main process while a 732 | # BGSAVE or BGREWRITEAOF is in progress. 733 | # 734 | # This means that while another child is saving, the durability of Redis is 735 | # the same as "appendfsync none". In practical terms, this means that it is 736 | # possible to lose up to 30 seconds of log in the worst scenario (with the 737 | # default Linux settings). 738 | # 739 | # If you have latency problems turn this to "yes". Otherwise leave it as 740 | # "no" that is the safest pick from the point of view of durability. 741 | 742 | no-appendfsync-on-rewrite no 743 | 744 | # Automatic rewrite of the append only file. 745 | # Redis is able to automatically rewrite the log file implicitly calling 746 | # BGREWRITEAOF when the AOF log size grows by the specified percentage. 747 | # 748 | # This is how it works: Redis remembers the size of the AOF file after the 749 | # latest rewrite (if no rewrite has happened since the restart, the size of 750 | # the AOF at startup is used). 751 | # 752 | # This base size is compared to the current size. If the current size is 753 | # bigger than the specified percentage, the rewrite is triggered. Also 754 | # you need to specify a minimal size for the AOF file to be rewritten, this 755 | # is useful to avoid rewriting the AOF file even if the percentage increase 756 | # is reached but it is still pretty small. 757 | # 758 | # Specify a percentage of zero in order to disable the automatic AOF 759 | # rewrite feature. 760 | 761 | auto-aof-rewrite-percentage 100 762 | auto-aof-rewrite-min-size 64mb 763 | 764 | # An AOF file may be found to be truncated at the end during the Redis 765 | # startup process, when the AOF data gets loaded back into memory. 766 | # This may happen when the system where Redis is running 767 | # crashes, especially when an ext4 filesystem is mounted without the 768 | # data=ordered option (however this can't happen when Redis itself 769 | # crashes or aborts but the operating system still works correctly). 770 | # 771 | # Redis can either exit with an error when this happens, or load as much 772 | # data as possible (the default now) and start if the AOF file is found 773 | # to be truncated at the end. The following option controls this behavior. 774 | # 775 | # If aof-load-truncated is set to yes, a truncated AOF file is loaded and 776 | # the Redis server starts emitting a log to inform the user of the event. 777 | # Otherwise if the option is set to no, the server aborts with an error 778 | # and refuses to start. When the option is set to no, the user requires 779 | # to fix the AOF file using the "redis-check-aof" utility before to restart 780 | # the server. 781 | # 782 | # Note that if the AOF file will be found to be corrupted in the middle 783 | # the server will still exit with an error. This option only applies when 784 | # Redis will try to read more data from the AOF file but not enough bytes 785 | # will be found. 786 | aof-load-truncated yes 787 | 788 | # When rewriting the AOF file, Redis is able to use an RDB preamble in the 789 | # AOF file for faster rewrites and recoveries. 
When this option is turned 790 | # on the rewritten AOF file is composed of two different stanzas: 791 | # 792 | # [RDB file][AOF tail] 793 | # 794 | # When loading Redis recognizes that the AOF file starts with the "REDIS" 795 | # string and loads the prefixed RDB file, and continues loading the AOF 796 | # tail. 797 | aof-use-rdb-preamble yes 798 | 799 | ################################ LUA SCRIPTING ############################### 800 | 801 | # Max execution time of a Lua script in milliseconds. 802 | # 803 | # If the maximum execution time is reached Redis will log that a script is 804 | # still in execution after the maximum allowed time and will start to 805 | # reply to queries with an error. 806 | # 807 | # When a long running script exceeds the maximum execution time only the 808 | # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be 809 | # used to stop a script that did not yet called write commands. The second 810 | # is the only way to shut down the server in the case a write command was 811 | # already issued by the script but the user doesn't want to wait for the natural 812 | # termination of the script. 813 | # 814 | # Set it to 0 or a negative value for unlimited execution without warnings. 815 | lua-time-limit 5000 816 | 817 | ################################ REDIS CLUSTER ############################### 818 | 819 | # Normal Redis instances can't be part of a Redis Cluster; only nodes that are 820 | # started as cluster nodes can. In order to start a Redis instance as a 821 | # cluster node enable the cluster support uncommenting the following: 822 | # 823 | # cluster-enabled yes 824 | 825 | # Every cluster node has a cluster configuration file. This file is not 826 | # intended to be edited by hand. It is created and updated by Redis nodes. 827 | # Every Redis Cluster node requires a different cluster configuration file. 828 | # Make sure that instances running in the same system do not have 829 | # overlapping cluster configuration file names. 830 | # 831 | # cluster-config-file nodes-6379.conf 832 | 833 | # Cluster node timeout is the amount of milliseconds a node must be unreachable 834 | # for it to be considered in failure state. 835 | # Most other internal time limits are multiple of the node timeout. 836 | # 837 | # cluster-node-timeout 15000 838 | 839 | # A replica of a failing master will avoid to start a failover if its data 840 | # looks too old. 841 | # 842 | # There is no simple way for a replica to actually have an exact measure of 843 | # its "data age", so the following two checks are performed: 844 | # 845 | # 1) If there are multiple replicas able to failover, they exchange messages 846 | # in order to try to give an advantage to the replica with the best 847 | # replication offset (more data from the master processed). 848 | # Replicas will try to get their rank by offset, and apply to the start 849 | # of the failover a delay proportional to their rank. 850 | # 851 | # 2) Every single replica computes the time of the last interaction with 852 | # its master. This can be the last ping or command received (if the master 853 | # is still in the "connected" state), or the time that elapsed since the 854 | # disconnection with the master (if the replication link is currently down). 855 | # If the last interaction is too old, the replica will not try to failover 856 | # at all. 857 | # 858 | # The point "2" can be tuned by user. 
Specifically a replica will not perform 859 | # the failover if, since the last interaction with the master, the time 860 | # elapsed is greater than: 861 | # 862 | # (node-timeout * replica-validity-factor) + repl-ping-replica-period 863 | # 864 | # So for example if node-timeout is 30 seconds, and the replica-validity-factor 865 | # is 10, and assuming a default repl-ping-replica-period of 10 seconds, the 866 | # replica will not try to failover if it was not able to talk with the master 867 | # for longer than 310 seconds. 868 | # 869 | # A large replica-validity-factor may allow replicas with too old data to failover 870 | # a master, while a too small value may prevent the cluster from being able to 871 | # elect a replica at all. 872 | # 873 | # For maximum availability, it is possible to set the replica-validity-factor 874 | # to a value of 0, which means, that replicas will always try to failover the 875 | # master regardless of the last time they interacted with the master. 876 | # (However they'll always try to apply a delay proportional to their 877 | # offset rank). 878 | # 879 | # Zero is the only value able to guarantee that when all the partitions heal 880 | # the cluster will always be able to continue. 881 | # 882 | # cluster-replica-validity-factor 10 883 | 884 | # Cluster replicas are able to migrate to orphaned masters, that are masters 885 | # that are left without working replicas. This improves the cluster ability 886 | # to resist to failures as otherwise an orphaned master can't be failed over 887 | # in case of failure if it has no working replicas. 888 | # 889 | # Replicas migrate to orphaned masters only if there are still at least a 890 | # given number of other working replicas for their old master. This number 891 | # is the "migration barrier". A migration barrier of 1 means that a replica 892 | # will migrate only if there is at least 1 other working replica for its master 893 | # and so forth. It usually reflects the number of replicas you want for every 894 | # master in your cluster. 895 | # 896 | # Default is 1 (replicas migrate only if their masters remain with at least 897 | # one replica). To disable migration just set it to a very large value. 898 | # A value of 0 can be set but is useful only for debugging and dangerous 899 | # in production. 900 | # 901 | # cluster-migration-barrier 1 902 | 903 | # By default Redis Cluster nodes stop accepting queries if they detect there 904 | # is at least an hash slot uncovered (no available node is serving it). 905 | # This way if the cluster is partially down (for example a range of hash slots 906 | # are no longer covered) all the cluster becomes, eventually, unavailable. 907 | # It automatically returns available as soon as all the slots are covered again. 908 | # 909 | # However sometimes you want the subset of the cluster which is working, 910 | # to continue to accept queries for the part of the key space that is still 911 | # covered. In order to do so, just set the cluster-require-full-coverage 912 | # option to no. 913 | # 914 | # cluster-require-full-coverage yes 915 | 916 | # This option, when set to yes, prevents replicas from trying to failover its 917 | # master during master failures. However the master can still perform a 918 | # manual failover, if forced to do so. 919 | # 920 | # This is useful in different scenarios, especially in the case of multiple 921 | # data center operations, where we want one side to never be promoted if not 922 | # in the case of a total DC failure. 
923 | # 924 | # cluster-replica-no-failover no 925 | 926 | # In order to setup your cluster make sure to read the documentation 927 | # available at http://redis.io web site. 928 | 929 | ########################## CLUSTER DOCKER/NAT support ######################## 930 | 931 | # In certain deployments, Redis Cluster nodes address discovery fails, because 932 | # addresses are NAT-ted or because ports are forwarded (the typical case is 933 | # Docker and other containers). 934 | # 935 | # In order to make Redis Cluster working in such environments, a static 936 | # configuration where each node knows its public address is needed. The 937 | # following two options are used for this scope, and are: 938 | # 939 | # * cluster-announce-ip 940 | # * cluster-announce-port 941 | # * cluster-announce-bus-port 942 | # 943 | # Each instruct the node about its address, client port, and cluster message 944 | # bus port. The information is then published in the header of the bus packets 945 | # so that other nodes will be able to correctly map the address of the node 946 | # publishing the information. 947 | # 948 | # If the above options are not used, the normal Redis Cluster auto-detection 949 | # will be used instead. 950 | # 951 | # Note that when remapped, the bus port may not be at the fixed offset of 952 | # clients port + 10000, so you can specify any port and bus-port depending 953 | # on how they get remapped. If the bus-port is not set, a fixed offset of 954 | # 10000 will be used as usually. 955 | # 956 | # Example: 957 | # 958 | # cluster-announce-ip 10.1.1.5 959 | # cluster-announce-port 6379 960 | # cluster-announce-bus-port 6380 961 | 962 | ################################## SLOW LOG ################################### 963 | 964 | # The Redis Slow Log is a system to log queries that exceeded a specified 965 | # execution time. The execution time does not include the I/O operations 966 | # like talking with the client, sending the reply and so forth, 967 | # but just the time needed to actually execute the command (this is the only 968 | # stage of command execution where the thread is blocked and can not serve 969 | # other requests in the meantime). 970 | # 971 | # You can configure the slow log with two parameters: one tells Redis 972 | # what is the execution time, in microseconds, to exceed in order for the 973 | # command to get logged, and the other parameter is the length of the 974 | # slow log. When a new command is logged the oldest one is removed from the 975 | # queue of logged commands. 976 | 977 | # The following time is expressed in microseconds, so 1000000 is equivalent 978 | # to one second. Note that a negative number disables the slow log, while 979 | # a value of zero forces the logging of every command. 980 | slowlog-log-slower-than 10000 981 | 982 | # There is no limit to this length. Just be aware that it will consume memory. 983 | # You can reclaim memory used by the slow log with SLOWLOG RESET. 984 | slowlog-max-len 128 985 | 986 | ################################ LATENCY MONITOR ############################## 987 | 988 | # The Redis latency monitoring subsystem samples different operations 989 | # at runtime in order to collect data related to possible sources of 990 | # latency of a Redis instance. 991 | # 992 | # Via the LATENCY command this information is available to the user that can 993 | # print graphs and obtain reports. 
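# For example (illustrative only, and only once a non-zero threshold is set):
#
#   redis-cli LATENCY LATEST
#   redis-cli LATENCY HISTORY command
#   redis-cli LATENCY DOCTOR
#
# report the most recent spike per event, the raw samples for a given event,
# and a human readable analysis, respectively.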
994 | # 995 | # The system only logs operations that were performed in a time equal or 996 | # greater than the amount of milliseconds specified via the 997 | # latency-monitor-threshold configuration directive. When its value is set 998 | # to zero, the latency monitor is turned off. 999 | # 1000 | # By default latency monitoring is disabled since it is mostly not needed 1001 | # if you don't have latency issues, and collecting data has a performance 1002 | # impact, that while very small, can be measured under big load. Latency 1003 | # monitoring can easily be enabled at runtime using the command 1004 | # "CONFIG SET latency-monitor-threshold " if needed. 1005 | latency-monitor-threshold 0 1006 | 1007 | ############################# EVENT NOTIFICATION ############################## 1008 | 1009 | # Redis can notify Pub/Sub clients about events happening in the key space. 1010 | # This feature is documented at http://redis.io/topics/notifications 1011 | # 1012 | # For instance if keyspace events notification is enabled, and a client 1013 | # performs a DEL operation on key "foo" stored in the Database 0, two 1014 | # messages will be published via Pub/Sub: 1015 | # 1016 | # PUBLISH __keyspace@0__:foo del 1017 | # PUBLISH __keyevent@0__:del foo 1018 | # 1019 | # It is possible to select the events that Redis will notify among a set 1020 | # of classes. Every class is identified by a single character: 1021 | # 1022 | # K Keyspace events, published with __keyspace@__ prefix. 1023 | # E Keyevent events, published with __keyevent@__ prefix. 1024 | # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... 1025 | # $ String commands 1026 | # l List commands 1027 | # s Set commands 1028 | # h Hash commands 1029 | # z Sorted set commands 1030 | # x Expired events (events generated every time a key expires) 1031 | # e Evicted events (events generated when a key is evicted for maxmemory) 1032 | # A Alias for g$lshzxe, so that the "AKE" string means all the events. 1033 | # 1034 | # The "notify-keyspace-events" takes as argument a string that is composed 1035 | # of zero or multiple characters. The empty string means that notifications 1036 | # are disabled. 1037 | # 1038 | # Example: to enable list and generic events, from the point of view of the 1039 | # event name, use: 1040 | # 1041 | # notify-keyspace-events Elg 1042 | # 1043 | # Example 2: to get the stream of the expired keys subscribing to channel 1044 | # name __keyevent@0__:expired use: 1045 | # 1046 | # notify-keyspace-events Ex 1047 | # 1048 | # By default all notifications are disabled because most users don't need 1049 | # this feature and the feature has some overhead. Note that if you don't 1050 | # specify at least one of K or E, no events will be delivered. 1051 | notify-keyspace-events "" 1052 | 1053 | ############################### ADVANCED CONFIG ############################### 1054 | 1055 | # Hashes are encoded using a memory efficient data structure when they have a 1056 | # small number of entries, and the biggest entry does not exceed a given 1057 | # threshold. These thresholds can be configured using the following directives. 1058 | hash-max-ziplist-entries 512 1059 | hash-max-ziplist-value 64 1060 | 1061 | # Lists are also encoded in a special way to save a lot of space. 1062 | # The number of entries allowed per internal list node can be specified 1063 | # as a fixed maximum size or a maximum number of elements. 
1064 | # For a fixed maximum size, use -5 through -1, meaning: 1065 | # -5: max size: 64 Kb <-- not recommended for normal workloads 1066 | # -4: max size: 32 Kb <-- not recommended 1067 | # -3: max size: 16 Kb <-- probably not recommended 1068 | # -2: max size: 8 Kb <-- good 1069 | # -1: max size: 4 Kb <-- good 1070 | # Positive numbers mean store up to _exactly_ that number of elements 1071 | # per list node. 1072 | # The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size), 1073 | # but if your use case is unique, adjust the settings as necessary. 1074 | list-max-ziplist-size -2 1075 | 1076 | # Lists may also be compressed. 1077 | # Compress depth is the number of quicklist ziplist nodes from *each* side of 1078 | # the list to *exclude* from compression. The head and tail of the list 1079 | # are always uncompressed for fast push/pop operations. Settings are: 1080 | # 0: disable all list compression 1081 | # 1: depth 1 means "don't start compressing until after 1 node into the list, 1082 | # going from either the head or tail" 1083 | # So: [head]->node->node->...->node->[tail] 1084 | # [head], [tail] will always be uncompressed; inner nodes will compress. 1085 | # 2: [head]->[next]->node->node->...->node->[prev]->[tail] 1086 | # 2 here means: don't compress head or head->next or tail->prev or tail, 1087 | # but compress all nodes between them. 1088 | # 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail] 1089 | # etc. 1090 | list-compress-depth 0 1091 | 1092 | # Sets have a special encoding in just one case: when a set is composed 1093 | # of just strings that happen to be integers in radix 10 in the range 1094 | # of 64 bit signed integers. 1095 | # The following configuration setting sets the limit in the size of the 1096 | # set in order to use this special memory saving encoding. 1097 | set-max-intset-entries 512 1098 | 1099 | # Similarly to hashes and lists, sorted sets are also specially encoded in 1100 | # order to save a lot of space. This encoding is only used when the length and 1101 | # elements of a sorted set are below the following limits: 1102 | zset-max-ziplist-entries 128 1103 | zset-max-ziplist-value 64 1104 | 1105 | # HyperLogLog sparse representation bytes limit. The limit includes the 1106 | # 16 bytes header. When an HyperLogLog using the sparse representation crosses 1107 | # this limit, it is converted into the dense representation. 1108 | # 1109 | # A value greater than 16000 is totally useless, since at that point the 1110 | # dense representation is more memory efficient. 1111 | # 1112 | # The suggested value is ~ 3000 in order to have the benefits of 1113 | # the space efficient encoding without slowing down too much PFADD, 1114 | # which is O(N) with the sparse encoding. The value can be raised to 1115 | # ~ 10000 when CPU is not a concern, but space is, and the data set is 1116 | # composed of many HyperLogLogs with cardinality in the 0 - 15000 range. 1117 | hll-sparse-max-bytes 3000 1118 | 1119 | # Streams macro node max size / items. The stream data structure is a radix 1120 | # tree of big nodes that encode multiple items inside. Using this configuration 1121 | # it is possible to configure how big a single node can be in bytes, and the 1122 | # maximum number of items it may contain before switching to a new node when 1123 | # appending new stream entries. 
If any of the following settings are set to 1124 | # zero, the limit is ignored, so for instance it is possible to set just a 1125 | # max entires limit by setting max-bytes to 0 and max-entries to the desired 1126 | # value. 1127 | stream-node-max-bytes 4096 1128 | stream-node-max-entries 100 1129 | 1130 | # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in 1131 | # order to help rehashing the main Redis hash table (the one mapping top-level 1132 | # keys to values). The hash table implementation Redis uses (see dict.c) 1133 | # performs a lazy rehashing: the more operation you run into a hash table 1134 | # that is rehashing, the more rehashing "steps" are performed, so if the 1135 | # server is idle the rehashing is never complete and some more memory is used 1136 | # by the hash table. 1137 | # 1138 | # The default is to use this millisecond 10 times every second in order to 1139 | # actively rehash the main dictionaries, freeing memory when possible. 1140 | # 1141 | # If unsure: 1142 | # use "activerehashing no" if you have hard latency requirements and it is 1143 | # not a good thing in your environment that Redis can reply from time to time 1144 | # to queries with 2 milliseconds delay. 1145 | # 1146 | # use "activerehashing yes" if you don't have such hard requirements but 1147 | # want to free memory asap when possible. 1148 | activerehashing yes 1149 | 1150 | # The client output buffer limits can be used to force disconnection of clients 1151 | # that are not reading data from the server fast enough for some reason (a 1152 | # common reason is that a Pub/Sub client can't consume messages as fast as the 1153 | # publisher can produce them). 1154 | # 1155 | # The limit can be set differently for the three different classes of clients: 1156 | # 1157 | # normal -> normal clients including MONITOR clients 1158 | # replica -> replica clients 1159 | # pubsub -> clients subscribed to at least one pubsub channel or pattern 1160 | # 1161 | # The syntax of every client-output-buffer-limit directive is the following: 1162 | # 1163 | # client-output-buffer-limit 1164 | # 1165 | # A client is immediately disconnected once the hard limit is reached, or if 1166 | # the soft limit is reached and remains reached for the specified number of 1167 | # seconds (continuously). 1168 | # So for instance if the hard limit is 32 megabytes and the soft limit is 1169 | # 16 megabytes / 10 seconds, the client will get disconnected immediately 1170 | # if the size of the output buffers reach 32 megabytes, but will also get 1171 | # disconnected if the client reaches 16 megabytes and continuously overcomes 1172 | # the limit for 10 seconds. 1173 | # 1174 | # By default normal clients are not limited because they don't receive data 1175 | # without asking (in a push way), but just after a request, so only 1176 | # asynchronous clients may create a scenario where data is requested faster 1177 | # than it can read. 1178 | # 1179 | # Instead there is a default limit for pubsub and replica clients, since 1180 | # subscribers and replicas receive data in a push fashion. 1181 | # 1182 | # Both the hard or the soft limit can be disabled by setting them to zero. 1183 | client-output-buffer-limit normal 0 0 0 1184 | client-output-buffer-limit replica 256mb 64mb 60 1185 | client-output-buffer-limit pubsub 32mb 8mb 60 1186 | 1187 | # Client query buffers accumulate new commands. 
They are limited to a fixed 1188 | # amount by default in order to avoid that a protocol desynchronization (for 1189 | # instance due to a bug in the client) will lead to unbound memory usage in 1190 | # the query buffer. However you can configure it here if you have very special 1191 | # needs, such us huge multi/exec requests or alike. 1192 | # 1193 | # client-query-buffer-limit 1gb 1194 | 1195 | # In the Redis protocol, bulk requests, that are, elements representing single 1196 | # strings, are normally limited ot 512 mb. However you can change this limit 1197 | # here. 1198 | # 1199 | # proto-max-bulk-len 512mb 1200 | 1201 | # Redis calls an internal function to perform many background tasks, like 1202 | # closing connections of clients in timeout, purging expired keys that are 1203 | # never requested, and so forth. 1204 | # 1205 | # Not all tasks are performed with the same frequency, but Redis checks for 1206 | # tasks to perform according to the specified "hz" value. 1207 | # 1208 | # By default "hz" is set to 10. Raising the value will use more CPU when 1209 | # Redis is idle, but at the same time will make Redis more responsive when 1210 | # there are many keys expiring at the same time, and timeouts may be 1211 | # handled with more precision. 1212 | # 1213 | # The range is between 1 and 500, however a value over 100 is usually not 1214 | # a good idea. Most users should use the default of 10 and raise this up to 1215 | # 100 only in environments where very low latency is required. 1216 | hz 10 1217 | 1218 | # Normally it is useful to have an HZ value which is proportional to the 1219 | # number of clients connected. This is useful in order, for instance, to 1220 | # avoid too many clients are processed for each background task invocation 1221 | # in order to avoid latency spikes. 1222 | # 1223 | # Since the default HZ value by default is conservatively set to 10, Redis 1224 | # offers, and enables by default, the ability to use an adaptive HZ value 1225 | # which will temporary raise when there are many connected clients. 1226 | # 1227 | # When dynamic HZ is enabled, the actual configured HZ will be used as 1228 | # as a baseline, but multiples of the configured HZ value will be actually 1229 | # used as needed once more clients are connected. In this way an idle 1230 | # instance will use very little CPU time while a busy instance will be 1231 | # more responsive. 1232 | dynamic-hz yes 1233 | 1234 | # When a child rewrites the AOF file, if the following option is enabled 1235 | # the file will be fsync-ed every 32 MB of data generated. This is useful 1236 | # in order to commit the file to the disk more incrementally and avoid 1237 | # big latency spikes. 1238 | aof-rewrite-incremental-fsync yes 1239 | 1240 | # When redis saves RDB file, if the following option is enabled 1241 | # the file will be fsync-ed every 32 MB of data generated. This is useful 1242 | # in order to commit the file to the disk more incrementally and avoid 1243 | # big latency spikes. 1244 | rdb-save-incremental-fsync yes 1245 | 1246 | # Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good 1247 | # idea to start with the default settings and only change them after investigating 1248 | # how to improve the performances and how the keys LFU change over time, which 1249 | # is possible to inspect via the OBJECT FREQ command. 1250 | # 1251 | # There are two tunable parameters in the Redis LFU implementation: the 1252 | # counter logarithm factor and the counter decay time. 
It is important to 1253 | # understand what the two parameters mean before changing them. 1254 | # 1255 | # The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis 1256 | # uses a probabilistic increment with logarithmic behavior. Given the value 1257 | # of the old counter, when a key is accessed, the counter is incremented in 1258 | # this way: 1259 | # 1260 | # 1. A random number R between 0 and 1 is extracted. 1261 | # 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1). 1262 | # 3. The counter is incremented only if R < P. 1263 | # 1264 | # The default lfu-log-factor is 10. This is a table of how the frequency 1265 | # counter changes with a different number of accesses with different 1266 | # logarithmic factors: 1267 | # 1268 | # +--------+------------+------------+------------+------------+------------+ 1269 | # | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits | 1270 | # +--------+------------+------------+------------+------------+------------+ 1271 | # | 0 | 104 | 255 | 255 | 255 | 255 | 1272 | # +--------+------------+------------+------------+------------+------------+ 1273 | # | 1 | 18 | 49 | 255 | 255 | 255 | 1274 | # +--------+------------+------------+------------+------------+------------+ 1275 | # | 10 | 10 | 18 | 142 | 255 | 255 | 1276 | # +--------+------------+------------+------------+------------+------------+ 1277 | # | 100 | 8 | 11 | 49 | 143 | 255 | 1278 | # +--------+------------+------------+------------+------------+------------+ 1279 | # 1280 | # NOTE: The above table was obtained by running the following commands: 1281 | # 1282 | # redis-benchmark -n 1000000 incr foo 1283 | # redis-cli object freq foo 1284 | # 1285 | # NOTE 2: The counter initial value is 5 in order to give new objects a chance 1286 | # to accumulate hits. 1287 | # 1288 | # The counter decay time is the time, in minutes, that must elapse in order 1289 | # for the key counter to be divided by two (or decremented if it has a value 1290 | # less <= 10). 1291 | # 1292 | # The default value for the lfu-decay-time is 1. A Special value of 0 means to 1293 | # decay the counter every time it happens to be scanned. 1294 | # 1295 | # lfu-log-factor 10 1296 | # lfu-decay-time 1 1297 | 1298 | ########################### ACTIVE DEFRAGMENTATION ####################### 1299 | # 1300 | # WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested 1301 | # even in production and manually tested by multiple engineers for some 1302 | # time. 1303 | # 1304 | # What is active defragmentation? 1305 | # ------------------------------- 1306 | # 1307 | # Active (online) defragmentation allows a Redis server to compact the 1308 | # spaces left between small allocations and deallocations of data in memory, 1309 | # thus allowing to reclaim back memory. 1310 | # 1311 | # Fragmentation is a natural process that happens with every allocator (but 1312 | # less so with Jemalloc, fortunately) and certain workloads. Normally a server 1313 | # restart is needed in order to lower the fragmentation, or at least to flush 1314 | # away all the data and create it again. However thanks to this feature 1315 | # implemented by Oran Agra for Redis 4.0 this process can happen at runtime 1316 | # in an "hot" way, while the server is running. 
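# As a rough, non-authoritative illustration: before enabling the feature, the
# current fragmentation level can be checked with
#
#   redis-cli INFO memory | grep mem_fragmentation_ratio
#
# where values well above 1.0 on a long-running instance indicate memory that
# active defrag may be able to reclaim. Note that this particular file renames
# CONFIG to an empty string above, so "CONFIG SET activedefrag yes" is not
# available here without first restoring the CONFIG command.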
1317 | # 1318 | # Basically when the fragmentation is over a certain level (see the 1319 | # configuration options below) Redis will start to create new copies of the 1320 | # values in contiguous memory regions by exploiting certain specific Jemalloc 1321 | # features (in order to understand if an allocation is causing fragmentation 1322 | # and to allocate it in a better place), and at the same time, will release the 1323 | # old copies of the data. This process, repeated incrementally for all the keys 1324 | # will cause the fragmentation to drop back to normal values. 1325 | # 1326 | # Important things to understand: 1327 | # 1328 | # 1. This feature is disabled by default, and only works if you compiled Redis 1329 | # to use the copy of Jemalloc we ship with the source code of Redis. 1330 | # This is the default with Linux builds. 1331 | # 1332 | # 2. You never need to enable this feature if you don't have fragmentation 1333 | # issues. 1334 | # 1335 | # 3. Once you experience fragmentation, you can enable this feature when 1336 | # needed with the command "CONFIG SET activedefrag yes". 1337 | # 1338 | # The configuration parameters are able to fine tune the behavior of the 1339 | # defragmentation process. If you are not sure about what they mean it is 1340 | # a good idea to leave the defaults untouched. 1341 | 1342 | # Enabled active defragmentation 1343 | # activedefrag yes 1344 | 1345 | # Minimum amount of fragmentation waste to start active defrag 1346 | # active-defrag-ignore-bytes 100mb 1347 | 1348 | # Minimum percentage of fragmentation to start active defrag 1349 | # active-defrag-threshold-lower 10 1350 | 1351 | # Maximum percentage of fragmentation at which we use maximum effort 1352 | # active-defrag-threshold-upper 100 1353 | 1354 | # Minimal effort for defrag in CPU percentage 1355 | # active-defrag-cycle-min 5 1356 | 1357 | # Maximal effort for defrag in CPU percentage 1358 | # active-defrag-cycle-max 75 1359 | 1360 | # Maximum number of set/hash/zset/list fields that will be processed from 1361 | # the main dictionary scan 1362 | # active-defrag-max-scan-fields 1000 1363 | -------------------------------------------------------------------------------- /default.env: -------------------------------------------------------------------------------- 1 | # ------------------------------ 2 | # eXtremeSHOK.com Docker Webserver Configuration 3 | # ------------------------------ 4 | # Your timezone 5 | TIMEZONE=UTC 6 | 7 | # Hostname (fqdn) 8 | FQDN_HOSTNAME=$(hostname -f) 9 | 10 | # Email address used to enable letsencrypt 11 | ACME_EMAIL=admin@extremeshok.com 12 | 13 | # Fixed project name 14 | COMPOSE_PROJECT_NAME=xs 15 | 16 | # Fix for unknown PWD 17 | PWD=${PWD:-/datastore} 18 | 19 | # Disable IPv6 20 | # network will be created IPv6 enabled, all containers will be created without IPv6 support. 21 | # Use 1 for disabled, 0 for enabled 22 | SYSCTL_IPV6_DISABLED=0 23 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | ### eXtremeSHOK.com Docker Webserver 2 | version: '3.1' 3 | ################################################################################ 4 | # This is property of eXtremeSHOK.com 5 | # You are free to use, modify and distribute, however you may not remove this notice. 
6 | # Copyright (c) Adrian Jon Kriel :: admin@extremeshok.com 7 | ################################################################################ 8 | ########## SERVICES ######## 9 | services: 10 | ###### Unbound is a validating, recursive, and caching DNS resolver. 11 | unbound: 12 | image: extremeshok/unbound:latest 13 | environment: 14 | - TZ=${TIMEZONE} 15 | volumes: 16 | - vol-unbound-keys:/etc/unbound/keys/:rw 17 | restart: always 18 | tty: true 19 | sysctls: 20 | - net.ipv6.conf.all.disable_ipv6=${SYSCTL_IPV6_DISABLED:-0} 21 | networks: 22 | network: 23 | ipv4_address: 172.22.1.254 24 | aliases: 25 | - dns 26 | 27 | ###### Watchtower allows for automatically updating and restarting containers 28 | watchtower: 29 | image: containrrr/watchtower:latest 30 | volumes: 31 | - /var/run/docker.sock:/var/run/docker.sock:rw 32 | environment: 33 | - TZ=${TIMEZONE} 34 | - WATCHTOWER_CLEANUP=true 35 | - WATCHTOWER_POLL_INTERVAL=1800 36 | - WATCHTOWER_INCLUDE_STOPPED=true 37 | - WATCHTOWER_REVIVE_STOPPED=true 38 | - WATCHTOWER_TIMEOUT=180 39 | restart: always 40 | sysctls: 41 | - net.ipv6.conf.all.disable_ipv6=1 42 | dns: 43 | - 172.22.1.254 44 | networks: 45 | - network 46 | 47 | ###### mariadb aka mysql 48 | mysql: 49 | image: mariadb:10.5 50 | volumes: 51 | - vol-mysql:/var/lib/mysql/:delegated 52 | # - ./data/etc/mysql/conf.d/my-2gb.cnf:/etc/mysql/conf.d/my.cnf:ro 53 | - ./data/etc/mysql/conf.d/my-4gb.cnf:/etc/mysql/conf.d/my.cnf:ro 54 | # - ./data/etc/mysql/conf.d/my-16gb.cnf:/etc/mysql/conf.d/my.cnf:ro 55 | # - ./data/etc/mysql/conf.d/my-32gb.cnf:/etc/mysql/conf.d/my.cnf:ro 56 | environment: 57 | - TZ=${TIMEZONE} 58 | - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} 59 | - MYSQL_DATABASE=${MYSQL_DATABASE} 60 | - MYSQL_USER=${MYSQL_USER} 61 | - MYSQL_PASSWORD=${MYSQL_PASSWORD} 62 | restart: always 63 | # DO NOT EXPOSE PORTS..SECURITY RISK 64 | # ports: 65 | # - 3306:3306 66 | sysctls: 67 | - net.ipv6.conf.all.disable_ipv6=1 68 | dns: 69 | - 172.22.1.254 70 | networks: 71 | - network 72 | 73 | ###### mysql-backup 74 | mysql-backup: 75 | image: tiredofit/db-backup:latest 76 | volumes: 77 | - vol-mysql-backup:/backup/:rw 78 | environment: 79 | - DB_TYPE=mysql 80 | - DB_HOST=mysql 81 | - DB_USER=root 82 | - DB_PASS=${MYSQL_ROOT_PASSWORD} 83 | - DB_DUMP_FREQ=60 84 | - DB_DUMP_BEGIN=+10 85 | - DB_CLEANUP_TIME=1500 86 | - SPLIT_DB=TRUE 87 | - MD5=TRUE 88 | - COMPRESSION=GZ 89 | - PARALLEL_COMPRESSION=FALSE 90 | - DEBUG_MODE=FALSE 91 | - EXTRA_OPTS=--default-character-set=utf8mb4 92 | restart: always 93 | dns: 94 | - 172.22.1.254 95 | sysctls: 96 | - net.ipv6.conf.all.disable_ipv6=1 97 | networks: 98 | - network 99 | 100 | ##### PHPMYADMIN 101 | phpmyadmin: 102 | image: bitnami/phpmyadmin:latest 103 | ports: 104 | # - 8082:80 105 | - 8443:443 106 | environment: 107 | - DATABASE_HOST=mysql 108 | depends_on: 109 | - mysql 110 | dns: 111 | - 172.22.1.254 112 | networks: 113 | - network 114 | 115 | ###### REDIS 116 | redis: 117 | image: redis:latest 118 | volumes: 119 | - ./data/etc/redis/:/etc/redis/:ro 120 | - vol-redis:/var/lib/redis:delegated 121 | restart: always 122 | # DO NOT EXPOSE PORTS..SECURITY RISK 123 | # ports: 124 | # - 6379:6379 125 | command: 126 | - redis-server 127 | - /etc/redis/redis.conf 128 | environment: 129 | - TZ=${TIMEZONE} 130 | sysctls: 131 | - net.ipv6.conf.all.disable_ipv6=1 132 | dns: 133 | - 172.22.1.254 134 | networks: 135 | - network 136 | 137 | ###### MEMCACHED 138 | memcached: 139 | image: memcached:latest 140 | restart: always 141 | # DO NOT EXPOSE PORTS..SECURITY RISK 
142 | # ports: 143 | # - "11211:11211" 144 | environment: 145 | - TZ=${TIMEZONE} 146 | - MEMCACHED_CACHE_SIZE=${MEMCACHED_CACHE_SIZE:-64} 147 | sysctls: 148 | - net.ipv6.conf.all.disable_ipv6=1 149 | dns: 150 | - 172.22.1.254 151 | networks: 152 | - network 153 | 154 | ###### Openlitespeed with builtin acme and lsphp 155 | openlitespeed: 156 | image: extremeshok/openlitespeed-php:latest 157 | env_file: 158 | - .env 159 | volumes: 160 | - vol-www-vhosts:/var/www/vhosts:rw 161 | - vol-www-conf:/etc/openlitespeed:rw 162 | - ./xs:/xs:rw 163 | restart: always 164 | ports: 165 | # httpS 166 | - 443:443 167 | # quic aka http/3 168 | - 443:443/udp 169 | # webadmin 170 | - 7080:7080 171 | environment: 172 | - TZ=${TIMEZONE} 173 | - PHP_REDIS_SESSIONS=yes 174 | - VHOST_CRON_ENABLE=true 175 | - PHP_MAX_UPLOAD_SIZE=64 176 | - PHP_MAX_TIME=600 177 | - WP_AUTOUPDATE_ENABLE=true 178 | sysctls: 179 | - net.ipv6.conf.all.disable_ipv6=${SYSCTL_IPV6_DISABLED:-0} 180 | depends_on: 181 | - redis 182 | - mysql 183 | dns: 184 | - 172.22.1.254 185 | networks: 186 | - network 187 | 188 | ###### xshokacmehttp 189 | xshokacmehttp: 190 | image: extremeshok/acme-http2https:latest 191 | environment: 192 | - TZ=${TIMEZONE} 193 | volumes: 194 | - vol-acme:/acme:rw 195 | - vol-www-vhosts:/var/www/vhosts:rw 196 | ports: 197 | - 80:80 198 | restart: always 199 | dns: 200 | - 172.22.1.254 201 | sysctls: 202 | - net.ipv6.conf.all.disable_ipv6=${SYSCTL_IPV6_DISABLED:-0} 203 | networks: 204 | - network 205 | 206 | ###### ELASTICSEARCH 207 | # ### http://elasticsearch:9200 208 | # ###### elasticsearch: 209 | # image: extremeshok/elasticsearch-elasticpress:latest 210 | # volumes: 211 | # - vol-elasticsearch:/usr/share/elasticsearch/data 212 | # restart: always 213 | # ports: 214 | # - 9200:9200 215 | # environment: 216 | # - TZ=${TIMEZONE} 217 | # - discovery.type=single-node 218 | # - bootstrap.memory_lock=true 219 | # - "ES_JAVA_OPTS=-Xms4096m -Xmx4096m" 220 | # ulimits: 221 | # memlock: 222 | # soft: -1 223 | # hard: -1 224 | # dns: 225 | # - 172.22.1.254 226 | # sysctls: 227 | # - net.ipv6.conf.all.disable_ipv6=1 228 | # networks: 229 | # - network 230 | 231 | # ###### xshokgeoip 232 | # xshokgeoip: 233 | # image: extremeshok/geoip:latest 234 | # restart: always 235 | # environment: 236 | # - TZ=${TIMEZONE} 237 | # - DISABLE_MAXMIND_LEGACY=true 238 | # volumes: 239 | # - vol-geoip:/geoip/:rw 240 | # restart: always 241 | # sysctls: 242 | # - net.ipv6.conf.all.disable_ipv6=${SYSCTL_IPV6_DISABLED:-0} 243 | # networks: 244 | # - network 245 | 246 | # ###### phpredisadmin: 247 | # image: erikdubbelboer/phpredisadmin:latest 248 | # container_name: phpredisadmin 249 | # restart: always 250 | # ports: 251 | # - "8082:80" 252 | # environment: 253 | # - TZ=${TimeZone} 254 | # - REDIS_1_HOST=redis 255 | # - REDIS_1_PORT=6379 256 | # sysctls: 257 | # - net.ipv6.conf.all.disable_ipv6=${SYSCTL_IPV6_DISABLED:-0} 258 | # networks: 259 | # - network 260 | 261 | # ###### FTP 262 | # ftpd_server: 263 | # image: stilliard/pure-ftpd:hardened 264 | # container_name: pure-ftpd 265 | # ports: 266 | # - "21:21" 267 | # - "30000-30009:30000-30009" 268 | # volumes: 269 | # - ./data/var/www/keyfile:/var/www/keyfile:ro 270 | # - vol-www-html:/var/www/html/:rw 271 | # environment: 272 | # PUBLICHOST: "localhost" 273 | # FTP_USER_NAME: nobody 274 | # FTP_USER_PASS: 3498hkjhku21398721938usjal920197siu1o32iu7012397019kjs 275 | # FTP_USER_HOME: /var/www/html/le_connector 276 | # FTP_USER_UID: 65534 277 | # FTP_USER_GID: 65534 278 | # restart: always 279 | 
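# ###### The commented out services above (elasticsearch, geoip, phpredisadmin
# ###### and pure-ftpd) are optional extras: to enable one, uncomment its block
# ###### and make sure any vol-* volume it references is also defined (and
# ###### uncommented) in the "volumes:" section below, with the matching
# ###### directory created under ${PWD}/volumes, since these bind mounts
# ###### require the host path to exist.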
280 | ###### IPv6 NAT 281 | ipv6nat: 282 | image: robbertkl/ipv6nat:latest 283 | restart: always 284 | privileged: true 285 | network_mode: "host" 286 | volumes: 287 | - /var/run/docker.sock:/var/run/docker.sock:ro 288 | - /lib/modules:/lib/modules:ro 289 | 290 | ########## NETWORKS ######## 291 | networks: 292 | network: 293 | driver: bridge 294 | # enable_ipv6: true 295 | ipam: 296 | driver: default 297 | config: 298 | - subnet: 172.22.1.0/24 299 | # - subnet: fd4d:6169:6c63:6f77::/64 300 | 301 | ########## VOLUMES ######## 302 | volumes: 303 | vol-www-vhosts: 304 | driver: local 305 | driver_opts: 306 | type: bind 307 | o: bind 308 | device: ${PWD}/volumes/www-vhosts 309 | vol-www-conf: 310 | driver: local 311 | driver_opts: 312 | type: bind 313 | o: bind 314 | device: ${PWD}/volumes/www-conf 315 | vol-acme: 316 | driver: local 317 | driver_opts: 318 | type: bind 319 | o: bind 320 | device: ${PWD}/volumes/acme 321 | vol-logs: 322 | driver: local 323 | driver_opts: 324 | type: bind 325 | o: bind 326 | device: ${PWD}/volumes/logs 327 | vol-unbound-keys: 328 | driver: local 329 | driver_opts: 330 | type: bind 331 | o: bind 332 | device: ${PWD}/volumes/unbound-keys 333 | vol-mysql: 334 | driver: local 335 | driver_opts: 336 | type: bind 337 | o: bind 338 | device: ${PWD}/volumes/mysql 339 | vol-mysql-backup: 340 | driver: local 341 | driver_opts: 342 | type: bind 343 | o: bind 344 | device: ${PWD}/volumes/mysql-backup 345 | vol-redis: 346 | driver: local 347 | driver_opts: 348 | type: bind 349 | o: bind 350 | device: ${PWD}/volumes/redis 351 | # vol-elasticsearch: 352 | # driver: local 353 | # driver_opts: 354 | # type: bind 355 | # o: bind 356 | # device: ${PWD}/volumes/elasticsearch 357 | # vol-geoip: 358 | # driver: local 359 | # driver_opts: 360 | # type: bind 361 | # o: bind 362 | # device: ${PWD}/volumes/geoip 363 | # vol-geoip-maxmind: 364 | # driver: local 365 | # driver_opts: 366 | # type: bind 367 | # o: bind 368 | # device: ${PWD}/volumes/geoip/maxmind 369 | # vol-geoip-country-cidr: 370 | # driver: local 371 | # driver_opts: 372 | # type: bind 373 | # o: bind 374 | # device: ${PWD}/volumes/geoip/country-cidr 375 | -------------------------------------------------------------------------------- /docker-vis-full.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/extremeshok/docker-webserver/4795ef6d695fd573426dca39ea8fc207b1043d91/docker-vis-full.png -------------------------------------------------------------------------------- /docker-vis-novols.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/extremeshok/docker-webserver/4795ef6d695fd573426dca39ea8fc207b1043d91/docker-vis-novols.png -------------------------------------------------------------------------------- /extras/borgmatic-borg-backup/etc/borgmatic/config.yaml: -------------------------------------------------------------------------------- 1 | # Where to look for files to backup, and where to store those backups. See 2 | # https://borgbackup.readthedocs.io/en/stable/quickstart.html and 3 | # https://borgbackup.readthedocs.io/en/stable/usage.html#borg-create for details. 4 | location: 5 | # List of source directories to backup (required). Globs and tildes are expanded. 6 | source_directories: 7 | - /datastore 8 | - /etc 9 | 10 | # Paths to local or remote repositories (required). Tildes are expanded. Multiple 11 | # repositories are backed up to in sequence.
See ssh_command for SSH options like 12 | # identity file or port. 13 | repositories: 14 | - yyyyyy@xxxx.xxxx.com:repo 15 | 16 | # Stay in same file system (do not cross mount points). Defaults to false. But when 17 | # a database hook is used, the setting here is ignored and one_file_system is 18 | # considered true. 19 | # one_file_system: true 20 | 21 | # Only store/extract numeric user and group identifiers. Defaults to false. 22 | # numeric_owner: true 23 | 24 | # Store atime into archive. Defaults to true. 25 | # atime: false 26 | 27 | # Store ctime into archive. Defaults to true. 28 | # ctime: false 29 | 30 | # Store birthtime (creation date) into archive. Defaults to true. 31 | # birthtime: false 32 | 33 | # Use Borg's --read-special flag to allow backup of block and other special 34 | # devices. Use with caution, as it will lead to problems if used when 35 | # backing up special devices such as /dev/zero. Defaults to false. But when a 36 | # database hook is used, the setting here is ignored and read_special is 37 | # considered true. 38 | # read_special: false 39 | 40 | # Record bsdflags (e.g. NODUMP, IMMUTABLE) in archive. Defaults to true. 41 | # bsd_flags: true 42 | 43 | # Mode in which to operate the files cache. See 44 | # https://borgbackup.readthedocs.io/en/stable/usage/create.html#description for 45 | # details. Defaults to "ctime,size,inode". 46 | # files_cache: ctime,size,inode 47 | 48 | # Alternate Borg local executable. Defaults to "borg". 49 | # local_path: borg1 50 | 51 | # Alternate Borg remote executable. Defaults to "borg". 52 | # remote_path: borg1 53 | 54 | # Any paths matching these patterns are included/excluded from backups. Globs are 55 | # expanded. (Tildes are not.) Note that Borg considers this option experimental. 56 | # See the output of "borg help patterns" for more details. Quote any value if it 57 | # contains leading punctuation, so it parses correctly. 58 | # patterns: 59 | # - R / 60 | # - '- /home/*/.cache' 61 | # - + /home/susan 62 | # - '- /home/*' 63 | 64 | # Read include/exclude patterns from one or more separate named files, one pattern 65 | # per line. Note that Borg considers this option experimental. See the output of 66 | # "borg help patterns" for more details. 67 | # patterns_from: 68 | # - /etc/borgmatic/patterns 69 | 70 | # Any paths matching these patterns are excluded from backups. Globs and tildes 71 | # are expanded. See the output of "borg help patterns" for more details. 72 | exclude_patterns: 73 | - '*.pyc' 74 | - ~/*/.cache 75 | # - /etc/ssl 76 | 77 | # Read exclude patterns from one or more separate named files, one pattern per 78 | # line. See the output of "borg help patterns" for more details. 79 | # exclude_from: 80 | # - /etc/borgmatic/excludes 81 | 82 | # Exclude directories that contain a CACHEDIR.TAG file. See 83 | # http://www.brynosaurus.com/cachedir/spec.html for details. Defaults to false. 84 | exclude_caches: true 85 | 86 | # Exclude directories that contain a file with the given filenames. Defaults to not 87 | # set. 88 | exclude_if_present: 89 | - .nobackup 90 | 91 | # If true, the exclude_if_present filename is included in backups. Defaults to 92 | # false, meaning that the exclude_if_present filename is omitted from backups. 93 | # keep_exclude_tags: true 94 | 95 | # Exclude files with the NODUMP flag. Defaults to false. 96 | # exclude_nodump: true 97 | 98 | # Path for additional source files used for temporary internal state like 99 | # borgmatic database dumps. 
Note that changing this path prevents "borgmatic 100 | # restore" from finding any database dumps created before the change. Defaults 101 | # to ~/.borgmatic 102 | # borgmatic_source_directory: /tmp/borgmatic 103 | 104 | # Repository storage options. See 105 | # https://borgbackup.readthedocs.io/en/stable/usage.html#borg-create and 106 | # https://borgbackup.readthedocs.io/en/stable/usage/general.html#environment-variables for 107 | # details. 108 | storage: 109 | compression: auto,zstd 110 | encryption_passphrase: 'PASSPHRASE' 111 | archive_name_format: '{hostname}-{now}' 112 | 113 | retention: 114 | keep_hourly: 24 115 | keep_daily: 7 116 | keep_weekly: 4 117 | keep_monthly: 6 118 | keep_yearly: 0 119 | prefix: '{hostname}-' 120 | 121 | consistency: 122 | checks: 123 | # uncomment to always do integrity checks. (takes long time for large repos) 124 | #- repository 125 | - disabled 126 | 127 | check_last: 3 128 | prefix: '{hostname}-' 129 | 130 | hooks: 131 | # List of one or more shell commands or scripts to execute before creating a backup. 132 | before_backup: 133 | - echo "`date` - Starting backup" 134 | after_backup: 135 | - echo "`date` - Finished backup" 136 | -------------------------------------------------------------------------------- /extras/borgmatic-borg-backup/etc/cron.d/borgmatic: -------------------------------------------------------------------------------- 1 | # run borgmatic hourly 2 | 3 | 0 * * * * root borgmatic --syslog-verbosity 1 4 | -------------------------------------------------------------------------------- /volumes/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !.gitignore 3 | -------------------------------------------------------------------------------- /xshok-admin.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | ################################################################################ 3 | # This is property of eXtremeSHOK.com 4 | # You are free to use, modify and distribute, however you may not remove this notice. 
5 | # Copyright (c) Adrian Jon Kriel :: admin@extremeshok.com 6 | ################################################################################ 7 | # 8 | # Notes: 9 | # Script must be placed into the same directory as the docker-compose.yml 10 | # 11 | # Assumptions: Docker and Docker-compose Installed 12 | # 13 | # Tested on KVM, VirtualBox and Dedicated Server 14 | # 15 | # 16 | ################################################################################ 17 | # 18 | # THERE ARE NO USER CONFIGURABLE OPTIONS IN THIS SCRIPT 19 | # 20 | ################################################################################ 21 | 22 | ################# VARIBLES 23 | PWD="/datastore" 24 | VOLUMES="${PWD}/volumes" 25 | VHOST_DIR="${VOLUMES}/www-vhosts" 26 | OLS_HTTPD_CONF="${VOLUMES}/www-conf/conf/httpd_config.conf" 27 | ACME_DOMAIN_LIST="${VOLUMES}/acme/domain_list.txt" 28 | TIMESTAMP_SQL_BACKUP='no' 29 | CONTAINER_OLS='openlitespeed' 30 | CONTAINER_MYSQL='mysql' 31 | EPACE=' ' 32 | 33 | ################# GLOBALS 34 | DOMAIN="" 35 | DOMAIN_ESCAPED="" 36 | FILTERED_DOMAIN="" 37 | 38 | 39 | ################# Script Info 40 | script_version="1.4" 41 | script_version_date="2020-12-21" 42 | 43 | ################# SUPPORTING FUNCTIONS :: START 44 | 45 | # Check if the current running user is the root user, otherwise return false 46 | function xshok_is_root() { 47 | id_bin="$(command -v id 2> /dev/null)" 48 | if [ "$($id_bin -u)" == 0 ] ; then 49 | return 0 50 | else 51 | return 1 # Not root 52 | fi 53 | } 54 | 55 | function fst_match_line () { 56 | FIRST_LINE_NUM=$(grep -n -m 1 ${1} ${2} | awk -F ':' '{print $1}') 57 | } 58 | 59 | function fst_match_after () { 60 | FIRST_NUM_AFTER=$(tail -n +${1} ${2} | grep -n -m 1 ${3} | awk -F ':' '{print $1}') 61 | } 62 | 63 | function lst_match_line () { 64 | fst_match_after ${1} ${2} ${3} 65 | LAST_LINE_NUM=$((${FIRST_LINE_NUM}+${FIRST_NUM_AFTER}-1)) 66 | } 67 | 68 | # function to check if the $2 value is not null and does not start with - 69 | function xshok_check_s2 () { 70 | if [ "$1" ]; then 71 | if [[ "$1" =~ ^-.* ]]; then 72 | echo "ERROR: Missing value for option or value begins with -" "=" 73 | exit 1 74 | fi 75 | else 76 | echo "ERROR: Missing value for option" "=" 77 | exit 1 78 | fi 79 | } 80 | 81 | function xshok_validate_domain () { #domain 82 | DOMAIN="${1}" 83 | DOMAIN="${DOMAIN,,}" 84 | DOMAIN="${DOMAIN#www.*}" # remove www. 85 | DOMAIN_ESCAPED=${DOMAIN/\./\\.} 86 | FILTERED_DOMAIN=${DOMAIN//\./_} 87 | if [[ $DOMAIN = .* ]] ; then 88 | echo "ERROR: do not start domain with ." 89 | exit 1 90 | fi 91 | if [[ $DOMAIN = *. ]] ; then 92 | echo "ERROR: do not end domain with ." 
93 | exit 1 94 | fi 95 | if [ "$DOMAIN" == "" ] ; then 96 | echo "ERROR: empty domain, please add the domain name after the command option" 97 | exit 1 98 | fi 99 | if [ "${DOMAIN%%.*}" == "" ] || [ "${DOMAIN##*.}" == "" ] || [ "${DOMAIN##*.}" == "${DOMAIN%%.*}" ] ; then 100 | echo "ERROR: invalid domain: ${DOMAIN}" 101 | exit 1 102 | fi 103 | VALID_DOMAIN=$( echo "$DOMAIN" | grep -P "^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$" ) 104 | if [ "$VALID_DOMAIN" != "$DOMAIN" ] || [ "$DOMAIN" == "" ] ; then 105 | echo "ERROR: invalid domain: ${DOMAIN}" 106 | exit 1 107 | fi 108 | echo "DOMAIN: ${DOMAIN}" 109 | } 110 | 111 | function xshok_container_is_running (){ #container_name 112 | CONTAINER_NAME="${1}" 113 | if [ -z $CONTAINER_NAME ] ; then 114 | echo "ERROR: xshok_container_is_running container_name not specificed" 115 | exit 1 116 | fi 117 | if [ -z $(docker-compose ps -q "$CONTAINER_NAME") ] || [ -z $(docker ps -q --no-trunc | grep $(docker-compose ps -q "$CONTAINER_NAME")) ]; then 118 | echo "ERROR: Docker Container: ${CONTAINER_NAME}, it's not running." 119 | exit 1 120 | fi 121 | } 122 | 123 | ################# SUPPORTING FUNCTIONS :: END 124 | 125 | 126 | ################# WEBSITE FUNCTIONS :: START 127 | 128 | ################# list all websites 129 | function xshok_website_list () { 130 | ## /var/www/vhosts 131 | if [ -d "${VHOST_DIR}" ] ; then 132 | echo "Website List" 133 | echo "==== vhost ============ home ============ domains ====" 134 | while IFS= read -r -d '' vhost_dir; do 135 | vhost="${vhost_dir##*/}" 136 | short_vhost_dir="${vhost_dir/$VOLUMES\//}" 137 | echo "${vhost} = ${short_vhost_dir} = $(grep "vhDomain.*${vhost}" "${OLS_HTTPD_CONF}" | sed -e 's/vhDomain//g' | xargs)" 138 | done < <(find "${VHOST_DIR}" -mindepth 1 -maxdepth 1 -type d -print0) #dirs 139 | echo "" 140 | fi 141 | } 142 | 143 | ################# add a website 144 | function xshok_website_add () { #domain 145 | xshok_validate_domain "${1}" 146 | if [ "$(grep -E "member.*${DOMAIN_ESCAPED}" ${OLS_HTTPD_CONF})" != '' ]; then 147 | echo "Warning: ${DOMAIN} already exists, Check ${OLS_HTTPD_CONF}" 148 | else 149 | echo "Adding ${DOMAIN},www.${DOMAIN} " 150 | perl -0777 -p -i -e 's/(vhTemplate vhost \{[^}]+)\}*(^.*listeners.*$)/\1$2 151 | member '${DOMAIN}' { 152 | vhDomain '${DOMAIN},www.${DOMAIN}' 153 | }/gmi' ${OLS_HTTPD_CONF} 154 | fi 155 | if [ ! -d "${VHOST_DIR}/${DOMAIN}/cron" ]; then 156 | echo "Creating Directory: ${VHOST_DIR}/${DOMAIN}/cron" 157 | mkdir -p "${VHOST_DIR}/${DOMAIN}/cron" 158 | fi 159 | if [ ! -d "${VHOST_DIR}/${DOMAIN}/certs" ]; then 160 | echo "Creating Directory: ${VHOST_DIR}/${DOMAIN}/certs" 161 | mkdir -p "${VHOST_DIR}/${DOMAIN}/certs" 162 | fi 163 | if [ ! -d "${VHOST_DIR}/${DOMAIN}/dbinfo" ]; then 164 | echo "Creating Directory: ${VHOST_DIR}/${DOMAIN}/dbinfo" 165 | mkdir -p "${VHOST_DIR}/${DOMAIN}/dbinfo" 166 | fi 167 | if [ ! -d "${VHOST_DIR}/${DOMAIN}/html" ]; then 168 | echo "Creating Directory: ${VHOST_DIR}/${DOMAIN}/html" 169 | mkdir -p "${VHOST_DIR}/${DOMAIN}/html" 170 | fi 171 | if [ ! -d "${VHOST_DIR}/${DOMAIN}/logs" ]; then 172 | echo "Creating Directory: ${VHOST_DIR}/${DOMAIN}/logs" 173 | mkdir -p "${VHOST_DIR}/${DOMAIN}/logs" 174 | fi 175 | if [ ! 
-d "${VHOST_DIR}/${DOMAIN}/sql" ]; then 176 | echo "Creating Directory: ${VHOST_DIR}/${DOMAIN}/sql" 177 | mkdir -p "${VHOST_DIR}/${DOMAIN}/sql" 178 | fi 179 | chmod 777 ${VHOST_DIR}/${DOMAIN} 180 | chmod 777 ${VHOST_DIR}/${DOMAIN}/* 181 | } 182 | 183 | ################# delete an existing website 184 | function xshok_website_delete () { #domain 185 | xshok_validate_domain "${1}" 186 | if [ "$(grep -E "member.*${DOMAIN_ESCAPED}" ${OLS_HTTPD_CONF})" == '' ]; then 187 | echo "ERROR: ${DOMAIN} does NOT exist, Check ${OLS_HTTPD_CONF}" 188 | exit 1 189 | fi 190 | echo "Removing ${DOMAIN},www.${DOMAIN} " 191 | fst_match_line ${1} ${OLS_HTTPD_CONF} 192 | lst_match_line ${FIRST_LINE_NUM} ${OLS_HTTPD_CONF} '}' 193 | sed -i "${FIRST_LINE_NUM},${LAST_LINE_NUM}d" ${OLS_HTTPD_CONF} 194 | echo "Remeber to remove the dir: ${VHOST_DIR}/${DOMAIN}/" 195 | xshok_restart 196 | } 197 | 198 | ################# fix permission and ownership of an existing website 199 | function xshok_website_permissions () { #domain 200 | xshok_validate_domain "${1}" 201 | if [ "$(grep -E "member.*${DOMAIN_ESCAPED}" ${OLS_HTTPD_CONF})" == '' ]; then 202 | echo "Error: ${DOMAIN} does not exist" 203 | exit 1 204 | fi 205 | if [ ! -d "${VHOST_DIR}/${DOMAIN}/html" ]; then 206 | echo "Error: ${VHOST_DIR}/${DOMAIN}/html does not exist" 207 | exit 1 208 | fi 209 | echo "Setting permissions" 210 | find "${VHOST_DIR}/${DOMAIN}/html" -type f -exec chmod 0664 {} \; 211 | find "${VHOST_DIR}/${DOMAIN}/html" -type d -exec chmod 0775 {} \; 212 | echo "Setting Ownership" 213 | chown -R nobody:nogroup "${VHOST_DIR}/${DOMAIN}/html" 214 | } 215 | 216 | ################# WEBSITE FUNCTIONS :: END 217 | 218 | ################# DATABASE FUNCTIONS :: START 219 | 220 | ################## list all databases for domain 221 | function xshok_database_list () { #domain 222 | xshok_container_is_running "$CONTAINER_MYSQL" 223 | xshok_validate_domain "${1}" 224 | 225 | result="$(docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe \"SHOW DATABASES LIKE '%%${FILTERED_DOMAIN}%%'\"")" 226 | if [ "$result" != "" ] ; then 227 | echo "Databases list for ${DOMAIN}: " 228 | echo "$result" 229 | else 230 | echo "No Databases found for ${DOMAIN}" 231 | fi 232 | } 233 | 234 | ################## add database to domain 235 | function xshok_database_add () { #domain 236 | xshok_container_is_running "$CONTAINER_MYSQL" 237 | xshok_validate_domain "${1}" 238 | 239 | DBNAME="${FILTERED_DOMAIN}-$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 8 | head -n 1)" 240 | DBUSER="$DBNAME" 241 | DBPASS="$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)" 242 | 243 | if [ -f "${VHOST_DIR}/${DOMAIN}/dbinfo/${DBNAME}" ] ; then 244 | echo "Database Info File Found" 245 | fi 246 | 247 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e 'status'" >/dev/null 2>&1 248 | if [ ${?} != 0 ]; then 249 | echo 'ERROR: DB access failed, please check!' 
250 | fi 251 | 252 | if docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -s -N -e \"SELECT IF(EXISTS (SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = '${DBNAME}'), 'yes','no')\"" | grep -q "yes" ; then 253 | echo "ERROR: Database exists" 254 | exit 1 255 | else 256 | echo "DATABASE: ${DBNAME}" 257 | fi 258 | 259 | echo "Start Transaction" 260 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'START TRANSACTION'" 261 | if [ ${?} != 0 ]; then 262 | echo 'ERROR: failed starting transaction, please check!' 263 | exit 1 264 | fi 265 | echo "- Create DB" 266 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'CREATE DATABASE IF NOT EXISTS \`${DBNAME}\` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci'" 267 | if [ ${?} != 0 ]; then 268 | echo 'ERROR: create DB, please check!' 269 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'ROLLBACK'" 270 | exit 1 271 | fi 272 | echo "- Create user" 273 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'CREATE USER IF NOT EXISTS \`${DBUSER}\`@\`%\`'" 274 | if [ ${?} != 0 ]; then 275 | echo 'ERROR:create user failed, please check!' 276 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'ROLLBACK'" 277 | exit 1 278 | fi 279 | echo "- Set user password" 280 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe \"ALTER USER '${DBUSER}'@'%' IDENTIFIED BY '${DBPASS}'\"" 281 | if [ ${?} != 0 ]; then 282 | echo 'ERROR: set user password failed, please check!' 283 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'ROLLBACK'" 284 | exit 1 285 | fi 286 | echo "- Assign permissions" 287 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'GRANT ALL PRIVILEGES ON \`${DBNAME}\`.* TO \`${DBUSER}\`@\`%\`'" 288 | if [ ${?} != 0 ]; then 289 | echo 'ERROR: assign permissions failed, please check!' 290 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'ROLLBACK'" 291 | exit 1 292 | fi 293 | echo "Commit transaction" 294 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'COMMIT'" 295 | if [ ${?} != 0 ]; then 296 | echo 'ERROR: failed starting transaction, please check!' 297 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'ROLLBACK'" 298 | exit 1 299 | fi 300 | echo "Flush (apply) privileges" 301 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'FLUSH PRIVILEGES'" 302 | if [ ${?} != 0 ]; then 303 | echo 'ERROR: flush failed, please check!' 
304 | exit 1 305 | fi 306 | echo "Saving dbinfo to ${VHOST_DIR}/${DOMAIN}/dbinfo/${DBNAME}" 307 | mkdir -p "${VHOST_DIR}/${DOMAIN}/dbinfo" 308 | cat << EOF > "${VHOST_DIR}/${DOMAIN}/dbinfo/${DBNAME}" 309 | DB NAME: ${DBNAME} 310 | DB USER: ${DBUSER} 311 | DB PASS: ${DBPASS} 312 | EOF 313 | echo "DB NAME: ${DBNAME}" 314 | echo "DB USER: ${DBUSER}" 315 | echo "DB PASS: ${DBPASS}" 316 | } 317 | 318 | ################## delete database and database user 319 | function xshok_database_delete () { #database 320 | DBNAME="${1}" 321 | #get valid domain from database name 322 | FILTERED_DOMAIN=${DBNAME%-*} 323 | DOMAIN="${FILTERED_DOMAIN//_/.}" 324 | xshok_container_is_running "$CONTAINER_MYSQL" 325 | xshok_validate_domain "${DOMAIN}" 326 | 327 | if [ -f "${VHOST_DIR}/${DOMAIN}/dbinfo/${DBNAME}" ] ; then 328 | echo "Database Info File Found" 329 | fi 330 | 331 | if docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -s -N -e \"SELECT IF(EXISTS (SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = '${DBNAME}'), 'yes','no')\"" | grep -q "yes" ; then 332 | echo "DATABASE: ${DBNAME}" 333 | else 334 | echo "ERROR: Database does not exist" 335 | exit 1 336 | fi 337 | 338 | echo "Start Transaction" 339 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'START TRANSACTION'" 340 | if [ ${?} != 0 ]; then 341 | echo 'ERROR: failed starting transaction, please check!' 342 | exit 1 343 | fi 344 | echo "- Drop the database" 345 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'DROP DATABASE IF EXISTS \`${DBNAME}\`'" 346 | if [ ${?} != 0 ]; then 347 | echo 'ERROR: assign permissions failed, please check!' 348 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'ROLLBACK'" 349 | exit 1 350 | fi 351 | echo "- Delete the user" 352 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe \"DELETE FROM mysql.global_priv WHERE User = '${DBNAME}'\"" 353 | if [ ${?} != 0 ]; then 354 | echo 'ERROR: assign permissions failed, please check!' 355 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'ROLLBACK'" 356 | exit 1 357 | fi 358 | echo "Commit transaction" 359 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'COMMIT'" 360 | if [ ${?} != 0 ]; then 361 | echo 'ERROR: failed starting transaction, please check!' 362 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'ROLLBACK'" 363 | exit 1 364 | fi 365 | echo "Flush (apply) privileges" 366 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe 'FLUSH PRIVILEGES'" 367 | if [ ${?} != 0 ]; then 368 | echo 'ERROR: flush failed, please check!' 
369 | exit 1 370 | fi 371 | echo "Remove dbinfo for ${DBNAME}" 372 | rm -f "${VHOST_DIR}/${DOMAIN}/dbinfo/${DBNAME}" 373 | 374 | } 375 | 376 | ################## reset password for database 377 | function xshok_database_password () { #database 378 | DBNAME="${1}" 379 | #get valid domain from database name 380 | FILTERED_DOMAIN=${DBNAME%-*} 381 | DOMAIN="${FILTERED_DOMAIN//_/.}" 382 | xshok_container_is_running "$CONTAINER_MYSQL" 383 | xshok_validate_domain "${DOMAIN}" 384 | 385 | if [ -f "${VHOST_DIR}/${DOMAIN}/dbinfo/${DBNAME}" ] ; then 386 | echo "Database Info File Found" 387 | fi 388 | 389 | if docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -s -N -e \"SELECT IF(EXISTS (SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = '${DBNAME}'), 'yes','no')\"" | grep -q "yes" ; then 390 | echo "DATABASE: ${DBNAME}" 391 | else 392 | echo "ERROR: Database does not exist" 393 | exit 1 394 | fi 395 | 396 | DBUSER="$DBNAME" 397 | DBPASS="$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)" 398 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe \"ALTER USER '${DBUSER}'@'%' IDENTIFIED BY '${DBPASS}'\"" 399 | if [ ${?} != 0 ]; then 400 | echo 'ERROR: password set failed, please check!' 401 | exit 1 402 | fi 403 | echo "Saving sql to ${VHOST_DIR}/${DOMAIN}/sql/${DBNAME}" 404 | mkdir -p "${VHOST_DIR}/${DOMAIN}/dbinfo" 405 | cat << EOF > "${VHOST_DIR}/${DOMAIN}/dbinfo/${DBNAME}" 406 | DB NAME: ${DBNAME} 407 | DB USER: ${DBUSER} 408 | DB PASS: ${DBPASS} 409 | EOF 410 | echo "DB NAME: ${DBNAME}" 411 | echo "DB USER: ${DBUSER}" 412 | echo "DB PASS: ${DBPASS}" 413 | } 414 | 415 | ################## backup database 416 | function xshok_backup_database () { #database #filename*optional 417 | DBNAME="${1}" 418 | DBFILENAME="${2}" 419 | #get valid domain from database name 420 | FILTERED_DOMAIN=${DBNAME%-*} 421 | DOMAIN="${FILTERED_DOMAIN//_/.}" 422 | xshok_container_is_running "$CONTAINER_MYSQL" 423 | xshok_validate_domain "${DOMAIN}" 424 | 425 | if [ "$TIMESTAMP_SQL_BACKUP" == "yes" ] ; then 426 | echo "Using a timestamp for the backup name" 427 | TIMESTAMP="-$(date +%Y-%m-%d-%H-%M-%S)" 428 | else 429 | TIMESTAMP="" 430 | fi 431 | 432 | if [ "$DBFILENAME" == "" ] ; then 433 | DBFILENAME="${VHOST_DIR}/${DOMAIN}/sql/${DBNAME}${TIMESTAMP}.sql.gz" 434 | fi 435 | 436 | if docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -s -N -e \"SELECT IF(EXISTS (SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = '${DBNAME}'), 'yes','no')\"" | grep -q "yes" ; then 437 | echo "DATABASE: ${DBNAME}" 438 | else 439 | echo "ERROR: Database does not exist" 440 | exit 1 441 | fi 442 | 443 | mkdir -p "${VHOST_DIR}/${DOMAIN}/sql" 444 | 445 | if [ -f "${DBFILENAME}" ] ; then 446 | echo "Previous Backup Found, overwriting" 447 | fi 448 | 449 | if [ ${DBFILENAME##*.} == "gz" ] ; then 450 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysqldump -uroot -p'${MYSQL_ROOT_PASSWORD}' --net_buffer_length 4096 --no-create-info --single-transaction \"${DBNAME}\"" | gzip -9 > "${DBFILENAME}" 451 | if [ ${?} != 0 ]; then 452 | echo 'ERROR: backup failed, please check!' 453 | exit 1 454 | fi 455 | else 456 | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysqldump -uroot -p'${MYSQL_ROOT_PASSWORD}' --net_buffer_length 4096 --no-create-info --single-transaction \"${DBNAME}\"" > "${DBFILENAME}" 457 | if [ ${?} != 0 ]; then 458 | echo 'ERROR: backup failed, please check!' 
459 | exit 1 460 | fi 461 | fi 462 | echo "Backup saved to: ${DBFILENAME}" 463 | } 464 | 465 | ################## backup all database 466 | function xshok_backup_all_database () { #path*optional 467 | DBPATH="${1}" 468 | xshok_container_is_running "$CONTAINER_MYSQL" 469 | 470 | database_list="$(docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -qfNsBe \"SHOW DATABASES\"" | xargs)" 471 | 472 | for DBNAME in $database_list; do 473 | if [ "$DBNAME" != "" ] && [ "$DBNAME" != "information_schema" ] && [ "$DBNAME" != "performance_schema" ] && [ "$DBNAME" != "mysql" ] ; then 474 | echo ":------${DBNAME}-----:" 475 | if [ -z "$DBPATH" ] ; then 476 | result="$(xshok_backup_database "$DBNAME")" 477 | else 478 | result="$(xshok_backup_database "$DBNAME" "${DBPATH}/${DBNAME}.sql.gz")" 479 | fi 480 | echo $result 481 | fi 482 | done 483 | } 484 | 485 | ################## restore database 486 | function xshok_database_restore () { #database #filename 487 | DBNAME="${1}" 488 | DBFILENAME="${2}" 489 | #get valid domain from database name 490 | FILTERED_DOMAIN=${DBNAME%-*} 491 | DOMAIN="${FILTERED_DOMAIN//_/.}" 492 | xshok_container_is_running "$CONTAINER_MYSQL" 493 | xshok_validate_domain "${DOMAIN}" 494 | 495 | if [ "$DBFILENAME" == "" ] ; then 496 | echo "ERROR: empty filename, please add the backup file name after the database name" 497 | exit 1 498 | fi 499 | if [ -f "${DBFILENAME}" ] ; then 500 | echo "FILE: ${DBFILENAME}" 501 | elif [ -f "${VHOST_DIR}/${DOMAIN}/${DBFILENAME}" ] ; then 502 | echo "FILE: ${VHOST_DIR}/${DOMAIN}/${DBFILENAME}" 503 | DBFILENAME="${VHOST_DIR}/${DOMAIN}/${DBFILENAME}" 504 | elif [ -f "${VHOST_DIR}/${DOMAIN}/sql/${DBFILENAME}" ] ; then 505 | echo "FILE: ${VHOST_DIR}/${DOMAIN}/sql/${DBFILENAME}" 506 | DBFILENAME="${VHOST_DIR}/${DOMAIN}/sql/${DBFILENAME}" 507 | else 508 | echo "ERROR: file not found ${DBFILENAME}" 509 | exit 1 510 | fi 511 | 512 | if docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' -s -N -e \"SELECT IF(EXISTS (SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = '${DBNAME}'), 'yes','no')\"" | grep -q "yes" ; then 513 | echo "DATABASE: ${DBNAME}" 514 | else 515 | echo "ERROR: Database does not exist" 516 | exit 1 517 | fi 518 | 519 | TEMPSQL="/tmp/xs-sql-xs-$(date +%s).sql" 520 | 521 | # create tempsql 522 | if [ ${DBFILENAME##*.} == "gz" ] ; then 523 | zcat "${DBFILENAME}" > "${TEMPSQL}" 524 | else 525 | cp -f "${DBFILENAME}" "${TEMPSQL}" 526 | fi 527 | 528 | # filter (remove) create database, alter database, drop database and use statements from the restore sql file 529 | sed -i -e 's/^[[:blank:]]*CREATE DATABASE.*//Ig' "${TEMPSQL}" 530 | sed -i -e 's/^[[:blank:]]*ALTER DATABASE.*//Ig' "${TEMPSQL}" 531 | sed -i -e 's/^[[:blank:]]*DROP DATABASE.*//Ig' "${TEMPSQL}" 532 | sed -i -e 's/^[[:blank:]]*USE.*//Ig' "${TEMPSQL}" 533 | 534 | if [ -f "${TEMPSQL}" ] ; then 535 | cat "${TEMPSQL}" | docker-compose exec -T ${CONTAINER_MYSQL} su -c "mysql -uroot -p'${MYSQL_ROOT_PASSWORD}' \"${DBNAME}\"" 536 | if [ ${?} != 0 ]; then 537 | echo 'ERROR: restore failed, please check!'
538 | exit 1 539 | fi 540 | else 541 | echo "ERROR: TEMPSQL does not exist: ${TEMPSQL}" 542 | exit 1 543 | fi 544 | 545 | # remove tempsql 546 | rm -f "${TEMPSQL}" 547 | 548 | echo "Restored ${DBFILENAME} to ${DBNAME} for ${DOMAIN}" 549 | } 550 | 551 | ################# DATABASE FUNCTIONS :: END 552 | 553 | ################# SSL FUNCTIONS :: START 554 | 555 | function xshok_ssl_list () { 556 | echo "SSL List" 557 | echo "---- vhost -------- subdomains ----" 558 | cat "${ACME_DOMAIN_LIST}" 559 | 560 | } 561 | function xshok_ssl_add () { #domain 562 | xshok_validate_domain "${1}" 563 | if grep -q "^${DOMAIN}" "${ACME_DOMAIN_LIST}" ; then 564 | echo "Warning: SSL already exists for ${DOMAIN} www.${DOMAIN}, Check ${ACME_DOMAIN_LIST}" 565 | else 566 | echo "Adding SSL for ${DOMAIN} www.${DOMAIN} " 567 | echo "${DOMAIN} www.${DOMAIN}" >> "${ACME_DOMAIN_LIST}" 568 | fi 569 | } 570 | 571 | function xshok_ssl_delete () { 572 | xshok_validate_domain "${1}" 573 | if ! grep -q "^${DOMAIN}" "${ACME_DOMAIN_LIST}" ; then 574 | echo "ERROR: SSL for ${DOMAIN} does NOT exist, Check ${ACME_DOMAIN_LIST}" 575 | exit 1 576 | fi 577 | echo "Removing SSL ${DOMAIN} www.${DOMAIN} " 578 | sed -i "/${DOMAIN} .*/d" /datastore/volumes/acme/domain_list.txt 579 | echo "Remeber to remove the dir: ${VHOST_DIR}/${DOMAIN}/certs" 580 | } 581 | 582 | ################# SSL FUNCTIONS :: END 583 | 584 | ################# ADVANCED FUNCTIONS :: START 585 | 586 | function xshok_website_warm_cache () { #domain 587 | xshok_container_is_running "$CONTAINER_OLS" 588 | xshok_validate_domain "${1}" 589 | wget --quiet "https://${DOMAIN}/sitemap.xml" --no-cache --output-document - | egrep -o "http(s?):\/\/${DOMAIN[^]} \"\(\)\]*" | while read line; do 590 | time curl -A 'Cache Warmer' -s -L $line > /dev/null 2>&1 591 | echo $line 592 | done 593 | } 594 | 595 | function xshok_docker_mysql_optimiser () { 596 | xshok_container_is_running "$CONTAINER_MYSQL" 597 | ## works, but needs refactoring .. ie own docker image 598 | docker-compose exec -T ${CONTAINER_MYSQL} /bin/bash -c 'apt-get update && apt-get install -y wget perl && wget http://mysqltuner.pl/ -O /tmp/mysqltuner.pl && perl /tmp/mysqltuner.pl --host 127.0.0.1 --user root --pass ${MYSQL_ROOT_PASSWORD}' 599 | #docker-compose exec -T mysql /bin/bash -c '/usr/bin/mysqlcheck --host 127.0.0.1 --user root --password=${MYSQL_ROOT_PASSWORD} --all-databases --optimize --skip-write-binlog' 600 | } 601 | 602 | # Check for a new version 603 | function check_new_version() { 604 | found_upgrade="no" 605 | # shellcheck disable=SC2086 606 | latest_version="$(curl --compressed --connect-timeout "30" --remote-time --location --retry "3" --max-time "30" "https://raw.githubusercontent.com/extremeshok/docker-webserver/master/xshok-admin.sh" 2> /dev/null | grep "^script_version=" | head -n1 | cut -d '"' -f 2)" 607 | if [ "$latest_version" != "" ] ; then 608 | # shellcheck disable=SC2183,SC2086 609 | if [ "$(printf "%02d%02d%02d%02d" ${latest_version//./ })" -gt "$(printf "%02d%02d%02d%02d" ${script_version//./ })" ] ; then 610 | echo "------------------------------" 611 | echo "ALERT: New version : v${latest_version} @ https://github.com/extremeshok/docker-webserver" 612 | found_upgrade="yes" 613 | fi 614 | fi 615 | 616 | if [ "$found_upgrade" == "yes" ] ; then 617 | echo "Quickly upgrade, run the following command as root:" 618 | echo "bash xshok-admin.sh --upgrade" 619 | fi 620 | 621 | } 622 | 623 | 624 | # Auto upgrade the master.conf and the 625 | function xshok_upgrade() { 626 | 627 | if ! 
xshok_is_root ; then 628 | echo "ERROR: Only root can run the upgrade" 629 | exit 1 630 | fi 631 | 632 | echo "Checking for updates ..." 633 | 634 | found_upgrade="no" 635 | latest_version="$(curl --compressed --connect-timeout "30" --remote-time --location --retry "3" --max-time "30" "https://raw.githubusercontent.com/extremeshok/docker-webserver/master/xshok-admin.sh" 2> /dev/null | grep "^script_version=" | head -n1 | cut -d '"' -f 2)" 636 | 637 | if [ "$latest_version" != "" ] ; then 638 | # shellcheck disable=SC2183,SC2086 639 | if [ "$(printf "%02d%02d%02d%02d" ${latest_version//./ })" -gt "$(printf "%02d%02d%02d%02d" ${script_version//./ })" ] ; then 640 | found_upgrade="yes" 641 | echo "ALERT: Upgrading script from v${script_version} to v${latest_version}" 642 | echo "Downloading https://raw.githubusercontent.com/extremeshok/docker-webserver/master/xshok-admin.sh" 643 | 644 | curl --fail --compressed --connect-timeout "30" --remote-time --location --retry "3" --max-time "30" --time-cond "/datastore/xshok-admin.sh.tmp" --output "/datastore/xshok-admin.sh.tmp" "https://raw.githubusercontent.com/extremeshok/docker-webserver/master/xshok-admin.sh" 2> /dev/null 645 | ret=$? 646 | if [ "$ret" -ne 0 ] ; then 647 | echo "ERROR: Could not download https://raw.githubusercontent.com/extremeshok/docker-webserver/master/xshok-admin.sh" 648 | exit 1 649 | fi 650 | # Check that the entire script was downloaded, fail if the script is missing contents 651 | if [ "$(tail -n 1 "/datastore/xshok-admin.sh.tmp" | head -n 1 | cut -c 1-7)" != "exit \$?" ] ; then 652 | echo "ERROR: Downloaded xshok-admin.sh is incomplete, please re-run" 653 | exit 1 654 | fi 655 | # Copy over permissions from old version 656 | OCTAL_MODE="$(stat -c "%a" "/datastore/xshok-admin.sh")" 657 | 658 | echo "Inserting update process..." 659 | # Generate the update script 660 | cat > "/datastore/xshok_update_script.sh" << EOF 661 | #!/usr/bin/env bash 662 | echo "Running update process" 663 | # Overwrite old file with new 664 | if ! mv -f "/datastore/xshok-admin.sh.tmp" "/datastore/xshok-admin.sh" ; then 665 | echo "ERROR: failed moving /datastore/xshok-admin.sh.tmp to /datastore/xshok-admin.sh" 666 | rm -f \$0 667 | exit 1 668 | fi 669 | if !
chmod "$OCTAL_MODE" "/datastore/xshok-admin.sh" ; then 670 | echo "ERROR: unable to set permissions on /datastore/xshok-admin.sh" 671 | rm -f \$0 672 | exit 1 673 | fi 674 | echo "Completed" 675 | 676 | #remove the tmp script before exit 677 | rm -f \$0 678 | EOF 679 | # Replaced with $0, so code will update and then call itself with the same parameters it had 680 | #exec "${0}" "$@" 681 | bash_bin="$(command -v bash 2> /dev/null)" 682 | exec "$bash_bin" "/datastore/xshok_update_script.sh" 683 | echo "Running once as root" 684 | fi 685 | fi 686 | 687 | if [ "$found_upgrade" == "no" ] ; then 688 | echo "No updates available" 689 | fi 690 | } 691 | 692 | 693 | ################# ADVANCED FUNCTIONS :: END 694 | 695 | 696 | ################# GENERAL FUNCTIONS :: START 697 | 698 | ################# docker start 699 | function xshok_docker_up () { 700 | 701 | #Automatically create required volume dirs 702 | ## remove all comments 703 | TEMP_COMPOSE="/tmp/xs_$(date +"%s")" 704 | sed -e '1{/^#!/ {p}}; /^[\t\ ]*#/d;/\.*#.*/ {/[\x22\x27].*#.*[\x22\x27]/ !{:regular_loop s/\(.*\)*[^\]#.*/\1/;t regular_loop}; /[\x22\x27].*#.*[\x22\x27]/ {:special_loop s/\([\x22\x27].*#.*[^\x22\x27]\)#.*/\1/;t special_loop}; /\\#/ {:second_special_loop s/\(.*\\#.*[^\]\)#.*/\1/;t second_special_loop}}' "${PWD}/docker-compose.yml" > "$TEMP_COMPOSE" 705 | mkdir -p "${PWD}/volumes/" 706 | VOLUMEDIR_ARRAY=$(grep "device:.*\${PWD}/volumes/" "$TEMP_COMPOSE") 707 | for VOLUMEDIR in $VOLUMEDIR_ARRAY ; do 708 | if [[ $VOLUMEDIR =~ "\${PWD}" ]]; then 709 | VOLUMEDIR="${VOLUMEDIR/\$\{PWD\}\//}" 710 | if [ ! -d "$VOLUMEDIR" ] ; then 711 | if [ ! -f "$VOLUMEDIR" ] && [[ $VOLUMEDIR != *.* ]] ; then 712 | echo "Creating dir: $VOLUMEDIR" 713 | mkdir -p "$VOLUMEDIR" 714 | chmod 777 "$VOLUMEDIR" 715 | elif [ ! -d "$VOLUMEDIR" ] && [[ $VOLUMEDIR == *.* ]] ; then 716 | echo "Creating file: $VOLUMEDIR" 717 | touch -p "$VOLUMEDIR" 718 | fi 719 | fi 720 | fi 721 | done 722 | rm -f "$TEMP_COMPOSE" 723 | 724 | docker-compose down --remove-orphans 725 | # detect if there are any running containers and manually stop and remove them 726 | if docker ps -q 2> /dev/null ; then 727 | docker stop $(docker ps -q) 728 | sleep 1 729 | docker rm $(docker ps -q) 730 | fi 731 | 732 | docker-compose pull --include-deps 733 | docker-compose up -d --force-recreate --build 734 | 735 | } 736 | 737 | ################# docker stop 738 | function xshok_docker_down () { 739 | docker-compose down 740 | sync 741 | docker stop $(docker ps -q) 742 | sync 743 | } 744 | 745 | ################## restart webserver 746 | function xshok_restart () { 747 | xshok_container_is_running "$CONTAINER_OLS" 748 | echo "Gracefully restarting web server with zero down time" 749 | docker-compose exec -T ${CONTAINER_OLS} su -c '/usr/local/lsws/bin/lswsctrl restart >/dev/null' 750 | } 751 | 752 | ################## generate new password for web admin 753 | function xshok_password () { 754 | xshok_container_is_running "$CONTAINER_OLS" 755 | ADMINPASS="$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)" 756 | 757 | docker-compose exec -T ${CONTAINER_OLS} su -c 'echo "admin:$(/usr/local/lsws/admin/fcgi-bin/admin_php* -q /usr/local/lsws/admin/misc/htpasswd.php '${ADMINPASS}')" > /usr/local/lsws/admin/conf/htpasswd'; 758 | if [ ${?} != 0 ]; then 759 | echo 'ERROR: password set failed, please check!' 
760 | exit 1 761 | fi 762 | echo "WEBADMIN: https://$(hostname -f):7080" 763 | echo "WEBADMIN USER: admin" 764 | echo "WEBADMIN PASS: ${ADMINPASS}" 765 | } 766 | 767 | ################# docker boot 768 | function xshok_docker_boot () { 769 | 770 | if [ ! -d "/etc/systemd/system/" ] ; then 771 | echo "ERROR: systemd not detected" 772 | exit 1 773 | fi 774 | if [ ! -f "${PWD}/xshok-admin.sh" ] ; then 775 | echo "ERROR: ${PWD}/xshok-admin.sh not detected" 776 | exit 1 777 | fi 778 | 779 | if [ -f "/etc/systemd/system/docker-datastore.service" ] ; then 780 | echo "removing conflicting systemd docker-datastore.service" 781 | systemctl disable docker-datastore.service 782 | rm -f /etc/systemd/system/docker-datastore.service 783 | fi 784 | 785 | echo "Generating Systemd service" 786 | cat << EOF > "/etc/systemd/system/xshok-webserver.service" 787 | [Unit] 788 | Description=eXtremeSHOK Webserver Service 789 | Requires=docker.service 790 | After=docker.service 791 | 792 | [Install] 793 | WantedBy=multi-user.target 794 | 795 | [Service] 796 | Type=forking 797 | TimeoutStartSec=0 798 | RemainAfterExit=yes 799 | WorkingDirectory=${PWD} 800 | 801 | EOF 802 | 803 | 804 | echo "ExecStart=/bin/bash ${PWD}/xshok-admin.sh --start" >> "/etc/systemd/system/xshok-webserver.service" 805 | echo "ExecStop=/bin/bash ${PWD}/xshok-admin.sh --stop" >> "/etc/systemd/system/xshok-webserver.service" 806 | echo "ExecReload=/bin/bash ${PWD}/xshok-admin.sh --reload" >> "/etc/systemd/system/xshok-webserver.service" 807 | 808 | echo "Created: /etc/systemd/system/xshok-webserver.service" 809 | systemctl daemon-reload 810 | systemctl enable xshok-webserver 811 | 812 | echo "Available Systemd Commands:" 813 | echo "Start-> systemctl start xshok-webserver" 814 | echo "Stop-> systemctl stop xshok-webserver" 815 | echo "Reload-> systemctl reload xshok-webserver" 816 | 817 | } 818 | 819 | ################## generate env 820 | function xshok_env () { 821 | echo "Generating .env" 822 | if [ ! -f "${PWD}/default.env" ] ; then 823 | echo "missing ${PWD}/default.env" 824 | exit 1 825 | fi 826 | cp -f "${PWD}/default.env" "${PWD}/.env" 827 | cat << EOF >> "${PWD}/.env" 828 | # ------------------------------ 829 | # SQL database configuration 830 | # ------------------------------ 831 | MYSQL_DATABASE=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 8 | head -n 1) 832 | MYSQL_USER=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 8 | head -n 1) 833 | MYSQL_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1) 834 | MYSQL_ROOT_PASSWORD=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1) 835 | # ------------------------------ 836 | EOF 837 | echo "Done" 838 | } 839 | 840 | ################# GENERAL FUNCTIONS :: END 841 | 842 | if [ !
-f "${PWD}/.env" ] ; then 843 | echo "missing .env" 844 | xshok_env 845 | fi 846 | 847 | source "${PWD}/.env" 848 | 849 | echo "eXtremeSHOK.com Docker Webserver ${script_version} (${script_version_date})" 850 | 851 | help_message () { 852 | echo -e "\033[1mWEBSITE OPTIONS\033[0m" 853 | echo "${EPACE}-wl | --website-list" 854 | echo "${EPACE}${EPACE} list all websites" 855 | echo "${EPACE}-wa | --website-add [domain_name]" 856 | echo "${EPACE}${EPACE} add a website" 857 | echo "${EPACE}-wd | --website-delete [domain_name]" 858 | echo "${EPACE}${EPACE} delete a website" 859 | echo "${EPACE}-wp | --website-permissions [domain_name]" 860 | echo "${EPACE}${EPACE} fix permissions and ownership of a website" 861 | echo -e "\033[1mDATABASE OPTIONS\033[0m" 862 | echo "${EPACE}-dl | --database-list [domain_name]" 863 | echo "${EPACE}${EPACE} list all databases for domain" 864 | echo "${EPACE}-da | --database-add [domain_name]" 865 | echo "${EPACE}${EPACE} add a database to domain, database name, user and pass autogenerated" 866 | echo "${EPACE}-dd | --database-delete [database_name]" 867 | echo "${EPACE}${EPACE} delete a database" 868 | echo "${EPACE}-dp | --database-password [database_name]" 869 | echo "${EPACE}${EPACE} reset the password for a database" 870 | echo "${EPACE}-dr | --database-restore [database_name] [/your/path/file_name]" 871 | echo "${EPACE}${EPACE} restore a database backup file to database_name, supports .gz and .sql" 872 | echo -e "\033[1mBACKUP OPTIONS\033[0m" 873 | echo "${EPACE}-ba | --backup-all [/your/path]*optional*" 874 | echo "${EPACE}${EPACE} backup all databases, optional backup path, file will use the default sql/databasename.sql.gz" 875 | echo "${EPACE}-bd | --backup-database [database_name] [/your/path/file_name]*optional*" 876 | echo "${EPACE}${EPACE} backup a database, optional backup filename, will use the default sql/databasename.sql.gz if not specified" 877 | echo -e "\033[1mSSL OPTIONS\033[0m" 878 | echo "${EPACE}-sl | --ssl-list" 879 | echo "${EPACE}${EPACE} list all ssl" 880 | echo "${EPACE}-sa | --ssl-add [domain_name]" 881 | echo "${EPACE}${EPACE} add ssl to a website" 882 | echo "${EPACE}-sd | --ssl-delete [domain_name]" 883 | echo "${EPACE}${EPACE} delete ssl from a website" 884 | echo -e "\033[1mQUICK OPTIONS\033[0m" 885 | echo "${EPACE}-qa | --quick-add [domain_name]" 886 | echo "${EPACE}${EPACE} add website, database, ssl, restart server" 887 | echo -e "\033[1mADVANCED OPTIONS\033[0m" 888 | echo "${EPACE}-wc | --warm-cache [domain_name]" 889 | echo "${EPACE}${EPACE} loads a website sitemap and visits each page, used to warm the cache" 890 | echo -e "\033[1mGENERAL OPTIONS\033[0m" 891 | echo "${EPACE}--up | --start | --init" 892 | echo "${EPACE}${EPACE} start xshok-webserver (will launch docker-compose.yml)" 893 | echo "${EPACE}--down | --stop" 894 | echo "${EPACE}${EPACE} stop all dockers and docker-compose" 895 | echo "${EPACE}-r | --restart" 896 | echo "${EPACE}${EPACE} gracefully restart openlitespeed with zero down time" 897 | echo "${EPACE}-b | --boot | --service | --systemd" 898 | echo "${EPACE}${EPACE} creates a systemd service to start docker and run docker-compose.yml on boot" 899 | echo "${EPACE}-p | --password" 900 | echo "${EPACE}${EPACE} generate and set a new web-admin password" 901 | echo "${EPACE}-e | --env" 902 | echo "${EPACE}${EPACE} generate a new .env from the default.env" 903 | echo "${EPACE}--upgrade" 904 | echo "${EPACE}${EPACE} upgrades the script to the latest version" 905 | echo "${EPACE}-H, --help" 906 | echo 
"${EPACE}${EPACE}Display help and exit." 907 | } 908 | 909 | if [ -z "${1}" ]; then 910 | help_message 911 | check_new_version 912 | exit 1 913 | fi 914 | while [ ! -z "${1}" ]; do 915 | case ${1} in 916 | -[hH] | --help ) 917 | help_message 918 | ;; 919 | # -da | --domain-add | --domainadd ) shift 920 | # xshok_domain_add ${1} 921 | # ;; 922 | # -dd | --domain-delete | --domaindelete ) shift 923 | # xshok_domain_delete ${1} 924 | # ;; 925 | ## WEBSITE 926 | -wl | --website-list | --websitelist ) 927 | xshok_website_list 928 | ;; 929 | -wa | --website-add | --websiteadd ) 930 | xshok_check_s2 "$2"; 931 | xshok_website_add "$2" 932 | xshok_restart 933 | shift 934 | ;; 935 | -wd | --website-delete | --websitedelete ) 936 | xshok_check_s2 "$2"; 937 | xshok_website_delete "$2" 938 | xshok_restart 939 | shift 940 | ;; 941 | -wp | --website-permissions | --websitepermissions ) 942 | xshok_check_s2 "$2"; 943 | xshok_website_permissions "$2" 944 | shift 945 | ;; 946 | ## DATABASE 947 | -dl | --database-list | --databaselist) 948 | xshok_check_s2 "$2"; 949 | xshok_database_list "$2" 950 | shift 951 | ;; 952 | -da | --database-add | --databaseadd) 953 | xshok_check_s2 "$2"; 954 | xshok_database_add "$2" 955 | shift 956 | ;; 957 | -dd | --database-delete | --databasedelete) 958 | xshok_check_s2 "$2"; 959 | xshok_database_delete "$2" 960 | shift 961 | ;; 962 | -dp | --database-password | --databasepassword) 963 | xshok_check_s2 "$2"; 964 | xshok_database_password "$2" 965 | shift 966 | ;; 967 | -dr | --database-restore | --databaserestore) 968 | xshok_check_s2 "$2"; 969 | xshok_check_s2 "$3"; 970 | xshok_database_restore "$2" "$3" 971 | shift 2 972 | ;; 973 | ## BACKUP 974 | -bd | --backup-database | --backupdatabase) 975 | xshok_check_s2 "$2"; 976 | xshok_check_s2 "$3"; 977 | xshok_backup_database "$2" "$3" 978 | shift 2 979 | ;; 980 | -ba | --backup-all | --backupall) 981 | xshok_backup_all_database "$2" 982 | shift 983 | ;; 984 | ## SSL 985 | -sl | --ssl-list | --ssllist ) 986 | xshok_ssl_list 987 | ;; 988 | -sa | --ssl-add | --ssladd ) 989 | xshok_check_s2 "$2"; 990 | xshok_ssl_add "$2" 991 | shift 992 | ;; 993 | -sd | --ssl-delete | --ssldelete ) 994 | xshok_check_s2 "$2"; 995 | xshok_ssl_delete "$2" 996 | shift 997 | ;; 998 | ## QUICK 999 | -qa | --quick-add | --quickadd ) 1000 | xshok_check_s2 "$2"; 1001 | xshok_website_add "$2" 1002 | xshok_database_add "$2" 1003 | xshok_ssl_add "$2" 1004 | xshok_restart 1005 | shift 1006 | ;; 1007 | ## ADVANCED 1008 | -wc | --warm-cache | --warmcache ) 1009 | xshok_check_s2 "$2"; 1010 | xshok_website_warm_cache "$2" 1011 | shift 1012 | ;; 1013 | ## GENERAL 1014 | --up | --start | --init ) 1015 | xshok_docker_up 1016 | shift 1017 | ;; 1018 | --down | --stop ) 1019 | xshok_docker_down 1020 | shift 1021 | ;; 1022 | -r | --restart ) 1023 | xshok_restart 1024 | shift 1025 | ;; 1026 | -b | --boot | --service | --systemd ) 1027 | xshok_docker_boot 1028 | shift 1029 | ;; 1030 | -p | --password ) 1031 | xshok_password 1032 | shift 1033 | ;; 1034 | --upgrade) 1035 | xshok_upgrade 1036 | shift 1037 | ;; 1038 | -e | --env ) 1039 | xshok_env 1040 | shift 1041 | ;; 1042 | *) 1043 | help_message 1044 | check_new_version 1045 | ;; 1046 | esac 1047 | shift 1048 | done 1049 | 1050 | # And lastly we exit, Note: the exit is always on the 2nd last line 1051 | exit $? 1052 | --------------------------------------------------------------------------------