├── .gitignore ├── LICENSE ├── README.md ├── config ├── nginx │ ├── cache │ ├── fastcgicache │ ├── fastcgicache_global │ ├── global_nginx_conf │ ├── gzip_conf │ ├── log_format_conf │ ├── mod_pagespeed │ └── nginx_conf ├── security │ ├── rotate_rules │ └── updates_rules ├── site.nginx.conf ├── site.nginx.proxy.conf ├── site.nginx.ssl.conf ├── site.nginx.vueapp.conf ├── site.nginx.vueapp.ssl.conf ├── site.php-fpm.conf └── versions │ ├── mariadb-version │ ├── nginx-version │ ├── node-version │ ├── openssl-version │ ├── pagespeed-version │ └── php-version ├── install ├── components │ ├── composer │ │ └── install │ ├── custom │ │ ├── install │ │ └── scripts │ │ │ └── 000-running-custom-scripts.sh │ ├── nvm │ │ ├── install │ │ └── shell-envs │ ├── pm2 │ │ └── install │ ├── postgresql │ │ ├── install │ │ └── pg_hba.conf │ ├── redis │ │ ├── install │ │ └── redis.conf │ └── upgrade-php │ │ ├── UPGRADE_PHP_README.md │ │ └── upgrade-php.sh └── install └── scripts ├── cert-cron-scripts └── .gitkeep ├── install-cert.sh ├── logrotate └── TEMPLATE ├── server-initial-setup.sh ├── setup-logrotate ├── setup-site └── setup-ssh /.gitignore: -------------------------------------------------------------------------------- 1 | SimpleDocker 2 | config/nginx/global_nginx_conf_custom 3 | TEMPLATE_CUSTOM 4 | config/versions/override* -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Angela Murrell 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Easy-to-Use LEMP Setup Guide 2 | 3 | 🙌 This guide offers a streamlined provisioning script to help you easily install a LEMP stack (Nginx, MariaDB/MySQL, PHP) on LTS Ubuntu (18.04, 20.04, 22.04). It also includes individual "adhoc" scripts for quickly setting up websites, managing logs, installing SSL certificates, and adding other helpful tools. 4 | 5 | You can also incorporate this LEMP setup guide into other provisioning scripts. See how to do it with [SimpleDocker](https://github.com/amurrell/SimpleDocker). 
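For example, another provisioning script could pin versions and run the installer with something like this — a minimal sketch of the clone-and-override workflow covered in [Setup Guide With Overrides](#setup-guide-with-overrides) below (the `8.0` value is only an illustration):

```
cd /var/www
git clone https://github.com/amurrell/LEMP-setup-guide.git
echo "8.0" > LEMP-setup-guide/config/versions/override-php-version
cd LEMP-setup-guide/scripts
chmod +x server-initial-setup.sh
./server-initial-setup.sh
```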
6 | 7 | ## Features 8 | 9 | ### Initial Installation 10 | - **Nginx**: Built from source, includes Pagespeed module 11 | - **MariaDB/MySQL**: Comes with MariaDB by default 12 | - **PHP**: Includes PHP-FPM for better performance 13 | 14 | ### Additional Scripts 15 | - **Website Setup**: Automates the setup of a new website, including configurations for Nginx, PHP, and database credentials. 16 | - **Log Management**: Helps manage server logs to prevent them from becoming too large. 17 | - **SSL Certificates**: Uses Certbot to automatically secure your site with SSL. 18 | - **Extra Tools**: Includes optional installations for Composer, NVM, PM2, Redis, and PostgreSQL for more tooling resources. 19 | 20 | ## Default Software Versions 21 | 22 | These are the versions installed by default, but you can easily change them: 23 | 24 | - MariaDB: 10.11 (LTS) 25 | - Nginx: 1.24.0 (Stable) 26 | - OpenSSL: 3.0.10 (LTS) 27 | - Pagespeed: 1.15.0.0-8917 (master branch) - _(recommend using 1.13.35.2-stable for nginx < 1.23.0)_ 28 | - PHP: 8.2 29 | 30 | ### Customizing Versions 31 | If you want to use different versions of the software, navigate to the `config/versions` folder. Create new files with the prefix `override-`, like `override-php-version`, and follow the [detailed setup guide](#setup-guide-with-overrides) to implement your changes. 32 | 33 | --- 34 | 35 | ## Quick start 36 | 37 | This method accepts the default versions. [Read the setup guide for using overrides here](#setup-guide-with-overrides). 38 | 39 | ### SSH into your server: 40 | 41 | This is where you want to setup the environment. 42 | 43 | Recommended to ssh into your server with agent forwarding ie `ssh root@yourip -A`. However, it is recommended to use deploy keys if you want to use git repos in automated provisioning scripts. 44 | 45 | ### CD & Paste: 46 | 47 | `cd /var/www/`, or where you want your websites to live. 48 | 49 | ``` 50 | sudo apt-get install wget 51 | wget https://raw.githubusercontent.com/amurrell/LEMP-setup-guide/main/install/install 52 | sudo chmod +x install 53 | ./install 54 | ``` 55 | 56 | Jump down to: [During & After The Script](#during--after-the-script) 57 | 58 | --- 59 | 60 | ## Setup Guide With Overrides 61 | 62 | ### SSH & cd into preferred folder 63 | 64 | First, `ssh` into your server and navigate to your website installation location, eg. `cd /var/www/`. 65 | 66 | Recommended to ssh into your server with agent forwarding ie `ssh root@yourip -A` if you are not planning to use deployment keys for your projects. 67 | 68 | ### Clone the repo 69 | 70 | ``` 71 | git clone https://github.com/amurrell/LEMP-setup-guide.git 72 | ``` 73 | 74 | ### Update your versions 75 | 76 | ``` 77 | cd LEMP-setup-guide/config/versions 78 | 79 | # see the services 80 | ls 81 | 82 | # look at one of them 83 | cat php-version 84 | 85 | # create override file 86 | echo "8.0" > override-php-version 87 | ``` 88 | 89 | ### Run the install script 90 | 91 | After you're done overriding versions, you can install! 92 | 93 | ``` 94 | cd scripts 95 | chmod +x server-initial-setup.sh 96 | ./server-initial-setup.sh 97 | ``` 98 | --- 99 | 100 | ### During & After The Script 101 | 102 | - You'll get prompted for your ssh public key (to setup authorized keys for easier ssh access) 103 | 104 | - (Only on 16.04 install,) You'll get prompted to setup mariadb/mysql password, just use "password" for now. 
105 | 106 | - After everything is installed, you can run `sudo mysql_secure_installation` (or `sudo mariadb_secure_installation`) and follow prompts to remove test databases, anonymous users, and change the root password to something more secure. 107 | 108 | - If you choose to **skip setting up a site**, you can always run the setup-site script later from `/var/www/LEMP-setup-guide/scripts/`. You can setup multiple sites using this script, one per run. 109 | 110 | --- 111 | 112 | ## Other Scripts & Components 113 | 114 | ### Scripts 115 | 116 | The following scripts are used "per site" that you want to setup on your server. They prompt and guide you through their functionality. 117 | 118 | - **setup-site** - sets up a site based on git repo, creates nginx / php as needed. 119 | 120 | This script can also take flags to support provisioning script use cases of this repo. 121 | 122 | For more info, run `./setup-site --help`, or the table below. 123 | 124 | ``` 125 | ./setup-site 126 | --domain=mysite.com 127 | --github=git@github... 128 | --deploy-subfolder=false 129 | --web-root-path=null 130 | --deploy-key-public-file=mysite-deploy-key.pub 131 | --deploy-key-private-file=mysite-deploy-key 132 | --php-pools=true 133 | --nginx-with-php=true 134 | --nginx-site-conf-path=/var/www/LEMP-setup-guide/config/site.nginx.conf (or site.nginx.vueapp.conf) 135 | --php-with-mysql=true 136 | --php-site-conf-path=/var/www/LEMP-setup-guide/config/site.php-fpm.conf 137 | --mysql-create-db=true 138 | --mysql-root-user=root 139 | --mysql-root-pass=1234 140 | --database-name=site_com 141 | --database-user=site.com 142 | --database-pass=cRaZyPaSs 143 | --database-host=localhost 144 | --database-port=3306 145 | ``` 146 | 147 | | Option | Description | Default Value | 148 | |--------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------| 149 | | `--domain=DOMAIN_NAME` | Specify domain e.g. `mysite.com` | None | 150 | | `--github=GITHUB_REPO` | Specify GitHub repo using SSH e.g. `git@github.com:youruser/yourrepo.git` | None | 151 | | `--deploy-subfolder` | Specify if you want to deploy a subfolder of the repo. Possible values: `true`, `false` or path e.g. `releases` | `false` | 152 | | `--web-root-path=PATH` | Specify the path to the web root path within your "domain" folder or repo. | Blank (index file should be directly in the folder) | 153 | | `--owner-user=USER` | Specify the owner user (used for deploy & ssh key ownership) | Result of `whoami` | 154 | | `--deploy-key-public-file=PATH`| Specify the path to the public deploy key file | None | 155 | | `--deploy-key-private-file=PATH`| Specify the path to the private deploy key file | None | 156 | | `--php-pools` | Specify if you want to set up PHP pools. Possible values: `true` or `false` | `false` | 157 | | `--nginx-with-php` | Specify if you want to set up Nginx with PHP upstreams. Possible values: `true` or `false` | `false` | 158 | | `--nginx-site-conf-path=PATH` | Specify the path to the Nginx site conf file | `/var/www/LEMP-setup-guide/config/site.nginx.conf` (or `site.nginx.vueapp.conf`) | 159 | | `--php-with-mysql` | Specify if you want to set up PHP with MySQL env vars. 
Possible values: `true` or `false` | `false` | 160 | | `--php-site-conf-path=PATH` | Specify the path to the PHP site conf file | `/var/www/LEMP-setup-guide/config/site.php-fpm.conf` | 161 | | `--mysql-create-db` | Specify if you want to set up a MySQL database. Possible values: `true` or `false` | `false` | 162 | | `--mysql-root-user=USER` | Specify the MySQL root user | `root` | 163 | | `--mysql-root-pass=PASS` | Specify the MySQL root pass | `1234` | 164 | | `--database-name=NAME` | Specify the database name | None | 165 | | `--database-user=USER` | Specify the database user | None | 166 | | `--database-pass=PASS` | Specify the database password | None | 167 | | `--database-host=HOST` | Specify the database host | `localhost` | 168 | | `--database-port=PORT` | Specify the database port | `3306` | 169 | | `--help` | Display the help message and exit | None | 170 | 171 | 172 | - **setup-logrotate** (needs logrotate command and syslog user) 173 | - **install-cert** - sets up certbot for ssl on your site, with option to update nginx or not - creates a cronjob to keep fetching. ideal if you want control over how certbot affects nginx conf files. 174 | 175 | --- 176 | 177 | ### Installable Components 178 | There are also **components** in the `install` folder, which allow you to install other specific common tools, as well as your own custom scripts. 179 | 180 | - composer 181 | - pm2 182 | - nvm 183 | - redis 184 | - postgressql 185 | - custom 186 | - upgrade-php (see [Upgrade PHP Readme for help](./install/components/upgrade-php/UPGRADE_PHP_README.md)) 187 | 188 | ### Custom Scripts 189 | 190 | The custom scripts have a `install` file that will loop through scripts in the `scripts` folder. You can name these with numbers to create an order of when they will run. 191 | 192 | - **install** - (running this will loop through scripts in `/scripts` 193 | - **/scripts** - Add bash scripts here, make sure to `chmod +x` them. eg. `000-running-custom-scripts.sh` 194 | 195 | --- 196 | 197 | ## Use SimpleDocker to test script 198 | 199 | [SimpleDocker](https://github.com/amurrell/SimpleDocker) is a blank canvas for an ubuntu Docker Container that you can use to develop and test provisioning scripts. It comes with a few helpful tools to get you started and to test your scripts efficiently. 200 | 201 | SimpleDocker comes with a "pre-run" example of LEMP-setup-guide's initial installation script. You can use this to test your own changes to the script, or to test your custom scripts. The "pre-run" scripts get cached into the docker container so that it's easy to up/down without waiting beyond the original script. 202 | 203 | Note: 204 | 205 | If testing with simple docker, you'll need to install `apt-get -y install rsyslog` and `apt-get -y install logrotate` to use setup-logrorate. 
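For example, inside a SimpleDocker (plain Ubuntu) container you could prepare and then run the logrotate script like this — a sketch: the packages are the ones noted above, and the script path mirrors the setup-site example earlier.

```
apt-get update
apt-get -y install rsyslog logrotate   # provides the logrotate command and the syslog user

cd /var/www/LEMP-setup-guide/scripts
chmod +x setup-logrotate
./setup-logrotate
```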
206 | -------------------------------------------------------------------------------- /config/nginx/cache: -------------------------------------------------------------------------------- 1 | location ~* .(jpg|jpeg|png|gif|ico|css|js)$ { 2 | expires 365d; 3 | } -------------------------------------------------------------------------------- /config/nginx/fastcgicache: -------------------------------------------------------------------------------- 1 | set $skip_cache 1; 2 | # POST requests and urls with a query string should always go to PHP 3 | if ($request_method = POST) { 4 | set $skip_cache 1; 5 | } 6 | if ($query_string != "") { 7 | set $skip_cache 1; 8 | } 9 | # Dont cache uris containing the following segments 10 | if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { 11 | set $skip_cache 1; 12 | } 13 | # Dont use the cache for logged in users or recent commenters 14 | if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { 15 | set $skip_cache 1; 16 | } 17 | location ~ \.php$ { 18 | include fastcgi.conf; 19 | fastcgi_pass unix:/run/php/php7.2-fpm.sock; 20 | try_files $uri =404; 21 | fastcgi_split_path_info ^(.+\.php)(/.+)$; 22 | 23 | fastcgi_cache_bypass $skip_cache; 24 | fastcgi_no_cache $skip_cache; 25 | fastcgi_cache microcache; 26 | fastcgi_cache_valid 60m; 27 | } -------------------------------------------------------------------------------- /config/nginx/fastcgicache_global: -------------------------------------------------------------------------------- 1 | fastcgi_cache_path /var/nginx_cache/fcgi levels=1:2 keys_zone=microcache:100m inactive=60m; 2 | fastcgi_cache_key "$scheme$request_method$host$request_uri"; 3 | fastcgi_cache_use_stale updating error timeout invalid_header http_500; 4 | fastcgi_ignore_headers Cache-Control Expires Set-Cookie; -------------------------------------------------------------------------------- /config/nginx/global_nginx_conf: -------------------------------------------------------------------------------- 1 | user www-data www-data; 2 | worker_processes WORKER_PROCESSES; 3 | events { 4 | worker_connections WORKER_CONNECTIONS; 5 | } 6 | http { 7 | include /etc/nginx/mime.types; 8 | default_type application/octet-stream; 9 | server_tokens off; 10 | sendfile on; 11 | #tcp_nopush on; 12 | #log format 13 | include /etc/nginx/conf/log_format.conf; 14 | # Gzip configuration 15 | include /etc/nginx/conf/gzip.conf; 16 | #PHP and FastCGI cache 17 | include /etc/nginx/conf/fastcgicache_global.conf; 18 | # Add my servers 19 | include /etc/nginx/sites/*; 20 | # Buffers 21 | client_body_buffer_size 16K; 22 | client_header_buffer_size 2k; 23 | client_max_body_size 64m; 24 | large_client_header_buffers 4 32k; 25 | 26 | # Proxy Buffers 27 | proxy_buffer_size 128k; 28 | proxy_buffers 4 256k; 29 | proxy_busy_buffers_size 256k; 30 | proxy_max_temp_file_size 0; 31 | 32 | # Timeouts 33 | client_body_timeout 12; 34 | client_header_timeout 12; 35 | keepalive_timeout 15; 36 | send_timeout 10; 37 | # Log off 38 | access_log off; 39 | 40 | # SSL TLS versions 41 | ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3; 42 | ssl_prefer_server_ciphers on; 43 | } 44 | -------------------------------------------------------------------------------- /config/nginx/gzip_conf: -------------------------------------------------------------------------------- 1 | gzip on; 2 | gzip_disable "msie6"; 3 | gzip_vary on; 4 | gzip_proxied any; 5 | gzip_comp_level 6; 6 | gzip_buffers 16 8k; 7 | 
gzip_http_version 1.1; 8 | gzip_types text/plain text/css text/xml application/xml application/javascript application/x-javascript text/javascript; -------------------------------------------------------------------------------- /config/nginx/log_format_conf: -------------------------------------------------------------------------------- 1 | log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" ' 2 | '$status $body_bytes_sent "$http_referer" ' 3 | '"$http_user_agent" "$http_x_forwarded_for" ' 4 | '"$host" sn="$server_name" ' 5 | 'rt=$request_time ' 6 | 'ua="$upstream_addr" us="$upstream_status" ' 7 | 'ut="$upstream_response_time" ul="$upstream_response_length" ' 8 | 'cs=$upstream_cache_status' ; 9 | -------------------------------------------------------------------------------- /config/nginx/mod_pagespeed: -------------------------------------------------------------------------------- 1 | pagespeed on; 2 | pagespeed RewriteLevel PassThrough; 3 | pagespeed FetchHttps enable; 4 | pagespeed EnableFilters add_head; 5 | pagespeed CriticalImagesBeaconEnabled false; 6 | pagespeed RespectXForwardedProto on; 7 | # pagespeed EnableFilters combine_css; 8 | # pagespeed EnableFilters rewrite_css; 9 | # pagespeed EnableFilters fallback_rewrite_css_urls; 10 | # pagespeed EnableFilters rewrite_style_attributes; 11 | # pagespeed EnableFilters rewrite_style_attributes_with_url; 12 | # pagespeed EnableFilters flatten_css_imports; 13 | # pagespeed EnableFilters inline_css; 14 | # pagespeed EnableFilters inline_google_font_css; 15 | # pagespeed EnableFilters prioritize_critical_css; 16 | 17 | #pagespeed CssInlineMaxBytes 25600; 18 | #pagespeed JsInlineMaxBytes 8192; 19 | pagespeed ImageRecompressionQuality 75; 20 | pagespeed JpegRecompressionQualityForSmallScreens 65; 21 | 22 | # pagespeed EnableFilters rewrite_javascript; 23 | # pagespeed EnableFilters rewrite_javascript_external; 24 | # pagespeed EnableFilters rewrite_javascript_inline; 25 | # pagespeed EnableFilters combine_javascript; 26 | # pagespeed EnableFilters canonicalize_javascript_libraries; 27 | # pagespeed EnableFilters inline_javascript; 28 | #pagespeed EnableFilters defer_javascript; 29 | pagespeed EnableFilters dedup_inlined_images; 30 | pagespeed EnableFilters lazyload_images; 31 | 32 | #pagespeed EnableFilters local_storage_cache; 33 | pagespeed EnableFilters rewrite_images; 34 | #pagespeed EnableFilters convert_jpeg_to_progressive; 35 | #pagespeed EnableFilters convert_png_to_jpeg; 36 | #pagespeed EnableFilters convert_jpeg_to_webp; 37 | #pagespeed EnableFilters convert_to_webp_lossless; 38 | #pagespeed EnableFilters insert_image_dimensions; 39 | pagespeed EnableFilters inline_images; 40 | pagespeed EnableFilters recompress_images; 41 | #pagespeed EnableFilters recompress_jpeg; 42 | #pagespeed EnableFilters recompress_png; 43 | #pagespeed EnableFilters recompress_webp; 44 | #pagespeed EnableFilters convert_gif_to_png; 45 | pagespeed EnableFilters strip_image_color_profile; 46 | pagespeed EnableFilters strip_image_meta_data; 47 | pagespeed EnableFilters resize_images; 48 | pagespeed EnableFilters resize_rendered_image_dimensions; 49 | pagespeed EnableFilters resize_mobile_images; 50 | 51 | ## CANNOT REMOVE COMMENTS FOR NUXT APP 52 | #pagespeed EnableFilters remove_comments; 53 | 54 | pagespeed EnableFilters collapse_whitespace; 55 | pagespeed EnableFilters elide_attributes; 56 | pagespeed EnableFilters extend_cache; 57 | pagespeed EnableFilters extend_cache_css; 58 | pagespeed EnableFilters extend_cache_images; 59 | 
pagespeed EnableFilters extend_cache_scripts; 60 | pagespeed EnableFilters sprite_images; 61 | pagespeed EnableFilters convert_meta_tags; 62 | pagespeed EnableFilters in_place_optimize_for_browser; 63 | pagespeed EnableFilters insert_dns_prefetch; 64 | pagespeed FileCachePath /var/ngx_pagespeed_cache; 65 | location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { 66 | add_header "" ""; 67 | } 68 | location ~ "^/pagespeed_static/" { } 69 | location ~ "^/ngx_pagespeed_beacon$" { } 70 | pagespeed EnableCachePurge on; 71 | -------------------------------------------------------------------------------- /config/nginx/nginx_conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80 default_server; 3 | listen [::]:80 default_server; 4 | root /var/www/localhost; 5 | index index.html index.htm index.php; 6 | server_name localhost; 7 | client_max_body_size 32M; 8 | large_client_header_buffers 4 16k; 9 | include /etc/nginx/conf/mod_pagespeed.conf; 10 | include /etc/nginx/conf/cache.conf; 11 | include /etc/nginx/conf/gzip.conf; 12 | # PHP and fastcgicache 13 | include /etc/nginx/conf/fastcgicache.conf; 14 | location / { 15 | try_files $uri $uri/ /index.php; 16 | } 17 | error_page 401 403 404 /404.html; 18 | error_page 500 502 503 504 /50x.html; 19 | location = /50x.html { 20 | } 21 | } -------------------------------------------------------------------------------- /config/security/rotate_rules: -------------------------------------------------------------------------------- 1 | /var/log/apt-security-updates { 2 | rotate 2 3 | weekly 4 | size 250k 5 | compress 6 | notifempty 7 | } -------------------------------------------------------------------------------- /config/security/updates_rules: -------------------------------------------------------------------------------- 1 | echo "**************" >> /var/log/apt-security-updates 2 | date >> /var/log/apt-security-updates 3 | aptitude update >> /var/log/apt-security-updates 4 | aptitude safe-upgrade -o Aptitude::Delete-Unused=false --assume-yes --target-release `lsb_release -cs`-security >> /var/log/apt-security-updates 5 | echo "Security updates (if any) installed" -------------------------------------------------------------------------------- /config/site.nginx.conf: -------------------------------------------------------------------------------- 1 | upstream SITE_COM_backend { 2 | server unix:/var/run/php-fpm.SITEDOTCOM.sock1 weight=100 max_fails=5 fail_timeout=5; 3 | server unix:/var/run/php-fpm.SITEDOTCOM.sock2 weight=100 max_fails=5 fail_timeout=5; 4 | } 5 | 6 | server { 7 | listen 80; 8 | server_name www.SITEDOTCOM; 9 | return 301 http://SITEDOTCOM$request_uri; 10 | } 11 | 12 | server { 13 | listen 80; 14 | server_name SITEDOTCOM; 15 | root /var/www/SITEDOTCOM/WEBROOTPATH; 16 | index index.php index.html; 17 | 18 | access_log /var/www/SITEDOTCOM/DockerLocal/logs/access.log; 19 | error_log /var/www/SITEDOTCOM/DockerLocal/logs/error.log warn; 20 | 21 | include /etc/nginx/conf/mod_pagespeed.conf; 22 | 23 | location ~ /themes/.*?\.php$ { 24 | return 404; 25 | } 26 | 27 | location / { 28 | try_files $uri $uri/ /index.php?$args; 29 | } 30 | 31 | set $skip_cache 0; 32 | 33 | # POST requests and urls with a query string should always go to PHP 34 | if ($request_method = POST) { 35 | set $skip_cache 1; 36 | } 37 | if ($query_string != "") { 38 | set $skip_cache 1; 39 | } 40 | 41 | # Dont cache uris containing the following segments 42 | if ($request_uri ~* 
"/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { 43 | set $skip_cache 1; 44 | } 45 | 46 | # Dont use the cache for logged in users or recent commenters 47 | if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { 48 | set $skip_cache 1; 49 | } 50 | 51 | location ~ \.php$ { 52 | include fastcgi.conf; 53 | fastcgi_pass SITE_COM_backend; 54 | 55 | try_files $uri =404; 56 | fastcgi_split_path_info ^(.+\.php)(/.+)$; 57 | 58 | fastcgi_cache_bypass $skip_cache; 59 | fastcgi_no_cache $skip_cache; 60 | 61 | fastcgi_cache microcache; 62 | fastcgi_cache_valid 5m; 63 | 64 | } 65 | 66 | location ~* \.(js|css|png|jpg|jpeg|gif|ico|woff)$ { 67 | expires 30d; 68 | add_header Pragma public; 69 | add_header Cache-Control "public"; 70 | } 71 | } 72 | -------------------------------------------------------------------------------- /config/site.nginx.proxy.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name www.SITEDOTCOM; 4 | return 301 http://SITEDOTCOM$request_uri; 5 | } 6 | 7 | server { 8 | listen 80; 9 | server_name SITEDOTCOM; 10 | root /var/www/SITEDOTCOM/WEBROOTPATH; 11 | index index.php index.html; 12 | 13 | access_log /var/www/SITEDOTCOM/logs/access.log; 14 | error_log /var/www/SITEDOTCOM/logs/error.log warn; 15 | 16 | location / { 17 | proxy_pass http://localhost:8080; 18 | proxy_redirect off; 19 | proxy_set_header Host $host; 20 | proxy_set_header X-Real-IP $remote_addr; 21 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 22 | proxy_set_header X-Forwarded-Host $server_name; 23 | } 24 | } 25 | -------------------------------------------------------------------------------- /config/site.nginx.ssl.conf: -------------------------------------------------------------------------------- 1 | upstream SITE_COM_backend { 2 | server unix:/var/run/php-fpm.SITEDOTCOM.sock1 weight=100 max_fails=5 fail_timeout=5; 3 | server unix:/var/run/php-fpm.SITEDOTCOM.sock2 weight=100 max_fails=5 fail_timeout=5; 4 | } 5 | 6 | server { 7 | listen 80; 8 | server_name www.SITEDOTCOM; 9 | return 301 http://SITEDOTCOM$request_uri; 10 | } 11 | 12 | server { 13 | listen 80; 14 | server_name SITEDOTCOM; 15 | return 301 https://SITEDOTCOM$request_uri; 16 | } 17 | 18 | server { 19 | listen 443 ssl; 20 | server_name www.SITEDOTCOM; 21 | 22 | ssl_certificate /etc/letsencrypt/live/SITEDOTCOM/fullchain.pem; 23 | ssl_certificate_key /etc/letsencrypt/live/SITEDOTCOM/privkey.pem; 24 | 25 | return 301 https://SITEDOTCOM$request_uri; 26 | } 27 | 28 | server { 29 | listen 443 ssl; 30 | server_name SITEDOTCOM; 31 | 32 | ssl_certificate /etc/letsencrypt/live/SITEDOTCOM/fullchain.pem; 33 | ssl_certificate_key /etc/letsencrypt/live/SITEDOTCOM/privkey.pem; 34 | 35 | root /var/www/SITEDOTCOM/WEBROOTPATH; 36 | index index.php index.html; 37 | 38 | access_log /var/www/SITEDOTCOM/DockerLocal/logs/access.log; 39 | error_log /var/www/SITEDOTCOM/DockerLocal/logs/error.log warn; 40 | 41 | include /etc/nginx/conf/mod_pagespeed.conf; 42 | 43 | location ~ /themes/.*?\.php$ { 44 | return 404; 45 | } 46 | 47 | location / { 48 | try_files $uri $uri/ /index.php?$args; 49 | } 50 | 51 | set $skip_cache 0; 52 | 53 | # POST requests and urls with a query string should always go to PHP 54 | if ($request_method = POST) { 55 | set $skip_cache 1; 56 | } 57 | if ($query_string != "") { 58 | set $skip_cache 1; 59 | } 60 | 61 | # Dont cache uris containing the following segments 62 | if ($request_uri ~* 
"/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { 63 | set $skip_cache 1; 64 | } 65 | 66 | # Dont use the cache for logged in users or recent commenters 67 | if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { 68 | set $skip_cache 1; 69 | } 70 | 71 | location ~ \.php$ { 72 | include fastcgi.conf; 73 | fastcgi_pass SITE_COM_backend; 74 | 75 | try_files $uri =404; 76 | fastcgi_split_path_info ^(.+\.php)(/.+)$; 77 | 78 | fastcgi_cache_bypass $skip_cache; 79 | fastcgi_no_cache $skip_cache; 80 | 81 | fastcgi_cache microcache; 82 | fastcgi_cache_valid 5m; 83 | 84 | } 85 | 86 | location ~* \.(js|css|png|jpg|jpeg|gif|ico|woff)$ { 87 | expires 30d; 88 | add_header Pragma public; 89 | add_header Cache-Control "public"; 90 | } 91 | } 92 | -------------------------------------------------------------------------------- /config/site.nginx.vueapp.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name www.SITEDOTCOM; 4 | return 301 http://SITEDOTCOM$request_uri; 5 | } 6 | 7 | server { 8 | listen 80; 9 | server_name SITEDOTCOM www.SITEDOTCOM; 10 | 11 | root /var/www/SITEDOTCOM/WEBROOTPATH; 12 | index index.html; 13 | 14 | access_log /var/www/SITEDOTCOM/logs/access.log; 15 | error_log /var/www/SITEDOTCOM/logs/error.log warn; 16 | 17 | include /etc/nginx/conf/mod_pagespeed.conf; 18 | 19 | location / { 20 | try_files $uri $uri/ /index.html?$args; 21 | } 22 | 23 | set $skip_cache 0; 24 | 25 | # POST requests and urls with a query string should always go to PHP 26 | if ($request_method = POST) { 27 | set $skip_cache 1; 28 | } 29 | if ($query_string != "") { 30 | set $skip_cache 1; 31 | } 32 | 33 | # Dont cache uris containing the following segments 34 | if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { 35 | set $skip_cache 1; 36 | } 37 | 38 | # Dont use the cache for logged in users or recent commenters 39 | if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { 40 | set $skip_cache 1; 41 | } 42 | 43 | location ~* \.(js|css|png|jpg|jpeg|gif|ico|woff)$ { 44 | expires 5d; 45 | add_header Pragma public; 46 | add_header Cache-Control "public"; 47 | add_header 'Access-Control-Allow-Origin' '*'; 48 | } 49 | } 50 | 51 | 52 | -------------------------------------------------------------------------------- /config/site.nginx.vueapp.ssl.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name www.SITEDOTCOM; 4 | return 301 http://SITEDOTCOM$request_uri; 5 | } 6 | 7 | server { 8 | listen 80; 9 | server_name SITEDOTCOM; 10 | return 301 https://SITEDOTCOM$request_uri; 11 | } 12 | 13 | server { 14 | listen 443 ssl; 15 | server_name www.SITEDOTCOM; 16 | 17 | ssl_certificate /etc/letsencrypt/live/SITEDOTCOM/fullchain.pem; 18 | ssl_certificate_key /etc/letsencrypt/live/SITEDOTCOM/privkey.pem; 19 | 20 | return 301 https://SITEDOTCOM$request_uri; 21 | } 22 | 23 | server { 24 | listen 443 ssl; 25 | server_name SITEDOTCOM; 26 | 27 | ssl_certificate /etc/letsencrypt/live/SITEDOTCOM/fullchain.pem; 28 | ssl_certificate_key /etc/letsencrypt/live/SITEDOTCOM/privkey.pem; 29 | 30 | root /var/www/SITEDOTCOM/WEBROOTPATH; 31 | index index.html; 32 | 33 | access_log /var/www/SITEDOTCOM/logs/access.log; 34 | error_log /var/www/SITEDOTCOM/logs/error.log warn; 35 | 36 | include /etc/nginx/conf/mod_pagespeed.conf; 
37 | 38 | location / { 39 | try_files $uri $uri/ /index.html?$args; 40 | } 41 | 42 | set $skip_cache 0; 43 | 44 | # POST requests and urls with a query string should always go to PHP 45 | if ($request_method = POST) { 46 | set $skip_cache 1; 47 | } 48 | if ($query_string != "") { 49 | set $skip_cache 1; 50 | } 51 | 52 | # Dont cache uris containing the following segments 53 | if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { 54 | set $skip_cache 1; 55 | } 56 | 57 | # Dont use the cache for logged in users or recent commenters 58 | if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { 59 | set $skip_cache 1; 60 | } 61 | 62 | location ~* \.(js|css|png|jpg|jpeg|gif|ico|woff)$ { 63 | expires 5d; 64 | add_header Pragma public; 65 | add_header Cache-Control "public"; 66 | add_header 'Access-Control-Allow-Origin' '*'; 67 | } 68 | } 69 | 70 | 71 | -------------------------------------------------------------------------------- /config/site.php-fpm.conf: -------------------------------------------------------------------------------- 1 | [SITE_COM_1] 2 | listen = /var/run/php-fpm.SITEDOTCOM.sock1 3 | listen.allowed_clients = 0.0.0.0 4 | listen.mode = 0666 5 | user = www-data 6 | group = www-data 7 | catch_workers_output = yes 8 | clear_env = no 9 | pm = static 10 | pm.max_children = 10 11 | pm.start_servers = 2 12 | pm.min_spare_servers = 2 13 | pm.max_spare_servers = 3 14 | pm.max_requests = 500 15 | request_terminate_timeout = 300 16 | 17 | env[DB_HOST]="DATABASEHOST" 18 | env[DB_PORT]="DATABASEPORT" 19 | env[DB_USER]="DATABASEUSER" 20 | env[DB_DATABASE]="DATABASENAME" 21 | env[DB_PASSWORD]="DATABASEPASS" 22 | 23 | env[APP_WP_DB_USER]="DATABASEUSER" 24 | env[APP_WP_DB_PASSWORD]="DATABASEPASS" 25 | env[APP_WP_DB_PASS]="DATABASEPASS" 26 | env[APP_WP_DB_NAME]="DATABASENAME" 27 | env[APP_WP_DB_HOST]="DATABASEHOST" 28 | env[APP_WP_DB_PORT]="DATABASEPORT" 29 | 30 | access.log = /var/www/SITEDOTCOM/DockerLocal/logs/php_access_log.log 31 | php_admin_value[error_log] = /var/www/SITEDOTCOM/DockerLocal/logs/php_error_log.log 32 | php_admin_value[session.save_path] = memcached:11211 33 | php_admin_value[session.save_handler] = memcached 34 | php_admin_value[session.use_cookies] = 1 35 | php_admin_value[session.cookie_secure] = 1 36 | php_admin_value[session.use_only_cookies] = 1 37 | php_admin_value[session.cookie_httponly] = 1 38 | php_admin_value[session.entropy_length] = 32 39 | php_admin_value[session.entropy_file] = /dev/urandom 40 | php_admin_value[session.hash_function] = sha256 41 | php_admin_value[session.hash_bits_per_character] = 5 42 | 43 | [SITE_COM_2] 44 | listen = /var/run/php-fpm.SITEDOTCOM.sock2 45 | listen.allowed_clients = 0.0.0.0 46 | listen.mode = 0666 47 | user = www-data 48 | group = www-data 49 | catch_workers_output = yes 50 | clear_env = no 51 | pm = static 52 | pm.max_children = 10 53 | pm.start_servers = 2 54 | pm.min_spare_servers = 2 55 | pm.max_spare_servers = 3 56 | pm.max_requests = 500 57 | request_terminate_timeout = 300 58 | 59 | env[DB_HOST]="DATABASEHOST" 60 | env[DB_PORT]="DATABASEPORT" 61 | env[DB_USER]="DATABASEUSER" 62 | env[DB_DATABASE]="DATABASENAME" 63 | env[DB_PASSWORD]="DATABASEPASS" 64 | 65 | env[APP_WP_DB_USER]="DATABASEUSER" 66 | env[APP_WP_DB_PASSWORD]="DATABASEPASS" 67 | env[APP_WP_DB_PASS]="DATABASEPASS" 68 | env[APP_WP_DB_NAME]="DATABASENAME" 69 | env[APP_WP_DB_HOST]="DATABASEHOST" 70 | 71 | access.log = 
/var/www/SITEDOTCOM/DockerLocal/logs/php_access_log.log 72 | php_admin_value[error_log] = /var/www/SITEDOTCOM/DockerLocal/logs/php_error_log.log 73 | php_admin_value[session.save_path] = memcached:11211 74 | php_admin_value[session.save_handler] = memcached 75 | php_admin_value[session.use_cookies] = 1 76 | php_admin_value[session.cookie_secure] = 1 77 | php_admin_value[session.use_only_cookies] = 1 78 | php_admin_value[session.cookie_httponly] = 1 79 | php_admin_value[session.entropy_length] = 32 80 | php_admin_value[session.entropy_file] = /dev/urandom 81 | php_admin_value[session.hash_function] = sha256 82 | php_admin_value[session.hash_bits_per_character] = 5 83 | -------------------------------------------------------------------------------- /config/versions/mariadb-version: -------------------------------------------------------------------------------- 1 | 10.11 2 | -------------------------------------------------------------------------------- /config/versions/nginx-version: -------------------------------------------------------------------------------- 1 | 1.24.0 2 | -------------------------------------------------------------------------------- /config/versions/node-version: -------------------------------------------------------------------------------- 1 | 18.17.1 2 | -------------------------------------------------------------------------------- /config/versions/openssl-version: -------------------------------------------------------------------------------- 1 | 3.0.10 2 | -------------------------------------------------------------------------------- /config/versions/pagespeed-version: -------------------------------------------------------------------------------- 1 | 1.15.0.0-8917 2 | -------------------------------------------------------------------------------- /config/versions/php-version: -------------------------------------------------------------------------------- 1 | 8.2 2 | -------------------------------------------------------------------------------- /install/components/composer/install: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer -------------------------------------------------------------------------------- /install/components/custom/install: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # run any other files that get put in this same folder. 4 | WHERE=$(pwd) 5 | 6 | chmod +x "$WHERE/scripts" 7 | 8 | WHERE="$WHERE/scripts/*.sh" 9 | 10 | for f in $WHERE; do # or wget-*.sh instead of *.sh 11 | bash "$f" -H || break # if needed 12 | done -------------------------------------------------------------------------------- /install/components/custom/scripts/000-running-custom-scripts.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | printf "Running Custom Scripts:\n\n" -------------------------------------------------------------------------------- /install/components/nvm/install: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash 4 | 5 | . ~/.nvm/nvm.sh 6 | . ~/.profile 7 | . 
~/.bashrc 8 | 9 | command -v nvm -------------------------------------------------------------------------------- /install/components/nvm/shell-envs: -------------------------------------------------------------------------------- 1 | export NVM_DIR="$HOME/.nvm" 2 | [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm 3 | [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion -------------------------------------------------------------------------------- /install/components/pm2/install: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # if using nvm, do not sudo with npm. 4 | npm install -g pm2 -------------------------------------------------------------------------------- /install/components/postgresql/install: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Vars 4 | PG_RELEASE=$(lsb_release -cs) 5 | PG_CONF=$(find \ -name "postgresql.conf") 6 | PG_HBA_CONF=$(find \ -name "pg_hba.conf") 7 | 8 | # Update 9 | sudo apt update 10 | sudo apt install -y wget vim 11 | 12 | # Add Repo 13 | wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - 14 | echo "deb http://apt.postgresql.org/pub/repos/apt/ ${PG_RELEASE}"-pgdg main | sudo tee /etc/apt/sources.list.d/pgdg.list 15 | sudo apt update 16 | sudo apt -y install postgresql-11 17 | 18 | # View port - shows localhost 19 | sudo ss -tunelp | grep 5432 20 | 21 | # Edit PG CONF to allow incoming connections 22 | if [ -f "$PG_CONF" ]; then 23 | sed -i "/listen_addresses = 'localhost'/c\listen_addresses = '*'" $PG_CONF 24 | fi 25 | 26 | # Use pg_hba to further control those connections 27 | if [ -f "$PG_HBA_CONF" ]; then 28 | echo $(> PG_HBA_CONF 29 | fi 30 | 31 | sudo systemctl restart postgresql -------------------------------------------------------------------------------- /install/components/postgresql/pg_hba.conf: -------------------------------------------------------------------------------- 1 | host all all 0.0.0.0/0 md5 2 | host all all ::/0 md5 -------------------------------------------------------------------------------- /install/components/redis/install: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Vars 4 | REDIS_CONF=$(> "$REDIS_CONF_PATH" 26 | else 27 | # append to file 28 | echo "$REDIS_CONF" > "$REDIS_CONF_PATH" 29 | fi 30 | 31 | # Restart 32 | sudo systemctl restart redis-server.service 33 | 34 | # Add php redis depending on PHP_VERSION 35 | sudo apt-get install -y php"$PHP_VERSION"-redis 36 | 37 | # Test it with: redis-cli 38 | # Default host:port - 127.0.0.1:6379 -------------------------------------------------------------------------------- /install/components/redis/redis.conf: -------------------------------------------------------------------------------- 1 | # Redis configuration file example. 
2 | # 3 | # Note that in order to read the configuration file, Redis must be 4 | # started with the file path as first argument: 5 | # 6 | # ./redis-server /path/to/redis.conf 7 | 8 | # Note on units: when memory size is needed, it is possible to specify 9 | # it in the usual form of 1k 5GB 4M and so forth: 10 | # 11 | # 1k => 1000 bytes 12 | # 1kb => 1024 bytes 13 | # 1m => 1000000 bytes 14 | # 1mb => 1024*1024 bytes 15 | # 1g => 1000000000 bytes 16 | # 1gb => 1024*1024*1024 bytes 17 | # 18 | # units are case insensitive so 1GB 1Gb 1gB are all the same. 19 | 20 | ################################## INCLUDES ################################### 21 | 22 | # Include one or more other config files here. This is useful if you 23 | # have a standard template that goes to all Redis servers but also need 24 | # to customize a few per-server settings. Include files can include 25 | # other files, so use this wisely. 26 | # 27 | # Notice option "include" won't be rewritten by command "CONFIG REWRITE" 28 | # from admin or Redis Sentinel. Since Redis always uses the last processed 29 | # line as value of a configuration directive, you'd better put includes 30 | # at the beginning of this file to avoid overwriting config change at runtime. 31 | # 32 | # If instead you are interested in using includes to override configuration 33 | # options, it is better to use include as the last line. 34 | # 35 | # include /path/to/local.conf 36 | # include /path/to/other.conf 37 | 38 | ################################## MODULES ##################################### 39 | 40 | # Load modules at startup. If the server is not able to load modules 41 | # it will abort. It is possible to use multiple loadmodule directives. 42 | # 43 | # loadmodule /path/to/my_module.so 44 | # loadmodule /path/to/other_module.so 45 | 46 | ################################## NETWORK ##################################### 47 | 48 | # By default, if no "bind" configuration directive is specified, Redis listens 49 | # for connections from all the network interfaces available on the server. 50 | # It is possible to listen to just one or multiple selected interfaces using 51 | # the "bind" configuration directive, followed by one or more IP addresses. 52 | # 53 | # Examples: 54 | # 55 | # bind 192.168.1.100 10.0.0.1 56 | # bind 127.0.0.1 ::1 57 | # 58 | # ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the 59 | # internet, binding to all the interfaces is dangerous and will expose the 60 | # instance to everybody on the internet. So by default we uncomment the 61 | # following bind directive, that will force Redis to listen only into 62 | # the IPv4 lookback interface address (this means Redis will be able to 63 | # accept connections only from clients running into the same computer it 64 | # is running). 65 | # 66 | # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES 67 | # JUST COMMENT THE FOLLOWING LINE. 68 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 69 | bind 127.0.0.1 ::1 70 | 71 | # Protected mode is a layer of security protection, in order to avoid that 72 | # Redis instances left open on the internet are accessed and exploited. 73 | # 74 | # When protected mode is on and if: 75 | # 76 | # 1) The server is not binding explicitly to a set of addresses using the 77 | # "bind" directive. 78 | # 2) No password is configured. 
79 | # 80 | # The server only accepts connections from clients connecting from the 81 | # IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain 82 | # sockets. 83 | # 84 | # By default protected mode is enabled. You should disable it only if 85 | # you are sure you want clients from other hosts to connect to Redis 86 | # even if no authentication is configured, nor a specific set of interfaces 87 | # are explicitly listed using the "bind" directive. 88 | protected-mode yes 89 | 90 | # Accept connections on the specified port, default is 6379 (IANA #815344). 91 | # If port 0 is specified Redis will not listen on a TCP socket. 92 | port 6379 93 | 94 | # TCP listen() backlog. 95 | # 96 | # In high requests-per-second environments you need an high backlog in order 97 | # to avoid slow clients connections issues. Note that the Linux kernel 98 | # will silently truncate it to the value of /proc/sys/net/core/somaxconn so 99 | # make sure to raise both the value of somaxconn and tcp_max_syn_backlog 100 | # in order to get the desired effect. 101 | tcp-backlog 511 102 | 103 | # Unix socket. 104 | # 105 | # Specify the path for the Unix socket that will be used to listen for 106 | # incoming connections. There is no default, so Redis will not listen 107 | # on a unix socket when not specified. 108 | # 109 | # unixsocket /var/run/redis/redis-server.sock 110 | # unixsocketperm 700 111 | 112 | # Close the connection after a client is idle for N seconds (0 to disable) 113 | timeout 0 114 | 115 | # TCP keepalive. 116 | # 117 | # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence 118 | # of communication. This is useful for two reasons: 119 | # 120 | # 1) Detect dead peers. 121 | # 2) Take the connection alive from the point of view of network 122 | # equipment in the middle. 123 | # 124 | # On Linux, the specified value (in seconds) is the period used to send ACKs. 125 | # Note that to close the connection the double of the time is needed. 126 | # On other kernels the period depends on the kernel configuration. 127 | # 128 | # A reasonable value for this option is 300 seconds, which is the new 129 | # Redis default starting with Redis 3.2.1. 130 | tcp-keepalive 300 131 | 132 | ################################# GENERAL ##################################### 133 | 134 | # By default Redis does not run as a daemon. Use 'yes' if you need it. 135 | # Note that Redis will write a pid file in /var/run/redis.pid when daemonized. 136 | daemonize yes 137 | 138 | # If you run Redis from upstart or systemd, Redis can interact with your 139 | # supervision tree. Options: 140 | # supervised no - no supervision interaction 141 | # supervised upstart - signal upstart by putting Redis into SIGSTOP mode 142 | # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET 143 | # supervised auto - detect upstart or systemd method based on 144 | # UPSTART_JOB or NOTIFY_SOCKET environment variables 145 | # Note: these supervision methods only signal "process is ready." 146 | # They do not enable continuous liveness pings back to your supervisor. 147 | supervised no 148 | 149 | # If a pid file is specified, Redis writes it where specified at startup 150 | # and removes it at exit. 151 | # 152 | # When the server runs non daemonized, no pid file is created if none is 153 | # specified in the configuration. When the server is daemonized, the pid file 154 | # is used even if not specified, defaulting to "/var/run/redis.pid". 
155 | # 156 | # Creating a pid file is best effort: if Redis is not able to create it 157 | # nothing bad happens, the server will start and run normally. 158 | pidfile /var/run/redis/redis-server.pid 159 | 160 | # Specify the server verbosity level. 161 | # This can be one of: 162 | # debug (a lot of information, useful for development/testing) 163 | # verbose (many rarely useful info, but not a mess like the debug level) 164 | # notice (moderately verbose, what you want in production probably) 165 | # warning (only very important / critical messages are logged) 166 | loglevel notice 167 | 168 | # Specify the log file name. Also the empty string can be used to force 169 | # Redis to log on the standard output. Note that if you use standard 170 | # output for logging but daemonize, logs will be sent to /dev/null 171 | logfile /var/log/redis/redis-server.log 172 | 173 | # To enable logging to the system logger, just set 'syslog-enabled' to yes, 174 | # and optionally update the other syslog parameters to suit your needs. 175 | # syslog-enabled no 176 | 177 | # Specify the syslog identity. 178 | # syslog-ident redis 179 | 180 | # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. 181 | # syslog-facility local0 182 | 183 | # Set the number of databases. The default database is DB 0, you can select 184 | # a different one on a per-connection basis using SELECT where 185 | # dbid is a number between 0 and 'databases'-1 186 | databases 16 187 | 188 | # By default Redis shows an ASCII art logo only when started to log to the 189 | # standard output and if the standard output is a TTY. Basically this means 190 | # that normally a logo is displayed only in interactive sessions. 191 | # 192 | # However it is possible to force the pre-4.0 behavior and always show a 193 | # ASCII art logo in startup logs by setting the following option to yes. 194 | always-show-logo yes 195 | 196 | ################################ SNAPSHOTTING ################################ 197 | # 198 | # Save the DB on disk: 199 | # 200 | # save 201 | # 202 | # Will save the DB if both the given number of seconds and the given 203 | # number of write operations against the DB occurred. 204 | # 205 | # In the example below the behaviour will be to save: 206 | # after 900 sec (15 min) if at least 1 key changed 207 | # after 300 sec (5 min) if at least 10 keys changed 208 | # after 60 sec if at least 10000 keys changed 209 | # 210 | # Note: you can disable saving completely by commenting out all "save" lines. 211 | # 212 | # It is also possible to remove all the previously configured save 213 | # points by adding a save directive with a single empty string argument 214 | # like in the following example: 215 | # 216 | # save "" 217 | 218 | save 900 1 219 | save 300 10 220 | save 60 10000 221 | 222 | # By default Redis will stop accepting writes if RDB snapshots are enabled 223 | # (at least one save point) and the latest background save failed. 224 | # This will make the user aware (in a hard way) that data is not persisting 225 | # on disk properly, otherwise chances are that no one will notice and some 226 | # disaster will happen. 227 | # 228 | # If the background saving process will start working again Redis will 229 | # automatically allow writes again. 
230 | # 231 | # However if you have setup your proper monitoring of the Redis server 232 | # and persistence, you may want to disable this feature so that Redis will 233 | # continue to work as usual even if there are problems with disk, 234 | # permissions, and so forth. 235 | stop-writes-on-bgsave-error yes 236 | 237 | # Compress string objects using LZF when dump .rdb databases? 238 | # For default that's set to 'yes' as it's almost always a win. 239 | # If you want to save some CPU in the saving child set it to 'no' but 240 | # the dataset will likely be bigger if you have compressible values or keys. 241 | rdbcompression yes 242 | 243 | # Since version 5 of RDB a CRC64 checksum is placed at the end of the file. 244 | # This makes the format more resistant to corruption but there is a performance 245 | # hit to pay (around 10%) when saving and loading RDB files, so you can disable it 246 | # for maximum performances. 247 | # 248 | # RDB files created with checksum disabled have a checksum of zero that will 249 | # tell the loading code to skip the check. 250 | rdbchecksum yes 251 | 252 | # The filename where to dump the DB 253 | dbfilename dump.rdb 254 | 255 | # The working directory. 256 | # 257 | # The DB will be written inside this directory, with the filename specified 258 | # above using the 'dbfilename' configuration directive. 259 | # 260 | # The Append Only File will also be created inside this directory. 261 | # 262 | # Note that you must specify a directory here, not a file name. 263 | dir /var/lib/redis 264 | 265 | ################################# REPLICATION ################################# 266 | 267 | # Master-Slave replication. Use slaveof to make a Redis instance a copy of 268 | # another Redis server. A few things to understand ASAP about Redis replication. 269 | # 270 | # 1) Redis replication is asynchronous, but you can configure a master to 271 | # stop accepting writes if it appears to be not connected with at least 272 | # a given number of slaves. 273 | # 2) Redis slaves are able to perform a partial resynchronization with the 274 | # master if the replication link is lost for a relatively small amount of 275 | # time. You may want to configure the replication backlog size (see the next 276 | # sections of this file) with a sensible value depending on your needs. 277 | # 3) Replication is automatic and does not need user intervention. After a 278 | # network partition slaves automatically try to reconnect to masters 279 | # and resynchronize with them. 280 | # 281 | # slaveof 282 | 283 | # If the master is password protected (using the "requirepass" configuration 284 | # directive below) it is possible to tell the slave to authenticate before 285 | # starting the replication synchronization process, otherwise the master will 286 | # refuse the slave request. 287 | # 288 | # masterauth 289 | 290 | # When a slave loses its connection with the master, or when the replication 291 | # is still in progress, the slave can act in two different ways: 292 | # 293 | # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will 294 | # still reply to client requests, possibly with out of date data, or the 295 | # data set may just be empty if this is the first synchronization. 296 | # 297 | # 2) if slave-serve-stale-data is set to 'no' the slave will reply with 298 | # an error "SYNC with master in progress" to all the kind of commands 299 | # but to INFO and SLAVEOF. 
300 | # 301 | slave-serve-stale-data yes 302 | 303 | # You can configure a slave instance to accept writes or not. Writing against 304 | # a slave instance may be useful to store some ephemeral data (because data 305 | # written on a slave will be easily deleted after resync with the master) but 306 | # may also cause problems if clients are writing to it because of a 307 | # misconfiguration. 308 | # 309 | # Since Redis 2.6 by default slaves are read-only. 310 | # 311 | # Note: read only slaves are not designed to be exposed to untrusted clients 312 | # on the internet. It's just a protection layer against misuse of the instance. 313 | # Still a read only slave exports by default all the administrative commands 314 | # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve 315 | # security of read only slaves using 'rename-command' to shadow all the 316 | # administrative / dangerous commands. 317 | slave-read-only yes 318 | 319 | # Replication SYNC strategy: disk or socket. 320 | # 321 | # ------------------------------------------------------- 322 | # WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY 323 | # ------------------------------------------------------- 324 | # 325 | # New slaves and reconnecting slaves that are not able to continue the replication 326 | # process just receiving differences, need to do what is called a "full 327 | # synchronization". An RDB file is transmitted from the master to the slaves. 328 | # The transmission can happen in two different ways: 329 | # 330 | # 1) Disk-backed: The Redis master creates a new process that writes the RDB 331 | # file on disk. Later the file is transferred by the parent 332 | # process to the slaves incrementally. 333 | # 2) Diskless: The Redis master creates a new process that directly writes the 334 | # RDB file to slave sockets, without touching the disk at all. 335 | # 336 | # With disk-backed replication, while the RDB file is generated, more slaves 337 | # can be queued and served with the RDB file as soon as the current child producing 338 | # the RDB file finishes its work. With diskless replication instead once 339 | # the transfer starts, new slaves arriving will be queued and a new transfer 340 | # will start when the current one terminates. 341 | # 342 | # When diskless replication is used, the master waits a configurable amount of 343 | # time (in seconds) before starting the transfer in the hope that multiple slaves 344 | # will arrive and the transfer can be parallelized. 345 | # 346 | # With slow disks and fast (large bandwidth) networks, diskless replication 347 | # works better. 348 | repl-diskless-sync no 349 | 350 | # When diskless replication is enabled, it is possible to configure the delay 351 | # the server waits in order to spawn the child that transfers the RDB via socket 352 | # to the slaves. 353 | # 354 | # This is important since once the transfer starts, it is not possible to serve 355 | # new slaves arriving, that will be queued for the next RDB transfer, so the server 356 | # waits a delay in order to let more slaves arrive. 357 | # 358 | # The delay is specified in seconds, and by default is 5 seconds. To disable 359 | # it entirely just set it to 0 seconds and the transfer will start ASAP. 360 | repl-diskless-sync-delay 5 361 | 362 | # Slaves send PINGs to server in a predefined interval. It's possible to change 363 | # this interval with the repl_ping_slave_period option. The default value is 10 364 | # seconds. 
365 | # 366 | # repl-ping-slave-period 10 367 | 368 | # The following option sets the replication timeout for: 369 | # 370 | # 1) Bulk transfer I/O during SYNC, from the point of view of slave. 371 | # 2) Master timeout from the point of view of slaves (data, pings). 372 | # 3) Slave timeout from the point of view of masters (REPLCONF ACK pings). 373 | # 374 | # It is important to make sure that this value is greater than the value 375 | # specified for repl-ping-slave-period otherwise a timeout will be detected 376 | # every time there is low traffic between the master and the slave. 377 | # 378 | # repl-timeout 60 379 | 380 | # Disable TCP_NODELAY on the slave socket after SYNC? 381 | # 382 | # If you select "yes" Redis will use a smaller number of TCP packets and 383 | # less bandwidth to send data to slaves. But this can add a delay for 384 | # the data to appear on the slave side, up to 40 milliseconds with 385 | # Linux kernels using a default configuration. 386 | # 387 | # If you select "no" the delay for data to appear on the slave side will 388 | # be reduced but more bandwidth will be used for replication. 389 | # 390 | # By default we optimize for low latency, but in very high traffic conditions 391 | # or when the master and slaves are many hops away, turning this to "yes" may 392 | # be a good idea. 393 | repl-disable-tcp-nodelay no 394 | 395 | # Set the replication backlog size. The backlog is a buffer that accumulates 396 | # slave data when slaves are disconnected for some time, so that when a slave 397 | # wants to reconnect again, often a full resync is not needed, but a partial 398 | # resync is enough, just passing the portion of data the slave missed while 399 | # disconnected. 400 | # 401 | # The bigger the replication backlog, the longer the time the slave can be 402 | # disconnected and later be able to perform a partial resynchronization. 403 | # 404 | # The backlog is only allocated once there is at least a slave connected. 405 | # 406 | # repl-backlog-size 1mb 407 | 408 | # After a master has no longer connected slaves for some time, the backlog 409 | # will be freed. The following option configures the amount of seconds that 410 | # need to elapse, starting from the time the last slave disconnected, for 411 | # the backlog buffer to be freed. 412 | # 413 | # Note that slaves never free the backlog for timeout, since they may be 414 | # promoted to masters later, and should be able to correctly "partially 415 | # resynchronize" with the slaves: hence they should always accumulate backlog. 416 | # 417 | # A value of 0 means to never release the backlog. 418 | # 419 | # repl-backlog-ttl 3600 420 | 421 | # The slave priority is an integer number published by Redis in the INFO output. 422 | # It is used by Redis Sentinel in order to select a slave to promote into a 423 | # master if the master is no longer working correctly. 424 | # 425 | # A slave with a low priority number is considered better for promotion, so 426 | # for instance if there are three slaves with priority 10, 100, 25 Sentinel will 427 | # pick the one with priority 10, that is the lowest. 428 | # 429 | # However a special priority of 0 marks the slave as not able to perform the 430 | # role of master, so a slave with priority of 0 will never be selected by 431 | # Redis Sentinel for promotion. 432 | # 433 | # By default the priority is 100. 
434 | slave-priority 100 435 | 436 | # It is possible for a master to stop accepting writes if there are less than 437 | # N slaves connected, having a lag less or equal than M seconds. 438 | # 439 | # The N slaves need to be in "online" state. 440 | # 441 | # The lag in seconds, that must be <= the specified value, is calculated from 442 | # the last ping received from the slave, that is usually sent every second. 443 | # 444 | # This option does not GUARANTEE that N replicas will accept the write, but 445 | # will limit the window of exposure for lost writes in case not enough slaves 446 | # are available, to the specified number of seconds. 447 | # 448 | # For example to require at least 3 slaves with a lag <= 10 seconds use: 449 | # 450 | # min-slaves-to-write 3 451 | # min-slaves-max-lag 10 452 | # 453 | # Setting one or the other to 0 disables the feature. 454 | # 455 | # By default min-slaves-to-write is set to 0 (feature disabled) and 456 | # min-slaves-max-lag is set to 10. 457 | 458 | # A Redis master is able to list the address and port of the attached 459 | # slaves in different ways. For example the "INFO replication" section 460 | # offers this information, which is used, among other tools, by 461 | # Redis Sentinel in order to discover slave instances. 462 | # Another place where this info is available is in the output of the 463 | # "ROLE" command of a master. 464 | # 465 | # The listed IP and address normally reported by a slave is obtained 466 | # in the following way: 467 | # 468 | # IP: The address is auto detected by checking the peer address 469 | # of the socket used by the slave to connect with the master. 470 | # 471 | # Port: The port is communicated by the slave during the replication 472 | # handshake, and is normally the port that the slave is using to 473 | # list for connections. 474 | # 475 | # However when port forwarding or Network Address Translation (NAT) is 476 | # used, the slave may be actually reachable via different IP and port 477 | # pairs. The following two options can be used by a slave in order to 478 | # report to its master a specific set of IP and port, so that both INFO 479 | # and ROLE will report those values. 480 | # 481 | # There is no need to use both the options if you need to override just 482 | # the port or the IP address. 483 | # 484 | # slave-announce-ip 5.5.5.5 485 | # slave-announce-port 1234 486 | 487 | ################################## SECURITY ################################### 488 | 489 | # Require clients to issue AUTH before processing any other 490 | # commands. This might be useful in environments in which you do not trust 491 | # others with access to the host running redis-server. 492 | # 493 | # This should stay commented out for backward compatibility and because most 494 | # people do not need auth (e.g. they run their own servers). 495 | # 496 | # Warning: since Redis is pretty fast an outside user can try up to 497 | # 150k passwords per second against a good box. This means that you should 498 | # use a very strong password otherwise it will be very easy to break. 499 | # 500 | # requirepass foobared 501 | 502 | # Command renaming. 503 | # 504 | # It is possible to change the name of dangerous commands in a shared 505 | # environment. For instance the CONFIG command may be renamed into something 506 | # hard to guess so that it will still be available for internal-use tools 507 | # but not available for general clients. 
508 | #
509 | # Example:
510 | #
511 | # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
512 | #
513 | # It is also possible to completely kill a command by renaming it into
514 | # an empty string:
515 | #
516 | # rename-command CONFIG ""
517 | #
518 | # Please note that changing the name of commands that are logged into the
519 | # AOF file or transmitted to slaves may cause problems.
520 | 
521 | ################################### CLIENTS ####################################
522 | 
523 | # Set the max number of connected clients at the same time. By default
524 | # this limit is set to 10000 clients, however if the Redis server is not
525 | # able to configure the process file limit to allow for the specified limit
526 | # the max number of allowed clients is set to the current file limit
527 | # minus 32 (as Redis reserves a few file descriptors for internal uses).
528 | #
529 | # Once the limit is reached Redis will close all the new connections sending
530 | # an error 'max number of clients reached'.
531 | #
532 | # maxclients 10000
533 | 
534 | ############################## MEMORY MANAGEMENT ################################
535 | 
536 | # Set a memory usage limit to the specified amount of bytes.
537 | # When the memory limit is reached Redis will try to remove keys
538 | # according to the eviction policy selected (see maxmemory-policy).
539 | #
540 | # If Redis can't remove keys according to the policy, or if the policy is
541 | # set to 'noeviction', Redis will start to reply with errors to commands
542 | # that would use more memory, like SET, LPUSH, and so on, and will continue
543 | # to reply to read-only commands like GET.
544 | #
545 | # This option is usually useful when using Redis as an LRU or LFU cache, or to
546 | # set a hard memory limit for an instance (using the 'noeviction' policy).
547 | #
548 | # WARNING: If you have slaves attached to an instance with maxmemory on,
549 | # the size of the output buffers needed to feed the slaves are subtracted
550 | # from the used memory count, so that network problems / resyncs will
551 | # not trigger a loop where keys are evicted, and in turn the output
552 | # buffer of slaves is full with DELs of keys evicted triggering the deletion
553 | # of more keys, and so forth until the database is completely emptied.
554 | #
555 | # In short... if you have slaves attached it is suggested that you set a lower
556 | # limit for maxmemory so that there is some free RAM on the system for slave
557 | # output buffers (but this is not needed if the policy is 'noeviction').
558 | #
559 | # maxmemory <bytes>
560 | 
561 | # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
562 | # is reached. You can select among the following behaviors:
563 | #
564 | # volatile-lru -> Evict using approximated LRU among the keys with an expire set.
565 | # allkeys-lru -> Evict any key using approximated LRU.
566 | # volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
567 | # allkeys-lfu -> Evict any key using approximated LFU.
568 | # volatile-random -> Remove a random key among the ones with an expire set.
569 | # allkeys-random -> Remove a random key, any key.
570 | # volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
571 | # noeviction -> Don't evict anything, just return an error on write operations.
572 | # 573 | # LRU means Least Recently Used 574 | # LFU means Least Frequently Used 575 | # 576 | # Both LRU, LFU and volatile-ttl are implemented using approximated 577 | # randomized algorithms. 578 | # 579 | # Note: with any of the above policies, Redis will return an error on write 580 | # operations, when there are no suitable keys for eviction. 581 | # 582 | # At the date of writing these commands are: set setnx setex append 583 | # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd 584 | # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby 585 | # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby 586 | # getset mset msetnx exec sort 587 | # 588 | # The default is: 589 | # 590 | # maxmemory-policy noeviction 591 | 592 | # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated 593 | # algorithms (in order to save memory), so you can tune it for speed or 594 | # accuracy. For default Redis will check five keys and pick the one that was 595 | # used less recently, you can change the sample size using the following 596 | # configuration directive. 597 | # 598 | # The default of 5 produces good enough results. 10 Approximates very closely 599 | # true LRU but costs more CPU. 3 is faster but not very accurate. 600 | # 601 | # maxmemory-samples 5 602 | 603 | ############################# LAZY FREEING #################################### 604 | 605 | # Redis has two primitives to delete keys. One is called DEL and is a blocking 606 | # deletion of the object. It means that the server stops processing new commands 607 | # in order to reclaim all the memory associated with an object in a synchronous 608 | # way. If the key deleted is associated with a small object, the time needed 609 | # in order to execute the DEL command is very small and comparable to most other 610 | # O(1) or O(log_N) commands in Redis. However if the key is associated with an 611 | # aggregated value containing millions of elements, the server can block for 612 | # a long time (even seconds) in order to complete the operation. 613 | # 614 | # For the above reasons Redis also offers non blocking deletion primitives 615 | # such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and 616 | # FLUSHDB commands, in order to reclaim memory in background. Those commands 617 | # are executed in constant time. Another thread will incrementally free the 618 | # object in the background as fast as possible. 619 | # 620 | # DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled. 621 | # It's up to the design of the application to understand when it is a good 622 | # idea to use one or the other. However the Redis server sometimes has to 623 | # delete keys or flush the whole database as a side effect of other operations. 624 | # Specifically Redis deletes objects independently of a user call in the 625 | # following scenarios: 626 | # 627 | # 1) On eviction, because of the maxmemory and maxmemory policy configurations, 628 | # in order to make room for new data, without going over the specified 629 | # memory limit. 630 | # 2) Because of expire: when a key with an associated time to live (see the 631 | # EXPIRE command) must be deleted from memory. 632 | # 3) Because of a side effect of a command that stores data on a key that may 633 | # already exist. For example the RENAME command may delete the old key 634 | # content when it is replaced with another one. 
Similarly SUNIONSTORE 635 | # or SORT with STORE option may delete existing keys. The SET command 636 | # itself removes any old content of the specified key in order to replace 637 | # it with the specified string. 638 | # 4) During replication, when a slave performs a full resynchronization with 639 | # its master, the content of the whole database is removed in order to 640 | # load the RDB file just transfered. 641 | # 642 | # In all the above cases the default is to delete objects in a blocking way, 643 | # like if DEL was called. However you can configure each case specifically 644 | # in order to instead release memory in a non-blocking way like if UNLINK 645 | # was called, using the following configuration directives: 646 | 647 | lazyfree-lazy-eviction no 648 | lazyfree-lazy-expire no 649 | lazyfree-lazy-server-del no 650 | slave-lazy-flush no 651 | 652 | ############################## APPEND ONLY MODE ############################### 653 | 654 | # By default Redis asynchronously dumps the dataset on disk. This mode is 655 | # good enough in many applications, but an issue with the Redis process or 656 | # a power outage may result into a few minutes of writes lost (depending on 657 | # the configured save points). 658 | # 659 | # The Append Only File is an alternative persistence mode that provides 660 | # much better durability. For instance using the default data fsync policy 661 | # (see later in the config file) Redis can lose just one second of writes in a 662 | # dramatic event like a server power outage, or a single write if something 663 | # wrong with the Redis process itself happens, but the operating system is 664 | # still running correctly. 665 | # 666 | # AOF and RDB persistence can be enabled at the same time without problems. 667 | # If the AOF is enabled on startup Redis will load the AOF, that is the file 668 | # with the better durability guarantees. 669 | # 670 | # Please check http://redis.io/topics/persistence for more information. 671 | 672 | appendonly no 673 | 674 | # The name of the append only file (default: "appendonly.aof") 675 | 676 | appendfilename "appendonly.aof" 677 | 678 | # The fsync() call tells the Operating System to actually write data on disk 679 | # instead of waiting for more data in the output buffer. Some OS will really flush 680 | # data on disk, some other OS will just try to do it ASAP. 681 | # 682 | # Redis supports three different modes: 683 | # 684 | # no: don't fsync, just let the OS flush the data when it wants. Faster. 685 | # always: fsync after every write to the append only log. Slow, Safest. 686 | # everysec: fsync only one time every second. Compromise. 687 | # 688 | # The default is "everysec", as that's usually the right compromise between 689 | # speed and data safety. It's up to you to understand if you can relax this to 690 | # "no" that will let the operating system flush the output buffer when 691 | # it wants, for better performances (but if you can live with the idea of 692 | # some data loss consider the default persistence mode that's snapshotting), 693 | # or on the contrary, use "always" that's very slow but a bit safer than 694 | # everysec. 695 | # 696 | # More details please check the following article: 697 | # http://antirez.com/post/redis-persistence-demystified.html 698 | # 699 | # If unsure, use "everysec". 
700 | 701 | # appendfsync always 702 | appendfsync everysec 703 | # appendfsync no 704 | 705 | # When the AOF fsync policy is set to always or everysec, and a background 706 | # saving process (a background save or AOF log background rewriting) is 707 | # performing a lot of I/O against the disk, in some Linux configurations 708 | # Redis may block too long on the fsync() call. Note that there is no fix for 709 | # this currently, as even performing fsync in a different thread will block 710 | # our synchronous write(2) call. 711 | # 712 | # In order to mitigate this problem it's possible to use the following option 713 | # that will prevent fsync() from being called in the main process while a 714 | # BGSAVE or BGREWRITEAOF is in progress. 715 | # 716 | # This means that while another child is saving, the durability of Redis is 717 | # the same as "appendfsync none". In practical terms, this means that it is 718 | # possible to lose up to 30 seconds of log in the worst scenario (with the 719 | # default Linux settings). 720 | # 721 | # If you have latency problems turn this to "yes". Otherwise leave it as 722 | # "no" that is the safest pick from the point of view of durability. 723 | 724 | no-appendfsync-on-rewrite no 725 | 726 | # Automatic rewrite of the append only file. 727 | # Redis is able to automatically rewrite the log file implicitly calling 728 | # BGREWRITEAOF when the AOF log size grows by the specified percentage. 729 | # 730 | # This is how it works: Redis remembers the size of the AOF file after the 731 | # latest rewrite (if no rewrite has happened since the restart, the size of 732 | # the AOF at startup is used). 733 | # 734 | # This base size is compared to the current size. If the current size is 735 | # bigger than the specified percentage, the rewrite is triggered. Also 736 | # you need to specify a minimal size for the AOF file to be rewritten, this 737 | # is useful to avoid rewriting the AOF file even if the percentage increase 738 | # is reached but it is still pretty small. 739 | # 740 | # Specify a percentage of zero in order to disable the automatic AOF 741 | # rewrite feature. 742 | 743 | auto-aof-rewrite-percentage 100 744 | auto-aof-rewrite-min-size 64mb 745 | 746 | # An AOF file may be found to be truncated at the end during the Redis 747 | # startup process, when the AOF data gets loaded back into memory. 748 | # This may happen when the system where Redis is running 749 | # crashes, especially when an ext4 filesystem is mounted without the 750 | # data=ordered option (however this can't happen when Redis itself 751 | # crashes or aborts but the operating system still works correctly). 752 | # 753 | # Redis can either exit with an error when this happens, or load as much 754 | # data as possible (the default now) and start if the AOF file is found 755 | # to be truncated at the end. The following option controls this behavior. 756 | # 757 | # If aof-load-truncated is set to yes, a truncated AOF file is loaded and 758 | # the Redis server starts emitting a log to inform the user of the event. 759 | # Otherwise if the option is set to no, the server aborts with an error 760 | # and refuses to start. When the option is set to no, the user requires 761 | # to fix the AOF file using the "redis-check-aof" utility before to restart 762 | # the server. 763 | # 764 | # Note that if the AOF file will be found to be corrupted in the middle 765 | # the server will still exit with an error. 
This option only applies when 766 | # Redis will try to read more data from the AOF file but not enough bytes 767 | # will be found. 768 | aof-load-truncated yes 769 | 770 | # When rewriting the AOF file, Redis is able to use an RDB preamble in the 771 | # AOF file for faster rewrites and recoveries. When this option is turned 772 | # on the rewritten AOF file is composed of two different stanzas: 773 | # 774 | # [RDB file][AOF tail] 775 | # 776 | # When loading Redis recognizes that the AOF file starts with the "REDIS" 777 | # string and loads the prefixed RDB file, and continues loading the AOF 778 | # tail. 779 | # 780 | # This is currently turned off by default in order to avoid the surprise 781 | # of a format change, but will at some point be used as the default. 782 | aof-use-rdb-preamble no 783 | 784 | ################################ LUA SCRIPTING ############################### 785 | 786 | # Max execution time of a Lua script in milliseconds. 787 | # 788 | # If the maximum execution time is reached Redis will log that a script is 789 | # still in execution after the maximum allowed time and will start to 790 | # reply to queries with an error. 791 | # 792 | # When a long running script exceeds the maximum execution time only the 793 | # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be 794 | # used to stop a script that did not yet called write commands. The second 795 | # is the only way to shut down the server in the case a write command was 796 | # already issued by the script but the user doesn't want to wait for the natural 797 | # termination of the script. 798 | # 799 | # Set it to 0 or a negative value for unlimited execution without warnings. 800 | lua-time-limit 5000 801 | 802 | ################################ REDIS CLUSTER ############################### 803 | # 804 | # ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 805 | # WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however 806 | # in order to mark it as "mature" we need to wait for a non trivial percentage 807 | # of users to deploy it in production. 808 | # ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 809 | # 810 | # Normal Redis instances can't be part of a Redis Cluster; only nodes that are 811 | # started as cluster nodes can. In order to start a Redis instance as a 812 | # cluster node enable the cluster support uncommenting the following: 813 | # 814 | # cluster-enabled yes 815 | 816 | # Every cluster node has a cluster configuration file. This file is not 817 | # intended to be edited by hand. It is created and updated by Redis nodes. 818 | # Every Redis Cluster node requires a different cluster configuration file. 819 | # Make sure that instances running in the same system do not have 820 | # overlapping cluster configuration file names. 821 | # 822 | # cluster-config-file nodes-6379.conf 823 | 824 | # Cluster node timeout is the amount of milliseconds a node must be unreachable 825 | # for it to be considered in failure state. 826 | # Most other internal time limits are multiple of the node timeout. 827 | # 828 | # cluster-node-timeout 15000 829 | 830 | # A slave of a failing master will avoid to start a failover if its data 831 | # looks too old. 
832 | # 833 | # There is no simple way for a slave to actually have an exact measure of 834 | # its "data age", so the following two checks are performed: 835 | # 836 | # 1) If there are multiple slaves able to failover, they exchange messages 837 | # in order to try to give an advantage to the slave with the best 838 | # replication offset (more data from the master processed). 839 | # Slaves will try to get their rank by offset, and apply to the start 840 | # of the failover a delay proportional to their rank. 841 | # 842 | # 2) Every single slave computes the time of the last interaction with 843 | # its master. This can be the last ping or command received (if the master 844 | # is still in the "connected" state), or the time that elapsed since the 845 | # disconnection with the master (if the replication link is currently down). 846 | # If the last interaction is too old, the slave will not try to failover 847 | # at all. 848 | # 849 | # The point "2" can be tuned by user. Specifically a slave will not perform 850 | # the failover if, since the last interaction with the master, the time 851 | # elapsed is greater than: 852 | # 853 | # (node-timeout * slave-validity-factor) + repl-ping-slave-period 854 | # 855 | # So for example if node-timeout is 30 seconds, and the slave-validity-factor 856 | # is 10, and assuming a default repl-ping-slave-period of 10 seconds, the 857 | # slave will not try to failover if it was not able to talk with the master 858 | # for longer than 310 seconds. 859 | # 860 | # A large slave-validity-factor may allow slaves with too old data to failover 861 | # a master, while a too small value may prevent the cluster from being able to 862 | # elect a slave at all. 863 | # 864 | # For maximum availability, it is possible to set the slave-validity-factor 865 | # to a value of 0, which means, that slaves will always try to failover the 866 | # master regardless of the last time they interacted with the master. 867 | # (However they'll always try to apply a delay proportional to their 868 | # offset rank). 869 | # 870 | # Zero is the only value able to guarantee that when all the partitions heal 871 | # the cluster will always be able to continue. 872 | # 873 | # cluster-slave-validity-factor 10 874 | 875 | # Cluster slaves are able to migrate to orphaned masters, that are masters 876 | # that are left without working slaves. This improves the cluster ability 877 | # to resist to failures as otherwise an orphaned master can't be failed over 878 | # in case of failure if it has no working slaves. 879 | # 880 | # Slaves migrate to orphaned masters only if there are still at least a 881 | # given number of other working slaves for their old master. This number 882 | # is the "migration barrier". A migration barrier of 1 means that a slave 883 | # will migrate only if there is at least 1 other working slave for its master 884 | # and so forth. It usually reflects the number of slaves you want for every 885 | # master in your cluster. 886 | # 887 | # Default is 1 (slaves migrate only if their masters remain with at least 888 | # one slave). To disable migration just set it to a very large value. 889 | # A value of 0 can be set but is useful only for debugging and dangerous 890 | # in production. 891 | # 892 | # cluster-migration-barrier 1 893 | 894 | # By default Redis Cluster nodes stop accepting queries if they detect there 895 | # is at least an hash slot uncovered (no available node is serving it). 
896 | # This way if the cluster is partially down (for example a range of hash slots 897 | # are no longer covered) all the cluster becomes, eventually, unavailable. 898 | # It automatically returns available as soon as all the slots are covered again. 899 | # 900 | # However sometimes you want the subset of the cluster which is working, 901 | # to continue to accept queries for the part of the key space that is still 902 | # covered. In order to do so, just set the cluster-require-full-coverage 903 | # option to no. 904 | # 905 | # cluster-require-full-coverage yes 906 | 907 | # This option, when set to yes, prevents slaves from trying to failover its 908 | # master during master failures. However the master can still perform a 909 | # manual failover, if forced to do so. 910 | # 911 | # This is useful in different scenarios, especially in the case of multiple 912 | # data center operations, where we want one side to never be promoted if not 913 | # in the case of a total DC failure. 914 | # 915 | # cluster-slave-no-failover no 916 | 917 | # In order to setup your cluster make sure to read the documentation 918 | # available at http://redis.io web site. 919 | 920 | ########################## CLUSTER DOCKER/NAT support ######################## 921 | 922 | # In certain deployments, Redis Cluster nodes address discovery fails, because 923 | # addresses are NAT-ted or because ports are forwarded (the typical case is 924 | # Docker and other containers). 925 | # 926 | # In order to make Redis Cluster working in such environments, a static 927 | # configuration where each node knows its public address is needed. The 928 | # following two options are used for this scope, and are: 929 | # 930 | # * cluster-announce-ip 931 | # * cluster-announce-port 932 | # * cluster-announce-bus-port 933 | # 934 | # Each instruct the node about its address, client port, and cluster message 935 | # bus port. The information is then published in the header of the bus packets 936 | # so that other nodes will be able to correctly map the address of the node 937 | # publishing the information. 938 | # 939 | # If the above options are not used, the normal Redis Cluster auto-detection 940 | # will be used instead. 941 | # 942 | # Note that when remapped, the bus port may not be at the fixed offset of 943 | # clients port + 10000, so you can specify any port and bus-port depending 944 | # on how they get remapped. If the bus-port is not set, a fixed offset of 945 | # 10000 will be used as usually. 946 | # 947 | # Example: 948 | # 949 | # cluster-announce-ip 10.1.1.5 950 | # cluster-announce-port 6379 951 | # cluster-announce-bus-port 6380 952 | 953 | ################################## SLOW LOG ################################### 954 | 955 | # The Redis Slow Log is a system to log queries that exceeded a specified 956 | # execution time. The execution time does not include the I/O operations 957 | # like talking with the client, sending the reply and so forth, 958 | # but just the time needed to actually execute the command (this is the only 959 | # stage of command execution where the thread is blocked and can not serve 960 | # other requests in the meantime). 961 | # 962 | # You can configure the slow log with two parameters: one tells Redis 963 | # what is the execution time, in microseconds, to exceed in order for the 964 | # command to get logged, and the other parameter is the length of the 965 | # slow log. When a new command is logged the oldest one is removed from the 966 | # queue of logged commands. 
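# As a brief illustration of how the two parameters below are used in practice
# (runtime commands shipped with stock Redis, shown here only as an example):
#
#   redis-cli SLOWLOG GET 10     # show the 10 most recent slow entries
#   redis-cli SLOWLOG LEN        # count of entries currently stored
#   redis-cli SLOWLOG RESET      # clear the log and reclaim its memory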
967 | 
968 | # The following time is expressed in microseconds, so 1000000 is equivalent
969 | # to one second. Note that a negative number disables the slow log, while
970 | # a value of zero forces the logging of every command.
971 | slowlog-log-slower-than 10000
972 | 
973 | # There is no limit to this length. Just be aware that it will consume memory.
974 | # You can reclaim memory used by the slow log with SLOWLOG RESET.
975 | slowlog-max-len 128
976 | 
977 | ################################ LATENCY MONITOR ##############################
978 | 
979 | # The Redis latency monitoring subsystem samples different operations
980 | # at runtime in order to collect data related to possible sources of
981 | # latency of a Redis instance.
982 | #
983 | # Via the LATENCY command this information is available to the user, who can
984 | # print graphs and obtain reports.
985 | #
986 | # The system only logs operations that were performed in a time equal or
987 | # greater than the amount of milliseconds specified via the
988 | # latency-monitor-threshold configuration directive. When its value is set
989 | # to zero, the latency monitor is turned off.
990 | #
991 | # By default latency monitoring is disabled since it is mostly not needed
992 | # if you don't have latency issues, and collecting data has a performance
993 | # impact, that while very small, can be measured under big load. Latency
994 | # monitoring can easily be enabled at runtime using the command
995 | # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
996 | latency-monitor-threshold 0
997 | 
998 | ############################# EVENT NOTIFICATION ##############################
999 | 
1000 | # Redis can notify Pub/Sub clients about events happening in the key space.
1001 | # This feature is documented at http://redis.io/topics/notifications
1002 | #
1003 | # For instance if keyspace events notification is enabled, and a client
1004 | # performs a DEL operation on key "foo" stored in the Database 0, two
1005 | # messages will be published via Pub/Sub:
1006 | #
1007 | # PUBLISH __keyspace@0__:foo del
1008 | # PUBLISH __keyevent@0__:del foo
1009 | #
1010 | # It is possible to select the events that Redis will notify among a set
1011 | # of classes. Every class is identified by a single character:
1012 | #
1013 | # K Keyspace events, published with __keyspace@<db>__ prefix.
1014 | # E Keyevent events, published with __keyevent@<db>__ prefix.
1015 | # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
1016 | # $ String commands
1017 | # l List commands
1018 | # s Set commands
1019 | # h Hash commands
1020 | # z Sorted set commands
1021 | # x Expired events (events generated every time a key expires)
1022 | # e Evicted events (events generated when a key is evicted for maxmemory)
1023 | # A Alias for g$lshzxe, so that the "AKE" string means all the events.
1024 | #
1025 | # The "notify-keyspace-events" takes as argument a string that is composed
1026 | # of zero or multiple characters. The empty string means that notifications
1027 | # are disabled.
1028 | #
1029 | # Example: to enable list and generic events, from the point of view of the
1030 | # event name, use:
1031 | #
1032 | # notify-keyspace-events Elg
1033 | #
1034 | # Example 2: to get the stream of the expired keys subscribing to channel
1035 | # name __keyevent@0__:expired use:
1036 | #
1037 | # notify-keyspace-events Ex
1038 | #
1039 | # By default all notifications are disabled because most users don't need
1040 | # this feature and the feature has some overhead.
Note that if you don't 1041 | # specify at least one of K or E, no events will be delivered. 1042 | notify-keyspace-events "" 1043 | 1044 | ############################### ADVANCED CONFIG ############################### 1045 | 1046 | # Hashes are encoded using a memory efficient data structure when they have a 1047 | # small number of entries, and the biggest entry does not exceed a given 1048 | # threshold. These thresholds can be configured using the following directives. 1049 | hash-max-ziplist-entries 512 1050 | hash-max-ziplist-value 64 1051 | 1052 | # Lists are also encoded in a special way to save a lot of space. 1053 | # The number of entries allowed per internal list node can be specified 1054 | # as a fixed maximum size or a maximum number of elements. 1055 | # For a fixed maximum size, use -5 through -1, meaning: 1056 | # -5: max size: 64 Kb <-- not recommended for normal workloads 1057 | # -4: max size: 32 Kb <-- not recommended 1058 | # -3: max size: 16 Kb <-- probably not recommended 1059 | # -2: max size: 8 Kb <-- good 1060 | # -1: max size: 4 Kb <-- good 1061 | # Positive numbers mean store up to _exactly_ that number of elements 1062 | # per list node. 1063 | # The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size), 1064 | # but if your use case is unique, adjust the settings as necessary. 1065 | list-max-ziplist-size -2 1066 | 1067 | # Lists may also be compressed. 1068 | # Compress depth is the number of quicklist ziplist nodes from *each* side of 1069 | # the list to *exclude* from compression. The head and tail of the list 1070 | # are always uncompressed for fast push/pop operations. Settings are: 1071 | # 0: disable all list compression 1072 | # 1: depth 1 means "don't start compressing until after 1 node into the list, 1073 | # going from either the head or tail" 1074 | # So: [head]->node->node->...->node->[tail] 1075 | # [head], [tail] will always be uncompressed; inner nodes will compress. 1076 | # 2: [head]->[next]->node->node->...->node->[prev]->[tail] 1077 | # 2 here means: don't compress head or head->next or tail->prev or tail, 1078 | # but compress all nodes between them. 1079 | # 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail] 1080 | # etc. 1081 | list-compress-depth 0 1082 | 1083 | # Sets have a special encoding in just one case: when a set is composed 1084 | # of just strings that happen to be integers in radix 10 in the range 1085 | # of 64 bit signed integers. 1086 | # The following configuration setting sets the limit in the size of the 1087 | # set in order to use this special memory saving encoding. 1088 | set-max-intset-entries 512 1089 | 1090 | # Similarly to hashes and lists, sorted sets are also specially encoded in 1091 | # order to save a lot of space. This encoding is only used when the length and 1092 | # elements of a sorted set are below the following limits: 1093 | zset-max-ziplist-entries 128 1094 | zset-max-ziplist-value 64 1095 | 1096 | # HyperLogLog sparse representation bytes limit. The limit includes the 1097 | # 16 bytes header. When an HyperLogLog using the sparse representation crosses 1098 | # this limit, it is converted into the dense representation. 1099 | # 1100 | # A value greater than 16000 is totally useless, since at that point the 1101 | # dense representation is more memory efficient. 
1102 | #
1103 | # The suggested value is ~ 3000 in order to have the benefits of
1104 | # the space efficient encoding without slowing down too much PFADD,
1105 | # which is O(N) with the sparse encoding. The value can be raised to
1106 | # ~ 10000 when CPU is not a concern, but space is, and the data set is
1107 | # composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
1108 | hll-sparse-max-bytes 3000
1109 | 
1110 | # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
1111 | # order to help rehashing the main Redis hash table (the one mapping top-level
1112 | # keys to values). The hash table implementation Redis uses (see dict.c)
1113 | # performs a lazy rehashing: the more operations you run against a hash table
1114 | # that is rehashing, the more rehashing "steps" are performed, so if the
1115 | # server is idle the rehashing is never complete and some more memory is used
1116 | # by the hash table.
1117 | #
1118 | # The default is to use this millisecond 10 times every second in order to
1119 | # actively rehash the main dictionaries, freeing memory when possible.
1120 | #
1121 | # If unsure:
1122 | # use "activerehashing no" if you have hard latency requirements and it is
1123 | # not a good thing in your environment that Redis can reply from time to time
1124 | # to queries with 2 milliseconds delay.
1125 | #
1126 | # use "activerehashing yes" if you don't have such hard requirements but
1127 | # want to free memory asap when possible.
1128 | activerehashing yes
1129 | 
1130 | # The client output buffer limits can be used to force disconnection of clients
1131 | # that are not reading data from the server fast enough for some reason (a
1132 | # common reason is that a Pub/Sub client can't consume messages as fast as the
1133 | # publisher can produce them).
1134 | #
1135 | # The limit can be set differently for the three different classes of clients:
1136 | #
1137 | # normal -> normal clients including MONITOR clients
1138 | # slave -> slave clients
1139 | # pubsub -> clients subscribed to at least one pubsub channel or pattern
1140 | #
1141 | # The syntax of every client-output-buffer-limit directive is the following:
1142 | #
1143 | # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
1144 | #
1145 | # A client is immediately disconnected once the hard limit is reached, or if
1146 | # the soft limit is reached and remains reached for the specified number of
1147 | # seconds (continuously).
1148 | # So for instance if the hard limit is 32 megabytes and the soft limit is
1149 | # 16 megabytes / 10 seconds, the client will get disconnected immediately
1150 | # if the size of the output buffers reach 32 megabytes, but will also get
1151 | # disconnected if the client reaches 16 megabytes and continuously overcomes
1152 | # the limit for 10 seconds.
1153 | #
1154 | # By default normal clients are not limited because they don't receive data
1155 | # without asking (in a push way), but just after a request, so only
1156 | # asynchronous clients may create a scenario where data is requested faster
1157 | # than it can read.
1158 | #
1159 | # Instead there is a default limit for pubsub and slave clients, since
1160 | # subscribers and slaves receive data in a push fashion.
1161 | #
1162 | # Both the hard and the soft limit can be disabled by setting them to zero.
1163 | client-output-buffer-limit normal 0 0 0
1164 | client-output-buffer-limit slave 256mb 64mb 60
1165 | client-output-buffer-limit pubsub 32mb 8mb 60
1166 | 
1167 | # Client query buffers accumulate new commands.
They are limited to a fixed 1168 | # amount by default in order to avoid that a protocol desynchronization (for 1169 | # instance due to a bug in the client) will lead to unbound memory usage in 1170 | # the query buffer. However you can configure it here if you have very special 1171 | # needs, such us huge multi/exec requests or alike. 1172 | # 1173 | # client-query-buffer-limit 1gb 1174 | 1175 | # In the Redis protocol, bulk requests, that are, elements representing single 1176 | # strings, are normally limited ot 512 mb. However you can change this limit 1177 | # here. 1178 | # 1179 | # proto-max-bulk-len 512mb 1180 | 1181 | # Redis calls an internal function to perform many background tasks, like 1182 | # closing connections of clients in timeout, purging expired keys that are 1183 | # never requested, and so forth. 1184 | # 1185 | # Not all tasks are performed with the same frequency, but Redis checks for 1186 | # tasks to perform according to the specified "hz" value. 1187 | # 1188 | # By default "hz" is set to 10. Raising the value will use more CPU when 1189 | # Redis is idle, but at the same time will make Redis more responsive when 1190 | # there are many keys expiring at the same time, and timeouts may be 1191 | # handled with more precision. 1192 | # 1193 | # The range is between 1 and 500, however a value over 100 is usually not 1194 | # a good idea. Most users should use the default of 10 and raise this up to 1195 | # 100 only in environments where very low latency is required. 1196 | hz 10 1197 | 1198 | # When a child rewrites the AOF file, if the following option is enabled 1199 | # the file will be fsync-ed every 32 MB of data generated. This is useful 1200 | # in order to commit the file to the disk more incrementally and avoid 1201 | # big latency spikes. 1202 | aof-rewrite-incremental-fsync yes 1203 | 1204 | # Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good 1205 | # idea to start with the default settings and only change them after investigating 1206 | # how to improve the performances and how the keys LFU change over time, which 1207 | # is possible to inspect via the OBJECT FREQ command. 1208 | # 1209 | # There are two tunable parameters in the Redis LFU implementation: the 1210 | # counter logarithm factor and the counter decay time. It is important to 1211 | # understand what the two parameters mean before changing them. 1212 | # 1213 | # The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis 1214 | # uses a probabilistic increment with logarithmic behavior. Given the value 1215 | # of the old counter, when a key is accessed, the counter is incremented in 1216 | # this way: 1217 | # 1218 | # 1. A random number R between 0 and 1 is extracted. 1219 | # 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1). 1220 | # 3. The counter is incremented only if R < P. 1221 | # 1222 | # The default lfu-log-factor is 10. 
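# (Worked example, for illustration only: with the default lfu-log-factor of 10
# and an old counter value of 10, the formula above gives
# P = 1/(10*10+1) = 1/101, i.e. roughly a 1% chance that a single access
# increments the counter.)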
This is a table of how the frequency 1223 | # counter changes with a different number of accesses with different 1224 | # logarithmic factors: 1225 | # 1226 | # +--------+------------+------------+------------+------------+------------+ 1227 | # | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits | 1228 | # +--------+------------+------------+------------+------------+------------+ 1229 | # | 0 | 104 | 255 | 255 | 255 | 255 | 1230 | # +--------+------------+------------+------------+------------+------------+ 1231 | # | 1 | 18 | 49 | 255 | 255 | 255 | 1232 | # +--------+------------+------------+------------+------------+------------+ 1233 | # | 10 | 10 | 18 | 142 | 255 | 255 | 1234 | # +--------+------------+------------+------------+------------+------------+ 1235 | # | 100 | 8 | 11 | 49 | 143 | 255 | 1236 | # +--------+------------+------------+------------+------------+------------+ 1237 | # 1238 | # NOTE: The above table was obtained by running the following commands: 1239 | # 1240 | # redis-benchmark -n 1000000 incr foo 1241 | # redis-cli object freq foo 1242 | # 1243 | # NOTE 2: The counter initial value is 5 in order to give new objects a chance 1244 | # to accumulate hits. 1245 | # 1246 | # The counter decay time is the time, in minutes, that must elapse in order 1247 | # for the key counter to be divided by two (or decremented if it has a value 1248 | # less <= 10). 1249 | # 1250 | # The default value for the lfu-decay-time is 1. A Special value of 0 means to 1251 | # decay the counter every time it happens to be scanned. 1252 | # 1253 | # lfu-log-factor 10 1254 | # lfu-decay-time 1 1255 | 1256 | ########################### ACTIVE DEFRAGMENTATION ####################### 1257 | # 1258 | # WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested 1259 | # even in production and manually tested by multiple engineers for some 1260 | # time. 1261 | # 1262 | # What is active defragmentation? 1263 | # ------------------------------- 1264 | # 1265 | # Active (online) defragmentation allows a Redis server to compact the 1266 | # spaces left between small allocations and deallocations of data in memory, 1267 | # thus allowing to reclaim back memory. 1268 | # 1269 | # Fragmentation is a natural process that happens with every allocator (but 1270 | # less so with Jemalloc, fortunately) and certain workloads. Normally a server 1271 | # restart is needed in order to lower the fragmentation, or at least to flush 1272 | # away all the data and create it again. However thanks to this feature 1273 | # implemented by Oran Agra for Redis 4.0 this process can happen at runtime 1274 | # in an "hot" way, while the server is running. 1275 | # 1276 | # Basically when the fragmentation is over a certain level (see the 1277 | # configuration options below) Redis will start to create new copies of the 1278 | # values in contiguous memory regions by exploiting certain specific Jemalloc 1279 | # features (in order to understand if an allocation is causing fragmentation 1280 | # and to allocate it in a better place), and at the same time, will release the 1281 | # old copies of the data. This process, repeated incrementally for all the keys 1282 | # will cause the fragmentation to drop back to normal values. 1283 | # 1284 | # Important things to understand: 1285 | # 1286 | # 1. This feature is disabled by default, and only works if you compiled Redis 1287 | # to use the copy of Jemalloc we ship with the source code of Redis. 1288 | # This is the default with Linux builds. 
1289 | #
1290 | # 2. You never need to enable this feature if you don't have fragmentation
1291 | # issues.
1292 | #
1293 | # 3. Once you experience fragmentation, you can enable this feature when
1294 | # needed with the command "CONFIG SET activedefrag yes".
1295 | #
1296 | # The configuration parameters are able to fine tune the behavior of the
1297 | # defragmentation process. If you are not sure about what they mean it is
1298 | # a good idea to leave the defaults untouched.
1299 | 
1300 | # Enable active defragmentation
1301 | # activedefrag yes
1302 | 
1303 | # Minimum amount of fragmentation waste to start active defrag
1304 | # active-defrag-ignore-bytes 100mb
1305 | 
1306 | # Minimum percentage of fragmentation to start active defrag
1307 | # active-defrag-threshold-lower 10
1308 | 
1309 | # Maximum percentage of fragmentation at which we use maximum effort
1310 | # active-defrag-threshold-upper 100
1311 | 
1312 | # Minimal effort for defrag in CPU percentage
1313 | # active-defrag-cycle-min 25
1314 | 
1315 | # Maximal effort for defrag in CPU percentage
1316 | # active-defrag-cycle-max 75
1317 | 
1318 | 
1319 | # maxmemory 256mb
1320 | # maxmemory-policy allkeys-lru
--------------------------------------------------------------------------------
/install/components/upgrade-php/UPGRADE_PHP_README.md:
--------------------------------------------------------------------------------
1 | # Upgrade PHP
2 | 
3 | ## Overview
4 | 
5 | This script upgrades PHP from one version to another and makes all the changes that the LEMP setup guide makes to the PHP and PHP-FPM configs as part of the initial/per-site setup.
6 | 
7 | > Note:
8 | > - The script must be run as root (`sudo su` first if you're not already root)
9 | > - It will exit on the first error it encounters
10 | > - It assumes that at least 1 site has already been set up. You can run the LEMP setup guide to set up a site.
11 | 
12 | It will do the following:
13 | 
14 | 1. Install new versions of PHP and php-fpm if they don't already exist
15 | 1. Install all php packages
16 | 1. Enable all php extensions
17 | 1. Make edits to the php.ini file
18 | 1. Make edits to the www.conf file if needed
19 | 1. Copy all php-fpm configs except www.conf over to the new version
20 | 1. Loop through each file in /etc/nginx/sites-enabled. Per site:
21 | - Update php-fpm config
22 | - Test php-fpm config
23 | - Check that new php-fpm sockets have been created
24 | - Update nginx site conf to point to new socket
25 | - Test nginx config
26 | 1. Update nginx fastcgicache conf to point to a new global socket
27 | 1. Test nginx config and reload (this sets the new php version live)
28 | 1. Update default PHP version
29 | 1. Output next steps
30 | 
31 | ## Usage
32 | 
33 | ```sh
34 | upgrade-php.sh OLD_VERSION NEW_VERSION
35 | ```
36 | 
37 | Example: upgrade-php.sh 8.0 8.2
38 | 
39 | > Note:
40 | > - Set $DEBUG for verbose output - DEBUG=true ./upgrade-php.sh [options]
41 | > - You can also run it without args to see the usage info: ./upgrade-php.sh
42 | 
43 | ## Troubleshooting
44 | 
45 | - If you're having issues with the script, try running it with the $DEBUG variable set to true. This will output more verbose information about what the script is doing.
46 | 
47 | ```sh
48 | DEBUG=true ./upgrade-php.sh OLD_VERSION NEW_VERSION
49 | ```
50 | 
51 | ### Helpful commands
52 | 
53 | 
54 | - Get php version
55 | ```sh
56 | php -v
57 | ```
58 | 
59 | - Check pool.d configs (this will show the old and new versions)
60 | ```sh
61 | grep -R "listen = /var/run/php" /etc/php/*/fpm/pool.d/
62 | ```
63 | 
64 | - Look for sockets
65 | ```sh
66 | ls -al /var/run/ | grep php
67 | ```
68 | 
69 | - Check nginx site configs
70 | ```sh
71 | grep -R "unix:/var/run/php" /etc/nginx/sites-enabled/
72 | ```
73 | 
74 | - Find running php-fpm processes
75 | ```sh
76 | ps aux | grep php-fpm
77 | ```
78 | 
--------------------------------------------------------------------------------
/install/components/upgrade-php/upgrade-php.sh:
--------------------------------------------------------------------------------
1 | set -e
2 | 
3 | # function to script_echo something in purple
4 | function script_echo() {
5 | echo -e "\033[35m$1\033[0m"
6 | }
7 | 
8 | # check that user is root
9 | if [ ! $EUID = 0 ]; then
10 | script_echo "Please run this script as root!"
11 | exit 1
12 | fi
13 | 
14 | 
15 | # If no args were passed, script_echo out some helpful info
16 | if [ $# -eq 0 ]; then
17 | script_echo "Usage: upgrade-php.sh OLD_VERSION NEW_VERSION"
18 | script_echo "Example: upgrade-php.sh 8.0 8.2"
19 | script_echo "Notes:"
20 | script_echo '- Set $DEBUG for verbose output - DEBUG=true ./upgrade-php.sh [options]'
21 | exit 1
22 | fi
23 | 
24 | # For debugging the script
25 | # set -v if $DEBUG is set
26 | if [ ! -z "$DEBUG" ]; then
27 | set -v
28 | fi
29 | 
30 | # Validate that the user passed at least two args
31 | if [ $# -lt 2 ]; then
32 | script_echo "Please pass the version args (OLD_VERSION NEW_VERSION)"
33 | script_echo "Example: upgrade-php.sh 8.0 8.2"
34 | exit 1
35 | fi
36 | 
37 | # Validate that the version args are valid
38 | # they should be numbers with a decimal point
39 | if [[ ! $1 =~ ^[0-9]+(\.[0-9]+)?$ ]] || [[ !
$2 =~ ^[0-9]+(\.[0-9]+)?$ ]]; then 40 | script_echo "Please pass valid version args" 41 | script_echo "Example: upgrade-php.sh 8.0 8.2" 42 | exit 1 43 | fi 44 | 45 | OLD_PHP_VERSION=$1 46 | NEW_PHP_VERSION=$2 47 | 48 | # Install new versions of PHP and php-fpm but only if they don't already exist 49 | if [ -f /usr/bin/php${NEW_PHP_VERSION} ]; then 50 | script_echo "ℹ️ php${NEW_PHP_VERSION} already exists, skipping install" 51 | else 52 | script_echo "📦 Installing php${NEW_PHP_VERSION}" 53 | sudo apt-get update 54 | sudo apt-get -y --no-install-recommends install php${NEW_PHP_VERSION} 55 | fi 56 | 57 | if [ -f /usr/sbin/php-fpm${NEW_PHP_VERSION} ]; then 58 | script_echo "ℹ️ php-fpm${NEW_PHP_VERSION} already exists, skipping install" 59 | else 60 | script_echo "📦 Installing php-fpm${NEW_PHP_VERSION}" 61 | sudo apt-get update 62 | sudo apt-get -y --no-install-recommends install php${NEW_PHP_VERSION}-fpm 63 | fi 64 | 65 | # Install all php packages if they don't already exist 66 | script_echo "📦 Installing php packages" 67 | sudo apt-get update && \ 68 | sudo apt-get -y install php${NEW_PHP_VERSION}-curl && \ 69 | sudo apt-get -y install php${NEW_PHP_VERSION}-mysql && \ 70 | sudo apt-get -y install php${NEW_PHP_VERSION}-dev && \ 71 | sudo apt-get -y install php${NEW_PHP_VERSION}-zip && \ 72 | sudo apt-get -y install php${NEW_PHP_VERSION}-mbstring && \ 73 | sudo apt-get -y install php${NEW_PHP_VERSION}-memcached && \ 74 | sudo apt-get -y install php${NEW_PHP_VERSION}-pgsql && \ 75 | sudo apt-get -y install php${NEW_PHP_VERSION}-xml && \ 76 | sudo apt-get -y install php${NEW_PHP_VERSION}-intl && \ 77 | sudo apt-get -y install php${NEW_PHP_VERSION}-redis && \ 78 | sudo apt-get -y install php${NEW_PHP_VERSION}-bcmath && \ 79 | sudo apt-get -y install php${NEW_PHP_VERSION}-gd 80 | 81 | # make edits to PHP.ini file 82 | script_echo "🖊 Editing php.ini" 83 | sed -i 's/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/g' /etc/php/${NEW_PHP_VERSION}/fpm/php.ini 84 | 85 | # Enable all php extensions 86 | script_echo "📦 Enabling all php extensions" 87 | phpenmod -v $NEW_PHP_VERSION mbstring 88 | phpenmod -v $NEW_PHP_VERSION curl 89 | phpenmod -v $NEW_PHP_VERSION xml 90 | phpenmod -v $NEW_PHP_VERSION xmlreader 91 | phpenmod -v $NEW_PHP_VERSION simplexml 92 | phpenmod -v $NEW_PHP_VERSION gd 93 | 94 | # make edits to www.conf file if needed 95 | PHP_WWW_CONF=/etc/php/${NEW_PHP_VERSION}/fpm/pool.d/www.conf 96 | if [ ! 
-f $PHP_WWW_CONF ]; then
97 | script_echo "ℹ️ www.conf not found, skipping edits"
98 | else
99 | script_echo "🖊 Editing www.conf"
100 | sed -i "s/^;listen.owner = www-data/listen.owner = www-data/g" $PHP_WWW_CONF
101 | sed -i "s/^;listen.group = www-data/listen.group = www-data/g" $PHP_WWW_CONF
102 | sed -i "s/^;listen.mode = 0660/listen.mode = 0660/" $PHP_WWW_CONF
103 | fi
104 | 
105 | # Copy all php-fpm configs except www.conf over to the new version
106 | ls /etc/php/${OLD_PHP_VERSION}/fpm/pool.d/ | grep -v www.conf | while read -r line; do
107 | # Copy the file to the new version
108 | script_echo "📦 Copying php-fpm configs for site pool: ${line}"
109 | sudo cp /etc/php/${OLD_PHP_VERSION}/fpm/pool.d/${line} /etc/php/${NEW_PHP_VERSION}/fpm/pool.d/${line}
110 | done
111 | 
112 | # loop through each file in sites-enabled
113 | ls /etc/nginx/sites-enabled/ | while read -r line; do
114 | NGINX_SITE_CONF=${line}
115 | 
116 | # Remove the .conf from the end of the file name and save to SITE_NAME
117 | SITE_NAME=${line%.conf}
118 | 
119 | # script_echo which file we're on
120 | script_echo "🦾 Processing site: ${SITE_NAME}"
121 | 
122 | # Get the socket name from inside the nginx file. Use that to replace the socket name in the php-fpm file.
123 | # This is the old socket name (get everything between unix: and .sock), and add .sock to the end
124 | OLD_SOCKET_PREFIX=$(cat /etc/nginx/sites-enabled/${NGINX_SITE_CONF} | grep "unix:" | head -n 1 | grep -oP '(?<=unix:).*(?=sock)')sock
125 | # replace / with \/ to get a sed friendly string
126 | OLD_SOCKET_GLOB=$(echo ${OLD_SOCKET_PREFIX} | sed 's/\//\\\//g')
127 | 
128 | NEW_SOCKET_PREFIX="/var/run/php${NEW_PHP_VERSION}-fpm.${SITE_NAME}.sock"
129 | # replace / with \/ to get a sed friendly string
130 | NEW_SOCKET_GLOB=$(echo ${NEW_SOCKET_PREFIX} | sed 's/\//\\\//g')
131 | 
132 | # Do a recursive grep in pool.d to find all the files that have the old socket name
133 | grep -rl ${OLD_SOCKET_GLOB} /etc/php/${NEW_PHP_VERSION}/fpm/pool.d/ | while read -r phpconfig; do
134 | # Update php-fpm config
135 | script_echo "🖊 Editing site php-fpm config: ${phpconfig}"
136 | sudo sed -i "s/^listen = ${OLD_SOCKET_GLOB}/listen = ${NEW_SOCKET_GLOB}/g" ${phpconfig}
137 | done
138 | 
139 | # Test php-fpm config and restart
140 | script_echo "🕵️‍♀️ Testing php-fpm config and restarting"
141 | sudo php-fpm${NEW_PHP_VERSION} -t && sudo service php${NEW_PHP_VERSION}-fpm restart
142 | 
143 | # check that sockets have been created, exit if not
144 | # there should be 2, sock1 and sock2
145 | # pattern is /var/run/php${NEW_PHP_VERSION}-fpm.${SITE_NAME}.sock(1 or 2)
146 | # the -S flag checks if the file exists and is a socket. -f fails because it's not a regular file
147 | script_echo "🕵️‍♀️ Checking for php-fpm sockets"
148 | if [ ! -S ${NEW_SOCKET_PREFIX}1 ] || [ ! -S ${NEW_SOCKET_PREFIX}2 ]; then
149 | script_echo "Missing php-fpm sockets"
150 | script_echo "Expected: ${NEW_SOCKET_PREFIX}1"
151 | script_echo "Expected: ${NEW_SOCKET_PREFIX}2"
152 | exit 1
153 | fi
154 | 
155 | # update nginx site conf to point to new socket
156 | # Note! We always want to edit sites-available, not sites-enabled.
157 | # sites-enabled contains symlinks to sites-available.
Making changes there would break the symlink 158 | script_echo "🖊 Editing nginx site conf" 159 | sudo sed -i "s/unix:${OLD_SOCKET_GLOB}/unix:${NEW_SOCKET_GLOB}/g" /etc/nginx/sites-available/${NGINX_SITE_CONF} 160 | 161 | # test nginx config 162 | script_echo "🕵️‍♀️ Testing nginx config" 163 | sudo nginx -t 164 | done 165 | 166 | # update nginx fastcgicache conf to point to a new global socket 167 | # replace any socket that starts with /var/run/php*.sock 168 | # with /var/run/php${NEW_PHP_VERSION}-fpm.sock 169 | if [ ! -f /etc/nginx/conf/fastcgicache.conf ]; then 170 | script_echo "ℹ️ fastcgicache.conf not found, skipping edits" 171 | else 172 | script_echo "🖊 Editing nginx fastcgicache conf" 173 | # sudo sed -i "s/unix:${OLD_SOCKET_GLOB}/unix:${NEW_SOCKET_GLOB}/g" /etc/nginx/conf/fastcgicache.conf 174 | sudo sed -i "s/\/var\/run\/php.*\.sock/\/var\/run\/php${NEW_PHP_VERSION}-fpm.sock/g" /etc/nginx/conf/fastcgicache.conf 175 | fi 176 | 177 | # test nginx config and reload 178 | script_echo "🕵️‍♀️ Testing nginx config and reloading" 179 | sudo nginx -t && sudo service nginx reload 180 | 181 | # # Set default PHP version 182 | script_echo "📦 Setting default php version to ${NEW_PHP_VERSION}" 183 | sudo update-alternatives --set php /usr/bin/php${NEW_PHP_VERSION} 184 | 185 | script_echo " 186 | ✅ All done! 187 | -------------------- 188 | 📝 Next steps: 189 | - Restart any processes running the old php version, ex: pm2. Check with: \`ps aux | grep php${OLD_PHP_VERSION}\` 190 | - Shut down the old php-fpm service: \`sudo service php${OLD_PHP_VERSION}-fpm stop\` 191 | " 192 | -------------------------------------------------------------------------------- /install/install: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | cr=`echo $'\n.'` 3 | cr=${cr%.} 4 | 5 | # Check if /var/www exists 6 | if [ ! -d "/var/www" ]; then 7 | mkdir /var/www/ 8 | fi 9 | 10 | printf "Installing git...\n" 11 | sudo apt-get install -y git 12 | 13 | THIS_REPO=https://github.com/amurrell/LEMP-setup-guide.git 14 | 15 | cd /var/www/ 16 | 17 | # Check if /var/www exists 18 | if [ ! -d "/var/www/LEMP-setup-guide/" ]; then 19 | git clone $THIS_REPO 20 | fi 21 | 22 | cd LEMP-setup-guide/scripts 23 | 24 | ./server-initial-setup.sh -------------------------------------------------------------------------------- /scripts/cert-cron-scripts/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/amurrell/LEMP-setup-guide/597321fd273ca981b84809d7358d9650af2d9dc8/scripts/cert-cron-scripts/.gitkeep -------------------------------------------------------------------------------- /scripts/install-cert.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Define Funcs 4 | extract_webroot_path() { 5 | local domain="$1" 6 | local nginx_conf_path="$2" 7 | 8 | # Ensure domain and nginx_conf_path are provided 9 | if [[ -z "$domain" || -z "$nginx_conf_path" ]]; then 10 | echo "Error: Either domain or nginx_conf_path is not provided." 
11 | return 1 12 | fi 13 | 14 | local result=$(grep -o "root /var/www/$domain[^;]*;" $nginx_conf_path) 15 | 16 | local webrootpath="${result#*"$domain"}" 17 | webrootpath="${webrootpath%;}" 18 | 19 | # Check if webrootpath starts with /current and remove it if it does 20 | if [[ $webrootpath == /current* ]]; then 21 | webrootpath="${webrootpath#/current}" 22 | fi 23 | 24 | # Print the webrootpath 25 | echo "$webrootpath" 26 | } 27 | 28 | fix_root_path() { 29 | local domain="$1" 30 | local nginx_conf_path="$2" 31 | 32 | # Ensure domain and nginx_conf_path are provided 33 | if [[ -z "$domain" || -z "$nginx_conf_path" ]]; then 34 | echo "Error: Either domain or nginx_conf_path is not provided." 35 | return 1 36 | fi 37 | 38 | # Read the 'root' line from the provided nginx configuration file 39 | local root_line=$(sed -n '/^[ \t]*root [^;]*;/p' "$nginx_conf_path") 40 | 41 | # printf "========== ROOT LINE\n" 42 | # printf "$root_line\n" 43 | 44 | # Remove double slashes and trailing slash before the semicolon 45 | local modified_line=$(echo "$root_line" | sed 's#//\+#/#g' | sed 's#/\(;\)$#\1#') 46 | # printf "========== MODIFIED ROOT LINE\n" 47 | # printf "$modified_line\n" 48 | 49 | # Use sed to replace root_line with modified_line in the provided nginx configuration file 50 | sed -i "s#$(echo "$root_line" | sed 's#[\&/\.]#\\&#g')#$modified_line#g" "$nginx_conf_path" 51 | 52 | # printf "========== AFTER SED\n" 53 | # cat "$nginx_conf_path" 54 | } 55 | 56 | # check for certbot 57 | if ! [ -x "$(command -v certbot)" ]; then 58 | sudo apt-get update 59 | sudo apt-get install -y software-properties-common 60 | LC_ALL=C.UTF-8 sudo add-apt-repository -y ppa:certbot/certbot 61 | sudo apt-get update 62 | sudo apt-get install -y certbot 63 | fi 64 | 65 | # check registered 66 | printf "checking if registered...\n" 67 | certbot register 68 | if [ $? -eq 0 ]; then 69 | printf "assuming you went through registration...\n" 70 | else 71 | printf "already registered!\n" 72 | fi 73 | 74 | CERTBOT_CMD='certbot certonly --non-interactive --keep-until-expiring --post-hook "sudo service nginx reload" --cert-name' 75 | 76 | # PROMPT - Certificate name 77 | read -p "Certificate name, eg your-site.com:" CERTNAME 78 | 79 | CERTBOT_CMD="$CERTBOT_CMD $CERTNAME" 80 | 81 | # PROMPT - domains to secure, eg. yoursite.com,www.yoursite.com 82 | read -p "Domain to secure, eg. yoursite.com,www.yoursite.com: " SITEDOMAINS 83 | 84 | for i in $(echo $SITEDOMAINS | sed "s/,/ /g") 85 | do 86 | # PROMPT - webroot for domain $i 87 | read -p "Webroot for domain $i, eg. /var/www/yoursite.com/app/dist or /var/www/yoursite.com/current/html: " CUR_WEBROOT 88 | 89 | # APPEND 90 | CERTBOT_CMD="$CERTBOT_CMD --webroot -w $CUR_WEBROOT -d $i" 91 | done 92 | 93 | # if you forgot www.your-site.com and go to add it and nginx has config to redirect to https://yoursite.com 94 | # the cert could fail to validate via webroot challenge so you may want to convert back to non ssl and no redirects before 95 | # running this. This is a weird case that could happen though. 96 | eval $CERTBOT_CMD 97 | 98 | 99 | # NGINX SSL CERT CODE? 100 | read -p "Do you want to create a cron for this cert?. (y)|(n): " -n 1 -r 101 | echo # (optional) move to a new line 102 | if [[ $REPLY =~ ^[Yy]$ ]] 103 | then 104 | # PROMPT - for the main domain 105 | read -p "Main domain to use for name of command to run on cron, eg. 
yoursite.com: " DOMAIN 106 | 107 | # Save command as script 108 | echo "${CERTBOT_CMD}" > "cert-cron-scripts/$DOMAIN.sh" 109 | sudo chmod +x "cert-cron-scripts/$DOMAIN.sh" 110 | 111 | ## get current cron text 112 | if [ $(crontab -l | wc -c) -eq 0 ]; then 113 | echo crontab is empty 114 | touch curcron 115 | else 116 | crontab -l > curcron 117 | fi 118 | 119 | #echo new cron into cron file ... 120 | 121 | 122 | # * * * * * "command to be executed" 123 | # - - - - - 124 | # | | | | | 125 | # | | | | ----- Day of week (0 - 7) (Sunday=0 or 7) 126 | # | | | ------- Month (1 - 12) 127 | # | | --------- Day of month (1 - 31) 128 | # | ----------- Hour (0 - 23) 129 | # ------------- Minute (0 - 59) 130 | 131 | # 0 */12 means: at minute 0, every 12 hours (the slash makes it an "every..."), so the renewal attempt runs twice a day 132 | echo "0 */12 * * * /var/www/LEMP-setup-guide/scripts/cert-cron-scripts/$DOMAIN.sh" >> curcron 133 | #install new cron file 134 | crontab curcron 135 | rm curcron 136 | fi 137 | 138 | 139 | # NGINX SSL CERT CODE? 140 | read -p "Do you want nginx SSL cert config to get generated? It will override what you have now. (y)|(n): " -n 1 -r 141 | echo # (optional) move to a new line 142 | if [[ $REPLY =~ ^[Yy]$ ]] 143 | then 144 | # PROMPT - for the main domain 145 | read -p "Main domain to use for config, eg. yoursite.com: " DOMAIN 146 | 147 | # CREATE the domain_com variable eg yoursite_com 148 | echo "$DOMAIN" >> tempdomain.txt 149 | sed -i 's/\./_/g' tempdomain.txt 150 | DOMAIN_COM=$(cat tempdomain.txt) 151 | rm tempdomain.txt 152 | 153 | # Get Web Root Path for replacing in nginx config 154 | WEBROOTPATH=$(extract_webroot_path "$DOMAIN" "/etc/nginx/sites-available/$DOMAIN.conf") 155 | 156 | rm /etc/nginx/sites-available/"$DOMAIN.conf" 157 | 158 | # NGINX - Proxy or not to proxy? 159 | read -p "Do you need to setup nginx config with upstreams and php? (y)|(n): " -n 1 -r 160 | echo # (optional) move to a new line 161 | if [[ $REPLY =~ ^[Yy]$ ]] 162 | then 163 | # Create nginx config for php streams 164 | cp /var/www/LEMP-setup-guide/config/site.nginx.ssl.conf /etc/nginx/sites-available/"$DOMAIN.conf" 165 | else 166 | # Create nginx config with proxy pass 167 | cp /var/www/LEMP-setup-guide/config/site.nginx.vueapp.ssl.conf /etc/nginx/sites-available/"$DOMAIN.conf" 168 | fi 169 | 170 | cd /etc/nginx/sites-available/ 171 | 172 | sed -i "s/SITEDOTCOM/${DOMAIN}/g;" "$DOMAIN.conf" 173 | sed -i "s/SITE_COM/${DOMAIN_COM}/g;" "$DOMAIN.conf" 174 | sed -i "s|WEBROOTPATH|${WEBROOTPATH}|g;" "$DOMAIN.conf" 175 | 176 | # nginx - handle current path stuff - 177 | # if /var/www/$DOMAIN/current exists, need to sed nginx config to swap paths 178 | if [ -d "/var/www/$DOMAIN/current" ]; then 179 | # For lines that do not contain '/var/www/${DOMAIN}/current', replace '/var/www/${DOMAIN}/' with '/var/www/${DOMAIN}/current/' 180 | sed -i "/var\/www\/${DOMAIN}\/current/!s/var\/www\/${DOMAIN}\//var\/www\/${DOMAIN}\/current\//g" "$DOMAIN.conf" 181 | fi 182 | 183 | # Fix possible issues in root path 184 | fix_root_path "$DOMAIN" "$DOMAIN.conf" 185 | 186 | # This may already exist... 187 | ln -s /etc/nginx/sites-available/"${DOMAIN}.conf" /etc/nginx/sites-enabled/ 188 | 189 | # Test nginx config 190 | sudo nginx -t 191 | 192 | # if nginx test is good - reload nginx 193 | if [ $? 
-eq 0 ]; then 194 | sudo service nginx reload 195 | fi 196 | else 197 | printf "okay, bye!\n" 198 | fi 199 | -------------------------------------------------------------------------------- /scripts/logrotate/TEMPLATE: -------------------------------------------------------------------------------- 1 | LOG_PATH { 2 | daily 3 | missingok 4 | rotate 14 5 | compress 6 | notifempty 7 | create 0640 www-data www-data 8 | sharedscripts 9 | postrotate 10 | systemctl reload PHP_FPM 11 | systemctl reload nginx 12 | endscript 13 | } -------------------------------------------------------------------------------- /scripts/server-initial-setup.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # 3 | # Adapted 4 | # from: https://github.com/actuallymentor/Setup-Script-Nginx-Pagespeed-PHP7-Mariadb 5 | # by: @actuallymentor, github 6 | # 7 | # This install script uses the following version defaults: 8 | # 9 | # NOTE: override these versions in the config/versions folder 10 | # by adding files with override prepended eg "override-php-version" 11 | # 12 | # - openssl: 1.1.1k 13 | # - pagespeed: 1.13.35.2-stable 14 | # - nginx: 1.20.0 15 | # - php: 7.4 16 | # - mariadb: 10.6 17 | # 18 | 19 | ########################### Variables ############################# 20 | workerprocesses=$(grep processor /proc/cpuinfo | wc -l) 21 | workerconnections=$(ulimit -n) 22 | fastcgicache_global=$(<../config/nginx/fastcgicache_global) 23 | fastcgicache=$(<../config/nginx/fastcgicache) 24 | 25 | # Figure out global_nginx_conf 26 | cp ../config/nginx/global_nginx_conf ../config/nginx/global_nginx_conf_custom 27 | sed -i'.bak' "s/WORKER_PROCESSES/${workerprocesses}/g;" ../config/nginx/global_nginx_conf_custom 28 | sed -i'.bak' "s/WORKER_CONNECTIONS/${workerconnections}/g;" ../config/nginx/global_nginx_conf_custom 29 | rm ../config/nginx/global_nginx_conf_custom.bak 30 | 31 | # Versions 32 | PHP_VERSION=$(<../config/versions/php-version) 33 | NGINX_VERSION=$(<../config/versions/nginx-version) 34 | PAGESPEED_VERSION=$(<../config/versions/pagespeed-version) 35 | OPENSSL_VERSION=$(<../config/versions/openssl-version) 36 | MARIADB_VERSION=$(<../config/versions/mariadb-version) 37 | 38 | # detect the ubuntu version and release name 39 | UBUNTU_VERSION=$(lsb_release -r | awk '{print $2}') 40 | UBUNTU_RELEASE_NAME=$(lsb_release -c | awk '{print $2}') 41 | 42 | # Version overrides 43 | if [ -f ../config/versions/override-php-version ] && [ ! -z $(<../config/versions/override-php-version) ]; then 44 | PHP_VERSION=$(<../config/versions/override-php-version) 45 | fi 46 | 47 | if [ -f ../config/versions/override-nginx-version ] && [ ! -z $(<../config/versions/override-nginx-version) ]; then 48 | NGINX_VERSION=$(<../config/versions/override-nginx-version) 49 | fi 50 | 51 | if [ -f ../config/versions/override-pagespeed-version ] && [ ! -z $(<../config/versions/override-pagespeed-version) ]; then 52 | PAGESPEED_VERSION=$(<../config/versions/override-pagespeed-version) 53 | fi 54 | 55 | if [ -f ../config/versions/override-openssl-version ] && [ ! -z $(<../config/versions/override-openssl-version) ]; then 56 | OPENSSL_VERSION=$(<../config/versions/override-openssl-version) 57 | fi 58 | 59 | if [ -f ../config/versions/override-mariadb-version ] && [ ! 
-z $(<../config/versions/override-mariadb-version) ]; then 60 | MARIADB_VERSION=$(<../config/versions/override-mariadb-version) 61 | fi 62 | 63 | # Nginx configurations 64 | global_nginx_conf=$(<../config/nginx/global_nginx_conf_custom) 65 | nginx_conf=$(<../config/nginx/nginx_conf) 66 | mod_pagespeed=$(<../config/nginx/mod_pagespeed) 67 | cache=$(<../config/nginx/cache) 68 | gzipconf=$(<../config/nginx/gzip_conf) 69 | log_format=$(<../config/nginx/log_format_conf) 70 | 71 | #Auto security update rules 72 | updaterules=$(<../config/security/updates_rules) 73 | rotaterules=$(<../config/security/rotate_rules) 74 | 75 | ################################################################## 76 | ################################################################## 77 | ################################################################## 78 | ################################################################## 79 | ################################################################## 80 | ################################################################## 81 | ################################################################## 82 | ################################################################## 83 | 84 | sudo apt-get update 85 | 86 | ### 18.04 AMI has grub issue 87 | ### Workaround: Pre-update /etc/default/grub and 88 | ### remove /boot/grub/menu.lst to avoid 'file changed' 89 | ### prompts from blocking completion of unattended update process 90 | # patch /etc/default/grub <<'EOF' 91 | # 10c10 92 | # < GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0" 93 | # --- 94 | # > GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 nvme.io_timeout=4294967295" 95 | # 19c19 96 | # < GRUB_TERMINAL=console 97 | # --- 98 | # > #GRUB_TERMINAL=console 99 | # EOF 100 | # rm /boot/grub/menu.lst 101 | 102 | # Avoid php packaging prompts - setting a timezone 103 | dpkg-reconfigure debconf --frontend=noninteractive 104 | DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install grub-pc 105 | DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata 106 | # ln -fs /usr/share/zoneinfo/America/Los_Angeles /etc/localtime 107 | dpkg-reconfigure --frontend noninteractive tzdata 108 | 109 | # sudo apt-get -y upgrade 110 | 111 | ### Workaround part 2: re-generate /boot/grub/menu.lst 112 | # /usr/sbin/update-grub-legacy-ec2 -y 113 | 114 | # Dependencies etc 115 | DEBIAN_FRONTEND=noninteractive apt-get install -y wget 116 | DEBIAN_FRONTEND=noninteractive apt-get install -y git 117 | DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential 118 | DEBIAN_FRONTEND=noninteractive apt-get install -y python 119 | DEBIAN_FRONTEND=noninteractive apt-get install -y dpkg-dev 120 | DEBIAN_FRONTEND=noninteractive apt-get install -y zlib1g-dev 121 | DEBIAN_FRONTEND=noninteractive apt-get install -y libpcre3 122 | DEBIAN_FRONTEND=noninteractive apt-get install -y libpcre3-dev 123 | DEBIAN_FRONTEND=noninteractive apt-get install -y unzip 124 | DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common 125 | DEBIAN_FRONTEND=noninteractive apt-get install -y uuid-dev 126 | 127 | # Pagespeed download - we will be in root 128 | cd 129 | 130 | # if PAGESPEED_VERSION contains "beta" or "stable" then use that version 131 | # - eg. options are: 1.14.33.1-RC1, 1.13.35.2-stable, and before... from releases 132 | # otherwise, if the version starts with 1.15, then use the master branch. It's archived so that's the last version. They never released it. 133 | # - eg. 
1.15.0.0-8917 (master branch) git clone --depth=1 https://github.com/apache/incubator-pagespeed-ngx.git 134 | if [[ $PAGESPEED_VERSION == *"beta"* ]] || [[ $PAGESPEED_VERSION == *"stable"* ]]; then 135 | wget https://github.com/apache/incubator-pagespeed-ngx/archive/v${PAGESPEED_VERSION}.zip 136 | unzip v${PAGESPEED_VERSION}.zip 137 | elif [[ $PAGESPEED_VERSION == "1.15"* ]]; then 138 | git clone --depth=1 https://github.com/apache/incubator-pagespeed-ngx.git incubator-pagespeed-ngx-${PAGESPEED_VERSION} 139 | fi 140 | 141 | nps_dir=$(find . -name "*pagespeed-ngx-${PAGESPEED_VERSION}" -type d) 142 | cd "$nps_dir" 143 | 144 | # Fix psol nginx/ubuntu version combability issues 145 | # issue: on newer glibc - eg with ubuntu jammy, error "undefined reference to `pthread_yield'". 146 | # solution: Install correct psol for unbutu release, based on another contributors' kind fix. 147 | # which will rely on their psol hosted files for now. 148 | # - source - fix: https://github.com/apache/incubator-pagespeed-ngx/issues/1760#issue-1385031517 149 | # - source - build psol from source example: https://github.com/eilandert/build_psol/blob/main/docker/bootstrap.sh 150 | psol_url=http://www.tiredofit.nl/psol-${UBUNTU_RELEASE_NAME}.tar.xz 151 | wget ${psol_url} 152 | tar xvf psol-${UBUNTU_RELEASE_NAME}.tar.xz 153 | 154 | # Openssl Download 155 | wget -qO - https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz | tar xzf - -C /tmp 156 | cd 157 | 158 | # Nginx Download 159 | mkdir -p /etc/nginx 160 | cd 161 | wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz 162 | tar -xvzf nginx-${NGINX_VERSION}.tar.gz 163 | cd nginx-${NGINX_VERSION}/ 164 | ./configure \ 165 | --add-module=$HOME/$nps_dir ${PS_NGX_EXTRA_FLAGS} \ 166 | --prefix=/etc/nginx \ 167 | --sbin-path=/usr/sbin/nginx \ 168 | --conf-path=/etc/nginx/nginx.conf \ 169 | --error-log-path=/var/log/nginx/error.log \ 170 | --http-log-path=/var/log/nginx/access.log \ 171 | --pid-path=/var/run/nginx.pid \ 172 | --lock-path=/var/run/nginx.lock \ 173 | --http-client-body-temp-path=/var/cache/nginx/client_temp \ 174 | --http-proxy-temp-path=/var/cache/nginx/proxy_temp \ 175 | --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \ 176 | --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \ 177 | --http-scgi-temp-path=/var/cache/nginx/scgi_temp \ 178 | --user=www-data \ 179 | --group=www-data \ 180 | --with-http_ssl_module \ 181 | --with-http_realip_module \ 182 | --with-http_addition_module \ 183 | --with-http_sub_module \ 184 | --with-http_dav_module \ 185 | --with-http_flv_module \ 186 | --with-http_mp4_module \ 187 | --with-http_gunzip_module \ 188 | --with-http_gzip_static_module \ 189 | --with-http_random_index_module \ 190 | --with-http_secure_link_module \ 191 | --with-http_stub_status_module \ 192 | --with-http_auth_request_module \ 193 | --without-http_autoindex_module \ 194 | --without-http_ssi_module \ 195 | --with-threads \ 196 | --with-stream \ 197 | --with-stream_ssl_module \ 198 | --with-mail \ 199 | --with-mail_ssl_module \ 200 | --with-file-aio \ 201 | --with-http_v2_module \ 202 | --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' \ 203 | --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,--as-needed' \ 204 | --with-ipv6 \ 205 | --with-pcre-jit \ 206 | --with-openssl=/tmp/openssl-${OPENSSL_VERSION} 207 | make 208 | sudo make install 209 | 210 | # Finish installing pagespeed and nginx configuration 211 | mkdir -p /var/log/ngx_pagespeed 212 | mkdir 
-p /var/cache/nginx/ 213 | mkdir -p /var/cache/nginx/client_temp 214 | git clone https://github.com/Fleshgrinder/nginx-sysvinit-script.git 215 | cd nginx-sysvinit-script 216 | make 217 | 218 | sudo update-rc.d -f nginx defaults 219 | 220 | mkdir -p /etc/nginx/conf 221 | mkdir -p /etc/nginx/sites 222 | echo "$global_nginx_conf" > /etc/nginx/nginx.conf; 223 | echo "$fastcgicache_global" > /etc/nginx/conf/fastcgicache_global.conf 224 | echo "$nginx_conf" > /etc/nginx/sites/default; 225 | echo "$mod_pagespeed" > /etc/nginx/conf/mod_pagespeed.conf; 226 | echo "$log_format" > /etc/nginx/conf/log_format.conf; 227 | echo "$cache" > /etc/nginx/conf/cache.conf; 228 | echo "$gzipconf" > /etc/nginx/conf/gzip.conf; 229 | echo "$fastcgicache" > /etc/nginx/conf/fastcgicache.conf 230 | 231 | # Mariadb 232 | sudo apt-get update 233 | curl -LsS -O https://downloads.mariadb.com/MariaDB/mariadb_repo_setup 234 | sudo bash mariadb_repo_setup "--mariadb-server-version=${MARIADB_VERSION}" 235 | sudo apt-get update 236 | cat /etc/apt/sources.list.d/mariadb.list 237 | MARIADB_PASSWORD="mariadb-server-${MARIADB_VERSION} mysql-server/root_password password PASS" 238 | MARIADB_PASSWORD_AGAIN="mariadb-server-${MARIADB_VERSION} mysql-server/root_password_again password PASS" 239 | sudo debconf-set-selections <<< ${MARIADB_PASSWORD} 240 | sudo debconf-set-selections <<< ${MARIADB_PASSWORD_AGAIN} 241 | sudo apt-get install -y mariadb-server mariadb-client > /dev/null 242 | sudo service mariadb start # note that this is not mysql anymore, it's mariadb 243 | mysql -u root -pPASS -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'password';" 244 | 245 | # remind folks about these: 246 | 247 | # Run this to enable mariadb to start on reboot 248 | #sudo systemctl enable mariadb 249 | 250 | # Run this to secure the installation! 
251 | #sudo mariadb-secure-installation 252 | 253 | # PHP 254 | LC_ALL=C.UTF-8 sudo add-apt-repository -y ppa:ondrej/php 255 | sudo apt-get update 256 | sudo apt-get -y --no-install-recommends install php${PHP_VERSION} 257 | sudo apt-get -y --no-install-recommends install php${PHP_VERSION}-fpm 258 | sudo apt-get clean 259 | 260 | sudo update-alternatives --set php /usr/bin/php${PHP_VERSION} 261 | service php${PHP_VERSION}-fpm start 262 | 263 | # PHP 264 | sudo apt-get update && \ 265 | sudo apt-get install -y php${PHP_VERSION}-curl && \ 266 | sudo apt-get install -y php${PHP_VERSION}-mysql && \ 267 | sudo apt-get -y install php-pear && \ 268 | sudo apt-get -y install php${PHP_VERSION}-dev && \ 269 | sudo apt-get -y install libcurl3-openssl-dev && \ 270 | sudo apt-get -y install libyaml-dev && \ 271 | sudo apt-get -y install php${PHP_VERSION}-zip && \ 272 | sudo apt-get -y install php${PHP_VERSION}-mbstring && \ 273 | sudo apt-get -y install php${PHP_VERSION}-memcached && \ 274 | sudo apt-get -y install php${PHP_VERSION}-pgsql && \ 275 | sudo apt-get -y install php${PHP_VERSION}-xml && \ 276 | sudo apt-get -y install php${PHP_VERSION}-intl && \ 277 | sudo apt-get -y install php${PHP_VERSION}-redis && \ 278 | sudo apt-get -y install php${PHP_VERSION}-bcmath && \ 279 | sudo apt-get -y install php${PHP_VERSION}-gd 280 | 281 | # FastCGI microcaching 282 | sed -i 's/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/g' /etc/php/${PHP_VERSION}/fpm/php.ini 283 | sed -i "s/^;listen.owner = www-data/listen.owner = www-data/g" /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf 284 | sed -i "s/^;listen.group = www-data/listen.group = www-data/g" /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf 285 | sed -i "s/^;listen.mode = 0660/listen.mode = 0660/" /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf 286 | mkdir -p /var/nginx_cache 287 | 288 | # note: mcrypt for 7.3 is a pecl extension, so i got rid of it for now. 289 | 290 | # adminer and sendy modules 291 | phpenmod mbstring 292 | phpenmod curl 293 | phpenmod xml 294 | phpenmod xmlreader 295 | phpenmod simplexml 296 | phpenmod gd 297 | service php${PHP_VERSION}-fpm restart 298 | 299 | # auto security updates 300 | touch /etc/cron.daily/apt-security-updates 301 | touch /etc/logrotate.d/apt-security-updates 302 | echo $updaterules > /etc/cron.daily/apt-security-updates 303 | echo $rotaterules > /etc/logrotate.d/apt-security-updates 304 | sudo chmod +x /etc/cron.daily/apt-security-updates 305 | 306 | service nginx restart 307 | 308 | # Alter Nginx Conf things 309 | 310 | if [ ! -d "/etc/nginx" ]; then 311 | printf "Does not seem that nginx is installed properly.\n" 312 | exit 1 313 | fi 314 | 315 | cd /etc/nginx 316 | sed -i "s=include /etc/nginx/sites=include /etc/nginx/sites-enabled=g;" nginx.conf 317 | mkdir -p /etc/nginx/sites-available/ 318 | mkdir -p /etc/nginx/sites-enabled/ 319 | mv /etc/nginx/sites/* /etc/nginx/sites-available/ 320 | 321 | printf "Script: Done. Read log for status of services" 322 | printf "MariaDB - Action required: Run this to enable mariadb to start on reboot.\n" 323 | printf "sudo systemctl enable mariadb\n" 324 | 325 | printf "MariaDB - Action required: Run this to secure the installation!\n" 326 | printf "sudo mariadb-secure-installation\n" 327 | -------------------------------------------------------------------------------- /scripts/setup-logrotate: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Check logrotate exists 4 | if ! 
type "logrotate" > /dev/null; then 5 | printf "Does not seem that logrotate is installed properly... do that first.\n" 6 | exit 1 7 | fi 8 | 9 | # Prompt for things 10 | ## Set up a logrotate? 11 | read -p "Do you want to setup a logrotate? " -n 1 -r 12 | echo # (optional) move to a new line 13 | if [[ ! $REPLY =~ ^[Yy]$ ]] 14 | then 15 | printf "Okay, stopping script. Set up log rotation later using ./setup-logrotate in scripts folder.\n" 16 | exit 1; 17 | fi 18 | 19 | ## D - Domain 20 | read -p "Your Domain - to save the logrotate conf as, ie: yoursite.com $cr" DOMAIN 21 | 22 | if [ "$DOMAIN" == '' ]; then 23 | printf "You must specify a domain, ie: yoursite.com\n" 24 | exit 1 25 | fi 26 | 27 | if [ -d "/etc/logrotate.d/$DOMAIN" ]; then 28 | printf "A logrotate configuration file for this domain already exists, delete/back it up first.\n" 29 | exit 1 30 | fi 31 | 32 | echo "$DOMAIN" >> tempdomain.txt 33 | sed -i 's/\./_/g' tempdomain.txt 34 | DOMAIN_COM=$(cat tempdomain.txt) 35 | rm tempdomain.txt 36 | 37 | # PROMPT Ask for path to logs directory 38 | read -p "Path to logs for domain $DOMAIN, eg. /var/www/yoursite.com/DockerLocal/logs/*.log: " CUR_LOGFOLDERPATH 39 | 40 | # Figure out php version 41 | read -p "Your PHP service - used to post process logrotate. ie: php7.2-fpm $cr" PHP_FPM 42 | if [ "$PHP_FPM" == '' ]; then 43 | printf "You must specify a service for php, ie: php7.3-fpm or php7.2-fpm\n" 44 | exit 1 45 | fi 46 | 47 | # REPLACE VARIABLES 48 | cp logrotate/TEMPLATE logrotate/TEMPLATE_CUSTOM 49 | sed -i'.bak' "s/PHP_FPM/${PHP_FPM}/g;" logrotate/TEMPLATE_CUSTOM 50 | sed -i'.bak' "s~LOG_PATH~${CUR_LOGFOLDERPATH}~g;" logrotate/TEMPLATE_CUSTOM 51 | rm logrotate/TEMPLATE_CUSTOM.bak 52 | CONF_CONTENTS=$(<logrotate/TEMPLATE_CUSTOM) 53 | 54 | 55 | # INSTALL - write the generated conf for this domain 56 | echo "$CONF_CONTENTS" > "/etc/logrotate.d/$DOMAIN" 57 | 58 | # TEST 59 | # sudo logrotate /etc/logrotate.conf --debug -------------------------------------------------------------------------------- /scripts/setup-site: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | cr=`echo $'\n.'` 3 | cr=${cr%.} 4 | 5 | PHP_VERSION=$(<../config/versions/php-version) 6 | if [ -f ../config/versions/override-php-version ] && [ ! -z $(<../config/versions/override-php-version) ]; then 7 | PHP_VERSION=$(<../config/versions/override-php-version) 8 | fi 9 | 10 | # Check for Nginx Conf things 11 | if [ ! -d "/etc/nginx" ]; then 12 | printf "Does not seem that nginx is installed properly, cannot install a site.\n" 13 | exit 1 14 | fi 15 | 16 | ORIGINAL_ARG_COUNT=$# 17 | 18 | # For all prompts below, there are switches for them all so that they don't need prompts 19 | # 20 | # ie: ./setup-site \ 21 | # --domain=mysite.com \ 22 | # --github=git@github... 
\ 23 | # --deploy-subfolder=releases \ 24 | # --web-root-path=app/dist \ 25 | # --owner-user=ubuntu \ 26 | # --deploy-key-public-file=path-to-key \ 27 | # --deploy-key-private-file=path-to-key \ 28 | # --php-pools=true \ 29 | # --nginx-with-php=true \ 30 | # --nginx-site-conf-path=/var/www/LEMP-setup-guide/config/site.nginx.conf (or site.nginx.vueapp.conf) \ 31 | # --php-with-mysql=true \ 32 | # --php-site-conf-path=/var/www/LEMP-setup-guide/config/site.php-fpm.conf \ 33 | # --mysql-create-db=true \ 34 | # --mysql-root-user=root \ 35 | # --mysql-root-pass=1234 \ 36 | # --database-name=site_com \ 37 | # --database-user=site.com \ 38 | # --database-pass=cRaZyPaSs \ 39 | # --database-host=localhost \ 40 | # --database-port=3306 41 | 42 | # Default values (can be overwritten by passed parameters) 43 | DOMAIN="" 44 | GITHUB="" 45 | DEPLOY_SUBFOLDER="" 46 | WEB_ROOT_PATH=false 47 | OWNER_USER=$(whoami) 48 | ORIGINAL_USER=$(whoami) 49 | DEPLOY_KEY_PUBLIC=false 50 | DEPLOY_KEY_PRIVATE=false 51 | PHP_POOLS="" 52 | NGINX_WITH_PHP="" 53 | NGINX_SITE_CONF_PATH="" 54 | PHP_WITH_MYSQL="" 55 | PHP_SITE_CONF_PATH="" 56 | MYSQL_CREATE_DB="" 57 | MYSQL_ROOT_USER="" 58 | MYSQL_ROOT_PASS="" 59 | DATABASE_NAME="" 60 | DATABASE_USER="" 61 | DATABASE_PASS="" 62 | DATABASE_HOST="localhost" 63 | DATABASE_PORT=3306 64 | 65 | help_message() { 66 | echo "Usage: ./setup-site [OPTIONS]" 67 | echo 68 | echo "Options:" 69 | printf " --domain=DOMAIN_NAME Specify domain eg. --domain=mysite.com\n" 70 | printf " --github=GITHUB_REPO Specify github repo using ssh eg. --github=git@github.com:youruser/yourrepo.git\n" 71 | printf " --deploy-subfolder=false Specify if you want to deploy a subfolder of the repo; default is false; otherwise put path eg. releases\n" 72 | printf " --web-root-path=PATH Specify the path to the web root path within your "domain" folder or repo; the default is blank - so index file should be directly in the folder.\n" 73 | printf " --owner-user=USER Specify the owner user (used for deploy & ssh key ownership); default is whoami result\n" 74 | printf " --deploy-key-public-file=PATH Specify the path to the public deploy key file\n" 75 | printf " --deploy-key-private-file=PATH Specify the path to the private deploy key file\n" 76 | printf " --php-pools=true|false Specify if you want to setup php pools; default is false\n" 77 | printf " --nginx-with-php=true|false Specify if you want to setup nginx with php upstreams; default is false\n" 78 | printf " --nginx-site-conf-path=PATH Specify the path to the nginx site conf file; default is /var/www/LEMP-setup-guide/config/site.nginx.conf (or site.nginx.vueapp.conf)\n" 79 | printf " --php-with-mysql=true|false Specify if you want to setup php with mysql env vars; default is false\n" 80 | printf " --php-site-conf-path=PATH Specify the path to the php site conf file; default is /var/www/LEMP-setup-guide/config/site.php-fpm.conf\n" 81 | printf " --mysql-create-db=true|false Specify if you want to setup mysql database; default is false\n" 82 | printf " --mysql-root-user=USER Specify the mysql root user; default is root\n" 83 | printf " --mysql-root-pass=PASS Specify the mysql root pass; default is 1234\n" 84 | printf " --database-name=NAME Specify the database name\n" 85 | printf " --database-user=USER Specify the database user\n" 86 | printf " --database-pass=PASS Specify the database password\n" 87 | printf " --database-host=HOST Specify the database host; default is localhost\n" 88 | printf " --database-port=PORT Specify the database port; default is 3306\n" 89 | 
printf " --help Display this help message and exit\n" 90 | } 91 | 92 | while [[ $# -gt 0 ]]; do 93 | key="$1" 94 | case $key in 95 | --domain=*) 96 | DOMAIN="${key#*=}" 97 | shift 98 | ;; 99 | --github=*) 100 | GITHUB="${key#*=}" 101 | shift 102 | ;; 103 | --deploy-subfolder=*) 104 | DEPLOY_SUBFOLDER="${key#*=}" 105 | shift 106 | ;; 107 | --web-root-path=*) 108 | WEB_ROOT_PATH="${key#*=}" 109 | shift 110 | ;; 111 | --owner-user=*) 112 | # if the user exists 113 | if id "${key#*=}" >/dev/null 2>&1; then 114 | OWNER_USER="${key#*=}" 115 | fi 116 | shift 117 | ;; 118 | --deploy-key-public-file=*) 119 | # if the file exists, read it in 120 | if [ -f "${key#*=}" ]; then 121 | DEPLOY_KEY_PUBLIC=$(cat "${key#*=}") 122 | fi 123 | # if this key is set to null set deploy key to null 124 | if [ "${key#*=}" == "null" ]; then 125 | DEPLOY_KEY_PUBLIC=null 126 | fi 127 | shift 128 | ;; 129 | --deploy-key-private-file=*) 130 | # if the file exists, read it in 131 | if [ -f "${key#*=}" ]; then 132 | DEPLOY_KEY_PRIVATE=$(cat "${key#*=}") 133 | fi 134 | # if this key is set to null set deploy key to null 135 | if [ "${key#*=}" == "null" ]; then 136 | DEPLOY_KEY_PRIVATE=null 137 | fi 138 | shift 139 | ;; 140 | --php-pools=*) 141 | PHP_POOLS="${key#*=}" 142 | # make sure it is either true or false if passed 143 | if [ "$PHP_POOLS" != "true" ] && [ "$PHP_POOLS" != "false" ]; then 144 | PHP_POOLS=false 145 | fi 146 | shift 147 | ;; 148 | --nginx-with-php=*) 149 | NGINX_WITH_PHP="${key#*=}" 150 | # make sure it is either true or false if passed 151 | if [ "$NGINX_WITH_PHP" != "true" ] && [ "$NGINX_WITH_PHP" != "false" ]; then 152 | NGINX_WITH_PHP=false 153 | fi 154 | shift 155 | ;; 156 | --nginx-site-conf-path=*) 157 | NGINX_SITE_CONF_PATH="${key#*=}" 158 | shift 159 | ;; 160 | --php-with-mysql=*) 161 | PHP_WITH_MYSQL="${key#*=}" 162 | # make sure it is either true or false if passed 163 | if [ "$PHP_WITH_MYSQL" != "true" ] && [ "$PHP_WITH_MYSQL" != "false" ]; then 164 | PHP_WITH_MYSQL=false 165 | fi 166 | shift 167 | ;; 168 | --php-site-conf-path=*) 169 | PHP_SITE_CONF_PATH="${key#*=}" 170 | shift 171 | ;; 172 | --mysql-create-db=*) 173 | MYSQL_CREATE_DB="${key#*=}" 174 | # make sure it is either true or false if passed 175 | if [ "$MYSQL_CREATE_DB" != "true" ] && [ "$MYSQL_CREATE_DB" != "false" ]; then 176 | MYSQL_CREATE_DB=false 177 | fi 178 | shift 179 | ;; 180 | --mysql-root-user=*) 181 | MYSQL_ROOT_USER="${key#*=}" 182 | shift 183 | ;; 184 | --mysql-root-pass=*) 185 | MYSQL_ROOT_PASS="${key#*=}" 186 | shift 187 | ;; 188 | --database-name=*) 189 | DATABASE_NAME="${key#*=}" 190 | shift 191 | ;; 192 | --database-user=*) 193 | DATABASE_USER="${key#*=}" 194 | shift 195 | ;; 196 | --database-pass=*) 197 | DATABASE_PASS="${key#*=}" 198 | shift 199 | ;; 200 | --database-host=*) 201 | DATABASE_HOST="${key#*=}" 202 | shift 203 | ;; 204 | --database-port=*) 205 | DATABASE_PORT="${key#*=}" 206 | shift 207 | ;; 208 | --help) 209 | help_message 210 | exit 0 211 | ;; 212 | *) 213 | # Unknown option 214 | echo "Unknown option: $key" 215 | help_message 216 | exit 1 217 | ;; 218 | esac 219 | done 220 | 221 | fix_root_path() { 222 | local domain="$1" 223 | local nginx_conf_path="$2" 224 | 225 | # Ensure domain and nginx_conf_path are provided 226 | if [[ -z "$domain" || -z "$nginx_conf_path" ]]; then 227 | echo "Error: Either domain or nginx_conf_path is not provided." 
228 | return 1 229 | fi 230 | 231 | # Read the 'root' line from the provided nginx configuration file 232 | local root_line=$(sed -n '/^[ \t]*root [^;]*;/p' "$nginx_conf_path") 233 | 234 | # printf "========== ROOT LINE\n" 235 | # printf "$root_line\n" 236 | 237 | # Remove double slashes and trailing slash before the semicolon 238 | local modified_line=$(echo "$root_line" | sed 's#//\+#/#g' | sed 's#/\(;\)$#\1#') 239 | # printf "========== MODIFIED ROOT LINE\n" 240 | # printf "$modified_line\n" 241 | 242 | # Use sed to replace root_line with modified_line in the provided nginx configuration file 243 | sed -i "s#$(echo "$root_line" | sed 's#[\&/\.]#\\&#g')#$modified_line#g" "$nginx_conf_path" 244 | 245 | # printf "========== AFTER SED\n" 246 | # cat "$nginx_conf_path" 247 | } 248 | 249 | ## SSH_PATH 250 | # if OWNER_USER is the same as whoami, then we can use the ssh keys in ${SSH_PATH} 251 | if [ "$OWNER_USER" == "$(whoami)" ]; then 252 | SSH_PATH=$(echo ~)/.ssh 253 | else 254 | # get home directory of $OWNER_USER 255 | OWNER_HOME=$(getent passwd "$OWNER_USER" | cut -d: -f6) 256 | SSH_PATH="$OWNER_HOME"/.ssh 257 | fi 258 | 259 | # Prompt for things 260 | 261 | ## Set up a site? Ask only if no options are passed 262 | if [[ $ORIGINAL_ARG_COUNT -eq 0 ]]; then 263 | read -p "Do you want to setup a site? " -n 1 -r 264 | echo # (optional) move to a new line 265 | if [[ ! $REPLY =~ ^[Yy]$ ]]; then 266 | printf "Okay, stopping script. Things are installed... carry on. Set up a site later using ./setup-site in scripts folder.\n" 267 | exit 1 268 | fi 269 | fi 270 | 271 | ## D - Domain 272 | if [ -z "$DOMAIN" ]; then 273 | read -p "Your Domain, ie: yoursite.com $cr" DOMAIN 274 | fi 275 | 276 | if [ "$DOMAIN" == '' ]; then 277 | printf "You must specify a domain, ie: yoursite.com\n" 278 | exit 1 279 | fi 280 | 281 | if [ -d "/var/www/$DOMAIN" ]; then 282 | printf "A folder for this domain already exists, delete/back it up first.\n" 283 | exit 1 284 | fi 285 | 286 | echo "$DOMAIN" >> tempdomain.txt 287 | sed -i 's/\./_/g' tempdomain.txt 288 | DOMAIN_COM=$(cat tempdomain.txt) 289 | rm tempdomain.txt 290 | 291 | # PROMPT - Github Repo URL 292 | if [ -z "$GITHUB" ]; then 293 | read -p "Your Github/Bitbucket Repo (use ssh url git@github.com:youruser/yourrepo.git):" GITHUB 294 | fi 295 | 296 | # PROMPT - Deploy Subfolder - prompt if = "" 297 | if [ -z "$DEPLOY_SUBFOLDER" ]; then 298 | read -p "Your Deploy Subfolder (if you want to deploy a subfolder of the repo); press enter to skip:" DEPLOY_SUBFOLDER 299 | if [ "$DEPLOY_SUBFOLDER" == '' ]; then 300 | DEPLOY_SUBFOLDER=false 301 | fi 302 | fi 303 | 304 | # PROMPT - Web Root Path - prompt if = false 305 | if [ "$WEB_ROOT_PATH" == false ]; then 306 | read -p "Your Web Root Path (relative to repo or domain folder - eg. 
html or app/dist; press enter to leave blank:" WEB_ROOT_PATH 307 | if [ "$WEB_ROOT_PATH" == '' ]; then 308 | WEB_ROOT_PATH="" 309 | fi 310 | fi 311 | 312 | # if WEB_ROOT_PATH=null, then set to "" 313 | if [ "$WEB_ROOT_PATH" == null ]; then 314 | WEB_ROOT_PATH="" 315 | fi 316 | 317 | # Extract repo name from the github URL 318 | get_repo_name() { 319 | local url="$1" 320 | echo "$url" | awk -F/ '{print $2}' | sed 's/\.git$//' 321 | } 322 | 323 | # Add to ssh config file to ensure this host uses the correct key for the repo 324 | setup_repo_ssh_with_deployment_key() { 325 | local repo_name="$1" 326 | printf "============ Add to ssh config file to ensure this host uses the correct key for the repo\n" 327 | mkdir -p "$SSH_PATH" 328 | 329 | # if the config file doesnt exist, touch it as owner user 330 | if [ ! -f "${SSH_PATH}/config" ]; then 331 | su - $OWNER_USER -c "bash -c 'touch ${SSH_PATH}/config'" 332 | fi 333 | 334 | # append this config to SSH_PATH/config as owneruser 335 | # Host github.com-${repo_name} 336 | # Hostname github.com 337 | # IdentityFile=${SSH_PATH}/${repo_name}_deploy_key 338 | # IdentitiesOnly=yes 339 | su - $OWNER_USER -c "bash -c 'echo -e \"\nHost github.com-${repo_name}\n\tHostname github.com\n\tIdentityFile=${SSH_PATH}/${repo_name}_deploy_key\n\tIdentitiesOnly=yes\" >> ${SSH_PATH}/config'" 340 | 341 | # Alter GITHUB url to use the new host 342 | GITHUB=$(echo "$GITHUB" | sed "s/github.com/github.com-${repo_name}/g") 343 | } 344 | 345 | # Save the deployment keys and fix their permissions 346 | save_and_fix_permissions() { 347 | local repo_name="$1" 348 | local public_key="$2" 349 | local private_key="$3" 350 | 351 | printf "============ Save the deployment keys and fix their permissions\n" 352 | 353 | # do as OWNER USER 354 | # echo "$public_key" > ${SSH_PATH}/${repo_name}_deploy_key.pub 355 | # echo "$private_key" > ${SSH_PATH}/${repo_name}_deploy_key 356 | 357 | su - $OWNER_USER -c "bash -c 'mkdir -p ${SSH_PATH}'" 358 | su - $OWNER_USER -c "bash -c 'echo \"$public_key\" > ${SSH_PATH}/${repo_name}_deploy_key.pub'" 359 | su - $OWNER_USER -c "bash -c 'echo \"$private_key\" > ${SSH_PATH}/${repo_name}_deploy_key'" 360 | 361 | printf "============ Fix permissions on devop keys\n" 362 | # do as OWNER USER 363 | # chmod u=rw,go= ${SSH_PATH}/${repo_name}_deploy_key ${SSH_PATH}/${repo_name}_deploy_key.pub 364 | su - $OWNER_USER -c "bash -c 'chmod u=rw,go= ${SSH_PATH}/${repo_name}_deploy_key ${SSH_PATH}/${repo_name}_deploy_key.pub'" 365 | } 366 | 367 | REPO_NAME=$(get_repo_name "$GITHUB") 368 | 369 | # either deployment key is exactly false 370 | if [ "$DEPLOY_KEY_PUBLIC" == false ] || [ "$DEPLOY_KEY_PRIVATE" == false ]; then 371 | read -p "Do you have a deployment key for this repo? " -n 1 -r 372 | echo 373 | if [[ $REPLY =~ ^[Yy]$ ]]; then 374 | printf "Paste your deploy public key, press enter, and press ctrl-d when done:\n" 375 | DEPLOY_KEY_PUBLIC=$(cat) 376 | printf "Paste your deploy private key, press enter, and press ctrl-d when done:\n" 377 | DEPLOY_KEY_PRIVATE=$(cat) 378 | 379 | save_and_fix_permissions "$REPO_NAME" "$DEPLOY_KEY_PUBLIC" "$DEPLOY_KEY_PRIVATE" 380 | setup_repo_ssh_with_deployment_key "$REPO_NAME" 381 | else 382 | read -p "Do you want to make a deployment key and add it to your ssh config? 
" -n 1 -r 383 | echo 384 | if [[ $REPLY =~ ^[Yy]$ ]]; then 385 | printf "============ Generating new deploy keys - using -t ed25519\n" 386 | ssh-keygen -t ed25519 -f ${SSH_PATH}/${REPO_NAME}_deploy_key -q -N "" 387 | printf "============ 📋 Reveal Devops Public Key - you need to paste into github deploy key in repo settings. Then run this setup site again.\n" 388 | cat ${SSH_PATH}/${REPO_NAME}_deploy_key.pub 389 | printf "============ 📋 Reveal Devops Private Key - you will need to answer with this when you run the script again.\n" 390 | cat ${SSH_PATH}/${REPO_NAME}_deploy_key 391 | 392 | setup_repo_ssh_with_deployment_key "$REPO_NAME" 393 | else 394 | printf "============ ✋ Okay, stopping moving on. We will try to use ssh agent forwarding or other means then...\n" 395 | fi 396 | fi 397 | # github is set, deploy keys length is at least 10 398 | elif [ ! -z "$GITHUB" ] && [ ${#DEPLOY_KEY_PUBLIC} -gt 10 ] && [ ${#DEPLOY_KEY_PRIVATE} -gt 10 ]; then 399 | save_and_fix_permissions "$REPO_NAME" "$DEPLOY_KEY_PUBLIC" "$DEPLOY_KEY_PRIVATE" 400 | setup_repo_ssh_with_deployment_key "$REPO_NAME" 401 | else 402 | printf "============ Skipping deployment key handling as it seems it's not being used...\n" 403 | fi 404 | 405 | # SSH - use SSH agent forwarding - only if we dont have ${SSH_PATH}/${REPO_NAME}_deploy_key 406 | if [ ! -f ${SSH_PATH}/${REPO_NAME}_deploy_key ]; then 407 | 408 | # if we are using SimpleDocker to test this script, we will have a shared volume of tempssh to forward the keys 409 | if [ -d ${SSH_PATH} ]; then 410 | 411 | # SSH - check for ssh-add -L to see if we have ssh keys loaded (from port forwarding or ssh-agent in docker) 412 | ssh-add -L | grep 'Could not open a connection to your authentication agent' &> /dev/null 413 | if [ $? == 0 ]; then 414 | printf "=========== Trying to copy over a possible /root/tempssh if running in SimpleDocker\n" 415 | WHO=$('whoami') 416 | cp /root/tempssh/* /root/.ssh/ -R 417 | chown $WHO:$WHO ${SSH_PATH}/* -R 418 | eval `ssh-agent -s` 419 | ssh-add 420 | fi 421 | 422 | # if the github var does not start with https and still dont have a path 423 | if [[ ! $GITHUB =~ ^https.* ]] && [ -d ${SSH_PATH} ]; then 424 | printf "$OWNER_USER does not have an ${SSH_PATH} folder... worried that it is not setup properly\n" 425 | printf "If using a git repo, probably use the HTTPS url instead.\n" 426 | fi 427 | fi 428 | fi 429 | 430 | cd /var/www 431 | 432 | # GIT REPO - setup a git repo if we have GITHUB AND (ssh folder exists OR we have a deploy key at .ssh/${REPO_NAME}_deploy_key) 433 | if [ ! -z "$GITHUB" ] && ([ -d ${SSH_PATH} ] || [ -f ${SSH_PATH}/${REPO_NAME}_deploy_key ]); then 434 | if [ "$DEPLOY_SUBFOLDER" == false ]; then 435 | printf "=========== Cloning $GITHUB into /var/www/$DOMAIN\n" 436 | su - $OWNER_USER -c "bash -c 'cd /var/www/ && git clone $GITHUB $DOMAIN'" 437 | # dockerlocal logs 438 | su - $OWNER_USER -c "bash -c 'mkdir -p /var/www/$DOMAIN/DockerLocal/logs'" 439 | su - $OWNER_USER -c "bash -c 'mkdir -p /var/www/$DOMAIN/logs'" 440 | else 441 | printf "=========== Cloning $GITHUB into /var/www/$DOMAIN/$DEPLOY_SUBFOLDER/init\n" 442 | su - $OWNER_USER -c "bash -c 'mkdir -p /var/www/$DOMAIN/$DEPLOY_SUBFOLDER && cd /var/www/$DOMAIN/$DEPLOY_SUBFOLDER && git clone $GITHUB init'" 443 | # also need to make this "init" release a symbolic link to /var/www/$DOMAIN/current - and we will need to adjust nginx conf too! 
444 | printf "=========== Making /var/www/$DOMAIN/current a symbolic link to /var/www/$DOMAIN/$DEPLOY_SUBFOLDER/init\n" 445 | su - $OWNER_USER -c "bash -c 'cd /var/www/$DOMAIN && ln -s /var/www/$DOMAIN/$DEPLOY_SUBFOLDER/init current'" 446 | # Make dockerlocal logs so nginx and php dont complain 447 | su - $OWNER_USER -c "bash -c 'mkdir -p /var/www/$DOMAIN/current/DockerLocal/logs'" 448 | su - $OWNER_USER -c "bash -c 'mkdir -p /var/www/$DOMAIN/current/logs'" 449 | fi 450 | else 451 | su -c "cd /var/www/ && mkdir $DOMAIN" "$OWNER_USER" 452 | printf "=========== 👉 Without a github repo or ssh setup, you are responsible for putting your files into /var/www/$DOMAIN\n" 453 | # switch back to original user 454 | sudo su $ORIGINAL_USER 455 | fi 456 | 457 | # globalize a gitignore for anything in /var/www/ that has DockerLocal/logs 458 | if [ ! -f "/var/www/.gitignore" ]; then 459 | printf "=========== Creating a global gitignore for /var/www/ to ignore DockerLocal/logs\n" 460 | su - $OWNER_USER -c "bash -c 'echo \"DockerLocal/logs\" > /var/www/.gitignore'" 461 | fi 462 | 463 | # NGINX - Vueapp or PHP? 464 | if [ -z "$NGINX_WITH_PHP" ]; then 465 | read -p "Do you need to setup nginx with upstreams and php?" -n 1 -r 466 | echo # (optional) move to a new line 467 | if [[ $REPLY =~ ^[Yy]$ ]]; then 468 | NGINX_WITH_PHP=true 469 | else 470 | NGINX_WITH_PHP=false 471 | fi 472 | fi 473 | 474 | apply_nginx_config() { 475 | local site_config_path="$1" 476 | cp "$site_config_path" /etc/nginx/sites-available/"$DOMAIN.conf" 477 | cd /etc/nginx/sites-available/ 478 | sed -i "s/SITEDOTCOM/${DOMAIN}/g" "$DOMAIN.conf" 479 | sed -i "s/SITE_COM/${DOMAIN_COM}/g" "$DOMAIN.conf" 480 | sed -i "s|WEBROOTPATH|${WEB_ROOT_PATH}|g" "$DOMAIN.conf" 481 | ln -s /etc/nginx/sites-available/"${DOMAIN}.conf" /etc/nginx/sites-enabled/ 482 | 483 | # if we have DEPLOY_SUBFOLDER and therefore a "current" directory - then we need to sed some stuff in nginx 484 | if [ "$DEPLOY_SUBFOLDER" != false ]; then 485 | # For lines that do not contain '/var/www/${DOMAIN}/current', replace '/var/www/${DOMAIN}/' with '/var/www/${DOMAIN}/current/' 486 | sed -i "/var\/www\/${DOMAIN}\/current/!s/var\/www\/${DOMAIN}\//var\/www\/${DOMAIN}\/current\//g" "$DOMAIN.conf" 487 | fi 488 | 489 | # fix root path 490 | fix_root_path "$DOMAIN" "$DOMAIN.conf" 491 | } 492 | 493 | # (only) if NGINX_SITE_CONF_PATH is empty, does NGINX_WITH_PHP matter in determining the default used: 494 | if [ -z "$NGINX_SITE_CONF_PATH" ]; then 495 | # if NGINX_WITH_PHP is true, then the default is site.nginx.php.conf 496 | if [ "$NGINX_WITH_PHP" == true ]; then 497 | NGINX_SITE_CONF_PATH="/var/www/LEMP-setup-guide/config/site.nginx.conf" 498 | else 499 | # use vueapp version 500 | NGINX_SITE_CONF_PATH="/var/www/LEMP-setup-guide/config/site.nginx.vueapp.conf" 501 | fi 502 | fi 503 | 504 | apply_nginx_config "$NGINX_SITE_CONF_PATH" 505 | 506 | # TEST nginx 507 | sudo nginx -t 508 | # Try to reload nginx only if the test was successful 509 | if [ $? == 0 ]; then 510 | sudo service nginx reload 511 | else 512 | printf "============ ❌ Nginx test failed, not reloading nginx\n" 513 | fi 514 | 515 | # No PHP? 516 | if [ -z "$PHP_POOLS" ]; then 517 | read -p "Do you need to set up PHP FPM pools?" 
-n 1 -r 518 | echo # (optional) move to a new line 519 | if [[ $REPLY =~ ^[Yy]$ ]]; then 520 | PHP_POOLS=true 521 | else 522 | PHP_POOLS=false 523 | fi 524 | fi 525 | 526 | if [ "$PHP_POOLS" == false ]; 527 | then 528 | printf "Okay, stopping script then.\n" 529 | exit 1 530 | fi 531 | 532 | # Create php site config 533 | mkdir -p /etc/php/${PHP_VERSION}/fpm/pool.d/ 534 | # if PHP_SITE_CONF_PATH is set, use it, otherwise use the default 535 | if [ -z "$PHP_SITE_CONF_PATH" ]; then 536 | PHP_SITE_CONF_PATH="/var/www/LEMP-setup-guide/config/site.php-fpm.conf" 537 | fi 538 | cp "$PHP_SITE_CONF_PATH" "/etc/php/${PHP_VERSION}/fpm/pool.d/${DOMAIN}.conf" 539 | cd /etc/php/${PHP_VERSION}/fpm/pool.d/ 540 | sed -i "s/SITEDOTCOM/${DOMAIN}/g;" "$DOMAIN.conf" 541 | sed -i "s/SITE_COM/${DOMAIN_COM}/g;" "$DOMAIN.conf" 542 | 543 | # if we have DEPLOY_SUBFOLDER and therefore a "current" directory - then we need to sed some stuff in PHP 544 | if [ "$DEPLOY_SUBFOLDER" != false ]; then 545 | # For lines that do not contain '/var/www/${DOMAIN}/current', replace '/var/www/${DOMAIN}/' with '/var/www/${DOMAIN}/current/' 546 | sed -i "/var\/www\/${DOMAIN}\/current/!s/var\/www\/${DOMAIN}\//var\/www\/${DOMAIN}\/current\//g" "$DOMAIN.conf" 547 | fi 548 | 549 | # PROMPT for PHP ENV VARS 550 | if [ -z "$PHP_WITH_MYSQL" ]; then 551 | read -p "Do you want to setup PHP environment variables for a mysql database? " -n 1 -r 552 | echo # (optional) move to a new line 553 | if [[ $REPLY =~ ^[Yy]$ ]]; then 554 | PHP_WITH_MYSQL=true 555 | else 556 | PHP_WITH_MYSQL=false 557 | fi 558 | fi 559 | 560 | if [ "$PHP_WITH_MYSQL" == true ]; 561 | then 562 | 563 | # PROMPT for DATABASE_NAME 564 | if [ -z "$DATABASE_NAME" ]; then 565 | read -p "Your database name, ie: site_com: $cr" DATABASE_NAME 566 | fi 567 | 568 | # PROMPT for DATABASE_USER 569 | if [ -z "$DATABASE_USER" ]; then 570 | read -p "Your database user, ie: site.com $cr" DATABASE_USER 571 | fi 572 | 573 | # PROMPT for DATABASE_PASS 574 | if [ -z "$DATABASE_PASS" ]; then 575 | read -p "Your database password, ie: cRaZyPaSs $cr" DATABASE_PASS 576 | fi 577 | 578 | # PROMPT for DATABASE_HOST 579 | if [ -z "$DATABASE_HOST" ]; then 580 | read -p "Your database host, ie: localhost $cr" DATABASE_HOST 581 | fi 582 | 583 | # PROMPT for DATABASE_PORT 584 | if [ -z "$DATABASE_PORT" ]; then 585 | read -p "Your database port, ie: 3306 $cr" DATABASE_PORT 586 | fi 587 | 588 | 589 | sed -i "s/DATABASENAME/${DATABASE_NAME}/g;" "$DOMAIN.conf" 590 | sed -i "s/DATABASEUSER/${DATABASE_USER}/g;" "$DOMAIN.conf" 591 | sed -i "s/DATABASEHOST/${DATABASE_HOST}/g;" "$DOMAIN.conf" 592 | sed -i "s/DATABASEPASS/${DATABASE_PASS}/g;" "$DOMAIN.conf" 593 | sed -i "s/DATABASEPORT/${DATABASE_PORT}/g;" "$DOMAIN.conf" 594 | 595 | # test php 596 | sudo php-fpm${PHP_VERSION} -t 597 | 598 | # restart php 599 | sudo service php${PHP_VERSION}-fpm reload 600 | 601 | if [ -z "$MYSQL_CREATE_DB" ]; then 602 | read -p "Do you want to setup a local mysql database using the info for php env vars above? 
" -n 1 -r 603 | echo # (optional) move to a new line 604 | if [[ $REPLY =~ ^[Yy]$ ]]; then 605 | MYSQL_CREATE_DB=true 606 | else 607 | MYSQL_CREATE_DB=false 608 | fi 609 | fi 610 | 611 | if [ "$MYSQL_CREATE_DB" == true ]; 612 | then 613 | 614 | # PROMPT for MYSQL_ROOT_USER 615 | if [ -z "$MYSQL_ROOT_USER" ]; then 616 | read -p "Your mysql root user, ie: root $cr" MYSQL_ROOT_USER 617 | fi 618 | 619 | # PROMPT for MYSQL_ROOT_PASS 620 | if [ -z "$MYSQL_ROOT_PASS" ]; then 621 | read -p "Your mysql root pass, ie: 1234 $cr" MYSQL_ROOT_PASS 622 | fi 623 | 624 | printf "============ Creating mysql database: $DATABASE_NAME\n" 625 | mysql -u $MYSQL_ROOT_USER -p$MYSQL_ROOT_PASS -e "CREATE DATABASE ${DATABASE_NAME};" 626 | printf "============ Creating mysql user: $DATABASE_USER\n" 627 | mysql -u $MYSQL_ROOT_USER -p$MYSQL_ROOT_PASS -e "GRANT ALL PRIVILEGES ON ${DATABASE_NAME}.* TO '${DATABASE_USER}'@'localhost' IDENTIFIED BY '${DATABASE_PASS}';" 628 | 629 | fi 630 | fi 631 | 632 | 633 | -------------------------------------------------------------------------------- /scripts/setup-ssh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # PROMPT - SSH PUBLIC KEY (Authorized Keys) 4 | cr=`echo $'\n.'` 5 | cr=${cr%.} 6 | read -p "Your SSH public key...paste it $cr" SSHPUBKEY 7 | 8 | # Check if /root/.ssh exists 9 | if [ ! -d "/root/.ssh/" ]; then 10 | printf "SSH folder does not seem to exist for this user. Going to create the folder now.\n" 11 | cd ~/ && mkdir -p .ssh 12 | fi 13 | 14 | if [ ! "$SSHPUBKEY" == '' ]; then 15 | echo "$SSHPUBKEY" > ~/.ssh/authorized_keys 16 | fi 17 | --------------------------------------------------------------------------------