├── LICENSE
└── README.md


/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2014-2018 Denis Denisov and other contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------


/README.md:
--------------------------------------------------------------------------------
NGINX Tuning For Best Performance
=================================

You can apply this kind of tuning to whichever web server you prefer; I chose nginx because it is the server I work with most.

Generally, a properly configured nginx can handle up to 400K-500K requests per second (clustered). The most I have seen myself is 50K-80K requests per second (non-clustered) at about 30% CPU load; that was on `2 x Intel Xeon` with Hyper-Threading enabled, but it also works without problems on slower machines.

__Keep in mind that this config is used in a testing environment, not in production, so you will need to adapt most of these settings to whatever works best for your own servers.__

* [Stable version NGINX (deb/rpm)](https://nginx.org/en/linux_packages.html#stable)
* [Mainline version NGINX (deb/rpm)](https://nginx.org/en/linux_packages.html#mainline)

First, you will need to install nginx:

```bash
yum install nginx
apt install nginx
```

Back up your original configs, then start reconfiguring. Open `/etc/nginx/nginx.conf` with your favorite editor.

```nginx
# set worker processes based on your CPU cores; nginx does not benefit from setting more than that
worker_processes auto; # recent versions calculate this automatically

# number of file descriptors used by nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set this, the OS limit applies (often a soft limit of 1024, as the nofile section below shows)
worker_rlimit_nofile 100000;

# only log critical errors
error_log /var/log/nginx/error.log crit;
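
# illustrative math, not a directive: the theoretical ceiling on concurrent clients is roughly
# worker_processes * worker_connections (set below), e.g. 4 CPU cores * 4000 = 16000 clients;
# when proxying, each request also holds an upstream connection, roughly halving that figure
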
# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how many clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux -- for testing environment
    use epoll;

    # accept as many connections as possible, may flood worker_connections if set too low -- for testing environment
    multi_accept on;
}

http {
    # cache information about FDs, frequently accessed files
    # can boost performance, but you need to test those values
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # to boost I/O on HDD we can disable access logs
    access_log off;

    # copies data between one FD and another from within the kernel
    # faster than read() + write()
    sendfile on;

    # send headers in one piece, it is better than sending them one by one
    tcp_nopush on;

    # don't buffer data sent, good for small data bursts in real time
    # https://brooker.co.za/blog/2024/05/09/nagle.html
    # https://news.ycombinator.com/item?id=10608356
    #tcp_nodelay on;

    # reduce the data that needs to be sent over the network -- for testing environment
    gzip on;
    # gzip_static on;
    gzip_min_length 10240;
    gzip_comp_level 1;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;

    # allow the server to close the connection on non-responding clients, this will free up memory
    reset_timedout_connection on;

    # request timed out -- default 60
    client_body_timeout 10;

    # if the client stops responding, free up memory -- default 60
    send_timeout 2;

    # server will close the connection after this time -- default 75
    keepalive_timeout 30;

    # number of requests a client can make over a keep-alive connection -- for testing environment
    keepalive_requests 100000;
}
```

Now you can save the configuration and run one of the [commands](https://www.nginx.com/resources/wiki/start/topics/tutorials/commandline/#stopping-or-restarting-nginx) below

```
nginx -s reload
/etc/init.d/nginx start|restart
```

If you wish to test the configuration first, you can run

```
nginx -t
/etc/init.d/nginx configtest
```

Just For Security Reasons
-------------------------

```nginx
server_tokens off;
```
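
For context, a minimal sketch of where this directive lives; it is valid in the `http`, `server`, and `location` contexts. Note that it only hides the version number; the `Server` header itself is still sent.

```nginx
http {
    # don't advertise the nginx version in the Server header or on built-in error pages
    server_tokens off;
}
```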
NGINX Simple DDoS Defense
-------------------------

This is far from a complete DDoS defense, but it can slow down some small-scale attacks. These values are for a testing environment; use your own.

```nginx
# limit the number of connections per single IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

# limit the number of requests for a given session
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

# apply the zones defined above; here we want to limit the whole server
server {
    limit_conn conn_limit_per_ip 10;
    limit_req zone=req_limit_per_ip burst=10 nodelay;
}

# if the request body is larger than the buffer size, the entire (or partial)
# request body is written into a temporary file
client_body_buffer_size 128k;

# buffer size for reading client request headers -- for testing environment
client_header_buffer_size 3m;

# maximum number and size of buffers for reading large client request headers
large_client_header_buffers 4 256k;

# read timeout for the request body from the client -- for testing environment
client_body_timeout 3m;

# how long to wait for the client to send a request header -- for testing environment
client_header_timeout 3m;
```

Now you can test the configuration again

```bash
nginx -t # /etc/init.d/nginx configtest
```

And then [reload or restart your nginx](https://www.nginx.com/resources/wiki/start/topics/tutorials/commandline/#stopping-or-restarting-nginx)

```
nginx -s reload
/etc/init.d/nginx reload|restart
```

You can load-test this configuration with `tsung`; once you are satisfied with the result, hit `Ctrl+C`, because it can otherwise run for hours.

Increase The Maximum Number Of Open Files (`nofile` limit) – Linux
------------------------------------------------------------------

There are two ways to raise the nofile/max open files/file descriptors/file handles limit for NGINX in RHEL/CentOS 7+. With NGINX running, check the current limit on the master process

    $ cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files
    Max open files            1024                 4096                 files

#### And on the worker processes

    $ ps --ppid $(cat /var/run/nginx.pid) -o %p | sed '1d' | xargs -I{} cat /proc/{}/limits | grep open.files

    Max open files            1024                 4096                 files
    Max open files            1024                 4096                 files
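
For reference, the directive those numbers constrain, as a minimal illustrative snippet (65536 is just an example value, matching the `limits.d` settings used below):

```nginx
# per-worker cap on open file descriptors; keep worker_connections comfortably below it
worker_rlimit_nofile 65536;
```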
Trying to raise the limit with the `worker_rlimit_nofile` directive in `{,/usr/local}/etc/nginx/nginx.conf` fails when the SELinux policy doesn't allow `setrlimit`. This is shown in `/var/log/nginx/error.log`

    2015/07/24 12:46:40 [alert] 12066#0: setrlimit(RLIMIT_NOFILE, 2342) failed (13: Permission denied)

#### And in /var/log/audit/audit.log

    type=AVC msg=audit(1437731200.211:366): avc: denied { setrlimit } for pid=12066 comm="nginx" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process

#### Raising `nofile` without systemd

    # /etc/security/limits.conf
    # /etc/default/nginx (ULIMIT)
    $ nano /etc/security/limits.d/nginx.conf
    nginx soft nofile 65536
    nginx hard nofile 65536
    $ /etc/init.d/nginx restart   # the new limits apply once the service is restarted (they are not managed by sysctl)

#### Raising `nofile` with systemd

    $ mkdir -p /etc/systemd/system/nginx.service.d
    $ nano /etc/systemd/system/nginx.service.d/nginx.conf
    [Service]
    LimitNOFILE=30000
    $ systemctl daemon-reload
    $ systemctl restart nginx.service

#### Setting the SELinux boolean `httpd_setrlimit` to true(1)

This allows nginx to set the fd limits for its worker processes itself. Leave the `worker_rlimit_nofile` directive in `{,/usr/local}/etc/nginx/nginx.conf` and run the following as root

    setsebool -P httpd_setrlimit 1

DoS [HTTP/1.1 and above: Range Requests](https://tools.ietf.org/html/rfc7233#section-6.1)
----------------------------------------

By default, [`max_ranges`](https://nginx.org/r/max_ranges) is not limited, so a DoS attack can send requests with a large number of ranges and degrade I/O stability. Capping it is cheap; a minimal sketch is included further down, just before "Happy Hacking!".

Socket Sharding in NGINX 1.9.1+ (DragonFly BSD and Linux 3.9+)
-------------------------------------------------------------------

Socket sharding is enabled with the `reuseport` parameter of the `listen` directive.

| Socket type      | Latency (ms) | Latency stdev (ms) | CPU Load |
|------------------|--------------|--------------------|----------|
| Default          | 15.65        | 26.59              | 0.3      |
| accept_mutex off | 15.59        | 26.48              | 10       |
| reuseport        | 12.35        | 3.15               | 0.3      |

[Thread Pools](https://nginx.org/r/thread_pool) in NGINX Boost Performance 9x! (Linux)
--------------

[Multi-threaded](https://nginx.org/r/aio) sending of files is currently supported only on Linux. Without a [`sendfile_max_chunk`](https://nginx.org/r/sendfile_max_chunk) limit, one fast connection may seize the worker process entirely; an illustrative configuration is sketched further down, just before "Happy Hacking!".

Selecting an upstream based on SSL protocol version
---------------------------------------------------

This relies on `$ssl_preread_protocol` from `ngx_stream_ssl_preread_module`, so the whole block belongs in the `stream {}` context:

```nginx
stream {
    map $ssl_preread_protocol $upstream {
        ""        ssh.example.com:22;
        "TLSv1.2" new.example.com:443;
        default   tls.example.com:443;
    }

    # ssh and https on the same port
    # note: host names in $upstream must match an upstream{} group or be resolvable via a resolver directive
    server {
        listen      192.168.0.1:443;
        proxy_pass  $upstream;
        ssl_preread on;
    }
}
```
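
As a follow-up to the Range-Requests section above, a minimal sketch of capping ranges per request. The location path is only an illustration; `max_ranges` can also be set at `http` or `server` level.

```nginx
server {
    location /downloads/ {
        # allow at most one byte range per request; 0 disables range support entirely
        max_ranges 1;
    }
}
```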
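And a sketch for the thread-pools section above. The pool name `default_pool`, the path, and the numbers are illustrative assumptions rather than recommendations, and `aio threads` requires nginx built with `--with-threads`.

```nginx
# define a named thread pool in the main context (name and sizes are examples)
thread_pool default_pool threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            sendfile           on;
            # cap how much one sendfile() call may send, so a single fast client
            # cannot seize the worker process
            sendfile_max_chunk 512k;
            # offload blocking file I/O to the thread pool defined above
            aio                threads=default_pool;
        }
    }
}
```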
Happy Hacking!
==============

Reference links
---------------

* __https://github.com/trimstray/nginx-admins-handbook__
* __https://github.com/GrrrDog/weird_proxies__
* __https://github.com/h5bp/server-configs-nginx__
* __https://github.com/leandromoreira/linux-network-performance-parameters__
* https://github.com/nginx-boilerplate/nginx-boilerplate
* https://www.nginx.com/blog/thread-pools-boost-performance-9x/
* https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/
* https://www.nginx.com/blog/nginx-1-13-9-http2-server-push/
* https://www.nginx.com/blog/performing-a-b-testing-nginx-plus/
* https://www.nginx.com/blog/10-tips-for-10x-application-performance/
* https://www.nginx.com/blog/http-keepalives-and-web-performance/
* https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/
* https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/
* https://www.nginx.com/blog/introducing-cicd-with-nginx-and-nginx-plus/
* https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/
* https://www.nginx.com/blog/smart-efficient-byte-range-caching-nginx/
* https://www.nginx.com/blog/nginx-high-performance-caching/
* https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/
* https://nginx.org/r/pcre_jit
* https://nginx.org/r/ssl_engine (`openssl engine -t`)
* https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
* https://www.nginx.com/blog/tuning-nginx/
* https://github.com/intel/asynch_mode_nginx
* https://openresty.org/download/agentzh-nginx-tutorials-en.html
* https://www.maxcdn.com/blog/nginx-application-performance-optimization/
* https://www.nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/
* https://medium.freecodecamp.org/a8afdbfde64d
* https://medium.freecodecamp.org/secure-your-web-application-with-these-http-headers-fd66e0367628
* https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88
* https://gist.github.com/denji/9130d1c95e350c58bc50e4b3a9e29bf4
* https://8gwifi.org/docs/nginx-secure.jsp
* http://www.codestance.com/tutorials-archive/nginx-tuning-for-best-performance-255
* https://ospi.fi/blog/centos-7-raise-nofile-limit-for-nginx.html
* https://www.linode.com/docs/websites/nginx/configure-nginx-for-optimized-performance
* https://haydenjames.io/nginx-tuning-tips-tls-ssl-https-ttfb-latency/
* https://gist.github.com/kekru/c09dbab5e78bf76402966b13fa72b9d2


Static analyzers
----------------
* https://github.com/yandex/gixy

Syntax highlighting
-------------------
* https://github.com/chr4/sslsecure.vim
* https://github.com/chr4/nginx.vim
* https://github.com/nginx/nginx/tree/master/contrib/vim

NGINX config formatter
----------------------
* https://github.com/rwx------/nginxConfigFormatterGo
* https://github.com/1connect/nginx-config-formatter
* https://github.com/lovette/nginx-tools/tree/master/nginx-minify-conf

NGINX configuration tools
-------------------------
* https://github.com/nginxinc/crossplane
* https://github.com/valentinxxx/nginxconfig.io

BBR (Linux 4.9+)
----------------
* https://blog.cloudflare.com/http-2-prioritization-with-nginx/
* Since Linux v4.13, BBR no longer requires the FQ (`q_disc`) packet scheduler, because pacing is built into TCP itself.
* https://github.com/google/bbr/blob/master/Documentation/bbr-quick-start.md
* https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/commit/?id=218af599fa635b107cfe10acf3249c4dfe5e4123
* https://github.com/systemd/systemd/issues/9725#issuecomment-413369212
* If your distribution's kernel does not enable `tcp_bbr` by default:
```sh
modprobe tcp_bbr && echo 'tcp_bbr' >> /etc/modules-load.d/bbr.conf
echo 'net.ipv4.tcp_congestion_control=bbr' >> /etc/sysctl.d/99-bbr.conf
# fq is still recommended for production, but since Linux v4.13-rc1 BBR also works without the fq qdisc
echo 'net.core.default_qdisc=fq' >> /etc/sysctl.d/99-bbr.conf
sysctl --system
```
--------------------------------------------------------------------------------