├── CHANGES
├── LICENSE
├── README
├── config
├── ngx_supervisord.c
├── ngx_supervisord.h
└── patches
    ├── ngx_http_upstream_fair_module.patch
    ├── ngx_http_upstream_init_busy-0.8.0.patch
    ├── ngx_http_upstream_init_busy-0.8.17.patch
    └── ngx_http_upstream_round_robin.patch

/CHANGES:
--------------------------------------------------------------------------------
1 | 2010-04-29 VERSION 1.4
2 | * Send shutdown command to supervisord on backend failure
3 | and try to bring back the first backend after all backends fail.
4 | This is default and non-configurable behavior.
5 | Requested by Grzegorz Nosek.
6 | 
7 | 2010-01-04 VERSION 1.3
8 | * Add "supervisord_inherit_backend_status" directive.
9 | 
10 | * Add "supervisord_start" and "supervisord_stop" handlers.
11 | 
12 | * Add "none" as valid argument in "supervisord" directive.
13 | 
14 | * Add patch against built-in load balancer.
15 | 
16 | The combination of the above changes allows one to dynamically
17 | take backend servers out of rotation without the need to
18 | use the supervisord daemon.
19 | These changes were somewhat inspired by James Byers's comment
20 | on Hacker News saying that nginx is missing such a feature.
21 | 
22 | For a detailed description please check the README file.
23 | 
24 | 2009-11-19 VERSION 1.2
25 | * Don't run "monitors" on "cache manager" and "cache loader"
26 | processes (this could lead to a crash of either of them
27 | when an ngx_supervisord-enabled load balancer tried to access
28 | data available only on "worker" processes).
29 | 
30 | The following applies only to versions older than nginx-0.8.28:
31 | NOTE: This modification uses an undocumented nginx "feature"
32 | to distinguish the mentioned processes and, starting from this
33 | release, "worker_connections" cannot be set to 512 (it can be
34 | set to either a lower or a higher number, just not equal to 512).
35 | 
36 | * patches: Workaround possible bug in nginx-upstream-fair.
37 | 
38 | 2009-11-16 VERSION 1.1
39 | * Add "supervisord_name" directive (which overrides
40 | upstream{}'s name when communicating with supervisord).
41 | 
42 | 2009-10-30
43 | * Allow the module to compile with nginx-0.7.63+ and 0.8.7+.
44 | 
45 | 2009-10-29 VERSION 1.0
46 | * Initial release
47 | 
--------------------------------------------------------------------------------

/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2009-2010, FRiCKLE Piotr Sikora
2 | All rights reserved.
3 | 
4 | This project was fully funded by megiteam.pl.
5 | 
6 | Redistribution and use in source and binary forms, with or without
7 | modification, are permitted provided that the following conditions
8 | are met:
9 | 1. Redistributions of source code must retain the above copyright
10 | notice, this list of conditions and the following disclaimer.
11 | 2. Redistributions in binary form must reproduce the above copyright
12 | notice, this list of conditions and the following disclaimer in the
13 | documentation and/or other materials provided with the distribution.
14 | 
15 | THIS SOFTWARE IS PROVIDED BY FRiCKLE PIOTR SIKORA AND CONTRIBUTORS
16 | "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
17 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
18 | A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL FRiCKLE PIOTR
19 | SIKORA OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
20 | SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
21 | LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
22 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
23 | THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
24 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
25 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
26 | 
27 | 
--------------------------------------------------------------------------------

/README:
--------------------------------------------------------------------------------
1 | ABOUT:
2 | ------
3 | ngx_supervisord is a module that provides an API to communicate with
4 | the supervisord daemon (http://supervisord.org).
5 | 
6 | As a "side effect", it also provides a way to dynamically take
7 | backend servers out of rotation (configuration example #2).
8 | 
9 | The interface is described in ngx_supervisord.h. For an example
10 | implementation, please check patches/ngx_http_upstream_fair_module.patch.
11 | 
12 | 
13 | SPONSORS:
14 | ---------
15 | ngx_supervisord-1.0 was fully funded by megiteam.pl.
16 | 
17 | 
18 | REQUIREMENTS:
19 | -------------
20 | * nginx >= 0.7.63 or >= 0.8.7,
21 | * ngx_http_upstream_init_busy patch by Ryan Lienhart Dahl (included in patches),
22 | * ngx_supervisord-aware module(s).
23 | 
24 | 
25 | INCLUDED PATCHES:
26 | -----------------
27 | ngx_http_upstream_fair_module.patch:
28 | Patch against the nginx-upstream-fair load balancer by Grzegorz Nosek
29 | (http://github.com/gnosek/nginx-upstream-fair), which adds capabilities to:
30 | * start the first backend server,
31 | * start/stop backend servers depending on the load,
32 | * set the minimum number of running backend servers.
33 | 
34 | ngx_http_upstream_init_busy-0.8.0.patch:
35 | Patch (by Ryan Lienhart Dahl) against nginx versions 0.7.65+ and 0.8.0-0.8.16
36 | which adds the ability to stop/resume request processing.
37 | 
38 | ngx_http_upstream_init_busy-0.8.17.patch:
39 | Same as above, for versions 0.8.17+ (last tested version is 0.8.42).
40 | 
41 | ngx_http_upstream_round_robin.patch:
42 | Patch against the built-in load balancer, which adds the ability to control
43 | the status of backend servers ("alive" / "down") on the fly, without
44 | modifications in the nginx configuration.
45 | 
46 | 
47 | INSTALLATION (with patched nginx-upstream-fair, versions omitted):
48 | ------------------------------------------------------------------
49 | // unpack releases
50 | $ tar -zxf nginx.tar.gz
51 | $ tar -zxf ngx_supervisord.tar.gz
52 | $ tar -zxf gnosek-nginx-upstream-fair.tar.gz
53 | 
54 | // patch gnosek-nginx-upstream-fair
55 | $ cp ngx_supervisord/patches/ngx_http_upstream_fair_module.patch
56 | gnosek-nginx-upstream-fair/
57 | $ cd gnosek-nginx-upstream-fair; patch -p0 < ngx_http_upstream_fair_module.patch
58 | 
59 | // patch nginx
60 | $ cp ngx_supervisord/patches/ngx_http_upstream_init_busy.patch nginx/
61 | $ cd nginx
62 | $ patch -p0 < ngx_http_upstream_init_busy.patch
63 | 
64 | // build
65 | $ ./configure --add-module=/path/to/ngx_supervisord
66 | --add-module=/path/to/gnosek-nginx-upstream-fair
67 | $ make && make install
68 | 
69 | 
70 | CONFIGURATION NOTES:
71 | --------------------
72 | The following applies only to versions older than nginx-0.8.28:
73 | Since ngx_supervisord-1.2 you cannot set "worker_connections" to 512;
74 | it can be set to either a lower or a higher number, just not equal to 512.
75 | For details, please check the CHANGES log and/or the source code.
76 | 
77 | 
78 | CONFIGURATION DIRECTIVES:
79 | -------------------------
80 | supervisord path [user:pass] (context: upstream)
81 | ------------------------------------------------
82 | Path to supervisord's listening socket; the path can be:
83 | * IP:port (127.0.0.1:8000),
84 | * UNIX socket path (unix:/path/to/supervisord.sock),
85 | * none.
86 | 
87 | When supervisord is explicitly set to "none", the module will execute
88 | every command instantly without even talking to the supervisord daemon,
89 | but it will still notify registered modules about backend server
90 | status changes, current load, etc.
91 | This basically enables all ngx_supervisord features without
92 | the need to run the supervisord daemon.
93 | 
94 | NOTE:
95 | When supervisord is set to "none" it won't try to auto-start the first
96 | backend server when all backend servers are considered down.
97 | 
98 | supervisord_name name (context: upstream)
99 | -----------------------------------------
100 | Use name instead of upstream{}'s name when communicating with supervisord.
101 | 
102 | supervisord_inherit_backend_status (context: upstream)
103 | ------------------------------------------------------
104 | Use the configured backend statuses (a server is considered alive unless
105 | it is followed by "down" in nginx.conf). When this directive isn't used,
106 | ngx_supervisord assumes that all servers are down and will try to start
107 | them when needed.
108 | 
109 | supervisord_start upstream_name (context: location)
110 | supervisord_stop upstream_name (context: location)
111 | ---------------------------------------------------
112 | Executes the "start" / "stop" command on the given upstream.
113 | 
114 | Using one of the above handlers creates valid URIs:
115 | /location/0, /location/1, ..., /location/n-1,
116 | /location/any
117 | for upstreams with n backends.
118 | 
119 | NOTE:
120 | A successful response means that the request was processed by ngx_supervisord,
121 | not that the command has already been executed by the supervisord daemon.
122 | 
123 | 
124 | EXAMPLE CONFIGURATION #1:
125 | -------------------------
126 | upstream backend {
127 |     server 127.0.0.1:8000;
128 |     server 127.0.0.1:8001;
129 |     supervisord 127.0.0.1:9001 admin:super;
130 |     fair;
131 | }
132 | 
133 | server {
134 |     location / {
135 |         proxy_pass http://backend;
136 |     }
137 | }
138 | 
139 | With such a configuration, ngx_supervisord will be starting/stopping
140 | [program:backend0] (which should be listening on 127.0.0.1:8000)
141 | and [program:backend1] (which should be listening on 127.0.0.1:8001)
142 | from supervisord's configuration.
143 | 
144 | 
145 | EXAMPLE CONFIGURATION #2:
146 | -------------------------
147 | upstream backend {
148 |     server 127.0.0.1:8000;
149 |     server 127.0.0.1:8001;
150 |     server 127.0.0.1:8002 down;
151 |     supervisord none;
152 |     supervisord_inherit_backend_status;
153 | }
154 | 
155 | server {
156 |     location / {
157 |         proxy_pass http://backend;
158 |     }
159 | 
160 |     location /_start/ {
161 |         allow 127.0.0.1;
162 |         deny all;
163 |         supervisord_start backend;
164 |     }
165 | 
166 |     location /_stop/ {
167 |         allow 127.0.0.1;
168 |         deny all;
169 |         supervisord_stop backend;
170 |     }
171 | }
172 | 
173 | With such a configuration, ngx_supervisord will assume that 2 out of 3
174 | backends are alive and it will never talk to the supervisord daemon.
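The handlers can then be driven with any HTTP client, as long as the request
comes from an allowed address (here 127.0.0.1, since the locations above deny
everything else). For example, assuming curl is available:

// take a backend out of rotation, or bring one back
$ curl http://localhost/_stop/0
$ curl http://localhost/_start/2

// or let ngx_supervisord pick a suitable backend by itself
$ curl http://localhost/_start/any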
175 | 
176 | Calling "http://localhost/_start/2" will change the status of the
177 | "127.0.0.1:8002" backend to "alive" and ngx_supervisord will notify
178 | all ngx_supervisord-aware load balancers about this change.
179 | 
180 | 
181 | CREDITS:
182 | --------
183 | * Magda Zarych (megiteam.pl),
184 | * Grzegorz Nosek (megiteam.pl),
185 | * Ryan Lienhart Dahl.
186 | 
--------------------------------------------------------------------------------

/config:
--------------------------------------------------------------------------------
1 | ngx_addon_name=ngx_supervisord_module
2 | 
3 | HTTP_INCS="$HTTP_INCS $ngx_addon_dir"
4 | HTTP_MODULES="$HTTP_MODULES ngx_supervisord_module"
5 | NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_supervisord.c"
6 | 
7 | have=NGX_SUPERVISORD_MODULE . auto/have
8 | 
--------------------------------------------------------------------------------

/ngx_supervisord.c:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright (c) 2009-2010, FRiCKLE Piotr Sikora
3 | * All rights reserved.
4 | *
5 | * This project was fully funded by megiteam.pl.
6 | *
7 | * Redistribution and use in source and binary forms, with or without
8 | * modification, are permitted provided that the following conditions
9 | * are met:
10 | * 1. Redistributions of source code must retain the above copyright
11 | * notice, this list of conditions and the following disclaimer.
12 | * 2. Redistributions in binary form must reproduce the above copyright
13 | * notice, this list of conditions and the following disclaimer in the
14 | * documentation and/or other materials provided with the distribution.
15 | *
16 | * THIS SOFTWARE IS PROVIDED BY FRiCKLE PIOTR SIKORA AND CONTRIBUTORS
17 | * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
18 | * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
19 | * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL FRiCKLE PIOTR
20 | * SIKORA OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
21 | * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
22 | * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
23 | * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
24 | * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 | * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
26 | * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27 | */ 28 | 29 | #include 30 | #include 31 | #include 32 | #include 33 | #include 34 | #include 35 | 36 | #if (NGX_HTTP_UPSTREAM_INIT_BUSY_PATCH_VERSION != 1) 37 | #error "ngx_supervisord requires NGX_HTTP_UPSTREAM_INIT_BUSY_PATCH v1" 38 | #endif 39 | 40 | #define NGX_SUPERVISORD_MONITOR_INTERVAL 10000 41 | #define NGX_SUPERVISORD_QUEUE_INTERVAL 500 42 | #define NGX_SUPERVISORD_LOAD_SKIP 6 43 | #define NGX_SUPERVISORD_ANTISPAM 30000 44 | 45 | void *ngx_supervisord_create_srv_conf(ngx_conf_t *); 46 | void *ngx_supervisord_create_loc_conf(ngx_conf_t *); 47 | ngx_int_t ngx_supervisord_preconf(ngx_conf_t *); 48 | char *ngx_supervisord_conf(ngx_conf_t *, ngx_command_t *, void *); 49 | char *ngx_supervisord_conf_name(ngx_conf_t *, ngx_command_t *, void *); 50 | char *ngx_supervisord_conf_inherit_backend_status(ngx_conf_t *, 51 | ngx_command_t *, void *); 52 | char *ngx_supervisord_conf_start_handler(ngx_conf_t *, ngx_command_t *, 53 | void *); 54 | char *ngx_supervisord_conf_stop_handler(ngx_conf_t *, ngx_command_t *, 55 | void *); 56 | ngx_int_t ngx_supervisord_module_init(ngx_cycle_t *); 57 | ngx_int_t ngx_supervisord_worker_init(ngx_cycle_t *); 58 | void ngx_supervisord_monitor(ngx_event_t *); 59 | void ngx_supervisord_queue_monitor(ngx_event_t *); 60 | void ngx_supervisord_finalize_request(ngx_http_request_t *, ngx_int_t); 61 | const char *ngx_supervisord_get_command(ngx_uint_t); 62 | 63 | typedef struct { 64 | ngx_url_t server; 65 | ngx_str_t userpass; /* user:pass format */ 66 | ngx_str_t name; 67 | ngx_int_t is_fake; 68 | } ngx_supervisord_conf_t; 69 | 70 | typedef struct { 71 | ngx_supervisord_conf_t supervisord; 72 | ngx_http_upstream_srv_conf_t *uscf; /* original uscf */ 73 | ngx_int_t inherit_backend_status; 74 | /* memory */ 75 | ngx_shm_zone_t *shm; 76 | ngx_pool_t *lpool; /* local memory pool */ 77 | ngx_slab_pool_t *shpool; /* shared memory pool */ 78 | /* backends */ 79 | ngx_uint_t nservers; /* number of servers */ 80 | ngx_uint_t aservers; /* number of active servers */ 81 | ngx_uint_t *lservers; /* local servers list */ 82 | ngx_uint_t *shservers; /* shared servers list */ 83 | ngx_uint_t *dservers; /* local diff between lists */ 84 | /* monitors */ 85 | ngx_supervisord_backend_pt backend_monitor; 86 | ngx_supervisord_load_pt load_monitor; 87 | ngx_uint_t load_skip; 88 | /* misc */ 89 | ngx_msec_t *last_cmd; /* shared */ 90 | ngx_uint_t *total_reqs; /* shared */ 91 | ngx_uint_t total_reported; 92 | ngx_queue_t queue; 93 | ngx_event_t queue_timer; 94 | } ngx_supervisord_srv_conf_t; 95 | 96 | typedef struct { 97 | ngx_http_upstream_srv_conf_t *upstream; 98 | ngx_uint_t command; 99 | } ngx_supervisord_loc_conf_t; 100 | 101 | typedef struct { 102 | ngx_uint_t command; 103 | ngx_uint_t backend; 104 | } ngx_supervisord_ctx_t; 105 | 106 | typedef struct { 107 | ngx_supervisord_srv_conf_t *supcf; 108 | ngx_http_request_t *request; 109 | ngx_uint_t command; 110 | ngx_uint_t backend; 111 | ngx_supervisord_checker_pt checker; 112 | } ngx_supervisord_queued_cmd_t; 113 | 114 | typedef struct { 115 | ngx_http_request_t *request; 116 | ngx_queue_t queue; 117 | } ngx_supervisord_queued_req_t; 118 | 119 | static ngx_command_t ngx_supervisord_module_commands[] = { 120 | 121 | { ngx_string("supervisord"), 122 | NGX_HTTP_UPS_CONF|NGX_CONF_TAKE12, 123 | ngx_supervisord_conf, 124 | 0, 125 | 0, 126 | NULL }, 127 | 128 | { ngx_string("supervisord_name"), 129 | NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, 130 | ngx_supervisord_conf_name, 131 | 0, 132 | 0, 133 | NULL }, 134 | 135 | { 
ngx_string("supervisord_inherit_backend_status"), 136 | NGX_HTTP_UPS_CONF|NGX_CONF_NOARGS, 137 | ngx_supervisord_conf_inherit_backend_status, 138 | 0, 139 | 0, 140 | NULL }, 141 | 142 | { ngx_string("supervisord_start"), 143 | NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, 144 | ngx_supervisord_conf_start_handler, 145 | NGX_HTTP_LOC_CONF_OFFSET, 146 | 0, 147 | NULL }, 148 | 149 | { ngx_string("supervisord_stop"), 150 | NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, 151 | ngx_supervisord_conf_stop_handler, 152 | NGX_HTTP_LOC_CONF_OFFSET, 153 | 0, 154 | NULL }, 155 | 156 | ngx_null_command 157 | }; 158 | 159 | static ngx_http_module_t ngx_supervisord_module_ctx = { 160 | ngx_supervisord_preconf, /* preconfiguration */ 161 | NULL, /* postconfiguration */ 162 | 163 | NULL, /* create main configuration */ 164 | NULL, /* init main configuration */ 165 | 166 | ngx_supervisord_create_srv_conf, /* create server configuration */ 167 | NULL, /* merge server configuration */ 168 | 169 | ngx_supervisord_create_loc_conf, /* create location configuration */ 170 | NULL /* merge location configuration */ 171 | }; 172 | 173 | /* cheap hack, but sadly we need it */ 174 | ngx_module_t ngx_http_copy_filter_module; 175 | 176 | ngx_module_t ngx_supervisord_module = { 177 | NGX_MODULE_V1, 178 | &ngx_supervisord_module_ctx, /* module context */ 179 | ngx_supervisord_module_commands, /* module directives */ 180 | NGX_HTTP_MODULE, /* module type */ 181 | NULL, /* init master */ 182 | ngx_supervisord_module_init, /* init module */ 183 | ngx_supervisord_worker_init, /* init process */ 184 | NULL, /* init thread */ 185 | NULL, /* exit thread */ 186 | NULL, /* exit process */ 187 | NULL, /* exit master */ 188 | NGX_MODULE_V1_PADDING 189 | }; 190 | 191 | /* 192 | * configuration & initialization 193 | */ 194 | 195 | ngx_array_t *ngx_supervisord_upstreams; 196 | ngx_event_t *ngx_supervisord_timer; 197 | 198 | void * 199 | ngx_supervisord_create_srv_conf(ngx_conf_t *cf) 200 | { 201 | ngx_supervisord_srv_conf_t *supcf; 202 | 203 | supcf = ngx_pcalloc(cf->pool, sizeof(ngx_supervisord_srv_conf_t)); 204 | if (supcf == NULL) { 205 | return NGX_CONF_ERROR; 206 | } 207 | 208 | return supcf; 209 | } 210 | 211 | void * 212 | ngx_supervisord_create_loc_conf(ngx_conf_t *cf) 213 | { 214 | ngx_supervisord_loc_conf_t *suplcf; 215 | 216 | suplcf = ngx_pcalloc(cf->pool, sizeof(ngx_supervisord_loc_conf_t)); 217 | if (suplcf == NULL) { 218 | return NGX_CONF_ERROR; 219 | } 220 | 221 | return suplcf; 222 | } 223 | 224 | ngx_int_t 225 | ngx_supervisord_preconf(ngx_conf_t *cf) 226 | { 227 | ngx_supervisord_upstreams = ngx_array_create(cf->pool, 8, 228 | sizeof(ngx_supervisord_srv_conf_t *)); 229 | if (ngx_supervisord_upstreams == NULL) { 230 | return NGX_ERROR; 231 | } 232 | 233 | ngx_supervisord_timer = ngx_pcalloc(cf->pool, sizeof(ngx_event_t)); 234 | if (ngx_supervisord_timer == NULL) { 235 | return NGX_ERROR; 236 | } 237 | 238 | return NGX_OK; 239 | } 240 | 241 | ngx_int_t 242 | ngx_supervisord_shm_init(ngx_shm_zone_t *shm, void *data) 243 | { 244 | ngx_supervisord_srv_conf_t *supcf, *osupcf; 245 | 246 | if (data) { 247 | osupcf = data; 248 | 249 | if (osupcf->shservers != NULL) { 250 | ngx_slab_free(osupcf->shpool, osupcf->shservers); 251 | } 252 | 253 | if (osupcf->total_reqs != NULL) { 254 | ngx_slab_free(osupcf->shpool, osupcf->total_reqs); 255 | } 256 | 257 | if (osupcf->last_cmd != NULL) { 258 | ngx_slab_free(osupcf->shpool, osupcf->last_cmd); 259 | } 260 | } 261 | 262 | supcf = shm->data; 263 | supcf->shpool = (ngx_slab_pool_t *) shm->shm.addr; 264 | 265 | 
return NGX_OK; 266 | } 267 | 268 | char * 269 | ngx_supervisord_conf(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) 270 | { 271 | ngx_str_t *value = cf->args->elts; 272 | ngx_http_upstream_srv_conf_t *uscf; 273 | ngx_supervisord_srv_conf_t *supcf; 274 | ngx_supervisord_srv_conf_t **supcfp; 275 | ngx_connection_t *dummy; 276 | 277 | uscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_upstream_module); 278 | supcf = ngx_http_conf_get_module_srv_conf(cf, ngx_supervisord_module); 279 | supcf->uscf = uscf; /* original uscf */ 280 | 281 | supcf->lpool = cf->pool; 282 | 283 | if (supcf->supervisord.server.url.data != NULL) { 284 | ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 285 | "supervisord already set to \"%V\"", 286 | &supcf->supervisord.server.url); 287 | 288 | return NGX_CONF_ERROR; 289 | } 290 | 291 | if (supcf->supervisord.is_fake) { 292 | ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 293 | "supervisord already set to \"none\""); 294 | 295 | return NGX_CONF_ERROR; 296 | } 297 | 298 | if (ngx_strncmp(value[1].data, "none", 4) == 0) { 299 | supcf->supervisord.is_fake = 1; 300 | } else { 301 | supcf->supervisord.server.url = value[1]; 302 | if (ngx_parse_url(cf->pool, &supcf->supervisord.server) != NGX_OK) { 303 | if (supcf->supervisord.server.err) { 304 | ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 305 | "%s in supervisord \"%V\"", 306 | supcf->supervisord.server.err, 307 | &supcf->supervisord.server.url); 308 | } 309 | 310 | return NGX_CONF_ERROR; 311 | } 312 | 313 | if (cf->args->nelts == 3) { 314 | supcf->supervisord.userpass = value[2]; 315 | } 316 | } 317 | 318 | supcf->shm = ngx_shared_memory_add(cf, &uscf->host, 4 * ngx_pagesize, 319 | &ngx_supervisord_module); 320 | if (supcf->shm == NULL) { 321 | return NGX_CONF_ERROR; 322 | } 323 | 324 | supcf->shm->init = ngx_supervisord_shm_init; 325 | supcf->shm->data = supcf; 326 | 327 | ngx_queue_init(&supcf->queue); 328 | 329 | dummy = ngx_pcalloc(cf->pool, sizeof(ngx_connection_t)); 330 | if (dummy == NULL) { 331 | return NGX_CONF_ERROR; 332 | } 333 | 334 | dummy->fd = (ngx_socket_t) -1; 335 | dummy->data = supcf; 336 | 337 | supcf->queue_timer.log = ngx_cycle->log; 338 | supcf->queue_timer.data = dummy; 339 | supcf->queue_timer.handler = ngx_supervisord_queue_monitor; 340 | 341 | supcfp = ngx_array_push(ngx_supervisord_upstreams); 342 | if (supcfp == NULL) { 343 | return NGX_CONF_ERROR; 344 | } 345 | 346 | *supcfp = supcf; 347 | 348 | return NGX_CONF_OK; 349 | } 350 | 351 | char * 352 | ngx_supervisord_conf_name(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) 353 | { 354 | ngx_str_t *value = cf->args->elts; 355 | ngx_supervisord_srv_conf_t *supcf; 356 | 357 | supcf = ngx_http_conf_get_module_srv_conf(cf, ngx_supervisord_module); 358 | 359 | if (supcf->supervisord.name.data != NULL) { 360 | ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 361 | "supervisord_name already set to \"%V\"", &supcf->supervisord.name); 362 | 363 | return NGX_CONF_ERROR; 364 | } 365 | 366 | supcf->supervisord.name = value[1]; 367 | 368 | return NGX_CONF_OK; 369 | } 370 | 371 | char * 372 | ngx_supervisord_conf_inherit_backend_status(ngx_conf_t *cf, ngx_command_t *cmd, 373 | void *conf) 374 | { 375 | ngx_supervisord_srv_conf_t *supcf; 376 | 377 | supcf = ngx_http_conf_get_module_srv_conf(cf, ngx_supervisord_module); 378 | supcf->inherit_backend_status = 1; 379 | 380 | return NGX_CONF_OK; 381 | } 382 | 383 | ngx_int_t 384 | ngx_supervisord_module_init(ngx_cycle_t *cycle) 385 | { 386 | ngx_supervisord_srv_conf_t **supcfp; 387 | ngx_http_upstream_server_t *server; 388 | ngx_uint_t i, 
n; 389 | size_t size; 390 | 391 | supcfp = ngx_supervisord_upstreams->elts; 392 | for (i = 0; i < ngx_supervisord_upstreams->nelts; i++) { 393 | ngx_log_debug1(NGX_LOG_DEBUG_HTTP, cycle->log, 0, 394 | "[supervisord] upstream: %V, initializing", 395 | &supcfp[i]->uscf->host); 396 | 397 | supcfp[i]->nservers = supcfp[i]->uscf->servers->nelts; 398 | 399 | size = supcfp[i]->nservers * sizeof(ngx_uint_t); 400 | 401 | supcfp[i]->lservers = ngx_pcalloc(supcfp[i]->lpool, size); 402 | if (supcfp[i]->lservers == NULL) { 403 | return NGX_ERROR; 404 | } 405 | 406 | if (supcfp[i]->inherit_backend_status) { 407 | server = supcfp[i]->uscf->servers->elts; 408 | for (n = 0; n < supcfp[i]->nservers; n++) { 409 | supcfp[i]->lservers[n] = server[n].down; 410 | if (!server[n].down) { 411 | supcfp[i]->aservers++; 412 | } 413 | } 414 | } else { 415 | for (n = 0; n < supcfp[i]->nservers; n++) { 416 | supcfp[i]->lservers[n] = NGX_SUPERVISORD_SRV_DOWN; 417 | } 418 | } 419 | 420 | if (supcfp[i]->backend_monitor != NULL) { 421 | for (n = 0; n < supcfp[i]->nservers; n++) { 422 | supcfp[i]->backend_monitor(supcfp[i]->uscf, n, 423 | supcfp[i]->lservers[n]); 424 | } 425 | } 426 | 427 | supcfp[i]->dservers = ngx_pcalloc(supcfp[i]->lpool, size); 428 | if (supcfp[i]->dservers == NULL) { 429 | return NGX_ERROR; 430 | } 431 | 432 | ngx_shmtx_lock(&supcfp[i]->shpool->mutex); 433 | 434 | supcfp[i]->shservers = ngx_slab_alloc_locked(supcfp[i]->shpool, size); 435 | if (supcfp[i]->shservers == NULL) { 436 | goto failed; 437 | } 438 | 439 | for (n = 0; n < supcfp[i]->nservers; n++) { 440 | supcfp[i]->shservers[n] = supcfp[i]->lservers[n]; 441 | } 442 | 443 | supcfp[i]->total_reqs = ngx_slab_alloc_locked(supcfp[i]->shpool, 444 | sizeof(ngx_uint_t)); 445 | if (supcfp[i]->total_reqs == NULL) { 446 | goto failed; 447 | } 448 | 449 | *supcfp[i]->total_reqs = 0; 450 | 451 | supcfp[i]->last_cmd = ngx_slab_alloc_locked(supcfp[i]->shpool, 452 | sizeof(ngx_msec_t)); 453 | if (supcfp[i]->last_cmd == NULL) { 454 | goto failed; 455 | } 456 | 457 | *supcfp[i]->last_cmd = 0; 458 | 459 | ngx_shmtx_unlock(&supcfp[i]->shpool->mutex); 460 | } 461 | 462 | return NGX_OK; 463 | 464 | failed: 465 | ngx_shmtx_unlock(&supcfp[i]->shpool->mutex); 466 | 467 | return NGX_ERROR; 468 | } 469 | 470 | ngx_int_t 471 | ngx_supervisord_worker_init(ngx_cycle_t *cycle) 472 | { 473 | ngx_connection_t *dummy; 474 | 475 | #if (nginx_version >= 8028) 476 | if (ngx_process > NGX_PROCESS_WORKER) { 477 | #else 478 | /* 479 | * This is really cheap hack, but it's the only way 480 | * to distinguish "workers" from "cache manager" 481 | * and "cache loader" without additional patch. 482 | * 483 | * NOTE: "worker_connections" cannot be set to 512! 484 | */ 485 | if (cycle->connection_n == 512) { 486 | #endif 487 | /* work only on real worker processes */ 488 | return NGX_OK; 489 | } 490 | 491 | if (ngx_supervisord_upstreams->nelts == 0) { 492 | /* nothing to do */ 493 | return NGX_OK; 494 | } 495 | 496 | dummy = ngx_pcalloc(ngx_supervisord_upstreams->pool, 497 | sizeof(ngx_connection_t)); 498 | if (dummy == NULL) { 499 | return NGX_ERROR; 500 | } 501 | 502 | dummy->fd = (ngx_socket_t) -1; 503 | dummy->data = ngx_supervisord_upstreams; 504 | 505 | ngx_supervisord_timer->log = ngx_cycle->log; 506 | ngx_supervisord_timer->data = dummy; 507 | ngx_supervisord_timer->handler = ngx_supervisord_monitor; 508 | 509 | ngx_add_timer(ngx_supervisord_timer, NGX_SUPERVISORD_MONITOR_INTERVAL); 510 | 511 | return NGX_OK; 512 | } 513 | 514 | /* 515 | * sync, monitors, etc. 
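 *
 * (each worker re-arms ngx_supervisord_monitor() every
 * NGX_SUPERVISORD_MONITOR_INTERVAL ms; on every tick the shared-memory
 * status list is copied into the worker-local one and registered backend
 * monitors are notified about any changes, and on every
 * NGX_SUPERVISORD_LOAD_SKIP-th tick the request counter is converted into
 * requests per second per active backend and handed to the registered
 * load monitor)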
516 | */ 517 | 518 | void 519 | ngx_supervisord_sync_servers(ngx_supervisord_srv_conf_t *supcf) 520 | { 521 | ngx_uint_t i; 522 | 523 | ngx_shmtx_lock(&supcf->shpool->mutex); 524 | 525 | for (i = 0; i < supcf->nservers; i++) { 526 | if (supcf->lservers[i] != supcf->shservers[i]) { 527 | supcf->lservers[i] = supcf->shservers[i]; 528 | supcf->dservers[i] = 1; 529 | } 530 | } 531 | 532 | ngx_shmtx_unlock(&supcf->shpool->mutex); 533 | 534 | supcf->aservers = 0; 535 | for (i = 0; i < supcf->nservers; i++) { 536 | if (supcf->lservers[i] == NGX_SUPERVISORD_SRV_UP) { 537 | supcf->aservers++; 538 | } 539 | 540 | if (supcf->dservers[i]) { 541 | supcf->dservers[i] = 0; 542 | 543 | ngx_log_debug3(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 544 | "[supervisord] upstream: %V, backend: %ui, new status: %ui", 545 | &supcf->uscf->host, i, supcf->lservers[i]); 546 | 547 | if (supcf->backend_monitor != NULL) { 548 | supcf->backend_monitor(supcf->uscf, i, supcf->lservers[i]); 549 | } 550 | } 551 | } 552 | } 553 | 554 | void 555 | ngx_supervisord_sync_load(ngx_supervisord_srv_conf_t *supcf) 556 | { 557 | ngx_supervisord_load_t report; 558 | ngx_uint_t curr, load; 559 | ngx_int_t diff; 560 | 561 | ngx_shmtx_lock(&supcf->shpool->mutex); 562 | curr = *supcf->total_reqs; 563 | ngx_shmtx_unlock(&supcf->shpool->mutex); 564 | 565 | diff = curr - supcf->total_reported; 566 | supcf->total_reported = curr; 567 | 568 | if (diff < 0) { 569 | /* overflow? */ 570 | return; 571 | } 572 | 573 | if ((diff == 0) || (supcf->aservers == 0)) { 574 | load = 0; 575 | } else { 576 | /* load = requests per second per active backend */ 577 | load = ((diff * NGX_SUPERVISORD_LOAD_MULTIPLIER) 578 | / (((NGX_SUPERVISORD_MONITOR_INTERVAL * NGX_SUPERVISORD_LOAD_SKIP) 579 | / 1000) * supcf->aservers)); 580 | } 581 | 582 | ngx_log_debug6(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 583 | "[supervisord] upstream: %V, load: %ui.%02ui, reqs: %i, up: %ui/%ui", 584 | &supcf->uscf->host, 585 | load / NGX_SUPERVISORD_LOAD_MULTIPLIER, 586 | load % NGX_SUPERVISORD_LOAD_MULTIPLIER, 587 | diff, supcf->aservers, supcf->nservers); 588 | 589 | if (supcf->load_monitor != NULL) { 590 | report.load = load; 591 | report.reqs = diff; 592 | report.aservers = supcf->aservers; 593 | report.nservers = supcf->nservers; 594 | report.interval = NGX_SUPERVISORD_MONITOR_INTERVAL 595 | * NGX_SUPERVISORD_LOAD_SKIP; 596 | 597 | supcf->load_monitor(supcf->uscf, report); 598 | } 599 | } 600 | 601 | ngx_int_t 602 | ngx_supervisord_resume_requests(ngx_supervisord_srv_conf_t *supcf) 603 | { 604 | ngx_supervisord_queued_req_t *qr; 605 | ngx_http_request_t *or; 606 | ngx_queue_t *q; 607 | ngx_int_t rc; 608 | 609 | if (ngx_queue_empty(&supcf->queue)) { 610 | return NGX_OK; 611 | } 612 | 613 | ngx_supervisord_sync_servers(supcf); 614 | 615 | if (supcf->lservers[0] > NGX_SUPERVISORD_SRV_DOWN) { 616 | /* retry later, backend status still changing */ 617 | return NGX_BUSY; 618 | } 619 | 620 | rc = (supcf->lservers[0] == NGX_SUPERVISORD_SRV_UP) 621 | ? 
0 : NGX_HTTP_BAD_GATEWAY; 622 | 623 | ngx_log_debug2(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 624 | "[supervisord] upstream: %V, resuming queued requests, rc: %i", 625 | &supcf->uscf->host, rc); 626 | 627 | while (!ngx_queue_empty(&supcf->queue)) { 628 | q = ngx_queue_head(&supcf->queue); 629 | qr = ngx_queue_data(q, ngx_supervisord_queued_req_t, queue); 630 | or = qr->request; 631 | ngx_queue_remove(q); 632 | (void) ngx_pfree(supcf->lpool, qr); 633 | 634 | if (rc == 0) { 635 | /* resume processing */ 636 | ngx_http_upstream_connect(or, or->upstream); 637 | } else { 638 | /* remove cleanup, otherwise we end up with double free! */ 639 | if ((or->upstream) && (or->upstream->cleanup)) { 640 | *or->upstream->cleanup = NULL; 641 | } 642 | 643 | ngx_http_finalize_request(or, rc); 644 | } 645 | } 646 | 647 | return NGX_OK; 648 | } 649 | 650 | void 651 | ngx_supervisord_monitor(ngx_event_t *ev) 652 | { 653 | ngx_connection_t *dummy = ev->data; 654 | ngx_array_t *upstreams = dummy->data; 655 | ngx_supervisord_srv_conf_t **supcfp; 656 | ngx_uint_t i; 657 | 658 | if (ngx_exiting) { 659 | return; 660 | } 661 | 662 | supcfp = upstreams->elts; 663 | for (i = 0; i < upstreams->nelts; i++) { 664 | ngx_supervisord_sync_servers(supcfp[i]); 665 | 666 | supcfp[i]->load_skip = ++supcfp[i]->load_skip 667 | % NGX_SUPERVISORD_LOAD_SKIP; 668 | if (supcfp[i]->load_skip == 0) { 669 | ngx_supervisord_sync_load(supcfp[i]); 670 | } 671 | } 672 | 673 | ngx_add_timer(ev, NGX_SUPERVISORD_MONITOR_INTERVAL); 674 | } 675 | 676 | void 677 | ngx_supervisord_queue_monitor(ngx_event_t *ev) 678 | { 679 | ngx_connection_t *dummy = ev->data; 680 | ngx_supervisord_srv_conf_t *supcf = dummy->data; 681 | 682 | if (ngx_supervisord_resume_requests(supcf) == NGX_BUSY) { 683 | ngx_add_timer(ev, NGX_SUPERVISORD_QUEUE_INTERVAL); 684 | } 685 | } 686 | 687 | /* 688 | * ngx_supervisord API 689 | */ 690 | 691 | ngx_http_request_t *ngx_supervisord_init(ngx_pool_t *, 692 | ngx_http_upstream_srv_conf_t *); 693 | 694 | ngx_int_t 695 | ngx_supervisord_check_servers(ngx_http_request_t *or) 696 | { 697 | ngx_http_upstream_srv_conf_t *uscf; 698 | ngx_supervisord_srv_conf_t *supcf; 699 | ngx_supervisord_queued_req_t *qr; 700 | ngx_uint_t tr; 701 | ngx_int_t rc; 702 | 703 | if ((or->upstream == NULL) || (or->upstream->conf == NULL) 704 | || (or->upstream->conf->upstream == NULL)) 705 | { 706 | goto wrong_params; 707 | } 708 | 709 | uscf = ngx_http_get_module_srv_conf(or->upstream->conf->upstream, 710 | ngx_http_upstream_module); 711 | if (uscf == NULL) { 712 | goto wrong_params; 713 | } 714 | 715 | supcf = ngx_http_conf_upstream_srv_conf(uscf, ngx_supervisord_module); 716 | if (supcf == NULL) { 717 | goto wrong_params; 718 | } 719 | 720 | if (!supcf->supervisord.is_fake 721 | && supcf->supervisord.server.url.data == NULL) 722 | { 723 | /* 724 | * allow ngx_supervisord-enabled modules to work 725 | * even when supervisord is not configured. 
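     * (i.e. no "supervisord" directive was given at all, not even
     * "none"); returning NGX_OK here lets the caller proceed with its
     * normal peer selection.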
726 | */ 727 | return NGX_OK; 728 | } 729 | 730 | ngx_supervisord_sync_servers(supcf); 731 | 732 | ngx_shmtx_lock(&supcf->shpool->mutex); 733 | tr = ++(*supcf->total_reqs); 734 | ngx_shmtx_unlock(&supcf->shpool->mutex); 735 | 736 | ngx_log_debug4(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 737 | "[supervisord] upstream: %V, checking servers, up: %ui/%ui req#: %ui", 738 | &supcf->uscf->host, supcf->aservers, supcf->nservers, tr); 739 | 740 | if (supcf->aservers > 0 || supcf->supervisord.is_fake) { 741 | return NGX_OK; 742 | } 743 | 744 | qr = ngx_pcalloc(supcf->lpool, sizeof(ngx_supervisord_queued_req_t)); 745 | if (qr == NULL) { 746 | return NGX_ERROR; 747 | } 748 | 749 | qr->request = or; 750 | 751 | if (supcf->lservers[0] == NGX_SUPERVISORD_SRV_STARTING_UP) { 752 | ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 753 | "[supervisord] upstream: %V, no alive backends, queuing request...", 754 | &supcf->uscf->host); 755 | 756 | /* add timer for first request queued on non-initializing process */ 757 | if (ngx_queue_empty(&supcf->queue)) { 758 | ngx_add_timer(&supcf->queue_timer, NGX_SUPERVISORD_QUEUE_INTERVAL); 759 | } 760 | 761 | ngx_queue_insert_tail(&supcf->queue, &qr->queue); 762 | } else { 763 | ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 764 | "[supervisord] upstream: %V, no alive backends, starting one...", 765 | &supcf->uscf->host); 766 | 767 | ngx_queue_insert_tail(&supcf->queue, &qr->queue); 768 | 769 | rc = ngx_supervisord_execute(uscf, NGX_SUPERVISORD_CMD_START, 0, NULL); 770 | if (rc != NGX_OK) { 771 | ngx_queue_remove(&qr->queue); 772 | 773 | return rc; 774 | } 775 | } 776 | 777 | return NGX_BUSY; 778 | 779 | wrong_params: 780 | ngx_log_error(NGX_LOG_EMERG, ngx_cycle->log, 0, 781 | "[supervisord] wrong parameters passed to: %s", __func__); 782 | 783 | return NGX_DECLINED; 784 | } 785 | 786 | void 787 | ngx_supervisord_fake_execute(ngx_uint_t cmd, ngx_uint_t backend, 788 | ngx_http_request_t *r) 789 | { 790 | ngx_supervisord_srv_conf_t *supcf; 791 | ngx_supervisord_ctx_t *ctx; 792 | 793 | supcf = ngx_http_get_module_srv_conf(r, ngx_supervisord_module); 794 | ctx = ngx_http_get_module_ctx(r, ngx_supervisord_module); 795 | 796 | ctx->command = cmd; 797 | ctx->backend = backend; 798 | 799 | ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, 800 | "[supervisord] upstream: %V, backend: %ui, command: %s", 801 | &supcf->uscf->host, backend, 802 | ngx_supervisord_get_command(cmd)); 803 | 804 | ngx_supervisord_finalize_request(r, 0); 805 | } 806 | 807 | void 808 | ngx_supervisord_real_execute(ngx_uint_t cmd, ngx_uint_t backend, 809 | ngx_http_request_t *r) 810 | { 811 | ngx_supervisord_ctx_t *ctx; 812 | 813 | ctx = ngx_http_get_module_ctx(r, ngx_supervisord_module); 814 | ctx->command = cmd; 815 | ctx->backend = backend; 816 | 817 | ngx_http_upstream_init(r); 818 | } 819 | 820 | void 821 | ngx_supervisord_cmd_checker(ngx_event_t *ev) 822 | { 823 | ngx_connection_t *dummy = ev->data; 824 | ngx_supervisord_queued_cmd_t *qcmd = dummy->data; 825 | ngx_pool_t *pool; 826 | ngx_int_t rc; 827 | 828 | ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 829 | "[supervisord] upstream: %V, executing checker...", 830 | &qcmd->supcf->uscf->host); 831 | rc = qcmd->checker(qcmd->supcf->uscf, qcmd->backend); 832 | ngx_log_debug2(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 833 | "[supervisord] upstream: %V, checker rc: %i", 834 | &qcmd->supcf->uscf->host, rc); 835 | 836 | if (rc != NGX_OK) { 837 | ngx_add_timer(ev, NGX_SUPERVISORD_QUEUE_INTERVAL); 838 | return; 839 | } 840 | 841 | if 
(qcmd->supcf->supervisord.is_fake) { 842 | ngx_supervisord_fake_execute(qcmd->command, qcmd->backend, 843 | qcmd->request); 844 | } else { 845 | ngx_supervisord_real_execute(qcmd->command, qcmd->backend, 846 | qcmd->request); 847 | } 848 | 849 | pool = qcmd->supcf->lpool; 850 | (void) ngx_pfree(pool, qcmd); 851 | (void) ngx_pfree(pool, dummy); 852 | (void) ngx_pfree(pool, ev); 853 | } 854 | 855 | ngx_int_t 856 | ngx_supervisord_execute(ngx_http_upstream_srv_conf_t *uscf, 857 | ngx_uint_t cmd, ngx_int_t backend, ngx_supervisord_checker_pt checker) 858 | { 859 | ngx_supervisord_srv_conf_t *supcf; 860 | ngx_supervisord_queued_cmd_t *qcmd; 861 | ngx_http_request_t *r; 862 | ngx_connection_t *c, *dummy; 863 | ngx_event_t *timer; 864 | ngx_uint_t i; 865 | ngx_int_t rc; 866 | 867 | if (uscf == NULL) { 868 | goto wrong_params; 869 | } 870 | 871 | supcf = ngx_http_conf_upstream_srv_conf(uscf, ngx_supervisord_module); 872 | if (supcf == NULL) { 873 | goto wrong_params; 874 | } 875 | 876 | if (!supcf->supervisord.is_fake 877 | && supcf->supervisord.server.url.data == NULL) 878 | { 879 | /* 880 | * allow ngx_supervisord-enabled modules to work 881 | * even when supervisord is not configured. 882 | */ 883 | return NGX_OK; 884 | } 885 | 886 | if ((backend >= (ngx_int_t) supcf->nservers) || (backend < -1)) { 887 | goto wrong_params; 888 | } 889 | 890 | if (cmd > NGX_SUPERVISORD_CMD_STOP) { 891 | goto wrong_params; 892 | } 893 | 894 | r = ngx_supervisord_init(supcf->lpool, uscf); 895 | if (r == NULL) { 896 | return NGX_ERROR; 897 | } 898 | 899 | ngx_shmtx_lock(&supcf->shpool->mutex); 900 | 901 | if ((backend == -1) 902 | && (*supcf->last_cmd + NGX_SUPERVISORD_ANTISPAM > ngx_current_msec)) 903 | { 904 | /* antispam for "-1" */ 905 | goto already_done; 906 | } 907 | 908 | switch (cmd) { 909 | case NGX_SUPERVISORD_CMD_START: 910 | if (backend == -1) { 911 | for (i = 0; i < supcf->nservers; i++) { 912 | if (supcf->shservers[i] == NGX_SUPERVISORD_SRV_STARTING_UP) { 913 | /* "-1" allowed only when nothing happens */ 914 | goto already_done; 915 | } else if (supcf->shservers[i] == NGX_SUPERVISORD_SRV_DOWN) { 916 | backend = i; 917 | break; 918 | } 919 | } 920 | 921 | if (backend == -1) { 922 | /* no available backends */ 923 | goto already_done; 924 | } 925 | } else if ((supcf->shservers[backend] == NGX_SUPERVISORD_SRV_UP) 926 | || (supcf->shservers[backend] == NGX_SUPERVISORD_SRV_STARTING_UP)) 927 | { 928 | /* command already executed on this backend */ 929 | goto already_done; 930 | } 931 | 932 | supcf->shservers[backend] = NGX_SUPERVISORD_SRV_STARTING_UP; 933 | *supcf->last_cmd = ngx_current_msec; 934 | break; 935 | case NGX_SUPERVISORD_CMD_STOP: 936 | if (backend == -1) { 937 | for (i = 0; i < supcf->nservers; i++) { 938 | if (supcf->shservers[i] == NGX_SUPERVISORD_SRV_SHUTTING_DOWN) { 939 | /* "-1" allowed only when nothing happens */ 940 | goto already_done; 941 | } else if (supcf->shservers[i] == NGX_SUPERVISORD_SRV_UP) { 942 | backend = i; 943 | break; 944 | } 945 | } 946 | 947 | if (backend == -1) { 948 | /* no available backends */ 949 | goto already_done; 950 | } 951 | } else if ((supcf->shservers[backend] == NGX_SUPERVISORD_SRV_DOWN) 952 | || (supcf->shservers[backend] == NGX_SUPERVISORD_SRV_SHUTTING_DOWN)) 953 | { 954 | /* command already executed on this backend */ 955 | goto already_done; 956 | } 957 | 958 | supcf->shservers[backend] = NGX_SUPERVISORD_SRV_SHUTTING_DOWN; 959 | *supcf->last_cmd = ngx_current_msec; 960 | break; 961 | default: 962 | ngx_shmtx_unlock(&supcf->shpool->mutex); 963 | 964 | 
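        /*
         * unknown command: tear down the fake request and connection
         * created by ngx_supervisord_init() before reporting the error
         */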
c = r->connection; 965 | ngx_destroy_pool(r->pool); 966 | ngx_destroy_pool(c->pool); 967 | (void) ngx_pfree(supcf->lpool, c); 968 | 969 | goto wrong_params; 970 | } 971 | 972 | ngx_shmtx_unlock(&supcf->shpool->mutex); 973 | 974 | if (checker != NULL) { 975 | ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 976 | "[supervisord] upstream: %V, executing checker...", 977 | &uscf->host); 978 | rc = checker(uscf, (ngx_uint_t) backend); 979 | ngx_log_debug2(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 980 | "[supervisord] upstream: %V, checker rc: %i", 981 | &uscf->host, rc); 982 | 983 | if (rc != NGX_OK) { 984 | qcmd = ngx_pcalloc(supcf->lpool, 985 | sizeof(ngx_supervisord_queued_cmd_t)); 986 | if (qcmd == NULL) { 987 | return NGX_ERROR; 988 | } 989 | 990 | qcmd->supcf = supcf; 991 | qcmd->request = r; 992 | qcmd->command = cmd; 993 | qcmd->backend = (ngx_uint_t) backend; 994 | qcmd->checker = checker; 995 | 996 | dummy = ngx_pcalloc(supcf->lpool, sizeof(ngx_connection_t)); 997 | if (dummy == NULL) { 998 | return NGX_ERROR; 999 | } 1000 | 1001 | dummy->fd = (ngx_socket_t) -1; 1002 | dummy->data = qcmd; 1003 | 1004 | timer = ngx_pcalloc(supcf->lpool, sizeof(ngx_event_t)); 1005 | if (timer == NULL) { 1006 | return NGX_ERROR; 1007 | } 1008 | 1009 | timer->log = ngx_cycle->log; 1010 | timer->data = dummy; 1011 | timer->handler = ngx_supervisord_cmd_checker; 1012 | 1013 | ngx_add_timer(timer, NGX_SUPERVISORD_QUEUE_INTERVAL); 1014 | 1015 | return NGX_OK; 1016 | } 1017 | } 1018 | 1019 | if (supcf->supervisord.is_fake) { 1020 | ngx_supervisord_fake_execute(cmd, (ngx_uint_t) backend, r); 1021 | } else { 1022 | ngx_supervisord_real_execute(cmd, (ngx_uint_t) backend, r); 1023 | } 1024 | 1025 | return NGX_OK; 1026 | 1027 | wrong_params: 1028 | ngx_log_error(NGX_LOG_EMERG, ngx_cycle->log, 0, 1029 | "[supervisord] wrong parameters passed to: %s", __func__); 1030 | 1031 | return NGX_DECLINED; 1032 | 1033 | already_done: 1034 | /* same command already in progress or finished */ 1035 | ngx_shmtx_unlock(&supcf->shpool->mutex); 1036 | 1037 | /* internal request? 
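       (if requests are still queued waiting for backend #0, make sure
       the queue timer keeps running so they eventually get resumed)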
*/ 1038 | if ((!ngx_queue_empty(&supcf->queue)) && (!supcf->queue_timer.timer_set)) { 1039 | ngx_add_timer(&supcf->queue_timer, NGX_SUPERVISORD_QUEUE_INTERVAL); 1040 | } 1041 | 1042 | c = r->connection; 1043 | ngx_destroy_pool(r->pool); 1044 | ngx_destroy_pool(c->pool); 1045 | (void) ngx_pfree(supcf->lpool, c); 1046 | 1047 | return NGX_OK; 1048 | } 1049 | 1050 | ngx_int_t 1051 | ngx_supervisord_add_backend_monitor(ngx_http_upstream_srv_conf_t *uscf, 1052 | ngx_supervisord_backend_pt monitor) 1053 | { 1054 | ngx_supervisord_srv_conf_t *supcf; 1055 | ngx_uint_t i; 1056 | 1057 | if (monitor == NULL) { 1058 | goto wrong_params; 1059 | } 1060 | 1061 | supcf = ngx_http_conf_upstream_srv_conf(uscf, ngx_supervisord_module); 1062 | if (supcf == NULL) { 1063 | goto wrong_params; 1064 | } 1065 | 1066 | ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 1067 | "[supervisord] upstream: %V, adding backend monitor", 1068 | &uscf->host); 1069 | 1070 | supcf->backend_monitor = monitor; 1071 | 1072 | /* nservers > 0 only after module_init */ 1073 | for (i = 0; i < supcf->nservers; i++) { 1074 | monitor(uscf, i, supcf->lservers[i]); 1075 | } 1076 | 1077 | return NGX_OK; 1078 | 1079 | wrong_params: 1080 | ngx_log_error(NGX_LOG_EMERG, ngx_cycle->log, 0, 1081 | "[supervisord] wrong parameters passed to: %s", __func__); 1082 | 1083 | return NGX_ERROR; 1084 | } 1085 | 1086 | ngx_int_t 1087 | ngx_supervisord_add_load_monitor(ngx_http_upstream_srv_conf_t *uscf, 1088 | ngx_supervisord_load_pt monitor) 1089 | { 1090 | ngx_supervisord_srv_conf_t *supcf; 1091 | 1092 | if (monitor == NULL) { 1093 | goto wrong_params; 1094 | } 1095 | 1096 | supcf = ngx_http_conf_upstream_srv_conf(uscf, ngx_supervisord_module); 1097 | if (supcf == NULL) { 1098 | goto wrong_params; 1099 | } 1100 | 1101 | ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, 1102 | "[supervisord] upstream: %V, adding load monitor", 1103 | &uscf->host); 1104 | 1105 | supcf->load_monitor = monitor; 1106 | 1107 | return NGX_OK; 1108 | 1109 | wrong_params: 1110 | ngx_log_error(NGX_LOG_EMERG, ngx_cycle->log, 0, 1111 | "[supervisord] wrong parameters passed to: %s", __func__); 1112 | 1113 | return NGX_ERROR; 1114 | } 1115 | 1116 | /* 1117 | * nginx <> supervisord communication 1118 | */ 1119 | 1120 | typedef struct { 1121 | ngx_uint_t id; 1122 | const char *name; 1123 | } ngx_supervisord_cmd_t; 1124 | 1125 | 1126 | static ngx_supervisord_cmd_t ngx_supervisord_commands[] = 1127 | { 1128 | { NGX_SUPERVISORD_CMD_START, "startProcess" }, 1129 | { NGX_SUPERVISORD_CMD_STOP, "stopProcess" }, 1130 | { 0, NULL } 1131 | }; 1132 | 1133 | static char ngx_supervisord_headers[] = 1134 | "POST /RPC2 HTTP/1.0" CRLF 1135 | "Accept: text/xml" CRLF 1136 | "Content-Type: text/xml" CRLF 1137 | "User-Agent: ngx_supervisord" CRLF 1138 | "Content-Length: " 1139 | ; 1140 | 1141 | static char ngx_supervisord_auth_header[] = 1142 | "Authorization: Basic " 1143 | ; 1144 | 1145 | static char ngx_supervisord_body_p1[] = 1146 | "\n" 1147 | "\n" 1148 | "supervisor." 
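    /*
     * body_p1 + command name + body_p2 + process name + body_p3 are glued
     * together in ngx_supervisord_create_request(); the resulting payload
     * is a supervisord XML-RPC call, roughly of the form:
     *
     *   <?xml version="1.0"?>
     *   <methodCall>
     *     <methodName>supervisor.startProcess</methodName>
     *     <params><param><value><string>backend0</string></value></param></params>
     *   </methodCall>
     *
     * (the method and process names above are just examples)
     */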
1149 | ; 1150 | 1151 | static char ngx_supervisord_body_p2[] = 1152 | "\n" 1153 | "\n" 1154 | "\n" 1155 | "" 1156 | ; 1157 | 1158 | static char ngx_supervisord_body_p3[] = 1159 | "\n" 1160 | "\n" 1161 | "\n" 1162 | "\n" 1163 | ; 1164 | 1165 | ngx_int_t 1166 | ngx_supervisord_peer_get(ngx_peer_connection_t *pc, void *data) 1167 | { 1168 | ngx_url_t *supervisord = data; 1169 | ngx_int_t n; 1170 | 1171 | n = supervisord->naddrs - pc->tries--; 1172 | 1173 | pc->sockaddr = supervisord->addrs[n].sockaddr; 1174 | pc->socklen = supervisord->addrs[n].socklen; 1175 | pc->name = &supervisord->addrs[n].name; 1176 | 1177 | return NGX_OK; 1178 | } 1179 | 1180 | void 1181 | ngx_supervisord_peer_free(ngx_peer_connection_t *pc, void *data, 1182 | ngx_uint_t state) 1183 | { 1184 | return; 1185 | } 1186 | 1187 | ngx_int_t 1188 | ngx_supervisord_peer_init(ngx_http_request_t *r, 1189 | ngx_http_upstream_srv_conf_t *uscf) 1190 | { 1191 | ngx_supervisord_srv_conf_t *supcf; 1192 | 1193 | supcf = ngx_http_get_module_srv_conf(r, ngx_supervisord_module); 1194 | 1195 | r->upstream->peer.get = ngx_supervisord_peer_get; 1196 | r->upstream->peer.free = ngx_supervisord_peer_free; 1197 | r->upstream->peer.tries = supcf->supervisord.server.naddrs; 1198 | r->upstream->peer.data = &supcf->supervisord.server; 1199 | 1200 | return NGX_OK; 1201 | } 1202 | 1203 | size_t 1204 | int_strlen(ngx_uint_t n) 1205 | { 1206 | size_t s = 1; 1207 | 1208 | while (n >= 10) { 1209 | n /= 10; 1210 | s++; 1211 | } 1212 | 1213 | return s; 1214 | } 1215 | 1216 | const char * 1217 | ngx_supervisord_get_command(ngx_uint_t id) 1218 | { 1219 | ngx_supervisord_cmd_t *cmd; 1220 | 1221 | cmd = ngx_supervisord_commands; 1222 | while (cmd->name != NULL) { 1223 | if (cmd->id == id) { 1224 | return cmd->name; 1225 | } 1226 | 1227 | cmd++; 1228 | } 1229 | 1230 | return NULL; 1231 | } 1232 | 1233 | ngx_int_t 1234 | ngx_supervisord_create_request(ngx_http_request_t *r) 1235 | { 1236 | ngx_http_upstream_srv_conf_t *uscf; 1237 | ngx_supervisord_srv_conf_t *supcf; 1238 | ngx_supervisord_ctx_t *ctx; 1239 | ngx_str_t auth; 1240 | ngx_buf_t *b; 1241 | ngx_chain_t *cl; 1242 | const char *cmd; 1243 | u_char *backend; 1244 | size_t len, blen; 1245 | 1246 | supcf = ngx_http_get_module_srv_conf(r, ngx_supervisord_module); 1247 | uscf = supcf->uscf; /* original uscf */ 1248 | ctx = ngx_http_get_module_ctx(r, ngx_supervisord_module); 1249 | 1250 | cmd = ngx_supervisord_get_command(ctx->command); 1251 | if (cmd == NULL) { 1252 | goto failed; 1253 | } 1254 | 1255 | if (supcf->supervisord.name.data != NULL) { 1256 | len = supcf->supervisord.name.len + int_strlen(ctx->backend) + 1; 1257 | } else { 1258 | len = uscf->host.len + int_strlen(ctx->backend) + 1; 1259 | } 1260 | 1261 | backend = ngx_palloc(r->pool, len); 1262 | if (backend == NULL) { 1263 | goto failed; 1264 | } 1265 | 1266 | /* ngx_snprintf *IS NOT* snprintf compatible */ 1267 | if (supcf->supervisord.name.data != NULL) { 1268 | (void) ngx_snprintf(backend, len - 1, "%V%i", 1269 | &supcf->supervisord.name, ctx->backend); 1270 | } else { 1271 | (void) ngx_snprintf(backend, len - 1, "%V%i", 1272 | &uscf->host, ctx->backend); 1273 | } 1274 | 1275 | backend[len - 1] = '\0'; 1276 | 1277 | ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, 1278 | "[supervisord] upstream: %V, backend: %ui, command: %s", 1279 | &uscf->host, ctx->backend, cmd); 1280 | 1281 | /* request body length */ 1282 | blen = sizeof(ngx_supervisord_body_p1) - 1 1283 | + sizeof(ngx_supervisord_body_p2) - 1 1284 | + sizeof(ngx_supervisord_body_p3) 
- 1 1285 | + ngx_strlen(cmd) + ngx_strlen(backend); 1286 | 1287 | /* request length */ 1288 | len = sizeof(ngx_supervisord_headers) - 1 1289 | + int_strlen(blen) + 2 * sizeof(CRLF) + blen; 1290 | 1291 | /* optional authorization */ 1292 | if (supcf->supervisord.userpass.data != NULL) { 1293 | auth.len = ngx_base64_encoded_length(supcf->supervisord.userpass.len); 1294 | auth.data = ngx_palloc(r->pool, auth.len + 2 * sizeof(CRLF)); 1295 | if (auth.data == NULL) { 1296 | goto failed; 1297 | } 1298 | 1299 | ngx_encode_base64(&auth, &supcf->supervisord.userpass); 1300 | 1301 | auth.data[auth.len++] = CR; 1302 | auth.data[auth.len++] = LF; 1303 | auth.data[auth.len++] = CR; 1304 | auth.data[auth.len++] = LF; 1305 | 1306 | len += sizeof(ngx_supervisord_auth_header) + auth.len; 1307 | } 1308 | 1309 | b = ngx_create_temp_buf(r->pool, len); 1310 | if (b == NULL) { 1311 | goto failed; 1312 | } 1313 | 1314 | cl = ngx_alloc_chain_link(r->pool); 1315 | if (cl == NULL) { 1316 | goto failed; 1317 | } 1318 | 1319 | cl->buf = b; 1320 | cl->next = NULL; 1321 | 1322 | r->upstream->request_bufs = cl; 1323 | 1324 | b->last = ngx_cpymem(b->last, ngx_supervisord_headers, 1325 | sizeof(ngx_supervisord_headers) - 1); 1326 | 1327 | if (supcf->supervisord.userpass.data != NULL) { 1328 | b->last = ngx_sprintf(b->last, "%i" CRLF, blen); 1329 | b->last = ngx_cpymem(b->last, ngx_supervisord_auth_header, 1330 | sizeof(ngx_supervisord_auth_header) - 1); 1331 | b->last = ngx_cpymem(b->last, auth.data, auth.len); 1332 | } else { 1333 | b->last = ngx_sprintf(b->last, "%i" CRLF CRLF, blen); 1334 | } 1335 | 1336 | b->last = ngx_cpymem(b->last, ngx_supervisord_body_p1, 1337 | sizeof(ngx_supervisord_body_p1) - 1); 1338 | b->last = ngx_cpymem(b->last, cmd, ngx_strlen(cmd)); 1339 | b->last = ngx_cpymem(b->last, ngx_supervisord_body_p2, 1340 | sizeof(ngx_supervisord_body_p2) - 1); 1341 | b->last = ngx_cpymem(b->last, backend, ngx_strlen(backend)); 1342 | b->last = ngx_cpymem(b->last, ngx_supervisord_body_p3, 1343 | sizeof(ngx_supervisord_body_p3) - 1); 1344 | 1345 | b->last_buf = 1; 1346 | 1347 | /* force nginx to read whole response into memory */ 1348 | r->subrequest_in_memory = 1; 1349 | 1350 | return NGX_OK; 1351 | 1352 | failed: 1353 | r->connection->error = 1; 1354 | 1355 | return NGX_ERROR; 1356 | } 1357 | 1358 | ngx_int_t 1359 | ngx_supervisord_reinit_request(ngx_http_request_t *r) 1360 | { 1361 | return NGX_OK; 1362 | } 1363 | 1364 | ngx_int_t 1365 | ngx_supervisord_process_header(ngx_http_request_t *r) 1366 | { 1367 | return NGX_OK; 1368 | } 1369 | 1370 | void 1371 | ngx_supervisord_abort_request(ngx_http_request_t *r) 1372 | { 1373 | return; 1374 | } 1375 | 1376 | ngx_int_t 1377 | ngx_supervisord_parse_response(ngx_buf_t *buf, ngx_str_t *host) 1378 | { 1379 | char *str = (char *) buf->start; 1380 | char *sep; 1381 | ngx_int_t code; 1382 | 1383 | /* just in case */ 1384 | *buf->last = '\0'; 1385 | 1386 | if (strncmp(str, "HTTP/1.0 401 ", strlen("HTTP/1.0 401 ")) == 0) { 1387 | ngx_log_error(NGX_LOG_EMERG, ngx_cycle->log, 0, 1388 | "[supervisord] upstream: %V, unauthorized connection", 1389 | host); 1390 | 1391 | return NGX_ERROR; 1392 | } 1393 | 1394 | while ((sep = strsep(&str, "<")) != NULL) { 1395 | if (strncmp(sep, "methodResponse", strlen("methodResponse")) == 0) { 1396 | goto valid_reply; 1397 | } 1398 | } 1399 | 1400 | return NGX_ERROR; 1401 | 1402 | valid_reply: 1403 | if ((sep = strsep(&str, "<")) == NULL) { 1404 | return NGX_ERROR; 1405 | } 1406 | 1407 | if (strncmp(sep, "fault", strlen("fault")) == 0) { 1408 | 
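        /*
         * fault response: jump to the parser below, which extracts the
         * <int> faultCode so the caller can special-case codes like
         * 60 (already started) and 70 (not running)
         */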
goto fault; 1409 | } else if (strncmp(sep, "params", strlen("params")) == 0) { 1410 | return NGX_OK; 1411 | } 1412 | 1413 | return NGX_ERROR; 1414 | 1415 | fault: 1416 | while ((sep = strsep(&str, "<")) != NULL) { 1417 | if (strncmp(sep, "int>", strlen("int>")) == 0) { 1418 | code = 0; 1419 | sep += strlen("int>"); 1420 | 1421 | while ((*sep >= '0') && (*sep <= '9')) { 1422 | code *= 10; 1423 | code += *sep++ - '0'; 1424 | } 1425 | 1426 | return code; 1427 | } 1428 | } 1429 | 1430 | return NGX_OK; 1431 | } 1432 | 1433 | void 1434 | ngx_supervisord_finalize_request(ngx_http_request_t *r, ngx_int_t rc) 1435 | { 1436 | ngx_supervisord_srv_conf_t *supcf; 1437 | ngx_supervisord_ctx_t *ctx; 1438 | ngx_int_t suprc; 1439 | 1440 | supcf = ngx_http_get_module_srv_conf(r, ngx_supervisord_module); 1441 | ctx = ngx_http_get_module_ctx(r, ngx_supervisord_module); 1442 | 1443 | ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, 1444 | "[supervisord] upstream: %V, finalizing request, rc: %i", 1445 | &supcf->uscf->host, rc); 1446 | 1447 | if (supcf->supervisord.is_fake) { 1448 | ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, 1449 | "[supervisord] upstream: %V, response: %i none", 1450 | &supcf->uscf->host, rc); 1451 | goto skip_fake; 1452 | } 1453 | 1454 | if (rc != 0) { 1455 | if (rc == 502) { 1456 | ngx_log_error(NGX_LOG_EMERG, ngx_cycle->log, 0, 1457 | "[supervisord] upstream: %V, couldn't connect to supervisord", 1458 | &supcf->uscf->host); 1459 | } 1460 | 1461 | goto failed; 1462 | } 1463 | 1464 | /* just in case overwrite last char, it should be '\n' anyway */ 1465 | *r->upstream->buffer.last = '\0'; 1466 | 1467 | ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, 1468 | "[supervisord] upstream: %V, response: %s", 1469 | &supcf->uscf->host, r->upstream->buffer.start); 1470 | 1471 | rc = ngx_supervisord_parse_response(&r->upstream->buffer, 1472 | &supcf->uscf->host); 1473 | suprc = rc; 1474 | 1475 | if ((rc == 60) && (ctx->command == NGX_SUPERVISORD_CMD_START)) { 1476 | /* already started */ 1477 | rc = 0; 1478 | } else if ((rc == 70) && (ctx->command == NGX_SUPERVISORD_CMD_STOP)) { 1479 | /* not running */ 1480 | rc = 0; 1481 | } 1482 | 1483 | ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, 1484 | "[supervisord] upstream: %V, response: %i %i", 1485 | &supcf->uscf->host, rc, suprc); 1486 | 1487 | if (rc != 0) { 1488 | goto failed; 1489 | } 1490 | 1491 | skip_fake: 1492 | ngx_shmtx_lock(&supcf->shpool->mutex); 1493 | 1494 | switch (ctx->command) { 1495 | case NGX_SUPERVISORD_CMD_START: 1496 | if (supcf->shservers[ctx->backend] == NGX_SUPERVISORD_SRV_STARTING_UP) { 1497 | supcf->shservers[ctx->backend] = NGX_SUPERVISORD_SRV_UP; 1498 | } 1499 | break; 1500 | case NGX_SUPERVISORD_CMD_STOP: 1501 | if (supcf->shservers[ctx->backend] 1502 | == NGX_SUPERVISORD_SRV_SHUTTING_DOWN) 1503 | { 1504 | supcf->shservers[ctx->backend] = NGX_SUPERVISORD_SRV_DOWN; 1505 | } 1506 | break; 1507 | } 1508 | 1509 | ngx_shmtx_unlock(&supcf->shpool->mutex); 1510 | 1511 | if ((ctx->command == NGX_SUPERVISORD_CMD_START) && (ctx->backend == 0)) { 1512 | (void) ngx_supervisord_resume_requests(supcf); 1513 | } 1514 | 1515 | /* disable further "less-than-important" logging... 
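       (the fake connection created in ngx_supervisord_init() logs at
       debug level; from here on only emergencies are let through while
       the fake request is being torn down)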
*/ 1516 | r->connection->log->log_level = NGX_LOG_EMERG; 1517 | 1518 | return; 1519 | 1520 | failed: 1521 | ngx_shmtx_lock(&supcf->shpool->mutex); 1522 | 1523 | switch (ctx->command) { 1524 | case NGX_SUPERVISORD_CMD_START: 1525 | if (supcf->shservers[ctx->backend] == NGX_SUPERVISORD_SRV_STARTING_UP) { 1526 | supcf->shservers[ctx->backend] = NGX_SUPERVISORD_SRV_DOWN; 1527 | } 1528 | break; 1529 | case NGX_SUPERVISORD_CMD_STOP: 1530 | if (supcf->shservers[ctx->backend] 1531 | == NGX_SUPERVISORD_SRV_SHUTTING_DOWN) 1532 | { 1533 | supcf->shservers[ctx->backend] = NGX_SUPERVISORD_SRV_UP; 1534 | } 1535 | break; 1536 | } 1537 | 1538 | ngx_shmtx_unlock(&supcf->shpool->mutex); 1539 | 1540 | if (ctx->backend == 0) { 1541 | (void) ngx_supervisord_resume_requests(supcf); 1542 | } 1543 | 1544 | /* stop nginx from sending special response over "fake connection" */ 1545 | r->connection->error = 1; 1546 | 1547 | /* disable further "less-than-important" logging... */ 1548 | r->connection->log->log_level = NGX_LOG_EMERG; 1549 | } 1550 | 1551 | ngx_chain_t * 1552 | ngx_supervisord_send_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) 1553 | { 1554 | return NULL; 1555 | } 1556 | 1557 | ngx_http_request_t * 1558 | ngx_supervisord_init(ngx_pool_t *pool, ngx_http_upstream_srv_conf_t *ouscf) 1559 | { 1560 | ngx_connection_t *c; 1561 | ngx_http_request_t *r; 1562 | ngx_log_t *log; 1563 | ngx_http_log_ctx_t *ctx; 1564 | ngx_http_upstream_t *u; 1565 | ngx_http_upstream_conf_t *ucf; 1566 | ngx_http_upstream_srv_conf_t *uscf; 1567 | 1568 | /* fake incoming connection */ 1569 | c = ngx_pcalloc(pool, sizeof(ngx_connection_t)); 1570 | if (c == NULL) { 1571 | goto failed_none; 1572 | } 1573 | 1574 | c->pool = ngx_create_pool(1024, ngx_cycle->log); 1575 | if (c->pool == NULL) { 1576 | goto failed_none; 1577 | } 1578 | 1579 | log = ngx_pcalloc(c->pool, sizeof(ngx_log_t)); 1580 | if (log == NULL) { 1581 | goto failed_conn; 1582 | } 1583 | 1584 | ctx = ngx_pcalloc(c->pool, sizeof(ngx_http_log_ctx_t)); 1585 | if (ctx == NULL) { 1586 | goto failed_conn; 1587 | } 1588 | 1589 | /* fake incoming request */ 1590 | r = ngx_pcalloc(c->pool, sizeof(ngx_http_request_t)); 1591 | if (r == NULL) { 1592 | goto failed_conn; 1593 | } 1594 | 1595 | r->pool = ngx_create_pool(8192, ngx_cycle->log); 1596 | if (r->pool == NULL) { 1597 | goto failed_conn; 1598 | } 1599 | 1600 | ctx->connection = c; 1601 | ctx->request = r; 1602 | ctx->current_request = r; 1603 | 1604 | log->action = "initializing fake request"; 1605 | log->data = ctx; 1606 | log->file = ngx_cycle->new_log.file; 1607 | log->log_level = NGX_LOG_DEBUG_CONNECTION 1608 | | NGX_LOG_DEBUG_ALL; 1609 | 1610 | c->log = log; 1611 | c->log_error = NGX_ERROR_INFO; 1612 | c->pool->log = log; 1613 | r->pool->log = log; 1614 | 1615 | c->fd = -1; 1616 | c->data = r; 1617 | 1618 | c->send_chain = ngx_supervisord_send_chain; 1619 | 1620 | r->main = r; 1621 | r->connection = c; 1622 | 1623 | #if (nginx_version >= 8011) 1624 | r->count = 1; 1625 | #endif 1626 | 1627 | /* used by ngx_http_upstream_init */ 1628 | c->read = ngx_pcalloc(c->pool, sizeof(ngx_event_t)); 1629 | if (c->read == NULL) { 1630 | goto failed_conn; 1631 | } 1632 | 1633 | c->read->log = log; 1634 | 1635 | c->write = ngx_pcalloc(c->pool, sizeof(ngx_event_t)); 1636 | if (c->write == NULL) { 1637 | goto failed_conn; 1638 | } 1639 | 1640 | c->write->log = log; 1641 | c->write->active = 1; 1642 | 1643 | /* used by ngx_http_log_request */ 1644 | r->main_conf = ngx_pcalloc(r->pool, sizeof(void *) * ngx_http_max_module); 1645 | if 
(r->main_conf == NULL) { 1646 | goto failed_req; 1647 | } 1648 | 1649 | r->main_conf[ngx_http_core_module.ctx_index] = 1650 | ngx_pcalloc(r->pool, sizeof(ngx_http_core_main_conf_t)); 1651 | if (r->main_conf[ngx_http_core_module.ctx_index] == NULL) { 1652 | goto failed_req; 1653 | } 1654 | 1655 | /* use original servers{}'s configuration for this module */ 1656 | r->srv_conf = ngx_pcalloc(r->pool, sizeof(void *) * ngx_http_max_module); 1657 | if (r->srv_conf == NULL) { 1658 | goto failed_req; 1659 | } 1660 | 1661 | r->srv_conf[ngx_http_upstream_module.ctx_index] = 1662 | ngx_pcalloc(r->pool, sizeof(ngx_http_upstream_srv_conf_t)); 1663 | if (r->srv_conf[ngx_http_upstream_module.ctx_index] == NULL) { 1664 | goto failed_req; 1665 | } 1666 | 1667 | uscf = r->srv_conf[ngx_http_upstream_module.ctx_index]; 1668 | uscf->srv_conf = r->srv_conf; 1669 | 1670 | uscf->peer.init = ngx_supervisord_peer_init; 1671 | 1672 | r->srv_conf[ngx_supervisord_module.ctx_index] = 1673 | ouscf->srv_conf[ngx_supervisord_module.ctx_index]; 1674 | if (r->srv_conf[ngx_supervisord_module.ctx_index] == NULL) { 1675 | goto failed_req; 1676 | } 1677 | 1678 | /* used by ngx_http_copy_filter */ 1679 | r->loc_conf = ngx_pcalloc(r->pool, sizeof(void *) * ngx_http_max_module); 1680 | if (r->loc_conf == NULL) { 1681 | goto failed_req; 1682 | } 1683 | 1684 | r->loc_conf[ngx_http_core_module.ctx_index] = 1685 | ngx_pcalloc(r->pool, sizeof(ngx_http_core_loc_conf_t)); 1686 | if (r->loc_conf[ngx_http_core_module.ctx_index] == NULL) { 1687 | goto failed_req; 1688 | } 1689 | 1690 | r->loc_conf[ngx_http_copy_filter_module.ctx_index] = 1691 | ngx_pcalloc(r->pool, sizeof(ngx_int_t) + sizeof(size_t)); 1692 | if (r->loc_conf[ngx_http_copy_filter_module.ctx_index] == NULL) { 1693 | goto failed_req; 1694 | } 1695 | 1696 | /* used by ngx_http_output_filter */ 1697 | r->ctx = ngx_pcalloc(r->pool, sizeof(void *) * ngx_http_max_module); 1698 | if (r->ctx == NULL) { 1699 | goto failed_req; 1700 | } 1701 | 1702 | r->ctx[ngx_supervisord_module.ctx_index] = 1703 | ngx_pcalloc(r->pool, sizeof(ngx_supervisord_ctx_t)); 1704 | if (r->ctx[ngx_supervisord_module.ctx_index] == NULL) { 1705 | goto failed_req; 1706 | } 1707 | 1708 | /* used by ngx_http_upstream_init */ 1709 | if (ngx_http_upstream_create(r) != NGX_OK) { 1710 | goto failed_req; 1711 | } 1712 | 1713 | u = r->upstream; 1714 | 1715 | u->create_request = ngx_supervisord_create_request; 1716 | u->reinit_request = ngx_supervisord_reinit_request; 1717 | u->process_header = ngx_supervisord_process_header; 1718 | u->abort_request = ngx_supervisord_abort_request; 1719 | u->finalize_request = ngx_supervisord_finalize_request; 1720 | 1721 | u->schema.len = sizeof("supervisord://") - 1; 1722 | u->schema.data = (u_char *) "supervisord://"; 1723 | 1724 | u->peer.log = log; 1725 | u->peer.log_error = NGX_ERROR_ERR; 1726 | 1727 | u->output.tag = (ngx_buf_tag_t) &ngx_supervisord_module; 1728 | 1729 | /* configure upstream */ 1730 | u->conf = ngx_pcalloc(r->pool, sizeof(ngx_http_upstream_conf_t)); 1731 | if (u->conf == NULL) { 1732 | goto failed_req; 1733 | } 1734 | 1735 | ucf = u->conf; 1736 | 1737 | /* must be enough to hold supervisord's response */ 1738 | ucf->buffer_size = 2048; 1739 | 1740 | ucf->connect_timeout = 5000; 1741 | ucf->read_timeout = 30000; 1742 | ucf->send_timeout = 30000; 1743 | 1744 | ucf->next_upstream = NGX_HTTP_UPSTREAM_FT_ERROR 1745 | | NGX_HTTP_UPSTREAM_FT_TIMEOUT; 1746 | 1747 | ucf->upstream = uscf; 1748 | 1749 | return r; 1750 | 1751 | failed_req: 1752 | ngx_destroy_pool(r->pool); 1753 
| 1754 | failed_conn: 1755 | ngx_destroy_pool(c->pool); 1756 | 1757 | failed_none: 1758 | (void) ngx_pfree(pool, c); 1759 | 1760 | return NULL; 1761 | } 1762 | 1763 | /* 1764 | * ngx_supervisord handlers 1765 | */ 1766 | 1767 | static char ngx_supervisord_success_page_top[] = 1768 | "" CRLF 1769 | "Command executed successfully" CRLF 1770 | "" CRLF 1771 | "
<center><h1>Command executed successfully</h1></center>" CRLF 1772 | ; 1773 | 1774 | static char ngx_supervisord_success_page_tail[] = 1775 | CRLF "<hr>" CRLF 1776 | "<center>" NGINX_VER "</center>
" CRLF 1777 | "" CRLF 1778 | "" CRLF 1779 | ; 1780 | 1781 | u_char * 1782 | ngx_strlrchr(u_char *p, u_char *last, u_char c) 1783 | { 1784 | while (p <= last) { 1785 | if (*last == c) { 1786 | return last; 1787 | } 1788 | 1789 | last--; 1790 | } 1791 | 1792 | return NULL; 1793 | } 1794 | 1795 | ngx_int_t 1796 | ngx_supervisord_command_handler(ngx_http_request_t *r) 1797 | { 1798 | ngx_supervisord_loc_conf_t *suplcf; 1799 | u_char *p, *last; 1800 | ngx_int_t backend, rc; 1801 | ngx_chain_t out; 1802 | ngx_buf_t *b; 1803 | const char *cmd; 1804 | size_t len; 1805 | 1806 | if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { 1807 | return NGX_HTTP_NOT_ALLOWED; 1808 | } 1809 | 1810 | suplcf = ngx_http_get_module_loc_conf(r, ngx_supervisord_module); 1811 | if (!suplcf->upstream) { 1812 | return NGX_HTTP_INTERNAL_SERVER_ERROR; 1813 | } 1814 | 1815 | last = r->uri.data + r->uri.len - 1; 1816 | p = ngx_strlrchr(r->uri.data, last, '/'); 1817 | p++; 1818 | 1819 | if (ngx_strncmp(p, "any", 3) == 0) { 1820 | backend = -1; 1821 | } else { 1822 | backend = ngx_atoi(p, last - p + 1); 1823 | if (backend == NGX_ERROR) { 1824 | return NGX_HTTP_NOT_FOUND; 1825 | } 1826 | } 1827 | 1828 | if (backend >= (ngx_int_t) suplcf->upstream->servers->nelts) { 1829 | return NGX_HTTP_NOT_FOUND; 1830 | } 1831 | 1832 | cmd = ngx_supervisord_get_command(suplcf->command); 1833 | if (cmd == NULL) { 1834 | return NGX_HTTP_INTERNAL_SERVER_ERROR; 1835 | } 1836 | 1837 | rc = ngx_supervisord_execute(suplcf->upstream, suplcf->command, backend, NULL); 1838 | if (rc != NGX_OK) { 1839 | return NGX_HTTP_INTERNAL_SERVER_ERROR; 1840 | } 1841 | 1842 | len = sizeof(ngx_supervisord_success_page_top) - 1 1843 | + sizeof(ngx_supervisord_success_page_tail) - 1 1844 | + sizeof("
Command: ") - 1 + sizeof(CRLF "
Backend: ") - 1 1845 | + ngx_strlen(cmd); 1846 | 1847 | if (backend == -1) { 1848 | len += 3; 1849 | } else { 1850 | len += int_strlen(backend); 1851 | } 1852 | 1853 | r->headers_out.content_type.len = sizeof("text/html") - 1; 1854 | r->headers_out.content_type.data = (u_char *) "text/html"; 1855 | r->headers_out.status = NGX_HTTP_OK; 1856 | r->headers_out.content_length_n = len; 1857 | 1858 | if (r->method == NGX_HTTP_HEAD) { 1859 | rc = ngx_http_send_header(r); 1860 | if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { 1861 | return rc; 1862 | } 1863 | } 1864 | 1865 | b = ngx_create_temp_buf(r->pool, len); 1866 | if (b == NULL) { 1867 | return NGX_HTTP_INTERNAL_SERVER_ERROR; 1868 | } 1869 | 1870 | out.buf = b; 1871 | out.next = NULL; 1872 | 1873 | b->last = ngx_cpymem(b->last, ngx_supervisord_success_page_top, 1874 | sizeof(ngx_supervisord_success_page_top) - 1); 1875 | b->last = ngx_cpymem(b->last, "
Command: ", sizeof("
Command: ") - 1); 1876 | b->last = ngx_cpymem(b->last, cmd, ngx_strlen(cmd)); 1877 | b->last = ngx_cpymem(b->last, CRLF "
Backend: ", 1878 | sizeof(CRLF "
Backend: ") - 1); 1879 | if (backend == -1) { 1880 | b->last = ngx_cpymem(b->last, "any", 3); 1881 | } else { 1882 | b->last = ngx_sprintf(b->last, "%i", backend); 1883 | } 1884 | b->last = ngx_cpymem(b->last, ngx_supervisord_success_page_tail, 1885 | sizeof(ngx_supervisord_success_page_tail) - 1); 1886 | b->last_buf = 1; 1887 | 1888 | rc = ngx_http_send_header(r); 1889 | if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { 1890 | return rc; 1891 | } 1892 | 1893 | return ngx_http_output_filter(r, &out); 1894 | } 1895 | 1896 | ngx_http_upstream_srv_conf_t * 1897 | ngx_supervisord_find_upstream(ngx_conf_t *cf, ngx_str_t value) 1898 | { 1899 | ngx_http_upstream_main_conf_t *umcf; 1900 | ngx_http_upstream_srv_conf_t **uscfp; 1901 | ngx_uint_t i; 1902 | 1903 | umcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_upstream_module); 1904 | 1905 | uscfp = umcf->upstreams.elts; 1906 | for (i = 0; i < umcf->upstreams.nelts; i++) { 1907 | if (uscfp[i]->host.len == value.len 1908 | && ngx_strncasecmp(uscfp[i]->host.data, value.data, value.len) == 0) 1909 | { 1910 | return uscfp[i]; 1911 | } 1912 | } 1913 | 1914 | return NULL; 1915 | } 1916 | 1917 | char * 1918 | ngx_supervisord_conf_start_handler(ngx_conf_t *cf, ngx_command_t *cmd, 1919 | void *conf) 1920 | { 1921 | ngx_str_t *value = cf->args->elts; 1922 | ngx_supervisord_loc_conf_t *suplcf = conf; 1923 | ngx_http_core_loc_conf_t *clcf; 1924 | 1925 | if (suplcf->upstream) { 1926 | return "is either duplicate or collides with \"supervisord_stop\""; 1927 | } 1928 | 1929 | suplcf->upstream = ngx_supervisord_find_upstream(cf, value[1]); 1930 | if (suplcf->upstream == NULL) { 1931 | ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 1932 | "supervisord_start refers to non-existing upstream \"%V\"", 1933 | &value[1]); 1934 | 1935 | return NGX_CONF_ERROR; 1936 | } 1937 | 1938 | suplcf->command = NGX_SUPERVISORD_CMD_START; 1939 | 1940 | clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module); 1941 | clcf->handler = ngx_supervisord_command_handler; 1942 | 1943 | return NGX_CONF_OK; 1944 | } 1945 | 1946 | char * 1947 | ngx_supervisord_conf_stop_handler(ngx_conf_t *cf, ngx_command_t *cmd, 1948 | void *conf) 1949 | { 1950 | ngx_str_t *value = cf->args->elts; 1951 | ngx_supervisord_loc_conf_t *suplcf = conf; 1952 | ngx_http_core_loc_conf_t *clcf; 1953 | 1954 | if (suplcf->upstream) { 1955 | return "is either duplicate or collides with \"supervisord_start\""; 1956 | } 1957 | 1958 | suplcf->upstream = ngx_supervisord_find_upstream(cf, value[1]); 1959 | if (suplcf->upstream == NULL) { 1960 | ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 1961 | "supervisord_stop refers to non-existing upstream \"%V\"", 1962 | &value[1]); 1963 | 1964 | return NGX_CONF_ERROR; 1965 | } 1966 | 1967 | suplcf->command = NGX_SUPERVISORD_CMD_STOP; 1968 | 1969 | clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module); 1970 | clcf->handler = ngx_supervisord_command_handler; 1971 | 1972 | return NGX_CONF_OK; 1973 | } 1974 | -------------------------------------------------------------------------------- /ngx_supervisord.h: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2009, FRiCKLE Piotr Sikora 3 | * All rights reserved. 4 | * 5 | * This project was fully funded by megiteam.pl. 6 | * 7 | * Redistribution and use in source and binary forms, with or without 8 | * modification, are permitted provided that the following conditions 9 | * are met: 10 | * 1. 
Redistributions of source code must retain the above copyright 11 | * notice, this list of conditions and the following disclaimer. 12 | * 2. Redistributions in binary form must reproduce the above copyright 13 | * notice, this list of conditions and the following disclaimer in the 14 | * documentation and/or other materials provided with the distribution. 15 | * 16 | * THIS SOFTWARE IS PROVIDED BY FRiCKLE PIOTR SIKORA AND CONTRIBUTORS 17 | * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 18 | * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 19 | * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL FRiCKLE PIOTR 20 | * SIKORA OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 21 | * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 22 | * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 23 | * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 24 | * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 25 | * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 26 | * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 27 | */ 28 | 29 | #ifndef _NGX_SUPERVISORD_H_ 30 | #define _NGX_SUPERVISORD_H_ 31 | 32 | #include 33 | 34 | #define NGX_SUPERVISORD_API_VERSION 2 35 | 36 | #define NGX_SUPERVISORD_CMD_START 1 37 | #define NGX_SUPERVISORD_CMD_STOP 2 38 | 39 | #define NGX_SUPERVISORD_SRV_UP 0 /* peer.down == 0 */ 40 | #define NGX_SUPERVISORD_SRV_DOWN 1 /* peer.down != 0 && == 1 */ 41 | #define NGX_SUPERVISORD_SRV_STARTING_UP 2 /* peer.down != 0 */ 42 | #define NGX_SUPERVISORD_SRV_SHUTTING_DOWN 3 /* peer.down != 0 */ 43 | 44 | #define NGX_SUPERVISORD_LOAD_MULTIPLIER 100 45 | 46 | /* 47 | * ngx_supervisord_check_servers: 48 | * This function should be called at the end of peer.init() instead of 49 | * returning NGX_OK. It halts processing of the request when all backends 50 | * are down and resumes it after starting first one: 51 | * 52 | * Parameters: 53 | * r - incoming request. 54 | * 55 | * Return values: 56 | * NGX_OK - at least 1 server is active, 57 | * NGX_BUSY - no active servers, 58 | * NGX_DECLINED - wrong parameters, 59 | * NGX_ERROR - internal nginx error. 60 | */ 61 | ngx_int_t ngx_supervisord_check_servers(ngx_http_request_t *r); 62 | 63 | /* 64 | * ngx_supervisord_execute: 65 | * Try to execute supervisord's command. 66 | * 67 | * Parameters: 68 | * uscf - upstream{}'s server configuration, 69 | * cmd - NGX_SUPERVISORD_CMD_* command, 70 | * backend - backend number (from original servers list), 71 | * value -1 means "first available". 72 | * checker - *OPTIONAL* function, which must return NGX_OK before 73 | * ngx_supervisord will try to send command to supervisord. 74 | * 75 | * Return values: 76 | * NGX_OK - command queued successfully. 77 | * NGX_DECLINED - wrong parameters, 78 | * NGX_ERROR - internal nginx error. 79 | * 80 | * IMPORTANT NOTE: 81 | * Returned NGX_OK *DOES NOT* mean that the command was completed, 82 | * it means that request was processed successfully by ngx_supervisord. 
83 | */ 84 | typedef ngx_int_t (*ngx_supervisord_checker_pt)( 85 | ngx_http_upstream_srv_conf_t *uscf, 86 | ngx_uint_t backend); 87 | 88 | ngx_int_t ngx_supervisord_execute( 89 | ngx_http_upstream_srv_conf_t *uscf, 90 | ngx_uint_t cmd, 91 | ngx_int_t backend, 92 | ngx_supervisord_checker_pt checker); 93 | 94 | /* 95 | * ngx_supervisord_add_backend_monitor: 96 | * Register callback function, which will be invoked after every change 97 | * in status of backend server. 98 | * 99 | * Parameters: 100 | * uscf - upstream{}'s server configuration, 101 | * cb - callback function. 102 | * 103 | * IMPORTANT NOTE: 104 | * Callback function shouldn't do more than update status of backend server 105 | * in its internal list. It *MUST NOT* resume processing of any requests. 106 | */ 107 | typedef void (*ngx_supervisord_backend_pt)( 108 | ngx_http_upstream_srv_conf_t *uscf, 109 | ngx_uint_t backend, 110 | ngx_uint_t new_status); 111 | 112 | ngx_int_t ngx_supervisord_add_backend_monitor( 113 | ngx_http_upstream_srv_conf_t *uscf, 114 | ngx_supervisord_backend_pt cb); 115 | 116 | /* 117 | * ngx_supervisord_add_load_monitor: 118 | * Register callback function, which will be invoked periodically 119 | * with informations about current load. 120 | * 121 | * Parameters: 122 | * uscf - upstream{}'s server configuration, 123 | * cb - callback function. 124 | */ 125 | typedef struct { 126 | ngx_uint_t load; /* requests per second per active backend */ 127 | ngx_uint_t reqs; /* requests since last load report */ 128 | ngx_msec_t interval; /* interval between load reports */ 129 | ngx_uint_t aservers; /* number of currently active upstream servers */ 130 | ngx_uint_t nservers; /* total number of configured upstream servers */ 131 | } ngx_supervisord_load_t; 132 | 133 | typedef void (*ngx_supervisord_load_pt)( 134 | ngx_http_upstream_srv_conf_t *uscf, 135 | ngx_supervisord_load_t load); 136 | 137 | ngx_int_t ngx_supervisord_add_load_monitor( 138 | ngx_http_upstream_srv_conf_t *uscf, 139 | ngx_supervisord_load_pt cb); 140 | 141 | #endif /* !_NGX_SUPERVISORD_H_ */ 142 | -------------------------------------------------------------------------------- /patches/ngx_http_upstream_fair_module.patch: -------------------------------------------------------------------------------- 1 | --- ngx_http_upstream_fair_module.c.orig Wed Sep 23 17:38:16 2009 2 | +++ ngx_http_upstream_fair_module.c Thu Apr 29 04:36:07 2010 3 | @@ -8,8 +8,18 @@ 4 | #include 5 | #include 6 | #include 7 | +#include 8 | 9 | +#if (NGX_SUPERVISORD_API_VERSION != 2) 10 | + #error "ngx_http_upstream_fair_module requires NGX_SUPERVISORD_API v2" 11 | +#endif 12 | + 13 | typedef struct { 14 | + ngx_uint_t load_threshold; 15 | + ngx_uint_t min_servers; 16 | +} ngx_http_upstream_fair_srv_conf_t; 17 | + 18 | +typedef struct { 19 | ngx_uint_t nreq; 20 | ngx_uint_t total_req; 21 | ngx_uint_t last_req_id; 22 | @@ -37,6 +47,7 @@ 23 | struct sockaddr *sockaddr; 24 | socklen_t socklen; 25 | ngx_str_t name; 26 | + ngx_uint_t onumber; 27 | 28 | ngx_uint_t weight; 29 | ngx_uint_t max_fails; 30 | @@ -74,6 +85,7 @@ 31 | #define NGX_PEER_INVALID (~0UL) 32 | 33 | typedef struct { 34 | + ngx_http_upstream_srv_conf_t *uscf; 35 | ngx_http_upstream_fair_peers_t *peers; 36 | ngx_uint_t current; 37 | uintptr_t *tried; 38 | @@ -97,6 +109,12 @@ 39 | ngx_command_t *cmd, void *conf); 40 | static ngx_int_t ngx_http_upstream_fair_init_module(ngx_cycle_t *cycle); 41 | 42 | +static void *ngx_http_upstream_fair_create_srv_conf(ngx_conf_t *cf); 43 | +static char 
*ngx_http_upstream_fair_set_threshold(ngx_conf_t *cf, 44 | + ngx_command_t *cmd, void *conf); 45 | +static char *ngx_http_upstream_fair_set_min_servers(ngx_conf_t *cf, 46 | + ngx_command_t *cmd, void *conf); 47 | + 48 | #if (NGX_HTTP_EXTENDED_STATUS) 49 | static ngx_chain_t *ngx_http_upstream_fair_report_status(ngx_http_request_t *r, 50 | ngx_int_t *length); 51 | @@ -125,6 +143,20 @@ 52 | 0, 53 | NULL }, 54 | 55 | + { ngx_string("fair_load_threshold"), 56 | + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, 57 | + ngx_http_upstream_fair_set_threshold, 58 | + 0, 59 | + 0, 60 | + NULL }, 61 | + 62 | + { ngx_string("fair_min_servers"), 63 | + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1, 64 | + ngx_http_upstream_fair_set_min_servers, 65 | + 0, 66 | + 0, 67 | + NULL }, 68 | + 69 | ngx_null_command 70 | }; 71 | 72 | @@ -136,7 +168,7 @@ 73 | NULL, /* create main configuration */ 74 | NULL, /* init main configuration */ 75 | 76 | - NULL, /* create server configuration */ 77 | + ngx_http_upstream_fair_create_srv_conf, /* create server configuration */ 78 | NULL, /* merge server configuration */ 79 | 80 | NULL, /* create location configuration */ 81 | @@ -371,8 +403,95 @@ 82 | return NGX_CONF_OK; 83 | } 84 | 85 | +static void * 86 | +ngx_http_upstream_fair_create_srv_conf(ngx_conf_t *cf) 87 | +{ 88 | + ngx_http_upstream_fair_srv_conf_t *faircf; 89 | 90 | + faircf = ngx_pcalloc(cf->pool, sizeof(ngx_http_upstream_fair_srv_conf_t)); 91 | + if (faircf == NULL) { 92 | + return NGX_CONF_ERROR; 93 | + } 94 | + 95 | + faircf->load_threshold = 100; 96 | + 97 | + return faircf; 98 | +} 99 | + 100 | static char * 101 | +ngx_http_upstream_fair_set_threshold(ngx_conf_t *cf, ngx_command_t *cmd, 102 | + void *conf) 103 | +{ 104 | + ngx_str_t *value = cf->args->elts; 105 | + ngx_http_upstream_fair_srv_conf_t *faircf; 106 | + ssize_t threshold; 107 | + 108 | + faircf = ngx_http_conf_get_module_srv_conf(cf, 109 | + ngx_http_upstream_fair_module); 110 | + 111 | + if (faircf->load_threshold != 100) { 112 | + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 113 | + "fair_load_threshold already set to \"%ui\"", 114 | + faircf->load_threshold); 115 | + 116 | + return NGX_CONF_ERROR; 117 | + } 118 | + 119 | + threshold = ngx_parse_size(&value[1]); 120 | + 121 | + if (threshold == NGX_ERROR) { 122 | + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 123 | + "fair_load_threshold value must be a number > 0"); 124 | + 125 | + return NGX_CONF_ERROR; 126 | + } 127 | + 128 | + if (threshold == 0) { 129 | + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 130 | + "fair_load_threshold value must be a number > 0"); 131 | + 132 | + return NGX_CONF_ERROR; 133 | + } 134 | + 135 | + faircf->load_threshold = (ngx_uint_t) threshold; 136 | + 137 | + return NGX_CONF_OK; 138 | +} 139 | + 140 | +static char * 141 | +ngx_http_upstream_fair_set_min_servers(ngx_conf_t *cf, ngx_command_t *cmd, 142 | + void *conf) 143 | +{ 144 | + ngx_str_t *value = cf->args->elts; 145 | + ngx_http_upstream_fair_srv_conf_t *faircf; 146 | + ssize_t min_servers; 147 | + 148 | + faircf = ngx_http_conf_get_module_srv_conf(cf, 149 | + ngx_http_upstream_fair_module); 150 | + 151 | + if (faircf->min_servers != 0) { 152 | + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 153 | + "fair_min_servers already set to \"%ui\"", 154 | + faircf->min_servers); 155 | + 156 | + return NGX_CONF_ERROR; 157 | + } 158 | + 159 | + min_servers = ngx_parse_size(&value[1]); 160 | + 161 | + if (min_servers == NGX_ERROR) { 162 | + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 163 | + "fair_min_servers value must be a number"); 164 | + 165 | + return 
NGX_CONF_ERROR; 166 | + } 167 | + 168 | + faircf->min_servers = (ngx_uint_t) min_servers; 169 | + 170 | + return NGX_CONF_OK; 171 | +} 172 | + 173 | +static char * 174 | ngx_http_upstream_fair(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) 175 | { 176 | ngx_http_upstream_srv_conf_t *uscf; 177 | @@ -474,6 +593,7 @@ 178 | peers->peer[n].fail_timeout = server[i].fail_timeout; 179 | peers->peer[n].down = server[i].down; 180 | peers->peer[n].weight = server[i].down ? 0 : server[i].weight; 181 | + peers->peer[n].onumber = i; 182 | n++; 183 | } 184 | } 185 | @@ -524,6 +644,7 @@ 186 | backup->peer[n].max_fails = server[i].max_fails; 187 | backup->peer[n].fail_timeout = server[i].fail_timeout; 188 | backup->peer[n].down = server[i].down; 189 | + backup->peer[n].onumber = i; 190 | n++; 191 | } 192 | } 193 | @@ -580,6 +701,7 @@ 194 | peers->peer[i].weight = 1; 195 | peers->peer[i].max_fails = 1; 196 | peers->peer[i].fail_timeout = 10; 197 | + peers->peer[i].onumber = 0; 198 | } 199 | 200 | us->peer.data = peers; 201 | @@ -589,6 +711,89 @@ 202 | return NGX_OK; 203 | } 204 | 205 | +ngx_int_t 206 | +ngx_http_upstream_fair_stop_checker(ngx_http_upstream_srv_conf_t *uscf, 207 | + ngx_uint_t backend) 208 | +{ 209 | + ngx_http_upstream_fair_peers_t *peers; 210 | + ngx_atomic_t *lock; 211 | + ngx_uint_t i; 212 | + 213 | + peers = uscf->peer.data; 214 | + if (peers->shared == NULL) { 215 | + return NGX_OK; 216 | + } 217 | + 218 | + lock = &peers->shared->lock; 219 | + ngx_spinlock(lock, ngx_pid, 1024); 220 | + 221 | + for (i = 0; i < peers->number; i++) { 222 | + if (peers->peer[i].onumber == backend) { 223 | + if (peers->peer[i].shared->nreq > 0) { 224 | + ngx_spinlock_unlock(lock); 225 | + return NGX_DECLINED; 226 | + } 227 | + } 228 | + } 229 | + 230 | + ngx_spinlock_unlock(lock); 231 | + return NGX_OK; 232 | +} 233 | + 234 | +void 235 | +ngx_http_upstream_fair_backend_monitor(ngx_http_upstream_srv_conf_t *uscf, 236 | + ngx_uint_t backend, ngx_uint_t new_status) 237 | +{ 238 | + ngx_http_upstream_fair_peers_t *peers; 239 | + ngx_uint_t i; 240 | + 241 | + peers = uscf->peer.data; 242 | + for (i = 0; i < peers->number; i++) { 243 | + if (peers->peer[i].onumber == backend) { 244 | + peers->peer[i].down = (new_status == NGX_SUPERVISORD_SRV_UP) 245 | + ? 
0 : 1; 246 | + } 247 | + } 248 | +} 249 | + 250 | +void 251 | +ngx_http_upstream_fair_load_monitor(ngx_http_upstream_srv_conf_t *uscf, 252 | + ngx_supervisord_load_t report) 253 | +{ 254 | + ngx_http_upstream_fair_srv_conf_t *faircf; 255 | + 256 | + faircf = ngx_http_conf_upstream_srv_conf(uscf, 257 | + ngx_http_upstream_fair_module); 258 | + 259 | + if ((report.load > faircf->load_threshold 260 | + * NGX_SUPERVISORD_LOAD_MULTIPLIER) 261 | + && (report.aservers < report.nservers)) 262 | + { 263 | + (void) ngx_supervisord_execute(uscf, NGX_SUPERVISORD_CMD_START, -1, 264 | + NULL); 265 | + return; 266 | + } 267 | + 268 | + if (report.aservers <= faircf->min_servers) { 269 | + return; 270 | + } 271 | + 272 | + if ((report.reqs == 0) && (report.aservers != 0)) { 273 | + (void) ngx_supervisord_execute(uscf, NGX_SUPERVISORD_CMD_STOP, -1, 274 | + ngx_http_upstream_fair_stop_checker); 275 | + return; 276 | + } 277 | + 278 | + if ((report.load < faircf->load_threshold 279 | + * NGX_SUPERVISORD_LOAD_MULTIPLIER * 4 / 10) 280 | + && (report.aservers > 1)) 281 | + { 282 | + (void) ngx_supervisord_execute(uscf, NGX_SUPERVISORD_CMD_STOP, -1, 283 | + ngx_http_upstream_fair_stop_checker); 284 | + return; 285 | + } 286 | +} 287 | + 288 | static ngx_int_t 289 | ngx_http_upstream_init_fair(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us) 290 | { 291 | @@ -601,6 +806,12 @@ 292 | return NGX_ERROR; 293 | } 294 | 295 | + ngx_supervisord_add_backend_monitor(us, 296 | + ngx_http_upstream_fair_backend_monitor); 297 | + 298 | + ngx_supervisord_add_load_monitor(us, 299 | + ngx_http_upstream_fair_load_monitor); 300 | + 301 | /* setup our wrapper around rr */ 302 | peers = ngx_palloc(cf->pool, sizeof *peers); 303 | if (peers == NULL) { 304 | @@ -976,6 +1187,10 @@ 305 | 306 | peer->shared->fails++; 307 | peer->accessed = ngx_time(); 308 | + 309 | + peer->shared->fails = 0; 310 | + (void) ngx_supervisord_execute(fp->uscf, NGX_SUPERVISORD_CMD_STOP, 311 | + peer->onumber, NULL); 312 | } 313 | ngx_spinlock_unlock(lock); 314 | } 315 | @@ -1119,6 +1334,7 @@ 316 | 317 | usfp = us->peer.data; 318 | 319 | + fp->uscf = us; 320 | fp->tried = ngx_bitvector_alloc(r->pool, usfp->number, &fp->data); 321 | fp->done = ngx_bitvector_alloc(r->pool, usfp->number, &fp->data2); 322 | 323 | @@ -1147,7 +1363,7 @@ 324 | ngx_http_upstream_fair_save_session; 325 | #endif 326 | 327 | - return NGX_OK; 328 | + return ngx_supervisord_check_servers(r); 329 | } 330 | 331 | #if (NGX_HTTP_SSL) 332 | -------------------------------------------------------------------------------- /patches/ngx_http_upstream_init_busy-0.8.0.patch: -------------------------------------------------------------------------------- 1 | Copyright 2008, 2009 Engine Yard, Inc. All rights reserved. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy 4 | of this software and associated documentation files (the "Software"), to 5 | deal in the Software without restriction, including without limitation the 6 | rights to use, copy, modify, merge, publish, distribute, sublicense, and/or 7 | sell copies of the Software, and to permit persons to whom the Software is 8 | furnished to do so, subject to the following conditions: 9 | 10 | The above copyright notice and this permission notice shall be included in 11 | all copies or substantial portions of the Software. 
12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 18 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS 19 | IN THE SOFTWARE. 20 | 21 | diff -Naur ../nginx-0.8.0/src/http/ngx_http_upstream.c ./src/http/ngx_http_upstream.c 22 | --- ../nginx-0.8.0/src/http/ngx_http_upstream.c 2009-06-02 18:09:44.000000000 +0200 23 | +++ ./src/http/ngx_http_upstream.c 2009-06-05 09:46:09.000000000 +0200 24 | @@ -21,8 +21,6 @@ 25 | static void ngx_http_upstream_wr_check_broken_connection(ngx_http_request_t *r); 26 | static void ngx_http_upstream_check_broken_connection(ngx_http_request_t *r, 27 | ngx_event_t *ev); 28 | -static void ngx_http_upstream_connect(ngx_http_request_t *r, 29 | - ngx_http_upstream_t *u); 30 | static ngx_int_t ngx_http_upstream_reinit(ngx_http_request_t *r, 31 | ngx_http_upstream_t *u); 32 | static void ngx_http_upstream_send_request(ngx_http_request_t *r, 33 | @@ -524,12 +522,12 @@ 34 | 35 | found: 36 | 37 | - if (uscf->peer.init(r, uscf) != NGX_OK) { 38 | - ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); 39 | - return; 40 | + switch(uscf->peer.init(r, uscf)) { 41 | + case NGX_OK: ngx_http_upstream_connect(r, u); 42 | + case NGX_BUSY: return; 43 | } 44 | - 45 | - ngx_http_upstream_connect(r, u); 46 | + 47 | + ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); 48 | } 49 | 50 | 51 | @@ -808,9 +806,12 @@ 52 | return; 53 | } 54 | 55 | + /* max_connections patch */ 56 | + /* 57 | if (u->peer.connection == NULL) { 58 | return; 59 | } 60 | + */ 61 | 62 | #if (NGX_HAVE_KQUEUE) 63 | 64 | @@ -911,7 +912,7 @@ 65 | } 66 | 67 | 68 | -static void 69 | +void 70 | ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) 71 | { 72 | ngx_int_t rc; 73 | @@ -2765,6 +2765,17 @@ 74 | return; 75 | } 76 | 77 | + if (ft_type == NGX_HTTP_UPSTREAM_FT_NOLIVE) { 78 | + switch(u->conf->upstream->peer.init(r, u->conf->upstream)) { 79 | + case NGX_OK: ngx_http_upstream_connect(r, u); 80 | + case NGX_BUSY: return; 81 | + } 82 | + 83 | + ngx_http_upstream_finalize_request(r, u, 84 | + NGX_HTTP_INTERNAL_SERVER_ERROR); 85 | + return; 86 | + } 87 | + 88 | if (status) { 89 | u->state->status = status; 90 | 91 | diff -Naur ../nginx-0.8.0/src/http/ngx_http_upstream.h ./src/http/ngx_http_upstream.h 92 | --- ../nginx-0.8.0/src/http/ngx_http_upstream.h 2009-05-19 15:27:27.000000000 +0200 93 | +++ ./src/http/ngx_http_upstream.h 2009-06-05 09:46:09.000000000 +0200 94 | @@ -317,6 +317,8 @@ 95 | ngx_http_variable_value_t *v, uintptr_t data); 96 | 97 | void ngx_http_upstream_init(ngx_http_request_t *r); 98 | +#define NGX_HTTP_UPSTREAM_INIT_BUSY_PATCH_VERSION 1 99 | +void ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u); 100 | ngx_http_upstream_srv_conf_t *ngx_http_upstream_add(ngx_conf_t *cf, 101 | ngx_url_t *u, ngx_uint_t flags); 102 | ngx_int_t ngx_http_upstream_hide_headers_hash(ngx_conf_t *cf, 103 | -------------------------------------------------------------------------------- /patches/ngx_http_upstream_init_busy-0.8.17.patch: -------------------------------------------------------------------------------- 1 | Copyright 2008, 2009 Engine Yard, Inc. All rights reserved. 
2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy 4 | of this software and associated documentation files (the "Software"), to 5 | deal in the Software without restriction, including without limitation the 6 | rights to use, copy, modify, merge, publish, distribute, sublicense, and/or 7 | sell copies of the Software, and to permit persons to whom the Software is 8 | furnished to do so, subject to the following conditions: 9 | 10 | The above copyright notice and this permission notice shall be included in 11 | all copies or substantial portions of the Software. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 18 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS 19 | IN THE SOFTWARE. 20 | 21 | diff -Naur ../nginx-0.8.0/src/http/ngx_http_upstream.c ./src/http/ngx_http_upstream.c 22 | --- ../nginx-0.8.0/src/http/ngx_http_upstream.c 2009-06-02 18:09:44.000000000 +0200 23 | +++ ./src/http/ngx_http_upstream.c 2009-06-05 09:46:09.000000000 +0200 24 | @@ -24,8 +24,6 @@ 25 | static void ngx_http_upstream_wr_check_broken_connection(ngx_http_request_t *r); 26 | static void ngx_http_upstream_check_broken_connection(ngx_http_request_t *r, 27 | ngx_event_t *ev); 28 | -static void ngx_http_upstream_connect(ngx_http_request_t *r, 29 | - ngx_http_upstream_t *u); 30 | static ngx_int_t ngx_http_upstream_reinit(ngx_http_request_t *r, 31 | ngx_http_upstream_t *u); 32 | static void ngx_http_upstream_send_request(ngx_http_request_t *r, 33 | @@ -598,13 +599,12 @@ 34 | 35 | found: 36 | 37 | - if (uscf->peer.init(r, uscf) != NGX_OK) { 38 | - ngx_http_upstream_finalize_request(r, u, 39 | - NGX_HTTP_INTERNAL_SERVER_ERROR); 40 | - return; 41 | + switch(uscf->peer.init(r, uscf)) { 42 | + case NGX_OK: ngx_http_upstream_connect(r, u); 43 | + case NGX_BUSY: return; 44 | } 45 | - 46 | - ngx_http_upstream_connect(r, u); 47 | + 48 | + ngx_http_upstream_finalize_request(r, u, NGX_HTTP_INTERNAL_SERVER_ERROR); 49 | } 50 | 51 | 52 | @@ -1023,7 +1024,7 @@ 53 | } 54 | 55 | 56 | -static void 57 | +void 58 | ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u) 59 | { 60 | ngx_int_t rc; 61 | @@ -2765,6 +2765,17 @@ 62 | return; 63 | } 64 | 65 | + if (ft_type == NGX_HTTP_UPSTREAM_FT_NOLIVE) { 66 | + switch(u->conf->upstream->peer.init(r, u->conf->upstream)) { 67 | + case NGX_OK: ngx_http_upstream_connect(r, u); 68 | + case NGX_BUSY: return; 69 | + } 70 | + 71 | + ngx_http_upstream_finalize_request(r, u, 72 | + NGX_HTTP_INTERNAL_SERVER_ERROR); 73 | + return; 74 | + } 75 | + 76 | if (status) { 77 | u->state->status = status; 78 | 79 | diff -Naur ../nginx-0.8.0/src/http/ngx_http_upstream.h ./src/http/ngx_http_upstream.h 80 | --- ../nginx-0.8.0/src/http/ngx_http_upstream.h 2009-05-19 15:27:27.000000000 +0200 81 | +++ ./src/http/ngx_http_upstream.h 2009-06-05 09:46:09.000000000 +0200 82 | @@ -317,6 +317,8 @@ 83 | ngx_http_variable_value_t *v, uintptr_t data); 84 | 85 | void ngx_http_upstream_init(ngx_http_request_t *r); 86 | +#define NGX_HTTP_UPSTREAM_INIT_BUSY_PATCH_VERSION 1 87 | +void ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u); 88 | ngx_http_upstream_srv_conf_t 
*ngx_http_upstream_add(ngx_conf_t *cf, 89 | ngx_url_t *u, ngx_uint_t flags); 90 | ngx_int_t ngx_http_upstream_hide_headers_hash(ngx_conf_t *cf, 91 | -------------------------------------------------------------------------------- /patches/ngx_http_upstream_round_robin.patch: -------------------------------------------------------------------------------- 1 | --- src/http/ngx_http_upstream_round_robin.c.orig Mon Jan 4 05:26:58 2010 2 | +++ src/http/ngx_http_upstream_round_robin.c Mon Jan 4 05:27:44 2010 3 | @@ -7,14 +7,46 @@ 4 | #include 5 | #include 6 | #include 7 | +#include 8 | 9 | +#if (NGX_SUPERVISORD_API_VERSION != 2) 10 | + #error "ngx_supervisord-aware upstream requires NGX_SUPERVISORD_API v2" 11 | +#endif 12 | 13 | + 14 | +/* 15 | + * disable sorting 16 | static ngx_int_t ngx_http_upstream_cmp_servers(const void *one, 17 | const void *two); 18 | + */ 19 | static ngx_uint_t 20 | ngx_http_upstream_get_peer(ngx_http_upstream_rr_peers_t *peers); 21 | 22 | 23 | +void 24 | +ngx_http_upstream_backend_monitor(ngx_http_upstream_srv_conf_t *uscf, 25 | + ngx_uint_t backend, ngx_uint_t new_status) 26 | +{ 27 | + ngx_http_upstream_rr_peers_t *peers; 28 | + ngx_uint_t i; 29 | + 30 | + peers = uscf->peer.data; 31 | + for (i = 0; i < peers->number; i++) { 32 | + if (peers->peer[i].onumber == backend) { 33 | + if (new_status == NGX_SUPERVISORD_SRV_UP) { 34 | + peers->peer[i].down = 0; 35 | + peers->peer[i].weight = peers->peer[i].oweight; 36 | + peers->peer[i].current_weight = peers->peer[i].oweight; 37 | + } else { 38 | + peers->peer[i].down = 1; 39 | + peers->peer[i].weight = 0; 40 | + peers->peer[i].current_weight = 0; 41 | + } 42 | + } 43 | + } 44 | +} 45 | + 46 | + 47 | ngx_int_t 48 | ngx_http_upstream_init_round_robin(ngx_conf_t *cf, 49 | ngx_http_upstream_srv_conf_t *us) 50 | @@ -27,6 +59,9 @@ 51 | us->peer.init = ngx_http_upstream_init_round_robin_peer; 52 | 53 | if (us->servers) { 54 | + ngx_supervisord_add_backend_monitor(us, 55 | + ngx_http_upstream_backend_monitor); 56 | + 57 | server = us->servers->elts; 58 | 59 | n = 0; 60 | @@ -65,15 +100,20 @@ 61 | peers->peer[n].down = server[i].down; 62 | peers->peer[n].weight = server[i].down ? 
0 : server[i].weight; 63 | peers->peer[n].current_weight = peers->peer[n].weight; 64 | + peers->peer[n].onumber = i; 65 | + peers->peer[n].oweight = server[i].weight; 66 | n++; 67 | } 68 | } 69 | 70 | us->peer.data = peers; 71 | 72 | +/* 73 | + * disable sorting 74 | ngx_sort(&peers->peer[0], (size_t) n, 75 | sizeof(ngx_http_upstream_rr_peer_t), 76 | ngx_http_upstream_cmp_servers); 77 | + */ 78 | 79 | /* backup servers */ 80 | 81 | @@ -118,15 +158,20 @@ 82 | backup->peer[n].max_fails = server[i].max_fails; 83 | backup->peer[n].fail_timeout = server[i].fail_timeout; 84 | backup->peer[n].down = server[i].down; 85 | + backup->peer[n].onumber = i; 86 | + backup->peer[n].oweight = server[i].weight; 87 | n++; 88 | } 89 | } 90 | 91 | peers->next = backup; 92 | 93 | +/* 94 | + * disable sorting 95 | ngx_sort(&backup->peer[0], (size_t) n, 96 | sizeof(ngx_http_upstream_rr_peer_t), 97 | ngx_http_upstream_cmp_servers); 98 | + */ 99 | 100 | return NGX_OK; 101 | } 102 | @@ -176,6 +221,8 @@ 103 | peers->peer[i].current_weight = 1; 104 | peers->peer[i].max_fails = 1; 105 | peers->peer[i].fail_timeout = 10; 106 | + peers->peer[i].onumber = 0; 107 | + peers->peer[i].oweight = 1; 108 | } 109 | 110 | us->peer.data = peers; 111 | @@ -186,6 +233,8 @@ 112 | } 113 | 114 | 115 | +/* 116 | + * disable sorting 117 | static ngx_int_t 118 | ngx_http_upstream_cmp_servers(const void *one, const void *two) 119 | { 120 | @@ -196,6 +245,7 @@ 121 | 122 | return (first->weight < second->weight); 123 | } 124 | + */ 125 | 126 | 127 | ngx_int_t 128 | @@ -216,6 +266,7 @@ 129 | r->upstream->peer.data = rrp; 130 | } 131 | 132 | + rrp->uscf = us; 133 | rrp->peers = us->peer.data; 134 | rrp->current = 0; 135 | 136 | @@ -243,7 +293,7 @@ 137 | ngx_http_upstream_save_round_robin_peer_session; 138 | #endif 139 | 140 | - return NGX_OK; 141 | + return ngx_supervisord_check_servers(r); 142 | } 143 | 144 | 145 | @@ -402,6 +452,16 @@ 146 | } else { 147 | 148 | /* there are several peers */ 149 | + 150 | + for (i = 0; i < rrp->peers->number; i++) { 151 | + if (!rrp->peers->peer[i].down) { 152 | + break; 153 | + } 154 | + } 155 | + 156 | + if (i == rrp->peers->number) { 157 | + goto failed; 158 | + } 159 | 160 | if (pc->tries == rrp->peers->number) { 161 | 162 | @@ -668,6 +729,10 @@ 163 | if (peer->current_weight < 0) { 164 | peer->current_weight = 0; 165 | } 166 | + 167 | + peer->fails = 0; 168 | + (void) ngx_supervisord_execute(rrp->uscf, NGX_SUPERVISORD_CMD_STOP, 169 | + peer->onumber, NULL); 170 | 171 | /* ngx_unlock_mutex(rrp->peers->mutex); */ 172 | } 173 | --- src/http/ngx_http_upstream_round_robin.h.orig Mon Jan 4 05:27:00 2010 174 | +++ src/http/ngx_http_upstream_round_robin.h Mon Jan 4 05:27:05 2010 175 | @@ -17,9 +17,11 @@ 176 | struct sockaddr *sockaddr; 177 | socklen_t socklen; 178 | ngx_str_t name; 179 | + ngx_uint_t onumber; 180 | 181 | ngx_int_t current_weight; 182 | ngx_int_t weight; 183 | + ngx_int_t oweight; 184 | 185 | ngx_uint_t fails; 186 | time_t accessed; 187 | @@ -54,6 +56,7 @@ 188 | 189 | 190 | typedef struct { 191 | + ngx_http_upstream_srv_conf_t *uscf; 192 | ngx_http_upstream_rr_peers_t *peers; 193 | ngx_uint_t current; 194 | uintptr_t *tried; 195 | --------------------------------------------------------------------------------
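
EXAMPLE: HOOKING A BALANCER INTO THE ngx_supervisord API (editor's sketch):
---------------------------------------------------------------------------
The fragment below is an illustrative outline, not part of the original
distribution, showing how a third-party load balancer could use the API
declared in ngx_supervisord.h, following the same pattern as the included
patches: register a backend monitor at upstream-initialization time and end
peer.init() with ngx_supervisord_check_servers(). All "example_*" names are
hypothetical placeholders; only the ngx_supervisord_* functions, types and
NGX_SUPERVISORD_* constants come from the header above.

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <ngx_supervisord.h>

/* Invoked after every supervisord-driven change of a backend's status.
 * Per the note in ngx_supervisord.h it must only update the balancer's
 * internal peer state and must never resume request processing. */
static void
example_backend_monitor(ngx_http_upstream_srv_conf_t *uscf,
    ngx_uint_t backend, ngx_uint_t new_status)
{
    ngx_log_error(NGX_LOG_INFO, ngx_cycle->log, 0,
                  "[example] upstream: %V, backend %ui changed to status %ui",
                  &uscf->host, backend, new_status);

    /* a real balancer would mark its peer "backend" as down here
     * unless new_status == NGX_SUPERVISORD_SRV_UP */
}

/* configuration-time upstream initialization */
static ngx_int_t
example_init_upstream(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us)
{
    /* ... build the peer list and set us->peer.init as usual ... */

    (void) ngx_supervisord_add_backend_monitor(us, example_backend_monitor);

    return NGX_OK;
}

/* per-request peer initialization (us->peer.init) */
static ngx_int_t
example_init_peer(ngx_http_request_t *r, ngx_http_upstream_srv_conf_t *us)
{
    /* ... set r->upstream->peer.get / peer.free / peer.data as usual ... */

    /* instead of "return NGX_OK": NGX_OK means at least one backend is
     * alive, NGX_BUSY means the request was parked until supervisord
     * brings the first backend up */
    return ngx_supervisord_check_servers(r);
}

Both bundled patches end their peer initialization with exactly this
ngx_supervisord_check_servers() call; the NGX_BUSY path additionally relies
on the ngx_http_upstream_init_busy patch, whose switch() on peer.init()
treats NGX_BUSY as "request parked" instead of an error.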