├── .gitignore
├── README.md
├── caddy
│   ├── Caddyfile
│   ├── caddy.tf
│   └── caddy_setup.sh
├── client
│   ├── client_setup.sh
│   └── run_tests.sh
├── docker
│   └── docker_setup.sh
├── envoy
│   ├── envoy.tf
│   ├── envoy.yaml
│   └── envoy_setup.sh
├── haproxy
│   ├── haproxy.cfg
│   ├── haproxy.tf
│   └── haproxy_setup.sh
├── images
│   ├── average_response_time.png
│   ├── latency.png
│   └── requests_per_second.png
├── main.tf
├── nginx
│   ├── nginx.conf
│   ├── nginx.tf
│   └── nginx_setup.sh
├── traefik
│   ├── dynamic_conf.toml
│   ├── traefik.tf
│   ├── traefik.toml
│   └── traefik_setup.sh
├── variables.tf
└── webserver
    ├── docker-compose.yml
    └── webserver_setup.sh

/.gitignore:
--------------------------------------------------------------------------------
.terraform
*.pem
*.tfstate
*.tfstate.backup
*.terraform.tfstate.lock.info

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Benchmarks

Tests the performance of various proxies/load balancers. Based on the blog post https://www.loggly.com/blog/benchmarking-5-popular-load-balancers-nginx-haproxy-envoy-traefik-and-alb/.

We test the following proxies:

* Caddy
* Envoy
* HAProxy
* NGINX
* Traefik

**IMPORTANT! Be sure to SSH into the client VM and run the tests against the proxy_server VM from there. Running the tests from within the AWS VPC keeps Internet latency from distorting the results.**

Defaults to the AWS "US-East-2 (Ohio)" region.

NOTE: The AWS provider for Terraform can be finicky. The deployment may or may not work the first time. If it fails, taint the affected resource (for example, `terraform taint aws_instance.proxy_server`) and apply again.

## Setup

Perform these steps:

1. In the [AWS Console](https://console.aws.amazon.com), create a new SSH keypair (default name is "benchmarks"):
   * Go to __EC2 > Key Pairs > Create Key Pair__.
   * Name it "benchmarks".
   * Save the .pem file to this project's directory.
   * Update the file's permissions with `chmod 400 ./benchmarks.pem`
2. Run:
   ```
   terraform init
   terraform apply -auto-approve -var 'aws_access_key=<your access key>' -var 'aws_secret_key=<your secret key>'
   ```
3. Log into the *client* server with `ssh -i ./benchmarks.pem ubuntu@<client public IP>` (the `client` Terraform output prints this exact command) and run the tests:

   ```
   /tmp/run_tests.sh | tee results.txt
   ```

Performance tests use https://github.com/rakyll/hey.
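
For reference, `run_tests.sh` drives every proxy with the same hey invocation: 100,000 requests at a concurrency of 250. To probe a single proxy by hand from the client VM (here Caddy on port 8000), the equivalent command is:

```
/home/ubuntu/hey -n 100000 -c 250 -m GET http://192.168.0.11:8000
```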

To tear down the servers:

```
terraform destroy -auto-approve -var 'aws_access_key=<your access key>' -var 'aws_secret_key=<your secret key>'
```

## Results using Hey

![alt text](images/requests_per_second.png "Requests per second")

![alt text](images/average_response_time.png "Average response time")

![alt text](images/latency.png "Latency")

*Graphs created using https://www.rapidtables.com/tools/bar-graph.html*

### Caddy (2.6.4)

```
Summary:
  Total:        5.5605 secs
  Slowest:      0.1183 secs
  Fastest:      0.0003 secs
  Average:      0.0136 secs
  Requests/sec: 17983.9983

  Total data:   23000000 bytes
  Size/request: 230 bytes

Response time histogram:
  0.000 [1]     |
  0.012 [50511] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.024 [39038] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.036 [8895]  |■■■■■■■
  0.048 [928]   |■
  0.059 [185]   |
  0.071 [206]   |
  0.083 [93]    |
  0.095 [95]    |
  0.107 [19]    |
  0.118 [29]    |


Latency distribution:
  10% in 0.0045 secs
  25% in 0.0075 secs
  50% in 0.0120 secs
  75% in 0.0180 secs
  90% in 0.0242 secs
  95% in 0.0283 secs
  99% in 0.0396 secs

Details (average, fastest, slowest):
  DNS+dialup:  0.0000 secs, 0.0003 secs, 0.1183 secs
  DNS-lookup:  0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:   0.0000 secs, 0.0000 secs, 0.0079 secs
  resp wait:   0.0136 secs, 0.0003 secs, 0.1159 secs
  resp read:   0.0000 secs, 0.0000 secs, 0.0055 secs

Status code distribution:
  [200] 100000 responses
```
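
*(Sanity check: hey's Requests/sec is the request count divided by the total wall time: 100000 / 5.5605 s ≈ 17,984, which matches the summary above.)*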

### Envoy (1.22.8)

```
Summary:
  Total:        3.8682 secs
  Slowest:      0.1110 secs
  Fastest:      0.0003 secs
  Average:      0.0095 secs
  Requests/sec: 25852.1492

  Total data:   25300000 bytes
  Size/request: 253 bytes

Response time histogram:
  0.000 [1]     |
  0.011 [83755] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.022 [15695] |■■■■■■■
  0.034 [299]   |
  0.045 [0]     |
  0.056 [0]     |
  0.067 [6]     |
  0.078 [50]    |
  0.089 [0]     |
  0.100 [69]    |
  0.111 [125]   |


Latency distribution:
  10% in 0.0067 secs
  25% in 0.0079 secs
  50% in 0.0089 secs
  75% in 0.0105 secs
  90% in 0.0126 secs
  95% in 0.0139 secs
  99% in 0.0195 secs

Details (average, fastest, slowest):
  DNS+dialup:  0.0000 secs, 0.0003 secs, 0.1110 secs
  DNS-lookup:  0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:   0.0000 secs, 0.0000 secs, 0.0101 secs
  resp wait:   0.0094 secs, 0.0003 secs, 0.1099 secs
  resp read:   0.0001 secs, 0.0000 secs, 0.0087 secs

Status code distribution:
  [200] 100000 responses
```

### HAProxy (2.7.3)

```
Summary:
  Total:        2.5357 secs
  Slowest:      0.0916 secs
  Fastest:      0.0002 secs
  Average:      0.0061 secs
  Requests/sec: 39437.2211

  Total data:   14000000 bytes
  Size/request: 140 bytes

Response time histogram:
  0.000 [1]     |
  0.009 [80753] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.019 [14220] |■■■■■■■
  0.028 [3604]  |■■
  0.037 [857]   |
  0.046 [213]   |
  0.055 [191]   |
  0.064 [41]    |
  0.073 [57]    |
  0.083 [57]    |
  0.092 [6]     |


Latency distribution:
  10% in 0.0009 secs
  25% in 0.0019 secs
  50% in 0.0041 secs
  75% in 0.0078 secs
  90% in 0.0138 secs
  95% in 0.0185 secs
  99% in 0.0306 secs

Details (average, fastest, slowest):
  DNS+dialup:  0.0000 secs, 0.0002 secs, 0.0916 secs
  DNS-lookup:  0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:   0.0000 secs, 0.0000 secs, 0.0087 secs
  resp wait:   0.0060 secs, 0.0002 secs, 0.0880 secs
  resp read:   0.0001 secs, 0.0000 secs, 0.0106 secs

Status code distribution:
  [200] 100000 responses
```

### NGINX (1.22.1)

```
Summary:
  Total:        2.9845 secs
  Slowest:      0.0671 secs
  Fastest:      0.0002 secs
  Average:      0.0072 secs
  Requests/sec: 33506.2571

  Total data:   12600000 bytes
  Size/request: 126 bytes

Response time histogram:
  0.000 [1]     |
  0.007 [60989] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.014 [26929] |■■■■■■■■■■■■■■■■■■
  0.020 [8369]  |■■■■■
  0.027 [2586]  |■■
  0.034 [791]   |■
  0.040 [203]   |
  0.047 [83]    |
  0.054 [44]    |
  0.060 [3]     |
  0.067 [2]     |


Latency distribution:
  10% in 0.0019 secs
  25% in 0.0035 secs
  50% in 0.0057 secs
  75% in 0.0092 secs
  90% in 0.0147 secs
  95% in 0.0187 secs
  99% in 0.0275 secs

Details (average, fastest, slowest):
  DNS+dialup:  0.0000 secs, 0.0002 secs, 0.0671 secs
  DNS-lookup:  0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:   0.0000 secs, 0.0000 secs, 0.0211 secs
  resp wait:   0.0071 secs, 0.0002 secs, 0.0670 secs
  resp read:   0.0000 secs, 0.0000 secs, 0.0207 secs

Status code distribution:
  [200] 100000 responses
```

### Traefik (2.9.8)

```
Summary:
  Total:        3.8469 secs
  Slowest:      0.1181 secs
  Fastest:      0.0003 secs
  Average:      0.0094 secs
  Requests/sec: 25994.9553

  Total data:   31000000 bytes
  Size/request: 310 bytes

Response time histogram:
  0.000 [1]     |
  0.012 [75314] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.024 [22329] |■■■■■■■■■■■■
  0.036 [1886]  |■
  0.047 [243]   |
  0.059 [23]    |
  0.071 [50]    |
  0.083 [43]    |
  0.095 [42]    |
  0.106 [50]    |
  0.118 [19]    |


Latency distribution:
  10% in 0.0033 secs
  25% in 0.0054 secs
  50% in 0.0083 secs
  75% in 0.0120 secs
  90% in 0.0164 secs
  95% in 0.0197 secs
  99% in 0.0290 secs

Details (average, fastest, slowest):
  DNS+dialup:  0.0000 secs, 0.0003 secs, 0.1181 secs
  DNS-lookup:  0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:   0.0000 secs, 0.0000 secs, 0.0268 secs
  resp wait:   0.0093 secs, 0.0003 secs, 0.1072 secs
  resp read:   0.0000 secs, 0.0000 secs, 0.0251 secs

Status code distribution:
  [200] 100000 responses
```

--------------------------------------------------------------------------------
/caddy/Caddyfile:
--------------------------------------------------------------------------------
:80

reverse_proxy 192.168.0.10:80
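
caddy_setup.sh (below) maps the container's port 80 to port 8000 on the proxy VM, so a quick hand check from the client VM before running the full suite would be (a sketch, not part of the repo's scripts):

```
curl -i http://192.168.0.11:8000/
```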

--------------------------------------------------------------------------------
/caddy/caddy.tf:
--------------------------------------------------------------------------------
variable "proxy_server_public_ip" {
  type = string
}

variable "ssh_keypair_name" {
  type = string
}

resource "null_resource" "caddy" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("./${var.ssh_keypair_name}.pem")
    host        = var.proxy_server_public_ip
  }

  provisioner "file" {
    source      = "./caddy/Caddyfile"
    destination = "/tmp/Caddyfile"
  }

  provisioner "file" {
    source      = "./caddy/caddy_setup.sh"
    destination = "/tmp/caddy_setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/caddy_setup.sh",
      "sudo /tmp/caddy_setup.sh",
    ]
  }
}

--------------------------------------------------------------------------------
/caddy/caddy_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
docker run --name caddy -d -p 8000:80 -v /tmp/Caddyfile:/etc/caddy/Caddyfile caddy:2.6.4

--------------------------------------------------------------------------------
/client/client_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive

# Install Hey
wget -O hey https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
chmod +x ./hey

# Widen the ephemeral port range and allow TIME_WAIT reuse so the client
# can sustain many short-lived connections during the tests
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_timestamps=1
ulimit -n 250000
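
Note that `ulimit -n` applies only to the shell running this setup script; the interactive SSH session that later runs hey starts with the default limit. At `-c 250` the default is normally sufficient, but for higher concurrencies the limit can be raised in that session first (a minimal sketch, assuming the stock `ubuntu` login):

```
# Run inside the SSH session that will execute hey. If the hard limit
# refuses the value, add a "nofile" entry to /etc/security/limits.conf
# and log in again.
ulimit -n 250000
```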

--------------------------------------------------------------------------------
/client/run_tests.sh:
--------------------------------------------------------------------------------
#!/bin/bash

echo "--------------------"
echo " caddy"
echo "--------------------"
/home/ubuntu/hey -n 100000 -c 250 -m GET http://192.168.0.11:8000

echo "--------------------"
echo " envoy"
echo "--------------------"
/home/ubuntu/hey -n 100000 -c 250 -m GET http://192.168.0.11:8001

echo "--------------------"
echo " haproxy"
echo "--------------------"
/home/ubuntu/hey -n 100000 -c 250 -m GET http://192.168.0.11:8002

echo "--------------------"
echo " nginx"
echo "--------------------"
/home/ubuntu/hey -n 100000 -c 250 -m GET http://192.168.0.11:8003

echo "--------------------"
echo " traefik"
echo "--------------------"
/home/ubuntu/hey -n 100000 -c 250 -m GET http://192.168.0.11:8004

--------------------------------------------------------------------------------
/docker/docker_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive

sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_timestamps=1
ulimit -n 250000

# Install Docker
if ! [ -x "$(command -v docker)" ]; then
  echo "----Installing docker----"
  apt update
  apt install -y apt-transport-https ca-certificates curl software-properties-common
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  apt update
  apt install -y docker-ce
fi

--------------------------------------------------------------------------------
/envoy/envoy.tf:
--------------------------------------------------------------------------------
variable "proxy_server_public_ip" {
  type = string
}

variable "ssh_keypair_name" {
  type = string
}

resource "null_resource" "envoy" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("./${var.ssh_keypair_name}.pem")
    host        = var.proxy_server_public_ip
  }

  provisioner "file" {
    source      = "./envoy/envoy.yaml"
    destination = "/tmp/envoy.yaml"
  }

  provisioner "file" {
    source      = "./envoy/envoy_setup.sh"
    destination = "/tmp/envoy_setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/envoy_setup.sh",
      "sudo /tmp/envoy_setup.sh",
    ]
  }
}

--------------------------------------------------------------------------------
/envoy/envoy.yaml:
--------------------------------------------------------------------------------
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { protocol: TCP, address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: some_service
  clusters:
  - name: some_service
    connect_timeout: 30s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 192.168.0.10
                port_value: 80

--------------------------------------------------------------------------------
/envoy/envoy_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
docker run --name envoy -p 8001:80 -v /tmp/envoy.yaml:/etc/envoy/envoy.yaml -d envoyproxy/envoy:v1.22.8

--------------------------------------------------------------------------------
/haproxy/haproxy.cfg:
--------------------------------------------------------------------------------
frontend frontend_server
  bind :80
  mode http
  default_backend backend_server

backend backend_server
  mode http
  balance roundrobin
  server server0 192.168.0.10:80 check
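
This config can be sanity-checked before it is wired into the benchmark by running HAProxy's check mode (`haproxy -c`) against the same image the setup script uses (a sketch; expect a warning about the missing client/server timeouts, which is harmless for a short benchmark run):

```
docker run --rm -v /tmp/haproxy.cfg:/etc/haproxy/haproxy.cfg \
  haproxytech/haproxy-ubuntu:2.7.3 haproxy -c -f /etc/haproxy/haproxy.cfg
```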

--------------------------------------------------------------------------------
/haproxy/haproxy.tf:
--------------------------------------------------------------------------------
variable "proxy_server_public_ip" {
  type = string
}

variable "ssh_keypair_name" {
  type = string
}

resource "null_resource" "haproxy" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("./${var.ssh_keypair_name}.pem")
    host        = var.proxy_server_public_ip
  }

  provisioner "file" {
    source      = "./haproxy/haproxy.cfg"
    destination = "/tmp/haproxy.cfg"
  }

  provisioner "file" {
    source      = "./haproxy/haproxy_setup.sh"
    destination = "/tmp/haproxy_setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/haproxy_setup.sh",
      "sudo /tmp/haproxy_setup.sh",
    ]
  }
}

--------------------------------------------------------------------------------
/haproxy/haproxy_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
docker run --name haproxy -d -p 8002:80 -v /tmp/haproxy.cfg:/etc/haproxy/haproxy.cfg haproxytech/haproxy-ubuntu:2.7.3

--------------------------------------------------------------------------------
/images/average_response_time.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NickMRamirez/Proxy-Benchmarks/122944ff159209d21aa7e11d0fb955d85eea9e76/images/average_response_time.png

--------------------------------------------------------------------------------
/images/latency.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NickMRamirez/Proxy-Benchmarks/122944ff159209d21aa7e11d0fb955d85eea9e76/images/latency.png

--------------------------------------------------------------------------------
/images/requests_per_second.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NickMRamirez/Proxy-Benchmarks/122944ff159209d21aa7e11d0fb955d85eea9e76/images/requests_per_second.png

--------------------------------------------------------------------------------
/main.tf:
--------------------------------------------------------------------------------
provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server*"]
  }

  owners = ["099720109477"]
}

resource "aws_vpc" "benchmarks" {
  cidr_block           = "192.168.0.0/16"
  instance_tenancy     = "default"
  enable_dns_hostnames = true
}

resource "aws_subnet" "benchmarks" {
  vpc_id                  = aws_vpc.benchmarks.id
  cidr_block              = "192.168.0.0/24"
  map_public_ip_on_launch = true
}

resource "aws_internet_gateway" "benchmarks" {
  vpc_id = aws_vpc.benchmarks.id
}

resource "aws_route_table" "benchmarks" {
  vpc_id = aws_vpc.benchmarks.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.benchmarks.id
  }
}

resource "aws_route_table_association" "benchmarks" {
  route_table_id = aws_route_table.benchmarks.id
  subnet_id      = aws_subnet.benchmarks.id
}

resource "aws_security_group" "benchmarks" {
  name        = "benchmarks_security_group"
  description = "Allow HTTP and SSH"
  vpc_id      = aws_vpc.benchmarks.id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8000
    to_port     = 8010
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# -----------------------------------------------------------------
# client server
# -----------------------------------------------------------------
resource "aws_instance" "client" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.aws_instance_type
  subnet_id              = aws_subnet.benchmarks.id
  vpc_security_group_ids = [aws_security_group.benchmarks.id]
  key_name               = var.ssh_keypair_name

  tags = {
    Name = "client"
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("./${var.ssh_keypair_name}.pem")
    host        = self.public_ip
  }

  provisioner "file" {
    source      = "./client/run_tests.sh"
    destination = "/tmp/run_tests.sh"
  }

  provisioner "file" {
    source      = "./client/client_setup.sh"
    destination = "/tmp/client_setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/client_setup.sh",
      "sudo /tmp/client_setup.sh",
    ]
  }
}

output "client" {
  value = "ssh -i ./benchmarks.pem ubuntu@${aws_instance.client.public_ip}"
}
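
# The "client" output above prints a ready-made SSH command once
# `terraform apply` finishes. Recall it at any time with:
#
#   terraform output client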

# -----------------------------------------------------------------
# web server
# -----------------------------------------------------------------
resource "aws_instance" "webserver" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.aws_instance_type
  subnet_id              = aws_subnet.benchmarks.id
  vpc_security_group_ids = [aws_security_group.benchmarks.id]
  key_name               = var.ssh_keypair_name
  private_ip             = "192.168.0.10"

  tags = {
    Name = "webserver"
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("./${var.ssh_keypair_name}.pem")
    host        = self.public_ip
  }

  provisioner "file" {
    source      = "./webserver/docker-compose.yml"
    destination = "/tmp/docker-compose.yml"
  }

  provisioner "file" {
    source      = "./webserver/webserver_setup.sh"
    destination = "/tmp/setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/setup.sh",
      "sudo /tmp/setup.sh",
    ]
  }
}

# -----------------------------------------------------------------
# proxy server
# -----------------------------------------------------------------
resource "aws_instance" "proxy_server" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.aws_instance_type
  subnet_id              = aws_subnet.benchmarks.id
  vpc_security_group_ids = [aws_security_group.benchmarks.id]
  key_name               = var.ssh_keypair_name
  private_ip             = "192.168.0.11"
  depends_on             = [aws_instance.webserver]

  tags = {
    Name = "proxy_server"
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("./${var.ssh_keypair_name}.pem")
    host        = self.public_ip
  }

  provisioner "file" {
    source      = "./docker/docker_setup.sh"
    destination = "/tmp/docker_setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/docker_setup.sh",
      "sudo /tmp/docker_setup.sh",
    ]
  }
}

# -----------------------------------------------------------------
# caddy
# -----------------------------------------------------------------
module "caddy" {
  source                 = "./caddy"
  proxy_server_public_ip = aws_instance.proxy_server.public_ip
  ssh_keypair_name       = var.ssh_keypair_name
}

# -----------------------------------------------------------------
# envoy
# -----------------------------------------------------------------
module "envoy" {
  source                 = "./envoy"
  proxy_server_public_ip = aws_instance.proxy_server.public_ip
  ssh_keypair_name       = var.ssh_keypair_name
}

# -----------------------------------------------------------------
# haproxy
# -----------------------------------------------------------------
module "haproxy" {
  source                 = "./haproxy"
  proxy_server_public_ip = aws_instance.proxy_server.public_ip
  ssh_keypair_name       = var.ssh_keypair_name
}

# -----------------------------------------------------------------
# nginx
# -----------------------------------------------------------------
module "nginx" {
  source                 = "./nginx"
  proxy_server_public_ip = aws_instance.proxy_server.public_ip
  ssh_keypair_name       = var.ssh_keypair_name
}

# -----------------------------------------------------------------
# traefik
# -----------------------------------------------------------------
module "traefik" {
  source                 = "./traefik"
  proxy_server_public_ip = aws_instance.proxy_server.public_ip
  ssh_keypair_name       = var.ssh_keypair_name
}

--------------------------------------------------------------------------------
/nginx/nginx.conf:
--------------------------------------------------------------------------------
worker_processes 3;

events {
  worker_connections 4096; ## Default: 1024
}

http {
  upstream web {
    server 192.168.0.10:80;
    keepalive 32;
  }

  server {
    listen 80;
    location / {
      proxy_http_version 1.1;
      proxy_pass http://web;
      proxy_set_header Connection "";
    }
  }
}

--------------------------------------------------------------------------------
/nginx/nginx.tf:
--------------------------------------------------------------------------------
variable "proxy_server_public_ip" {
  type = string
}

variable "ssh_keypair_name" {
  type = string
}

resource "null_resource" "nginx" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("./${var.ssh_keypair_name}.pem")
    host        = var.proxy_server_public_ip
  }

  provisioner "file" {
    source      = "./nginx/nginx.conf"
    destination = "/tmp/nginx.conf"
  }

  provisioner "file" {
    source      = "./nginx/nginx_setup.sh"
    destination = "/tmp/nginx_setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/nginx_setup.sh",
      "sudo /tmp/nginx_setup.sh",
    ]
  }
}

--------------------------------------------------------------------------------
/nginx/nginx_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
docker run --name nginx -d -p 8003:80 -v /tmp/nginx.conf:/etc/nginx/nginx.conf nginx:1.22.1
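
After provisioning, each proxy runs as a separate container on the proxy_server VM, published on ports 8000 through 8004. A quick way to confirm that all five (caddy, envoy, haproxy, nginx, traefik) came up is to run, on that VM:

```
sudo docker ps --format 'table {{.Names}}\t{{.Ports}}\t{{.Status}}'
```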

--------------------------------------------------------------------------------
/traefik/dynamic_conf.toml:
--------------------------------------------------------------------------------
[http]
  # Add the router
  [http.routers]
    [http.routers.web]
      entryPoints = ["web"]
      service = "service-foo"
      rule = "Path(`/`)"

  # Add the service
  [http.services]
    [http.services.service-foo]
      [http.services.service-foo.loadBalancer]
        [[http.services.service-foo.loadBalancer.servers]]
          url = "http://192.168.0.10:80"

--------------------------------------------------------------------------------
/traefik/traefik.tf:
--------------------------------------------------------------------------------
variable "proxy_server_public_ip" {
  type = string
}

variable "ssh_keypair_name" {
  type = string
}

resource "null_resource" "traefik" {
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("./${var.ssh_keypair_name}.pem")
    host        = var.proxy_server_public_ip
  }

  provisioner "file" {
    source      = "./traefik/traefik.toml"
    destination = "/tmp/traefik.toml"
  }

  provisioner "file" {
    source      = "./traefik/dynamic_conf.toml"
    destination = "/tmp/dynamic_conf.toml"
  }

  provisioner "file" {
    source      = "./traefik/traefik_setup.sh"
    destination = "/tmp/traefik_setup.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/traefik_setup.sh",
      "sudo /tmp/traefik_setup.sh",
    ]
  }
}

--------------------------------------------------------------------------------
/traefik/traefik.toml:
--------------------------------------------------------------------------------
[entryPoints]
  [entryPoints.web]
    address = ":80"

[providers.file]
  filename = "/etc/traefik/dynamic_conf.toml"

--------------------------------------------------------------------------------
/traefik/traefik_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
docker run --name traefik -d -p 8004:80 -v /tmp/dynamic_conf.toml:/etc/traefik/dynamic_conf.toml -v /tmp/traefik.toml:/etc/traefik/traefik.toml traefik:v2.9.8

--------------------------------------------------------------------------------
/variables.tf:
--------------------------------------------------------------------------------
variable "aws_region" {
  description = "AWS region"
  default     = "us-east-2"
}

variable "aws_instance_type" {
  description = "AWS instance type for nodes"
  default     = "m5.xlarge"
}

variable "aws_access_key" {}

variable "aws_secret_key" {}

variable "ssh_keypair_name" {
  default = "benchmarks"
}
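
Rather than passing `aws_access_key` and `aws_secret_key` on the command line (where they end up in shell history), Terraform also reads variables from `TF_VAR_`-prefixed environment variables; a sketch with placeholder values:

```
export TF_VAR_aws_access_key='<your access key>'
export TF_VAR_aws_secret_key='<your secret key>'
terraform apply -auto-approve
```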

--------------------------------------------------------------------------------
/webserver/docker-compose.yml:
--------------------------------------------------------------------------------
---
version: "3"
services:

  web1:
    image: jmalloc/echo-server
    ports:
      - "80:8080"

--------------------------------------------------------------------------------
/webserver/webserver_setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# Install Docker
if ! [ -x "$(command -v docker)" ]; then
  sudo apt update
  sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  sudo apt update
  sudo apt install -y docker-ce
  sudo curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
  sudo chmod +x /usr/local/bin/docker-compose
else
  echo "docker already installed."
fi

# Start web app
cd /tmp
sudo docker-compose up -d

--------------------------------------------------------------------------------
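
To confirm the backend answers before any proxy is layered on top, the echo server can be hit directly from within the VPC (a sketch, not part of the repo's scripts; jmalloc/echo-server replies with a copy of the request):

```
curl -s http://192.168.0.10:80/
```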