├── .gitignore
├── AWS
│   ├── HOWTO.txt
│   ├── HOWTO_CREATE_AMI.txt
│   └── cloudformation.json
├── README.md
├── Vagrant
│   ├── HOWTO.TXT
│   ├── Vagrantfile
│   ├── after-reboot.sh
│   ├── first-time.sh
│   └── kill-all.sh
├── cluster-detailed.png
├── cluster.png
├── docker-compose.override.yml
├── docker-compose.yml
├── results-app
│   ├── Dockerfile
│   ├── package.json
│   ├── rebuild-redploy-single-node-vagrant.sh
│   ├── server.js
│   └── views
│       ├── app.js
│       ├── index.html
│       ├── socket.io.js
│       └── stylesheets
│           └── style.css
├── vote-worker
│   ├── Dockerfile
│   ├── pom.xml
│   └── src
│       └── main
│           └── java
│               └── worker
│                   └── Worker.java
└── web-vote-app
    ├── Dockerfile
    ├── app.py
    ├── requirements.txt
    ├── static
    │   └── stylesheets
    │       └── style.css
    ├── templates
    │   ├── env.html
    │   └── index.html
    └── utils
        └── __init__.py
/.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant 2 | -------------------------------------------------------------------------------- /AWS/HOWTO.txt: -------------------------------------------------------------------------------- 1 | ========================= 2 | AWS Setup 3 | ========================= 4 | 5 | 6 | 7 | # 8 | # Pre-requisites 9 | # 10 | 11 | 0. Create a VPC and a subnet within it 12 | VPC Name: SwarmCluster 13 | VPC Network: 192.168.0.0/16 14 | Subnet: "PublicSubnet", 192.168.33.0/24 (after creating, make sure to enable "Auto-Assign Public IP") 15 | 16 | 1. Run the CloudFormation template in this repo. 17 | 18 | - Use defaults for IPs 19 | - Select the VPC and subnet created in step 0 from the drop-downs 20 | - Use a keypair you have the private key for, in case you need to ssh into a machine directly. 21 | 22 | 2. 
You will end up with these machines: 23 | 24 | master: t2.micro / 192.168.33.11 25 | interlock: t2.micro / 192.168.33.12 26 | frontend01: t2.micro / 192.168.33.20 27 | frontend02: t2.micro / 192.168.33.21 28 | worker01: t2.micro / 192.168.33.200 29 | store: m3.medium / 192.168.33.250 30 | 31 | AMI for all: ami-56f59e36 32 | SG for all: SG-WideOpen 33 | All have public IPs. 34 | 35 | 3. Now ssh into master using its public IP. We will do all cluster setup from this machine. 36 | 37 | Note: to ssh into any machine: 38 | - find the machine's public IP in the EC2 dashboard 39 | ssh -i ~/.ssh/id_rsa_aws.pem ubuntu@ 40 | Replace ~/.ssh/id_rsa_aws.pem with the private key corresponding to the public key you entered in the CloudFormation setup. 41 | 42 | 43 | --- ALL STEPS AFTER THIS POINT DONE FROM MASTER --- 44 | 45 | 46 | 4. Set up Swarm layer containers: 47 | 48 | [same as Vagrant/after-reboot.sh in this repo] 49 | 50 | 5. Set up application layer containers: 51 | 52 | git clone https://github.com/docker/swarm-microservice-demo-v1 # (this repo) 53 | cd swarm-microservice-demo-v1 54 | 55 | Follow instructions in Vagrant/HOWTO.TXT 56 | 57 | -------------------------------------------------------------------------------- /AWS/HOWTO_CREATE_AMI.txt: -------------------------------------------------------------------------------- 1 | How I Created AMI "ami-bb90e5db" (Swarm-ready Ubuntu trusty): 2 | 3 | - Started with Ubuntu 14.04 box (ami-56f59e36) 4 | - Ran these commands: 5 | 6 | sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D 7 | sudo sh -c 'echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list' 8 | sudo apt-get update 9 | sudo apt-get -y install linux-image-extra-$(uname -r) docker-engine linux-image-generic-lts-utopic 10 | sudo reboot 11 | sudo usermod -a -G docker ubuntu ## add ubuntu user to docker group 12 | sudo service docker restart 13 | 14 | - 
Removed the `/etc/docker/key.json` file 15 | 16 | - Now in the AWS console: go to Instances, select your instance -> Actions -> Image -> Create Image 17 | 18 | - My customized Ubuntu AMI: ami-bb90e5db 19 | 20 | - The CloudFormation template will now rely on this new "ami-bb90e5db" that already has docker and kernel 3.16 installed. 21 | 22 | * Note that with ami-bb90e5db you must reset the Engine ID on each instance using a user-data bash script like this: 23 | sudo service docker stop 24 | sudo rm -f /etc/docker/key.json 25 | sudo service docker start 26 | 27 | This is done for each machine automatically in the CloudFormation template accompanying this repo. 28 | -------------------------------------------------------------------------------- /AWS/cloudformation.json: -------------------------------------------------------------------------------- 1 | { 2 | "AWSTemplateFormatVersion" : "2010-09-09", 3 | 4 | "Description" : "Swarm Demonstration Cluster Template", 5 | 6 | "Parameters" : { 7 | "KeyName": { 8 | "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances", 9 | "Type": "AWS::EC2::KeyPair::KeyName", 10 | "Default" : "id_rsa_aws", 11 | "ConstraintDescription" : "must be the name of an existing EC2 KeyPair." 
12 | }, 13 | 14 | "VpcId" : { 15 | "Type" : "AWS::EC2::VPC::Id", 16 | "Description" : "ID of the existing VPC into which all machines will be launched" 17 | }, 18 | 19 | "SubnetId" : { 20 | "Type" : "AWS::EC2::Subnet::Id", 21 | "Description" : "ID of an existing subnet into which all machines will be launched" 22 | }, 23 | 24 | "managerIP" : { 25 | "Description" : "IP of manager (where Swarm Manager and consul kv run)", 26 | "Type": "String", 27 | "MinLength": "7", 28 | "MaxLength": "15", 29 | "Default": "192.168.33.11", 30 | "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})", 31 | "ConstraintDescription": "must be a valid IP of the form x.x.x.x" 32 | }, 33 | 34 | "interlockIP" : { 35 | "Description" : "IP of interlock (load balancer for web-vote-app)", 36 | "Type": "String", 37 | "MinLength": "7", 38 | "MaxLength": "15", 39 | "Default": "192.168.33.12", 40 | "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})", 41 | "ConstraintDescription": "must be a valid IP of the form x.x.x.x" 42 | }, 43 | 44 | "frontend01IP" : { 45 | "Description" : "IP of frontend01", 46 | "Type": "String", 47 | "MinLength": "7", 48 | "MaxLength": "15", 49 | "Default": "192.168.33.20", 50 | "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})", 51 | "ConstraintDescription": "must be a valid IP of the form x.x.x.x" 52 | }, 53 | 54 | "frontend02IP" : { 55 | "Description" : "IP of frontend02", 56 | "Type": "String", 57 | "MinLength": "7", 58 | "MaxLength": "15", 59 | "Default": "192.168.33.21", 60 | "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})", 61 | "ConstraintDescription": "must be a valid IP of the form x.x.x.x" 62 | }, 63 | 64 | "worker01IP" : { 65 | "Description" : "IP of worker01", 66 | "Type": "String", 67 | "MinLength": "7", 68 | "MaxLength": "15", 69 | "Default": "192.168.33.200", 70 | "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})", 71 | "ConstraintDescription": "must be a valid IP of the 
form x.x.x.x" 72 | }, 73 | 74 | "storeIP" : { 75 | "Description" : "IP of store", 76 | "Type": "String", 77 | "MinLength": "7", 78 | "MaxLength": "15", 79 | "Default": "192.168.33.250", 80 | "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})", 81 | "ConstraintDescription": "must be a valid IP of the form x.x.x.x" 82 | } 83 | 84 | }, 85 | 86 | "Mappings" : { 87 | "AWSInstanceType2Arch" : { 88 | "t2.micro" : { "Arch" : "HVM64" }, 89 | "t2.small" : { "Arch" : "HVM64" }, 90 | "t2.medium" : { "Arch" : "HVM64" }, 91 | "t2.large" : { "Arch" : "HVM64" }, 92 | "m3.medium" : { "Arch" : "HVM64" }, 93 | "m3.large" : { "Arch" : "HVM64" }, 94 | "m3.xlarge" : { "Arch" : "HVM64" }, 95 | "m3.2xlarge" : { "Arch" : "HVM64" }, 96 | "c3.large" : { "Arch" : "HVM64" }, 97 | "c3.xlarge" : { "Arch" : "HVM64" }, 98 | "c3.2xlarge" : { "Arch" : "HVM64" }, 99 | "c3.4xlarge" : { "Arch" : "HVM64" }, 100 | "c3.8xlarge" : { "Arch" : "HVM64" }, 101 | "c4.large" : { "Arch" : "HVM64" }, 102 | "c4.xlarge" : { "Arch" : "HVM64" }, 103 | "c4.2xlarge" : { "Arch" : "HVM64" }, 104 | "c4.4xlarge" : { "Arch" : "HVM64" }, 105 | "c4.8xlarge" : { "Arch" : "HVM64" }, 106 | "g2.2xlarge" : { "Arch" : "HVMG2" }, 107 | "r3.large" : { "Arch" : "HVM64" }, 108 | "r3.xlarge" : { "Arch" : "HVM64" }, 109 | "r3.2xlarge" : { "Arch" : "HVM64" }, 110 | "r3.4xlarge" : { "Arch" : "HVM64" }, 111 | "r3.8xlarge" : { "Arch" : "HVM64" }, 112 | "i2.xlarge" : { "Arch" : "HVM64" }, 113 | "i2.2xlarge" : { "Arch" : "HVM64" }, 114 | "i2.4xlarge" : { "Arch" : "HVM64" }, 115 | "i2.8xlarge" : { "Arch" : "HVM64" }, 116 | "d2.xlarge" : { "Arch" : "HVM64" }, 117 | "d2.2xlarge" : { "Arch" : "HVM64" }, 118 | "d2.4xlarge" : { "Arch" : "HVM64" }, 119 | "d2.8xlarge" : { "Arch" : "HVM64" }, 120 | "hi1.4xlarge" : { "Arch" : "HVM64" }, 121 | "hs1.8xlarge" : { "Arch" : "HVM64" }, 122 | "cr1.8xlarge" : { "Arch" : "HVM64" }, 123 | "cc2.8xlarge" : { "Arch" : "HVM64" } 124 | }, 125 | 126 | "AWSRegionArch2AMI" : { 127 | "us-east-1" : 
{"HVM64" : "NOT_SUPPORTED", "HVMG2" : "NOT_SUPPORTED"}, 128 | "us-west-2" : {"HVM64" : "ami-aff71acf", "HVMG2" : "ami-aff71acf"}, 129 | "us-west-1" : {"HVM64" : "ami-d05b2bb0", "HVMG2" : "ami-d05b2bb0"}, 130 | "eu-west-1" : {"HVM64" : "NOT_SUPPORTED", "HVMG2" : "NOT_SUPPORTED"}, 131 | "eu-central-1" : {"HVM64" : "NOT_SUPPORTED", "HVMG2" : "NOT_SUPPORTED"}, 132 | "ap-northeast-1" : {"HVM64" : "NOT_SUPPORTED", "HVMG2" : "NOT_SUPPORTED"}, 133 | "ap-southeast-1" : {"HVM64" : "NOT_SUPPORTED", "HVMG2" : "NOT_SUPPORTED"}, 134 | "ap-southeast-2" : {"HVM64" : "NOT_SUPPORTED", "HVMG2" : "NOT_SUPPORTED"}, 135 | "sa-east-1" : {"HVM64" : "NOT_SUPPORTED", "HVMG2" : "NOT_SUPPORTED"}, 136 | "cn-north-1" : {"HVM64" : "NOT_SUPPORTED", "HVMG2" : "NOT_SUPPORTED"} 137 | } 138 | 139 | }, 140 | 141 | "Resources" : { 142 | 143 | "manager" : { 144 | "Type" : "AWS::EC2::Instance", 145 | "Properties" : { 146 | "InstanceType" : "t2.micro", 147 | "SecurityGroupIds" : [ { "Ref" : "WideOpenVpcSG" } ], 148 | "KeyName" : { "Ref" : "KeyName" }, 149 | "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, 150 | { "Fn::FindInMap" : [ "AWSInstanceType2Arch", "t2.micro", "Arch" ] } 151 | ] }, 152 | "SubnetId" : { "Ref" : "SubnetId" }, 153 | "PrivateIpAddress" : { "Ref" : "managerIP" }, 154 | "UserData": { 155 | "Fn::Base64": { 156 | "Fn::Join": [ 157 | "\n", 158 | [ 159 | "#!/bin/bash", 160 | "sudo hostname manager", 161 | "sudo service docker stop", 162 | "sudo rm -f /etc/docker/key.json", 163 | "sudo service docker start" 164 | ] 165 | ] 166 | } 167 | }, 168 | "Tags" : [ 169 | {"Key" : "Name", "Value" : "manager"} 170 | ] 171 | } 172 | }, 173 | 174 | "interlock" : { 175 | "Type" : "AWS::EC2::Instance", 176 | "Properties" : { 177 | "InstanceType" : "t2.micro", 178 | "SecurityGroupIds" : [ { "Ref" : "WideOpenVpcSG" } ], 179 | "KeyName" : { "Ref" : "KeyName" }, 180 | "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, 181 | { "Fn::FindInMap" : [ 
"AWSInstanceType2Arch", "t2.micro", "Arch" ] } 182 | ] }, 183 | "SubnetId" : { "Ref" : "SubnetId" }, 184 | "PrivateIpAddress" : { "Ref" : "interlockIP" }, 185 | "UserData": { 186 | "Fn::Base64": { 187 | "Fn::Join": [ 188 | "\n", 189 | [ 190 | "#!/bin/bash", 191 | "sudo hostname interlock", 192 | "sudo service docker stop", 193 | "sudo rm -f /etc/docker/key.json", 194 | "sudo service docker start" 195 | ] 196 | ] 197 | } 198 | }, 199 | "Tags" : [ 200 | {"Key" : "Name", "Value" : "interlock"} 201 | ] 202 | } 203 | }, 204 | 205 | 206 | "frontend01" : { 207 | "Type" : "AWS::EC2::Instance", 208 | "Properties" : { 209 | "InstanceType" : "t2.micro", 210 | "SecurityGroupIds" : [ { "Ref" : "WideOpenVpcSG" } ], 211 | "KeyName" : { "Ref" : "KeyName" }, 212 | "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, 213 | { "Fn::FindInMap" : [ "AWSInstanceType2Arch", "t2.micro", "Arch" ] } 214 | ] }, 215 | "SubnetId" : { "Ref" : "SubnetId" }, 216 | "PrivateIpAddress" : { "Ref" : "frontend01IP" }, 217 | "UserData": { 218 | "Fn::Base64": { 219 | "Fn::Join": [ 220 | "\n", 221 | [ 222 | "#!/bin/bash", 223 | "sudo hostname frontend01", 224 | "sudo service docker stop", 225 | "sudo rm -f /etc/docker/key.json", 226 | "sudo service docker start" 227 | ] 228 | ] 229 | } 230 | }, 231 | "Tags" : [ 232 | {"Key" : "Name", "Value" : "frontend01"} 233 | ] 234 | } 235 | }, 236 | 237 | "frontend02" : { 238 | "Type" : "AWS::EC2::Instance", 239 | "Properties" : { 240 | "InstanceType" : "t2.micro", 241 | "SecurityGroupIds" : [ { "Ref" : "WideOpenVpcSG" } ], 242 | "KeyName" : { "Ref" : "KeyName" }, 243 | "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, 244 | { "Fn::FindInMap" : [ "AWSInstanceType2Arch", "t2.micro", "Arch" ] } 245 | ] }, 246 | "SubnetId" : { "Ref" : "SubnetId" }, 247 | "PrivateIpAddress" : { "Ref" : "frontend02IP" }, 248 | "UserData": { 249 | "Fn::Base64": { 250 | "Fn::Join": [ 251 | "\n", 252 | [ 253 | "#!/bin/bash", 254 | 
"sudo hostname frontend02", 255 | "sudo service docker stop", 256 | "sudo rm -f /etc/docker/key.json", 257 | "sudo service docker start" 258 | ] 259 | ] 260 | } 261 | }, 262 | "Tags" : [ 263 | {"Key" : "Name", "Value" : "frontend02"} 264 | ] 265 | } 266 | }, 267 | 268 | "worker01" : { 269 | "Type" : "AWS::EC2::Instance", 270 | "Properties" : { 271 | "InstanceType" : "t2.micro", 272 | "SecurityGroupIds" : [ { "Ref" : "WideOpenVpcSG" } ], 273 | "KeyName" : { "Ref" : "KeyName" }, 274 | "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, 275 | { "Fn::FindInMap" : [ "AWSInstanceType2Arch", "t2.micro", "Arch" ] } 276 | ] }, 277 | "SubnetId" : { "Ref" : "SubnetId" }, 278 | "PrivateIpAddress" : { "Ref" : "worker01IP" }, 279 | "UserData": { 280 | "Fn::Base64": { 281 | "Fn::Join": [ 282 | "\n", 283 | [ 284 | "#!/bin/bash", 285 | "sudo hostname worker01", 286 | "sudo service docker stop", 287 | "sudo rm -f /etc/docker/key.json", 288 | "sudo service docker start" 289 | ] 290 | ] 291 | } 292 | }, 293 | "Tags" : [ 294 | {"Key" : "Name", "Value" : "worker01"} 295 | ] 296 | } 297 | }, 298 | 299 | "store" : { 300 | "Type" : "AWS::EC2::Instance", 301 | "Properties" : { 302 | "InstanceType" : "m3.medium", 303 | "SecurityGroupIds" : [ { "Ref" : "WideOpenVpcSG" } ], 304 | "KeyName" : { "Ref" : "KeyName" }, 305 | "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, 306 | { "Fn::FindInMap" : [ "AWSInstanceType2Arch", "m3.medium", "Arch" ] } 307 | ] }, 308 | "SubnetId" : { "Ref" : "SubnetId" }, 309 | "PrivateIpAddress" : { "Ref" : "storeIP" }, 310 | "UserData": { 311 | "Fn::Base64": { 312 | "Fn::Join": [ 313 | "\n", 314 | [ 315 | "#!/bin/bash", 316 | "sudo hostname store", 317 | "sudo service docker stop", 318 | "sudo rm -f /etc/docker/key.json", 319 | "sudo service docker start" 320 | ] 321 | ] 322 | } 323 | }, 324 | "Tags" : [ 325 | {"Key" : "Name", "Value" : "store"} 326 | ] 327 | } 328 | }, 329 | 330 | "WideOpenVpcSG" : { 331 
| "Type" : "AWS::EC2::SecurityGroup", 332 | "Properties" : { 333 | "VpcId" : { "Ref" : "VpcId" } , 334 | "GroupDescription" : "Wide open", 335 | 336 | "SecurityGroupIngress" : [ { 337 | "IpProtocol" : "-1", 338 | "FromPort" : "0", 339 | "ToPort" : "65535", 340 | "CidrIp" : "0.0.0.0/0" 341 | } ] 342 | 343 | } 344 | } 345 | 346 | }, 347 | 348 | "Outputs" : { 349 | 350 | "managerPrivateIP" : { 351 | "Description" : "Private IP of manager", 352 | "Value" : { "Fn::GetAtt" : [ "manager", "PrivateIp" ] } 353 | }, 354 | "managerPublicIP" : { 355 | "Description" : "Public IP of manager", 356 | "Value" : { "Fn::GetAtt" : [ "manager", "PublicIp" ] } 357 | }, 358 | 359 | "interlockPrivateIP" : { 360 | "Description" : "Private IP of interlock", 361 | "Value" : { "Fn::GetAtt" : [ "interlock", "PrivateIp" ] } 362 | }, 363 | "interlockPublicIP" : { 364 | "Description" : "Public IP of interlock", 365 | "Value" : { "Fn::GetAtt" : [ "interlock", "PublicIp" ] } 366 | }, 367 | 368 | 369 | "frontend01PrivateIP" : { 370 | "Description" : "Private IP of frontend01", 371 | "Value" : { "Fn::GetAtt" : [ "frontend01", "PrivateIp" ] } 372 | }, 373 | "frontend01PublicIP" : { 374 | "Description" : "Public IP of frontend01", 375 | "Value" : { "Fn::GetAtt" : [ "frontend01", "PublicIp" ] } 376 | }, 377 | 378 | "frontend02PrivateIP" : { 379 | "Description" : "Private IP of frontend02", 380 | "Value" : { "Fn::GetAtt" : [ "frontend02", "PrivateIp" ] } 381 | }, 382 | "frontend02PublicIP" : { 383 | "Description" : "Public IP of frontend02", 384 | "Value" : { "Fn::GetAtt" : [ "frontend02", "PublicIp" ] } 385 | }, 386 | 387 | "worker01PrivateIP" : { 388 | "Description" : "Private IP of worker01", 389 | "Value" : { "Fn::GetAtt" : [ "worker01", "PrivateIp" ] } 390 | }, 391 | "worker01PublicIP" : { 392 | "Description" : "Public IP of worker01", 393 | "Value" : { "Fn::GetAtt" : [ "worker01", "PublicIp" ] } 394 | }, 395 | 396 | "storePrivateIP" : { 397 | "Description" : "Private IP of store", 398 | "Value" : { 
"Fn::GetAtt" : [ "store", "PrivateIp" ] } 399 | }, 400 | "storePublicIP" : { 401 | "Description" : "Public IP of store", 402 | "Value" : { "Fn::GetAtt" : [ "store", "PublicIp" ] } 403 | } 404 | 405 | } 406 | } 407 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Swarm Example Cluster: Microservices App 2 | 3 | This is a sample Swarm cluster that illustrates how Swarm can be used as the foundation for a high-traffic microservice-architecture web application. It is based on the Docker Cats-vs-Dogs voting example application, but re-architected to accommodate arbitrarily large scale through the use of parallel vote capture frontends and asynchronous background workers processing each vote. 4 | 5 | # Use Case 6 | 7 | Imagine that your company is planning to buy an ad during the Super Bowl to drive people to a web survey about whether they prefer cats or dogs as pets. (Perhaps your company sells pet food.) You need to ensure that millions of people can vote nearly simultaneously without your website becoming unavailable. You don't need exact real-time results because you will announce them the next day, but you do need confidence that every vote will eventually get counted. 8 | 9 | # Architecture 10 | 11 | An Interlock load balancer (ha\_proxy plugin) sits in front of N web containers, each of which runs a simple Python (Flask) app that accepts votes and queues them into a redis container on the same node. These N web (+ redis) nodes capture votes quickly and can scale up to any value of N since they operate independently. Any level of expected voting traffic can thus be accommodated. 12 | 13 | Asynchronously, M background workers running on separate nodes scan through those N redis containers, dequeuing votes, de-duplicating them (to prevent double voting) and committing the results to a single postgres container that runs on its own node. 
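The dequeue/de-duplicate/commit cycle described above can be sketched in miniature. This is an illustrative Python stand-in only — the real worker is Java and talks to live redis and postgres instances; the function name `drain_queues` and the data shapes here are invented for the sketch.

```python
# Minimal in-memory sketch of the worker pattern: drain N frontend queues,
# drop duplicate votes, and commit the rest to a single shared store.
# Plain lists stand in for redis queues; a dict stands in for postgres.

def drain_queues(queues, store, seen):
    """Drain every frontend queue, committing each voter's first vote once."""
    committed = 0
    for q in queues:                      # one queue per frontend's redis
        while q:
            vote = q.pop(0)               # FIFO pop, like a redis LPOP
            if vote["voter_id"] in seen:  # de-duplicate: first vote wins
                continue
            seen.add(vote["voter_id"])
            store[vote["voter_id"]] = vote["choice"]  # commit, like an INSERT
            committed += 1
    return committed

# Two frontends queued votes independently; voter "a" voted on both.
queues = [
    [{"voter_id": "a", "choice": "cats"}, {"voter_id": "b", "choice": "dogs"}],
    [{"voter_id": "a", "choice": "dogs"}, {"voter_id": "c", "choice": "cats"}],
]
store, seen = {}, set()
n = drain_queues(queues, store, seen)
print(n, store)  # 3 votes committed; voter "a"'s duplicate is dropped
```

Because each frontend queue is independent, any number of such workers can partition the queues among themselves (as the real `FROM_REDIS_HOST`/`TO_REDIS_HOST` environment variables suggest), provided the de-duplication state is shared or votes carry a unique voter id.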
14 | 15 | ![Cluster Diagram](https://raw.githubusercontent.com/docker/swarm-microservice-demo-v1/master/cluster.png) 16 | 17 | (Detailed cluster diagram with port numbers in `./cluster-detailed.png`) 18 | 19 | # Usage 20 | 21 | This cluster can be deployed on either Vagrant or AWS. A CloudFormation template (for AWS) and a Vagrantfile (for Vagrant) are included. 22 | 23 | For AWS deployment: start at `AWS/HOWTO.txt`, which will guide you through other text files documenting the various steps to set up the cluster and the application on AWS. 24 | 25 | For Vagrant deployment: start at `Vagrant/HOWTO.TXT`, again following the pointers to other text files and scripts. 26 | -------------------------------------------------------------------------------- /Vagrant/HOWTO.TXT: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ################################################### 4 | # 5 | # Build instructions; same on AWS and Vagrant 6 | # 7 | ################################################### 8 | 9 | # 10 | # To build web-vote-app: 11 | # 12 | cd web-vote-app 13 | docker rmi web-vote-app 14 | docker -H tcp://192.168.33.20:2375 build -t web-vote-app . 15 | docker -H tcp://192.168.33.21:2375 build -t web-vote-app . 16 | # [etc... build for each web node because `docker build` on swarm master won't put image on every machine] 17 | docker -H tcp://192.168.33.99:2375 build -t web-vote-app . 18 | 19 | # 20 | # Alternative: build once and push to Docker Hub 21 | # 22 | docker -H tcp://192.168.33.20:2375 build -t mikegoelzer/web-vote-app . 23 | docker -H tcp://192.168.33.20:2375 login -u mikegoelzer 24 | docker -H tcp://192.168.33.20:2375 push mikegoelzer/web-vote-app 25 | 26 | # 27 | # To build worker: 28 | # 29 | cd vote-worker 30 | docker rmi vote-worker 31 | docker -H tcp://192.168.33.200:2375 build -t vote-worker . 32 | docker -H tcp://192.168.33.201:2375 build -t vote-worker . 33 | # [etc... 
build for each worker node because `docker build` on swarm master won't put image on every machine] 34 | docker -H tcp://192.168.33.249:2375 build -t vote-worker . 35 | 36 | # 37 | # To put redis image on each web node: 38 | # 39 | docker -H tcp://192.168.33.20:2375 pull redis 40 | docker -H tcp://192.168.33.21:2375 pull redis 41 | # [etc... for each web+redis node] 42 | docker -H tcp://192.168.33.99:2375 pull redis 43 | 44 | # 45 | # To build results-app (which runs on store machine) 46 | # 47 | docker -H tcp://192.168.33.251:2375 build -t results-app . 48 | 49 | 50 | ################################################### 51 | # 52 | # DNS setup; do this on the machine where your 53 | # web browser runs 54 | # 55 | ################################################### 56 | 57 | # 58 | # Vagrant: point votingapp.local -> interlock on .11 59 | # 60 | vi /etc/hosts 61 | Add this line: 62 | 192.168.33.11 votingapp.local 63 | 64 | # 65 | # AWS: point votingapp.local -> public IP of interlock (found in Outputs tab of CloudFormation) 66 | # 67 | vi /etc/hosts 68 | Add this line: 69 | votingapp.local 70 | 71 | 72 | ################################################### 73 | # 74 | # Start application containers 75 | # 76 | ################################################### 77 | 78 | # 79 | # Vagrant: start Interlock on .11 (which it shares with master): 80 | # 81 | docker -H tcp://192.168.33.11:2375 run --restart=unless-stopped -p 80:80 --name interlock -d ehazlett/interlock --swarm-url tcp://192.168.33.11:3375 --plugin haproxy start 82 | 83 | # 84 | # AWS: start Interlock on dedicated machine .12 (but listening to swarm master on .11): 85 | # 86 | docker -H tcp://192.168.33.12:2375 run --restart=unless-stopped -p 80:80 --name interlock -d ehazlett/interlock --swarm-url tcp://192.168.33.11:3375 --plugin haproxy start 87 | 88 | # 89 | # To run web+redis containers: 90 | # 91 | export DOCKER_HOST="tcp://192.168.33.11:3375" 92 | 93 | docker run --restart=unless-stopped 
--env="constraint:node==frontend01" -p 6379:6379 --name redis01 --net mynet -d redis ## expose 6379 for debugging only! 94 | docker run --restart=unless-stopped --env="constraint:node==frontend01" -d -p 5000:80 -e WEB_VOTE_NUMBER='01' --name frontend01 --net mynet --hostname votingapp.local web-vote-app 95 | 96 | docker run --restart=unless-stopped --env="constraint:node==frontend02" -p 6379:6379 --name redis02 --net mynet -d redis ## expose 6379 for debugging only! 97 | docker run --restart=unless-stopped --env="constraint:node==frontend02" -d -p 5000:80 -e WEB_VOTE_NUMBER='02' --name frontend02 --net mynet --hostname votingapp.local web-vote-app 98 | # [etc... for each web+redis node] 99 | 100 | # 101 | # To vote, browse to: 102 | # 103 | http://votingapp.local 104 | 105 | # 106 | # To verify votes are going into the redis queue: 107 | # 108 | redis-cli -h 192.168.33.20 109 | > llen votes 110 | > lindex votes 0 111 | > lindex votes 1 112 | (etc) 113 | 114 | # 115 | # To view ha_proxy stats, browse to: 116 | # 117 | http://stats:interlock@votingapp.local/haproxy?stats 118 | 119 | # 120 | # To start postgres container: 121 | # 122 | docker run --restart=unless-stopped --env="constraint:node==store" --name store -e POSTGRES_PASSWORD=pg8675309 --net mynet -p 5432:5432 -d postgres ## expose 5432 for debugging only 123 | 124 | And count votes like this: 125 | 126 | export PSQL=/Applications/Postgres.app/Contents/Versions/9.5/bin/psql 127 | PGPASSWORD=pg8675309 $PSQL -p5432 -h 192.168.33.251 -U postgres -d postgres 128 | #[and query like: SELECT * FROM votes;] 129 | 130 | 131 | # 132 | # To start workers: 133 | # 134 | docker run --restart=unless-stopped --env="constraint:node==worker01" -d -e WORKER_NUMBER='01' -e FROM_REDIS_HOST=1 -e TO_REDIS_HOST=2 --name worker01 --net mynet vote-worker 135 | #[etc... 
for each worker] 136 | 137 | # 138 | # To start results app 139 | # 140 | docker run --restart=unless-stopped --env="constraint:node==store" -p 80:80 -d --name results-app --net mynet results-app 141 | 142 | -------------------------------------------------------------------------------- /Vagrant/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure(2) do |config| 5 | # Every Vagrant development environment requires a box. You can search for 6 | # boxes at https://atlas.hashicorp.com/search. 7 | config.vm.box = "ubuntu/trusty64" 8 | 9 | # Machine-specific config (names and IPs) 10 | config.vm.define "master" do |machineconfig| 11 | machineconfig.vm.provider "virtualbox" do |vb| 12 | vb.name = "master" 13 | end 14 | machineconfig.vm.network "private_network", ip: "192.168.33.11" 15 | machineconfig.vm.hostname = "master" 16 | end 17 | 18 | config.vm.define "interlock" do |machineconfig| 19 | machineconfig.vm.provider "virtualbox" do |vb| 20 | vb.name = "interlock" 21 | end 22 | machineconfig.vm.network "private_network", ip: "192.168.33.12" 23 | machineconfig.vm.hostname = "interlock" 24 | end 25 | 26 | config.vm.define "frontend01" do |machineconfig| 27 | machineconfig.vm.provider "virtualbox" do |vb| 28 | vb.name = "frontend01" 29 | end 30 | machineconfig.vm.network "private_network", ip: "192.168.33.20" 31 | machineconfig.vm.hostname = "frontend01" 32 | end 33 | 34 | config.vm.define "worker01" do |machineconfig| 35 | machineconfig.vm.provider "virtualbox" do |vb| 36 | vb.name = "worker01" 37 | end 38 | machineconfig.vm.network "private_network", ip: "192.168.33.200" 39 | machineconfig.vm.hostname = "worker01" 40 | end 41 | 42 | config.vm.define "store" do |machineconfig| 43 | machineconfig.vm.provider "virtualbox" do |vb| 44 | vb.name = "store" 45 | end 46 | machineconfig.vm.network "private_network", ip: "192.168.33.250" 47 | 
machineconfig.vm.hostname = "store" 48 | end 49 | 50 | # Common config 51 | config.vm.provider "virtualbox" do |vb| 52 | vb.gui = false 53 | vb.memory = "1024" 54 | vb.cpus = 1 55 | vb.customize ["modifyvm", :id, "--vram", "8"] 56 | end 57 | 58 | config.vm.provision "fix-no-tty", type: "shell" do |s| 59 | s.privileged = false 60 | s.inline = "sudo sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n/' /root/.profile" 61 | end 62 | config.vm.provision "shell", inline: <<-SHELL 63 | sudo sh -c 'echo "root:root" | sudo chpasswd' 64 | sed -i 's/^PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config 65 | sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D 66 | sudo sh -c 'echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list' 67 | sudo apt-get update 68 | sudo apt-get -y install linux-image-extra-$(uname -r) 69 | sudo apt-get -y install docker-engine 70 | usermod -a -G docker vagrant ## add vagrant user to docker group 71 | sudo sh -c 'echo DOCKER_OPTS=\\"--cluster-store=consul://192.168.33.11:8500 --cluster-advertise=eth1:2375 -H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock\\" >> /etc/default/docker' 72 | sudo service docker restart ## not necessary, should already be running 73 | sleep 5 74 | sudo docker run hello-world ## if you want to verify docker-engine is ok 75 | SHELL 76 | end 77 | -------------------------------------------------------------------------------- /Vagrant/after-reboot.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ## 4 | ## Commands to create the Swarm layer containers 5 | ## 6 | 7 | docker -H=tcp://192.168.33.11:2375 run --restart=unless-stopped -d -p 8500:8500 -h consul progrium/consul -server -bootstrap 8 | 9 | docker -H=tcp://192.168.33.20:2375 run --restart=unless-stopped -d swarm join --advertise=192.168.33.20:2375 consul://192.168.33.11:8500/ 10 | docker 
-H=tcp://192.168.33.21:2375 run --restart=unless-stopped -d swarm join --advertise=192.168.33.21:2375 consul://192.168.33.11:8500/ 11 | [et cetera] 12 | 13 | docker -H=tcp://192.168.33.200:2375 run --restart=unless-stopped -d swarm join --advertise=192.168.33.200:2375 consul://192.168.33.11:8500/ 14 | docker -H=tcp://192.168.33.201:2375 run --restart=unless-stopped -d swarm join --advertise=192.168.33.201:2375 consul://192.168.33.11:8500/ 15 | [et cetera] 16 | 17 | docker -H=tcp://192.168.33.250:2375 run --restart=unless-stopped -d swarm join --advertise=192.168.33.250:2375 consul://192.168.33.11:8500/ 18 | 19 | docker -H=tcp://192.168.33.11:2375 run --restart=unless-stopped -d -p 3375:2375 swarm manage consul://192.168.33.11:8500/ 20 | 21 | export DOCKER_HOST="tcp://192.168.33.11:3375" 22 | docker network create --driver overlay mynet 23 | 24 | ## 25 | ## OPTIONAL: Commands to test the cluster 26 | ## 27 | docker info 28 | docker run -d --name web --net mynet nginx 29 | docker run -itd --name shell1 --net mynet alpine /bin/sh 30 | docker attach shell1 31 | $ ping web 32 | docker stop web shell1 ; docker rm web shell1 33 | -------------------------------------------------------------------------------- /Vagrant/first-time.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | vagrant up 4 | 5 | vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" master 6 | vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" interlock 7 | vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" frontend01 8 | vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" worker01 9 | vagrant ssh -c "sudo apt-get install -y linux-image-generic-lts-utopic && sudo reboot" store 10 | 11 | -------------------------------------------------------------------------------- /Vagrant/kill-all.sh: 
-------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Examples: 4 | # IP="192.168.33.11" sh -c 'docker -H tcp://$IP:2375 ps' 5 | # CID=ca476182988c IP="192.168.33.11" sh -c 'docker -H tcp://$IP:2375 stop $CID ; docker -H tcp://$IP:2375 rm $CID' 6 | 7 | arr=("192.168.33.11" "192.168.33.20" "192.168.33.200" "192.168.33.250") 8 | 9 | for IP in "${arr[@]}"; do 10 | echo "Clearing $IP" 11 | cids=( $(docker -H tcp://$IP:2375 ps -q -a) ) 12 | for CID in ${cids[@]}; do 13 | echo " - Removing $CID [on $IP]" 14 | docker -H tcp://$IP:2375 stop $CID 15 | docker -H tcp://$IP:2375 rm $CID 16 | done 17 | done 18 | -------------------------------------------------------------------------------- /cluster-detailed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/docker-archive/swarm-microservice-demo-v1/677a3326a9634a66c8d2dc169a1c33cf05f3aecf/cluster-detailed.png -------------------------------------------------------------------------------- /cluster.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/docker-archive/swarm-microservice-demo-v1/677a3326a9634a66c8d2dc169a1c33cf05f3aecf/cluster.png -------------------------------------------------------------------------------- /docker-compose.override.yml: -------------------------------------------------------------------------------- 1 | # 2 | # docker-compose.override.yml provides settings for local development 3 | # 4 | 5 | version: '2' 6 | 7 | services: 8 | web-vote-app: 9 | ports: ['8081:80'] 10 | 11 | results-app: 12 | ports: ['8082:80'] 13 | 14 | redis01: 15 | ports: ["6379:6379"] 16 | 17 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | # 2 | # Compose file to run the voting app and dependent services 3 | # 4 | 5 | 
version: '2' 6 | 7 | services: 8 | web-vote-app: 9 | build: web-vote-app 10 | environment: 11 | WEB_VOTE_NUMBER: "01" 12 | constraint:node: "=frontend01" 13 | 14 | vote-worker: 15 | build: vote-worker 16 | environment: 17 | FROM_REDIS_HOST: 1 18 | TO_REDIS_HOST: 1 19 | 20 | results-app: 21 | build: results-app 22 | 23 | redis01: 24 | image: redis:3 25 | 26 | store: 27 | image: postgres:9.5 28 | environment: 29 | - POSTGRES_USER=postgres 30 | - POSTGRES_PASSWORD=pg8675309 31 | -------------------------------------------------------------------------------- /results-app/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:0.10 2 | 3 | RUN mkdir /app 4 | WORKDIR /app 5 | 6 | ADD package.json /app/package.json 7 | RUN npm install && npm ls 8 | RUN mv /app/node_modules /node_modules 9 | 10 | ADD . /app 11 | 12 | ENV PORT 80 13 | EXPOSE 80 14 | 15 | CMD ["node", "server.js"] 16 | -------------------------------------------------------------------------------- /results-app/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "result-app", 3 | "version": "1.0.0", 4 | "description": "", 5 | "main": "server.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "author": "", 10 | "license": "MIT", 11 | "dependencies": { 12 | "body-parser": "^1.14.1", 13 | "cookie-parser": "^1.4.0", 14 | "express": "^4.13.3", 15 | "method-override": "^2.3.5", 16 | "async": "^1.5.0", 17 | "pg": "^4.4.3", 18 | "socket.io": "^1.3.7" 19 | } 20 | } 21 | -------------------------------------------------------------------------------- /results-app/rebuild-redploy-single-node-vagrant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | docker stop results-app ; docker rm results-app ; docker rmi results-app ; docker -H tcp://192.168.33.251:2375 build -t results-app . 
; docker run --restart=unless-stopped --env="constraint:node==store" -p 80:80 -d --name results-app --net mynet results-app 4 | -------------------------------------------------------------------------------- /results-app/server.js: -------------------------------------------------------------------------------- 1 | var express = require('express'), path = require('path'), 2 | async = require('async'), 3 | pg = require("pg"), 4 | cookieParser = require('cookie-parser'), 5 | bodyParser = require('body-parser'), 6 | methodOverride = require('method-override'), 7 | app = express(), 8 | server = require('http').Server(app), 9 | io = require('socket.io')(server); 10 | 11 | io.set('transports', ['polling']); 12 | 13 | var port = process.env.PORT || 4000; 14 | 15 | io.sockets.on('connection', function (socket) { 16 | 17 | socket.emit('message', { text : 'Welcome!' }); 18 | 19 | socket.on('subscribe', function (data) { 20 | socket.join(data.channel); 21 | }); 22 | }); 23 | 24 | async.retry( 25 | {times: 1000, interval: 1000}, 26 | function(callback) { 27 | pg.connect('postgres://postgres:pg8675309@store/postgres', function(err, client, done) { 28 | if (err) { 29 | console.error("Failed to connect to db"); 30 | } 31 | callback(err, client); 32 | }); 33 | }, 34 | function(err, client) { 35 | if (err) { 36 | return console.error("Giving up"); 37 | } 38 | console.log("Connected to db"); 39 | getVotes(client); 40 | } 41 | ); 42 | 43 | function getVotes(client) { 44 | var allVotes = []; 45 | client.query('SELECT id, vote, ts FROM votes', [], function(err, result) { 46 | if (err) { 47 | console.error("Error performing query: " + err); 48 | } else { 49 | allVotes = result.rows.reduce(function(obj, row) { 50 | obj.push( {'id':row.id, 'vote':row.vote, 'ts':row.ts} ); 51 | return obj; 52 | }, []); 53 | } 54 | }); 55 | 56 | client.query('SELECT vote, COUNT(id) AS count FROM votes GROUP BY vote', [], function(err, result) { 57 | if (err) { 58 | console.error("Error performing query: " + err); 59 | } else { 60 |
var data = result.rows.reduce(function(obj, row) { 61 | obj[row.vote] = row.count; 62 | return obj; 63 | }, {}); 64 | data['allVotesArr'] = allVotes; 65 | io.sockets.emit("scores", JSON.stringify(data)); 66 | } 67 | 68 | setTimeout(function() {getVotes(client) }, 1000); 69 | }); 70 | } 71 | 72 | app.use(cookieParser()); 73 | app.use(bodyParser()); 74 | app.use(methodOverride('X-HTTP-Method-Override')); 75 | app.use(function(req, res, next) { 76 | res.header("Access-Control-Allow-Origin", "*"); 77 | res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept"); 78 | res.header("Access-Control-Allow-Methods", "PUT, GET, POST, DELETE, OPTIONS"); 79 | next(); 80 | }); 81 | 82 | app.use(express.static(__dirname + '/views')); 83 | 84 | app.get('/', function (req, res) { 85 | res.sendFile(path.resolve(__dirname + '/views/index.html')); 86 | }); 87 | 88 | server.listen(port, function () { 89 | var port = server.address().port; 90 | console.log('App running on port ' + port); 91 | }); 92 | -------------------------------------------------------------------------------- /results-app/views/app.js: -------------------------------------------------------------------------------- 1 | var app = angular.module('catsvsdogs', []); 2 | var socket = io.connect({transports:['polling']}); 3 | 4 | var bg1 = document.getElementById('background-stats-1'); 5 | var bg2 = document.getElementById('background-stats-2'); 6 | 7 | app.controller('statsCtrl', function($scope){ 8 | var animateStats = function(a,b){ 9 | if(a+b>0){ 10 | var percentA = a/(a+b)*100; 11 | var percentB = 100-percentA; 12 | bg1.style.width= percentA+"%"; 13 | bg2.style.width = percentB+"%"; 14 | } 15 | }; 16 | 17 | $scope.aPercent = 50; 18 | $scope.bPercent = 50; 19 | $scope.allVotes = []; 20 | 21 | var updateScores = function(){ 22 | socket.on('scores', function (json) { 23 | data = JSON.parse(json); 24 | var a = parseInt(data.a || 0); 25 | var b = parseInt(data.b || 0); 26 | var allVotesArr 
= data.allVotesArr; 27 | 28 | $scope.allVotes = []; 29 | for(var i = 0; i < allVotesArr.length; i++){ 30 | $scope.allVotes.push(allVotesArr[i]); 31 | } 32 | 33 | animateStats(a, b); 34 | 35 | $scope.$apply(function () { 36 | 37 | if(a+b > 0){ 38 | $scope.aPercent = a/(a+b) * 100; 39 | $scope.bPercent = b/(a+b) * 100; 40 | $scope.total = a + b; 41 | } 42 | }); 43 | }); 44 | }; 45 | 46 | var init = function(){ 47 | document.body.style.opacity=1; 48 | updateScores(); 49 | }; 50 | socket.on('message',function(data){ 51 | init(); 52 | }); 53 | }); 54 | -------------------------------------------------------------------------------- /results-app/views/index.html: -------------------------------------------------------------------------------- 1 | <!DOCTYPE html> 2 | <html ng-app="catsvsdogs"> 3 | <head> 4 | <meta charset="utf-8"> 5 | <title>Cats vs Dogs -- Result</title> 6 | <link rel='stylesheet' href='stylesheets/style.css'/> 7 | <link href='//fonts.googleapis.com/css?family=Open+Sans:400,700,600' rel='stylesheet' type='text/css'/> 8 | </head> 9 | <body ng-controller="statsCtrl"> 10 | <div id="background-stats"> 11 | <div id="background-stats-1"></div><div id="background-stats-2"></div> 12 | </div> 13 | <div id="content-container"> 14 | <div id="content-container-center"> 15 | <div id="choice"> 16 | <div class="choice cats"> 17 | <div class="label">Cats</div> 18 | <div class="stat">{{aPercent | number:1}}%</div> 19 | </div> 20 | <div class="divider"></div> 21 | <div class="choice dogs"> 22 | <div class="label">Dogs</div> 23 | <div class="stat">{{bPercent | number:1}}%</div> 24 | </div> 25 | </div> 26 | </div> 27 | </div> 28 | <div id="all-votes"> 29 | <ul> 30 | <li ng-repeat="x in allVotes">&nbsp;&bull; {{ x.ts + ': ' + x.id + ' voted for \'' + (x.vote=='a'?'Cats':'Dogs') + '\'' }}</li> 31 | </ul> 32 | </div> 33 | <div id="result">{{total}} votes</div> 34 | <script src="socket.io.js"></script> 35 | <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.15/angular.min.js"></script> 36 | <script src="app.js"></script> 37 | </body> 38 | </html> 39 | 40 | 41 | 42 | 43 | 44 |
45 | 46 | 47 | 48 | 49 | 50 | -------------------------------------------------------------------------------- /results-app/views/stylesheets/style.css: -------------------------------------------------------------------------------- 1 | @import url(//fonts.googleapis.com/css?family=Open+Sans:400,700,600); 2 | 3 | *{ 4 | box-sizing:border-box; 5 | } 6 | html,body{ 7 | margin:0; 8 | padding:0; 9 | height:100%; 10 | font-family: 'Open Sans'; 11 | } 12 | body{ 13 | opacity:0; 14 | transition: all 1s linear; 15 | } 16 | 17 | .divider{ 18 | height: 150px; 19 | width:2px; 20 | background-color: #C0C9CE; 21 | position: relative; 22 | top: 50%; 23 | float: left; 24 | transform: translateY(-50%); 25 | } 26 | 27 | #background-stats-1{ 28 | background-color: #2196f3; 29 | } 30 | 31 | #background-stats-2{ 32 | background-color: #00cbca; 33 | } 34 | 35 | #content-container{ 36 | z-index:2; 37 | position:relative; 38 | margin:0 auto; 39 | display:table; 40 | padding:10px; 41 | max-width:940px; 42 | height:100%; 43 | } 44 | #content-container-center{ 45 | display:table-cell; 46 | text-align:center; 47 | vertical-align:middle; 48 | } 49 | #result{ 50 | z-index: 3; 51 | position: absolute; 52 | bottom: 40px; 53 | right: 20px; 54 | color: #fff; 55 | opacity: 0.5; 56 | font-size: 45px; 57 | font-weight: 600; 58 | } 59 | ul{ 60 | list-style: none; 61 | } 62 | #all-votes{ 63 | z-index: 3; 64 | position: absolute; 65 | bottom: 0px; 66 | color: #fff; 67 | font-size: 10px; 68 | line-height:normal; 69 | } 70 | #choice{ 71 | transition: all 300ms linear; 72 | line-height:1.3em; 73 | background:#fff; 74 | box-shadow: 10px 0 0 #fff, -10px 0 0 #fff; 75 | vertical-align:middle; 76 | font-size:40px; 77 | font-weight: 600; 78 | width: 450px; 79 | height: 200px; 80 | } 81 | #choice a{ 82 | text-decoration:none; 83 | } 84 | #choice a:hover, #choice a:focus{ 85 | outline:0; 86 | text-decoration:underline; 87 | } 88 | 89 | #choice .choice{ 90 | width: 49%; 91 | position: relative; 92 | top: 50%; 93 | 
transform: translateY(-50%); 94 | text-align: left; 95 | padding-left: 50px; 96 | } 97 | 98 | #choice .choice .label{ 99 | text-transform: uppercase; 100 | } 101 | 102 | #choice .choice.dogs{ 103 | color: #00cbca; 104 | float: right; 105 | } 106 | 107 | #choice .choice.cats{ 108 | color: #2196f3; 109 | float: left; 110 | } 111 | #background-stats{ 112 | z-index:1; 113 | height:100%; 114 | width:100%; 115 | position:absolute; 116 | } 117 | #background-stats div{ 118 | transition: width 400ms ease-in-out; 119 | display:inline-block; 120 | margin-bottom:-4px; 121 | width:50%; 122 | height:100%; 123 | } 124 | -------------------------------------------------------------------------------- /vote-worker/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM java:7 2 | 3 | RUN apt-get update -qq && apt-get install -y maven && apt-get clean 4 | 5 | WORKDIR /code 6 | 7 | ADD pom.xml /code/pom.xml 8 | RUN ["mvn", "dependency:resolve"] 9 | RUN ["mvn", "verify"] 10 | 11 | # Adding source, compile and package into a fat jar 12 | ADD src /code/src 13 | RUN ["mvn", "package"] 14 | 15 | CMD ["/usr/lib/jvm/java-7-openjdk-amd64/bin/java", "-jar", "target/worker-jar-with-dependencies.jar"] 16 | -------------------------------------------------------------------------------- /vote-worker/pom.xml: -------------------------------------------------------------------------------- 1 | <?xml version="1.0" encoding="UTF-8"?> 2 | <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 3 | xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 4 | <modelVersion>4.0.0</modelVersion> 5 | 6 | <groupId>worker</groupId> 7 | <artifactId>worker</artifactId> 8 | <version>1.0-SNAPSHOT</version> 9 | 10 | <dependencies> 11 | 12 | <dependency> 13 | <groupId>org.json</groupId> 14 | <artifactId>json</artifactId> 15 | <version>20140107</version> 16 | </dependency> 17 | 18 | <dependency> 19 | <groupId>redis.clients</groupId> 20 | <artifactId>jedis</artifactId> 21 | <version>2.7.2</version> 22 | <type>jar</type> 23 | <scope>compile</scope> 24 | </dependency> 25 | 26 | <dependency> 27 | <groupId>org.postgresql</groupId> 28 | <artifactId>postgresql</artifactId> 29 | <version>9.4-1200-jdbc41</version> 30 | </dependency> 31 | </dependencies> 32 | 33 | <build> 34 | <plugins> 35 | <plugin> 36 | <groupId>org.apache.maven.plugins</groupId> 37 | <artifactId>maven-jar-plugin</artifactId> 38 | <version>2.4</version> 39 | <configuration> 40 | <finalName>worker</finalName> 41 | <archive> 42 | <manifest> 43 | <addClasspath>true</addClasspath> 44 | <mainClass>worker.Worker</mainClass> 45 | <classpathPrefix>dependency-jars/</classpathPrefix> 46 | </manifest> 47 | </archive> 48 | </configuration> 49 | </plugin> 50 | <plugin> 51 | <groupId>org.apache.maven.plugins</groupId> 52 | <artifactId>maven-compiler-plugin</artifactId> 53 | <version>3.1</version> 54 | <configuration> 55 | <source>1.7</source> 56 | <target>1.7</target> 57 | </configuration> 58 | </plugin> 59 | <plugin> 60 | <groupId>org.apache.maven.plugins</groupId> 61 | <artifactId>maven-assembly-plugin</artifactId> 62 | <executions> 63 | <execution> 64 | <goals> 65 | <goal>attached</goal> 66 | </goals> 67 | <phase>package</phase> 68 | <configuration> 69 | <finalName>worker</finalName> 70 | <descriptorRefs> 71 | <descriptorRef>jar-with-dependencies</descriptorRef> 72 | </descriptorRefs> 73 | <archive> 74 | <manifest> 75 | <mainClass>worker.Worker</mainClass> 76 | </manifest> 77 | </archive> 78 | </configuration> 79 | </execution> 80 | </executions> 81 | </plugin> 82 | </plugins> 83 | </build> 84 | 85 | </project> -------------------------------------------------------------------------------- /vote-worker/src/main/java/worker/Worker.java: -------------------------------------------------------------------------------- 1 | package worker; 2 | 3 | import redis.clients.jedis.Jedis; 4 | import redis.clients.jedis.exceptions.JedisConnectionException; 5 | import java.sql.*; 6 | import org.json.JSONObject; 7 | import java.util.Map; 8 | 9 | class Worker { 10 | public static String[] getRedisQueueHostnames() throws Exception { 11 | boolean bFoundFromRedisHost = false; 12 | boolean bFoundToRedisHost = false; 13 | int nfromRedisHost = 0, ntoRedisHost = 0; 14 | Map<String, String> env = System.getenv(); 15 | for (String envName : env.keySet()) { 16 | if (envName.equals("FROM_REDIS_HOST")) { 17 | bFoundFromRedisHost = true; 18 | nfromRedisHost = Integer.parseInt(env.get("FROM_REDIS_HOST")); 19 | } 20 | if (envName.equals("TO_REDIS_HOST")) { 21 | ntoRedisHost = Integer.parseInt(env.get("TO_REDIS_HOST")); 22 | bFoundToRedisHost = true; 23 | } 24 | } 25 | if (!bFoundFromRedisHost) { 26 | throw new Exception("Abort: no FROM_REDIS_HOST environment variable"); 27 | } 28 | if (!bFoundToRedisHost) { 29 | throw new Exception("Abort: no TO_REDIS_HOST environment variable"); 30 | } 31 | if (nfromRedisHost > ntoRedisHost) { 32 | throw new Exception("Abort: FROM_REDIS_HOST must be <= TO_REDIS_HOST"); 33 | } 34 | String[] retArr = new String[ntoRedisHost-nfromRedisHost+1]; 35 | for (int i = nfromRedisHost; i <= ntoRedisHost; i++) { 36 | String redisHost = String.format("redis%02d", i); 37 | retArr[i-nfromRedisHost] = redisHost; 38 | } 39 | return retArr; 40 | } 41 | public static void main(String[] args) { 42 | 43 | try { 44 | String[] redisHosts = getRedisQueueHostnames(); 45 |
System.err.printf("%d redis hosts\n", redisHosts.length); 46 | for (int i = 0; i < redisHosts.length; i++) { 47 | System.err.printf(" redisHosts[%d] = '%s'\n", i, redisHosts[i]); 48 | } 49 | 50 | Jedis[] redisArr = new Jedis[redisHosts.length]; 51 | for (int i = 0; i < redisHosts.length; i++) { 52 | redisArr[i] = connectToRedis(redisHosts[i]); 53 | } 54 | 55 | Connection dbConn = connectToDB("store"); 56 | 57 | System.err.println("Watching vote queue"); 58 | 59 | while (true) { 60 | for (int i = 0; i < redisArr.length; i++) { 61 | Jedis redis = redisArr[i]; 62 | String voteJSON = redis.blpop(0, "votes").get(1); 63 | JSONObject voteData = new JSONObject(voteJSON); 64 | String voterID = voteData.getString("voter_id"); 65 | String vote = voteData.getString("vote"); 66 | long epochMillis = voteData.getLong("ts"); 67 | System.err.printf("Processing vote for '%s' by '%s' from '%d': ", vote, voterID, epochMillis); 68 | updateVote(dbConn, voterID, vote, epochMillis); 69 | } 70 | } 71 | } catch (SQLException e) { 72 | e.printStackTrace(); 73 | System.exit(1); 74 | } catch (Exception e) { 75 | e.printStackTrace(); 76 | System.exit(1); 77 | } 78 | } 79 | 80 | static void updateVote(Connection dbConn, String voterID, String vote, long epochMillis) throws SQLException { 81 | Timestamp ts = new Timestamp(epochMillis); 82 | 83 | PreparedStatement insert = dbConn.prepareStatement( 84 | "INSERT INTO votes (id, vote, ts) VALUES (?, ?, ?)"); 85 | insert.setString(1, voterID); 86 | insert.setString(2, vote); 87 | insert.setTimestamp(3, ts); 88 | 89 | try { 90 | insert.executeUpdate(); 91 | System.err.printf("successful insert for '%s'\n", voterID); 92 | } catch (SQLException e) { 93 | PreparedStatement update = dbConn.prepareStatement( 94 | "UPDATE votes SET vote = ? WHERE id = ? 
AND ts < ?"); 95 | update.setString(1, vote); 96 | update.setString(2, voterID); 97 | update.setTimestamp(3, ts); 98 | int rowsAffected = update.executeUpdate(); 99 | System.err.printf("%d rows updated for '%s'\n", rowsAffected, voterID); 100 | } 101 | } 102 | 103 | static Jedis connectToRedis(String host) { 104 | Jedis conn = new Jedis(host); 105 | 106 | while (true) { 107 | try { 108 | conn.keys("*"); 109 | break; 110 | } catch (JedisConnectionException e) { 111 | System.err.println("Failed to connect to redis - retrying"); 112 | sleep(1000); 113 | } 114 | } 115 | 116 | System.err.println("Connected to redis"); 117 | return conn; 118 | } 119 | 120 | static Connection connectToDB(String host) throws SQLException { 121 | Connection conn = null; 122 | String password = "pg8675309"; 123 | 124 | try { 125 | 126 | Class.forName("org.postgresql.Driver"); 127 | String url = "jdbc:postgresql://" + host + "/postgres"; 128 | 129 | while (conn == null) { 130 | try { 131 | //Properties props = new Properties(); 132 | //props.setProperty("user","postgres"); 133 | conn = DriverManager.getConnection(url, "postgres", password); 134 | } catch (SQLException e) { 135 | System.err.println("Failed to connect to db - retrying"); 136 | sleep(1000); 137 | } 138 | } 139 | 140 | PreparedStatement st = conn.prepareStatement( 141 | "CREATE TABLE IF NOT EXISTS votes (id VARCHAR(255) NOT NULL UNIQUE, vote VARCHAR(255) NOT NULL, ts TIMESTAMP DEFAULT NOW())"); 142 | st.executeUpdate(); 143 | 144 | } catch (ClassNotFoundException e) { 145 | e.printStackTrace(); 146 | System.exit(1); 147 | } 148 | 149 | return conn; 150 | } 151 | 152 | static void sleep(long duration) { 153 | try { 154 | Thread.sleep(duration); 155 | } catch (InterruptedException e) { 156 | System.exit(1); 157 | } 158 | } 159 | } 160 | -------------------------------------------------------------------------------- /web-vote-app/Dockerfile: -------------------------------------------------------------------------------- 1 | # 
Using official python runtime base image 2 | FROM python:2.7 3 | 4 | # Set the application directory 5 | WORKDIR /app 6 | 7 | # Install our requirements.txt 8 | ADD requirements.txt /app/requirements.txt 9 | RUN pip install -r requirements.txt 10 | 11 | # Copy our code from the current folder to /app inside the container 12 | ADD . /app 13 | 14 | # Make port 80 available for links and/or publish 15 | EXPOSE 80 16 | 17 | # Define our command to be run when launching the container 18 | CMD ["python", "app.py"] 19 | -------------------------------------------------------------------------------- /web-vote-app/app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask 2 | from flask import render_template 3 | from flask import request 4 | from flask import make_response 5 | from utils import connect_to_redis 6 | import os 7 | import socket 8 | import random 9 | import json 10 | import time 11 | 12 | option_a = os.getenv('OPTION_A', "Cats") 13 | option_b = os.getenv('OPTION_B', "Dogs") 14 | hostname = socket.gethostname() 15 | 16 | db_server = "redis%s" % os.environ['WEB_VOTE_NUMBER'] 17 | redis = connect_to_redis(db_server) 18 | app = Flask(__name__) 19 | 20 | @app.route("/env", methods=['GET']) 21 | def dump_env(): 22 | s = '' 23 | for key in os.environ.keys(): 24 | s = "%s%30s=%s\n" % (s, key,os.environ[key]) 25 | resp = make_response(render_template( 26 | 'env.html', 27 | s=s 28 | )) 29 | return resp 30 | 31 | @app.route("/", methods=['POST','GET']) 32 | def index(): 33 | voter_id = request.cookies.get('voter_id') 34 | if not voter_id: 35 | voter_id = hex(random.getrandbits(64))[2:-1] 36 | 37 | vote = None 38 | 39 | if request.method == 'POST': 40 | vote = request.form['vote'] 41 | epoch_time_ms = long(time.time()*1000) 42 | data = json.dumps({'voter_id': voter_id, 'vote': vote, 'ts': epoch_time_ms}) 43 | redis.rpush('votes', data) 44 | 45 | resp = make_response(render_template( 46 | 'index.html', 47 | 
option_a=option_a, 48 | option_b=option_b, 49 | hostname=hostname, 50 | node="web%s" % os.environ['WEB_VOTE_NUMBER'], 51 | vote=vote, 52 | )) 53 | resp.set_cookie('voter_id', voter_id) 54 | return resp 55 | 56 | 57 | if __name__ == "__main__": 58 | app.run(host='0.0.0.0', port=80, debug=True) 59 | -------------------------------------------------------------------------------- /web-vote-app/requirements.txt: -------------------------------------------------------------------------------- 1 | Flask 2 | Redis -------------------------------------------------------------------------------- /web-vote-app/static/stylesheets/style.css: -------------------------------------------------------------------------------- 1 | @import url(//fonts.googleapis.com/css?family=Open+Sans:400,700,600); 2 | 3 | *{ 4 | box-sizing:border-box; 5 | } 6 | html,body{ 7 | margin: 0; 8 | padding: 0; 9 | background-color: #F7F8F9; 10 | height: 100vh; 11 | font-family: 'Open Sans'; 12 | } 13 | 14 | button{ 15 | border-radius: 0; 16 | width: 100%; 17 | height: 50%; 18 | } 19 | 20 | button[type="submit"] { 21 | -webkit-appearance:none; -webkit-border-radius:0; 22 | } 23 | 24 | button i{ 25 | float: right; 26 | padding-right: 30px; 27 | margin-top: 3px; 28 | } 29 | 30 | button.a{ 31 | background-color: #1aaaf8; 32 | } 33 | 34 | button.b{ 35 | background-color: #00cbca; 36 | } 37 | 38 | #tip{ 39 | text-align: left; 40 | color: #c0c9ce; 41 | font-size: 14px; 42 | } 43 | 44 | #hostname{ 45 | position: absolute; 46 | bottom: 100px; 47 | right: 0; 48 | left: 0; 49 | color: #8f9ea8; 50 | font-size: 14px; 51 | } 52 | 53 | #content-container{ 54 | z-index: 2; 55 | position: relative; 56 | margin: 0 auto; 57 | display: table; 58 | padding: 10px; 59 | max-width: 940px; 60 | height: 100%; 61 | } 62 | #content-container-center{ 63 | display: table-cell; 64 | text-align: center; 65 | } 66 | 67 | #content-container-center h3{ 68 | color: #254356; 69 | } 70 | 71 | #choice{ 72 | transition: all 300ms linear; 73 | 
line-height: 1.3em; 74 | display: inline; 75 | vertical-align: middle; 76 | font-size: 3em; 77 | } 78 | #choice a{ 79 | text-decoration:none; 80 | } 81 | #choice a:hover, #choice a:focus{ 82 | outline:0; 83 | text-decoration:underline; 84 | } 85 | 86 | #choice button{ 87 | display: block; 88 | height: 80px; 89 | width: 330px; 90 | border: none; 91 | color: white; 92 | text-transform: uppercase; 93 | font-size:18px; 94 | font-weight: 700; 95 | margin-top: 10px; 96 | margin-bottom: 10px; 97 | text-align: left; 98 | padding-left: 50px; 99 | } 100 | 101 | #choice button.a:hover{ 102 | background-color: #1488c6; 103 | } 104 | 105 | #choice button.b:hover{ 106 | background-color: #00a2a1; 107 | } 108 | 109 | #choice button.a:focus{ 110 | background-color: #1488c6; 111 | } 112 | 113 | #choice button.b:focus{ 114 | background-color: #00a2a1; 115 | } 116 | 117 | #background-stats{ 118 | z-index:1; 119 | height:100%; 120 | width:100%; 121 | position:absolute; 122 | } 123 | #background-stats div{ 124 | transition: width 400ms ease-in-out; 125 | display:inline-block; 126 | margin-bottom:-4px; 127 | width:50%; 128 | height:100%; 129 | } 130 | -------------------------------------------------------------------------------- /web-vote-app/templates/env.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
{{s}}
13 | 14 | 15 | -------------------------------------------------------------------------------- /web-vote-app/templates/index.html: -------------------------------------------------------------------------------- 1 | <!DOCTYPE html> 2 | <html> 3 | <head> 4 | <meta charset="utf-8"> 5 | <title>{{option_a}} vs {{option_b}}!</title> 6 | <link rel='stylesheet' href='/static/stylesheets/style.css'/> 7 | <link href='//fonts.googleapis.com/css?family=Open+Sans:400,700,600' rel='stylesheet' type='text/css'/> 8 | <meta name="viewport" content="width=device-width, initial-scale=1.0"> 9 | </head> 10 | <body> 11 | <div id="content-container"> 12 | <div id="content-container-center"> 13 | <h3>{{option_a}} vs {{option_b}}!</h3> 14 | <form id="choice" name="form" method="POST" action="/"> 15 | <button type="submit" name="vote" class="a" value="a">{{option_a}}</button> 16 | <button type="submit" name="vote" class="b" value="b">{{option_b}}</button> 17 | </form> 18 | <div id="tip"> 19 | (Tip: you can change your vote) 20 | </div> 21 | <div id="hostname"> 22 | Vote processed by {{hostname}} ({{node}}) 23 | </div> 24 | </div> 25 | </div> 26 | 27 | {% if vote %} 28 | <script> 29 | var chosen = document.querySelector('button.' + "{{vote}}"); 30 | if (chosen) { chosen.focus(); } 31 | </script> 32 | {% endif %} 33 | </body> 34 | </html> -------------------------------------------------------------------------------- /web-vote-app/utils/__init__.py: -------------------------------------------------------------------------------- 1 | import time 2 | from redis import Redis, ConnectionError 3 | 4 | 5 | def connect_to_redis(host): 6 | time.sleep(2) 7 | print "Connecting to redis" 8 | 9 | while True: 10 | try: 11 | redis = Redis(host=host, db=0) 12 | redis.ping() 13 | print "Connected to redis" 14 | return redis 15 | except ConnectionError: 16 | print "Failed to connect to redis - retrying" 17 | time.sleep(1) 18 | --------------------------------------------------------------------------------
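Taken together, the files above implement one message flow: web-vote-app/app.py serializes each vote as JSON and RPUSHes it onto a Redis list, and vote-worker's Worker.java BLPOPs messages and INSERTs a row keyed by voter id, falling back to an UPDATE guarded by `ts < ?` when the voter has voted before. The sketch below (Python 3 for brevity, even though the app itself is Python 2) exercises that flow with a plain deque and dict standing in for Redis and Postgres; all names here (`queue`, `votes_table`, `cast_vote`, `process_one`) are illustrative and not part of the repo.

```python
import json
from collections import deque

# Stand-ins for the real services, so the flow runs without Redis/Postgres:
queue = deque()      # plays the "votes" list that web-vote-app RPUSHes onto
votes_table = {}     # plays the Postgres votes table: voter id -> (vote, ts)
clock = [0]          # deterministic fake timestamp (the real app uses epoch ms)

def cast_vote(voter_id, vote):
    """Like web-vote-app/app.py: serialize the vote and enqueue it."""
    clock[0] += 1
    data = json.dumps({'voter_id': voter_id, 'vote': vote, 'ts': clock[0]})
    queue.append(data)                     # app.py: redis.rpush('votes', data)

def process_one():
    """Like vote-worker: pop one message, insert or update the voter's row."""
    msg = json.loads(queue.popleft())      # Worker.java: redis.blpop(0, "votes")
    voter_id, vote, ts = msg['voter_id'], msg['vote'], msg['ts']
    row = votes_table.get(voter_id)
    # Worker.java tries INSERT first; on a duplicate id it falls back to
    # UPDATE ... WHERE ts < new ts, so only a newer vote replaces the old one.
    if row is None or row[1] < ts:
        votes_table[voter_id] = (vote, ts)

cast_vote('deadbeef', 'a')
cast_vote('deadbeef', 'b')   # the same voter changes their vote
while queue:
    process_one()

print(votes_table['deadbeef'][0])   # prints: b
```

One simplification to note: this sketch stores the new timestamp on update, whereas Worker.java's UPDATE statement changes only the `vote` column and leaves `ts` at its original value.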