├── images
│   ├── flag.jpg
│   ├── swarm.png
│   └── trophy.jpg
├── docker-compose.yml
├── Troubleshooting.md
├── Enhancements.md
├── create_swarm_test_cluster.sh
├── setup_glusterfs_on_swarm.sh
└── README.md

/images/flag.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tomwillfixit/swarm-test-cluster/HEAD/images/flag.jpg
--------------------------------------------------------------------------------
/images/swarm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tomwillfixit/swarm-test-cluster/HEAD/images/swarm.png
--------------------------------------------------------------------------------
/images/trophy.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tomwillfixit/swarm-test-cluster/HEAD/images/trophy.jpg
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
version: "2"

services:
  # Local Docker registry that stores the images required by the unit tests.
  backend:
    image: registry:2
  # hamba load balancer: listens on port 5000 and forwards to the registry backend.
  frontend:
    image: jpetazzo/hamba
    command: 5000 backend:5000
    ports:
      - "127.0.0.1:5000:5000"
    depends_on:
      - backend
--------------------------------------------------------------------------------
/Troubleshooting.md:
--------------------------------------------------------------------------------
# Troubleshooting

During the setup you may encounter a few issues. All the issues that I hit, and their solutions, can be found below.

## 1 : AWS Auth Failure

If you see the following error then you might have clock skew :
```
Running pre-create checks...
Error with pre-create check: "AuthFailure: AWS was not able to validate the provided access credentials\n\tstatus code: 401, request id: "
```

Solution :
```
sudo apt-get install ntpdate
sudo ntpdate ntp.ubuntu.com
```
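If you want to confirm that clock skew really is the problem before changing anything, an optional read-only check is to ask an NTP server for the current offset. The `-q` flag only queries the server and does not set the clock.

```
# Query-only check of the local clock offset against Ubuntu's NTP servers.
# An offset of more than a few minutes will trigger the AWS AuthFailure above.
sudo ntpdate -q ntp.ubuntu.com

# Local time in UTC for comparison (AWS validates request signatures in UTC).
date -u
```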
## 2 : Unable to ssh into node

When a node is started, Docker Machine hangs at the SSH login. It looks like this :
```
Running pre-create checks...
Creating machine...
(172.0.5.2) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
```

Solution :
```
I needed to specify the subnet-id as well as the vpc-id at runtime.

--amazonec2-vpc-id=
--amazonec2-subnet-id=
```
--------------------------------------------------------------------------------
/Enhancements.md:
--------------------------------------------------------------------------------
# Enhancements

Test cluster provisioning will be largely manual for the first few attempts and will use the default settings in Docker Machine and Swarm.
I'll be keeping a note of areas which can be enhanced. As well as scaling seamlessly, the test cluster must be fast to provision.

## Use a pre-built AMI

Using the default AMI provided by Docker Machine caused the initial node startup to take several minutes. The setup process looks like :

```
Running pre-create checks...
Creating machine...
(node2) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
```

Suggestion : Pre-build a node AMI using Packer (https://www.packer.io/)

Using Ansible, Salt or good old bash, install Docker, pull the latest Docker Swarm image and install any other default tools that your tests may need.
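As a rough sketch of how the pre-built AMI would be consumed, the Docker Machine amazonec2 driver accepts an `--amazonec2-ami` flag, so node creation could skip most of the provisioning steps listed above. The AMI ID below is a placeholder for whatever your Packer build produces.

```
# Sketch only : start a node from a pre-built AMI (ami-xxxxxxxx is a placeholder).
docker-machine create --driver amazonec2 \
--amazonec2-ami=ami-xxxxxxxx \
--amazonec2-access-key=${AWS_ACCESS_KEY} \
--amazonec2-secret-key=${AWS_SECRET_KEY} \
--amazonec2-region=${AWS_REGION} \
--amazonec2-zone=${AWS_ZONE} \
--amazonec2-vpc-id=${AWS_VPC} \
--amazonec2-subnet-id=${AWS_SUBNET} \
node2
```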
--------------------------------------------------------------------------------
/create_swarm_test_cluster.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Set environment variables

export AWS_ACCESS_KEY=
export AWS_SECRET_KEY=
export AWS_REGION=
export AWS_ZONE=
export AWS_VPC=
export AWS_SUBNET=
export AWS_INSTANCE_TYPE=

# Hardcoded to t2.micro since it is just running a consul container
echo "Start Consul Machine and Service"
docker-machine create --driver amazonec2 \
--amazonec2-access-key=${AWS_ACCESS_KEY} \
--amazonec2-secret-key=${AWS_SECRET_KEY} \
--amazonec2-region=${AWS_REGION} \
--amazonec2-zone=${AWS_ZONE} \
--amazonec2-vpc-id=${AWS_VPC} \
--amazonec2-subnet-id=${AWS_SUBNET} \
--amazonec2-instance-type=t2.micro \
--amazonec2-tags="Name,consul-tc" \
consul-tc

eval $(docker-machine env consul-tc)

docker run --name consul --restart=always -p 8400:8400 -p 8500:8500 \
-p 55:53/udp -d progrium/consul -server -bootstrap-expect 1 -ui-dir /ui

docker-machine ls

echo "Start swarm-master-tc"

docker-machine create --driver amazonec2 \
--amazonec2-access-key=${AWS_ACCESS_KEY} \
--amazonec2-secret-key=${AWS_SECRET_KEY} \
--amazonec2-region=${AWS_REGION} \
--amazonec2-zone=${AWS_ZONE} \
--amazonec2-vpc-id=${AWS_VPC} \
--amazonec2-subnet-id=${AWS_SUBNET} \
--amazonec2-instance-type=${AWS_INSTANCE_TYPE} \
--amazonec2-tags="Name,swarm-master-tc" \
--swarm \
--swarm-master \
--swarm-discovery="consul://$(docker-machine ip consul-tc):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip consul-tc):8500" \
--engine-opt="cluster-advertise=eth0:2376" \
swarm-master-tc

docker-machine ls

echo "Start Swarm Nodes : swarm-node-tc-[1-4]"

for node_number in $(seq 1 4); do
docker-machine create --driver amazonec2 \
--amazonec2-access-key=${AWS_ACCESS_KEY} \
--amazonec2-secret-key=${AWS_SECRET_KEY} \
--amazonec2-region=${AWS_REGION} \
--amazonec2-zone=${AWS_ZONE} \
--amazonec2-vpc-id=${AWS_VPC} \
--amazonec2-subnet-id=${AWS_SUBNET} \
--amazonec2-instance-type=${AWS_INSTANCE_TYPE} \
--amazonec2-tags="Name,swarm-node-tc-${node_number}" \
--swarm \
--swarm-discovery="consul://$(docker-machine ip consul-tc):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip consul-tc):8500" \
--engine-opt="cluster-advertise=eth0:2376" \
swarm-node-tc-${node_number}
done

docker-machine ls

sleep 5

echo "Switching to swarm-master : docker-machine env --swarm swarm-master-tc"

eval $(docker-machine env --swarm swarm-master-tc)

docker info | grep "^Nodes:"

echo "Finished"
--------------------------------------------------------------------------------
/setup_glusterfs_on_swarm.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# If you have completed the tutorial you will have a swarm cluster that looks like :

# docker-machine ls
# NAME              ACTIVE      DRIVER      STATE     URL                      SWARM                      DOCKER    ERRORS
# swarm-master-tc   * (swarm)   amazonec2   Running   tcp://52.40.9.5:2376     swarm-master-tc (master)   v1.11.2
# swarm-node-tc-1   -           amazonec2   Running   tcp://52.40.4.21:2376    swarm-master-tc            v1.11.2
# swarm-node-tc-2   -           amazonec2   Running   tcp://52.40.23.18:2376   swarm-master-tc            v1.11.2
# swarm-node-tc-3   -           amazonec2   Running   tcp://52.39.9.21:2376    swarm-master-tc            v1.11.2
# swarm-node-tc-4   -           amazonec2   Running   tcp://52.40.13.12:2376   swarm-master-tc            v1.11.2

# This script depends on this cluster existing.
# This script will connect to each node and perform the following :

# Install glusterfs-server
# Append to /etc/hosts
# Run peer probe to detect nodes in glusterfs
# Create volume called ${volume_name}
# Mount Volume in each node

function usage {

    echo "Please provide the name of the shared volume you wish to create."
    exit 0

}

if [ $# -ne 1 ];then
    usage
else
    volume_name="$1"
fi

echo "Install glusterfs-server on each Swarm Node"

for node in `docker-machine ls -q |grep swarm`;
do
    echo "Installing glusterfs-server on ${node}"
    docker-machine ssh ${node} sudo apt-get install glusterfs-server -y
done

echo "Create swarm.hosts file containing Swarm Nodes ip and hostname"

if [ -f swarm.hosts ];then
    rm -f swarm.hosts
fi

for node in `docker-machine ls -q |grep swarm`;
do
    echo "Adding ${node} to swarm.hosts"
    ip=`docker-machine ip ${node}`
    echo "${ip} ${node}" >> swarm.hosts
done

echo "Append swarm.hosts to /etc/hosts in each Swarm Node"

for node in `docker-machine ls -q |grep swarm`;
do
    echo "Appending swarm.hosts to /etc/hosts"
    docker-machine scp swarm.hosts ${node}:/tmp/swarm.hosts
    # tee -a is used so that the append to /etc/hosts happens with root privileges
    docker-machine ssh ${node} "cat /tmp/swarm.hosts | sudo tee -a /etc/hosts"
done

echo "Run Gluster Peer Probe on swarm-master-tc"

for node in `docker-machine ls -q |grep swarm`;
do
    docker-machine ssh swarm-master-tc "sudo gluster peer probe ${node}"
done

echo "Checking Peer Status on swarm-master-tc"
docker-machine ssh swarm-master-tc "sudo gluster peer status"

echo "Creating Volume called : ${volume_name}"

docker-machine ssh swarm-master-tc "sudo gluster volume create ${volume_name} replica 5 transport tcp swarm-master-tc:/data/${volume_name} swarm-node-tc-1:/data/${volume_name} swarm-node-tc-2:/data/${volume_name} swarm-node-tc-3:/data/${volume_name} swarm-node-tc-4:/data/${volume_name} force"

echo "Starting Volume"

docker-machine ssh swarm-master-tc "sudo gluster volume start ${volume_name}"

echo "Mounting volume ${volume_name} on each Swarm Node"

for node in `docker-machine ls -q |grep swarm`;
do
    docker-machine ssh ${node} "sudo mkdir /home/${volume_name}; \
    echo \"Command : sudo mount -t glusterfs ${node}:${volume_name} /home/${volume_name}\"; \
    sudo mount -t glusterfs ${node}:${volume_name} /home/${volume_name}; \
    echo \"Creating file ${node} in /home/${volume_name}\"; \
    sudo touch /home/${volume_name}/${node}; \
    ls /home/${volume_name}"
done
--------------------------------------------------------------------------------
echo \"Creating file ${node} in /home/${volume_name}\"; \ 91 | sudo touch /home/${volume_name}/${node}; \ 92 | ls /home/${volume_name}" 93 | done 94 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # swarm-test-cluster 2 | 3 | Create a simple Swarm Cluster for the purpose of running unit tests 4 | 5 | ![swarm](images/swarm.png) 6 | 7 | ## Goal 8 | 9 | Create a seamlessly scalable test environment on which containerised unit tests can be run in a single container or across hundreds of containers. 10 | 11 | Using 2 tools from the Docker ecosystem (Machine and Swarm) we can create a scalable test environment in AWS and seamlessly scale our tests across a Swarm cluster. 12 | 13 | This walkthrough will include a number of "Checkpoints" which will help break the significant steps into smaller chunks. If you hit a Checkpoint and everything has worked as expected then continue to the next step. If you hit issues before reaching a Checkpoint then check for a solution in [Troubleshooting](Troubleshooting.md) before continuing. 14 | 15 | There are a number of areas of enhancement that will become apparent as you work through the manual steps. These will be added to [Enhancements] (Enhancements.md). 16 | 17 | ## Background 18 | 19 | We have been running unit tests in containers for a number of years and while this provides us with better test stability, portability and reduced resource footprint there are some issues with the current approach. 20 | 21 | * We use a static testlist which defines the number of containers that will be required by a test run 22 | * A test run is currently executed on a single docker host. 23 | * The number of containers used for a test run are restricted by the docker hosts memory, cpu, network and disk i/o 24 | 25 | We have essentially hit a glass ceiling in our test execution times due to these restrictions. 26 | 27 | ## Improvements 28 | 29 | The new approach will address the issues listed above. 30 | 31 | * **Dynamic testlist** 32 | 33 | Some work will be required at the testsuite level to create a dynamic list of tests which containers will query at runtime. We will store the testlist in a single mysql db container accessible to the entire cluster. Each new test container will query which tests need to be run and will reserve a "chunk" of tests which will be run within the test container. These "chunks" will be configurable to allow for larger numbers of tests to be run in each test container. Note : It is important that the unit tests do not have dependencies on previously run tests. 34 | 35 | * **Swarm Test Cluster** 36 | 37 | Using Docker Machine and Docker Swarm we will spread test execution across multiple Docker hosts and leverage these additional resources to reduce test execution time. 38 | 39 | * **Local Docker Registry** 40 | 41 | The Swarm Cluster will have a single docker registry which will store any images required by the unit tests. Each Docker host will pull from this local registry. This gives us the additional benefit of reducing our reliance on the private docker registry shared across the rest of the organisation. 42 | 43 | * **Consul Monitoring** 44 | 45 | We will use the progrium Consul docker image to monitor the test container health and to provide service discovery for shared test infrastructure containers within the cluster such as mysql, rabbitmq etc. 
## Requirements

* **Docker**
* **Docker Compose**
* **Docker Machine**
* **Docker Swarm**
* **AWS Account**
* **Packer** (Not needed right now. In future we will use Packer to create the AMI used within the cluster.)

## Troubleshooting

When setting up the Swarm cluster for the first time you may come across a few minor issues. I've documented all the issues and solutions in [Troubleshooting](Troubleshooting.md).

## Step 1 : Install tools

Note : This work is being done on Ubuntu 16.04 but will be ported to CentOS 7 in future.

Install **Docker** :
```
https://docs.docker.com/engine/installation/linux/ubuntulinux/
```

Install **Docker Compose** :
```
curl -L https://github.com/docker/compose/releases/download/1.7.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
```

Install **Docker Machine** :
```
curl -L https://github.com/docker/machine/releases/download/v0.6.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && chmod +x /usr/local/bin/docker-machine
```

Install **Docker Swarm** :

You can install the Swarm binary but it is easier to use the Swarm image. The command below will pull the latest Swarm image and print out the help text.
```
docker run swarm --help

Example Output :

A Docker-native clustering system

Version: 1.2.2 (34e3da3)

Options:
  --debug                   debug mode [$DEBUG]
  --log-level, -l "info"    Log level (options: debug, info, warn, error, fatal, panic)
  --experimental            enable experimental features
  --help, -h                show help
  --version, -v             print the version

Commands:
  create, c    Create a cluster
  list, l      List nodes in a cluster
  manage, m    Manage a docker cluster
  join, j      Join a docker cluster
  help         Shows a list of commands or help for one command

```
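Before moving on, an optional sanity check is to confirm that each tool responds. The exact versions reported will depend on when you install and may differ from the releases pinned above.

```
docker version
docker-compose version
docker-machine version
docker run --rm swarm --version
```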
# Checkpoint 1
![Trophy](images/trophy.jpg)
# Congratulations! You have all the tools installed !!


## Step 2 : Create Swarm Test Cluster

The test cluster (tc) will consist of 1 Consul Node, a Swarm Master and 4 Swarm Nodes.

* consul-tc
* swarm-master-tc
* swarm-node-tc-[1-4]

Docker Machine requires the following AWS options to start a Machine on AWS :
```
--amazonec2-access-key
--amazonec2-secret-key
--amazonec2-region
--amazonec2-zone
--amazonec2-vpc-id
--amazonec2-subnet-id
```

### Set Environment variables
```
export AWS_ACCESS_KEY=
export AWS_SECRET_KEY=
export AWS_REGION=
export AWS_ZONE=
export AWS_VPC=
export AWS_SUBNET=
export AWS_INSTANCE_TYPE=

```
### Start Consul Machine and Service
```
docker-machine create --driver amazonec2 \
--amazonec2-access-key=${AWS_ACCESS_KEY} \
--amazonec2-secret-key=${AWS_SECRET_KEY} \
--amazonec2-region=${AWS_REGION} \
--amazonec2-zone=${AWS_ZONE} \
--amazonec2-vpc-id=${AWS_VPC} \
--amazonec2-subnet-id=${AWS_SUBNET} \
--amazonec2-instance-type=${AWS_INSTANCE_TYPE} \
--amazonec2-tags="Name,consul-tc" \
consul-tc
```

### Start Consul service
```
eval $(docker-machine env consul-tc)

docker run --name consul --restart=always -p 8400:8400 -p 8500:8500 \
-p 55:53/udp -d progrium/consul -server -bootstrap-expect 1 -ui-dir /ui

```
## Verify Consul started
```
docker-machine ls

NAME        ACTIVE   DRIVER      STATE     URL                  SWARM   DOCKER    ERRORS
consul-tc   *        amazonec2   Running   tcp://*.*.*.*:2376           v1.11.1
```

# Checkpoint 2
![Trophy](images/trophy.jpg)
# Congratulations! You have Consul running !!

## Start Swarm Master
```
docker-machine create --driver amazonec2 \
--amazonec2-access-key=${AWS_ACCESS_KEY} \
--amazonec2-secret-key=${AWS_SECRET_KEY} \
--amazonec2-region=${AWS_REGION} \
--amazonec2-zone=${AWS_ZONE} \
--amazonec2-vpc-id=${AWS_VPC} \
--amazonec2-subnet-id=${AWS_SUBNET} \
--amazonec2-instance-type=${AWS_INSTANCE_TYPE} \
--amazonec2-tags="Name,swarm-master-tc" \
--swarm \
--swarm-master \
--swarm-discovery="consul://$(docker-machine ip consul-tc):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip consul-tc):8500" \
--engine-opt="cluster-advertise=eth0:2376" \
swarm-master-tc
```

## Verify Swarm Master started
```
docker-machine ls

NAME              ACTIVE   DRIVER      STATE     URL                  SWARM                      DOCKER    ERRORS
consul-tc         *        amazonec2   Running   tcp://*.*.*.*:2376                              v1.11.1
swarm-master-tc   -        amazonec2   Running   tcp://*.*.*.*:2376   swarm-master-tc (master)   v1.11.1
```
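Another optional check at this point is to ask the Swarm image itself which engines have registered in the Consul discovery backend, using the `list` command shown in the help output from Step 1. So far only the master's engine should be listed.

```
docker run --rm swarm list consul://$(docker-machine ip consul-tc):8500
```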
# Checkpoint 3
![Trophy](images/trophy.jpg)
# Congratulations! You have created a Swarm Master !!


## Start Swarm Nodes
```
for node_number in $(seq 1 4); do
docker-machine create --driver amazonec2 \
--amazonec2-access-key=${AWS_ACCESS_KEY} \
--amazonec2-secret-key=${AWS_SECRET_KEY} \
--amazonec2-region=${AWS_REGION} \
--amazonec2-zone=${AWS_ZONE} \
--amazonec2-vpc-id=${AWS_VPC} \
--amazonec2-subnet-id=${AWS_SUBNET} \
--amazonec2-instance-type=${AWS_INSTANCE_TYPE} \
--amazonec2-tags="Name,swarm-node-tc-${node_number}" \
--swarm \
--swarm-discovery="consul://$(docker-machine ip consul-tc):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip consul-tc):8500" \
--engine-opt="cluster-advertise=eth0:2376" \
swarm-node-tc-${node_number}
done
```

## Verify Swarm has started correctly
```
docker-machine ls

Switch to swarm-master :

eval $(docker-machine env --swarm swarm-master-tc)

List number of Nodes attached to Swarm :

docker info | grep "^Nodes:"
```

# Checkpoint 4
![Trophy](images/trophy.jpg)
# Congratulations! You now have a fully functional Swarm Cluster!!


## Step 3 : Create a docker-registry in the Swarm

Now that we have a Swarm running we will start a docker-registry that will allow each node in the cluster to access the same registry.

Switch to swarm-master (if not already done) :
```
eval $(docker-machine env --swarm swarm-master-tc)
```

Start the docker-registry :

Code based on : https://github.com/jpetazzo/orchestration-workshop/blob/master/bin/setup-all-the-things.sh
```
docker-compose up -d

Scale up the docker-registry frontend :

docker-compose scale frontend=5
```
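If you want to confirm the registry is reachable before pushing anything, registry:2 exposes a v2 HTTP API. The check below assumes one of the scaled hamba frontends has landed on swarm-master-tc (the point of scaling it out is to have a frontend on every node) and that curl is available on the node; the catalog will be empty until an image is pushed in Step 4.

```
docker-compose ps

docker-machine ssh swarm-master-tc "curl -s http://localhost:5000/v2/_catalog"
```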
# Checkpoint 5
![Trophy](images/trophy.jpg)
# Congratulations! You now have docker-registry running in your cluster!!

## Step 4 : Push to registry and run containers across the Swarm

In this step we will pull an Alpine Linux image, modify the image, push it to the registry, and then each cluster node will run containers based on this modified image.

```
eval $(docker-machine env --swarm swarm-master-tc)

docker pull alpine:latest
docker run -it --cidfile="container_id" alpine:latest /bin/sh

echo "Tests will run from this container" > /tmp/output
exit
```

Save the change :
```
docker commit `cat container_id` localhost:5000/alpine:modified
```

Push the new image to the local docker-registry :
```
docker push localhost:5000/alpine:modified
```

If you see the following error :

Put http://localhost:5000/v1/repositories/alpine/: dial tcp 127.0.0.1:5000: getsockopt: connection refused

Then you will need to log in to the AWS console and add port 5000 to the inbound rules of the docker-machine security group.


At this point we have a modified image which each of the nodes can run. Let's test this out.
```
for node_number in $(seq 1 4); do
docker run -d 127.0.0.1:5000/alpine:modified /bin/sh -c "while(true); do cat /tmp/output;sleep 60; done"
done
```

Now check where the containers are running :
```
CONTAINER ID        IMAGE                            COMMAND                  CREATED              STATUS              PORTS   NAMES
409cd9b4f853        127.0.0.1:5000/alpine:modified   "/bin/sh -c 'while(tr"   About a minute ago   Up About a minute           swarm-master-tc/serene_ride
7e6603dee9b1        127.0.0.1:5000/alpine:modified   "/bin/sh -c 'while(tr"   About a minute ago   Up About a minute           swarm-node-tc-4/loving_mcclintock
c5286db16822        127.0.0.1:5000/alpine:modified   "/bin/sh -c 'while(tr"   About a minute ago   Up About a minute           swarm-node-tc-3/pedantic_goodall
263d988962c9        127.0.0.1:5000/alpine:modified   "/bin/sh -c 'while(tr"   About a minute ago   Up About a minute           swarm-node-tc-1/small_sammet
eb33de99b305        127.0.0.1:5000/alpine:modified   "/bin/sh -c 'while(tr"   About a minute ago   Up About a minute           swarm-master-tc/drunk_payne

```

You can see from the container names that the containers have started across different nodes.

# Congratulations!
![Trophy](images/flag.jpg)
# You are now set up and ready to run your containerised unit tests across a Swarm Cluster!!

# Advanced : Create a GlusterFS shared volume between cluster nodes

For more details on what GlusterFS is, check out : https://www.gluster.org/

It is really useful to have a shared volume in the test cluster in which results and logs can be aggregated. It might also be preferable to store the source code to be tested on the shared volume. The following script will create a GlusterFS volume based on the Swarm cluster created above.

```
./setup_glusterfs_on_swarm.sh shared_volume
```

This will create a volume called shared_volume which will be mounted on /home/shared_volume on each of the cluster nodes.
The test containers can then mount this volume at run time.


## Teardown

If you want to tear down the Swarm Cluster you can run the following command. Warning : This will delete any EC2 instances that Machine created, so use at your own risk.
```
docker-machine rm -f $(docker-machine ls -q)
```

--------------------------------------------------------------------------------