├── .gitignore
├── Makefile
├── conf
│   └── docker.tpl
├── variables.tf
├── scripts
│   ├── install-docker-ce.sh
│   └── fetch-tokens.sh
├── main.tf
├── outputs.tf
├── LICENSE
├── managers.tf
├── security-groups.tf
├── workers.tf
└── README.md
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Gogland
.idea/

# Compiled files
*.tfstate
*.tfstate.backup

# Module directory
.terraform/
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
SHELL:=/bin/bash

init:
	@brew update
	@brew install terraform
	@terraform -v
	@brew install jq
	@jq --version
	@terraform init

reset:
	@terraform destroy -force
	@terraform apply
--------------------------------------------------------------------------------
/conf/docker.tpl:
--------------------------------------------------------------------------------
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// \
  -H tcp://${ip}:2375 \
  --storage-driver=overlay2 \
  --dns 8.8.4.4 --dns 8.8.8.8 \
  --log-driver json-file \
  --log-opt max-size=50m --log-opt max-file=10 \
  --experimental=true \
  --metrics-addr ${ip}:9323
--------------------------------------------------------------------------------
/variables.tf:
--------------------------------------------------------------------------------
variable "docker_version" {
  default = "17.06.0~ce-0~ubuntu"
}

variable "region" {
  default = "ams1"
}

variable "manager_instance_type" {
  default = "VC1S"
}

variable "worker_instance_type" {
  default = "VC1S"
}

variable "worker_instance_count" {
  default = 2
}

variable "docker_api_ip" {
  default = "127.0.0.1"
}
--------------------------------------------------------------------------------
/scripts/install-docker-ce.sh:
--------------------------------------------------------------------------------
#!/usr/bin/env bash

DOCKER_VERSION=$1

# set up the Docker repository
apt-get -qq update
apt-get -qq install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

# install Docker CE
apt-get -q update -y
apt-get -q install -y docker-ce="$DOCKER_VERSION"
--------------------------------------------------------------------------------
/main.tf:
--------------------------------------------------------------------------------
provider "scaleway" {
  region = "${var.region}"
}

// Using the Rancher bootscript since the Scaleway Docker bootscript is missing IPVS_NFCT and IPVS_RR
// https://github.com/moby/moby/issues/28168
data "scaleway_bootscript" "rancher" {
  architecture = "x86_64"
  //name_filter = "docker"
  name = "x86_64 mainline 4.9.48 rev1"
}

data "scaleway_image" "xenial" {
  architecture = "x86_64"
  name         = "Ubuntu Xenial"
}

data "template_file" "docker_conf" {
  template = "${file("conf/docker.tpl")}"

  vars {
    ip = "${var.docker_api_ip}"
  }
}
--------------------------------------------------------------------------------
/outputs.tf:
--------------------------------------------------------------------------------
output "swarm_manager_public_ip" {
  value = "${scaleway_ip.swarm_manager_ip.0.ip}"
}

output "swarm_manager_private_ip" {
  value = "${scaleway_server.swarm_manager.0.private_ip}"
}

output "swarm_manager_token" {
  value = "${data.external.swarm_tokens.result.manager}"
}

output "swarm_workers_public_ip" {
  value = "${concat(scaleway_server.swarm_worker.*.name, scaleway_server.swarm_worker.*.public_ip)}"
}

output "swarm_workers_private_ip" {
  value = "${concat(scaleway_server.swarm_worker.*.name, scaleway_server.swarm_worker.*.private_ip)}"
}

output "swarm_worker_token" {
  value = "${data.external.swarm_tokens.result.worker}"
}

output "workspace" {
  value = "${terraform.workspace}"
}
--------------------------------------------------------------------------------
/scripts/fetch-tokens.sh:
--------------------------------------------------------------------------------
#!/usr/bin/env bash

# Processing JSON in shell scripts
# https://www.terraform.io/docs/providers/external/data_source.html#processing-json-in-shell-scripts

# Exit if any of the intermediate steps fail
set -e

# Extract the "host" argument from the input into the HOST shell variable
eval "$(jq -r '@sh "HOST=\(.host)"')"

# Fetch the manager join token
MANAGER=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    "root@$HOST" docker swarm join-token manager -q)

# Fetch the worker join token
WORKER=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    "root@$HOST" docker swarm join-token worker -q)

# Produce a JSON object containing the tokens
jq -n --arg manager "$MANAGER" --arg worker "$WORKER" \
    '{"manager":$manager,"worker":$worker}'
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2017 Stefan Prodan

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/managers.tf:
--------------------------------------------------------------------------------
resource "scaleway_ip" "swarm_manager_ip" {
  count = 1
}

resource "scaleway_server" "swarm_manager" {
  count          = 1
  name           = "${terraform.workspace}-manager-${count.index + 1}"
  image          = "${data.scaleway_image.xenial.id}"
  type           = "${var.manager_instance_type}"
  bootscript     = "${data.scaleway_bootscript.rancher.id}"
  security_group = "${scaleway_security_group.swarm_managers.id}"
  public_ip      = "${element(scaleway_ip.swarm_manager_ip.*.ip, count.index)}"

  connection {
    type = "ssh"
    user = "root"
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /etc/systemd/system/docker.service.d",
    ]
  }

  provisioner "file" {
    content     = "${data.template_file.docker_conf.rendered}"
    destination = "/etc/systemd/system/docker.service.d/docker.conf"
  }

  provisioner "file" {
    source      = "scripts/install-docker-ce.sh"
    destination = "/tmp/install-docker-ce.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/install-docker-ce.sh",
      "/tmp/install-docker-ce.sh ${var.docker_version}",
      "docker swarm init --advertise-addr ${self.private_ip}",
    ]
  }
}
--------------------------------------------------------------------------------
/security-groups.tf:
--------------------------------------------------------------------------------
resource "scaleway_security_group" "swarm_managers" {
  name        = "swarm_managers"
  description = "Allow HTTP/S and SSH traffic"
}

resource "scaleway_security_group_rule" "ssh_accept" {
  security_group = "${scaleway_security_group.swarm_managers.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 22
}

resource "scaleway_security_group_rule" "http_accept" {
  security_group = "${scaleway_security_group.swarm_managers.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}

resource "scaleway_security_group_rule" "https_accept" {
  security_group = "${scaleway_security_group.swarm_managers.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 443
}

resource "scaleway_security_group" "swarm_workers" {
  name        = "swarm_workers"
  description = "Allow SSH traffic"
}

resource "scaleway_security_group_rule" "ssh_accept_workers" {
  security_group = "${scaleway_security_group.swarm_workers.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 22
}
--------------------------------------------------------------------------------
/workers.tf:
--------------------------------------------------------------------------------
resource "scaleway_ip" "swarm_worker_ip" {
  count = "${var.worker_instance_count}"
}

resource "scaleway_server" "swarm_worker" {
  count          = "${var.worker_instance_count}"
  name           = "${terraform.workspace}-worker-${count.index + 1}"
  image          = "${data.scaleway_image.xenial.id}"
  type           = "${var.worker_instance_type}"
  bootscript     = "${data.scaleway_bootscript.rancher.id}"
  security_group = "${scaleway_security_group.swarm_workers.id}"
  public_ip      = "${element(scaleway_ip.swarm_worker_ip.*.ip, count.index)}"

  connection {
    type = "ssh"
    user = "root"
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /etc/systemd/system/docker.service.d",
    ]
  }

  provisioner "file" {
    content     = "${data.template_file.docker_conf.rendered}"
    destination = "/etc/systemd/system/docker.service.d/docker.conf"
  }

  provisioner "file" {
    source      = "scripts/install-docker-ce.sh"
    destination = "/tmp/install-docker-ce.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/install-docker-ce.sh",
      "/tmp/install-docker-ce.sh ${var.docker_version}",
      "docker swarm join --token ${data.external.swarm_tokens.result.worker} ${scaleway_server.swarm_manager.0.private_ip}:2377",
    ]
  }

  # drain worker on destroy
  provisioner "remote-exec" {
    when = "destroy"

    inline = [
      "docker node update --availability drain ${self.name}",
    ]

    on_failure = "continue"

    connection {
      type = "ssh"
      user = "root"
      host = "${scaleway_ip.swarm_manager_ip.0.ip}"
    }
  }

  # leave swarm on destroy
  provisioner "remote-exec" {
    when = "destroy"

    inline = [
      "docker swarm leave",
    ]

    on_failure = "continue"
  }

  # remove node on destroy
  provisioner "remote-exec" {
    when = "destroy"

    inline = [
      "docker node rm --force ${self.name}",
    ]

    on_failure = "continue"

    connection {
      type = "ssh"
      user = "root"
      host = "${scaleway_ip.swarm_manager_ip.0.ip}"
    }
  }
}

data "external" "swarm_tokens" {
  program = ["./scripts/fetch-tokens.sh"]

  query = {
    host = "${scaleway_ip.swarm_manager_ip.0.ip}"
  }

  depends_on = ["scaleway_server.swarm_manager"]
}
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# scaleway-swarm-terraform

Automating Docker Swarm cluster operations with the Terraform Scaleway provider.

### Initial setup

Clone the repository and install the dependencies:

```bash
$ git clone https://github.com/stefanprodan/scaleway-swarm-terraform.git
$ cd scaleway-swarm-terraform

# requires brew
$ make init
```

Running `make init` will install Terraform and jq using Homebrew and pull the required Terraform modules.
If you are on Linux, install the Terraform and jq packages, then run `terraform init`.
Note that you'll need Terraform v0.10 or newer to run this project.

Before running the project, you'll have to create an access token for Terraform to connect to the Scaleway API.
Using the token and your access key, create two environment variables:

```bash
$ export SCALEWAY_ORGANIZATION=""
$ export SCALEWAY_TOKEN=""
```

### Usage

Create a Docker Swarm cluster with one manager and two workers:

```bash
# create a workspace
terraform workspace new dev

# generate the plan
terraform plan

# run the plan
terraform apply
```

This will do the following:

* reserves a public IP for each node
* creates a security group for the manager node allowing SSH and HTTP/S inbound traffic
* creates a security group for the worker nodes allowing SSH inbound traffic
* provisions three VC1S servers with Ubuntu 16.04 LTS and the Rancher boot script
* starts the manager node and installs Docker CE using the local SSH agent
* customizes the Docker daemon systemd config by enabling the experimental features and the metrics endpoint
* initializes the manager node as a Docker Swarm manager and extracts the join tokens
* starts the worker nodes in parallel and sets up Docker CE the same way as on the manager node
* joins the worker nodes to the cluster using the manager node's private IP

Swarm node names follow the `<workspace>-<role>-<index>` convention, so
running the project in the dev workspace will create three nodes: dev-manager-1, dev-worker-1, dev-worker-2.
If you don't create a workspace, you'll be running on the default one and your node prefix will be `default`.
You can have multiple workspaces, each with its own state, so you can run different Docker Swarm clusters in parallel.

Customize the cluster specs via Terraform variables:

```bash
terraform apply \
    -var docker_version=17.06.0~ce-0~ubuntu \
    -var region=ams1 \
    -var manager_instance_type=VC1S \
    -var worker_instance_type=VC1S \
    -var worker_instance_count=2
```

You can scale the Docker Swarm cluster up or down by modifying `worker_instance_count`.
On scale-up, all new nodes will join the current cluster.
When you scale down the workers, Terraform will first drain each node
and remove it from the swarm before destroying its resources.

After running the Terraform plan you'll see several output variables, such as the Swarm tokens,
the private and public IPs of each node, and the current workspace.
You can use the manager public IP output to connect via SSH and launch a service within the Swarm:

```bash
$ ssh root@$(terraform output swarm_manager_public_ip)

root@dev-manager-1:~# docker service create \
    --name nginx -dp 80:80 \
    --replicas 2 \
    --constraint 'node.role == worker' nginx

$ curl $(terraform output swarm_manager_public_ip)
```

You could also expose the Docker engine remote API and metrics endpoint on the public IP by running:

```bash
terraform apply -var docker_api_ip="0.0.0.0"
```

If you choose to do so, you should allow access to the API only from your own IP.
You'll have to add a security group rule for ports 2375 and 9323 to the managers and workers groups.
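As an illustrative sketch only (the rule name and the `203.0.113.0/24` range below are placeholders, not part of this project), such a rule could follow the same shape as the ones in `security-groups.tf`:

```hcl
# Hypothetical rule: allow the Docker remote API (2375) only from your own range.
# Add a similar rule for port 9323 (metrics) and repeat for the swarm_workers group.
resource "scaleway_security_group_rule" "docker_api_accept" {
  security_group = "${scaleway_security_group.swarm_managers.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "203.0.113.0/24" # placeholder: use your own IP/CIDR
  protocol  = "TCP"
  port      = 2375
}
```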

Test your settings by calling the API and the metrics endpoint:

```bash
$ curl $(terraform output swarm_manager_public_ip):2375/containers/json

$ curl $(terraform output swarm_manager_public_ip):9323/metrics
```

Tear down the whole infrastructure with:

```bash
terraform destroy -force
```

Please see my [blog post](https://stefanprodan.com/2017/terraform-docker-swarm-cluster-scaleway/) for more information.
--------------------------------------------------------------------------------