├── .gitignore
├── CA
│   └── .gitkeep
├── Makefile
├── README.md
├── bin
│   ├── cert_creation
│   ├── create_CA
│   ├── generate_apiserver_cert
│   ├── generate_discovery_url.sh
│   ├── generate_worker_cert
│   ├── openssl.cnf
│   ├── setup
│   └── worker-openssl.cnf
├── config.tfvars.template
├── etcd
│   ├── main.tf
│   ├── outputs.tf
│   ├── templates
│   │   └── etcd.tpl
│   └── variables.tf
├── k8s
│   ├── main.tf
│   ├── outputs.tf
│   ├── templates
│   │   ├── apiserver.tpl
│   │   ├── kubelet.tpl
│   │   └── lb.tpl
│   └── variables.tf
├── main.tf
├── outputs.tf
├── ssl
│   └── .gitkeep
└── variables.tf
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *.tfvars
2 | *.tfstate.backup
3 | *.tfstate
4 | .envrc
5 | .terraform
6 | CA/
7 | ssl/
8 | 
--------------------------------------------------------------------------------
/CA/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/protochron/k8s-coreos-digitalocean/5aa957ba7400c20d5914a58a6391c36ca2084375/CA/.gitkeep
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
 1 | setup:
 2 | 	bin/setup
 3 | 
 4 | terraform-plan:
 5 | 	terraform plan -var-file config.tfvars -var-file secrets.tfvars
 6 | 
 7 | terraform-apply:
 8 | 	bin/generate_discovery_url.sh
 9 | 	terraform apply -var-file config.tfvars -var-file secrets.tfvars
10 | 
11 | terraform-destroy:
12 | 	terraform destroy -var-file config.tfvars -var-file secrets.tfvars
13 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # k8s-digitalocean-coreos
2 | This repo contains a few [Terraform](https://www.terraform.io/) modules that
3 | should let you spin up a production-ready Kubernetes cluster on DigitalOcean.
 4 | 
 5 | ## Requirements
 6 | * terraform
 7 | * A DigitalOcean API key with write access (directions
 8 |   [here](https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2))
 9 | * A DigitalOcean API key with read-only access (used for
10 |   [DropLan](https://github.com/tam7t/droplan))
11 | 
12 | ## Getting Started
13 | 
14 | Once you have your API keys handy, it's just a matter of setting up the shared
15 | configuration that the modules use.
16 | 
17 | To avoid repeated prompts for your DigitalOcean access token, set your API key
18 | as an environment variable:
19 | ```
20 | export DIGITALOCEAN_ACCESS_TOKEN="INSERT_TOKEN_HERE"
21 | export DIGITALOCEAN_TOKEN=$DIGITALOCEAN_ACCESS_TOKEN
22 | ```
23 | (We recommend using [direnv](https://github.com/direnv/direnv) and an `.envrc` file to make this easier.)
24 | 
25 | ### `config.tfvars`
26 | Copy the `config.tfvars.template` file to `config.tfvars`. Set the value of the
27 | `do_read_token` variable to a read-only DigitalOcean token for your account.
28 | Your Terraform-managed resources will automatically be assigned tags, but if you
29 | or your team need greater visibility into clusters, we provide a `resource_prefix`
30 | configuration variable that prepends its value to every created resource
31 | name.
32 | 
33 | Next, you'll need to get an etcd discovery token:
34 | 
35 | ```
36 | curl -w "\n" 'https://discovery.etcd.io/new?size=3'
37 | # Should return something like: https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3
38 | ```
39 | 
40 | Set the value of the `discovery_url` variable to the URL returned by the
41 | discovery API. This will let your etcd cluster bootstrap quickly using the
42 | token.
43 | 
44 | ### `secrets.tfvars`
45 | You will also need to provide a DigitalOcean API token with write access to
46 | your account, along with a comma-separated list of SSH key IDs or fingerprints
47 | associated with your account.
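The write token and key list are the only two values `secrets.tfvars` needs, so the file can be generated rather than hand-edited. A minimal sketch (the `DO_TOKEN` and `SSH_KEYS` environment-variable names are illustrative — nothing in this repo reads them; the fallback placeholders match the template in this README):

```shell
# Sketch: assemble secrets.tfvars from environment variables.
# DO_TOKEN and SSH_KEYS are illustrative names; export them first,
# or the placeholder values below are written instead.
DO_TOKEN="${DO_TOKEN:-DO_API_WRITE_TOKEN}"
SSH_KEYS="${SSH_KEYS:-YOUR_SSH_KEYS}"

cat <<EOF > secrets.tfvars
ssh_keys = "${SSH_KEYS}"
do_token = "${DO_TOKEN}"
EOF
```

Since `*.tfvars` is already in `.gitignore`, generating the file this way also reduces the chance of a live token ending up in version control.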
48 | 
49 | You should create a `secrets.tfvars` file with the following content, replacing the values as necessary:
50 | 
51 | ```
52 | ssh_keys = "YOUR_SSH_KEYS"
53 | do_token = "DO_API_WRITE_TOKEN"
54 | ```
55 | 
56 | ## Get the terraform modules
57 | ```
58 | terraform get
59 | ```
60 | 
61 | ## Create your etcd cluster
62 | ```
63 | terraform apply -var-file config.tfvars -var-file secrets.tfvars --target module.etcd
64 | ```
65 | 
66 | ## Create your Kubernetes cluster
67 | 
68 | ```
69 | terraform apply -var-file config.tfvars -var-file secrets.tfvars
70 | ```
71 | 
--------------------------------------------------------------------------------
/bin/cert_creation:
--------------------------------------------------------------------------------
 1 | #!/bin/bash -e
 2 | 
 3 | # create apiserver keypair
 4 | openssl genrsa -out apiserver-key.pem 2048
 5 | openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
 6 | openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
 7 | 
 8 | # generate kubernetes workers keypair
 9 | openssl genrsa -out ${WORKER_FQDN}-worker-key.pem 2048
10 | WORKER_IP=${WORKER_IP} openssl req -new -key ${WORKER_FQDN}-worker-key.pem -out ${WORKER_FQDN}-worker.csr -subj "/CN=${WORKER_FQDN}" -config worker-openssl.cnf
11 | WORKER_IP=${WORKER_IP} openssl x509 -req -in ${WORKER_FQDN}-worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out ${WORKER_FQDN}-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
12 | 
13 | # generate cluster admin keypair
14 | openssl genrsa -out admin-key.pem 2048
15 | openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
16 | openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
17 | 
--------------------------------------------------------------------------------
/bin/create_CA:
--------------------------------------------------------------------------------
 1 | #!/bin/bash -e
 2 | 
 3 | SCRIPT=`python -c "import os,sys; print(os.path.realpath(os.path.expanduser(sys.argv[1])))" "${BASH_SOURCE:-$0}"`
 4 | DIR=$(dirname $(dirname $SCRIPT))
 5 | 
 6 | mkdir -p $DIR/CA
 7 | 
 8 | # cluster root CA
 9 | openssl genrsa -out $DIR/CA/ca-key.pem 2048
10 | openssl req -x509 -new -nodes -key $DIR/CA/ca-key.pem -days 10000 -out $DIR/CA/ca.pem -subj "/CN=kube-ca"
11 | 
--------------------------------------------------------------------------------
/bin/generate_apiserver_cert:
--------------------------------------------------------------------------------
 1 | #!/bin/bash -e
 2 | 
 3 | SCRIPT=`python -c "import os,sys; print(os.path.realpath(os.path.expanduser(sys.argv[1])))" "${BASH_SOURCE:-$0}"`
 4 | DIR=$(dirname $(dirname $SCRIPT))
 5 | 
 6 | NODE_NAME="$1"
 7 | IP_ADDR="$2"
 8 | DNS_SERVICE_IP="$3"
 9 | APISERVER_SERVICE_IP="$4"
10 | HOST="$5"
11 | 
12 | tmpfile=$(mktemp)
13 | cat <<EOF > $tmpfile
14 | [req]
15 | req_extensions = v3_req
16 | distinguished_name = req_distinguished_name
17 | [req_distinguished_name]
18 | [ v3_req ]
19 | basicConstraints = CA:FALSE
20 | keyUsage = nonRepudiation, digitalSignature, keyEncipherment
21 | subjectAltName = @alt_names
22 | [alt_names]
23 | DNS.1 = kubernetes
24 | DNS.2 = kubernetes.default
25 | DNS.3 = kubernetes.default.svc
26 | DNS.4 = kubernetes.default.svc.cluster.local
27 | DNS.5 = ${HOST}
28 | IP.1 = ${DNS_SERVICE_IP}
29 | IP.2 = ${IP_ADDR}
30 | IP.3 = ${APISERVER_SERVICE_IP}
31 | EOF
32 | 
33 | mkdir -p $DIR/ssl
34 | 
35 | openssl genrsa -out $DIR/ssl/$NODE_NAME-key.pem 2048
36 | openssl req -new -key $DIR/ssl/$NODE_NAME-key.pem -out $DIR/ssl/$NODE_NAME.csr -subj "/CN=kube-apiserver" -config $tmpfile
37 | openssl x509 -req -in $DIR/ssl/$NODE_NAME.csr -CA $DIR/CA/ca.pem -CAkey $DIR/CA/ca-key.pem -CAcreateserial -out $DIR/ssl/$NODE_NAME.pem -days 365 -extensions v3_req -extfile $tmpfile
38 | 
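Together, `bin/create_CA` and `bin/generate_apiserver_cert` implement a plain openssl CA workflow: mint a root CA once, then sign per-node CSRs against it with a SAN config. A condensed, self-contained sketch of that flow — run in a throwaway directory, with illustrative SAN values and short lifetimes, assuming a stock `openssl` binary — that also verifies the issued certificate chains back to the CA:

```shell
# Condensed sketch of the CA + apiserver-cert flow; directory,
# file names, and SAN values are illustrative.
workdir=$(mktemp -d)
cd "$workdir"

# Root CA (as in bin/create_CA, with a shorter lifetime here)
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -x509 -new -nodes -key ca-key.pem -days 1 -out ca.pem -subj "/CN=kube-ca"

# SAN config, like the tempfile written by bin/generate_apiserver_cert
cat <<EOF > openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
IP.1 = 10.3.0.1
EOF

# Issue the apiserver certificate and check that it chains to the CA
openssl genrsa -out apiserver-key.pem 2048 2>/dev/null
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 1 -extensions v3_req -extfile openssl.cnf 2>/dev/null
openssl verify -CAfile ca.pem apiserver.pem   # expect: apiserver.pem: OK
```

The `-extensions v3_req -extfile` pair on the signing step is what carries the SANs onto the issued certificate; without it the apiserver cert would only have the CN and kubelets would reject it when connecting by service IP.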
--------------------------------------------------------------------------------
 1 | #!/bin/bash -e
 2 | 
 3 | #note - it appears the discovery url needs to be regenerated per run
 4 | SCRIPT=`python -c "import os,sys; print(os.path.realpath(os.path.expanduser(sys.argv[1])))" "${BASH_SOURCE:-$0}"`
 5 | DIR=$(dirname $(dirname $SCRIPT))
 6 | 
 7 | discovery_url=$(curl -s -w "\n" 'https://discovery.etcd.io/new?size=3')
 8 | 
 9 | sed -i'.bak' -e "s|DISCOVERY_URL|$discovery_url|" $DIR/config.tfvars
10 | 
11 | rm $DIR/config.tfvars.bak
12 | 
--------------------------------------------------------------------------------
/bin/generate_worker_cert:
--------------------------------------------------------------------------------
 1 | #!/bin/bash -e
 2 | 
 3 | SCRIPT=`python -c "import os,sys; print(os.path.realpath(os.path.expanduser(sys.argv[1])))" "${BASH_SOURCE:-$0}"`
 4 | DIR=$(dirname $(dirname $SCRIPT))
 5 | 
 6 | NODE_NAME="$1"
 7 | IP_ADDR="$2"
 8 | HOST="$3"
 9 | 
10 | tmpfile=$(mktemp)
11 | cat <<EOF > $tmpfile
12 | [req]
13 | req_extensions = v3_req
14 | distinguished_name = req_distinguished_name
15 | [req_distinguished_name]
16 | [ v3_req ]
17 | basicConstraints = CA:FALSE
18 | keyUsage = nonRepudiation, digitalSignature, keyEncipherment
19 | subjectAltName = @alt_names
20 | [alt_names]
21 | IP.1 = ${IP_ADDR}
22 | DNS.1 = ${HOST}
23 | EOF
24 | 
25 | mkdir -p $DIR/ssl
26 | 
27 | openssl genrsa -out $DIR/ssl/$NODE_NAME-key.pem 2048
28 | openssl req -new -key $DIR/ssl/$NODE_NAME-key.pem -out $DIR/ssl/$NODE_NAME.csr -subj "/CN=${NODE_NAME}" -config $tmpfile
29 | openssl x509 -req -in $DIR/ssl/$NODE_NAME.csr -CA $DIR/CA/ca.pem -CAkey $DIR/CA/ca-key.pem -CAcreateserial -out $DIR/ssl/$NODE_NAME.pem -days 365 -extensions v3_req -extfile $tmpfile
30 | 
--------------------------------------------------------------------------------
/bin/openssl.cnf:
--------------------------------------------------------------------------------
 1 | [req]
 2 | req_extensions = v3_req
 3 | distinguished_name = req_distinguished_name
 4 | [req_distinguished_name]
 5 | [ v3_req ]
 6 | basicConstraints = CA:FALSE
 7 | keyUsage = nonRepudiation, digitalSignature, keyEncipherment
 8 | subjectAltName = @alt_names
 9 | [alt_names]
10 | DNS.1 = kubernetes
11 | DNS.2 = kubernetes.default
12 | DNS.3 = kubernetes.default.svc
13 | DNS.4 = kubernetes.default.svc.cluster.local
14 | IP.1 = $ENV::K8S_SERVICE_IP
15 | IP.2 = $ENV::MASTER_HOST
16 | 
--------------------------------------------------------------------------------
/bin/setup:
--------------------------------------------------------------------------------
 1 | #!/bin/bash -e
 2 | 
 3 | SCRIPT=`python -c "import os,sys; print(os.path.realpath(os.path.expanduser(sys.argv[1])))" "${BASH_SOURCE:-$0}"`
 4 | DIR=$(dirname $(dirname $SCRIPT))
 5 | 
 6 | echo "Grabbing terraform modules:"
 7 | 
 8 | terraform get
 9 | 
10 | echo "Setting up configuration:"
11 | 
12 | if [ -f $DIR/config.tfvars ]; then
13 |   echo "No need to run make setup. Config file already set."
14 |   exit 1
15 | fi
16 | 
17 | cp $DIR/config.tfvars.template $DIR/config.tfvars
18 | 
19 | bash $DIR/bin/generate_discovery_url.sh
20 | 
21 | echo "config.tfvars ready - Update DO_READ_TOKEN!!!"
22 | echo "Be sure to update your secrets file."
23 | -------------------------------------------------------------------------------- /bin/worker-openssl.cnf: -------------------------------------------------------------------------------- 1 | [req] 2 | req_extensions = v3_req 3 | distinguished_name = req_distinguished_name 4 | [req_distinguished_name] 5 | [ v3_req ] 6 | basicConstraints = CA:FALSE 7 | keyUsage = nonRepudiation, digitalSignature, keyEncipherment 8 | subjectAltName = @alt_names 9 | [alt_names] 10 | IP.1 = $ENV::WORKER_IP 11 | -------------------------------------------------------------------------------- /config.tfvars.template: -------------------------------------------------------------------------------- 1 | etcd_count = "3" 2 | etcd_size = "1gb" 3 | apiserver_count = "1" 4 | apiserver_size = "1gb" 5 | lb_size = "1gb" 6 | kubelet_count = "1" 7 | kubelet_size = "1gb" 8 | region = "sfo2" 9 | image = "coreos-stable" 10 | discovery_url = "DISCOVERY_URL" 11 | vxlan_id = "1001" 12 | do_read_token = "DO_READ_TOKEN" 13 | resource_prefix = "" 14 | -------------------------------------------------------------------------------- /etcd/main.tf: -------------------------------------------------------------------------------- 1 | data template_file "etcd" { 2 | template = "${file("${path.module}/templates/etcd.tpl")}" 3 | 4 | vars { 5 | discovery_url = "${var.discovery_url}" 6 | key = "${var.do_read_token}" 7 | } 8 | } 9 | 10 | resource digitalocean_droplet "etcd" { 11 | count = "${var.count}" 12 | image = "${var.image}" 13 | region = "${var.region}" 14 | size = "${var.size}" 15 | name = "${format("%setcd-%02d-%s", var.resource_prefix, count.index + 1, var.cluster_id)}" 16 | ssh_keys = ["${split(",", var.ssh_keys)}"] 17 | tags = ["${var.cluster_tag}"] 18 | private_networking = true 19 | 20 | user_data = "${data.template_file.etcd.rendered}" 21 | } 22 | -------------------------------------------------------------------------------- /etcd/outputs.tf: 
-------------------------------------------------------------------------------- 1 | output "server_urls" { 2 | value = "${join(",", formatlist("http://%s:2379", digitalocean_droplet.etcd.*.ipv4_address_private))}" 3 | } 4 | 5 | output "public_ipv4" { 6 | value = "${join(", ", digitalocean_droplet.etcd.*.ipv4_address)}" 7 | } 8 | -------------------------------------------------------------------------------- /etcd/templates/etcd.tpl: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | 3 | coreos: 4 | etcd2: 5 | discovery: ${discovery_url} 6 | advertise-client-urls: http://$private_ipv4:2379 7 | initial-advertise-peer-urls: http://$private_ipv4:2380 8 | listen-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 9 | listen-peer-urls: http://$private_ipv4:2380 10 | units: 11 | - name: etcd2.service 12 | command: start 13 | - name: docker.service 14 | drop-ins: 15 | - name: "60-docker-config.conf" 16 | content: | 17 | [Service] 18 | Environment="DOCKER_OPTS=--storage-driver=overlay --iptables=false" 19 | - name: droplan.service 20 | enable: true 21 | content: | 22 | [Unit] 23 | Description=updates iptables with peer droplets 24 | Requires=docker.service 25 | 26 | [Service] 27 | Type=oneshot 28 | Environment=DO_KEY=${key} 29 | ExecStart=/usr/bin/docker run --rm --net=host --cap-add=NET_ADMIN -e DO_KEY tam7t/droplan:latest 30 | - name: droplan.timer 31 | enable: true 32 | command: start 33 | content: | 34 | [Unit] 35 | Description=Run droplan.service every 5 minutes 36 | 37 | [Timer] 38 | OnCalendar=*:0/5 39 | -------------------------------------------------------------------------------- /etcd/variables.tf: -------------------------------------------------------------------------------- 1 | variable count { 2 | description = "Number of etcd droplets" 3 | default = 1 4 | } 5 | 6 | variable region { 7 | description = "Region to launch in" 8 | default = "sfo1" 9 | } 10 | 11 | variable cluster_id { 12 | 
description = "A unique id for the cluster" 13 | } 14 | 15 | variable cluster_tag { 16 | description = "A unique tag for the cluster" 17 | } 18 | 19 | variable do_read_token { 20 | description = "A read-only token for configuring droplan" 21 | } 22 | 23 | variable discovery_url { 24 | description = "etcd discovery url" 25 | } 26 | 27 | variable size { 28 | description = "Size of the etcd droplet" 29 | default = "1gb" 30 | } 31 | 32 | variable ssh_keys { 33 | description = "SSH keys to use" 34 | } 35 | 36 | variable image { 37 | description = "The image to use" 38 | } 39 | 40 | variable vxlan_id { 41 | description = "The vxlan id to use for the flannel network" 42 | } 43 | 44 | variable pod_network { 45 | description = "CIDR of pod IPs" 46 | default = "10.2.0.0/16" 47 | } 48 | 49 | variable resource_prefix { 50 | description = "a prefix for each resource name" 51 | default = "" 52 | } 53 | -------------------------------------------------------------------------------- /k8s/main.tf: -------------------------------------------------------------------------------- 1 | data "template_file" "apiserver" { 2 | template = "${file("${path.module}/templates/apiserver.tpl")}" 3 | 4 | vars { 5 | etcd_servers = "${var.etcd_server_urls}" 6 | dns_service_ip = "${var.dns_service_ip}" 7 | service_ip_range = "${var.service_ip_range}" 8 | k8s_version = "v${var.kubernetes_version}" 9 | key = "${var.do_read_token}" 10 | tag = "${var.cluster_tag}" 11 | } 12 | } 13 | 14 | data "template_file" "kubelet" { 15 | template = "${file("${path.module}/templates/kubelet.tpl")}" 16 | 17 | vars { 18 | etcd_servers = "${var.etcd_server_urls}" 19 | dns_service_ip = "${var.dns_service_ip}" 20 | service_ip_range = "${var.service_ip_range}" 21 | 22 | master = "https://${digitalocean_droplet.apiserver.ipv4_address_private}:8443" 23 | k8s_version = "v${var.kubernetes_version}" 24 | apiservers = "${join(",", formatlist("https://%s.kubelocal:8443", digitalocean_droplet.apiserver.*.name))}" 25 | key = 
"${var.do_read_token}" 26 | tag = "${var.cluster_tag}" 27 | } 28 | } 29 | 30 | data "template_file" "lb" { 31 | template = "${file("${path.module}/templates/lb.tpl")}" 32 | 33 | vars { 34 | etcd_servers = "${var.etcd_server_urls}" 35 | dns_service_ip = "${var.dns_service_ip}" 36 | service_ip_range = "${var.service_ip_range}" 37 | 38 | master = "https://${digitalocean_droplet.apiserver.ipv4_address_private}:8443" 39 | k8s_version = "v${var.kubernetes_version}" 40 | apiservers = "${join(",", formatlist("https://%s.kubelocal:8443", digitalocean_droplet.apiserver.*.name))}" 41 | key = "${var.do_read_token}" 42 | tag = "${var.cluster_tag}" 43 | } 44 | } 45 | 46 | resource digitalocean_droplet "apiserver" { 47 | count = "${var.apiserver_count}" 48 | image = "${var.image}" 49 | region = "${var.region}" 50 | size = "${var.apiserver_size}" 51 | name = "${format("%sapiserver-%02d-%s", var.resource_prefix, count.index + 1, var.cluster_id)}" 52 | ssh_keys = ["${split(",", var.ssh_keys)}"] 53 | tags = ["${var.cluster_tag}"] 54 | private_networking = true 55 | 56 | user_data = "${data.template_file.apiserver.rendered}" 57 | 58 | connection = { 59 | timeout = "30s" 60 | user = "core" 61 | agent = true 62 | } 63 | 64 | provisioner "local-exec" { 65 | command = "bin/generate_apiserver_cert ${self.name} ${self.ipv4_address_private} ${var.dns_service_ip} ${var.k8s_service_ip} ${self.name}.kubelocal" 66 | } 67 | 68 | provisioner "file" { 69 | source = "${path.module}/../CA/ca.pem" 70 | destination = "/home/core/ca.pem" 71 | } 72 | 73 | provisioner "file" { 74 | source = "${path.module}/../ssl/${self.name}.pem" 75 | destination = "/home/core/apiserver.pem" 76 | } 77 | 78 | provisioner "file" { 79 | source = "${path.module}/../ssl/${self.name}-key.pem" 80 | destination = "/home/core/apiserver-key.pem" 81 | } 82 | 83 | provisioner "remote-exec" { 84 | inline = [ 85 | "sudo mkdir -p /etc/kubernetes/ssl", 86 | "sudo mv /home/core/*.pem /etc/kubernetes/ssl/", 87 | "sudo chown root:root 
/etc/kubernetes/ssl/*.pem", 88 | "sudo chmod 600 /etc/kubernetes/ssl/*.pem", 89 | "sudo sed -i \"s@#MASTERURL#@https://${self.ipv4_address_private}:8443@\" /etc/kubernetes/addons/kube-dns/kube-dns-rc.yaml" 90 | ] 91 | } 92 | } 93 | 94 | resource digitalocean_droplet "kubelet" { 95 | count = "${var.kubelet_count}" 96 | image = "${var.image}" 97 | region = "${var.region}" 98 | size = "${var.kubelet_size}" 99 | name = "${format("%skubelet-%02d-%s", var.resource_prefix, count.index + 1, var.cluster_id)}" 100 | ssh_keys = ["${split(",", var.ssh_keys)}"] 101 | tags = ["${var.cluster_tag}"] 102 | private_networking = true 103 | 104 | user_data = "${data.template_file.kubelet.rendered}" 105 | 106 | connection = { 107 | timeout = "30s" 108 | user = "core" 109 | agent = true 110 | } 111 | 112 | provisioner "local-exec" { 113 | command = "bin/generate_worker_cert ${self.name} ${self.ipv4_address_private} ${self.name}.kubelocal" 114 | } 115 | 116 | provisioner "file" { 117 | source = "${path.module}/../CA/ca.pem" 118 | destination = "/home/core/ca.pem" 119 | } 120 | 121 | provisioner "file" { 122 | source = "${path.module}/../ssl/${self.name}.pem" 123 | destination = "/home/core/worker.pem" 124 | } 125 | 126 | provisioner "file" { 127 | source = "${path.module}/../ssl/${self.name}-key.pem" 128 | destination = "/home/core/worker-key.pem" 129 | } 130 | 131 | provisioner "remote-exec" { 132 | inline = [ 133 | "sudo mkdir -p /etc/kubernetes/ssl", 134 | "sudo mv /home/core/*.pem /etc/kubernetes/ssl/", 135 | "sudo chown root:root /etc/kubernetes/ssl/*.pem", 136 | "sudo chmod 600 /etc/kubernetes/ssl/*.pem" 137 | ] 138 | } 139 | } 140 | 141 | resource digitalocean_droplet "lb" { 142 | count = "${var.lb_count}" 143 | image = "${var.image}" 144 | region = "${var.region}" 145 | size = "${var.lb_size}" 146 | name = "${format("%slb-%02d-%s", var.resource_prefix, count.index + 1, var.cluster_id)}" 147 | ssh_keys = ["${split(",", var.ssh_keys)}"] 148 | tags = ["${var.cluster_tag}"] 149 | 
private_networking = true 150 | 151 | user_data = "${data.template_file.lb.rendered}" 152 | 153 | connection = { 154 | timeout = "30s" 155 | user = "core" 156 | agent = true 157 | } 158 | 159 | provisioner "local-exec" { 160 | command = "bin/generate_worker_cert ${self.name} ${self.ipv4_address_private} ${self.name}.kubelocal" 161 | } 162 | 163 | provisioner "file" { 164 | source = "${path.module}/../CA/ca.pem" 165 | destination = "/home/core/ca.pem" 166 | } 167 | 168 | provisioner "file" { 169 | source = "${path.module}/../ssl/${self.name}.pem" 170 | destination = "/home/core/worker.pem" 171 | } 172 | 173 | provisioner "file" { 174 | source = "${path.module}/../ssl/${self.name}-key.pem" 175 | destination = "/home/core/worker-key.pem" 176 | } 177 | 178 | provisioner "remote-exec" { 179 | inline = [ 180 | "sudo mkdir -p /etc/kubernetes/ssl", 181 | "sudo mv /home/core/*.pem /etc/kubernetes/ssl/", 182 | "sudo chown root:root /etc/kubernetes/ssl/*.pem", 183 | "sudo chmod 600 /etc/kubernetes/ssl/*.pem" 184 | ] 185 | } 186 | } 187 | -------------------------------------------------------------------------------- /k8s/outputs.tf: -------------------------------------------------------------------------------- 1 | output "apiservers" { 2 | value = "${join(", ", digitalocean_droplet.apiserver.*.ipv4_address)}" 3 | } 4 | 5 | output "load-balancer" { 6 | value = "${digitalocean_droplet.lb.ipv4_address}" 7 | } 8 | 9 | output "kubelets" { 10 | value = "${join(", ", digitalocean_droplet.kubelet.*.ipv4_address)}" 11 | } 12 | -------------------------------------------------------------------------------- /k8s/templates/apiserver.tpl: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | write_files: 3 | - path: '/etc/flannel/options.env' 4 | owner: root 5 | permissions: 0644 6 | content: | 7 | FLANNELD_IFACE=$private_ipv4 8 | FLANNELD_ETCD_ENDPOINTS=${etcd_servers} 9 | - path: /var/lib/iptables/rules-save 10 | permissions: 0644 11 
| owner: 'root:root' 12 | content: | 13 | *filter 14 | :INPUT DROP [0:0] 15 | :FORWARD DROP [0:0] 16 | :OUTPUT ACCEPT [0:0] 17 | -A INPUT -i lo -j ACCEPT 18 | -A INPUT -i eth1 -j ACCEPT 19 | -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 20 | -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT 21 | -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT 22 | -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT 23 | -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT 24 | -A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT 25 | -A INPUT -p icmp -m icmp --icmp-type 11 -j ACCEPT 26 | -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 27 | COMMIT 28 | - path: '/etc/kubernetes/manifests/kube-apiserver.yaml' 29 | owner: root 30 | permissions: 0644 31 | content: | 32 | apiVersion: v1 33 | kind: Pod 34 | metadata: 35 | name: kube-apiserver 36 | namespace: kube-system 37 | spec: 38 | hostNetwork: true 39 | containers: 40 | - name: kube-apiserver 41 | image: quay.io/coreos/hyperkube:${k8s_version}_coreos.0 42 | command: 43 | - /hyperkube 44 | - apiserver 45 | - --etcd-servers=${etcd_servers} 46 | - --allow-privileged=true 47 | - --bind-address=0.0.0.0 48 | - --service-cluster-ip-range=${service_ip_range} 49 | - --secure-port=8443 50 | - --advertise-address=$private_ipv4 51 | - --admission-control=NamespaceLifecycle,ServiceAccount,LimitRanger,DefaultStorageClass,ResourceQuota 52 | - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem 53 | - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem 54 | - --client-ca-file=/etc/kubernetes/ssl/ca.pem 55 | - --runtime-config=extensions/v1beta1=true,extensions/v1beta1/networkpolicies=true 56 | ports: 57 | - containerPort: 443 58 | hostPort: 443 59 | name: https 60 | - containerPort: 8080 61 | hostPort: 8080 62 | name: local 63 | volumeMounts: 64 | - mountPath: /etc/kubernetes/ssl 65 | name: ssl-certs-kubernetes 66 | readOnly: true 67 | - mountPath: /etc/ssl/certs 68 | name: ssl-certs-host 69 | readOnly: true 70 | volumes: 71 | - 
hostPath: 72 | path: /etc/kubernetes/ssl 73 | name: ssl-certs-kubernetes 74 | - hostPath: 75 | path: /usr/share/ca-certificates 76 | name: ssl-certs-host 77 | - path: '/etc/kubernetes/manifests/kube-proxy.yaml' 78 | owner: root 79 | permissions: 0644 80 | content: | 81 | apiVersion: v1 82 | kind: Pod 83 | metadata: 84 | name: kube-proxy 85 | namespace: kube-system 86 | spec: 87 | hostNetwork: true 88 | containers: 89 | - name: kube-proxy 90 | image: quay.io/coreos/hyperkube:${k8s_version}_coreos.0 91 | command: 92 | - /hyperkube 93 | - proxy 94 | - --master=http://127.0.0.1:8080 95 | - --proxy-mode=iptables 96 | securityContext: 97 | privileged: true 98 | volumeMounts: 99 | - mountPath: /etc/ssl/certs 100 | name: ssl-certs-host 101 | readOnly: true 102 | volumes: 103 | - hostPath: 104 | path: /usr/share/ca-certificates 105 | name: ssl-certs-host 106 | - path: '/etc/kubernetes/manifests/kube-controller-manager.yaml' 107 | owner: root 108 | permissions: 0644 109 | content: | 110 | apiVersion: v1 111 | kind: Pod 112 | metadata: 113 | name: kube-controller-manager 114 | namespace: kube-system 115 | spec: 116 | hostNetwork: true 117 | containers: 118 | - name: kube-controller-manager 119 | image: quay.io/coreos/hyperkube:${k8s_version}_coreos.0 120 | command: 121 | - /hyperkube 122 | - controller-manager 123 | - --master=http://127.0.0.1:8080 124 | - --leader-elect=true 125 | - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem 126 | - --root-ca-file=/etc/kubernetes/ssl/ca.pem 127 | livenessProbe: 128 | httpGet: 129 | host: 127.0.0.1 130 | path: /healthz 131 | port: 10252 132 | initialDelaySeconds: 15 133 | timeoutSeconds: 1 134 | volumeMounts: 135 | - mountPath: /etc/kubernetes/ssl 136 | name: ssl-certs-kubernetes 137 | readOnly: true 138 | - mountPath: /etc/ssl/certs 139 | name: ssl-certs-host 140 | readOnly: true 141 | volumes: 142 | - hostPath: 143 | path: /etc/kubernetes/ssl 144 | name: ssl-certs-kubernetes 145 | - hostPath: 146 | path: 
/usr/share/ca-certificates 147 | name: ssl-certs-host 148 | - path: '/etc/kubernetes/manifests/kube-scheduler.yaml' 149 | owner: root 150 | permissions: 0644 151 | content: | 152 | apiVersion: v1 153 | kind: Pod 154 | metadata: 155 | name: kube-scheduler 156 | namespace: kube-system 157 | spec: 158 | hostNetwork: true 159 | containers: 160 | - name: kube-scheduler 161 | image: quay.io/coreos/hyperkube:${k8s_version}_coreos.0 162 | command: 163 | - /hyperkube 164 | - scheduler 165 | - --master=http://127.0.0.1:8080 166 | - --leader-elect=true 167 | livenessProbe: 168 | httpGet: 169 | host: 127.0.0.1 170 | path: /healthz 171 | port: 10251 172 | initialDelaySeconds: 15 173 | timeoutSeconds: 1 174 | - path: "/etc/kubernetes/manifests/addons.yaml" 175 | owner: root 176 | permissions: 0644 177 | content: | 178 | apiVersion: v1 179 | kind: Pod 180 | metadata: 181 | name: kube-addon-manager 182 | namespace: kube-system 183 | labels: 184 | component: kube-addon-manager 185 | version: v4 186 | spec: 187 | hostNetwork: true 188 | containers: 189 | - name: kube-addon-manager 190 | # When updating version also bump it in cluster/images/hyperkube/static-pods/addon-manager.json 191 | image: gcr.io/google-containers/kube-addon-manager:v5.1 192 | resources: 193 | requests: 194 | cpu: 5m 195 | memory: 50Mi 196 | volumeMounts: 197 | - mountPath: /etc/kubernetes/ 198 | name: addons 199 | readOnly: true 200 | volumes: 201 | - hostPath: 202 | path: /etc/kubernetes/ 203 | name: addons 204 | - path: "/etc/kubernetes/addons/kube-dns/kube-dns-svc.yaml" 205 | owner: root 206 | permissions: 0644 207 | content: | 208 | apiVersion: v1 209 | kind: Service 210 | metadata: 211 | name: kube-dns 212 | namespace: kube-system 213 | labels: 214 | k8s-app: kube-dns 215 | kubernetes.io/cluster-service: "true" 216 | kubernetes.io/name: "KubeDNS" 217 | spec: 218 | selector: 219 | k8s-app: kube-dns 220 | clusterIP: 10.3.0.10 221 | ports: 222 | - name: dns 223 | port: 53 224 | protocol: UDP 225 | - name: 
dns-tcp 226 | port: 53 227 | protocol: TCP 228 | - path: "/etc/kubernetes/addons/kube-dns/kube-dns-rc.yaml" 229 | owner: root 230 | permissions: 0644 231 | content: | 232 | apiVersion: v1 233 | kind: ReplicationController 234 | metadata: 235 | name: kube-dns-v20 236 | namespace: kube-system 237 | labels: 238 | k8s-app: kube-dns 239 | version: v20 240 | kubernetes.io/cluster-service: "true" 241 | spec: 242 | replicas: 1 243 | selector: 244 | k8s-app: kube-dns 245 | version: v20 246 | template: 247 | metadata: 248 | labels: 249 | k8s-app: kube-dns 250 | version: v20 251 | annotations: 252 | scheduler.alpha.kubernetes.io/critical-pod: '' 253 | scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' 254 | spec: 255 | containers: 256 | - name: kubedns 257 | image: gcr.io/google_containers/kubedns-amd64:1.8 258 | resources: 259 | limits: 260 | memory: 170Mi 261 | requests: 262 | cpu: 100m 263 | memory: 70Mi 264 | livenessProbe: 265 | httpGet: 266 | path: /healthz-kubedns 267 | port: 8080 268 | scheme: HTTP 269 | initialDelaySeconds: 60 270 | timeoutSeconds: 5 271 | successThreshold: 1 272 | failureThreshold: 5 273 | readinessProbe: 274 | httpGet: 275 | path: /readiness 276 | port: 8081 277 | scheme: HTTP 278 | initialDelaySeconds: 3 279 | timeoutSeconds: 5 280 | args: 281 | - --domain=cluster.local. 
282 | - --dns-port=10053 283 | - --kube-master-url=#MASTERURL# 284 | - --kubecfg-file=/etc/kubernetes/worker-kubeconfig.yaml 285 | ports: 286 | - containerPort: 10053 287 | name: dns-local 288 | protocol: UDP 289 | - containerPort: 10053 290 | name: dns-tcp-local 291 | protocol: TCP 292 | volumeMounts: 293 | - mountPath: /etc/kubernetes/ssl 294 | name: ssl-certs-kubernetes 295 | readOnly: true 296 | - mountPath: /etc/ssl/certs 297 | name: ssl-certs-host 298 | readOnly: true 299 | - mountPath: /etc/kubernetes/worker-kubeconfig.yaml 300 | name: "kubeconfig" 301 | readOnly: true 302 | - name: dnsmasq 303 | image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4 304 | livenessProbe: 305 | httpGet: 306 | path: /healthz-dnsmasq 307 | port: 8080 308 | scheme: HTTP 309 | initialDelaySeconds: 60 310 | timeoutSeconds: 5 311 | successThreshold: 1 312 | failureThreshold: 5 313 | args: 314 | - --cache-size=1000 315 | - --no-resolv 316 | - --server=127.0.0.1#10053 317 | - --log-facility=- 318 | ports: 319 | - containerPort: 53 320 | name: dns 321 | protocol: UDP 322 | - containerPort: 53 323 | name: dns-tcp 324 | protocol: TCP 325 | - name: healthz 326 | image: gcr.io/google_containers/exechealthz-amd64:1.2 327 | resources: 328 | limits: 329 | memory: 50Mi 330 | requests: 331 | cpu: 10m 332 | memory: 50Mi 333 | args: 334 | - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null 335 | - --url=/healthz-dnsmasq 336 | - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null 337 | - --url=/healthz-kubedns 338 | - --port=8080 339 | - --quiet 340 | ports: 341 | - containerPort: 8080 342 | protocol: TCP 343 | dnsPolicy: Default 344 | volumes: 345 | - hostPath: 346 | path: /etc/kubernetes/ssl 347 | name: ssl-certs-kubernetes 348 | - hostPath: 349 | path: /usr/share/ca-certificates 350 | name: ssl-certs-host 351 | - hostPath: 352 | path: "/etc/kubernetes/worker-kubeconfig.yaml" 353 | name: "kubeconfig" 354 | 355 | ####################### 356 | 
coreos: 357 | flannel: 358 | etcd_endpoints: ${etcd_servers} 359 | units: 360 | - name: iptables-restore.service 361 | enable: true 362 | command: start 363 | - name: install-kubectl.service 364 | command: start 365 | content: | 366 | [Unit] 367 | After=network-online.target 368 | Description=Installs kubectl Binary 369 | Requires=network-online.target 370 | 371 | [Service] 372 | Type=oneshot 373 | ExecStartPre=/bin/mkdir -p /opt/bin 374 | ExecStart=/usr/bin/curl -sL -o /opt/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${k8s_version}/bin/linux/amd64/kubectl 375 | ExecStartPost=/usr/bin/chmod a+x /opt/bin/kubectl 376 | RemainAfterExit=yes 377 | - name: drophosts.service 378 | enable: true 379 | command: start 380 | content: | 381 | [Unit] 382 | Description=updates hosts with peer droplets 383 | Requires=docker.service 384 | 385 | [Service] 386 | Type=oneshot 387 | Environment=DO_KEY=${key} 388 | Environment=DO_TAG=${tag} 389 | ExecStartPre=-/usr/bin/docker pull qmxme/drophosts:latest 390 | ExecStart=/usr/bin/docker run --rm --privileged -e DO_KEY -e DO_TAG -v /etc/hosts:/etc/hosts qmxme/drophosts:latest 391 | - name: drophosts.timer 392 | enable: true 393 | command: start 394 | content: | 395 | [Unit] 396 | Description=Run drophosts.service every 2 minutes 397 | 398 | [Timer] 399 | OnCalendar=*:0/2 400 | - name: "flanneld.service" 401 | drop-ins: 402 | - name: "40-ExecStartPre-symlink.conf" 403 | content: | 404 | [Service] 405 | ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env 406 | command: start 407 | - name: "docker.service" 408 | drop-ins: 409 | - name: "50-require-flannel.conf" 410 | content: | 411 | [Unit] 412 | Requires=flanneld.service 413 | After=flanneld.service 414 | - name: "60-docker-config.conf" 415 | content: | 416 | [Service] 417 | Environment="DOCKER_OPTS=--storage-driver=overlay --iptables=false" 418 | command: start 419 | - name: "kubelet.service" 420 | enable: true 421 | command: start 422 |
content: | 423 | [Service] 424 | ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests 425 | 426 | Environment=KUBELET_VERSION=${k8s_version}_coreos.0 427 | ExecStart=/usr/lib/coreos/kubelet-wrapper \ 428 | --api-servers=http://127.0.0.1:8080 \ 429 | --network-plugin-dir=/etc/kubernetes/cni/net.d \ 430 | --register-schedulable=false \ 431 | --allow-privileged=true \ 432 | --config=/etc/kubernetes/manifests \ 433 | --hostname-override=$private_ipv4 \ 434 | --cluster-dns=${dns_service_ip} \ 435 | --cluster-domain=cluster.local 436 | Restart=always 437 | RestartSec=10 438 | [Install] 439 | WantedBy=multi-user.target 440 | - name: droplan.service 441 | enable: true 442 | command: start 443 | content: | 444 | [Unit] 445 | Description=updates iptables with peer droplets 446 | Requires=docker.service 447 | 448 | [Service] 449 | Type=oneshot 450 | Environment=DO_KEY=${key} 451 | ExecStart=/usr/bin/docker run --rm --net=host --cap-add=NET_ADMIN -e DO_KEY tam7t/droplan:latest 452 | - name: droplan.timer 453 | enable: true 454 | command: start 455 | content: | 456 | [Unit] 457 | Description=Run droplan.service every 5 minutes 458 | 459 | [Timer] 460 | OnCalendar=*:0/5 461 | -------------------------------------------------------------------------------- /k8s/templates/kubelet.tpl: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | write_files: 3 | - path: /var/lib/iptables/rules-save 4 | permissions: 0644 5 | owner: 'root:root' 6 | content: | 7 | *filter 8 | :INPUT DROP [0:0] 9 | :FORWARD DROP [0:0] 10 | :OUTPUT ACCEPT [0:0] 11 | -A INPUT -i lo -j ACCEPT 12 | -A INPUT -i eth1 -j ACCEPT 13 | -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 14 | -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT 15 | -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT 16 | -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT 17 | -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT 18 | -A INPUT -p icmp -m icmp --icmp-type 3 -j ACCEPT 19 | -A INPUT -p 
icmp -m icmp --icmp-type 11 -j ACCEPT 20 | -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 21 | COMMIT 22 | - path: '/etc/flannel/options.env' 23 | owner: root 24 | permissions: 0644 25 | content: | 26 | FLANNELD_IFACE=$private_ipv4 27 | FLANNELD_ETCD_ENDPOINTS=${etcd_servers} 28 | - path: '/etc/kubernetes/manifests/kube-proxy.yaml' 29 | owner: root 30 | permissions: 0644 31 | content: | 32 | apiVersion: v1 33 | kind: Pod 34 | metadata: 35 | name: kube-proxy 36 | namespace: kube-system 37 | spec: 38 | hostNetwork: true 39 | containers: 40 | - name: kube-proxy 41 | image: quay.io/coreos/hyperkube:${k8s_version}_coreos.0 42 | command: 43 | - /hyperkube 44 | - proxy 45 | - --master=${master} 46 | - --proxy-mode=iptables 47 | - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml 48 | securityContext: 49 | privileged: true 50 | volumeMounts: 51 | - mountPath: /etc/ssl/certs 52 | name: "ssl-certs" 53 | - mountPath: /etc/kubernetes/ssl 54 | name: "etc-kube-ssl" 55 | readOnly: true 56 | - mountPath: /etc/kubernetes/worker-kubeconfig.yaml 57 | name: "kubeconfig" 58 | readOnly: true 59 | volumes: 60 | - name: "ssl-certs" 61 | hostPath: 62 | path: "/usr/share/ca-certificates" 63 | - name: "etc-kube-ssl" 64 | hostPath: 65 | path: "/etc/kubernetes/ssl" 66 | - name: "kubeconfig" 67 | hostPath: 68 | path: "/etc/kubernetes/worker-kubeconfig.yaml" 69 | - path: '/etc/kubernetes/worker-kubeconfig.yaml' 70 | owner: root 71 | permissions: 0644 72 | content: | 73 | apiVersion: v1 74 | kind: Config 75 | clusters: 76 | - name: local 77 | cluster: 78 | certificate-authority: /etc/kubernetes/ssl/ca.pem 79 | users: 80 | - name: kubelet 81 | user: 82 | client-certificate: /etc/kubernetes/ssl/worker.pem 83 | client-key: /etc/kubernetes/ssl/worker-key.pem 84 | contexts: 85 | - context: 86 | cluster: local 87 | user: kubelet 88 | name: kubelet-context 89 | current-context: kubelet-context 90 | 91 | ####################### 92 | coreos: 93 | flannel: 94 | etcd_endpoints: 
${etcd_servers} 95 | units: 96 | - name: iptables-restore.service 97 | enable: true 98 | command: start 99 | - name: drophosts.service 100 | enable: true 101 | command: start 102 | content: | 103 | [Unit] 104 | Description=updates hosts with peer droplets 105 | Requires=docker.service 106 | 107 | [Service] 108 | Type=oneshot 109 | Environment=DO_KEY=${key} 110 | Environment=DO_TAG=${tag} 111 | ExecStartPre=-/usr/bin/docker pull qmxme/drophosts:latest 112 | ExecStart=/usr/bin/docker run --rm --privileged -e DO_KEY -e DO_TAG -v /etc/hosts:/etc/hosts qmxme/drophosts:latest 113 | - name: drophosts.timer 114 | enable: true 115 | command: start 116 | content: | 117 | [Unit] 118 | Description=Run drophosts.service every 2 minutes 119 | 120 | [Timer] 121 | OnCalendar=*:0/2 122 | - name: "flanneld.service" 123 | drop-ins: 124 | - name: "40-ExecStartPre-symlink.conf" 125 | content: | 126 | [Service] 127 | ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env 128 | command: start 129 | - name: "docker.service" 130 | drop-ins: 131 | - name: "50-require-flannel.conf" 132 | content: | 133 | [Unit] 134 | Requires=flanneld.service 135 | After=flanneld.service 136 | - name: "60-docker-config.conf" 137 | content: | 138 | [Service] 139 | Environment="DOCKER_OPTS=--storage-driver=overlay" 140 | command: start 141 | - name: "kubelet.service" 142 | command: start 143 | content: | 144 | [Service] 145 | ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests 146 | 147 | Environment=KUBELET_VERSION=${k8s_version}_coreos.0 148 | Environment="RKT_OPTS=--volume=resolv,kind=host,source=/etc/hosts --mount volume=resolv,target=/etc/hosts" 149 | ExecStart=/usr/lib/coreos/kubelet-wrapper \ 150 | --api-servers=${apiservers} \ 151 | --network-plugin-dir=/etc/kubernetes/cni/net.d \ 152 | --register-node=true \ 153 | --allow-privileged=true \ 154 | --config=/etc/kubernetes/manifests \ 155 | --hostname-override=$private_ipv4 \ 156 | --cluster-dns=${dns_service_ip} \ 157 |
--cluster-domain=cluster.local \ 158 | --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \ 159 | --tls-cert-file=/etc/kubernetes/ssl/worker.pem \ 160 | --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem 161 | Restart=always 162 | RestartSec=10 163 | [Install] 164 | WantedBy=multi-user.target 165 | - name: droplan.service 166 | enable: true 167 | content: | 168 | [Unit] 169 | Description=updates iptables with peer droplets 170 | Requires=docker.service 171 | 172 | [Service] 173 | Type=oneshot 174 | Environment=DO_KEY=${key} 175 | ExecStart=/usr/bin/docker run --rm --net=host --cap-add=NET_ADMIN -e DO_KEY tam7t/droplan:latest 176 | - name: droplan.timer 177 | enable: true 178 | command: start 179 | content: | 180 | [Unit] 181 | Description=Run droplan.service every 5 minutes 182 | 183 | [Timer] 184 | OnCalendar=*:0/5 185 | -------------------------------------------------------------------------------- /k8s/templates/lb.tpl: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | write_files: 3 | - path: '/etc/flannel/options.env' 4 | owner: root 5 | permissions: 0644 6 | content: | 7 | FLANNELD_IFACE=$private_ipv4 8 | FLANNELD_ETCD_ENDPOINTS=${etcd_servers} 9 | - path: '/etc/kubernetes/manifests/kube-proxy.yaml' 10 | owner: root 11 | permissions: 0644 12 | content: | 13 | apiVersion: v1 14 | kind: Pod 15 | metadata: 16 | name: kube-proxy 17 | namespace: kube-system 18 | spec: 19 | hostNetwork: true 20 | containers: 21 | - name: kube-proxy 22 | image: quay.io/coreos/hyperkube:${k8s_version}_coreos.0 23 | command: 24 | - /hyperkube 25 | - proxy 26 | - --master=${master} 27 | - --proxy-mode=iptables 28 | - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml 29 | securityContext: 30 | privileged: true 31 | volumeMounts: 32 | - mountPath: /etc/ssl/certs 33 | name: "ssl-certs" 34 | - mountPath: /etc/kubernetes/ssl 35 | name: "etc-kube-ssl" 36 | readOnly: true 37 | - mountPath: /etc/kubernetes/worker-kubeconfig.yaml 
38 | name: "kubeconfig" 39 | readOnly: true 40 | volumes: 41 | - name: "ssl-certs" 42 | hostPath: 43 | path: "/usr/share/ca-certificates" 44 | - name: "etc-kube-ssl" 45 | hostPath: 46 | path: "/etc/kubernetes/ssl" 47 | - name: "kubeconfig" 48 | hostPath: 49 | path: "/etc/kubernetes/worker-kubeconfig.yaml" 50 | - path: '/etc/kubernetes/worker-kubeconfig.yaml' 51 | owner: root 52 | permissions: 0644 53 | content: | 54 | apiVersion: v1 55 | kind: Config 56 | clusters: 57 | - name: local 58 | cluster: 59 | certificate-authority: /etc/kubernetes/ssl/ca.pem 60 | users: 61 | - name: kubelet 62 | user: 63 | client-certificate: /etc/kubernetes/ssl/worker.pem 64 | client-key: /etc/kubernetes/ssl/worker-key.pem 65 | contexts: 66 | - context: 67 | cluster: local 68 | user: kubelet 69 | name: kubelet-context 70 | current-context: kubelet-context 71 | 72 | ####################### 73 | coreos: 74 | flannel: 75 | etcd_endpoints: ${etcd_servers} 76 | units: 77 | - name: drophosts.service 78 | enable: true 79 | command: start 80 | content: | 81 | [Unit] 82 | Description=updates hosts with peer droplets 83 | Requires=docker.service 84 | 85 | [Service] 86 | Type=oneshot 87 | Environment=DO_KEY=${key} 88 | Environment=DO_TAG=${tag} 89 | ExecStartPre=-/usr/bin/docker pull qmxme/drophosts:latest 90 | ExecStart=/usr/bin/docker run --rm --privileged -e DO_KEY -e DO_TAG -v /etc/hosts:/etc/hosts qmxme/drophosts:latest 91 | - name: drophosts.timer 92 | enable: true 93 | command: start 94 | content: | 95 | [Unit] 96 | Description=Run drophosts.service every 2 minutes 97 | 98 | [Timer] 99 | OnCalendar=*:0/2 100 | - name: "flanneld.service" 101 | drop-ins: 102 | - name: "40-ExecStartPre-symlink.conf" 103 | content: | 104 | [Service] 105 | ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env 106 | command: start 107 | - name: "docker.service" 108 | drop-ins: 109 | - name: "50-require-flannel.conf" 110 | content: | 111 | [Unit] 112 | Requires=flanneld.service 113 |
After=flanneld.service 114 | - name: "60-docker-config.conf" 115 | content: | 116 | [Service] 117 | Environment="DOCKER_OPTS=--storage-driver=overlay" 118 | command: start 119 | - name: "kubelet.service" 120 | command: start 121 | content: | 122 | [Service] 123 | ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests 124 | 125 | Environment=KUBELET_VERSION=${k8s_version}_coreos.0 126 | Environment="RKT_OPTS=--volume=resolv,kind=host,source=/etc/hosts --mount volume=resolv,target=/etc/hosts" 127 | ExecStart=/usr/lib/coreos/kubelet-wrapper \ 128 | --api-servers=${apiservers} \ 129 | --network-plugin-dir=/etc/kubernetes/cni/net.d \ 130 | --register-node=true \ 131 | --allow-privileged=true \ 132 | --config=/etc/kubernetes/manifests \ 133 | --hostname-override=$private_ipv4 \ 134 | --cluster-dns=${dns_service_ip} \ 135 | --cluster-domain=cluster.local \ 136 | --node-labels="role=edge-router" \ 137 | --register-schedulable=true \ 138 | --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \ 139 | --tls-cert-file=/etc/kubernetes/ssl/worker.pem \ 140 | --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem 141 | Restart=always 142 | RestartSec=10 143 | [Install] 144 | WantedBy=multi-user.target 145 | - name: droplan.service 146 | enable: true 147 | content: | 148 | [Unit] 149 | Description=updates iptables with peer droplets 150 | Requires=docker.service 151 | 152 | [Service] 153 | Type=oneshot 154 | Environment=DO_KEY=${key} 155 | ExecStart=/usr/bin/docker run --rm --net=host --cap-add=NET_ADMIN -e DO_KEY tam7t/droplan:latest 156 | - name: droplan.timer 157 | enable: true 158 | command: start 159 | content: | 160 | [Unit] 161 | Description=Run droplan.service every 5 minutes 162 | 163 | [Timer] 164 | OnCalendar=*:0/5 165 | -------------------------------------------------------------------------------- /k8s/variables.tf: -------------------------------------------------------------------------------- 1 | variable region { 2 | description = "Region to launch in" 3 | 
default = "sfo1" 4 | } 5 | 6 | variable ssh_keys { 7 | description = "SSH keys to use" 8 | } 9 | 10 | variable cluster_id { 11 | description = "A unique id for the cluster" 12 | } 13 | 14 | variable cluster_tag { 15 | description = "A unique tag for the cluster" 16 | } 17 | 18 | variable image { 19 | description = "Name of the image to use" 20 | } 21 | 22 | variable do_read_token { 23 | description = "Read-only DO token" 24 | } 25 | 26 | variable etcd_server_urls { 27 | description = "Comma-separated list of etcd urls" 28 | } 29 | 30 | variable kubelet_count { 31 | description = "Number of kubelets to use" 32 | default = 1 33 | } 34 | 35 | variable kubernetes_version { 36 | description = "Version of Kubernetes to install" 37 | default = "1.4.5" 38 | } 39 | 40 | variable kubelet_size { 41 | description = "Size of the kubelet server" 42 | default = "1gb" 43 | } 44 | 45 | variable apiserver_size { 46 | description = "Size of the apiserver" 47 | default = "1gb" 48 | } 49 | 50 | variable apiserver_count { 51 | description = "Number of apiservers" 52 | default = 1 53 | } 54 | 55 | variable service_ip_range { 56 | description = "CIDR for service IPs" 57 | default = "10.3.0.0/16" 58 | } 59 | 60 | variable k8s_service_ip { 61 | description = "VIP address of the API service" 62 | default = "10.3.0.1" 63 | } 64 | 65 | variable dns_service_ip { 66 | description = "DNS service VIP" 67 | default = "10.3.0.10" 68 | } 69 | 70 | # Load balancer 71 | variable lb_count { 72 | description = "Number of load balancers for the apiservers" 73 | default = 1 74 | } 75 | 76 | variable lb_size { 77 | description = "Size of the lb droplet" 78 | default = "512mb" 79 | } 80 | 81 | variable resource_prefix { 82 | description = "a prefix for each resource name" 83 | default = "" 84 | } 85 | -------------------------------------------------------------------------------- /main.tf: -------------------------------------------------------------------------------- 1 | resource random_id "cluster_id" { 
2 | byte_length = 16 3 | } 4 | 5 | resource digitalocean_tag "cluster_tag" { 6 | name = "${format("k8s_cluster:%s", random_id.cluster_id.hex)}" 7 | } 8 | 9 | module "etcd" { 10 | source = "./etcd" 11 | size = "${var.etcd_size}" 12 | image = "${var.image}" 13 | region = "${var.region}" 14 | ssh_keys = "${var.ssh_keys}" 15 | count = "${var.etcd_count}" 16 | discovery_url = "${var.discovery_url}" 17 | pod_network = "${var.pod_network}" 18 | vxlan_id = "${var.vxlan_id}" 19 | do_read_token = "${var.do_read_token}" 20 | resource_prefix = "${var.resource_prefix}" 21 | cluster_id = "${random_id.cluster_id.hex}" 22 | cluster_tag = "${digitalocean_tag.cluster_tag.id}" 23 | } 24 | 25 | module "k8s" { 26 | source = "./k8s" 27 | image = "${var.image}" 28 | region = "${var.region}" 29 | ssh_keys = "${var.ssh_keys}" 30 | do_read_token = "${var.do_read_token}" 31 | cluster_id = "${random_id.cluster_id.hex}" 32 | cluster_tag = "${digitalocean_tag.cluster_tag.id}" 33 | 34 | # K8s specific 35 | kubernetes_version = "${var.kubernetes_version}" 36 | apiserver_count = "${var.apiserver_count}" 37 | apiserver_size = "${var.apiserver_size}" 38 | kubelet_count = "${var.kubelet_count}" 39 | kubelet_size = "${var.kubelet_size}" 40 | service_ip_range = "${var.service_ip_range}" 41 | k8s_service_ip = "${var.k8s_service_ip}" 42 | dns_service_ip = "${var.dns_service_ip}" 43 | resource_prefix = "${var.resource_prefix}" 44 | etcd_server_urls = "${module.etcd.server_urls}" 45 | 46 | # Load balancer 47 | lb_count = "${var.lb_count}" 48 | lb_size = "${var.lb_size}" 49 | } 50 | 51 | resource null_resource "flannel" { 52 | triggers = { 53 | etcd_servers = "${module.etcd.server_urls}" 54 | } 55 | 56 | connection = { 57 | host = "${element(split(",", module.etcd.public_ipv4), 0)}" 58 | timeout = "30s" 59 | user = "core" 60 | agent = true 61 | } 62 | 63 | provisioner "remote-exec" { 64 | inline = [ 65 | "curl -X PUT ${element(split(",", module.etcd.server_urls), 1)}/v2/keys/coreos.com/network/config -d 
value='{\"Network\": \"${var.pod_network}\", \"Backend\": {\"Type\": \"vxlan\", \"VNI\": ${var.vxlan_id}}}'", 66 | ] 67 | } 68 | } 69 | 70 | resource null_resource "ca" { 71 | provisioner "local-exec" { 72 | command = "bin/create_CA" 73 | } 74 | } 75 | -------------------------------------------------------------------------------- /outputs.tf: -------------------------------------------------------------------------------- 1 | output "etcd_servers" { 2 | value = "${module.etcd.public_ipv4}" 3 | } 4 | 5 | output "apiservers" { 6 | value = "${module.k8s.apiservers}" 7 | } 8 | 9 | output "kubelets" { 10 | value = "${module.k8s.kubelets}" 11 | } 12 | 13 | output "load-balancer" { 14 | value = "${module.k8s.load-balancer}" 15 | } 16 | 17 | output "cluster-tag" { 18 | value = "${digitalocean_tag.cluster_tag.id}" 19 | } 20 | -------------------------------------------------------------------------------- /ssl/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/protochron/k8s-coreos-digitalocean/5aa957ba7400c20d5914a58a6391c36ca2084375/ssl/.gitkeep -------------------------------------------------------------------------------- /variables.tf: -------------------------------------------------------------------------------- 1 | # Common 2 | variable region { 3 | default = "sfo1" 4 | description = "Region to launch in" 5 | } 6 | 7 | variable ssh_keys { 8 | description = "SSH keys to use" 9 | } 10 | 11 | variable image { 12 | description = "Name of the image to use" 13 | } 14 | 15 | variable do_read_token { 16 | description = "Read-only token for the DO api" 17 | } 18 | 19 | # Etcd 20 | variable etcd_count { 21 | description = "Number of etcd droplets" 22 | default = 1 23 | } 24 | 25 | variable discovery_url { 26 | description = "etcd discovery url" 27 | } 28 | 29 | variable etcd_size { 30 | description = "Size of the etcd droplet" 31 | default = "1gb" 32 | } 33 | 34 | # K8s 35 | variable kubelet_count { 
36 | description = "Number of kubelets to use" 37 | default = 1 38 | } 39 | 40 | variable kubelet_size { 41 | description = "Size of the kubelet server" 42 | default = "1gb" 43 | } 44 | 45 | variable kubernetes_version { 46 | description = "Version of Kubernetes to install" 47 | default = "1.4.5" 48 | } 49 | 50 | variable apiserver_size { 51 | description = "Size of the apiserver" 52 | default = "1gb" 53 | } 54 | 55 | variable apiserver_count { 56 | description = "Number of apiservers" 57 | default = 1 58 | } 59 | 60 | variable pod_network { 61 | description = "CIDR of pod IPs" 62 | default = "10.2.0.0/16" 63 | } 64 | 65 | variable service_ip_range { 66 | description = "CIDR for service IPs" 67 | default = "10.3.0.0/16" 68 | } 69 | 70 | variable k8s_service_ip { 71 | description = "VIP address of the API service" 72 | default = "10.3.0.1" 73 | } 74 | 75 | variable dns_service_ip { 76 | description = "DNS service VIP" 77 | default = "10.3.0.10" 78 | } 79 | 80 | variable vxlan_id { 81 | description = "Vxlan id of the flannel network" 82 | } 83 | 84 | # Load balancer 85 | variable lb_count { 86 | description = "Number of load balancers for the apiservers" 87 | default = 1 88 | } 89 | 90 | variable lb_size { 91 | description = "Size of the lb droplet" 92 | } 93 | 94 | variable resource_prefix { 95 | description = "a prefix for each resource name" 96 | default = "" 97 | } 98 | --------------------------------------------------------------------------------