├── .gitignore
├── worker-openssl.cnf
├── master-openssl.cnf
├── configure-kubectl.sh
├── certonly-tpl.yaml
└── README.md

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
inventory/*
ssl/*

--------------------------------------------------------------------------------
/worker-openssl.cnf:
--------------------------------------------------------------------------------
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP

--------------------------------------------------------------------------------
/master-openssl.cnf:
--------------------------------------------------------------------------------
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.3.0.1
IP.2 = $ENV::IP

--------------------------------------------------------------------------------
/configure-kubectl.sh:
--------------------------------------------------------------------------------
#!/bin/bash

MASTER_HOST=$1
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

CA_CERT="${DIR}/ssl/ca.pem"
ADMIN_KEY="${DIR}/ssl/admin-key.pem"
ADMIN_CERT="${DIR}/ssl/admin.pem"
echo $CA_CERT

kubectl config set-cluster default-cluster --server=https://${MASTER_HOST} --certificate-authority=${CA_CERT}
kubectl config set-credentials default-admin \
    --client-key=${ADMIN_KEY} --client-certificate=${ADMIN_CERT}
kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system

--------------------------------------------------------------------------------
/certonly-tpl.yaml:
--------------------------------------------------------------------------------
#cloud-config

hostname: %HOST%
# include one or more SSH public keys
ssh_authorized_keys:

write_files:
  - path: /etc/kubernetes/install.sh
    owner: "root"
    permissions: 0700
    content: |
      %INSTALL_SCRIPT%
  - path: /etc/kubernetes/ssl/ca.pem
    owner: "root"
    permissions: 0600
    content: |
      %CA_PEM%
  - path: /etc/kubernetes/ssl/%NODETYPE%.pem
    owner: "root"
    permissions: 0600
    content: |
      %NODE_PEM%
  - path: /etc/kubernetes/ssl/%NODETYPE%-key.pem
    owner: "root"
    permissions: 0600
    content: |
      %NODE_KEY_PEM%
  - path: /etc/kubernetes/cni/docker_opts_cni.env
coreos:
  etcd2:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
    # specify the initial size of your cluster with ?size=X
    #discovery: https://discovery.etcd.io/0997f1b9b4886974cf21d1b3193ede5c
    advertise-client-urls: http://%IP%:2379
    #initial-advertise-peer-urls: http://10.10.10.20:2380
    # listen on both the official ports and the legacy ports
    # legacy ports can be omitted if your application doesn't depend on them
    listen-client-urls: http://0.0.0.0:2379
    #listen-peer-urls: http://10.10.10.20:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: kubeinstall.service
      command: start
      content: |
        [Unit]
        Description=K8S installer
        After=etcd2.service
        Requires=etcd2.service

        [Service]
        Type=oneshot
        Environment=ADVERTISE_IP=%ADVERTISE_IP%
        ExecStart=/etc/kubernetes/install.sh

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Deploy a Kubernetes cluster on CoreOS using cloud-config

This repository contains a collection of scripts to generate cloud-config ISO images, which set up CoreOS machines to run Kubernetes.

In short, these few files do the following:

* `build-image.sh`: generates an ISO image from a given inventory folder, which CoreOS machines automatically detect as a cloud-config drive
* `build-cloud-config.sh`: generates all certificates for the controller, the workers and the `kubectl` CLI, and scaffolds a ready-to-use cloud config, which installs either the controller or the worker configuration on CoreOS
* `configure-kubectl.sh`: configures the local `kubectl` to talk to the generated cluster


The first step is to take a look into the `certonly-tpl.yaml` file and add one or more SSH public keys, to be able to access the machines. This step must be done before the inventory is generated.

Then, assuming the following machines are ready:

* controller, 10.10.10.1
* worker 1, 10.10.10.2
* worker 2, 10.10.10.3
* edge router, 123.234.234.123


with CoreOS installed, the scripts can be used to generate cloud configs like this:

```
$ ./build-cloud-config.sh controller 10.10.10.1
...
$ ./build-cloud-config.sh worker1 10.10.10.2 10.10.10.1
...
$ ./build-cloud-config.sh worker2 10.10.10.3 10.10.10.1
...
$ ./build-cloud-config.sh example.com 123.234.234.123 10.10.10.1
...
```

**attention**: The machine with a public IP needs access to the 10.10.10.x network to join the master and reach the other nodes! Using a router like pfSense can solve this.
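For reference, the per-node certificates are produced from the OpenSSL configs shown above. The exact invocation lives in `build-cloud-config.sh`; the following is only a minimal sketch of the idea, where key sizes, validity and subject names are assumptions, and the CA keypair is normally created once by the script:

```shell
# Hedged sketch of the certificate generation; key sizes, -days and
# the CN values are assumptions, not the script's exact flags.
mkdir -p ssl

# recreate the worker config if missing (content identical to worker-openssl.cnf above)
[ -f worker-openssl.cnf ] || cat > worker-openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = $ENV::WORKER_IP
EOF

# CA keypair (created once for the whole cluster)
openssl genrsa -out ssl/ca-key.pem 2048
openssl req -x509 -new -nodes -key ssl/ca-key.pem -days 365 \
  -out ssl/ca.pem -subj "/CN=kube-ca"

# worker keypair; worker-openssl.cnf pulls the SAN from $WORKER_IP
export WORKER_IP=10.10.10.2
openssl genrsa -out ssl/worker-key.pem 2048
openssl req -new -key ssl/worker-key.pem -out ssl/worker.csr \
  -subj "/CN=worker" -config worker-openssl.cnf
openssl x509 -req -in ssl/worker.csr -CA ssl/ca.pem -CAkey ssl/ca-key.pem \
  -CAcreateserial -out ssl/worker.pem -days 365 \
  -extensions v3_req -extfile worker-openssl.cnf
```

The master certificate works the same way with `master-openssl.cnf`, which additionally lists the in-cluster DNS names and the service IP 10.3.0.1 as SANs.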

The script will generate:

* an `ssl` folder, containing:
  * the TLS certificate authority keypair, which is needed to create and verify the other TLS certs for this Kubernetes cluster
  * the admin keypair, which we use for `kubectl`
* an inventory for each node

```
tree inventory
inventory
├── node-controller
│   ├── cloud-config
│   │   └── openstack
│   │       └── latest
│   │           └── user_data
│   ├── config.iso
│   ├── install.sh
│   └── ssl
│       ├── apiserver.csr
│       ├── apiserver-key.pem
│       └── apiserver.pem
├── node-example.com
│   ├── cloud-config
│   │   └── openstack
│   │       └── latest
│   │           └── user_data
│   ├── config.iso
│   ├── install.sh
│   └── ssl
│       ├── worker.csr
│       ├── worker-key.pem
│       └── worker.pem
├── node-worker1
│   ├── cloud-config
│   │   └── openstack
│   │       └── latest
│   │           └── user_data
│   ├── config.iso
│   ├── install.sh
│   └── ssl
│       ├── worker.csr
│       ├── worker-key.pem
│       └── worker.pem
└── node-worker2
    ├── cloud-config
    │   └── openstack
    │       └── latest
    │           └── user_data
    ├── config.iso
    ├── install.sh
    └── ssl
        ├── worker.csr
        ├── worker-key.pem
        └── worker.pem
```

**attention**: the etcd2 setup provided with the script is simple and functional, but not suited for production; it should be reconfigured to use an external etcd2 cluster.

After changes, new config images can be generated using the `build-image.sh` tool, like this:

```
$ ./build-image.sh inventory/node-controller
```


It is also possible to use multiple controller machines, which then have to be load-balanced behind a single DNS hostname.
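Each node's `user_data` is scaffolded from `certonly-tpl.yaml` by replacing the `%...%` placeholders. How `build-cloud-config.sh` does this exactly is not shown here; a hedged sketch of the idea using `sed` (the multi-line PEM blocks additionally need to be indented before splicing):

```shell
# Hypothetical sketch of the placeholder substitution performed by
# build-cloud-config.sh; a stand-in template is used here for illustration.
cat > tpl.yaml <<'EOF'
hostname: %HOST%
advertise-client-urls: http://%IP%:2379
EOF

HOST=worker1
IP=10.10.10.2
# replace every scalar placeholder with the node's values
sed -e "s/%HOST%/${HOST}/g" -e "s/%IP%/${IP}/g" tpl.yaml > user_data
```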

## More resources

Read my [blog article about deploying kubernetes](http://stytex.de/blog/2017/01/25/deploy-kubernetes-to-bare-metal-with-nginx/)
--------------------------------------------------------------------------------