├── .gitignore ├── .travis.yml ├── LICENSE ├── Makefile ├── README.md ├── Vagrantfile ├── azure └── setup.sh ├── ca ├── admin-csr.json ├── config.json ├── csr.json ├── instance-csr.json ├── kube-proxy-csr.json └── kubernetes-csr.json ├── etc ├── cilium.yaml ├── cluster-role-binding-kube-apiserver-to-kubelet.yaml ├── cluster-role-binding-restricted.yaml ├── cluster-role-kube-apiserver-to-kubelet.yaml ├── cluster-role-restricted.yaml ├── cni │ └── net.d │ │ ├── 10-bridge.conf │ │ └── 99-loopback.conf ├── encryption-config.yaml ├── kube-dns.yaml ├── pod-nonewprivs.yaml ├── pod-security-policy-permissive.yaml ├── pod-security-policy-restricted.yaml └── systemd │ └── system │ ├── etcd.service │ ├── kube-apiserver.service │ ├── kube-controller-manager.service │ ├── kube-proxy.service │ ├── kube-scheduler.service │ └── kubelet.service ├── gcloud └── setup.sh ├── scripts ├── generate_certificates.sh ├── generate_configuration_files.sh ├── generate_encryption_config.sh ├── install_etcd.sh ├── install_kubernetes_controller.sh ├── install_kubernetes_worker.sh └── provision.sh ├── test.sh └── vagrant ├── _configure.sh └── setup.sh /.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant 2 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | --- 2 | sudo: required 3 | notifications: 4 | email: true 5 | services: 6 | - docker 7 | script: 8 | - make test 9 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Jess Frazelle 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: all test shellcheck 2 | 3 | test: shellcheck 4 | 5 | # if this session isn't interactive, then we don't want to allocate a 6 | # TTY, which would fail, but if it is interactive, we do want to attach 7 | # so that the user can send e.g. ^C through. 
8 | INTERACTIVE := $(shell [ -t 0 ] && echo 1 || echo 0)
9 | ifeq ($(INTERACTIVE), 1)
10 | 	DOCKER_FLAGS += -t
11 | endif
12 |
13 | shellcheck:
14 | 	docker run --rm -i $(DOCKER_FLAGS) \
15 | 		--name configs-shellcheck \
16 | 		-v $(CURDIR):/usr/src:ro \
17 | 		--workdir /usr/src \
18 | 		r.j3ss.co/shellcheck ./test.sh
19 |
-------------------------------------------------------------------------------- /README.md: --------------------------------------------------------------------------------
1 | # k8s-snowflake
2 |
3 | [![Build Status](https://travis-ci.org/jessfraz/k8s-snowflake.svg?branch=master)](https://travis-ci.org/jessfraz/k8s-snowflake)
4 |
5 | Configs and scripts for bootstrapping an opinionated Kubernetes cluster anywhere.
6 |
7 | Except it's my _snowflake opinionated k8s distro_ :)
8 |
9 | > **NOTE:** current support is only for Azure and Google Cloud.
10 |
11 | **Table of Contents**
12 |
13 |
14 |
15 | - [Provisioning](#provisioning)
16 |   * [Base OS](#base-os)
17 |   * [Encrypted `etcd` secret data at rest](#encrypted-etcd-secret-data-at-rest)
18 |   * [RBAC and Pod Security Policies](#rbac-and-pod-security-policies)
19 |   * [Container Runtime](#container-runtime)
20 |   * [Networking](#networking)
21 | - [Azure](#azure)
22 | - [Google Cloud](#google-cloud)
23 | - [Acknowledgements](#acknowledgements)
24 |
25 |
26 |
27 | ## Provisioning
28 |
29 | These are **opinionated scripts**. If you don't like my opinions, maybe consider
30 | using one of the hundred-thousand other tools for provisioning a cluster.
31 |
32 | I literally made this _because_ I didn't like the opinion of other things... so
33 | here we are. :P
34 |
35 | I purposely tried to keep this as minimal and simple as possible from the OS
36 | base up.
37 |
38 | ### Base OS
39 |
40 | Every node uses [Intel's Clear Linux](https://clearlinux.org/) as the base.
41 | This is for reasons of security and performance. If you would like to learn
42 | more about that, you should click the link to their site.
43 |
44 | ### Encrypted `etcd` secret data at rest
45 |
46 | Data is encrypted with `aescbc`. You can verify it's encrypted by following [these
47 | instructions](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
48 |
49 | ### RBAC and Pod Security Policies
50 |
51 | Kubernetes is installed with [`RBAC`](https://kubernetes.io/docs/admin/authorization/rbac/)
52 | and is set up with a few roles and bindings that map to pod security policies.
53 |
54 | There is a [restricted pod security policy](etc/pod-security-policy-restricted.yaml)
55 | which does not allow running
56 | privileged pods and does not allow privilege escalation, which is enforced via the Linux
57 | `no_new_privs` flag.
58 |
59 | There is also a [permissive pod security
60 | policy](etc/pod-security-policy-permissive.yaml).
61 |
62 | There are two cluster role bindings created (which grant permissions across
63 | namespaces):
64 |
65 | - `restricted`: cannot create privileged pods, cannot escalate privileges,
66 |   cannot run containers as root, cannot use the host network, IPC or PID
67 |   namespace
68 | - `permissive`: can create pods that are privileged and use the privileged pod
69 |   security policy
70 |
71 | ### Container Runtime
72 |
73 | The cluster uses [`cri-containerd`](https://github.com/kubernetes-incubator/cri-containerd)
74 | with [`runc`](https://github.com/opencontainers/runc) as the container
75 | runtime.
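
For reference, the kubelet on each node is pointed at `cri-containerd` through the
runtime flags in [`etc/systemd/system/kubelet.service`](etc/systemd/system/kubelet.service):

```
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/cri-containerd.sock \
```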
76 |
77 | ### Networking
78 |
79 | The cluster uses [`cilium`](https://github.com/cilium/cilium)
80 | as a networking plugin. I like cilium because it uses BPF and XDP and their
81 | design is something I could wrap my head around. You should check out their repo;
82 | it's one of the cleanest implementations I have seen. You should check out their
83 | really sweet
84 | [BPF and XDP Reference Guide](https://cilium.readthedocs.io/en/latest/bpf/#) too!
85 |
86 | ## Azure
87 |
88 | Make sure you have the `az` tool installed. You can find instructions on
89 | downloading that
90 | [here](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).
91 |
92 | Make sure you are logged in.
93 |
94 | To provision your cluster, clone this repo and run:
95 |
96 | ```console
97 | $ ./azure/setup.sh
98 | ```
99 |
100 | The script automatically sets up an `admin` user with a kubeconfig locally, so you
101 | should be able to just run `kubectl` after!
102 |
103 | > **NOTE:** if you want to change the number of nodes, etc., check out the
104 | > environment variables at the top of [`azure/setup.sh`](azure/setup.sh).
105 |
106 | ## Google Cloud
107 |
108 | Make sure you have the `gcloud` tool installed. You can find instructions on
109 | downloading that
110 | [here](https://cloud.google.com/sdk/downloads).
111 |
112 | Make sure you are logged in.
113 |
114 | To provision your cluster, clone this repo and run:
115 |
116 | ```console
117 | $ VM_USER="your_ssh_user" ./gcloud/setup.sh
118 | ```
119 |
120 | The script automatically sets up an `admin` user with a kubeconfig locally, so you
121 | should be able to just run `kubectl` after!
122 |
123 | > **NOTE:** if you want to change the number of nodes, etc., check out the
124 | > environment variables at the top of [`gcloud/setup.sh`](gcloud/setup.sh).
125 |
126 | ## Acknowledgements
127 |
128 | Thanks to [@kelseyhightower](https://github.com/kelseyhightower) for
129 | [kubernetes-the-hard-way](https://github.com/kelseyhightower/kubernetes-the-hard-way),
130 | which helped a lot with this.
131 |
132 | If you are wondering why I didn't use something like `cloud-init`, it's because
133 | Clear Linux has a pretty weirdly behaving version of `cloud-init` and I love
134 | bash, m'kay.
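
Once either setup script finishes and the `admin` kubeconfig is in place, a quick
sanity check (assuming `kubectl` is installed locally) is:

```console
$ kubectl get nodes
$ kubectl get componentstatuses
```
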
-------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # # vi: set ft=ruby : 3 | 4 | require 'fileutils' 5 | 6 | Vagrant.require_version ">= 1.9.0" 7 | 8 | # Defaults for config options defined in CONFIG 9 | $num_controllers = 1 10 | $num_workers = ENV["WORKERS"].to_i 11 | 12 | $vm_gui = false 13 | $vm_memory = 1024 14 | $vm_cpus = 1 15 | $subnet = "172.17.8" 16 | 17 | Vagrant.configure("2") do |config| 18 | # always use Vagrants insecure key 19 | config.ssh.insert_key = false 20 | config.vm.box = "bento/ubuntu-16.04" 21 | config.ssh.username = "vagrant" 22 | # plugin conflict 23 | if Vagrant.has_plugin?("vagrant-vbguest") then 24 | config.vbguest.auto_update = false 25 | end 26 | ["vmware_fusion", "vmware_workstation"].each do |vmware| 27 | config.vm.provider vmware do |v| 28 | v.vmx['memsize'] = $vm_memory 29 | v.vmx['numvcpus'] = $vm_cpus 30 | end 31 | end 32 | config.vm.provider :virtualbox do |vb| 33 | vb.gui = $vm_gui 34 | vb.memory = $vm_memory 35 | vb.cpus = $vm_cpus 36 | end 37 | 38 | # controller node 39 | config.vm.define vm_name = "controller-node" do |config| 40 | config.vm.hostname = vm_name 41 | ip = "#{$subnet}.100" 42 | config.vm.network :private_network, ip: ip 43 | config.vm.provision "shell", path: "vagrant/_configure.sh" 44 | end 45 | 46 | # workers 47 | (0..$num_workers).each do |i| 48 | config.vm.define vm_name = "worker-node-#{i}" do |config| 49 | config.vm.hostname = vm_name 50 | ip = "#{$subnet}.#{i+101}" 51 | config.vm.network :private_network, ip: ip 52 | config.vm.provision "shell", path: "vagrant/_configure.sh" 53 | end 54 | end 55 | 56 | end 57 | -------------------------------------------------------------------------------- /azure/setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This script provisions a cluster running in azure with clear linux os 4 | # and provisions a kubernetes cluster on it. 5 | # 6 | # The script assumes you already have the azure command line tool `az`. 7 | # 8 | set -e 9 | set -o pipefail 10 | 11 | export CLOUD_PROVIDER="azure" 12 | 13 | # Check if we have the azure command line. 14 | command -v az >/dev/null 2>&1 || { echo >&2 "This script requires the azure command line tool, az. Aborting."; exit 1; } 15 | 16 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 17 | SCRIPT_DIR="${DIR}/../scripts" 18 | 19 | export RESOURCE_GROUP=${RESOURCE_GROUP:-kubernetes-clear-linux-snowflake} 20 | export REGION=${REGION:-eastus} 21 | export CONTROLLER_NODE_NAME=${CONTROLLER_NODE_NAME:-controller-node} 22 | export SSH_KEYFILE=${SSH_KEYFILE:-${HOME}/.ssh/id_rsa} 23 | export WORKERS=${WORKERS:-2} 24 | export VM_USER=${VM_USER:-azureuser} 25 | 26 | if [[ ! -f "$SSH_KEYFILE" ]]; then 27 | echo >&2 "SSH_KEYFILE $SSH_KEYFILE does not exist." 28 | echo >&2 "Change the SSH_KEYFILE variable to a new path or create an ssh key there." 
29 | exit 1 30 | fi 31 | SSH_KEYFILE_VALUE=$(cat "${SSH_KEYFILE}.pub") 32 | 33 | export PUBLIC_IP_NAME="k8s-public-ip" 34 | VIRTUAL_NETWORK_NAME="k8s-virtual-network" 35 | 36 | VM_SIZE="Standard_D2s_v3" 37 | # From: 38 | # az vm image list --publisher clear-linux-project --all 39 | OS_SYSTEM="clear-linux-project:clear-linux-os:containers:18860.0.0" 40 | 41 | create_resource_group() { 42 | exists=$(az group exists --name "$RESOURCE_GROUP" | tr -d '[:space:]') 43 | 44 | # Create the resource group if it does not already exist. 45 | if [[ "$exists" != "true" ]]; then 46 | echo "Creating resource group $RESOURCE_GROUP in region ${REGION}..." 47 | az group create --location "$REGION" --name "$RESOURCE_GROUP" 48 | fi 49 | } 50 | 51 | create_virtual_network() { 52 | echo "Creating virtual network ${VIRTUAL_NETWORK_NAME}..." 53 | az network vnet create --name "$VIRTUAL_NETWORK_NAME" --resource-group "$RESOURCE_GROUP" \ 54 | --address-prefix 10.0.0.0/8 --subnet-name "k8s-subnet" --subnet-prefix 10.240.0.0/16 55 | } 56 | 57 | create_apiserver_ip_address() { 58 | echo "Creating apiserver public ip address..." 59 | az network public-ip create --name "$PUBLIC_IP_NAME" --resource-group "$RESOURCE_GROUP" 60 | } 61 | 62 | create_controller_node() { 63 | echo "Creating controller node ${CONTROLLER_NODE_NAME}..." 64 | 65 | # create an availability set 66 | az vm availability-set create --resource-group "$RESOURCE_GROUP" \ 67 | --name "${CONTROLLER_NODE_NAME}-availability-set" 68 | 69 | # create the VM 70 | az vm create --name "$CONTROLLER_NODE_NAME" --resource-group "$RESOURCE_GROUP" \ 71 | --ssh-key-value "$SSH_KEYFILE_VALUE" \ 72 | --image "$OS_SYSTEM" \ 73 | --admin-username "$VM_USER" \ 74 | --size "$VM_SIZE" \ 75 | --vnet-name "$VIRTUAL_NETWORK_NAME" \ 76 | --availability-set "${CONTROLLER_NODE_NAME}-availability-set" \ 77 | --subnet "k8s-subnet" \ 78 | --private-ip-address 10.240.255.5 \ 79 | --public-ip-address "$PUBLIC_IP_NAME" \ 80 | --nsg "k8s-controller-security-group" \ 81 | --tags "controller,kubernetes" 82 | 83 | # create NSG rule to allow traffic on port 6443 84 | az network nsg rule create --resource-group "$RESOURCE_GROUP" \ 85 | --nsg-name "k8s-controller-security-group" \ 86 | --name kubeapi --access allow \ 87 | --protocol Tcp --direction Inbound --priority 200 \ 88 | --source-address-prefix "*" \ 89 | --source-port-range "*" \ 90 | --destination-address-prefix "*" \ 91 | --destination-port-range 6443 92 | 93 | # enable ip forwarding 94 | # enabling IP forwarding for a network interface causes Azure not to 95 | # check the source/destination IP address. 96 | # if you don't enable this setting, traffic destined for an IP address 97 | # other than the NIC that receives it, is dropped by Azure. 98 | az network nic update --resource-group "$RESOURCE_GROUP" \ 99 | --name "${CONTROLLER_NODE_NAME}VMNic" \ 100 | --ip-forwarding true 101 | 102 | # create the route table 103 | az network route-table create --resource-group "$RESOURCE_GROUP" \ 104 | --name "k8s-route-table" 105 | 106 | # update the subnet 107 | az network vnet subnet update --resource-group "$RESOURCE_GROUP" \ 108 | --name "k8s-subnet" \ 109 | --vnet-name "$VIRTUAL_NETWORK_NAME" \ 110 | --network-security-group "k8s-controller-security-group" \ 111 | --route-table "k8s-route-table" 112 | } 113 | 114 | create_worker_nodes() { 115 | for i in $(seq 0 "$WORKERS"); do 116 | worker_node_name="worker-node-${i}" 117 | echo "Creating worker node ${worker_node_name}..." 
118 | 119 | # create an availability set 120 | az vm availability-set create --resource-group "$RESOURCE_GROUP" \ 121 | --name "${worker_node_name}-availability-set" 122 | 123 | # create the VM 124 | az vm create --name "$worker_node_name" --resource-group "$RESOURCE_GROUP" \ 125 | --private-ip-address "10.240.255.5${i}" \ 126 | --public-ip-address-allocation="dynamic" \ 127 | --ssh-key-value "$SSH_KEYFILE_VALUE" \ 128 | --image "$OS_SYSTEM" \ 129 | --admin-username "$VM_USER" \ 130 | --size "$VM_SIZE" \ 131 | --vnet-name "$VIRTUAL_NETWORK_NAME" \ 132 | --subnet "k8s-subnet" \ 133 | --availability-set "${worker_node_name}-availability-set" \ 134 | --tags "worker,kubernetes" 135 | 136 | # enable ip forwarding 137 | # enabling IP forwarding for a network interface causes Azure not to 138 | # check the source/destination IP address. 139 | # if you don't enable this setting, traffic destined for an IP address 140 | # other than the NIC that receives it, is dropped by Azure. 141 | az network nic update --resource-group "$RESOURCE_GROUP" \ 142 | --name "${worker_node_name}VMNic" \ 143 | --ip-forwarding true 144 | 145 | # get the internal ip for the instance 146 | # this is cloud provider specific 147 | # Google 148 | # internal_ip=$(gcloud compute instances describe "$instance" --format 'value(networkInterfaces[0].networkIP)') 149 | # Azure 150 | internal_ip=$(az vm show -g "$RESOURCE_GROUP" -n "$worker_node_name" --show-details --query 'privateIps' -o tsv | tr -d '[:space:]') 151 | 152 | # create the routes 153 | az network route-table route create --resource-group "$RESOURCE_GROUP" \ 154 | --route-table-name "k8s-route-table" \ 155 | --address-prefix "10.200.${i}.0/24" \ 156 | --name "worker-route-${i}" \ 157 | --next-hop-type VirtualAppliance \ 158 | --next-hop-ip-address "$internal_ip" 159 | done 160 | } 161 | 162 | create_resource_group 163 | create_virtual_network 164 | create_apiserver_ip_address 165 | create_controller_node 166 | create_worker_nodes 167 | 168 | "${SCRIPT_DIR}/provision.sh" 169 | -------------------------------------------------------------------------------- /ca/admin-csr.json: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "admin", 3 | "key": { 4 | "algo": "rsa", 5 | "size": 2048 6 | }, 7 | "names": [ 8 | { 9 | "C": "US", 10 | "L": "New York City", 11 | "O": "system:masters", 12 | "OU": "Kubernetes Cluster", 13 | "ST": "New York" 14 | } 15 | ] 16 | } 17 | -------------------------------------------------------------------------------- /ca/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "signing": { 3 | "default": { 4 | "expiry": "8760h" 5 | }, 6 | "profiles": { 7 | "kubernetes": { 8 | "usages": ["signing", "key encipherment", "server auth", "client auth"], 9 | "expiry": "8760h" 10 | } 11 | } 12 | } 13 | } 14 | -------------------------------------------------------------------------------- /ca/csr.json: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "Kubernetes", 3 | "key": { 4 | "algo": "rsa", 5 | "size": 2048 6 | }, 7 | "names": [ 8 | { 9 | "C": "US", 10 | "L": "New York City", 11 | "O": "Kubernetes", 12 | "OU": "CA", 13 | "ST": "New York" 14 | } 15 | ] 16 | } 17 | -------------------------------------------------------------------------------- /ca/instance-csr.json: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "system:node:INSTANCE", 3 | "key": { 4 | "algo": 
"rsa", 5 | "size": 2048 6 | }, 7 | "names": [ 8 | { 9 | "C": "US", 10 | "L": "New York City", 11 | "O": "system:nodes", 12 | "OU": "Kubernetes Cluster", 13 | "ST": "New York" 14 | } 15 | ] 16 | } 17 | -------------------------------------------------------------------------------- /ca/kube-proxy-csr.json: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "system:kube-proxy", 3 | "key": { 4 | "algo": "rsa", 5 | "size": 2048 6 | }, 7 | "names": [ 8 | { 9 | "C": "US", 10 | "L": "New York City", 11 | "O": "system:node-proxier", 12 | "OU": "Kubernetes Cluster", 13 | "ST": "New York" 14 | } 15 | ] 16 | } 17 | -------------------------------------------------------------------------------- /ca/kubernetes-csr.json: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "kubernetes", 3 | "key": { 4 | "algo": "rsa", 5 | "size": 2048 6 | }, 7 | "names": [ 8 | { 9 | "C": "US", 10 | "L": "New York City", 11 | "O": "Kubernetes", 12 | "OU": "Kubernetes Cluster", 13 | "ST": "New York" 14 | } 15 | ] 16 | } 17 | -------------------------------------------------------------------------------- /etc/cilium.yaml: -------------------------------------------------------------------------------- 1 | kind: ConfigMap 2 | apiVersion: v1 3 | metadata: 4 | name: cilium-config 5 | namespace: kube-system 6 | data: 7 | # This etcd-config contains the etcd endpoints of your cluster. If you use 8 | # TLS please make sure you uncomment the ca-file line and add the respective 9 | # certificate has a k8s secret, see explanation bellow in the comment labeled 10 | # "ETCD-CERT" 11 | etcd-config: |- 12 | --- 13 | endpoints: 14 | - https://INTERNAL_IP:2379 15 | # 16 | # In case you want to use TLS in etcd, uncomment the following line 17 | # and add the certificate as explained in the comment labeled "ETCD-CERT" 18 | ca-file: '/var/lib/etcd-secrets/etcd-ca' 19 | # 20 | # In case you want client to server authentication, uncomment the following 21 | # lines and add the certificate and key in cilium-etcd-secrets bellow 22 | key-file: '/var/lib/etcd-secrets/etcd-client-key' 23 | cert-file: '/var/lib/etcd-secrets/etcd-client-crt' 24 | 25 | # If you want to run cilium in debug mode change this value to true 26 | debug: "false" 27 | disable-ipv4: "false" 28 | --- 29 | # The etcd secrets can be populated in kubernetes. 30 | # For more information see: https://kubernetes.io/docs/concepts/configuration/secret 31 | apiVersion: v1 32 | kind: Secret 33 | type: Opaque 34 | metadata: 35 | name: cilium-etcd-secrets 36 | namespace: kube-system 37 | data: 38 | # ETCD-CERT: Each value should contain the whole certificate in base64, on a 39 | # single line. 
You can generate the base64 with: $ base64 -w 0 ./ca.pem 40 | # (the "-w 0" generates the output on a single line) 41 | etcd-ca: "ETCD_CA" 42 | etcd-client-key: "ETCD_CLIENT_KEY" 43 | etcd-client-crt: "ETCD_CLIENT_CERT" 44 | --- 45 | apiVersion: v1 46 | kind: ServiceAccount 47 | metadata: 48 | name: cilium 49 | namespace: kube-system 50 | --- 51 | kind: ClusterRoleBinding 52 | apiVersion: rbac.authorization.k8s.io/v1beta1 53 | metadata: 54 | name: cilium 55 | roleRef: 56 | apiGroup: rbac.authorization.k8s.io 57 | kind: ClusterRole 58 | name: cilium 59 | subjects: 60 | - kind: ServiceAccount 61 | name: cilium 62 | namespace: kube-system 63 | - kind: Group 64 | name: system:nodes 65 | --- 66 | apiVersion: extensions/v1beta1 67 | kind: DaemonSet 68 | metadata: 69 | name: cilium 70 | namespace: kube-system 71 | spec: 72 | template: 73 | metadata: 74 | labels: 75 | k8s-app: cilium 76 | kubernetes.io/cluster-service: "true" 77 | annotations: 78 | # This annotation plus the CriticalAddonsOnly toleration makes 79 | # cilium to be a critical pod in the cluster, which ensures cilium 80 | # gets priority scheduling. 81 | # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/ 82 | scheduler.alpha.kubernetes.io/critical-pod: '' 83 | scheduler.alpha.kubernetes.io/tolerations: >- 84 | [{"key":"dedicated","operator":"Equal","value":"master","effect":"NoSchedule"}] 85 | spec: 86 | serviceAccountName: cilium 87 | containers: 88 | - image: cilium/cilium:stable 89 | imagePullPolicy: Always 90 | name: cilium-agent 91 | command: [ "cilium-agent" ] 92 | args: 93 | - "--debug=$(CILIUM_DEBUG)" 94 | - "-t" 95 | - "vxlan" 96 | - "--kvstore" 97 | - "etcd" 98 | - "--kvstore-opt" 99 | - "etcd.config=/var/lib/etcd-config/etcd.config" 100 | - "--disable-ipv4=$(DISABLE_IPV4)" 101 | - "--k8s-kubeconfig-path=/host/var/lib/kubelet/kubeconfig" 102 | lifecycle: 103 | postStart: 104 | exec: 105 | command: 106 | - "/cni-install.sh" 107 | preStop: 108 | exec: 109 | command: 110 | - "/cni-uninstall.sh" 111 | env: 112 | - name: "K8S_NODE_NAME" 113 | valueFrom: 114 | fieldRef: 115 | fieldPath: spec.nodeName 116 | - name: "CILIUM_DEBUG" 117 | valueFrom: 118 | configMapKeyRef: 119 | name: cilium-config 120 | key: debug 121 | - name: "DISABLE_IPV4" 122 | valueFrom: 123 | configMapKeyRef: 124 | name: cilium-config 125 | key: disable-ipv4 126 | livenessProbe: 127 | exec: 128 | command: 129 | - cilium 130 | - status 131 | initialDelaySeconds: 180 132 | failureThreshold: 10 133 | periodSeconds: 10 134 | readinessProbe: 135 | exec: 136 | command: 137 | - cilium 138 | - status 139 | initialDelaySeconds: 180 140 | periodSeconds: 15 141 | volumeMounts: 142 | - name: bpf-maps 143 | mountPath: /sys/fs/bpf 144 | - name: cilium-run 145 | mountPath: /var/run/cilium 146 | - name: cni-path 147 | mountPath: /host/opt/cni/bin 148 | - name: etc-cni-netd 149 | mountPath: /host/etc/cni/net.d 150 | - name: var-lib-kubelet 151 | mountPath: /host/var/lib/kubelet 152 | readOnly: true 153 | - name: docker-socket 154 | mountPath: /var/run/docker.sock 155 | readOnly: true 156 | - name: etcd-config-path 157 | mountPath: /var/lib/etcd-config 158 | readOnly: true 159 | - name: etcd-secrets 160 | mountPath: /var/lib/etcd-secrets 161 | readOnly: true 162 | securityContext: 163 | allowPrivilegeEscalation: true 164 | capabilities: 165 | add: 166 | - "NET_ADMIN" 167 | privileged: true 168 | hostNetwork: true 169 | volumes: 170 | # To keep state between restarts / upgrades 171 | - name: cilium-run 172 | hostPath: 173 | path: 
/var/run/cilium 174 | # To keep state between restarts / upgrades 175 | - name: bpf-maps 176 | hostPath: 177 | path: /sys/fs/bpf 178 | # To read docker events from the node 179 | - name: docker-socket 180 | hostPath: 181 | path: /var/run/docker.sock 182 | # To install cilium cni plugin in the host 183 | - name: cni-path 184 | hostPath: 185 | path: /opt/cni/bin 186 | # To install cilium cni configuration in the host 187 | - name: etc-cni-netd 188 | hostPath: 189 | path: /etc/cni/net.d 190 | # to read the kubeconfig 191 | - name: var-lib-kubelet 192 | hostPath: 193 | path: /var/lib/kubelet 194 | # To read the etcd config stored in config maps 195 | - name: etcd-config-path 196 | configMap: 197 | name: cilium-config 198 | items: 199 | - key: etcd-config 200 | path: etcd.config 201 | # To read the k8s etcd secrets in case the user might want to use TLS 202 | - name: etcd-secrets 203 | secret: 204 | secretName: cilium-etcd-secrets 205 | tolerations: 206 | - effect: NoSchedule 207 | key: node-role.kubernetes.io/master 208 | - effect: NoSchedule 209 | key: node.cloudprovider.kubernetes.io/uninitialized 210 | value: "true" 211 | # Mark cilium's pod as critical for rescheduling 212 | - key: CriticalAddonsOnly 213 | operator: "Exists" 214 | --- 215 | kind: ClusterRole 216 | apiVersion: rbac.authorization.k8s.io/v1beta1 217 | metadata: 218 | name: cilium 219 | rules: 220 | - apiGroups: 221 | - "networking.k8s.io" 222 | resources: 223 | - networkpolicies 224 | verbs: 225 | - get 226 | - list 227 | - watch 228 | - apiGroups: 229 | - "" 230 | resources: 231 | - namespaces 232 | - services 233 | - nodes 234 | - endpoints 235 | - componentstatuses 236 | verbs: 237 | - get 238 | - list 239 | - watch 240 | - apiGroups: 241 | - "" 242 | resources: 243 | - pods 244 | - nodes 245 | verbs: 246 | - get 247 | - list 248 | - watch 249 | - update 250 | - apiGroups: 251 | - extensions 252 | resources: 253 | - networkpolicies #FIXME remove this when we drop support for k8s NP-beta GH-1202 254 | - thirdpartyresources 255 | - ingresses 256 | verbs: 257 | - create 258 | - get 259 | - list 260 | - watch 261 | - apiGroups: 262 | - "apiextensions.k8s.io" 263 | resources: 264 | - customresourcedefinitions 265 | verbs: 266 | - create 267 | - get 268 | - list 269 | - watch 270 | - apiGroups: 271 | - cilium.io 272 | resources: 273 | - ciliumnetworkpolicies 274 | verbs: 275 | - "*" 276 | -------------------------------------------------------------------------------- /etc/cluster-role-binding-kube-apiserver-to-kubelet.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1beta1 2 | kind: ClusterRoleBinding 3 | metadata: 4 | name: system:kube-apiserver 5 | namespace: "" 6 | roleRef: 7 | apiGroup: rbac.authorization.k8s.io 8 | kind: ClusterRole 9 | name: system:kube-apiserver-to-kubelet 10 | subjects: 11 | - apiGroup: rbac.authorization.k8s.io 12 | kind: User 13 | name: kubernetes 14 | -------------------------------------------------------------------------------- /etc/cluster-role-binding-restricted.yaml: -------------------------------------------------------------------------------- 1 | # privilegedPSP gives the privilegedPSP role 2 | # to the group privileged. 
3 | apiVersion: rbac.authorization.k8s.io/v1 4 | kind: ClusterRoleBinding 5 | metadata: 6 | name: privileged-psp-users 7 | subjects: 8 | - kind: Group 9 | apiGroup: rbac.authorization.k8s.io 10 | name: privileged-psp-users 11 | roleRef: 12 | apiGroup: rbac.authorization.k8s.io 13 | kind: ClusterRole 14 | name: privileged-psp-user 15 | --- 16 | # restrictedPSP grants the restrictedPSP role to 17 | # the groups restricted and privileged. 18 | apiVersion: rbac.authorization.k8s.io/v1 19 | kind: ClusterRoleBinding 20 | metadata: 21 | name: restricted-psp-users 22 | subjects: 23 | - kind: Group 24 | apiGroup: rbac.authorization.k8s.io 25 | name: restricted-psp-users 26 | - kind: Group 27 | apiGroup: rbac.authorization.k8s.io 28 | name: privileged-psp-users 29 | roleRef: 30 | apiGroup: rbac.authorization.k8s.io 31 | kind: ClusterRole 32 | name: restricted-psp-user 33 | --- 34 | # edit grants edit role to the groups 35 | # restricted and privileged. 36 | apiVersion: rbac.authorization.k8s.io/v1 37 | kind: ClusterRoleBinding 38 | metadata: 39 | name: edit 40 | subjects: 41 | - kind: Group 42 | apiGroup: rbac.authorization.k8s.io 43 | name: privileged-psp-users 44 | - kind: Group 45 | apiGroup: rbac.authorization.k8s.io 46 | name: restricted-psp-users 47 | roleRef: 48 | apiGroup: rbac.authorization.k8s.io 49 | kind: ClusterRole 50 | name: edit 51 | -------------------------------------------------------------------------------- /etc/cluster-role-kube-apiserver-to-kubelet.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1beta1 2 | kind: ClusterRole 3 | metadata: 4 | annotations: 5 | rbac.authorization.kubernetes.io/autoupdate: "true" 6 | labels: 7 | kubernetes.io/bootstrapping: rbac-defaults 8 | name: system:kube-apiserver-to-kubelet 9 | rules: 10 | - apiGroups: 11 | - "" 12 | resources: 13 | - nodes/proxy 14 | - nodes/stats 15 | - nodes/log 16 | - nodes/spec 17 | - nodes/metrics 18 | verbs: 19 | - "*" 20 | -------------------------------------------------------------------------------- /etc/cluster-role-restricted.yaml: -------------------------------------------------------------------------------- 1 | # restrictedPSP grants access to use 2 | # the restricted PSP. 3 | apiVersion: rbac.authorization.k8s.io/v1 4 | kind: ClusterRole 5 | metadata: 6 | name: restricted-psp-user 7 | rules: 8 | - apiGroups: 9 | - extensions 10 | resources: 11 | - podsecuritypolicies 12 | resourceNames: 13 | - restricted 14 | verbs: 15 | - use 16 | --- 17 | # privilegedPSP grants access to use the privileged 18 | # PSP. 
19 | apiVersion: rbac.authorization.k8s.io/v1 20 | kind: ClusterRole 21 | metadata: 22 | name: privileged-psp-user 23 | rules: 24 | - apiGroups: 25 | - extensions 26 | resources: 27 | - podsecuritypolicies 28 | resourceNames: 29 | - privileged 30 | verbs: 31 | - use 32 | -------------------------------------------------------------------------------- /etc/cni/net.d/10-bridge.conf: -------------------------------------------------------------------------------- 1 | { 2 | "cniVersion": "0.3.1", 3 | "name": "bridge", 4 | "type": "bridge", 5 | "bridge": "cnio0", 6 | "isGateway": true, 7 | "ipMasq": true, 8 | "ipam": { 9 | "type": "host-local", 10 | "ranges": [ 11 | [{"subnet": "POD_CIDR"}] 12 | ], 13 | "routes": [{"dst": "0.0.0.0/0"}] 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /etc/cni/net.d/99-loopback.conf: -------------------------------------------------------------------------------- 1 | { 2 | "cniVersion": "0.3.1", 3 | "type": "loopback" 4 | } 5 | -------------------------------------------------------------------------------- /etc/encryption-config.yaml: -------------------------------------------------------------------------------- 1 | kind: EncryptionConfig 2 | apiVersion: v1 3 | resources: 4 | - resources: 5 | - secrets 6 | providers: 7 | - aescbc: 8 | keys: 9 | - name: key1 10 | secret: SECRET 11 | - identity: {} 12 | -------------------------------------------------------------------------------- /etc/kube-dns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: kube-dns 5 | namespace: kube-system 6 | --- 7 | apiVersion: v1 8 | kind: ConfigMap 9 | metadata: 10 | name: kube-dns 11 | namespace: kube-system 12 | labels: 13 | addonmanager.kubernetes.io/mode: EnsureExists 14 | --- 15 | apiVersion: v1 16 | kind: Service 17 | metadata: 18 | name: kube-dns 19 | namespace: kube-system 20 | labels: 21 | k8s-app: kube-dns 22 | kubernetes.io/cluster-service: "true" 23 | kubernetes.io/name: "KubeDNS" 24 | spec: 25 | clusterIP: 10.32.0.10 26 | ports: 27 | - name: dns 28 | port: 53 29 | protocol: UDP 30 | targetPort: 53 31 | - name: dns-tcp 32 | port: 53 33 | protocol: TCP 34 | targetPort: 53 35 | selector: 36 | k8s-app: kube-dns 37 | sessionAffinity: None 38 | type: ClusterIP 39 | --- 40 | apiVersion: extensions/v1beta1 41 | kind: Deployment 42 | metadata: 43 | labels: 44 | k8s-app: kube-dns 45 | kubernetes.io/cluster-service: "true" 46 | name: kube-dns 47 | namespace: kube-system 48 | spec: 49 | replicas: 2 50 | selector: 51 | matchLabels: 52 | k8s-app: kube-dns 53 | strategy: 54 | rollingUpdate: 55 | maxSurge: 10% 56 | maxUnavailable: 0 57 | type: RollingUpdate 58 | template: 59 | metadata: 60 | annotations: 61 | scheduler.alpha.kubernetes.io/critical-pod: "" 62 | creationTimestamp: null 63 | labels: 64 | k8s-app: kube-dns 65 | spec: 66 | containers: 67 | - name: kubedns 68 | image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 69 | env: 70 | - name: PROMETHEUS_PORT 71 | value: "10055" 72 | args: 73 | - --domain=cluster.local. 
74 | - --dns-port=10053 75 | - --config-dir=/kube-dns-config 76 | - --v=2 77 | livenessProbe: 78 | failureThreshold: 5 79 | httpGet: 80 | path: /healthcheck/kubedns 81 | port: 10054 82 | scheme: HTTP 83 | initialDelaySeconds: 60 84 | periodSeconds: 10 85 | successThreshold: 1 86 | timeoutSeconds: 5 87 | ports: 88 | - name: dns-local 89 | containerPort: 10053 90 | protocol: UDP 91 | - name: dns-tcp-local 92 | containerPort: 10053 93 | protocol: TCP 94 | - name: metrics 95 | containerPort: 10055 96 | protocol: TCP 97 | readinessProbe: 98 | failureThreshold: 3 99 | httpGet: 100 | path: /readiness 101 | port: 8081 102 | scheme: HTTP 103 | initialDelaySeconds: 3 104 | periodSeconds: 10 105 | successThreshold: 1 106 | timeoutSeconds: 5 107 | resources: 108 | limits: 109 | memory: 170Mi 110 | requests: 111 | cpu: 100m 112 | memory: 70Mi 113 | volumeMounts: 114 | - name: kube-dns-config 115 | mountPath: /kube-dns-config 116 | - name: dnsmasq 117 | image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 118 | args: 119 | - -v=2 120 | - -logtostderr 121 | - -configDir=/etc/k8s/dns/dnsmasq-nanny 122 | - -restartDnsmasq=true 123 | - -- 124 | - -k 125 | - --cache-size=1000 126 | - --log-facility=- 127 | - --server=/cluster.local/127.0.0.1#10053 128 | - --server=/in-addr.arpa/127.0.0.1#10053 129 | - --server=/ip6.arpa/127.0.0.1#10053 130 | livenessProbe: 131 | failureThreshold: 5 132 | httpGet: 133 | path: /healthcheck/dnsmasq 134 | port: 10054 135 | scheme: HTTP 136 | initialDelaySeconds: 60 137 | periodSeconds: 10 138 | successThreshold: 1 139 | timeoutSeconds: 5 140 | ports: 141 | - name: dns 142 | containerPort: 53 143 | protocol: UDP 144 | - name: dns-tcp 145 | containerPort: 53 146 | protocol: TCP 147 | resources: 148 | requests: 149 | cpu: 150m 150 | memory: 20Mi 151 | volumeMounts: 152 | - name: kube-dns-config 153 | mountPath: /etc/k8s/dns/dnsmasq-nanny 154 | - name: sidecar 155 | image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4 156 | args: 157 | - --v=2 158 | - --logtostderr 159 | - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A 160 | - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A 161 | livenessProbe: 162 | failureThreshold: 5 163 | httpGet: 164 | path: /metrics 165 | port: 10054 166 | scheme: HTTP 167 | initialDelaySeconds: 60 168 | periodSeconds: 10 169 | successThreshold: 1 170 | timeoutSeconds: 5 171 | ports: 172 | - name: metrics 173 | containerPort: 10054 174 | protocol: TCP 175 | resources: 176 | requests: 177 | cpu: 10m 178 | memory: 20Mi 179 | dnsPolicy: Default 180 | restartPolicy: Always 181 | serviceAccount: kube-dns 182 | serviceAccountName: kube-dns 183 | terminationGracePeriodSeconds: 30 184 | tolerations: 185 | - key: CriticalAddonsOnly 186 | operator: Exists 187 | volumes: 188 | - name: kube-dns-config 189 | configMap: 190 | defaultMode: 420 191 | name: kube-dns 192 | optional: true 193 | -------------------------------------------------------------------------------- /etc/pod-nonewprivs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | annotations: 5 | container.seccomp.security.alpha.kubernetes.io/nonewprivs: docker/default 6 | name: nonewprivs 7 | spec: 8 | restartPolicy: Never 9 | securityContext: 10 | runAsNonRoot: true 11 | runAsUser: 1337 12 | containers: 13 | - image: gcr.io/kubernetes-e2e-test-image/nonewprivs:1.0 14 | name: nonewprivs 15 | 
-------------------------------------------------------------------------------- /etc/pod-security-policy-permissive.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: PodSecurityPolicy 3 | metadata: 4 | name: permissive 5 | spec: 6 | privileged: true 7 | allowPrivilegeEscalation: true 8 | hostNetwork: true 9 | hostPID: true 10 | hostIPC: true 11 | seLinux: 12 | rule: RunAsAny 13 | supplementalGroups: 14 | rule: RunAsAny 15 | runAsUser: 16 | rule: RunAsAny 17 | fsGroup: 18 | rule: RunAsAny 19 | volumes: 20 | - '*' 21 | allowedCapabilities: 22 | - '*' 23 | hostPorts: 24 | - min: 1 25 | max: 65536 26 | -------------------------------------------------------------------------------- /etc/pod-security-policy-restricted.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: PodSecurityPolicy 3 | metadata: 4 | name: restricted 5 | spec: 6 | privileged: false 7 | allowPrivilegeEscalation: false 8 | defaultAllowPrivilegeEscalation: false 9 | hostNetwork: false 10 | hostPID: false 11 | hostIPC: false 12 | hostNetwork: false 13 | seLinux: 14 | rule: RunAsAny 15 | supplementalGroups: 16 | rule: RunAsAny 17 | runAsUser: 18 | rule: MustRunAsNonRoot 19 | fsGroup: 20 | rule: RunAsAny 21 | volumes: 22 | - '*' 23 | allowedCapabilities: 24 | - '*' 25 | hostPorts: 26 | - min: 8000 27 | max: 9999 28 | -------------------------------------------------------------------------------- /etc/systemd/system/etcd.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=etcd 3 | Documentation=https://github.com/coreos/etcd 4 | 5 | [Service] 6 | ExecStart=/usr/bin/etcd \ 7 | --name ETCD_NAME \ 8 | --cert-file=/etc/etcd/kubernetes.pem \ 9 | --key-file=/etc/etcd/kubernetes-key.pem \ 10 | --peer-cert-file=/etc/etcd/kubernetes.pem \ 11 | --peer-key-file=/etc/etcd/kubernetes-key.pem \ 12 | --trusted-ca-file=/etc/etcd/ca.pem \ 13 | --peer-trusted-ca-file=/etc/etcd/ca.pem \ 14 | --peer-client-cert-auth \ 15 | --client-cert-auth \ 16 | --initial-advertise-peer-urls https://INTERNAL_IP:2380 \ 17 | --listen-peer-urls https://INTERNAL_IP:2380 \ 18 | --listen-client-urls https://INTERNAL_IP:2379,http://127.0.0.1:2379 \ 19 | --advertise-client-urls https://INTERNAL_IP:2379 \ 20 | --initial-cluster-token etcd-cluster-0 \ 21 | --initial-cluster-state new \ 22 | --data-dir=/var/lib/etcd 23 | Restart=on-failure 24 | RestartSec=5 25 | 26 | [Install] 27 | WantedBy=multi-user.target 28 | -------------------------------------------------------------------------------- /etc/systemd/system/kube-apiserver.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes API Server 3 | Documentation=https://github.com/kubernetes/kubernetes 4 | 5 | [Service] 6 | ExecStart=/usr/bin/kube-apiserver \ 7 | --admission-control=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,PodSecurityPolicy \ 8 | --advertise-address=INTERNAL_IP \ 9 | --allow-privileged=true \ 10 | --apiserver-count=3 \ 11 | --audit-log-maxage=30 \ 12 | --audit-log-maxbackup=3 \ 13 | --audit-log-maxsize=100 \ 14 | --audit-log-path=/var/log/audit.log \ 15 | --authorization-mode=Node,RBAC \ 16 | --bind-address=0.0.0.0 \ 17 | --client-ca-file=/var/lib/kubernetes/ca.pem \ 18 | --enable-swagger-ui=true \ 19 | --etcd-cafile=/var/lib/kubernetes/ca.pem \ 20 
| --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \ 21 | --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \ 22 | --etcd-servers=https://INTERNAL_IP:2379 \ 23 | --event-ttl=1h \ 24 | --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \ 25 | --insecure-bind-address=127.0.0.1 \ 26 | --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \ 27 | --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \ 28 | --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \ 29 | --kubelet-https=true \ 30 | --runtime-config=api/all \ 31 | --service-account-key-file=/var/lib/kubernetes/ca-key.pem \ 32 | --service-cluster-ip-range=10.32.0.0/24 \ 33 | --service-node-port-range=30000-32767 \ 34 | --tls-ca-file=/var/lib/kubernetes/ca.pem \ 35 | --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \ 36 | --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \ 37 | --v=2 38 | Restart=on-failure 39 | RestartSec=5 40 | 41 | [Install] 42 | WantedBy=multi-user.target 43 | -------------------------------------------------------------------------------- /etc/systemd/system/kube-controller-manager.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Controller Manager 3 | Documentation=https://github.com/kubernetes/kubernetes 4 | 5 | [Service] 6 | ExecStart=/usr/bin/kube-controller-manager \ 7 | --address=0.0.0.0 \ 8 | --cluster-cidr=10.200.0.0/16 \ 9 | --cluster-name=kubernetes \ 10 | --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \ 11 | --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \ 12 | --leader-elect=true \ 13 | --master=http://127.0.0.1:8080 \ 14 | --root-ca-file=/var/lib/kubernetes/ca.pem \ 15 | --service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \ 16 | --service-cluster-ip-range=10.32.0.0/24 \ 17 | --v=2 18 | Restart=on-failure 19 | RestartSec=5 20 | 21 | [Install] 22 | WantedBy=multi-user.target 23 | -------------------------------------------------------------------------------- /etc/systemd/system/kube-proxy.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kube Proxy 3 | Documentation=https://github.com/kubernetes/kubernetes 4 | 5 | [Service] 6 | ExecStart=/usr/bin/kube-proxy \ 7 | --cluster-cidr=10.200.0.0/16 \ 8 | --kubeconfig=/var/lib/kube-proxy/kubeconfig \ 9 | --proxy-mode=iptables \ 10 | --v=2 11 | Restart=on-failure 12 | RestartSec=5 13 | 14 | [Install] 15 | WantedBy=multi-user.target 16 | -------------------------------------------------------------------------------- /etc/systemd/system/kube-scheduler.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Scheduler 3 | Documentation=https://github.com/kubernetes/kubernetes 4 | 5 | [Service] 6 | ExecStart=/usr/bin/kube-scheduler \ 7 | --leader-elect=true \ 8 | --master=http://127.0.0.1:8080 \ 9 | --v=2 10 | Restart=on-failure 11 | RestartSec=5 12 | 13 | [Install] 14 | WantedBy=multi-user.target 15 | -------------------------------------------------------------------------------- /etc/systemd/system/kubelet.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kubelet 3 | Documentation=https://github.com/kubernetes/kubernetes 4 | After=cri-containerd.service 5 | Requires=cri-containerd.service 6 | 7 | [Service] 8 | ExecStart=/usr/bin/kubelet \ 9 | 
--allow-privileged=true \ 10 | --anonymous-auth=false \ 11 | --authorization-mode=Webhook \ 12 | --client-ca-file=/var/lib/kubernetes/ca.pem \ 13 | --cluster-dns=10.32.0.10 \ 14 | --cluster-domain=cluster.local \ 15 | --container-runtime=remote \ 16 | --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \ 17 | --image-pull-progress-deadline=2m \ 18 | --kubeconfig=/var/lib/kubelet/kubeconfig \ 19 | --network-plugin=cni \ 20 | --pod-cidr=POD_CIDR \ 21 | --register-node=true \ 22 | --require-kubeconfig \ 23 | --runtime-request-timeout=15m \ 24 | --tls-cert-file=/var/lib/kubelet/HOSTNAME.pem \ 25 | --tls-private-key-file=/var/lib/kubelet/HOSTNAME-key.pem \ 26 | --v=2 27 | Restart=on-failure 28 | RestartSec=5 29 | 30 | [Install] 31 | WantedBy=multi-user.target 32 | -------------------------------------------------------------------------------- /gcloud/setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This script provisions a cluster running in google cloud with container os 4 | # and provisions a kubernetes cluster on it. 5 | # 6 | # The script assumes you already have the google cloud command line tool `gcloud`. 7 | # 8 | set -e 9 | set -o pipefail 10 | 11 | export CLOUD_PROVIDER="google" 12 | 13 | # Check if we have the gcloud command line tool. 14 | command -v gcloud >/dev/null 2>&1 || { echo >&2 "This script requires the google cloud command line tool, gcloud. Aborting."; exit 1; } 15 | 16 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 17 | SCRIPT_DIR="${DIR}/../scripts" 18 | 19 | export RESOURCE_GROUP=${RESOURCE_GROUP:-kubernetes-clear-linux-snowflake} 20 | export REGION=${REGION:-us-east1} 21 | export ZONE=${ZONE:-us-east1-c} 22 | export CONTROLLER_NODE_NAME=${CONTROLLER_NODE_NAME:-controller-node} 23 | export SSH_KEYFILE=${SSH_KEYFILE:-${HOME}/.ssh/id_ed25519} 24 | export WORKERS=${WORKERS:-2} 25 | export VM_USER=${VM_USER:-jessfraz} 26 | 27 | if [[ ! -f "$SSH_KEYFILE" ]]; then 28 | echo >&2 "SSH_KEYFILE $SSH_KEYFILE does not exist." 29 | echo >&2 "Change the SSH_KEYFILE variable to a new path or create an ssh key there." 30 | exit 1 31 | fi 32 | 33 | # set a default region 34 | gcloud config set compute/region "$REGION" 35 | 36 | # set a default zone 37 | gcloud config set compute/zone "$ZONE" 38 | 39 | export PUBLIC_IP_NAME="k8s-public-ip" 40 | VIRTUAL_NETWORK_NAME="k8s-virtual-network" 41 | 42 | VM_SIZE="n1-standard-1" 43 | # TODO: change this to container os, slacker 44 | IMAGE_FAMILY="ubuntu-1604-lts" 45 | IMAGE_PROJECT="ubuntu-os-cloud" 46 | 47 | create_virtual_network() { 48 | echo "Creating virtual network ${VIRTUAL_NETWORK_NAME}..." 49 | gcloud compute networks create "$VIRTUAL_NETWORK_NAME" --mode custom 50 | gcloud compute networks subnets create "k8s-subnet" \ 51 | --network "$VIRTUAL_NETWORK_NAME" \ 52 | --range 10.240.0.0/24 53 | 54 | # create firewall rules 55 | gcloud compute firewall-rules create "$VIRTUAL_NETWORK_NAME-allow-internal" \ 56 | --allow tcp,udp,icmp \ 57 | --network "$VIRTUAL_NETWORK_NAME" \ 58 | --source-ranges 10.240.0.0/24,10.200.0.0/16 59 | gcloud compute firewall-rules create "$VIRTUAL_NETWORK_NAME-allow-external" \ 60 | --allow tcp:22,tcp:6443,icmp \ 61 | --network "$VIRTUAL_NETWORK_NAME" \ 62 | --source-ranges 0.0.0.0/0 63 | } 64 | 65 | create_apiserver_ip_address() { 66 | echo "Creating apiserver public ip address..." 
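	# this static address is later attached to the apiserver's load balancer
	# (see the forwarding rule created in create_loadbalancer below)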
67 | gcloud compute addresses create "$PUBLIC_IP_NAME" \ 68 | --region "$REGION" 69 | } 70 | 71 | create_controller_node() { 72 | echo "Creating controller node ${CONTROLLER_NODE_NAME}..." 73 | gcloud compute instances create "$CONTROLLER_NODE_NAME" \ 74 | --boot-disk-size 200GB \ 75 | --can-ip-forward \ 76 | --image-family "$IMAGE_FAMILY" \ 77 | --image-project "$IMAGE_PROJECT" \ 78 | --machine-type "$VM_SIZE" \ 79 | --private-network-ip 10.240.0.10 \ 80 | --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \ 81 | --subnet "k8s-subnet" \ 82 | --tags "controller,kubernetes" 83 | } 84 | 85 | create_worker_nodes() { 86 | for i in $(seq 0 "$WORKERS"); do 87 | worker_node_name="worker-node-${i}" 88 | echo "Creating worker node ${worker_node_name}..." 89 | 90 | gcloud compute instances create "$worker_node_name" \ 91 | --boot-disk-size 200GB \ 92 | --can-ip-forward \ 93 | --image-family "$IMAGE_FAMILY" \ 94 | --image-project "$IMAGE_PROJECT" \ 95 | --machine-type "$VM_SIZE" \ 96 | --private-network-ip "10.240.0.2${i}" \ 97 | --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \ 98 | --subnet "k8s-subnet" \ 99 | --tags "worker,kubernetes" 100 | 101 | # configure routes 102 | gcloud compute routes create "${worker_node_name}-route" \ 103 | --network "$VIRTUAL_NETWORK_NAME" \ 104 | --next-hop-address "10.240.0.2${i}" \ 105 | --destination-range "10.200.${i}.0/24" 106 | done 107 | } 108 | 109 | create_loadbalancer(){ 110 | gcloud compute target-pools create kubernetes-target-pool 111 | gcloud compute target-pools add-instances kubernetes-target-pool \ 112 | --instances "$CONTROLLER_NODE_NAME" 113 | 114 | gcloud compute forwarding-rules create "${PUBLIC_IP_NAME}-forwarding-rule" \ 115 | --address "$PUBLIC_IP_NAME" \ 116 | --ports 6443 \ 117 | --region "$REGION" \ 118 | --target-pool kubernetes-target-pool 119 | } 120 | 121 | create_virtual_network 122 | create_apiserver_ip_address 123 | create_controller_node 124 | create_worker_nodes 125 | create_loadbalancer 126 | 127 | "${SCRIPT_DIR}/provision.sh" 128 | -------------------------------------------------------------------------------- /scripts/generate_certificates.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This script generates certificates for nodes. 4 | # 5 | set -e 6 | set -o pipefail 7 | 8 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 9 | CA_CONFIG_DIR="${DIR}/../ca" 10 | 11 | # From https://pkg.cfssl.org/ 12 | CFSSL_VERSION="1.2" 13 | 14 | install_cfssl() { 15 | # exit early if we already have cfssl installed 16 | command -v cfssljson >/dev/null 2>&1 && { echo "cfssl & cfssljson are already installed. Skipping installation."; return 0; } 17 | 18 | local download_uri="https://pkg.cfssl.org/R${CFSSL_VERSION}" 19 | 20 | sudo curl -sSL "${download_uri}/cfssl_linux-amd64" -o /usr/local/bin/cfssl 21 | sudo curl -sSL "${download_uri}/cfssljson_linux-amd64" -o /usr/local/bin/cfssljson 22 | 23 | sudo chmod +x /usr/local/bin/cfssl* 24 | 25 | echo "Successfully installed cfssl & cfssljson!" 26 | } 27 | 28 | generate_certificates() { 29 | tmpdir=$(mktemp -d) 30 | 31 | # create the certificates in a temporary directory 32 | cd "$tmpdir" 33 | 34 | # generate the CA certificate and private key 35 | # outputs: ca-key.pem ca.pem 36 | echo "Generating CA certificate and private key..." 
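	# cfssl emits the new certificate, key, and CSR as JSON on stdout; piping it
	# through `cfssljson -bare ca` writes ca.pem and ca-key.pem (plus ca.csr) to disk.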
37 | cfssl gencert -initca "${CA_CONFIG_DIR}/csr.json" | cfssljson -bare ca 38 | 39 | # create the client and server certificates 40 | 41 | # create the admin client cert 42 | # outputs: admin-key.pem admin.pem 43 | echo "Generating admin client certificate..." 44 | cfssl gencert \ 45 | -ca="${tmpdir}/ca.pem" \ 46 | -ca-key="${tmpdir}/ca-key.pem" \ 47 | -config="${CA_CONFIG_DIR}/config.json" \ 48 | -profile=kubernetes \ 49 | "${CA_CONFIG_DIR}/admin-csr.json" | cfssljson -bare admin 50 | 51 | # create the kubelet client certificates 52 | # outputs: worker-0-key.pem worker-0.pem worker-1-key.pem worker-1.pem... 53 | for i in $(seq 0 "$WORKERS"); do 54 | instance="worker-node-${i}" 55 | instance_csr_config="${tmpdir}/${instance}-csr.json" 56 | sed "s/INSTANCE/${instance}/g" "${CA_CONFIG_DIR}/instance-csr.json" > "$instance_csr_config" 57 | 58 | # get the external ip for the instance 59 | # this is cloud provider specific 60 | # Google Cloud 61 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 62 | external_ip=$(gcloud compute instances describe "$instance" --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') 63 | fi 64 | # Azure 65 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 66 | external_ip=$(az vm show -g "$RESOURCE_GROUP" -n "$instance" --show-details --query 'publicIps' -o tsv | tr -d '[:space:]') 67 | fi 68 | 69 | # get the internal ip for the instance 70 | # this is cloud provider specific 71 | # Google Cloud 72 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 73 | internal_ip=$(gcloud compute instances describe "$instance" --format 'value(networkInterfaces[0].networkIP)') 74 | fi 75 | # Azure 76 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 77 | internal_ip=$(az vm show -g "$RESOURCE_GROUP" -n "$instance" --show-details --query 'privateIps' -o tsv | tr -d '[:space:]') 78 | fi 79 | 80 | # generate the certificates 81 | echo "Generating certificate for ${instance}..." 82 | cfssl gencert \ 83 | -ca="${tmpdir}/ca.pem" \ 84 | -ca-key="${tmpdir}/ca-key.pem" \ 85 | -config="${CA_CONFIG_DIR}/config.json" \ 86 | -hostname="${instance},${external_ip},${internal_ip}" \ 87 | -profile=kubernetes \ 88 | "$instance_csr_config" | cfssljson -bare "$instance" 89 | done 90 | 91 | # create the kube-proxy client certificate 92 | # outputs: kube-proxy-key.pem kube-proxy.pem 93 | echo "Generating kube-proxy client certificate..." 
94 | cfssl gencert \ 95 | -ca="${tmpdir}/ca.pem" \ 96 | -ca-key="${tmpdir}/ca-key.pem" \ 97 | -config="${CA_CONFIG_DIR}/config.json" \ 98 | -profile=kubernetes \ 99 | "${CA_CONFIG_DIR}/kube-proxy-csr.json" | cfssljson -bare kube-proxy 100 | 101 | # get the controller node public ip address 102 | # this is cloud provider specific 103 | # Google Cloud 104 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 105 | public_address=$(gcloud compute addresses describe "$PUBLIC_IP_NAME" --region "$REGION" --format 'value(address)') 106 | # get the controller internal ips 107 | internal_ips=$(gcloud compute instances describe "$CONTROLLER_NODE_NAME" --format 'value(networkInterfaces[0].networkIP)') 108 | fi 109 | # Azure 110 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 111 | public_address=$(az network public-ip show -g "$RESOURCE_GROUP" --name "$PUBLIC_IP_NAME" --query 'ipAddress' -o tsv | tr -d '[:space:]') 112 | # get the controller internal ips 113 | internal_ips=$(az vm list-ip-addresses -g "$RESOURCE_GROUP" -o table | grep controller | awk '{print $3}' | tr -d '[:space:]' | tr '\n' ',' | sed 's/,*$//g') 114 | fi 115 | # Vagrant 116 | if [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 117 | internal_ips=172.17.8.100 118 | public_address=172.17.8.100 119 | fi 120 | 121 | # create the kube-apiserver client certificate 122 | # outputs: kubernetes-key.pem kubernetes.pem 123 | echo "Generating kube-apiserver client certificate..." 124 | cfssl gencert \ 125 | -ca="${tmpdir}/ca.pem" \ 126 | -ca-key="${tmpdir}/ca-key.pem" \ 127 | -config="${CA_CONFIG_DIR}/config.json" \ 128 | -hostname="${internal_ips},${public_address},0.0.0.0,127.0.0.1,kubernetes.default" \ 129 | -profile=kubernetes \ 130 | "${CA_CONFIG_DIR}/kubernetes-csr.json" | cfssljson -bare kubernetes 131 | 132 | export CERTIFICATE_TMP_DIR="$tmpdir" 133 | echo "Certs generated in CERTIFICATE_TMP_DIR env var: $CERTIFICATE_TMP_DIR" 134 | } 135 | -------------------------------------------------------------------------------- /scripts/generate_configuration_files.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | set -o pipefail 4 | 5 | generate_configuration_files() { 6 | tmpdir=$(mktemp -d) 7 | 8 | # create the kubeconfigs in a temporary directory 9 | cd "$tmpdir" 10 | 11 | # get the controller node public ip address 12 | # this is cloud provider specific 13 | # Google Cloud 14 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 15 | internal_ip=$(gcloud compute addresses describe "$PUBLIC_IP_NAME" --region "$REGION" --format 'value(address)') 16 | fi 17 | # Azure 18 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 19 | internal_ip=$(az vm list-ip-addresses -g "$RESOURCE_GROUP" -o table | grep controller | awk '{print $3}' | tr -d '[:space:]' | tr '\n' ',' | sed 's/,*$//g') 20 | fi 21 | # Vagrant 22 | if [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 23 | internal_ip=172.17.8.100 24 | fi 25 | 26 | # Generate each workers kubeconfig 27 | # outputs: worker-0.kubeconfig worker-1.kubeconfig worker-2.kubeconfig 28 | for i in $(seq 0 "$WORKERS"); do 29 | instance="worker-node-${i}" 30 | kubectl config set-cluster "$RESOURCE_GROUP" \ 31 | --certificate-authority="${CERTIFICATE_TMP_DIR}/ca.pem" \ 32 | --embed-certs=true \ 33 | --server="https://${internal_ip}:6443" \ 34 | --kubeconfig="${instance}.kubeconfig" 35 | 36 | kubectl config set-credentials "system:node:${instance}" \ 37 | --client-certificate="${CERTIFICATE_TMP_DIR}/${instance}.pem" \ 38 | --client-key="${CERTIFICATE_TMP_DIR}/${instance}-key.pem" \ 
39 | --embed-certs=true \ 40 | --kubeconfig="${instance}.kubeconfig" 41 | 42 | kubectl config set-context default \ 43 | --cluster="$RESOURCE_GROUP" \ 44 | --user=system:node:"${instance}" \ 45 | --kubeconfig="${instance}.kubeconfig" 46 | 47 | kubectl config use-context default --kubeconfig="${instance}.kubeconfig" 48 | done 49 | 50 | # Generate kube-proxy config 51 | # outputs: kube-proxy.kubeconfig 52 | kubectl config set-cluster "$RESOURCE_GROUP" \ 53 | --certificate-authority="${CERTIFICATE_TMP_DIR}/ca.pem" \ 54 | --embed-certs=true \ 55 | --server="https://${internal_ip}:6443" \ 56 | --kubeconfig=kube-proxy.kubeconfig 57 | 58 | kubectl config set-credentials kube-proxy \ 59 | --client-certificate="${CERTIFICATE_TMP_DIR}/kube-proxy.pem" \ 60 | --client-key="${CERTIFICATE_TMP_DIR}/kube-proxy-key.pem" \ 61 | --embed-certs=true \ 62 | --kubeconfig=kube-proxy.kubeconfig 63 | 64 | kubectl config set-context default \ 65 | --cluster="$RESOURCE_GROUP" \ 66 | --user=kube-proxy \ 67 | --kubeconfig=kube-proxy.kubeconfig 68 | 69 | kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig 70 | 71 | export KUBECONFIG_TMP_DIR="$tmpdir" 72 | echo "Kubeconfigs generated in KUBECONFIG_TMP_DIR env var: $KUBECONFIG_TMP_DIR" 73 | } 74 | -------------------------------------------------------------------------------- /scripts/generate_encryption_config.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | set -o pipefail 4 | 5 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 6 | 7 | generate_encryption_config() { 8 | ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64) 9 | 10 | tmpdir=$(mktemp -d) 11 | configfile="${tmpdir}/encryption-config.yaml" 12 | sed "s#SECRET#${ENCRYPTION_KEY}#g" "${DIR}/../etc/encryption-config.yaml" > "$configfile" 13 | 14 | export ENCRYPTION_CONFIG="$configfile" 15 | echo "Encryption config generated in ENCRYPTION_CONFIG env var: $ENCRYPTION_CONFIG" 16 | } 17 | -------------------------------------------------------------------------------- /scripts/install_etcd.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | set -o pipefail 4 | 5 | # From https://github.com/coreos/etcd/releases 6 | # OR 7 | # curl -sSL https://api.github.com/repos/coreos/etcd/releases/latest | jq .tag_name 8 | ETCD_VERSION="v3.2.9" 9 | 10 | install_etcd() { 11 | local download_uri="https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz" 12 | 13 | curl -sSL "$download_uri" | tar -v -C /usr/bin -xz --strip-components=1 14 | 15 | chmod +x /usr/bin/etcd* 16 | 17 | # make the needed directories 18 | mkdir -p /etc/etcd /var/lib/etcd 19 | 20 | # get the internal ip 21 | # this is cloud provider specific 22 | # Vagrant 23 | if grep vagrant ~/.ssh/authorized_keys > /dev/null; then 24 | internal_ip="172.17.8.100" 25 | fi 26 | # Google Cloud 27 | if [[ -z "$internal_ip" ]]; then 28 | internal_ip=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip || true) 29 | fi 30 | # Azure 31 | if [[ -z "$internal_ip" ]]; then 32 | internal_ip=$(curl -H "Metadata:true" "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/privateIpAddress?api-version=2017-08-01&format=text") 33 | fi 34 | 35 | # each etcd member must have a unique name within an etcd cluster 36 | # set the etcd name to match the hostname of the current compute instance 37 | 
etcd_name=$(hostname -s) 38 | 39 | # update the etcd systemd service file 40 | sed -i "s/INTERNAL_IP/${internal_ip}/g" /etc/systemd/system/etcd.service 41 | sed -i "s/ETCD_NAME/${etcd_name}/g" /etc/systemd/system/etcd.service 42 | 43 | systemctl daemon-reload 44 | systemctl enable etcd 45 | systemctl start etcd 46 | } 47 | 48 | install_etcd 49 | -------------------------------------------------------------------------------- /scripts/install_kubernetes_controller.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | set -o pipefail 4 | 5 | # From https://github.com/kubernetes/kubernetes/releases 6 | # OR 7 | # curl -sSL https://storage.googleapis.com/kubernetes-release/release/stable.txt 8 | KUBERNETES_VERSION=v1.8.3 9 | 10 | install_kubernetes_controller() { 11 | local download_uri="https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64" 12 | 13 | curl -sSL "${download_uri}/kube-apiserver" > /usr/bin/kube-apiserver 14 | curl -sSL "${download_uri}/kube-controller-manager" > /usr/bin/kube-controller-manager 15 | curl -sSL "${download_uri}/kube-scheduler" > /usr/bin/kube-scheduler 16 | curl -sSL "${download_uri}/kubectl" > /usr/bin/kubectl 17 | 18 | chmod +x /usr/bin/kube* 19 | 20 | # make the needed directories 21 | mkdir -p /var/lib/kubernetes 22 | 23 | # get the internal ip 24 | # this is cloud provider specific 25 | # Vagrant 26 | if grep vagrant ~/.ssh/authorized_keys > /dev/null; then 27 | internal_ip="172.17.8.100" 28 | fi 29 | # Google Cloud 30 | if [[ -z "$internal_ip" ]]; then 31 | internal_ip=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip || true) 32 | fi 33 | # Azure 34 | if [[ -z "$internal_ip" ]]; then 35 | internal_ip=$(curl -H "Metadata:true" "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/privateIpAddress?api-version=2017-08-01&format=text") 36 | fi 37 | # update the kube-apiserver systemd service file 38 | sed -i "s/INTERNAL_IP/${internal_ip}/g" /etc/systemd/system/kube-apiserver.service 39 | 40 | # update the kube-controller-manager systemd service file 41 | sed -i "s/INTERNAL_IP/${internal_ip}/g" /etc/systemd/system/kube-controller-manager.service 42 | 43 | # update the kube-scheduler systemd service file 44 | sed -i "s/INTERNAL_IP/${internal_ip}/g" /etc/systemd/system/kube-scheduler.service 45 | 46 | systemctl daemon-reload 47 | systemctl enable kube-apiserver kube-controller-manager kube-scheduler 48 | systemctl start kube-apiserver kube-controller-manager kube-scheduler 49 | } 50 | 51 | install_kubernetes_controller 52 | -------------------------------------------------------------------------------- /scripts/install_kubernetes_worker.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | set -o pipefail 4 | 5 | # From https://github.com/kubernetes/kubernetes/releases 6 | # OR 7 | # curl -sSL https://storage.googleapis.com/kubernetes-release/release/stable.txt 8 | KUBERNETES_VERSION=v1.8.3 9 | 10 | # From https://github.com/containernetworking/plugins/releases 11 | # OR 12 | # curl -sSL https://api.github.com/repos/containernetworking/plugins/releases/latest | jq .tag_name 13 | CNI_VERSION=v0.6.0 14 | 15 | # From https://github.com/Azure/azure-container-networking/releases 16 | # OR 17 | # curl -sSL https://api.github.com/repos/Azure/azure-container-networking/releases/latest | jq .tag_name 18 | 
AZURE_CNI_VERSION=v0.91 19 | 20 | # From https://github.com/kubernetes-incubator/cri-containerd/releases 21 | # OR 22 | # curl -sSL https://api.github.com/repos/kubernetes-incubator/cri-containerd/releases/latest | jq .tag_name 23 | CRI_CONTAINERD_VERSION=1.0.0-alpha.1 24 | 25 | install_cni() { 26 | local download_uri="https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" 27 | local cni_bin="/opt/cni/bin" 28 | 29 | # make the needed directories 30 | mkdir -p "$cni_bin" /etc/cni/net.d 31 | 32 | curl -sSL "$download_uri" | tar -xz -C "$cni_bin" 33 | 34 | chmod +x "${cni_bin}"/* 35 | } 36 | 37 | install_azure_cni() { 38 | local download_uri="https://github.com/Azure/azure-container-networking/releases/download/${AZURE_CNI_VERSION}/azure-vnet-cni-linux-amd64-${AZURE_CNI_VERSION}.tgz" 39 | local cni_bin="/opt/cni/bin" 40 | local cni_opt="/etc/cni/net.d" 41 | 42 | # make the needed directories 43 | mkdir -p "$cni_bin" "$cni_opt" 44 | 45 | curl -sSL "$download_uri" | tar -xz -C "$cni_bin" 46 | 47 | # move config file 48 | mv "${cni_bin}/10-azure.conf" "$cni_opt" 49 | chmod 600 "${cni_opt}/10-azure.conf" 50 | 51 | # remove bridge config 52 | rm "${cni_opt}/10-bridge.conf" 53 | 54 | chmod +x "${cni_bin}"/* 55 | 56 | # Dump ebtables rules. 57 | /sbin/ebtables -t nat --list 58 | 59 | # touch /etc/hosts if it does not already exist 60 | if [[ ! -f /etc/hosts ]]; then 61 | touch /etc/hosts 62 | fi 63 | } 64 | 65 | install_cri_containerd() { 66 | # TODO: fix this when this is merged https://github.com/kubernetes-incubator/cri-containerd/pull/415 67 | # local download_uri="https://github.com/kubernetes-incubator/cri-containerd/releases/download/v${CRI_CONTAINERD_VERSION}/cri-containerd-${CRI_CONTAINERD_VERSION}.tar.gz" 68 | local download_uri="https://misc.j3ss.co/tmp/cri-containerd-${CRI_CONTAINERD_VERSION}-dirty.tar.gz" 69 | 70 | curl -sSL "$download_uri" | tar -xz -C / 71 | } 72 | 73 | install_kubernetes_components() { 74 | local download_uri="https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/amd64" 75 | 76 | curl -sSL "${download_uri}/kube-proxy" > /usr/bin/kube-proxy 77 | curl -sSL "${download_uri}/kubelet" > /usr/bin/kubelet 78 | curl -sSL "${download_uri}/kubectl" > /usr/bin/kubectl 79 | 80 | chmod +x /usr/bin/kube* 81 | 82 | # make the needed directories 83 | mkdir -p /var/lib/kubernetes /var/run/kubernetes /var/lib/kubelet /var/lib/kube-proxy 84 | } 85 | 86 | configure() { 87 | # get the hostname 88 | hostname=$(hostname -s) 89 | # get the worker number 90 | worker=$(echo "$hostname" | grep -Eo '[0-9]+$') 91 | pod_cidr="10.200.${worker}.0/24" 92 | 93 | # update the cni bridge conf file 94 | if [[ -f /etc/cni/net.d/10-bridge.conf ]]; then 95 | sed -i "s#POD_CIDR#${pod_cidr}#g" /etc/cni/net.d/10-bridge.conf 96 | fi 97 | 98 | # update the kubelet systemd service file 99 | sed -i "s#POD_CIDR#${pod_cidr}#g" /etc/systemd/system/kubelet.service 100 | sed -i "s/HOSTNAME/${hostname}/g" /etc/systemd/system/kubelet.service 101 | 102 | # update the kube-proxy systemd service file 103 | sed -i "s#POD_CIDR#${pod_cidr}#g" /etc/systemd/system/kube-proxy.service 104 | sed -i "s/HOSTNAME/${hostname}/g" /etc/systemd/system/kube-proxy.service 105 | 106 | systemctl daemon-reload 107 | systemctl enable containerd cri-containerd kubelet kube-proxy 108 | systemctl start containerd cri-containerd kubelet kube-proxy 109 | } 110 | 111 | install_kubernetes_worker(){ 112 | # TODO: remove this when you switch to container 
os on google cloud 113 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 114 | sudo apt-get -y install socat 115 | fi 116 | if [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 117 | sudo apt-get -y install socat 118 | fi 119 | 120 | install_cni 121 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 122 | install_azure_cni 123 | fi 124 | install_cri_containerd 125 | install_kubernetes_components 126 | configure 127 | } 128 | 129 | install_kubernetes_worker 130 | -------------------------------------------------------------------------------- /scripts/provision.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This script provisions controller and worker nodes to run kubernetes. 4 | # 5 | set -e 6 | set -o pipefail 7 | 8 | [[ -n $DEBUG ]] && set -x 9 | 10 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 11 | 12 | [[ -n "${SSH_CONFIG}" ]] && SSH_OPTIONS=("${SSH_OPTIONS[@]}" "-F" "${SSH_CONFIG}") 13 | [[ -n "${SSH_KEYFILE}" ]] && SSH_OPTIONS=("${SSH_OPTIONS[@]}" "-i" "${SSH_KEYFILE}") 14 | 15 | 16 | 17 | # get the controller node public ip address 18 | # this is cloud provider specific 19 | # Google Cloud 20 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 21 | controller_ip=$(gcloud compute instances describe "$CONTROLLER_NODE_NAME" --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') 22 | #controller_ip=$(gcloud compute addresses describe "$PUBLIC_IP_NAME" --region "$REGION" --format 'value(address)') 23 | 24 | # Azure 25 | elif [[ "$CLOUD_PROVIDER" == "azure" ]]; then 26 | controller_ip=$(az network public-ip show -g "$RESOURCE_GROUP" --name "$PUBLIC_IP_NAME" --query 'ipAddress' -o tsv | tr -d '[:space:]') 27 | 28 | # Vagrant 29 | elif [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 30 | controller_ip=controller-node 31 | 32 | # none ????? 33 | else 34 | echo "CLOUD_PROVIDER must be set to one of: google, azure, vagrant" 35 | exit 1 36 | fi 37 | 38 | echo "Provisioning kubernetes cluster for resource group $RESOURCE_GROUP..." 39 | 40 | do_scp(){ 41 | scp "${SSH_OPTIONS[@]}" "$@" 42 | } 43 | 44 | do_ssh(){ 45 | # we want this to expand on client 46 | # shellcheck disable=SC2029 47 | ssh "${SSH_OPTIONS[@]}" "$@" 48 | } 49 | 50 | do_certs(){ 51 | echo "Generating certificates locally with cfssl..." 52 | # shellcheck disable=SC1090 53 | source "${DIR}/generate_certificates.sh" 54 | # Make sure we have cfssl installed first 55 | install_cfssl 56 | generate_certificates 57 | echo "Certificates successfully generated in ${CERTIFICATE_TMP_DIR}!" 58 | 59 | echo "Copying certs to controller node..." 60 | do_scp "${CERTIFICATE_TMP_DIR}/ca.pem" "${CERTIFICATE_TMP_DIR}/ca-key.pem" "${CERTIFICATE_TMP_DIR}/kubernetes.pem" "${CERTIFICATE_TMP_DIR}/kubernetes-key.pem" "${VM_USER}@${controller_ip}":~/ 61 | 62 | echo "Copying certs to worker nodes..."
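# note: the controller already received ca.pem, ca-key.pem and the kubernetes server cert pair above;
# each worker in the loop below only needs ca.pem plus its own worker-node-N key pair,
# scp'd to the instance's public ip (or the vagrant host alias).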
63 | for i in $(seq 0 "$WORKERS"); do 64 | instance="worker-node-${i}" 65 | 66 | # get the external ip for the instance 67 | # this is cloud provider specific 68 | # Google Cloud 69 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 70 | external_ip=$(gcloud compute instances describe "$instance" --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') 71 | fi 72 | # Azure 73 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 74 | external_ip=$(az vm show -g "$RESOURCE_GROUP" -n "$instance" --show-details --query 'publicIps' -o tsv | tr -d '[:space:]') 75 | fi 76 | # Vagrant 77 | if [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 78 | external_ip=${instance} 79 | fi 80 | 81 | # Copy the certificates 82 | do_scp "${CERTIFICATE_TMP_DIR}/ca.pem" "${CERTIFICATE_TMP_DIR}/${instance}-key.pem" "${CERTIFICATE_TMP_DIR}/${instance}.pem" "${VM_USER}@${external_ip}":~/ 83 | done 84 | } 85 | 86 | do_kubeconfigs(){ 87 | echo "Generating kubeconfigs locally with kubectl..." 88 | # shellcheck disable=SC1090 89 | source "${DIR}/generate_configuration_files.sh" 90 | generate_configuration_files 91 | echo "Kubeconfigs successfully generated in ${KUBECONFIG_TMP_DIR}!" 92 | echo "Copying kubeconfigs to worker nodes..." 93 | for i in $(seq 0 "$WORKERS"); do 94 | instance="worker-node-${i}" 95 | 96 | # get the external ip for the instance 97 | # this is cloud provider specific 98 | # Google Cloud 99 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 100 | external_ip=$(gcloud compute instances describe "$instance" --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') 101 | fi 102 | # Azure 103 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 104 | external_ip=$(az vm show -g "$RESOURCE_GROUP" -n "$instance" --show-details --query 'publicIps' -o tsv | tr -d '[:space:]') 105 | fi 106 | # Vagrant 107 | if [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 108 | external_ip=${instance} 109 | fi 110 | 111 | # Copy the kubeconfigs 112 | do_scp "${KUBECONFIG_TMP_DIR}/${instance}.kubeconfig" "${KUBECONFIG_TMP_DIR}/kube-proxy.kubeconfig" "${VM_USER}@${external_ip}":~/ 113 | done 114 | } 115 | 116 | do_encryption_config(){ 117 | echo "Generating encryption config locally..." 118 | # shellcheck disable=SC1090 119 | source "${DIR}/generate_encryption_config.sh" 120 | generate_encryption_config 121 | echo "Encryption config successfully generated at ${ENCRYPTION_CONFIG}!" 122 | 123 | echo "Copying encryption config to controller node..." 124 | do_scp "$ENCRYPTION_CONFIG" "${VM_USER}@${controller_ip}":~/ 125 | } 126 | 127 | do_etcd(){ 128 | echo "Moving certificates to correct location for etcd on controller node..." 129 | do_ssh "${VM_USER}@${controller_ip}" sudo mkdir -p /etc/etcd/ 130 | do_ssh "${VM_USER}@${controller_ip}" sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/ 131 | 132 | echo "Copying etcd.service to controller node..." 133 | do_scp "${DIR}/../etc/systemd/system/etcd.service" "${VM_USER}@${controller_ip}":~/ 134 | do_ssh "${VM_USER}@${controller_ip}" sudo mkdir -p /etc/systemd/system/ 135 | do_ssh "${VM_USER}@${controller_ip}" sudo mv etcd.service /etc/systemd/system/ 136 | 137 | echo "Copying etcd install script to controller node..." 138 | do_scp "${DIR}/install_etcd.sh" "${VM_USER}@${controller_ip}":~/ 139 | 140 | echo "Running install_etcd.sh on controller node..."
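# note: install_etcd.sh runs as root on the controller: it downloads the etcd release,
# templates INTERNAL_IP and ETCD_NAME into the etcd.service unit copied above, and starts
# the service; the while loop just below then polls `etcdctl member list` until etcd answers.
# if that loop never exits, a manual check (reusing the same ssh options) would be something like:
#   do_ssh "${VM_USER}@${controller_ip}" sudo systemctl status etcd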
141 | do_ssh "${VM_USER}@${controller_ip}" sudo ./install_etcd.sh 142 | 143 | # cleanup the script after install 144 | do_ssh "${VM_USER}@${controller_ip}" rm install_etcd.sh 145 | 146 | # TODO: make this less shitty and not a sleep 147 | # sanity check for etcd 148 | while ! do_ssh "${VM_USER}@${controller_ip}" ETCDCTL_API=3 etcdctl member list; do 149 | sleep 5 150 | done 151 | } 152 | 153 | do_k8s_controller(){ 154 | echo "Moving certificates to correct location for k8s on controller node..." 155 | do_ssh "${VM_USER}@${controller_ip}" sudo mkdir -p /var/lib/kubernetes/ 156 | do_ssh "${VM_USER}@${controller_ip}" sudo mv ca.pem kubernetes-key.pem kubernetes.pem ca-key.pem encryption-config.yaml /var/lib/kubernetes/ 157 | 158 | do_ssh "${VM_USER}@${controller_ip}" sudo mkdir -p /etc/systemd/system/ 159 | services=( kube-apiserver.service kube-scheduler.service kube-controller-manager.service ) 160 | for service in "${services[@]}"; do 161 | echo "Copying $service to controller node..." 162 | do_scp "${DIR}/../etc/systemd/system/${service}" "${VM_USER}@${controller_ip}":~/ 163 | do_ssh "${VM_USER}@${controller_ip}" sudo mv "$service" /etc/systemd/system/ 164 | done 165 | 166 | echo "Copying k8s controller install script to controller node..." 167 | do_scp "${DIR}/install_kubernetes_controller.sh" "${VM_USER}@${controller_ip}":~/ 168 | 169 | echo "Running install_kubernetes_controller.sh on controller node..." 170 | do_ssh "${VM_USER}@${controller_ip}" sudo ./install_kubernetes_controller.sh 171 | 172 | # cleanup the script after install 173 | do_ssh "${VM_USER}@${controller_ip}" rm install_kubernetes_controller.sh 174 | 175 | echo "Copying k8s rbac configs to controller node..." 176 | do_scp "${DIR}/../etc/cluster-role-"*.yaml "${VM_USER}@${controller_ip}":~/ 177 | 178 | echo "Copying k8s pod configs to controller node..." 179 | do_scp "${DIR}/../etc/pod-"*.yaml "${VM_USER}@${controller_ip}":~/ 180 | 181 | echo "Copying k8s kube-dns config to controller node..." 182 | do_scp "${DIR}/../etc/kube-dns.yaml" "${VM_USER}@${controller_ip}":~/ 183 | 184 | # get the internal ip for the instance 185 | # this is cloud provider specific 186 | # Google Cloud 187 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 188 | internal_ip=$(gcloud compute instances describe "$CONTROLLER_NODE_NAME" --format 'value(networkInterfaces[0].networkIP)') 189 | fi 190 | # Azure 191 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 192 | internal_ip=$(az vm show -g "$RESOURCE_GROUP" -n "$CONTROLLER_NODE_NAME" --show-details --query 'privateIps' -o tsv | tr -d '[:space:]') 193 | fi 194 | # Vagrant 195 | if [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 196 | internal_ip=172.17.8.100 197 | fi 198 | 199 | # configure cilium to use etcd tls 200 | tmpd=$(mktemp -d) 201 | ciliumconfig="${tmpd}/cilium.yaml" 202 | sed "s#ETCD_CA#$(base64 -w 0 "${CERTIFICATE_TMP_DIR}/ca.pem")#" "${DIR}/../etc/cilium.yaml" > "$ciliumconfig" 203 | sed -i "s#ETCD_CLIENT_KEY#$(base64 -w 0 "${CERTIFICATE_TMP_DIR}/kubernetes-key.pem")#" "$ciliumconfig" 204 | sed -i "s#ETCD_CLIENT_CERT#$(base64 -w 0 "${CERTIFICATE_TMP_DIR}/kubernetes.pem")#" "$ciliumconfig" 205 | sed -i "s#INTERNAL_IP#${internal_ip}#" "$ciliumconfig" 206 | 207 | echo "Copying k8s cilium config to controller node..." 208 | do_scp "$ciliumconfig" "${VM_USER}@${controller_ip}":~/ 209 | 210 | # cleanup 211 | rm -rf "$tmpd" 212 | 213 | # wait for kube-apiserver service to come up 214 | # TODO: make this not a shitty sleep you goddamn savage 215 | echo "Waiting for kube-apiserver" 216 | while !
do_ssh "${VM_USER}@${controller_ip}" kubectl get componentstatuses > /dev/null; do 217 | echo -n "." 218 | sleep 10 219 | done 220 | echo "." 221 | 222 | # get the component statuses for sanity 223 | do_ssh "${VM_USER}@${controller_ip}" kubectl get componentstatuses 224 | 225 | # create the pod permissive security policy 226 | # Sometimes the api server responds to basic stuff, but needs for time for applies 227 | # giving an error like: 228 | # error: unable to recognize "pod-security-policy-permissive.yaml": no matches for extensions/, Kind=PodSecurityPolicy 229 | # until we find a good test for it ... just keep trying... 230 | while ! do_ssh "${VM_USER}@${controller_ip}" kubectl apply -f pod-security-policy-permissive.yaml; do 231 | sleep 10 232 | done 233 | do_ssh "${VM_USER}@${controller_ip}" kubectl apply -f pod-security-policy-restricted.yaml 234 | 235 | # create the rbac cluster roles 236 | do_ssh "${VM_USER}@${controller_ip}" kubectl apply -f cluster-role-kube-apiserver-to-kubelet.yaml 237 | do_ssh "${VM_USER}@${controller_ip}" kubectl apply -f cluster-role-binding-kube-apiserver-to-kubelet.yaml 238 | do_ssh "${VM_USER}@${controller_ip}" kubectl apply -f cluster-role-restricted.yaml 239 | do_ssh "${VM_USER}@${controller_ip}" kubectl apply -f cluster-role-binding-restricted.yaml 240 | 241 | # create kube-dns 242 | do_ssh "${VM_USER}@${controller_ip}" kubectl apply -f kube-dns.yaml 243 | 244 | # create cilium 245 | do_ssh "${VM_USER}@${controller_ip}" kubectl apply -f cilium.yaml 246 | } 247 | 248 | do_k8s_worker(){ 249 | for i in $(seq 0 "$WORKERS"); do 250 | instance="worker-node-${i}" 251 | 252 | # get the external ip for the instance 253 | # this is cloud provider specific 254 | # Google Cloud 255 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 256 | external_ip=$(gcloud compute instances describe "$instance" --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') 257 | fi 258 | # Azure 259 | if [[ "$CLOUD_PROVIDER" == "azure" ]]; then 260 | external_ip=$(az vm show -g "$RESOURCE_GROUP" -n "$instance" --show-details --query 'publicIps' -o tsv | tr -d '[:space:]') 261 | fi 262 | # Vagrant 263 | if [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 264 | external_ip=${instance} 265 | fi 266 | 267 | echo "Moving certficates to correct location for k8s on ${instance}..." 268 | do_ssh "${VM_USER}@${external_ip}" sudo mkdir -p /var/lib/kubelet/ 269 | do_ssh "${VM_USER}@${external_ip}" sudo mv "${instance}-key.pem" "${instance}.pem" /var/lib/kubelet/ 270 | do_ssh "${VM_USER}@${external_ip}" sudo mkdir -p /var/lib/kubernetes/ 271 | do_ssh "${VM_USER}@${external_ip}" sudo mv ca.pem /var/lib/kubernetes/ 272 | do_ssh "${VM_USER}@${external_ip}" sudo mv "${instance}.kubeconfig" /var/lib/kubelet/kubeconfig 273 | do_ssh "${VM_USER}@${external_ip}" sudo mkdir -p /var/lib/kube-proxy/ 274 | do_ssh "${VM_USER}@${external_ip}" sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig 275 | 276 | do_scp "${DIR}/../etc/cni/net.d/"*.conf "${VM_USER}@${external_ip}":~/ 277 | 278 | echo "Moving cni configs to correct location for k8s on ${instance}..." 279 | do_ssh "${VM_USER}@${external_ip}" sudo mkdir -p /etc/cni/net.d/ 280 | do_ssh "${VM_USER}@${external_ip}" sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/ 281 | 282 | echo "Copying k8s worker install script to ${instance}..." 
283 | do_scp "${DIR}/install_kubernetes_worker.sh" "${VM_USER}@${external_ip}":~/ 284 | 285 | do_ssh "${VM_USER}@${external_ip}" sudo mkdir -p /etc/systemd/system/ 286 | services=( kubelet.service kube-proxy.service ) 287 | for service in "${services[@]}"; do 288 | echo "Copying $service to ${instance}..." 289 | do_scp "${DIR}/../etc/systemd/system/${service}" "${VM_USER}@${external_ip}":~/ 290 | do_ssh "${VM_USER}@${external_ip}" sudo mv "$service" /etc/systemd/system/ 291 | done 292 | 293 | echo "Running install_kubernetes_worker.sh on ${instance}..." 294 | do_ssh "${VM_USER}@${external_ip}" CLOUD_PROVIDER="${CLOUD_PROVIDER}" sudo -E bash -c './install_kubernetes_worker.sh' 295 | 296 | # cleanup the script after install 297 | do_ssh "${VM_USER}@${external_ip}" rm install_kubernetes_worker.sh 298 | done 299 | } 300 | 301 | do_end_checks(){ 302 | if [[ "$CLOUD_PROVIDER" == "google" ]]; then 303 | controller_ip=$(gcloud compute addresses describe "$PUBLIC_IP_NAME" --region "$REGION" --format 'value(address)') 304 | fi 305 | if [[ "$CLOUD_PROVIDER" == "vagrant" ]]; then 306 | controller_ip=172.17.8.100 307 | fi 308 | 309 | # check that we can reach the kube-apiserver externally 310 | echo "Testing a curl to the apiserver..." 311 | curl --cacert "${CERTIFICATE_TMP_DIR}/ca.pem" "https://${controller_ip}:6443/version" 312 | echo "" 313 | } 314 | 315 | do_local_kubeconfig(){ 316 | # setup local kubectl 317 | kubectl config set-cluster "$RESOURCE_GROUP" \ 318 | --certificate-authority="${CERTIFICATE_TMP_DIR}/ca.pem" \ 319 | --embed-certs=true \ 320 | --server="https://${controller_ip}:6443" 321 | 322 | kubectl config set-credentials admin \ 323 | --client-certificate="${CERTIFICATE_TMP_DIR}/admin.pem" \ 324 | --client-key="${CERTIFICATE_TMP_DIR}/admin-key.pem" \ 325 | --embed-certs=true 326 | 327 | kubectl config set-context "$RESOURCE_GROUP" \ 328 | --cluster="$RESOURCE_GROUP" \ 329 | --user=admin 330 | 331 | kubectl config use-context "$RESOURCE_GROUP" 332 | 333 | echo "Checking get nodes..." 334 | kubectl get nodes 335 | } 336 | 337 | cleanup(){ 338 | # clean up all our temporary files 339 | rm -rf "$CERTIFICATE_TMP_DIR" "$KUBECONFIG_TMP_DIR" "$ENCRYPTION_CONFIG" 340 | } 341 | 342 | do_certs 343 | do_kubeconfigs 344 | do_encryption_config 345 | do_etcd 346 | do_k8s_controller 347 | do_k8s_worker 348 | do_end_checks 349 | do_local_kubeconfig 350 | cleanup 351 | -------------------------------------------------------------------------------- /test.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | set -o pipefail 4 | 5 | ERRORS=() 6 | 7 | # find all executables and run `shellcheck` 8 | for f in $(find . -type f -not -iwholename '*.git*' | sort -u); do 9 | if file "$f" | grep --quiet shell; then 10 | { 11 | shellcheck "$f" && echo "[OK]: successfully linted $f" 12 | } || { 13 | # add to errors 14 | ERRORS+=("$f") 15 | } 16 | fi 17 | done 18 | 19 | if [ ${#ERRORS[@]} -eq 0 ]; then 20 | echo "No errors, hooray" 21 | else 22 | echo "These files failed shellcheck: ${ERRORS[*]}" 23 | exit 1 24 | fi 25 | -------------------------------------------------------------------------------- /vagrant/_configure.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This script is run by vagrant on startup. Do not run it. 
4 | # 5 | set -e 6 | set -o pipefail 7 | 8 | if grep vagrant /home/vagrant/.ssh/authorized_keys > /dev/null; then 9 | echo "Disabling swap" 10 | swapoff -a 11 | echo "Setting noop scheduler" 12 | echo noop > /sys/block/sda/queue/scheduler 13 | echo "Disabling IPv6" 14 | echo "net.ipv6.conf.all.disable_ipv6 = 1 15 | net.ipv6.conf.default.disable_ipv6 = 1 16 | net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf 17 | sysctl -p 18 | fi 19 | -------------------------------------------------------------------------------- /vagrant/setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This script brings up a cluster of Vagrant VMs running Ubuntu 4 | # and provisions a kubernetes cluster on top of them. 5 | # 6 | # The script assumes you already have Vagrant and VirtualBox installed. 7 | # 8 | set -e 9 | set -o pipefail 10 | 11 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 12 | SCRIPT_DIR="${DIR}/../scripts" 13 | 14 | export CLOUD_PROVIDER="vagrant" 15 | export VM_USER="vagrant" 16 | export WORKERS=${WORKERS:-0} 17 | 18 | export SSH_CONFIG="${DIR}/../.vagrant/ssh_config" 19 | export SSH_KEYFILE="${HOME}/.vagrant.d/insecure_private_key" 20 | export RESOURCE_GROUP=${RESOURCE_GROUP:-kubernetes-clear-linux-snowflake} 21 | 22 | if [[ $1 == "clean" ]]; then 23 | vagrant destroy -f 24 | else 25 | vagrant up 26 | vagrant ssh-config > "${SSH_CONFIG}" 27 | "${SCRIPT_DIR}/provision.sh" 28 | fi 29 | --------------------------------------------------------------------------------
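A minimal usage sketch for the Vagrant flow above, assuming Vagrant and VirtualBox are already installed and that the Vagrantfile honors the same WORKERS variable the scripts use (note WORKERS is the highest worker index rather than a count, since the scripts loop over `seq 0 "$WORKERS"`):

    ./vagrant/setup.sh              # vagrant up, dump the ssh config, then run scripts/provision.sh
    WORKERS=2 ./vagrant/setup.sh    # same, but with worker-node-0 through worker-node-2
    ./vagrant/setup.sh clean        # vagrant destroy -f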