├── .gitignore
├── README.md
├── Vagrantfile
├── ansible
│   ├── k8s-master.yml
│   ├── k8s-worker.yml
│   └── roles
│       ├── k8s-base
│       │   ├── files
│       │   │   ├── clean-k8s
│       │   │   ├── config
│       │   │   ├── id_rsa
│       │   │   └── id_rsa.pub
│       │   └── tasks
│       │       └── main.yml
│       ├── k8s-master
│       │   ├── files
│       │   │   ├── start-calico
│       │   │   ├── start-canal
│       │   │   └── start-weave
│       │   └── tasks
│       │       └── main.yml
│       └── k8s-worker
│           └── tasks
│               └── main.yml
├── deployment.yml
├── scripts
│   ├── bootstrap_ansible_centos.sh
│   └── bootstrap_ansible_ubuntu.sh
└── service.yml

/.gitignore:
--------------------------------------------------------------------------------
.vagrant
*.retry
*.log
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Kubernetes Playground
This project contains a `Vagrantfile` and associated `Ansible` playbook scripts
to provision a 3-node Kubernetes cluster using `VirtualBox` and `Ubuntu 16.04`.

### Prerequisites
You need the following installed to use this playground.
- `Vagrant`, version 1.8.6 or better. Earlier versions of Vagrant may not work
  with the Vagrant Ubuntu 16.04 box and network configuration.
- `VirtualBox`, tested with Version 5.0.26 r108824
- Internet access; this playground pulls Vagrant boxes from the Internet and
  installs Ubuntu application packages from the Internet.

### Bringing Up The Cluster
To bring up the cluster, clone this repository to a working directory.

```
git clone http://github.com/davidkbainbridge/k8s-playground
```

Change into the working directory and `vagrant up`

```
cd k8s-playground
vagrant up
```

Vagrant will start three machines. Each machine will have a NAT-ed network
interface, through which it can access the Internet, and a `private-network`
interface in the subnet 172.42.42.0/24. The private network is used for
intra-cluster communication.

The machines created are:

| NAME | IP ADDRESS | ROLE |
| --- | --- | --- |
| k8s1 | 172.42.42.11 | Cluster Master |
| k8s2 | 172.42.42.12 | Cluster Worker |
| k8s3 | 172.42.42.13 | Cluster Worker |

As the cluster is brought up, the cluster master (**k8s1**) will perform a
`kubeadm init` and the cluster workers will perform a `kubeadm join`. This
cluster uses a static Kubernetes cluster token.
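
For reference, the exact `kubeadm` commands run during provisioning come from
the Ansible roles in this repository (`ansible/roles/k8s-master/tasks/main.yml`
and `ansible/roles/k8s-worker/tasks/main.yml`); the static token shown is the
one defined in those files:

```
# run on k8s1 by the k8s-master role
kubeadm init --token=2f1a31.00f66dec74fd53f3 --api-advertise-addresses=172.42.42.11 --skip-preflight-checks=true

# run on k8s2 and k8s3 by the k8s-worker role
kubeadm join --token=2f1a31.00f66dec74fd53f3 --skip-preflight-checks=true 172.42.42.11
```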

After the `vagrant up` is complete, the following command and output should be
visible on the cluster master (**k8s1**).

```
vagrant ssh k8s1
kubectl -n kube-system get po -o wide

NAME                             READY     STATUS              RESTARTS   AGE       IP             NODE
etcd-k8s1                        1/1       Running             0          10m       172.42.42.11   k8s1
kube-apiserver-k8s1              1/1       Running             1          10m       172.42.42.11   k8s1
kube-controller-manager-k8s1     1/1       Running             0          11m       172.42.42.11   k8s1
kube-discovery-982812725-pv5ib   1/1       Running             0          11m       172.42.42.11   k8s1
kube-dns-2247936740-cucu9        0/3       ContainerCreating   0          10m                      k8s1
kube-proxy-amd64-kt8d6           1/1       Running             0          10m       172.42.42.11   k8s1
kube-proxy-amd64-o73p7           1/1       Running             0          5m        172.42.42.13   k8s3
kube-proxy-amd64-piie9           1/1       Running             0          8m        172.42.42.12   k8s2
kube-scheduler-k8s1              1/1       Running             0          11m       172.42.42.11   k8s1
```

### Starting Networking
Starting the cluster networking is **NOT** automated and must be completed
after the `vagrant up` is complete. A script to start the networking is
installed on the cluster master (**k8s1**) as `/usr/local/bin/start-weave`.
(Scripts for the Calico and Canal network providers, `start-calico` and
`start-canal`, are installed alongside it.)

```
vagrant ssh k8s1
sudo start-weave

daemonset "weave-net" created
```

After the network is started, assuming `weave-net` is used, the following
command and output should be visible on the master node (**k8s1**):

```
vagrant ssh k8s1
$ kubectl -n kube-system get po -o wide
NAME                             READY     STATUS    RESTARTS   AGE       IP             NODE
etcd-k8s1                        1/1       Running   0          14m       172.42.42.11   k8s1
kube-apiserver-k8s1              1/1       Running   1          13m       172.42.42.11   k8s1
kube-controller-manager-k8s1     1/1       Running   0          14m       172.42.42.11   k8s1
kube-discovery-982812725-pv5ib   1/1       Running   0          14m       172.42.42.11   k8s1
kube-dns-2247936740-cucu9        3/3       Running   0          14m       10.40.0.1      k8s1
kube-proxy-amd64-kt8d6           1/1       Running   0          13m       172.42.42.11   k8s1
kube-proxy-amd64-o73p7           1/1       Running   0          8m        172.42.42.13   k8s3
kube-proxy-amd64-piie9           1/1       Running   0          11m       172.42.42.12   k8s2
kube-scheduler-k8s1              1/1       Running   0          14m       172.42.42.11   k8s1
weave-net-33rjx                  2/2       Running   0          3m        172.42.42.12   k8s2
weave-net-3z7jj                  2/2       Running   0          3m        172.42.42.11   k8s1
weave-net-uvv48                  2/2       Running   0          3m        172.42.42.13   k8s3
```

### Starting A Sample Service / Deployment
Included in the *git* repository are sample *service* and *deployment*
specifications that work with Kubernetes. These can be found on the master node
(**k8s1**) as `/vagrant/service.yml` and `/vagrant/deployment.yml`.

These descriptors will create a *hello-service* sample service using a simple
Docker image, `davidkbainbridge/docker-hello-world`. This image is a simple
HTTP service that outputs the hostname and the IP address information of the
instance on which the request was processed. An example output is:

```
Hello, "/"
HOST: hello-deployment-2911225940-qhfn2
ADDRESSES:
    127.0.0.1/8
    10.40.0.5/12
    ::1/128
    fe80::dcc9:4ff:fe5c:f793/64
```

To start the *service* and *deployment* you can issue the following command
on the master node (**k8s1**):

```
kubectl create -f /vagrant/service.yml -f /vagrant/deployment.yml
```

After issuing the `create` command you should be able to see the *service* and
*deployment* using the following commands.

```
ubuntu@k8s1:~$ kubectl get service
NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
hello-service   100.76.247.60                 80/TCP    6s
kubernetes      100.64.0.1                    443/TCP   36m
```

```
ubuntu@k8s1:~$ kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-deployment   3         3         3            0           12s
```
*After the sample service container is pulled from Docker Hub and started, the
available count should go to the value `3`.*

### Accessing the Service
The IP address for the service can be seen via the `kubectl get service`
command, but it can also be retrieved from the Kubernetes DNS server. Below is
an example `dig` command to get the IP address of the service.

```
ubuntu@k8s1:~$ dig @100.64.0.10 +short hello-service.default.svc.cluster.local
100.76.247.60
```

The IP address returned may be different in your environment.
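
The DNS server address used with `dig` above (`100.64.0.10`) is the cluster IP
of the `kube-dns` service. If it differs in your environment, the sketch below
shows how to look it up; the `jsonpath` output option is assumed to be
available in the `kubectl` version installed by this playground:

```
# cluster IP of the DNS service (used as the @server argument to dig)
kubectl -n kube-system get svc kube-dns

# the service address can also be read straight from the API, without DNS
kubectl get svc hello-service -o jsonpath='{.spec.clusterIP}'
```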

To test the service you can use the following command on any node in the
cluster:

```
ubuntu@k8s1:~$ curl -sSL http://$(dig @100.64.0.10 +short hello-service.default.svc.cluster.local)
Hello, "/"
HOST: hello-deployment-2911225940-b3tyn
ADDRESSES:
    127.0.0.1/8
    10.32.0.2/12
    ::1/128
    fe80::e89f:bfff:fec2:b67a/64
```

### Scaling the Service
To test the scaling of the service, you can open a second terminal and `ssh`
to a node in the cluster (e.g. `vagrant ssh k8s1`). In this terminal, if you
issue the following command it will periodically issue a `curl` request to
the service and display the output, highlighting the differences from the
previous request. This demonstrates that requests are being handled by
different instances of the service.

```
watch -d curl -sSL http://$(dig @100.64.0.10 +short hello-service.default.svc.cluster.local)
```

Currently there should be 3 instances of the service implementation being
used. To scale to a single instance, issue the following command:

```
ubuntu@k8s1:~$ kubectl scale deployment hello-deployment --replicas=1
deployment "hello-deployment" scaled
```

After scaling to a single instance, the `watch` command from above should show
no differences between successive requests, as all requests are being handled
by the same instance.

The following command scales the number of instances to 5; after issuing it,
differences should again be highlighted in the `watch` output.

```
ubuntu@k8s1:~$ kubectl scale deployment hello-deployment --replicas=5
deployment "hello-deployment" scaled
```

### Service Health Check
The test container image used above, `davidkbainbridge/docker-hello-world:latest`,
is built with a health check capability. The container provides a REST endpoint
that returns `200 OK` by default, but this can be manually set to a different
value to test error cases. See the container documentation at
https://github.com/davidkbainbridge/docker-hello-world for more information.

To see the health of any given instance of the service implementation, you can
`ssh` to k8s1 and perform a `kubectl get po -o wide`. This will show the
pods augmented with the number of restarts.

```
ubuntu@k8s1:~$ kubectl get po -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
hello-deployment-3696513547-fhh2y   1/1       Running   0          12s       10.40.0.1   k8s2
hello-deployment-3696513547-ocgas   1/1       Running   0          12s       10.38.0.2   k8s3
hello-deployment-3696513547-y257u   1/1       Running   0          12s       10.38.0.1   k8s3
```

To demonstrate the health check capability of the cluster, you can open an
`ssh` session to k8s1 and run `watch -d kubectl get po -o wide`. This command
will periodically update the screen with information about the pods, including
the number of restarts.

To cause one of the container instances to start reporting a failed health
value, you can set a random instance to fail using

```
curl -XPOST -sSL http://$(dig @100.64.0.10 +short \
    hello-service.default.svc.cluster.local)/health -d '{"status":501}'
```

This will set the health check on a random instance in the cluster to return
"501 Internal Server Error". If you want to fail the health check on a specific
instance, you will need to make a similar `curl` request to that specific
container instance.
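
For example, the commands below (a sketch; the pod IP is illustrative and
should be taken from your own `kubectl get po -o wide` output) fail the health
check of one particular pod. Note that when addressing a pod directly you use
the container port `8080` from `deployment.yml`, not the service port `80`:

```
ubuntu@k8s1:~$ kubectl get po -o wide            # pick the IP of the pod to fail
ubuntu@k8s1:~$ curl -XPOST -sSL http://10.40.0.1:8080/health -d '{"status":501}'
```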

After setting the health check to return a failure value, monitor the
`kubectl get po -o wide` command. After about 30 seconds one of the pod
restart counts should be incremented. This represents Kubernetes killing and
restarting a pod because of a failed health check.

*NOTE: the frequency of health checks is configurable*

### Clean Up
A utility script is installed on each Vagrant machine as `/usr/local/bin/clean-k8s`.
Executing this script with `sudo` will reset the servers back to a point where
you can re-run the Vagrant provisioning.
--------------------------------------------------------------------------------
/Vagrantfile:
--------------------------------------------------------------------------------
# -*- mode: ruby -*-
# # vi: set ft=ruby :

boxes = {
  ubuntu: "ubuntu/xenial64",
  centos: "centos/7",
}

distro = :centos # :ubuntu

Vagrant.configure(2) do |config|

  (1..3).each do |i|
    config.vm.define "k8s#{i}" do |s|
      s.ssh.forward_agent = true
      s.vm.box = boxes[distro]
      s.vm.hostname = "k8s#{i}"
      s.vm.provision :shell, path: "scripts/bootstrap_ansible_#{distro.to_s}.sh"
      if i == 1
        s.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /vagrant/ansible/k8s-master.yml -c local"
      else
        s.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /vagrant/ansible/k8s-worker.yml -c local"
      end
      n = 10 + i
      s.vm.network "private_network", ip: "172.42.42.#{n}", netmask: "255.255.255.0",
        auto_config: true,
        virtualbox__intnet: "k8s-net"
      s.vm.provider "virtualbox" do |v|
        v.name = "k8s#{i}"
        v.memory = 2048
        v.gui = false
      end
    end
  end

  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end

end
--------------------------------------------------------------------------------
/ansible/k8s-master.yml:
--------------------------------------------------------------------------------
- hosts: localhost
  remote_user: vagrant
  serial: 1
  roles:
    - k8s-base
    - k8s-master
--------------------------------------------------------------------------------
/ansible/k8s-worker.yml:
--------------------------------------------------------------------------------
- hosts: localhost
  remote_user: vagrant
  serial: 1
  roles:
    - k8s-base
    - k8s-worker
--------------------------------------------------------------------------------
/ansible/roles/k8s-base/files/clean-k8s:
--------------------------------------------------------------------------------
#!/bin/bash

systemctl stop kubelet;
docker rm -f $(docker ps -q); mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null;
rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni /etc/kubernetes;
mkdir -p /etc/kubernetes
ip link set cbr0 down; ip link del cbr0;
ip link set cni0 down; ip link del cni0;
systemctl start kubelet
--------------------------------------------------------------------------------
/ansible/roles/k8s-base/files/config:
--------------------------------------------------------------------------------
Host *
  StrictHostKeyChecking=no
  UserKnownHostsFile=/dev/null
--------------------------------------------------------------------------------
/ansible/roles/k8s-base/files/id_rsa:
--------------------------------------------------------------------------------
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA5BK11+nQj/0PYOwhwd3AmKnEi1myDgfMHbqrx5uDhSHrhlqt
h8TyPTHuMD0eTPm++RY3BLN0RP69yE9yRbWCmuLGKlwdGoC+xAxr0DUZNRnwWhnR
aprrEyT8OD5x1FfuTs/G3nrLq9BVfD61JLqIdiKb2aU9grOXbg8r1UAy1j89Gi43
vr1VJhLwVK4VuTe7ivIEqMhqg03tH3XO2oEiBMj4Ha4ITECE2wUpQygblh2Y817X
DySVsvNlcejA4qoEoDatD+h4x2B09Ac8g11BW0SIlXnJ4GXKL4EGbH9eSWimcMox
ASD0FzHnwUsaJ4/UKpurHodqOVo3+2XjX7VLDwIDAQABAoIBAGqa2EaQ8ryq84pB
NVIxvbld+RGNnm1ydZUb0PlfFm2fOkC1l9ETXIsAEK6ZktU2E27IVHUtEFbDn5/G
ispMmjydbTUVk0D1FrX6fFZ4y0yH0FG9KaajvOdY7U+42GoBo9FQy0roqNSpb5vA
j9kYG3rkmGZ2FzdFjK2UB9AIzvpW+Ou41Ww6r73XNzQmkhJNlyz5a7kc66pFk4B2
YDxBOY5LYxfmCx3fjqlFERNm2qpCGiV3pmtBkMKQ2mBOAh2/MVToWNinqEoOlGWI
zkwZvi1UJoJr0U2CfIQQynp6LaTWgl0vEs2mcVKu1KsGCOCiMKLwnHZar+AMNL4I
AM4Ca1kCgYEA8d2OSDoq2Jsrgq1GshPdKJvB6K8nGyGpyMp6zYjKDxne9jb0ruo8
jLJNmSBRMcnx/1TleddKHXTa7r7YiywUaZxfxQw89lMn2XSRI6aEYX5tk0m44RbR
TBRX8ni4DBtmQOudTLGCDCTBJSuwhxzd1Q8z9JQeZIxgiJfMKLBuhjsCgYEA8WbQ
HM3NvwjragyKy8ZzSjtoOqQOslwwvfM0YOSYC8J7wBIe64W+kYDLMvfkuhHcfURv
/RA1OgHC4U7zm9AP84xYbNcqKzNwUdrkwOoVlfPm+ttdqfTt+IXvb2aSvwEjrCXV
xSzWOsfTq63XcpYBhPwwjbEipV0MXWE802m1/T0CgYBhAGaL+Sgt7y2oHy53RRgx
rSY71+NrMjkR2oMd43qGS+3r+WZwsGjQVMJiY1+tBD0WFkpib0G+Rpt3nPrj9i3J
nXmbYakhcYBN6j47ehEluLrhk3OecrRGOvJ6wIev810zNEvF8nshu6vq6HbH+X/O
b2Z69NyrntEodxjeSMRK+QKBgHWsw6g23qPQKkng4UvialL2UKG9VXi2ngAKbS8K
X9/jp0WCz9XJtZLiMKug0buud0gNM3YuD3Q+ZYxFW1VKAGydroEoBeNXSNpuFPLB
aVJWufLxOmBeCB8M0yH/42r+mDATpXhfmfK/dDyNGqg93XHBKb34akYn7J4ch3Ub
Y96VAoGAWEgqinKkwKmjFKetgKGI46SoF84VeytStwCJ7WHB/rYXiO/XUQba4cR4
/BugVlIBI7vb7m82klK45g12xtsP/DQiwv3ZBSuTqwg9EAQfQhGRgYnwRhFUOK5k
R0DdvF8etSYPkU9wS1gtiZmUHGsyNhPtgr1/yROzvTYx3vbrG9Q=
-----END RSA PRIVATE KEY-----
--------------------------------------------------------------------------------
/ansible/roles/k8s-base/files/id_rsa.pub:
--------------------------------------------------------------------------------
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDkErXX6dCP/Q9g7CHB3cCYqcSLWbIOB8wduqvHm4OFIeuGWq2HxPI9Me4wPR5M+b75FjcEs3RE/r3IT3JFtYKa4sYqXB0agL7EDGvQNRk1GfBaGdFqmusTJPw4PnHUV+5Oz8beesur0FV8PrUkuoh2IpvZpT2Cs5duDyvVQDLWPz0aLje+vVUmEvBUrhW5N7uK8gSoyGqDTe0fdc7agSIEyPgdrghMQITbBSlDKBuWHZjzXtcPJJWy82Vx6MDiqgSgNq0P6HjHYHT0BzyDXUFbRIiVecngZcovgQZsf15JaKZwyjEBIPQXMefBSxonj9Qqm6seh2o5Wjf7ZeNftUsP ubuntu
--------------------------------------------------------------------------------
/ansible/roles/k8s-base/tasks/main.yml:
--------------------------------------------------------------------------------
---
- name: Ensure SSH Directories
  when: ansible_distribution == 'Ubuntu'
  become: yes
  file:
    path: /home/ubuntu/.ssh
    state: directory
    owner: ubuntu
    group: ubuntu
    mode: 0700

- name: Copy SSH Key Files
  when: ansible_distribution == 'Ubuntu'
  become: yes
  copy:
    src: files/{{ item }}
    dest: /home/ubuntu/.ssh/{{ item }}
    owner: ubuntu
    group: ubuntu
    mode: 0600
  with_items:
    - id_rsa
    - id_rsa.pub
    - config

- name: Ensure Authorized SSH Key
  when: ansible_distribution == 'Ubuntu'
  become: yes
  authorized_key:
    user: ubuntu
    key: "{{ lookup('file', '/home/ubuntu/.ssh/id_rsa.pub') }}"
    state: present

- name: Remove Default Host Entry
  become: yes
  lineinfile:
    dest: /etc/hosts
    regexp: '^127\.0\.0\.1\s+k8s.*$'
    state: absent

- name: Ensure Hosts File
  become: yes
  lineinfile:
    dest: /etc/hosts
    line: "{{ item.ip }} {{ item.name }}"
  with_items:
    - { ip: "172.42.42.11", name: "k8s1" }
    - { ip: "172.42.42.12", name: "k8s2" }
    - { ip: "172.42.42.13", name: "k8s3" }

- name: Ensure Kubernetes APT Key
  when: ansible_distribution == 'Ubuntu'
  become: yes
  apt_key:
    url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
    state: present
  tags:
    - k8s

- name: Ensure Kubernetes APT Repository
  when: ansible_distribution == 'Ubuntu'
  become: yes
  apt_repository:
    repo: 'deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main'
    state: present
    update_cache: yes
  tags:
    - k8s

- name: Ensure Base Kubernetes
  when: ansible_distribution == 'Ubuntu'
  become: yes
  apt:
    name: "{{ item }}"
    state: latest
  with_items:
    - docker.io
    - kubelet
    - kubeadm
    - kubectl
    - kubernetes-cni
  tags:
    - k8s

- name: Ensure Kubernetes YUM repository
  when: ansible_distribution == 'CentOS'
  become: yes
  yum_repository:
    name: Kubernetes
    description: Kubernetes YUM repository
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el$releasever-$basearch-unstable
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg
            https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    gpgcheck: yes
    state: present
  tags:
    - k8s

# Equivalent of `chcon -Rt svirt_sandbox_file_t /var/lib/kubelet`
- file: path=/var/lib/kubelet state=directory setype=svirt_sandbox_file_t
  when: ansible_distribution == 'CentOS'
  become: yes

- name: Ensure Base Kubernetes
  when: ansible_distribution == 'CentOS'
  become: yes
  yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - docker
    - kubelet
    - kubeadm
    - kubectl
    - kubernetes-cni
  tags:
    - k8s

- name: Ensure docker.service
  service: name=docker state=started enabled=yes
  tags:
    - k8s

- name: Ensure kubelet.service
  service: name=kubelet state=started enabled=yes
  tags:
    - k8s

- name: Ensure firewalld.service
  when: ansible_distribution == 'CentOS'
  service: name=firewalld state=started enabled=yes
  tags:
    - k8s


# Equivalent of `firewall-cmd --permanent --zone=trusted --add-interface=eth1`
- firewalld: zone=trusted interface=eth1 permanent=true state=enabled immediate=true
  when: ansible_distribution == 'CentOS'
  tags:
    - k8s

# Equivalent of `firewall-cmd --permanent --zone=trusted --add-interface=weave`
- firewalld: zone=trusted interface=weave permanent=true state=enabled immediate=true
  when: ansible_distribution == 'CentOS'
  tags:
    - k8s

# Equivalent of `firewall-cmd --permanent --zone=trusted --add-source=172.42.42.0/24`
- firewalld: source=172.42.42.0/24 zone=trusted permanent=true state=enabled immediate=true
  when: ansible_distribution == 'CentOS'
  tags:
    - k8s

# 10.32.0.0/12 is the default pod CIDR for Weave Net
# you will need to update this if you are using a different
# network provider, or a different CIDR for whatever reason
# Equivalent of `firewall-cmd --permanent --zone=trusted --add-source=10.32.0.0/12`
- firewalld: source=10.32.0.0/12 zone=trusted permanent=true state=enabled
  when: ansible_distribution == 'CentOS'
  tags:
    - k8s

# Equivalent of `firewall-cmd --permanent --zone=trusted --add-port=10250`
- firewalld: port=10250/tcp zone=trusted permanent=true state=enabled immediate=true
  when: ansible_distribution == 'CentOS'
  tags:
    - k8s

- command: firewall-cmd --reload
  when: ansible_distribution == 'CentOS'
--------------------------------------------------------------------------------
/ansible/roles/k8s-master/files/start-calico:
--------------------------------------------------------------------------------
#!/bin/bash

kubectl create -f http://docs.projectcalico.org/v1.5/getting-started/kubernetes/installation/hosted/calico.yaml
--------------------------------------------------------------------------------
/ansible/roles/k8s-master/files/start-canal:
--------------------------------------------------------------------------------
#!/bin/bash

kubectl create -f https://raw.githubusercontent.com/tigera/canal/master/k8s-install/kubeadm/canal.yaml
--------------------------------------------------------------------------------
/ansible/roles/k8s-master/files/start-weave:
--------------------------------------------------------------------------------
#!/bin/bash
kubectl apply -f https://git.io/weave-kube
--------------------------------------------------------------------------------
/ansible/roles/k8s-master/tasks/main.yml:
--------------------------------------------------------------------------------
---
- name: Ensure kubeadm initialization
  become: yes
  command: "kubeadm init --token=2f1a31.00f66dec74fd53f3 --api-advertise-addresses=172.42.42.11 --skip-preflight-checks=true"
  tags:
    - k8s

- name: Ensure Network Start Script
  become: yes
  copy:
    src: files/{{ item }}
    dest: /usr/local/bin/{{ item }}
    owner: root
    group: root
    mode: 0755
  with_items:
    - "start-weave"
    - "start-calico"
    - "start-canal"

- name: Ensure jq package is installed
  become: yes
  when: ansible_distribution == 'Ubuntu'
  apt:
    name: "{{ item }}"
    state: latest
  with_items:
    - jq
  tags:
    - k8s

- name: Ensure jq package is installed
  become: yes
  when: ansible_distribution == 'CentOS'
  yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - jq
  tags:
    - k8s

- name: Set --advertise-address flag in kube-apiserver static pod manifest (workaround for https://github.com/kubernetes/kubernetes/issues/34101)
  become: yes
  shell: "jq '.spec.containers[0].command |= .+ [\"--advertise-address=172.42.42.11\"]' /etc/kubernetes/manifests/kube-apiserver.json > /tmp/kube-apiserver.json && mv /tmp/kube-apiserver.json /etc/kubernetes/manifests/kube-apiserver.json"
  tags:
    - k8s

- name: Set --cluster-cidr flag in kube-proxy daemonset (workaround for https://github.com/kubernetes/kubernetes/issues/34101)
  become: yes
  shell: "kubectl -n kube-system get ds -l 'component=kube-proxy-amd64' -o json | jq '.items[0].spec.template.spec.containers[0].command |= .+ [\"--cluster-cidr=10.32.0.0/12\"]' | kubectl apply -f - && kubectl -n kube-system delete pods -l 'component=kube-proxy-amd64'"
  tags:
    - k8s

# Equivalent of `firewall-cmd --permanent --zone=trusted --add-port=6443`
- firewalld: port=6443/tcp zone=trusted permanent=true state=enabled immediate=true
  when: ansible_distribution == 'CentOS'
  tags:
    - k8s

# Equivalent of `firewall-cmd --permanent --zone=trusted --add-port=9898`
- firewalld: port=9898/tcp zone=trusted permanent=true state=enabled immediate=true
  when: ansible_distribution == 'CentOS'
  tags:
    - k8s

- command: firewall-cmd --reload
  when: ansible_distribution == 'CentOS'
--------------------------------------------------------------------------------
/ansible/roles/k8s-worker/tasks/main.yml:
--------------------------------------------------------------------------------
---
- name: Join Kubernetes Cluster
  become: yes
  command: "kubeadm join --token=2f1a31.00f66dec74fd53f3 --skip-preflight-checks=true 172.42.42.11"
--------------------------------------------------------------------------------
/deployment.yml:
--------------------------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: davidkbainbridge/docker-hello-world:latest
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 15
            timeoutSeconds: 1
--------------------------------------------------------------------------------
/scripts/bootstrap_ansible_centos.sh:
--------------------------------------------------------------------------------
#!/bin/bash
#
# Copyright 2012 the original author or authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

set -e

echo "Installing Ansible..."

cat << EOF > /etc/yum.repos.d/epel.repo
[EPEL]
name=EPEL
baseurl=http://download.fedoraproject.org/pub/epel/\$releasever/\$basearch/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

yum install -y ansible
--------------------------------------------------------------------------------
/scripts/bootstrap_ansible_ubuntu.sh:
--------------------------------------------------------------------------------
#!/bin/bash
#
# Copyright 2012 the original author or authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

set -e

echo "Installing Ansible..."
apt update -y
apt install -y software-properties-common python-software-properties
add-apt-repository ppa:ansible/ansible
apt update
apt install -y ansible apt-transport-https
--------------------------------------------------------------------------------
/service.yml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
--------------------------------------------------------------------------------