├── .gitignore ├── ansible ├── roles │ ├── k8s-master │ │ ├── files │ │ │ ├── start-weave │ │ │ ├── start-canal │ │ │ └── start-calico │ │ └── tasks │ │ │ └── main.yml │ ├── k8s-base │ │ ├── files │ │ │ ├── config │ │ │ ├── id_rsa.pub │ │ │ ├── clean-k8s │ │ │ └── id_rsa │ │ └── tasks │ │ │ └── main.yml │ └── k8s-worker │ │ └── tasks │ │ └── main.yml ├── k8s-master.yml └── k8s-worker.yml ├── examples └── client │ └── go │ ├── Dockerfile │ ├── Makefile │ └── hello-client.go ├── service.yml ├── kube-target ├── deployment.yml ├── scripts └── bootstrap_ansible.sh ├── Vagrantfile └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | admin.conf 2 | .vagrant 3 | *.retry 4 | *.log 5 | -------------------------------------------------------------------------------- /ansible/roles/k8s-master/files/start-weave: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | kubectl apply -f https://git.io/weave-kube-1.6 3 | -------------------------------------------------------------------------------- /ansible/roles/k8s-base/files/config: -------------------------------------------------------------------------------- 1 | Host * 2 | StrictHostKeyChecking=no 3 | UserKnownHostsFile=/dev/null 4 | -------------------------------------------------------------------------------- /ansible/k8s-master.yml: -------------------------------------------------------------------------------- 1 | - hosts: localhost 2 | remote_user: vagrant 3 | serial: 1 4 | roles: 5 | - k8s-base 6 | - k8s-master 7 | -------------------------------------------------------------------------------- /ansible/k8s-worker.yml: -------------------------------------------------------------------------------- 1 | - hosts: localhost 2 | remote_user: vagrant 3 | serial: 1 4 | roles: 5 | - k8s-base 6 | - k8s-worker 7 | -------------------------------------------------------------------------------- /ansible/roles/k8s-master/files/start-canal: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | kubectl create -f https://raw.githubusercontent.com/tigera/canal/master/k8s-install/kubeadm/canal.yaml 4 | -------------------------------------------------------------------------------- /ansible/roles/k8s-master/files/start-calico: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | kubectl create -f http://docs.projectcalico.org/v1.5/getting-started/kubernetes/installation/hosted/calico.yaml 4 | -------------------------------------------------------------------------------- /examples/client/go/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:alpine 2 | 3 | ADD . 
/go/src/github.com/k8s-playground/example 4 | 5 | RUN go install github.com/k8s-playground/example 6 | 7 | ENV GODEBUG=netdns=go 8 | ENTRYPOINT ["/go/bin/example"] 9 | -------------------------------------------------------------------------------- /service.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: hello-service 5 | spec: 6 | selector: 7 | app: hello 8 | ports: 9 | - name: http 10 | protocol: TCP 11 | port: 80 12 | targetPort: 8080 13 | -------------------------------------------------------------------------------- /ansible/roles/k8s-base/files/id_rsa.pub: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDkErXX6dCP/Q9g7CHB3cCYqcSLWbIOB8wduqvHm4OFIeuGWq2HxPI9Me4wPR5M+b75FjcEs3RE/r3IT3JFtYKa4sYqXB0agL7EDGvQNRk1GfBaGdFqmusTJPw4PnHUV+5Oz8beesur0FV8PrUkuoh2IpvZpT2Cs5duDyvVQDLWPz0aLje+vVUmEvBUrhW5N7uK8gSoyGqDTe0fdc7agSIEyPgdrghMQITbBSlDKBuWHZjzXtcPJJWy82Vx6MDiqgSgNq0P6HjHYHT0BzyDXUFbRIiVecngZcovgQZsf15JaKZwyjEBIPQXMefBSxonj9Qqm6seh2o5Wjf7ZeNftUsP ubuntu 2 | -------------------------------------------------------------------------------- /ansible/roles/k8s-base/files/clean-k8s: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | systemctl stop kubelet; 4 | docker rm -f $(docker ps -q); mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null; 5 | rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni /etc/kubernetes; 6 | mkdir -p /etc/kubernetes 7 | ip link set cbr0 down; ip link del cbr0; 8 | ip link set cni0 down; ip link del cni0; 9 | systemctl start kubelet 10 | 11 | -------------------------------------------------------------------------------- /kube-target: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ $# -eq 2 ]; then 4 | DNS=$1 5 | SRV=$2 6 | else 7 | DNS=$(kubectl -n kube-system get svc -l k8s-app==kube-dns -o json | jq -r '.items[].spec.clusterIP') 8 | SRV=$1 9 | fi 10 | 11 | INFO=$(dig +short @${DNS} $SRV SRV) 12 | if [ "$INFO X" == " X" ]; then 13 | >&2 echo "Unable to resolve service" 14 | exit 1 15 | fi 16 | 17 | HOST=$(dig +short @${DNS} $(echo $INFO | cut '-d ' -f4)) 18 | PORT=$(echo $INFO | cut '-d ' -f3) 19 | echo $HOST:$PORT 20 | -------------------------------------------------------------------------------- /deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: hello-deployment 5 | spec: 6 | replicas: 3 7 | template: 8 | metadata: 9 | labels: 10 | app: hello 11 | spec: 12 | containers: 13 | - name: hello 14 | image: davidkbainbridge/docker-hello-world:latest 15 | ports: 16 | - containerPort: 8080 17 | livenessProbe: 18 | httpGet: 19 | path: /health 20 | port: 8080 21 | initialDelaySeconds: 15 22 | timeoutSeconds: 1 23 | -------------------------------------------------------------------------------- /examples/client/go/Makefile: -------------------------------------------------------------------------------- 1 | all: 2 | 3 | .PHONY: prep 4 | prep: 5 | kubectl taint nodes --all dedicated- || echo "OK" 6 | 7 | .PHONY: image 8 | image: Dockerfile hello-client.go 9 | sudo docker build -t hello-client . 
10 | 11 | .PHONY: run 12 | run: 13 | kubectl run hello-client --image=hello-client --image-pull-policy=Never --restart=Never --overrides='{"apiVersion":"v1", "spec": {"nodeSelector":{"kubernetes.io/hostname":"k8s1"}}}' 14 | 15 | .PHONY: logs 16 | logs: 17 | kubectl logs hello-client 18 | 19 | .PHONY: clean 20 | clean: 21 | kubectl delete po hello-client 22 | -------------------------------------------------------------------------------- /ansible/roles/k8s-worker/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Copy config to home directory 3 | become: yes 4 | copy: 5 | src: /vagrant/admin.conf 6 | dest: /home/ubuntu/admin.conf 7 | owner: ubuntu 8 | group: ubuntu 9 | mode: 0600 10 | 11 | - name: Update Environment 12 | become: yes 13 | lineinfile: 14 | path: /home/ubuntu/.bashrc 15 | regexp: '^export KUBECONFIG=' 16 | line: 'export KUBECONFIG=/home/ubuntu/admin.conf' 17 | state: present 18 | 19 | - name: Join Kubernetes Cluster 20 | become: yes 21 | command: "kubeadm join --token 2f1a31.00f66dec74fd53f3 172.42.42.1:6443" 22 | -------------------------------------------------------------------------------- /examples/client/go/hello-client.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "io/ioutil" 6 | "net" 7 | "net/http" 8 | ) 9 | 10 | func main() { 11 | // Lookup the SRV record for the hello service 12 | _, srvs, err := net.LookupSRV("http", "tcp", "hello-service.default.svc.cluster.local") 13 | if err != nil { 14 | panic(err) 15 | } 16 | 17 | // The assumption is that there is only one SRV record; in theory there could be more 18 | srv := srvs[0] 19 | 20 | // Go ask the service for a response 21 | resp, err := http.Get(fmt.Sprintf("http://%s:%d", srv.Target, srv.Port)) 22 | 23 | if err != nil { 24 | panic(err) 25 | } 26 | 27 | // Read the data from the GET response 28 | data, err := ioutil.ReadAll(resp.Body) 29 | if err != nil { 30 | panic(err) 31 | } 32 | 33 | // Display the response 34 | fmt.Println(string(data)) 35 | } 36 | -------------------------------------------------------------------------------- /scripts/bootstrap_ansible.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Copyright 2012 the original author or authors. 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | # 17 | 18 | set -e 19 | 20 | echo "Installing Ansible..."
21 | apt-get update -y 22 | apt-get install -y software-properties-common 23 | apt-add-repository ppa:ansible/ansible 24 | apt-get update 25 | apt-get install -y ansible apt-transport-https 26 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # # vi: set ft=ruby : 3 | 4 | Vagrant.configure(2) do |config| 5 | 6 | (1..2).each do |i| 7 | config.vm.define "k8s#{i}" do |s| 8 | s.ssh.forward_agent = true 9 | s.vm.box = "ubuntu/xenial64" 10 | s.vm.hostname = "k8s#{i}" 11 | s.vm.provision :shell, path: "scripts/bootstrap_ansible.sh" 12 | if i == 1 13 | s.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /vagrant/ansible/k8s-master.yml -c local" 14 | else 15 | s.vm.provision :shell, inline: "PYTHONUNBUFFERED=1 ansible-playbook /vagrant/ansible/k8s-worker.yml -c local" 16 | end 17 | s.vm.network "private_network", ip: "172.42.42.#{i}", netmask: "255.255.255.0", 18 | auto_config: true, 19 | virtualbox__intnet: "k8s-net" 20 | s.vm.provider "virtualbox" do |v| 21 | v.name = "k8s#{i}" 22 | v.memory = 2048 23 | v.gui = false 24 | end 25 | end 26 | end 27 | 28 | if Vagrant.has_plugin?("vagrant-cachier") 29 | config.cache.scope = :box 30 | end 31 | 32 | end 33 | -------------------------------------------------------------------------------- /ansible/roles/k8s-base/files/id_rsa: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEogIBAAKCAQEA5BK11+nQj/0PYOwhwd3AmKnEi1myDgfMHbqrx5uDhSHrhlqt 3 | h8TyPTHuMD0eTPm++RY3BLN0RP69yE9yRbWCmuLGKlwdGoC+xAxr0DUZNRnwWhnR 4 | aprrEyT8OD5x1FfuTs/G3nrLq9BVfD61JLqIdiKb2aU9grOXbg8r1UAy1j89Gi43 5 | vr1VJhLwVK4VuTe7ivIEqMhqg03tH3XO2oEiBMj4Ha4ITECE2wUpQygblh2Y817X 6 | DySVsvNlcejA4qoEoDatD+h4x2B09Ac8g11BW0SIlXnJ4GXKL4EGbH9eSWimcMox 7 | ASD0FzHnwUsaJ4/UKpurHodqOVo3+2XjX7VLDwIDAQABAoIBAGqa2EaQ8ryq84pB 8 | NVIxvbld+RGNnm1ydZUb0PlfFm2fOkC1l9ETXIsAEK6ZktU2E27IVHUtEFbDn5/G 9 | ispMmjydbTUVk0D1FrX6fFZ4y0yH0FG9KaajvOdY7U+42GoBo9FQy0roqNSpb5vA 10 | j9kYG3rkmGZ2FzdFjK2UB9AIzvpW+Ou41Ww6r73XNzQmkhJNlyz5a7kc66pFk4B2 11 | YDxBOY5LYxfmCx3fjqlFERNm2qpCGiV3pmtBkMKQ2mBOAh2/MVToWNinqEoOlGWI 12 | zkwZvi1UJoJr0U2CfIQQynp6LaTWgl0vEs2mcVKu1KsGCOCiMKLwnHZar+AMNL4I 13 | AM4Ca1kCgYEA8d2OSDoq2Jsrgq1GshPdKJvB6K8nGyGpyMp6zYjKDxne9jb0ruo8 14 | jLJNmSBRMcnx/1TleddKHXTa7r7YiywUaZxfxQw89lMn2XSRI6aEYX5tk0m44RbR 15 | TBRX8ni4DBtmQOudTLGCDCTBJSuwhxzd1Q8z9JQeZIxgiJfMKLBuhjsCgYEA8WbQ 16 | HM3NvwjragyKy8ZzSjtoOqQOslwwvfM0YOSYC8J7wBIe64W+kYDLMvfkuhHcfURv 17 | /RA1OgHC4U7zm9AP84xYbNcqKzNwUdrkwOoVlfPm+ttdqfTt+IXvb2aSvwEjrCXV 18 | xSzWOsfTq63XcpYBhPwwjbEipV0MXWE802m1/T0CgYBhAGaL+Sgt7y2oHy53RRgx 19 | rSY71+NrMjkR2oMd43qGS+3r+WZwsGjQVMJiY1+tBD0WFkpib0G+Rpt3nPrj9i3J 20 | nXmbYakhcYBN6j47ehEluLrhk3OecrRGOvJ6wIev810zNEvF8nshu6vq6HbH+X/O 21 | b2Z69NyrntEodxjeSMRK+QKBgHWsw6g23qPQKkng4UvialL2UKG9VXi2ngAKbS8K 22 | X9/jp0WCz9XJtZLiMKug0buud0gNM3YuD3Q+ZYxFW1VKAGydroEoBeNXSNpuFPLB 23 | aVJWufLxOmBeCB8M0yH/42r+mDATpXhfmfK/dDyNGqg93XHBKb34akYn7J4ch3Ub 24 | Y96VAoGAWEgqinKkwKmjFKetgKGI46SoF84VeytStwCJ7WHB/rYXiO/XUQba4cR4 25 | /BugVlIBI7vb7m82klK45g12xtsP/DQiwv3ZBSuTqwg9EAQfQhGRgYnwRhFUOK5k 26 | R0DdvF8etSYPkU9wS1gtiZmUHGsyNhPtgr1/yROzvTYx3vbrG9Q= 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /ansible/roles/k8s-master/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 
2 | - name: Ensure kubeadm initialization 3 | become: yes 4 | command: "kubeadm init --token 2f1a31.00f66dec74fd53f3 --apiserver-advertise-address=172.42.42.1" 5 | tags: 6 | - k8s 7 | 8 | - name: Copy config to /Vagrant for other VMs 9 | become: yes 10 | copy: 11 | src: /etc/kubernetes/admin.conf 12 | dest: /vagrant/admin.conf 13 | owner: ubuntu 14 | group: ubuntu 15 | mode: 0600 16 | 17 | - name: Copy config to home directory 18 | become: yes 19 | copy: 20 | src: /etc/kubernetes/admin.conf 21 | dest: /home/ubuntu/admin.conf 22 | owner: ubuntu 23 | group: ubuntu 24 | mode: 0600 25 | 26 | - name: Update Environment 27 | become: yes 28 | lineinfile: 29 | path: /home/ubuntu/.bashrc 30 | regexp: '^export KUBECONFIG=' 31 | line: 'export KUBECONFIG=/home/ubuntu/admin.conf' 32 | state: present 33 | 34 | - name: Set --proxy-mode flag in kube-proxy daemonset (workaround for https://github.com/kubernetes/kubernetes/issues/34101) 35 | become: yes 36 | shell: "kubectl --kubeconfig=/home/ubuntu/admin.conf -n kube-system get ds -l 'k8s-app==kube-proxy' -o json | jq '.items[0].spec.template.spec.containers[0].command |= .+ [\"--proxy-mode=userspace\"]' | kubectl --kubeconfig=/home/ubuntu/admin.conf apply -f - && kubectl --kubeconfig=/home/ubuntu/admin.conf -n kube-system delete pods -l 'k8s-app==kube-proxy'" 37 | register: proxy 38 | until: proxy.rc == 0 39 | retries: 60 40 | delay: 10 41 | tags: 42 | - k8s 43 | 44 | - name: Ensure Network Start Script 45 | become: yes 46 | copy: 47 | src: files/{{ item }} 48 | dest: /usr/local/bin/{{ item }} 49 | owner: root 50 | group: root 51 | mode: 0755 52 | with_items: 53 | - "start-weave" 54 | - "start-calico" 55 | - "start-canal" 56 | -------------------------------------------------------------------------------- /ansible/roles/k8s-base/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ensure SSH Directories 3 | become: yes 4 | file: 5 | path: /home/ubuntu/.ssh 6 | state: directory 7 | owner: ubuntu 8 | group: ubuntu 9 | mode: 0700 10 | 11 | - name: Copy SSH Key Files 12 | become: yes 13 | copy: 14 | src: files/{{ item }} 15 | dest: /home/ubuntu/.ssh/{{ item }} 16 | owner: ubuntu 17 | group: ubuntu 18 | mode: 0600 19 | with_items: 20 | - id_rsa 21 | - id_rsa.pub 22 | - config 23 | 24 | - name: Ensure Authorized SSH Key 25 | become: yes 26 | authorized_key: 27 | user: ubuntu 28 | key: "{{ lookup('file', '/home/ubuntu/.ssh/id_rsa.pub') }}" 29 | state: present 30 | 31 | - name: Remove Default Host Entry 32 | become: yes 33 | lineinfile: 34 | dest: /etc/hosts 35 | regexp: '^127\.0\.0\.1\s+k8s.*$' 36 | state: absent 37 | 38 | - name: Ensure Hosts File 39 | become: yes 40 | lineinfile: 41 | dest: /etc/hosts 42 | line: "{{ item.ip }} {{ item.name }}" 43 | with_items: 44 | - { ip: "172.42.42.1", name: "k8s1" } 45 | - { ip: "172.42.42.2", name: "k8s2" } 46 | - { ip: "172.42.42.3", name: "k8s3" } 47 | 48 | - name: Ensure Google Cloud Apt Key 49 | become: yes 50 | apt_key: 51 | url: https://packages.cloud.google.com/apt/doc/apt-key.gpg 52 | state: present 53 | tags: 54 | - k8s 55 | 56 | - name: Ensure Kubernetes Repository 57 | become: yes 58 | apt_repository: 59 | repo: 'deb http://apt.kubernetes.io/ kubernetes-xenial main' 60 | state: present 61 | update_cache: yes 62 | tags: 63 | - k8s 64 | 65 | - name: Ensure Base Kubernetes 66 | become: yes 67 | apt: 68 | name: "{{ item }}" 69 | state: latest 70 | with_items: 71 | - apt-transport-https 72 | - docker-engine 73 | - kubelet 74 | - kubeadm 75 | - kubectl 
76 | - kubernetes-cni 77 | - jq 78 | tags: 79 | - k8s 80 | 81 | - name: Ensure Docker Group 82 | group: 83 | name: docker 84 | state: present 85 | 86 | - name: Ensure User in Docker Group 87 | user: 88 | name: ubuntu 89 | groups: docker 90 | append: yes 91 | 92 | - name: Ensure Kubernetes Cleanup 93 | become: yes 94 | copy: 95 | src: files/clean-k8s 96 | dest: /usr/local/bin 97 | mode: 0755 98 | owner: root 99 | group: root 100 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Playground 2 | This project contains a `Vagrantfile` and associated `Ansible` playbook scripts 3 | to provision a 3-node Kubernetes cluster using `VirtualBox` and `Ubuntu 4 | 16.04`. 5 | 6 | ### Prerequisites 7 | You need the following installed to use this playground. 8 | - `Vagrant`, version 1.9.3 or better. Earlier versions of Vagrant do not work 9 | with the Vagrant Ubuntu 16.04 box and network configuration. 10 | - `VirtualBox`, tested with Version 5.1.18 r114002 11 | - Internet access; this playground pulls Vagrant boxes from the Internet and 12 | installs Ubuntu application packages from the Internet. 13 | 14 | ### Bringing Up The Cluster 15 | To bring up the cluster, clone this repository to a working directory. 16 | 17 | ``` 18 | git clone http://github.com/davidkbainbridge/k8s-playground 19 | ``` 20 | 21 | Change into the working directory and run `vagrant up`. 22 | 23 | ``` 24 | cd k8s-playground 25 | vagrant up 26 | ``` 27 | 28 | Vagrant will start three machines. Each machine will have a NAT-ed network 29 | interface, through which it can access the Internet, and a `private-network` 30 | interface in the subnet 172.42.42.0/24. The private network is used for 31 | intra-cluster communication. 32 | 33 | The machines created are: 34 | 35 | | NAME | IP ADDRESS | ROLE | 36 | | --- | --- | --- | 37 | | k8s1 | 172.42.42.1 | Cluster Master | 38 | | k8s2 | 172.42.42.2 | Cluster Worker | 39 | | k8s3 | 172.42.42.3 | Cluster Worker | 40 | 41 | As the cluster is brought up, the cluster master (**k8s1**) will perform a `kubeadm 42 | init` and the cluster workers will perform a `kubeadm join`. This cluster 43 | uses a static Kubernetes cluster token. 44 | 45 | After the `vagrant up` is complete, the following command and output should be 46 | visible on the cluster master (**k8s1**). 47 | 48 | ``` 49 | vagrant ssh k8s1 50 | kubectl -n kube-system get po -o wide 51 | 52 | NAME READY STATUS RESTARTS AGE IP NODE 53 | etcd-k8s1 1/1 Running 0 10m 172.42.42.1 k8s1 54 | kube-apiserver-k8s1 1/1 Running 1 10m 172.42.42.1 k8s1 55 | kube-controller-manager-k8s1 1/1 Running 0 11m 172.42.42.1 k8s1 56 | kube-discovery-982812725-pv5ib 1/1 Running 0 11m 172.42.42.1 k8s1 57 | kube-dns-2247936740-cucu9 0/3 ContainerCreating 0 10m k8s1 58 | kube-proxy-amd64-kt8d6 1/1 Running 0 10m 172.42.42.1 k8s1 59 | kube-proxy-amd64-o73p7 1/1 Running 0 5m 172.42.42.3 k8s3 60 | kube-proxy-amd64-piie9 1/1 Running 0 8m 172.42.42.2 k8s2 61 | kube-scheduler-k8s1 1/1 Running 0 11m 172.42.42.1 k8s1 62 | ``` 63 | 64 | ### Starting Networking 65 | Starting the cluster networking is **NOT** automated and must be completed 66 | after the `vagrant up` is complete. A script to start the networking is 67 | installed on the cluster master (**k8s1**) as `/usr/local/bin/start-weave`.
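The `k8s-master` role also installs companion scripts for the other supported network plugins (`/usr/local/bin/start-calico` and `/usr/local/bin/start-canal`); only one network add-on should be applied to a given cluster. As a sketch, selecting Canal instead of Weave on a freshly provisioned cluster would look like this:

```
vagrant ssh k8s1
ubuntu@k8s1:~$ start-canal
```

Using the default Weave script, the session looks like the following: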
68 | 69 | ``` 70 | vagrant ssh k8s1 71 | ubuntu@k8s1:~$ start-weave 72 | clusterrole "weave-net" created 73 | serviceaccount "weave-net" created 74 | clusterrolebinding "weave-net" created 75 | daemonset "weave-net" created 76 | ``` 77 | 78 | After the network is started, assuming `weave-net` is used, the following 79 | command and output should be visible on the master node (**k8s1**): 80 | 81 | ``` 82 | vagrant ssh k8s1 83 | $ kubectl -n kube-system get po -o wide 84 | NAME READY STATUS RESTARTS AGE IP NODE 85 | etcd-k8s1 1/1 Running 0 14m 172.42.42.1 k8s1 86 | kube-apiserver-k8s1 1/1 Running 1 13m 172.42.42.1 k8s1 87 | kube-controller-manager-k8s1 1/1 Running 0 14m 172.42.42.1 k8s1 88 | kube-discovery-982812725-pv5ib 1/1 Running 0 14m 172.42.42.1 k8s1 89 | kube-dns-2247936740-cucu9 3/3 Running 0 14m 10.40.0.1 k8s1 90 | kube-proxy-amd64-kt8d6 1/1 Running 0 13m 172.42.42.1 k8s1 91 | kube-proxy-amd64-o73p7 1/1 Running 0 8m 172.42.42.3 k8s3 92 | kube-proxy-amd64-piie9 1/1 Running 0 11m 172.42.42.2 k8s2 93 | kube-scheduler-k8s1 1/1 Running 0 14m 172.42.42.1 k8s1 94 | weave-net-33rjx 2/2 Running 0 3m 172.42.42.2 k8s2 95 | weave-net-3z7jj 2/2 Running 0 3m 172.42.42.1 k8s1 96 | weave-net-uvv48 2/2 Running 0 3m 172.42.42.3 k8s3 97 | ``` 98 | 99 | ### Starting A Sample Service / Deployment 100 | Included in the *git* repository are sample *service* and *deployment* 101 | specifications that work with Kubernetes. These can be found on the master node 102 | (**k8s1**) as `/vagrant/service.yml` and `/vagrant/deployment.yml`. 103 | 104 | These descriptors will create a *hello-service* sample service using a simple 105 | Docker image, `davidkbainbridge/docker-hello-world`. This image is a simple 106 | HTTP service that outputs the hostname and the IP address information on 107 | which the request was processed. An example output is: 108 | 109 | ``` 110 | Hello, "/" 111 | HOST: hello-deployment-2911225940-qhfn2 112 | ADDRESSES: 113 | 127.0.0.1/8 114 | 10.40.0.5/12 115 | ::1/128 116 | fe80::dcc9:4ff:fe5c:f793/64 117 | ``` 118 | 119 | To start the *service* and *deployment*, you can issue the following command 120 | on the master node (**k8s1**): 121 | 122 | ``` 123 | kubectl create -f /vagrant/service.yml -f /vagrant/deployment.yml 124 | ``` 125 | 126 | After issuing the `create` command, you should be able to see the *service* and 127 | *deployment* using the following commands. 128 | 129 | ``` 130 | ubuntu@k8s1:~$ kubectl get service 131 | NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE 132 | hello-service 100.76.247.60 <none> 80/TCP 6s 133 | kubernetes 100.64.0.1 <none> 443/TCP 36m 134 | ``` 135 | 136 | ``` 137 | ubuntu@k8s1:~$ kubectl get deployment 138 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE 139 | hello-deployment 3 3 3 0 12s 140 | ``` 141 | *After the sample service container is pulled from Docker Hub and started, the 142 | available count should go to the value `3`*. 143 | 144 | ### Fetching the Cluster DNS server 145 | Kubernetes includes a DNS server through which service IPs and ports can be 146 | queried. The IP address of the DNS service can be seen using the following 147 | command: 148 | 149 | ``` 150 | kubectl -n kube-system get svc -l k8s-app=kube-dns -o json | jq -r '.items[].spec.clusterIP' 151 | ``` 152 | 153 | The IP address of the DNS server will be used in some of the following commands, 154 | such as `dig` in the form `@a.b.c.d`. Please substitute the IP address returned 155 | above into these commands.
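If you would rather not substitute the address by hand, you can capture it in a shell variable first, the same way the `/vagrant/kube-target` helper script does. A minimal sketch (the `DNS` variable name is just for illustration):

```
# Capture the cluster DNS service IP once...
DNS=$(kubectl -n kube-system get svc -l k8s-app=kube-dns -o json | jq -r '.items[].spec.clusterIP')

# ...then reuse it wherever @a.b.c.d appears below, for example:
dig +short @${DNS} _http._tcp.hello-service.default.svc.cluster.local SRV
```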
156 | 157 | ### Accessing the Service 158 | Kubernetes creates an SRV record in its DNS server for the service that was 159 | defined in the `service.yml` file. Looking at this file you can see that a 160 | port named `http` was defined. This information can be queried from DNS 161 | using the following command: 162 | ``` 163 | ubuntu@k8s1:~$ dig +short @a.b.c.d _http._tcp.hello-service.default.svc.cluster.local SRV 164 | 10 100 80 hello-service.default.svc.cluster.local. 165 | ``` 166 | 167 | The information returned from this query is of the form: 168 | ``` 169 | <priority> <weight> <port> <target> 170 | ``` 171 | 172 | By querying again for the `target` returned in this record, the 173 | IP address for the service can be retrieved. 174 | ``` 175 | ubuntu@k8s1:~$ dig +short @a.b.c.d hello-service.default.svc.cluster.local. 176 | 100.76.247.60 177 | ``` 178 | 179 | Thus, using DNS, a client can dynamically determine the IP address and the port 180 | on which to connect to a service. To test the service, the following command 181 | can be used on any node in the cluster: 182 | ``` 183 | curl -sSL http://$(dig +short @a.b.c.d $(dig +short @a.b.c.d \ 184 | _http._tcp.hello-service.default.svc.cluster.local SRV | cut '-d ' -f4)):\ 185 | $(dig +short @a.b.c.d _http._tcp.hello-service.default.svc.cluster.local SRV | cut '-d ' -f3) 186 | ``` 187 | 188 | A script has been provided to simplify this task; it returns the host IP and the 189 | port given a DNS server and service. Thus the same results can be achieved in 190 | the playground with the following command: 191 | ``` 192 | curl -sSL http://$(/vagrant/kube-target _http._tcp.hello-service.default.svc.cluster.local) 193 | ``` 194 | 195 | The IP address for the service can also be seen via the `kubectl get service` 196 | command. 197 | 198 | The IP address returned may be different in each environment. 199 | 200 | To test the service you can use the following command on any node in the 201 | cluster: 202 | 203 | ``` 204 | ubuntu@k8s1:~$ curl -sSL http://$(/vagrant/kube-target _http._tcp.hello-service.default.svc.cluster.local) 205 | Hello, "/" 206 | HOST: hello-deployment-2911225940-b3tyn 207 | ADDRESSES: 208 | 127.0.0.1/8 209 | 10.32.0.2/12 210 | ::1/128 211 | fe80::e89f:bfff:fec2:b67a/64 212 | ``` 213 | 214 | ### Scaling the Service 215 | To test the scaling of the service, you can open a second terminal and ssh 216 | to a node in the cluster (e.g. `vagrant ssh k8s1`). In this terminal, if you 217 | issue the following command it will periodically issue a `curl` request to 218 | the service and display the output, highlighting the difference from the 219 | previous request. This demonstrates that the requests are being handled by 220 | different service instances. 221 | 222 | ``` 223 | watch -d curl -sSL http://$(/vagrant/kube-target _http._tcp.hello-service.default.svc.cluster.local) 224 | ``` 225 | 226 | Currently there should be 3 instances of the service implementation being 227 | used. To scale to a single instance, issue the following command: 228 | 229 | ``` 230 | ubuntu@k8s1:~$ kubectl scale deployment hello-deployment --replicas=1 231 | deployment "hello-deployment" scaled 232 | ``` 233 | 234 | After scaling to a single instance the `watch` command from above should show 235 | no differences between successive requests as all requests are being handled by 236 | the same instance. 237 | 238 | The following command scales the number of instances to 5 and, after issuing 239 | this command, differences in the `watch` output should be highlighted.
240 | 241 | ``` 242 | ubuntu@k8s1:~$ kubectl scale deployment hello-deployment --replicas=5 243 | deployment "hello-deployment" scaled 244 | ``` 245 | 246 | ### Service Health Check 247 | The test container image used above, `davidkbainbridge/docker-hello-world:latest`, 248 | is built with a health check capability. The container provides a REST end 249 | point that will return `200 OK` by default, but this can be manually set to a 250 | different value to test error cases. See the container documentation 251 | at https://github.com/davidkbainbridge/docker-hello-world for more information. 252 | 253 | To see the health of any given instance of the service implementation, you can 254 | `ssh` to k8s1 and perform a `kubectl get po -o wide`. This will show the 255 | pods augmented with the number of restarts. 256 | 257 | ``` 258 | ubuntu@k8s1:~$ kubectl get po -o wide 259 | NAME READY STATUS RESTARTS AGE IP NODE 260 | hello-deployment-3696513547-fhh2y 1/1 Running 0 12s 10.40.0.1 k8s2 261 | hello-deployment-3696513547-ocgas 1/1 Running 0 12s 10.38.0.2 k8s3 262 | hello-deployment-3696513547-y257u 1/1 Running 0 12s 10.38.0.1 k8s3 263 | ``` 264 | 265 | To demonstrate the health check capability of the cluster, you can open up an 266 | `ssh` session to k8s1 and run `watch -d kubectl get po -o wide`. This command 267 | will periodically update the screen with information about the pods, including 268 | the number of restarts. 269 | 270 | To cause one of the container instances to start reporting a failed health 271 | value, you can set a random instance to fail using 272 | 273 | ``` 274 | curl -XPOST -sSL http://$(/vagrant/kube-target _http._tcp.hello-service.default.svc.cluster.local)/health -d '{"status":501}' 275 | ``` 276 | 277 | This will set the health check on a random instance in the cluster to return 278 | "501 Internal Server Error". If you want to fail the health check on a specific 279 | instance you will need to make a similar `curl` request to the specific 280 | container instance. 281 | 282 | After setting the health check to return a failure value, monitor the 283 | `kubectl get po -o wide` command. After about 30 seconds one of the pod 284 | restart counts should be incremented. This represents Kubernetes killing and 285 | restarting a pod because of a failed health check. 286 | 287 | *NOTE: the frequency of health checks is configurable* 288 | 289 | ### Example client 290 | Included in the repo is an example client written in the Go language that 291 | demonstrates how a container can directly look up another service and call 292 | that service. 293 | 294 | To test this client, use the following steps: 295 | 296 | | COMMAND | DESCRIPTION | 297 | | --- | --- | 298 | | `vagrant ssh k8s1` | Log on to the k8s1 host | 299 | | `cd /vagrant/examples/client/go` | Move to the example directory | 300 | | `sudo apt-get -y install make` | Install the `make` command | 301 | | `make prep` | Enables deployment of pods to the master node, `k8s1` | 302 | | `make image` | Builds the sample client Docker image | 303 | | `make run` | Runs the sample client as a run-once pod | 304 | | `make logs` | Displays the logs for the client pod | 305 | | `make clean` | Deletes the sample client pod | 306 | 307 | The output of the `make logs` step should display output similar to the `curl` 308 | command above.
309 | 310 | ``` 311 | ubuntu@k8s1:/vagrant/examples/client/go$ make logs 312 | kubectl logs hello-client 313 | Hello, "/" 314 | HOST: hello-deployment-1725651635-u8f65 315 | ADDRESSES: 316 | 127.0.0.1/8 317 | 10.40.0.1/12 318 | ::1/128 319 | fe80::bcdc:21ff:fecd:83db/64 320 | ``` 321 | 322 | The above example forces the deployment of the sample client to the k8s1 node 323 | for simplicity. If you would like to have the client deployed to any of the 324 | nodes in the cluster, you need to `ssh` to each node in the cluster and run: 325 | 326 | ``` 327 | cd /vagrant/examples/client/go 328 | sudo apt-get install -y make 329 | make image 330 | ``` 331 | 332 | Then run the sample client using the following commands: 333 | 334 | ``` 335 | vagrant ssh k8s1 336 | kubectl delete po hello-client 337 | kubectl run hello-client --image=hello-client --image-pull-policy=Never --restart=Never 338 | ``` 339 | 340 | ### Clean Up 341 | A utility is installed on each Vagrant machine as `/usr/local/bin/clean-k8s`. 342 | Executing this script as `sudo` will reset the servers back to a point where 343 | you can re-run the Vagrant provisioning. 344 | --------------------------------------------------------------------------------