├── README.md ├── gce ├── bootstrap-controller ├── bootstrap-worker ├── create-controller └── create-worker ├── kubernetes-configs ├── inspector-canary-svc.yaml └── inspector-svc.yaml ├── labs ├── cluster-add-on-dns.md ├── cluster-add-on-ui.md ├── configure-networking.md ├── exposing-services-with-nginx.md ├── generate-server-and-client-certs.md ├── initialize-a-certificate-authority.md ├── install-and-configure-docker.md ├── install-and-configure-kube-proxy.md ├── install-and-configure-kubectl.md ├── install-and-configure-kubelet.md ├── install-gce-client-tools.md ├── kuberentes-controller-pod.md ├── pods.md ├── provisioning-coreos-on-gce.md ├── replication-controllers.md ├── rolling-updates.md ├── secure-the-kubernetes-api.md └── services.md ├── nginx └── inspector.conf └── slides ├── docker ├── large-dockerfile └── small-dockerfile ├── images ├── container.png ├── kubernetes-nodes-2.png ├── kubernetes-rc-reschedule.png ├── kubernetes-rc.png ├── kubernetes-scheduler.png └── pod.png ├── manifests ├── node.json ├── pod.json ├── replication-controller.json └── service.json └── talk.slide /README.md: -------------------------------------------------------------------------------- 1 | # Intro to Kubernetes Workshop 2 | 3 | Kubernetes Version: 1.0.3 4 | 5 | The slides from this workshop are hosted [online](http://go-talks.appspot.com/github.com/kelseyhightower/intro-to-kubernetes-workshop/slides/talk.slide#1) 6 | 7 | ## Course Outline 8 | 9 | ### Google Compute Engine (GCE) 10 | 11 | #### Labs 12 | 13 | * [Install GCE client tools](labs/install-gce-client-tools.md) 14 | 15 | ### Kubernetes base infrastructure 16 | 17 | #### Labs 18 | 19 | * [Provision CoreOS Cluster](labs/provisioning-coreos-on-gce.md) 20 | * [Install and configure Docker](labs/install-and-configure-docker.md) 21 | * [Configure Networking](labs/configure-networking.md) 22 | 23 | ### PKI infrastructure 24 | 25 | #### Labs 26 | 27 | * [Initialize a certificate authority](labs/initialize-a-certificate-authority.md) 28 | * [Generate server and client certs](labs/generate-server-and-client-certs.md) 29 | 30 | ### Provision the Controller Node 31 | 32 | #### Labs 33 | 34 | * [Install and configure the Kubernetes controller](labs/kuberentes-controller-pod.md) 35 | 36 | ### Provision the Kubernetes clients 37 | 38 | #### Labs 39 | 40 | * [Install and configure the kubectl CLI](labs/install-and-configure-kubectl.md) 41 | * [Deploy the Web UI](labs/cluster-add-on-ui.md) 42 | 43 | ### Provision the Worker Nodes 44 | 45 | #### Labs 46 | 47 | * [Install and configure the kubelet](labs/install-and-configure-kubelet.md) 48 | * [Install and configure the kube-proxy](labs/install-and-configure-kube-proxy.md) 49 | 50 | ### Managing Applications with Kubernetes 51 | 52 | #### Labs 53 | 54 | * [Creating and managing pods](labs/pods.md) 55 | * [Creating and managing replication controllers](labs/replication-controllers.md) 56 | * [Creating and managing services](labs/services.md) 57 | * [Exposing services with nginx](labs/exposing-services-with-nginx.md) 58 | * [Rolling updates](labs/rolling-updates.md) 59 | 60 | ### Cluster Add-ons 61 | 62 | #### Labs 63 | 64 | * [DNS](labs/cluster-add-on-dns.md) 65 | 66 | ## Links 67 | 68 | * [Kubernetes](http://googlecloudplatform.github.io/kubernetes) 69 | * [gcloud Tool Guide](https://cloud.google.com/sdk/gcloud) 70 | * [Docker](https://docs.docker.com) 71 | * [CoreOS](https://coreos.com) 72 | * [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd) 73 | * 
[nginx](http://nginx.org) 74 | 75 | ### Tips 76 | 77 | #### Get the project ID from the instance metadata server: 78 | 79 | ``` 80 | curl -H "Metadata-Flavor: Google" \ 81 | http://metadata.google.internal/computeMetadata/v1/project/project-id 82 | ``` 83 | 84 | #### Get the external-ip of the instance from the metadata server: 85 | 86 | ``` 87 | curl -H "Metadata-Flavor: Google" \ 88 | http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip 89 | ``` 90 | -------------------------------------------------------------------------------- /gce/bootstrap-controller: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ## Download binaries. 4 | mkdir -p /opt/bin 5 | curl https://kuar.io/docker -o /opt/bin/docker 6 | curl https://kuar.io/kubelet -o /opt/bin/kubelet 7 | curl https://kuar.io/kube-proxy -o /opt/bin/kube-proxy 8 | curl https://kuar.io/linux/kubectl -o /opt/bin/kubectl 9 | chmod +x /opt/bin/docker /opt/bin/kubelet /opt/bin/kube-proxy /opt/bin/kubectl 10 | 11 | mkdir -p /etc/kubernetes/manifests 12 | mkdir -p /srv/kubernetes 13 | 14 | cat < /etc/kubernetes/manifests/etcd.yaml 15 | --- 16 | apiVersion: "v1" 17 | kind: "Pod" 18 | metadata: 19 | name: "etcd-server" 20 | spec: 21 | hostNetwork: true 22 | containers: 23 | - name: "etcd-container" 24 | image: "gcr.io/google_containers/etcd:2.0.12" 25 | resources: 26 | limits: 27 | cpu: "200m" 28 | command: 29 | - "/usr/local/bin/etcd" 30 | - "--advertise-client-urls=http://127.0.0.1:2379" 31 | - "--data-dir=/var/lib/etcd" 32 | - "--listen-client-urls=http://127.0.0.1:2379" 33 | - "--listen-peer-urls=http://127.0.0.1:2380" 34 | - "--name=etcd0" 35 | ports: 36 | - name: "server" 37 | containerPort: 2380 38 | hostPort: 2380 39 | - name: "client" 40 | containerPort: 2379 41 | hostPort: 2379 42 | volumeMounts: 43 | - name: "datadir" 44 | mountPath: "/var/lib/etcd" 45 | readOnly: false 46 | volumes: 47 | - name: "datadir" 48 | hostPath: 49 | path: "/var/lib/etcd" 50 | EOF 51 | 52 | cat < /etc/kubernetes/manifests/kube-apiserver.yaml 53 | --- 54 | kind: "Pod" 55 | apiVersion: "v1" 56 | metadata: 57 | name: "kube-apiserver" 58 | spec: 59 | hostNetwork: true 60 | containers: 61 | - name: "kube-apiserver" 62 | image: "kelseyhightower/kube-apiserver:1.0.2" 63 | command: 64 | - "/usr/local/bin/kube-apiserver" 65 | - "--insecure-bind-address=0.0.0.0" 66 | - "--etcd-servers=http://127.0.0.1:2379" 67 | - "--service-cluster-ip-range=10.200.20.0/24" 68 | - "--service-node-port-range=30000-37000" 69 | ports: 70 | - name: "https" 71 | hostPort: 443 72 | containerPort: 443 73 | - name: "local" 74 | hostPort: 8080 75 | containerPort: 8080 76 | volumeMounts: 77 | - name: "srvkube" 78 | mountPath: "/srv/kubernetes" 79 | readOnly: true 80 | - name: "etcssl" 81 | mountPath: "/etc/ssl" 82 | readOnly: true 83 | livenessProbe: 84 | httpGet: 85 | path: "/healthz" 86 | port: 8080 87 | initialDelaySeconds: 15 88 | timeoutSeconds: 15 89 | volumes: 90 | - name: "srvkube" 91 | hostPath: 92 | path: "/srv/kubernetes" 93 | - name: "etcssl" 94 | hostPath: 95 | path: "/etc/ssl" 96 | EOF 97 | 98 | cat < /etc/kubernetes/manifests/kube-scheduler.yaml 99 | --- 100 | kind: "Pod" 101 | apiVersion: "v1" 102 | metadata: 103 | name: "kube-scheduler" 104 | spec: 105 | hostNetwork: true 106 | containers: 107 | - name: "kube-scheduler" 108 | image: "kelseyhightower/kube-scheduler:1.0.2" 109 | command: 110 | - "/usr/local/bin/kube-scheduler" 111 | - "--address=0.0.0.0" 112 | - 
"--master=http://127.0.0.1:8080" 113 | ports: 114 | - name: "healthz" 115 | hostPort: 10251 116 | containerPort: 10251 117 | volumeMounts: 118 | - name: "srvkube" 119 | mountPath: "/srv/kubernetes" 120 | readOnly: true 121 | - name: "etcssl" 122 | mountPath: "/etc/ssl" 123 | readOnly: true 124 | livenessProbe: 125 | httpGet: 126 | path: "/healthz" 127 | port: 10251 128 | initialDelaySeconds: 15 129 | timeoutSeconds: 15 130 | volumes: 131 | - name: "srvkube" 132 | hostPath: 133 | path: "/srv/kubernetes" 134 | - name: "etcssl" 135 | hostPath: 136 | path: "/etc/ssl" 137 | EOF 138 | 139 | # Controller Manager 140 | cat < /etc/kubernetes/manifests/kube-controller-manager.yaml 141 | --- 142 | kind: "Pod" 143 | apiVersion: "v1" 144 | metadata: 145 | name: "kube-controller-manager" 146 | spec: 147 | hostNetwork: true 148 | containers: 149 | - name: "kube-controller-manager" 150 | image: "kelseyhightower/kube-controller-manager:1.0.2" 151 | command: 152 | - "/usr/local/bin/kube-controller-manager" 153 | - "--address=0.0.0.0" 154 | - "--master=http://127.0.0.1:8080" 155 | ports: 156 | - name: "healthz" 157 | hostPort: 10252 158 | containerPort: 10252 159 | livenessProbe: 160 | httpGet: 161 | path: "/healthz" 162 | port: 10252 163 | initialDelaySeconds: 15 164 | timeoutSeconds: 15 165 | EOF 166 | 167 | ## Docker 168 | cat < /etc/systemd/system/docker.service 169 | [Unit] 170 | Description=Docker Application Container Engine 171 | Documentation=http://docs.docker.io 172 | 173 | [Service] 174 | ExecStart=/opt/bin/docker --daemon \ 175 | --bip=10.200.0.1/24 \ 176 | --iptables=false \ 177 | --ip-masq=false \ 178 | --host=unix:///var/run/docker.sock \ 179 | --storage-driver=overlay 180 | Restart=on-failure 181 | RestartSec=5 182 | 183 | [Install] 184 | WantedBy=multi-user.target 185 | EOF 186 | 187 | ## Kubelet 188 | cat < /etc/systemd/system/kubelet.service 189 | [Unit] 190 | Description=Kubernetes Kubelet 191 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 192 | 193 | [Service] 194 | ExecStart=/opt/bin/kubelet \ 195 | --api-servers=http://127.0.0.1:8080 \ 196 | --cluster-dns=10.200.20.10 \ 197 | --cluster-domain=cluster.local \ 198 | --config=/etc/kubernetes/manifests \ 199 | --v=2 200 | Restart=on-failure 201 | RestartSec=5 202 | 203 | [Install] 204 | WantedBy=multi-user.target 205 | EOF 206 | 207 | ## kube-proxy 208 | 209 | cat < /etc/systemd/system/kube-proxy.service 210 | [Unit] 211 | Description=Kubernetes Proxy 212 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 213 | 214 | [Service] 215 | ExecStart=/opt/bin/kube-proxy --master=http://127.0.0.1:8080 --v=2 216 | Restart=on-failure 217 | RestartSec=5 218 | 219 | [Install] 220 | WantedBy=multi-user.target 221 | EOF 222 | 223 | ## Start systemd units. 224 | sleep 2 225 | systemctl daemon-reload 226 | systemctl enable docker 227 | systemctl enable kubelet 228 | systemctl enable kube-proxy 229 | systemctl start docker 230 | systemctl start kubelet 231 | systemctl start kube-proxy 232 | -------------------------------------------------------------------------------- /gce/bootstrap-worker: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | DOCKER_BIP=$(curl http://metadata/computeMetadata/v1/instance/attributes/bip -H "Metadata-Flavor: Google") 4 | API_SERVER=$(curl http://metadata/computeMetadata/v1/instance/attributes/apiserver -H "Metadata-Flavor: Google") 5 | 6 | ## Download binaries. 
7 | mkdir -p /opt/bin 8 | curl https://kuar.io/docker -o /opt/bin/docker 9 | curl https://kuar.io/kubelet -o /opt/bin/kubelet 10 | curl https://kuar.io/kube-proxy -o /opt/bin/kube-proxy 11 | chmod +x /opt/bin/docker /opt/bin/kubelet /opt/bin/kube-proxy 12 | 13 | ## docker 14 | cat <<EOF > /etc/systemd/system/docker.service 15 | [Unit] 16 | Description=Docker Application Container Engine 17 | Documentation=http://docs.docker.io 18 | 19 | [Service] 20 | ExecStart=/opt/bin/docker --daemon \ 21 | --bip=${DOCKER_BIP} \ 22 | --iptables=false \ 23 | --ip-masq=false \ 24 | --host=unix:///var/run/docker.sock \ 25 | --storage-driver=overlay 26 | Restart=on-failure 27 | RestartSec=5 28 | 29 | [Install] 30 | WantedBy=multi-user.target 31 | EOF 32 | 33 | ## kubelet 34 | cat <<EOF > /etc/systemd/system/kubelet.service 35 | [Unit] 36 | Description=Kubernetes Kubelet 37 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 38 | 39 | [Service] 40 | ExecStart=/opt/bin/kubelet \ 41 | --api-servers=${API_SERVER} \ 42 | --cluster-dns=10.200.20.10 \ 43 | --cluster-domain=cluster.local \ 44 | --v=2 45 | Restart=on-failure 46 | RestartSec=5 47 | 48 | [Install] 49 | WantedBy=multi-user.target 50 | EOF 51 | 52 | ## kube-proxy 53 | 54 | cat <<EOF > /etc/systemd/system/kube-proxy.service 55 | [Unit] 56 | Description=Kubernetes Proxy 57 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 58 | 59 | [Service] 60 | ExecStart=/opt/bin/kube-proxy --master=${API_SERVER} --v=2 61 | Restart=on-failure 62 | RestartSec=5 63 | 64 | [Install] 65 | WantedBy=multi-user.target 66 | EOF 67 | 68 | ## Start systemd units. 69 | systemctl daemon-reload 70 | systemctl enable docker 71 | systemctl enable kubelet 72 | systemctl enable kube-proxy 73 | systemctl start docker 74 | systemctl start kubelet 75 | systemctl start kube-proxy 76 | -------------------------------------------------------------------------------- /gce/create-controller: -------------------------------------------------------------------------------- 1 | gcloud compute instances create node0 \ 2 | --image-project coreos-cloud \ 3 | --image coreos-stable-723-3-0-v20150804 \ 4 | --boot-disk-size 200GB \ 5 | --machine-type n1-standard-1 \ 6 | --can-ip-forward \ 7 | --scopes compute-rw \ 8 | --metadata-from-file startup-script=bootstrap-controller 9 | -------------------------------------------------------------------------------- /gce/create-worker: -------------------------------------------------------------------------------- 1 | gcloud compute instances create node1 \ 2 | --image-project coreos-cloud \ 3 | --image coreos-stable-723-3-0-v20150804 \ 4 | --boot-disk-size 200GB \ 5 | --machine-type n1-standard-1 \ 6 | --can-ip-forward \ 7 | --scopes compute-rw \ 8 | --metadata-from-file startup-script=bootstrap-worker 9 | -------------------------------------------------------------------------------- /kubernetes-configs/inspector-canary-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: inspector-canary 5 | labels: 6 | app: inspector 7 | track: canary 8 | spec: 9 | type: NodePort 10 | selector: 11 | app: inspector 12 | track: canary 13 | ports: 14 | - name: http 15 | nodePort: 36001 16 | port: 80 17 | protocol: TCP 18 | -------------------------------------------------------------------------------- /kubernetes-configs/inspector-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | 
metadata: 4 | name: inspector 5 | labels: 6 | app: inspector 7 | spec: 8 | type: NodePort 9 | selector: 10 | app: inspector 11 | track: stable 12 | ports: 13 | - name: http 14 | nodePort: 36000 15 | port: 80 16 | protocol: TCP 17 | -------------------------------------------------------------------------------- /labs/cluster-add-on-dns.md: -------------------------------------------------------------------------------- 1 | # Cluster Add-on: DNS 2 | 3 | Kubernetes offers a DNS cluster add-on that provides DNS A and SRV records for Kubernetes services. The heavy lifting is done by SkyDNS, an etcd backed DNS server that supports dynamic updates from the Kubernetes API. 4 | 5 | ### laptop 6 | 7 | Allow add-ons to query the API server 8 | 9 | ``` 10 | gcloud compute firewall-rules create default-allow-local-api \ 11 | --allow tcp:8080 \ 12 | --source-ranges 10.200.0.0/16 13 | ``` 14 | 15 | Download the SkyDNS replication controller configuration: 16 | 17 | ``` 18 | wget https://kuar.io/skydns-rc.yaml 19 | ``` 20 | 21 | Edit the SkyDNS rc config: 22 | 23 | ``` 24 | vim skydns-rc.yaml 25 | ``` 26 | 27 | ``` 28 | - -kube_master_url=http://node0.c.PROJECT_ID.internal:8080 29 | ``` 30 | 31 | Create the SkyDNS replication controller: 32 | 33 | ``` 34 | kubectl create -f skydns-rc.yaml 35 | ``` 36 | 37 | Next create the SkyDNS service: 38 | 39 | ``` 40 | kubectl create -f https://kuar.io/skydns-svc.yaml 41 | ``` 42 | 43 | ### Validate 44 | 45 | ``` 46 | kubectl get rc --all-namespaces 47 | ``` 48 | 49 | Wait for "Running" status 50 | 51 | ``` 52 | kubectl get pods --namespace=kube-system --watch 53 | ``` 54 | 55 | Test DNS lookups 56 | 57 | ``` 58 | wget https://kuar.io/busybox.yaml 59 | ``` 60 | 61 | ``` 62 | cat busybox.yaml 63 | ``` 64 | 65 | ``` 66 | kubectl create -f busybox.yaml 67 | ``` 68 | 69 | ``` 70 | kubectl get pods busybox 71 | ``` 72 | 73 | ``` 74 | kubectl exec busybox -- nslookup kubernetes 75 | ``` 76 | -------------------------------------------------------------------------------- /labs/cluster-add-on-ui.md: -------------------------------------------------------------------------------- 1 | # Cluster Add-on: UI 2 | 3 | ### laptop 4 | 5 | Create `kube-system` namespace: 6 | 7 | ``` 8 | kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/saltbase/salt/kube-addons/namespace.yaml 9 | ``` 10 | 11 | Spawn `kube-ui` Replication Controller: 12 | 13 | ``` 14 | kubectl create -f https://kuar.io/kube-ui-rc.yaml 15 | ``` 16 | 17 | Next create the service for the UI: 18 | 19 | ``` 20 | kubectl create -f https://kuar.io/kube-ui-svc.yaml 21 | ``` 22 | 23 | Verify: 24 | 25 | ``` 26 | kubectl get pods --all-namespaces 27 | ``` 28 | 29 | At this point the Kubernetes UI add-on should be up and running. The Kubernetes API server provides access to the UI via the /ui endpoint. 30 | 31 | ``` 32 | kubectl proxy --port=8080 & 33 | ``` 34 | 35 | The UI is available at http://127.0.0.1:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/ on the client machine. 36 | -------------------------------------------------------------------------------- /labs/configure-networking.md: -------------------------------------------------------------------------------- 1 | # Configuring the Network 2 | 3 | In this lab you will configure the network between node0 and node1 to ensure cross host connectivity. You will also ensure containers can communicate across hosts and reach the internet. 
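The routes created below tell GCE to deliver traffic for each Docker bridge subnet to the node that owns it. Once they are in place, a quick host-level sanity check (a sketch, assuming the `--bip` values from the Docker lab: `10.200.0.1/24` on node0 and `10.200.1.1/24` on node1) is to ping one bridge address from the other node:

```
gcloud compute ssh node1 --command "ping -c 3 10.200.0.1"
```

If the route is working, replies should come back from node0's docker0 bridge.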
4 | 5 | ### Create network routes between Docker on node0 and node1 6 | 7 | ``` 8 | gcloud compute routes create default-route-10-200-0-0-24 \ 9 | --destination-range 10.200.0.0/24 \ 10 | --next-hop-instance node0 11 | ``` 12 | ``` 13 | gcloud compute routes create default-route-10-200-1-0-24 \ 14 | --destination-range 10.200.1.0/24 \ 15 | --next-hop-instance node1 16 | ``` 17 | 18 | ``` 19 | gcloud compute routes list 20 | ``` 21 | 22 | ### Getting Containers Online 23 | 24 | ``` 25 | gcloud compute ssh node0 \ 26 | --command "sudo iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o ens4v1 -j MASQUERADE" 27 | ``` 28 | 29 | ``` 30 | gcloud compute ssh node1 \ 31 | --command "sudo iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o ens4v1 -j MASQUERADE" 32 | ``` 33 | 34 | ### Confirm networking 35 | 36 | #### Terminal 1 37 | 38 | ``` 39 | gcloud compute ssh node0 40 | ``` 41 | ``` 42 | docker run -t -i --rm busybox /bin/sh 43 | ``` 44 | 45 | ``` 46 | ip -f inet addr show eth0 47 | ``` 48 | 49 | ``` 50 | 4: eth0: mtu 1460 qdisc noqueue state UP group default 51 | inet 10.200.0.2/24 scope global eth0 52 | valid_lft forever preferred_lft forever 53 | ``` 54 | 55 | #### Terminal 2 56 | 57 | ``` 58 | gcloud compute ssh node1 59 | ``` 60 | 61 | ``` 62 | docker run -t -i --rm busybox /bin/sh 63 | ``` 64 | 65 | ``` 66 | ping -c 3 10.200.0.2 67 | ``` 68 | 69 | ``` 70 | ping -c 3 google.com 71 | ``` 72 | 73 | Exit both busybox instances. 74 | -------------------------------------------------------------------------------- /labs/exposing-services-with-nginx.md: -------------------------------------------------------------------------------- 1 | # Exposing Services with Nginx 2 | 3 | ### Provision nginx 4 | 5 | ``` 6 | gcloud compute instances create nginx \ 7 | --image-project coreos-cloud \ 8 | --image coreos-stable-723-3-0-v20150804 \ 9 | --boot-disk-size 200GB \ 10 | --machine-type n1-standard-1 \ 11 | --can-ip-forward \ 12 | --scopes compute-rw 13 | ``` 14 | 15 | ### Prep 16 | 17 | ``` 18 | gcloud config list project 19 | gcloud compute instances list 20 | ``` 21 | 22 | Edit etcd hosts on your local machine. 
23 | 24 | ``` 25 | sudo bash -c 'echo "NGINX_EXTERNAL_IP inspector.PROJECT_ID.io" >> /etc/hosts' 26 | ``` 27 | 28 | ### Configure nginx 29 | 30 | ``` 31 | gcloud compute ssh nginx 32 | ``` 33 | 34 | ``` 35 | git clone https://github.com/kelseyhightower/intro-to-kubernetes-workshop.git 36 | ``` 37 | 38 | Review the nginx vhost configuration: 39 | 40 | ``` 41 | cat intro-to-kubernetes-workshop/nginx/inspector.conf 42 | ``` 43 | 44 | Substitute the project name: 45 | 46 | ``` 47 | PROJECT_ID=$(curl -H "Metadata-Flavor: Google" \ 48 | http://metadata.google.internal/computeMetadata/v1/project/project-id) 49 | ``` 50 | 51 | ``` 52 | sed -i -e "s/PROJECT_ID/${PROJECT_ID}/g;" intro-to-kubernetes-workshop/nginx/inspector.conf 53 | ``` 54 | 55 | ``` 56 | cat intro-to-kubernetes-workshop/nginx/inspector.conf 57 | ``` 58 | 59 | Copy the vhost configuration: 60 | 61 | ``` 62 | sudo mkdir -p /etc/nginx/conf.d 63 | ``` 64 | 65 | ``` 66 | sudo cp intro-to-kubernetes-workshop/nginx/inspector.conf /etc/nginx/conf.d/ 67 | ``` 68 | 69 | ### Start nginx 70 | 71 | ``` 72 | sudo docker run -d --net=host \ 73 | -v /etc/nginx/conf.d:/etc/nginx/conf.d \ 74 | nginx 75 | ``` 76 | 77 | ### Testing 78 | 79 | #### laptop 80 | 81 | ``` 82 | gcloud compute firewall-rules create default-allow-nginx --allow tcp:80 83 | ``` 84 | 85 | Visit 86 | 87 | ``` 88 | http://inspector.PROJECT_ID.io 89 | ``` 90 | 91 | Every page refresh should show different MAC and IP address: 92 | 93 | ``` 94 | http://inspector.PROJECT_ID.io/net 95 | ``` 96 | -------------------------------------------------------------------------------- /labs/generate-server-and-client-certs.md: -------------------------------------------------------------------------------- 1 | # Generate Server and Client Certs 2 | 3 | In this labs you will use cfssl to generate client and server TLS certs. 
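Once the certificates are generated later in this lab, `openssl` offers a quick way to sanity-check them (a sketch; assumes `openssl` is available on node0):

```
openssl x509 -in apiserver.pem -noout -subject -dates
openssl verify -CAfile ca.pem apiserver.pem admin.pem
```

The first command prints the cert's subject and validity window; the second should confirm both certs chain back to the workshop CA.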
4 | 5 | ## Generate the kube-apiserver server cert 6 | 7 | ### node0 8 | 9 | ``` 10 | gcloud compute ssh node0 11 | ``` 12 | 13 | Create a CSR for the API server: 14 | 15 | ``` 16 | cat < apiserver-csr.json 17 | { 18 | "CN": "*.c.PROJECT_ID.internal", 19 | "hosts": [ 20 | "127.0.0.1", 21 | "EXTERNAL_IP", 22 | "*.c.PROJECT_ID.internal" 23 | ], 24 | "key": { 25 | "algo": "rsa", 26 | "size": 2048 27 | }, 28 | "names": [ 29 | { 30 | "C": "US", 31 | "L": "Portland", 32 | "O": "Kubernetes", 33 | "OU": "API Server", 34 | "ST": "Oregon" 35 | } 36 | ] 37 | } 38 | EOF 39 | ``` 40 | 41 | ### Customize apiserver-csr.json 42 | 43 | Get the PROJECT_ID: 44 | 45 | ``` 46 | PROJECT_ID=$(curl -H "Metadata-Flavor: Google" \ 47 | http://metadata.google.internal/computeMetadata/v1/project/project-id) 48 | ``` 49 | 50 | Get the EXTERNAL_IP: 51 | 52 | ``` 53 | EXTERNAL_IP=$(curl -H "Metadata-Flavor: Google" \ 54 | http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip) 55 | ``` 56 | 57 | Substitute the PROJECT_ID: 58 | 59 | ``` 60 | sed -i -e "s/PROJECT_ID/${PROJECT_ID}/g;" apiserver-csr.json 61 | ``` 62 | 63 | Substitute the EXTERNAL_IP: 64 | 65 | ``` 66 | sed -i -e "s/EXTERNAL_IP/${EXTERNAL_IP}/g;" apiserver-csr.json 67 | ``` 68 | 69 | ### Generate the API server private key and TLS cert 70 | 71 | ``` 72 | cfssl gencert \ 73 | -ca=ca.pem \ 74 | -ca-key=ca-key.pem \ 75 | -config=ca-config.json \ 76 | -profile=server \ 77 | apiserver-csr.json | cfssljson -bare apiserver 78 | ``` 79 | 80 | Results 81 | 82 | ``` 83 | apiserver-key.pem 84 | apiserver.csr 85 | apiserver.pem 86 | ``` 87 | 88 | ## Generate the admin client cert 89 | 90 | ``` 91 | cat < admin-csr.json 92 | { 93 | "CN": "admin", 94 | "hosts": [""], 95 | "key": { 96 | "algo": "rsa", 97 | "size": 2048 98 | }, 99 | "names": [ 100 | { 101 | "C": "US", 102 | "L": "Portland", 103 | "O": "Kubernetes", 104 | "OU": "Cluster Admins", 105 | "ST": "Oregon" 106 | } 107 | ] 108 | } 109 | EOF 110 | ``` 111 | 112 | ``` 113 | cfssl gencert \ 114 | -ca=ca.pem \ 115 | -ca-key=ca-key.pem \ 116 | -config=ca-config.json \ 117 | -profile=client \ 118 | admin-csr.json | cfssljson -bare admin 119 | ``` 120 | 121 | Results 122 | 123 | ``` 124 | admin-key.pem 125 | admin.csr 126 | admin.pem 127 | ``` 128 | -------------------------------------------------------------------------------- /labs/initialize-a-certificate-authority.md: -------------------------------------------------------------------------------- 1 | # Initialize a certificate authority 2 | 3 | In this lab you will setup the necessary PKI infrastructure to secure the Kuberentes API for remote communication. This lab will leverage CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), to bootstrap a Certificate Authority. 
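The cfssl binaries are installed into `/opt/bin` below. If that directory is not already on your shell's `PATH`, add it so the `cfssl` and `cfssljson` commands used in this lab resolve (a minimal sketch):

```
export PATH=/opt/bin:$PATH
cfssl version
```

`cfssl version` should print the build details once the install step has completed.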
4 | 5 | ## Download and install cfssl 6 | 7 | ### node0 8 | 9 | ``` 10 | gcloud compute ssh node0 11 | ``` 12 | 13 | ``` 14 | sudo mkdir -p /opt/bin 15 | ``` 16 | 17 | ``` 18 | sudo curl -o /opt/bin/cfssl https://kuar.io/cfssl 19 | sudo chmod +x /opt/bin/cfssl 20 | ``` 21 | 22 | ``` 23 | sudo curl -o /opt/bin/cfssljson https://kuar.io/cfssljson 24 | sudo chmod +x /opt/bin/cfssljson 25 | ``` 26 | 27 | ## Initialize a CA 28 | 29 | ### Create the CA configuration file 30 | 31 | ``` 32 | cat <<EOF > ca-config.json 33 | { 34 | "signing": { 35 | "default": { 36 | "expiry": "8760h" 37 | }, 38 | "profiles": { 39 | "server": { 40 | "usages": ["signing", "key encipherment", "server auth"], 41 | "expiry": "8760h" 42 | }, 43 | "client": { 44 | "usages": ["signing","key encipherment","client auth"], 45 | "expiry": "8760h" 46 | } 47 | } 48 | } 49 | } 50 | EOF 51 | ``` 52 | 53 | ### Generate the CA certificate and private key 54 | 55 | Create the CA CSR: 56 | 57 | ``` 58 | cat <<EOF > ca-csr.json 59 | { 60 | "CN": "Kubernetes CA", 61 | "key": { 62 | "algo": "rsa", 63 | "size": 2048 64 | }, 65 | "names": [ 66 | { 67 | "C": "US", 68 | "L": "Portland", 69 | "O": "Kubernetes", 70 | "OU": "CA", 71 | "ST": "Oregon" 72 | } 73 | ] 74 | } 75 | EOF 76 | ``` 77 | 78 | Generate the CA certificate and private key: 79 | 80 | ``` 81 | cfssl gencert -initca ca-csr.json | cfssljson -bare ca 82 | ``` 83 | 84 | Results: 85 | 86 | ``` 87 | ca-key.pem 88 | ca.csr 89 | ca.pem 90 | ``` 91 | -------------------------------------------------------------------------------- /labs/install-and-configure-docker.md: -------------------------------------------------------------------------------- 1 | # Install and configure Docker 2 | 3 | In this lab you will install and configure Docker on node0 and node1. 
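Each node gets its own non-overlapping bridge subnet so pod traffic can later be routed between hosts: `10.200.0.1/24` on node0 and `10.200.1.1/24` on node1. After starting Docker you can confirm the bridge picked up the right address (a quick check, assuming the unit file applies the `--bip` flag as shown below):

```
ip -4 addr show docker0
```

On node0 the output should include `inet 10.200.0.1/24`.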
4 | 5 | ### node0 6 | 7 | ``` 8 | gcloud compute ssh node0 9 | ``` 10 | 11 | #### Download and configure the docker unit file 12 | 13 | ``` 14 | sudo curl https://kuar.io/docker.service -o /etc/systemd/system/docker.service 15 | ``` 16 | 17 | Set the `--bip` flag to `10.200.0.1/24` (docker0 interface address which will be used by docker daemon) and `--log-level=error` flag to suppress docker verbosity: 18 | 19 | ``` 20 | sudo sed -i -e "s/BRIDGE_IP/10.200.0.1\/24 --log-level=error/g;" /etc/systemd/system/docker.service 21 | ``` 22 | ``` 23 | cat /etc/systemd/system/docker.service 24 | ``` 25 | 26 | Start docker: 27 | 28 | ``` 29 | sudo systemctl daemon-reload 30 | sudo systemctl enable docker 31 | sudo systemctl start docker 32 | ``` 33 | 34 | #### Verify 35 | 36 | ``` 37 | ifconfig 38 | docker version 39 | ``` 40 | 41 | ### node1 42 | 43 | ``` 44 | gcloud compute ssh node1 45 | ``` 46 | 47 | #### Download and configure the docker unit file 48 | 49 | ``` 50 | sudo curl https://kuar.io/docker.service -o /etc/systemd/system/docker.service 51 | ``` 52 | 53 | Set the `--bip` flag to `10.200.1.1/24` and `--log-level` flag to `error`: 54 | 55 | ``` 56 | sudo sed -i -e "s/BRIDGE_IP/10.200.1.1\/24 --log-level=error/g;" /etc/systemd/system/docker.service 57 | ``` 58 | 59 | ``` 60 | cat /etc/systemd/system/docker.service 61 | ``` 62 | 63 | Start docker: 64 | 65 | ``` 66 | sudo systemctl daemon-reload 67 | sudo systemctl enable docker 68 | sudo systemctl start docker 69 | ``` 70 | 71 | #### Verify 72 | 73 | ``` 74 | ifconfig 75 | docker version 76 | ``` 77 | -------------------------------------------------------------------------------- /labs/install-and-configure-kube-proxy.md: -------------------------------------------------------------------------------- 1 | # Install and configure the kube-proxy 2 | 3 | ## node0 4 | 5 | ``` 6 | gcloud compute ssh node0 7 | ``` 8 | 9 | Download the kube-proxy pod: 10 | 11 | ``` 12 | sudo curl -O https://kuar.io/kube-proxy-pod.yaml 13 | ``` 14 | 15 | Configure the master flag: 16 | 17 | ``` 18 | PROJECT_ID=$(curl -H "Metadata-Flavor: Google" \ 19 | http://metadata.google.internal/computeMetadata/v1/project/project-id) 20 | ``` 21 | 22 | ``` 23 | sudo sed -i -e "s/PROJECT_ID/${PROJECT_ID}/g;" kube-proxy-pod.yaml 24 | ``` 25 | 26 | ``` 27 | cat kube-proxy-pod.yaml 28 | ``` 29 | 30 | Start the kube-proxy service: 31 | 32 | ``` 33 | sudo cp kube-proxy-pod.yaml /etc/kubernetes/manifests 34 | ``` 35 | 36 | Verify: 37 | 38 | ``` 39 | docker ps 40 | ``` 41 | 42 | Check iptables (search for rules with 'kubernetes' comments): 43 | 44 | ``` 45 | sudo iptables -vL -n -t nat 46 | ``` 47 | 48 | ### node1 49 | 50 | ``` 51 | gcloud compute ssh node1 52 | ``` 53 | 54 | Repeat the steps from above. 55 | -------------------------------------------------------------------------------- /labs/install-and-configure-kubectl.md: -------------------------------------------------------------------------------- 1 | # Install and configure the kubectl CLI 2 | 3 | ## Install kubectl 4 | 5 | ### laptop 6 | 7 | #### Linux 8 | 9 | ``` 10 | curl -o kubectl https://kuar.io/linux/kubectl 11 | chmod +x kubectl 12 | sudo cp kubectl /usr/local/bin/kubectl 13 | ``` 14 | 15 | #### OS X 16 | 17 | ``` 18 | curl -o kubectl https://kuar.io/darwin/kubectl 19 | chmod +x kubectl 20 | sudo cp kubectl /usr/local/bin/kubectl 21 | ``` 22 | 23 | ### Configure kubectl 24 | 25 | Download the client credentials and CA cert: 26 | 27 | ``` 28 | gcloud compute copy-files node0:~/admin-key.pem . 
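# admin.pem / admin-key.pem are the client cert and key generated in the certs lab;
# ca.pem lets kubectl verify the API server's TLS certificate.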
29 | gcloud compute copy-files node0:~/admin.pem . 30 | gcloud compute copy-files node0:~/ca.pem . 31 | ``` 32 | 33 | Get the Kubernetes controller external IP: 34 | 35 | ``` 36 | EXTERNAL_IP=$(gcloud compute ssh node0 --command \ 37 | "curl -H 'Metadata-Flavor: Google' \ 38 | http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip") 39 | ``` 40 | 41 | Create the workshop cluster config: 42 | 43 | ``` 44 | kubectl config set-cluster workshop \ 45 | --certificate-authority=ca.pem \ 46 | --embed-certs=true \ 47 | --server=https://${EXTERNAL_IP}:6443 48 | ``` 49 | 50 | Add the admin user credentials: 51 | 52 | ``` 53 | kubectl config set-credentials admin \ 54 | --client-key=admin-key.pem \ 55 | --client-certificate=admin.pem \ 56 | --embed-certs=true 57 | ``` 58 | 59 | Configure the workshop context: 60 | 61 | ``` 62 | kubectl config set-context workshop \ 63 | --cluster=workshop \ 64 | --user=admin 65 | ``` 66 | 67 | ``` 68 | kubectl config use-context workshop 69 | ``` 70 | 71 | ``` 72 | kubectl config view 73 | ``` 74 | 75 | ### Explore the kubectl CLI 76 | 77 | Check the health status of the cluster components: 78 | 79 | ``` 80 | kubectl get cs 81 | ``` 82 | 83 | List pods: 84 | 85 | ``` 86 | kubectl get pods 87 | ``` 88 | 89 | List nodes: 90 | 91 | ``` 92 | kubectl get nodes 93 | ``` 94 | 95 | List services: 96 | 97 | ``` 98 | kubectl get services 99 | ``` 100 | -------------------------------------------------------------------------------- /labs/install-and-configure-kubelet.md: -------------------------------------------------------------------------------- 1 | # Install and configure the Kubelet 2 | 3 | ## node1 4 | 5 | ``` 6 | gcloud compute ssh node1 7 | ``` 8 | 9 | Download the kubelet unit file: 10 | 11 | ``` 12 | sudo curl https://kuar.io/kubelet.service \ 13 | -o /etc/systemd/system/kubelet.service 14 | ``` 15 | 16 | Configure the api-servers flag: 17 | 18 | ``` 19 | PROJECT_ID=$(curl -H "Metadata-Flavor: Google" \ 20 | http://metadata.google.internal/computeMetadata/v1/project/project-id) 21 | ``` 22 | 23 | ``` 24 | sudo sed -i -e "s/PROJECT_ID/${PROJECT_ID}/g;" /etc/systemd/system/kubelet.service 25 | ``` 26 | 27 | ``` 28 | cat /etc/systemd/system/kubelet.service 29 | ``` 30 | 31 | ``` 32 | sudo systemctl daemon-reload 33 | sudo systemctl enable kubelet 34 | sudo systemctl start kubelet 35 | ``` 36 | 37 | ### Verify 38 | 39 | ``` 40 | sudo systemctl status kubelet 41 | ``` 42 | 43 | #### laptop 44 | 45 | ``` 46 | kubectl get nodes 47 | ``` 48 | -------------------------------------------------------------------------------- /labs/install-gce-client-tools.md: -------------------------------------------------------------------------------- 1 | # Install GCE client tools 2 | 3 | ## Install the Google 'gcloud' SDK 4 | 5 | Follow the instructions [here](https://cloud.google.com/sdk/) to install gcloud. 6 | Then, set the Google Cloud project that you want to use for this lab as the default, by running: 7 | 8 | ``` 9 | gcloud config set project 10 | gcloud config set compute/zone us-central1-f 11 | gcloud config set compute/region us-central1 12 | ``` 13 | 14 | Note: if you don't yet have a Google Cloud project created, then follow the signup 15 | instructions [here](https://cloud.google.com/compute/docs/signup). 
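After setting the project, zone, and region defaults above, it is worth confirming that gcloud is authenticated and pointed at the right project:

```
gcloud auth list
gcloud config list
```

`gcloud config list` should echo back the project and compute defaults you just set.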
16 | 17 | Grab the service account docs from here: 18 | 19 | [Google Service Account Docs](https://developers.google.com/console/help/new/#serviceaccounts) 20 | -------------------------------------------------------------------------------- /labs/kuberentes-controller-pod.md: -------------------------------------------------------------------------------- 1 | # Install and configure the Kubernetes controller 2 | 3 | ## Install and configure the Kubelet 4 | 5 | ### node0 6 | 7 | ``` 8 | gcloud compute ssh node0 9 | ``` 10 | 11 | Download the kubelet unit file: 12 | 13 | ``` 14 | sudo curl https://kuar.io/kubelet.service \ 15 | -o /etc/systemd/system/kubelet.service 16 | ``` 17 | 18 | Configure the api-servers flag: 19 | 20 | ``` 21 | PROJECT_ID=$(curl -H "Metadata-Flavor: Google" \ 22 | http://metadata.google.internal/computeMetadata/v1/project/project-id) 23 | ``` 24 | 25 | ``` 26 | sudo sed -i -e "s/PROJECT_ID/${PROJECT_ID}/g;" /etc/systemd/system/kubelet.service 27 | ``` 28 | 29 | ``` 30 | cat /etc/systemd/system/kubelet.service 31 | ``` 32 | 33 | ``` 34 | sudo systemctl daemon-reload 35 | sudo systemctl enable kubelet 36 | sudo systemctl start kubelet 37 | ``` 38 | 39 | ### Verify 40 | 41 | ``` 42 | sudo systemctl status kubelet 43 | ``` 44 | 45 | ## Deploy the Kubernetes Controller 46 | 47 | ``` 48 | sudo mkdir -p /etc/kubernetes/manifests 49 | sudo mkdir -p /var/run/kubernetes 50 | sudo mkdir -p /var/lib/etcd 51 | ``` 52 | 53 | Copy certs 54 | 55 | ``` 56 | sudo cp apiserver-key.pem apiserver.pem ca.pem ca-key.pem /var/run/kubernetes/ 57 | ``` 58 | 59 | Download the controller pod manifest: 60 | 61 | ``` 62 | sudo curl -o /etc/kubernetes/manifests/kube-controller-pod.yaml \ 63 | https://kuar.io/kube-controller-pod.yaml 64 | ``` 65 | 66 | Verify: 67 | 68 | ``` 69 | docker ps 70 | ``` 71 | 72 | ### Allow external access to the API server secure port 73 | 74 | ``` 75 | gcloud compute firewall-rules create default-allow-kubernetes-secure \ 76 | --allow tcp:6443 \ 77 | --source-ranges 0.0.0.0/0 78 | ``` 79 | -------------------------------------------------------------------------------- /labs/pods.md: -------------------------------------------------------------------------------- 1 | # Creating and managing pods 2 | 3 | * Deploy a pod with the kubectl cli tool 4 | * Manage the basic life cycle of a pod 5 | 6 | ## Listing Pods 7 | 8 | ``` 9 | kubectl get pods 10 | ``` 11 | 12 | ## Creating Pods 13 | 14 | ``` 15 | kubectl run inspector \ 16 | --labels="app=inspector,track=stable" \ 17 | --image=b.gcr.io/kuar/inspector:1.0.0 \ 18 | --overrides='{ 19 | "apiVersion": "v1", 20 | "spec": { 21 | "template": { 22 | "spec": { 23 | "containers": [ 24 | { 25 | "args": [ 26 | "-insecure-host=0.0.0.0", 27 | "-insecure-port=80" 28 | ], 29 | "image": "b.gcr.io/kuar/inspector:1.0.0", 30 | "name": "inspector" 31 | } 32 | ] 33 | } 34 | } 35 | } 36 | }' 37 | ``` 38 | 39 | ## Watch for status 40 | 41 | ``` 42 | kubectl get pods --watch 43 | ``` 44 | 45 | ## Get Pod info 46 | 47 | ``` 48 | kubectl describe pods inspector 49 | ``` 50 | 51 | ## Visit the running service 52 | 53 | Grab the `IP` address for the pod 54 | 55 | ``` 56 | kubectl describe pods inspector 57 | ``` 58 | 59 | And run this command on one of your Kubernetes nodes: 60 | 61 | ``` 62 | curl http://IP 63 | ``` 64 | -------------------------------------------------------------------------------- /labs/provisioning-coreos-on-gce.md: -------------------------------------------------------------------------------- 1 | # Provisioning CoreOS on 
Google Compute Engine 2 | 3 | In this lab you will provision two GCE instances running CoreOS. 4 | 5 | ## Provision 2 GCE instances 6 | 7 | ### Provision CoreOS using the gcloud CLI 8 | 9 | #### node0 10 | 11 | ``` 12 | gcloud compute instances create node0 \ 13 | --image-project coreos-cloud \ 14 | --image coreos-stable-723-3-0-v20150804 \ 15 | --boot-disk-size 200GB \ 16 | --machine-type n1-standard-1 \ 17 | --can-ip-forward \ 18 | --scopes compute-rw 19 | ``` 20 | 21 | #### node1 22 | 23 | ``` 24 | gcloud compute instances create node1 \ 25 | --image-project coreos-cloud \ 26 | --image coreos-stable-723-3-0-v20150804 \ 27 | --boot-disk-size 200GB \ 28 | --machine-type n1-standard-1 \ 29 | --can-ip-forward \ 30 | --scopes compute-rw 31 | ``` 32 | 33 | #### Verify 34 | 35 | ``` 36 | gcloud compute instances list 37 | ``` 38 | -------------------------------------------------------------------------------- /labs/replication-controllers.md: -------------------------------------------------------------------------------- 1 | # Creating and managing replication controllers 2 | 3 | * Horizontally scale pods using a replication controller 4 | 5 | > You will not be able to access pods after this lab. You need to spin up a Kubernetes service later. 6 | 7 | ## Listing Replication Controllers 8 | 9 | ``` 10 | kubectl get replicationControllers 11 | ``` 12 | 13 | Use `rc` as a shorthand for `replicationControllers` 14 | 15 | ``` 16 | kubectl get rc 17 | ``` 18 | 19 | ``` 20 | kubectl get pods 21 | ``` 22 | 23 | ## Horizontally scaling pods 24 | 25 | ### Resize the replication controller 26 | 27 | ``` 28 | kubectl scale rc inspector --replicas=10 29 | ``` 30 | 31 | ``` 32 | kubectl get pods --watch 33 | ``` 34 | 35 | -------------------------------------------------------------------------------- /labs/rolling-updates.md: -------------------------------------------------------------------------------- 1 | # Rolling Updates 2 | 3 | * Use multiple replication controllers with a single service 4 | * Deploy a canary application for testing 5 | 6 | ## Send the Canary 7 | 8 | ``` 9 | kubectl run inspector-canary \ 10 | --labels="app=inspector,track=canary" \ 11 | --replicas=1 \ 12 | --image=b.gcr.io/kuar/inspector:2.0.0 \ 13 | --overrides='{ 14 | "apiVersion": "v1", 15 | "spec": { 16 | "template": { 17 | "spec": { 18 | "containers": [ 19 | { 20 | "args": [ 21 | "-insecure-host=0.0.0.0", 22 | "-insecure-port=80" 23 | ], 24 | "image": "b.gcr.io/kuar/inspector:2.0.0", 25 | "name": "inspector-canary" 26 | } 27 | ] 28 | } 29 | } 30 | } 31 | }' 32 | ``` 33 | 34 | ### Validation 35 | 36 | Try hitting the service port on any of the node instances. 37 | 38 | #### laptop 39 | 40 | ``` 41 | while true; do curl -s http://inspector.PROJECT_ID.io | \ 42 | grep -o -e 'Version: Inspector [0-9].[0-9].[0-9]'; sleep 1; done 43 | ``` 44 | 45 | Did you find the canary? 
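One way to see whether the canary is actually behind the stable endpoint is to compare the service's label selector with the labels on the running pods (a sketch; assumes the inspector service and canary pod from the earlier labs):

```
kubectl describe service inspector
kubectl get pods -l app=inspector,track=stable
kubectl get pods -l app=inspector,track=canary
```

A service only routes to pods whose labels match its selector, so the `track` label decides which pods receive traffic.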
46 | 47 | ## Rolling Update 48 | 49 | Open three terminals 50 | 51 | ### Terminal 1 52 | 53 | #### laptop 54 | 55 | ``` 56 | kubectl get pods --watch 57 | ``` 58 | 59 | ### Terminal 2 60 | 61 | #### laptop 62 | 63 | ``` 64 | while true; do curl -s http://inspector.PROJECT_ID.io | \ 65 | grep -o -e 'Version: Inspector [0-9].[0-9].[0-9]'; sleep 1; done 66 | ``` 67 | 68 | ### Terminal 3 69 | 70 | #### laptop 71 | 72 | ``` 73 | kubectl rolling-update inspector --update-period=3s --image=b.gcr.io/kuar/inspector:2.0.0 74 | ``` 75 | -------------------------------------------------------------------------------- /labs/secure-the-kubernetes-api.md: -------------------------------------------------------------------------------- 1 | # Secure the Kubernetes API 2 | -------------------------------------------------------------------------------- /labs/services.md: -------------------------------------------------------------------------------- 1 | # Creating and managing services 2 | 3 | * Create a service using the kubectl cli tool 4 | * Map a service to pod labels 5 | 6 | ### laptop 7 | 8 | ``` 9 | git clone https://github.com/kelseyhightower/intro-to-kubernetes-workshop.git 10 | ``` 11 | 12 | ### Listing Services 13 | 14 | ``` 15 | kubectl get services 16 | ``` 17 | 18 | ### Creating Services 19 | 20 | ``` 21 | cat intro-to-kubernetes-workshop/kubernetes-configs/inspector-svc.yaml 22 | ``` 23 | 24 | ``` 25 | kubectl create -f intro-to-kubernetes-workshop/kubernetes-configs/inspector-svc.yaml 26 | ``` 27 | 28 | #### Validation 29 | ``` 30 | kubectl describe service inspector 31 | ``` 32 | 33 | ## Create the inspector firewall rule 34 | 35 | #### laptop 36 | 37 | ``` 38 | gcloud compute firewall-rules create default-allow-inspector --allow tcp:36000 39 | ``` 40 | 41 | Try hitting the external IP address for each instance in your web browser on port 36000. 42 | 43 | ``` 44 | gcloud compute instances list 45 | ``` 46 | 47 | ``` 48 | curl EXTERNAL_IP_ADDRESS:36000 49 | ``` 50 | -------------------------------------------------------------------------------- /nginx/inspector.conf: -------------------------------------------------------------------------------- 1 | upstream inspector { 2 | least_conn; 3 | server node0.c.PROJECT_ID.internal:36000; 4 | server node1.c.PROJECT_ID.internal:36000; 5 | } 6 | 7 | server { 8 | server_name inspector.PROJECT_ID.io; 9 | location / { 10 | proxy_pass http://inspector; 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /slides/docker/large-dockerfile: -------------------------------------------------------------------------------- 1 | FROM google/debian:jessie 2 | 3 | MAINTAINER Kelsey Hightower 4 | RUN apt-get update -y && apt-get install --no-install-recommends -y -q curl build-essential ca-certificates git mercurial 5 | RUN curl -O -s https://storage.googleapis.com/golang/go1.4.2.src.tar.gz 6 | RUN tar -xzf go1.4.2.src.tar.gz -C /usr/local 7 | RUN cd /usr/local/go/src && ./make.bash --no-clean 2>&1 8 | 9 | RUN mkdir -p /gopath/src/github.com/kelseyhightower/pgview 10 | ADD . /gopath/src/github.com/kelseyhightower/pgview 11 | RUN CGO_ENABLED=0 GOOS=linux go build -a -tags netgo -ldflags '-w' . 
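# Note: building a fully static binary (CGO_ENABLED=0, -tags netgo, stripped with -ldflags '-w')
# is what lets the same pgview artifact run FROM scratch in small-dockerfile,
# shrinking the image from roughly 500MB to 4MB.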
12 | 13 | RUN cp pgview /pgview 14 | 15 | ENTRYPOINT ["/pgview"] 16 | -------------------------------------------------------------------------------- /slides/docker/small-dockerfile: -------------------------------------------------------------------------------- 1 | FROM scratch 2 | 3 | MAINTAINER Kelsey Hightower 4 | ADD pgview pgview 5 | 6 | ENTRYPOINT ["/pgview"] 7 | -------------------------------------------------------------------------------- /slides/images/container.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/kubernetes-workshops/d751eae7fdc8e485fab6993e62496f1ca62bc710/slides/images/container.png -------------------------------------------------------------------------------- /slides/images/kubernetes-nodes-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/kubernetes-workshops/d751eae7fdc8e485fab6993e62496f1ca62bc710/slides/images/kubernetes-nodes-2.png -------------------------------------------------------------------------------- /slides/images/kubernetes-rc-reschedule.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/kubernetes-workshops/d751eae7fdc8e485fab6993e62496f1ca62bc710/slides/images/kubernetes-rc-reschedule.png -------------------------------------------------------------------------------- /slides/images/kubernetes-rc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/kubernetes-workshops/d751eae7fdc8e485fab6993e62496f1ca62bc710/slides/images/kubernetes-rc.png -------------------------------------------------------------------------------- /slides/images/kubernetes-scheduler.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/kubernetes-workshops/d751eae7fdc8e485fab6993e62496f1ca62bc710/slides/images/kubernetes-scheduler.png -------------------------------------------------------------------------------- /slides/images/pod.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/kubernetes-workshops/d751eae7fdc8e485fab6993e62496f1ca62bc710/slides/images/pod.png -------------------------------------------------------------------------------- /slides/manifests/node.json: -------------------------------------------------------------------------------- 1 | { 2 | "kind": "Node", 3 | "apiVersion": "v1beta3", 4 | "metadata": { 5 | "name": "192.168.12.100", 6 | "labels": { 7 | "environment": "production", 8 | "name": "node0" 9 | } 10 | }, 11 | "spec": { 12 | "externalID": "192.168.12.100" 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /slides/manifests/pod.json: -------------------------------------------------------------------------------- 1 | { 2 | "kind": "Pod", 3 | "apiVersion": "v1beta3", 4 | "metadata": { 5 | "name": "web", 6 | "labels": { 7 | "environment": "production", 8 | "name": "web" 9 | } 10 | }, 11 | "spec": { 12 | "containers": [{ 13 | "name": "web", 14 | "image": "quay.io/kelseyhightower/web:1.0.0", 15 | "ports": [{"containerPort": 80, "protocol": "TCP"}] 16 | }, 17 | { 18 | "name": "memcached", 19 | "image": "memcached", 20 | "ports": [{"containerPort": 11211, "protocol": "TCP"}] 21 | }] 22 | } 23 | } 24 | 
-------------------------------------------------------------------------------- /slides/manifests/replication-controller.json: -------------------------------------------------------------------------------- 1 | { 2 | "kind": "ReplicationController", 3 | "apiVersion": "v1beta3", 4 | "metadata": { 5 | "name": "web" 6 | }, 7 | "spec": { 8 | "replicas": 1, 9 | "selector": { 10 | "name": "web", 11 | "track": "stable" 12 | }, 13 | "template": { 14 | "metadata": { 15 | "labels": { 16 | "name": "web", 17 | "track": "stable" 18 | } 19 | }, 20 | "spec": {"containers": [...]} 21 | } 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /slides/manifests/service.json: -------------------------------------------------------------------------------- 1 | { 2 | "kind": "Service", 3 | "apiVersion": "v1beta3", 4 | "metadata": { 5 | "name": "web" 6 | }, 7 | "spec": { 8 | "ports": [{ 9 | "protocol": "TCP", 10 | "port": 80, 11 | "targetPort": 80 12 | }], 13 | "selector": { 14 | "name": "web", 15 | "environment": "production" 16 | } 17 | } 18 | } 19 | -------------------------------------------------------------------------------- /slides/talk.slide: -------------------------------------------------------------------------------- 1 | Intro to Kubernetes Workshop 2 | 3 | Kelsey Hightower 4 | CoreOS 5 | kelsey.hightower@coreos.com 6 | @kelseyhightower 7 | 8 | * Course Outline 9 | 10 | * Course Outline 11 | 12 | Kubernetes Core Concepts. 13 | 14 | - containers 15 | - pods 16 | - labels 17 | - services 18 | - Kuberbetes network model 19 | 20 | * Course Outline 21 | 22 | Kubernetes Infrastructure. 23 | 24 | - CoreOS 25 | - Docker 26 | - etcd 27 | - Kubernetes controller 28 | - Kubernetes node 29 | - Google Compute Engine 30 | 31 | * Course Outline 32 | 33 | Client tools. 34 | 35 | - ssh 36 | - gcloud 37 | - kubectl 38 | 39 | * Containers 40 | 41 | Unix processes not lightweight Virtual Machines 42 | 43 | - application + dependencies = image 44 | - Runtime environment (cgroups, namespaces, env vars) 45 | 46 | .image images/container.png 47 | 48 | * Containers 49 | 50 | Building container images. 51 | 52 | .code docker/large-dockerfile 53 | 54 | Total size: 500MB 55 | 56 | * Containers 57 | 58 | Building container images. 59 | 60 | - Build applications in a dedicated build container or CI 61 | - Ship build artifacts, not build environments 62 | 63 | Remix 64 | 65 | .code docker/small-dockerfile 66 | 67 | Total size: 4MB 68 | 69 | * Kubernetes 70 | 71 | Container management, scheduling, and service discovery. 72 | 73 | - API driven application management 74 | - Agents monitor endpoints for state changes (real-time) 75 | - Controllers enforce desired state 76 | - Labels identify resources (nodes, applications, services) 77 | 78 | * Kubernetes 79 | 80 | High level concepts 81 | 82 | - node 83 | - pod 84 | - scheduler 85 | - replication controller 86 | - service 87 | 88 | * Node 89 | 90 | Runs containers and proxies service requests. 91 | 92 | - docker 93 | - kubelet 94 | - proxy 95 | 96 | .image images/kubernetes-nodes-2.png 97 | 98 | * Node Manifest 99 | 100 | .code manifests/node.json 101 | 102 | * Pod 103 | 104 | Represents a logical application. 105 | 106 | - One or more containers 107 | - Shared namespaces 108 | 109 | .image images/pod.png 110 | 111 | * Pod Manifest 112 | 113 | .code manifests/pod.json 114 | 115 | * Scheduler 116 | 117 | Schedules pods to run on nodes. 
118 | 119 | - Global scheduler for long-running jobs 120 | - Best fit chosen based on pod requirements 121 | - Pluggable 122 | 123 | .image images/kubernetes-scheduler.png 124 | 125 | * Replication Controller 126 | 127 | Manages a replicated set of pods. 128 | 129 | - Creates pods from a template 130 | - Ensures desired number of pods are running 131 | - Online resizing 132 | 133 | .image images/kubernetes-rc.png 134 | 135 | * Replication Controller 136 | 137 | Manages a replicated set of pods. 138 | 139 | - Creates pods from a template 140 | - Ensures desired number of pods are running 141 | - Self-healing 142 | 143 | .image images/kubernetes-rc-reschedule.png 144 | 145 | * Replication Controller Manifest 146 | 147 | .code manifests/replication-controller.json 148 | 149 | * Service 150 | 151 | Service discovery for pods. 152 | 153 | - Proxy runs on each node 154 | - Virtual IP per service (avoid port collisions) 155 | - Basic round-robin algorithm 156 | - Dynamic backends based on label queries 157 | 158 | * Service Manifest 159 | 160 | .code manifests/service.json 161 | 162 | * Time to get Hands On 163 | --------------------------------------------------------------------------------