├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── cleanup.sh ├── kubernetes ├── cleanup.sh ├── deployments │ ├── auth.yaml │ ├── frontend.yaml │ ├── hello-canary.yaml │ ├── hello-green.yaml │ └── hello.yaml ├── nginx │ ├── frontend.conf │ └── proxy.conf ├── pods │ ├── healthy-monolith.yaml │ ├── monolith.yaml │ └── secure-monolith.yaml ├── services │ ├── auth.yaml │ ├── frontend.yaml │ ├── hello-blue.yaml │ ├── hello-green.yaml │ ├── hello.yaml │ └── monolith.yaml └── tls │ ├── ca-key.pem │ ├── ca.pem │ ├── cert.pem │ └── key.pem └── labs ├── configure-networking.md ├── create-gce-account.md ├── creating-and-managing-deployments.md ├── creating-and-managing-pods.md ├── creating-and-managing-services.md ├── download-a-kubernetes-release.md ├── enable-and-explore-cloud-shell.md ├── install-and-configure-apiserver.md ├── install-and-configure-controller-manager.md ├── install-and-configure-docker.md ├── install-and-configure-etcd.md ├── install-and-configure-kubectl.md ├── install-and-configure-kubelet.md ├── install-and-configure-scheduler.md ├── managing-application-configurations-and-secrets.md ├── monitoring-and-health-checks.md ├── provision-kubernetes-cluster-with-gke.md ├── provisioning-ubuntu-on-gce.md └── rolling-out-updates.md /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # How to become a contributor and submit your own code 2 | 3 | ## Contributor License Agreements 4 | 5 | We'd love to accept your patches! Before we can take them, we 6 | have to jump a couple of legal hurdles. 7 | 8 | ### Before you contribute 9 | Before we can use your code, you must sign the 10 | [Google Individual Contributor License Agreement](https://cla.developers.google.com/about/google-individual) 11 | (CLA), which you can do online. 
The CLA is necessary mainly because you own the 12 | copyright to your changes, even after your contribution becomes part of our 13 | codebase, so we need your permission to use and distribute your code. We also 14 | need to be sure of various other things—for instance that you'll tell us if you 15 | know that your code infringes on other people's patents. You don't have to sign 16 | the CLA until after you've submitted your code for review and a member has 17 | approved it, but you must do it before we can put your code into our codebase. 18 | Before you start working on a larger contribution, you should get in touch with 19 | us first through the issue tracker with your idea so that we can help out and 20 | possibly guide you. Coordinating up front makes it much easier to avoid 21 | frustration later on. 22 | 23 | ### Code reviews 24 | All submissions, including submissions by project members, require review. We 25 | use Github pull requests for this purpose. 26 | 27 | ### The small print 28 | Contributions made by corporations are covered by a different agreement than 29 | the one above, the 30 | [Software Grant and Corporate Contributor License Agreement](https://cla.developers.google.com/about/google-corporate). 31 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 
14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Orchestrating the Cloud with Kubernetes 2 | 3 | In this Codelab you will learn how to: 4 | 5 | * Provision a complete Kubernetes cluster using [Google Container Engine](https://cloud.google.com/container-engine) 6 | * Deploy and manage Docker containers using kubectl 7 | 8 | Kubernetes Version: 1.2.2 9 | 10 | ## Setup GCE and Enable Cloud Shell 11 | 12 | In this section you will create a Google Compute Engine (GCE) account. GCE will allow you to create the VMs, Networks, and Storage volumes required for this workshop. GCE also provides the [Cloud Shell](https://cloud.google.com/shell/docs) computing environment that will be used to complete the labs. 13 | 14 | #### Labs 15 | 16 | * [Create a GCE Account](labs/create-gce-account.md) 17 | * [Enable and explore Cloud Shell](labs/enable-and-explore-cloud-shell.md) 18 | 19 | ### Clone this Repository 20 | 21 | Log in to your Cloud Shell environment and clone this repository. 
22 | 23 | ``` 24 | git clone https://github.com/googlecodelabs/orchestrate-with-kubernetes.git 25 | ``` 26 | 27 | ## Provision Kubernetes using GKE 28 | 29 | Kubernetes is a distributed system composed of a collection of microservices. Like any system, Kubernetes must be installed and configured. In this section you will stand up a cluster with the minimal configuration required to get it up and running. 30 | 31 | Kubernetes can be configured with many options and add-ons, but bootstrapping it from the ground up can be time consuming. Instead, in this section you will bootstrap Kubernetes using [Google Container Engine](https://cloud.google.com/container-engine) (GKE). 32 | 33 | * [Provision a Kubernetes Cluster with GKE](labs/provision-kubernetes-cluster-with-gke.md) 34 | 35 | ## Managing Applications with Kubernetes 36 | 37 | Kubernetes is all about applications, and in this section you will utilize the Kubernetes API to deploy, manage, and upgrade applications. In this part of the workshop you will use an example application called "app" to complete the labs. 38 | 39 | [App](https://github.com/kelseyhightower/app) is hosted on GitHub and provides an example 12 Factor application. During this workshop you will be working with the following Docker images: 40 | 41 | * [kelseyhightower/monolith](https://hub.docker.com/r/kelseyhightower/monolith) - Monolith includes auth and hello services. 42 | * [kelseyhightower/auth](https://hub.docker.com/r/kelseyhightower/auth) - Auth microservice. Generates JWT tokens for authenticated users. 43 | * [kelseyhightower/hello](https://hub.docker.com/r/kelseyhightower/hello) - Hello microservice. Greets authenticated users. 44 | * [nginx](https://hub.docker.com/_/nginx) - Frontend to the auth and hello services. 
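The labs below all follow the same kubectl workflow against the manifests in this repository's `kubernetes` directory. As a minimal sketch of that workflow (assuming kubectl is already configured with credentials for the cluster created in the GKE provisioning lab), the hello deployment and its service can be created and inspected like this:

```shell
# From the repository root; assumes a working cluster context
# from the "Provision a Kubernetes Cluster with GKE" lab.
cd orchestrate-with-kubernetes/kubernetes

# Create the hello deployment (3 replicas of kelseyhightower/hello:1.0.0)
# and the service that fronts it.
kubectl create -f deployments/hello.yaml
kubectl create -f services/hello.yaml

# Watch the replicas come up and find the service's cluster IP.
kubectl get pods -l app=hello
kubectl get services hello
```

The same `kubectl create -f` / `kubectl get` pattern applies to the pods, secrets, and configmaps used in the other labs; `kubernetes/cleanup.sh` tears everything down again.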
45 | 46 | #### Labs 47 | 48 | For each of the following labs, you should be in the `kubernetes` directory: 49 | ``` 50 | cd orchestrate-with-kubernetes/kubernetes 51 | ``` 52 | 53 | * [Creating and managing pods](labs/creating-and-managing-pods.md) 54 | * [Monitoring and health checks](labs/monitoring-and-health-checks.md) 55 | * [Managing application configurations and secrets](labs/managing-application-configurations-and-secrets.md) 56 | * [Creating and managing services](labs/creating-and-managing-services.md) 57 | * [Creating and managing deployments](labs/creating-and-managing-deployments.md) 58 | * [Rolling out updates](labs/rolling-out-updates.md) 59 | 60 | ## Links 61 | 62 | * [Kubernetes](http://googlecloudplatform.github.io/kubernetes) 63 | * [gcloud Tool Guide](https://cloud.google.com/sdk/gcloud) 64 | * [Docker](https://docs.docker.com) 65 | * [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd) 66 | * [nginx](http://nginx.org) 67 | -------------------------------------------------------------------------------- /cleanup.sh: -------------------------------------------------------------------------------- 1 | # Copyright 2016 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | gcloud compute instances delete node0 node1 16 | gcloud compute routes delete default-route-10-200-1-0-24 default-route-10-200-0-0-24 17 | gcloud compute firewall-rules delete default-allow-local-api 18 | -------------------------------------------------------------------------------- /kubernetes/cleanup.sh: -------------------------------------------------------------------------------- 1 | kubectl delete pods healthy-monolith monolith secure-monolith 2 | kubectl delete services monolith auth frontend hello 3 | kubectl delete deployments auth frontend hello hello-canary hello-green 4 | kubectl delete secrets tls-certs 5 | kubectl delete configmaps nginx-frontend-conf nginx-proxy-conf 6 | -------------------------------------------------------------------------------- /kubernetes/deployments/auth.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: auth 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 | app: auth 11 | track: stable 12 | spec: 13 | containers: 14 | - name: auth 15 | image: "kelseyhightower/auth:2.0.0" 16 | ports: 17 | - name: http 18 | containerPort: 80 19 | - name: health 20 | containerPort: 81 21 | resources: 22 | limits: 23 | cpu: 0.2 24 | memory: "10Mi" 25 | livenessProbe: 26 | httpGet: 27 | path: /healthz 28 | port: 81 29 | scheme: HTTP 30 | initialDelaySeconds: 5 31 | periodSeconds: 15 32 | timeoutSeconds: 5 33 | readinessProbe: 34 | httpGet: 35 | path: /readiness 36 | port: 81 37 | scheme: HTTP 38 | initialDelaySeconds: 5 39 | timeoutSeconds: 1 40 | -------------------------------------------------------------------------------- /kubernetes/deployments/frontend.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: frontend 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 
| app: frontend 11 | track: stable 12 | spec: 13 | containers: 14 | - name: nginx 15 | image: "nginx:1.9.14" 16 | lifecycle: 17 | preStop: 18 | exec: 19 | command: ["/usr/sbin/nginx","-s","quit"] 20 | volumeMounts: 21 | - name: "nginx-frontend-conf" 22 | mountPath: "/etc/nginx/conf.d" 23 | - name: "tls-certs" 24 | mountPath: "/etc/tls" 25 | volumes: 26 | - name: "tls-certs" 27 | secret: 28 | secretName: "tls-certs" 29 | - name: "nginx-frontend-conf" 30 | configMap: 31 | name: "nginx-frontend-conf" 32 | items: 33 | - key: "frontend.conf" 34 | path: "frontend.conf" 35 | -------------------------------------------------------------------------------- /kubernetes/deployments/hello-canary.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: hello-canary 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 | app: hello 11 | track: canary 12 | version: 2.0.0 13 | spec: 14 | containers: 15 | - name: hello 16 | image: kelseyhightower/hello:2.0.0 17 | ports: 18 | - name: http 19 | containerPort: 80 20 | - name: health 21 | containerPort: 81 22 | resources: 23 | limits: 24 | cpu: 0.2 25 | memory: 10Mi 26 | livenessProbe: 27 | httpGet: 28 | path: /healthz 29 | port: 81 30 | scheme: HTTP 31 | initialDelaySeconds: 5 32 | periodSeconds: 15 33 | timeoutSeconds: 5 34 | readinessProbe: 35 | httpGet: 36 | path: /readiness 37 | port: 81 38 | scheme: HTTP 39 | initialDelaySeconds: 5 40 | timeoutSeconds: 1 41 | -------------------------------------------------------------------------------- /kubernetes/deployments/hello-green.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: hello-green 5 | spec: 6 | replicas: 3 7 | template: 8 | metadata: 9 | labels: 10 | app: hello 11 | track: stable 12 | version: 2.0.0 13 | spec: 14 | containers: 15 | - 
name: hello 16 | image: kelseyhightower/hello:2.0.0 17 | ports: 18 | - name: http 19 | containerPort: 80 20 | - name: health 21 | containerPort: 81 22 | resources: 23 | limits: 24 | cpu: 0.2 25 | memory: 10Mi 26 | livenessProbe: 27 | httpGet: 28 | path: /healthz 29 | port: 81 30 | scheme: HTTP 31 | initialDelaySeconds: 5 32 | periodSeconds: 15 33 | timeoutSeconds: 5 34 | readinessProbe: 35 | httpGet: 36 | path: /readiness 37 | port: 81 38 | scheme: HTTP 39 | initialDelaySeconds: 5 40 | timeoutSeconds: 1 41 | -------------------------------------------------------------------------------- /kubernetes/deployments/hello.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: hello 5 | spec: 6 | replicas: 3 7 | template: 8 | metadata: 9 | labels: 10 | app: hello 11 | track: stable 12 | version: 1.0.0 13 | spec: 14 | containers: 15 | - name: hello 16 | image: "kelseyhightower/hello:1.0.0" 17 | ports: 18 | - name: http 19 | containerPort: 80 20 | - name: health 21 | containerPort: 81 22 | resources: 23 | limits: 24 | cpu: 0.2 25 | memory: "10Mi" 26 | livenessProbe: 27 | httpGet: 28 | path: /healthz 29 | port: 81 30 | scheme: HTTP 31 | initialDelaySeconds: 5 32 | periodSeconds: 15 33 | timeoutSeconds: 5 34 | readinessProbe: 35 | httpGet: 36 | path: /readiness 37 | port: 81 38 | scheme: HTTP 39 | initialDelaySeconds: 5 40 | timeoutSeconds: 1 41 | -------------------------------------------------------------------------------- /kubernetes/nginx/frontend.conf: -------------------------------------------------------------------------------- 1 | upstream hello { 2 | server hello.default.svc.cluster.local; 3 | } 4 | 5 | upstream auth { 6 | server auth.default.svc.cluster.local; 7 | } 8 | 9 | server { 10 | listen 443; 11 | ssl on; 12 | 13 | ssl_certificate /etc/tls/cert.pem; 14 | ssl_certificate_key /etc/tls/key.pem; 15 | 16 | location / { 17 | proxy_pass http://hello; 
18 | } 19 | 20 | location /login { 21 | proxy_pass http://auth; 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /kubernetes/nginx/proxy.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 443; 3 | ssl on; 4 | 5 | ssl_certificate /etc/tls/cert.pem; 6 | ssl_certificate_key /etc/tls/key.pem; 7 | 8 | location / { 9 | proxy_pass http://127.0.0.1:80; 10 | } 11 | } 12 | -------------------------------------------------------------------------------- /kubernetes/pods/healthy-monolith.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: "healthy-monolith" 5 | labels: 6 | app: monolith 7 | spec: 8 | containers: 9 | - name: monolith 10 | image: kelseyhightower/monolith:1.0.0 11 | ports: 12 | - name: http 13 | containerPort: 80 14 | - name: health 15 | containerPort: 81 16 | resources: 17 | limits: 18 | cpu: 0.2 19 | memory: "10Mi" 20 | livenessProbe: 21 | httpGet: 22 | path: /healthz 23 | port: 81 24 | scheme: HTTP 25 | initialDelaySeconds: 5 26 | periodSeconds: 15 27 | timeoutSeconds: 5 28 | readinessProbe: 29 | httpGet: 30 | path: /readiness 31 | port: 81 32 | scheme: HTTP 33 | initialDelaySeconds: 5 34 | timeoutSeconds: 1 35 | -------------------------------------------------------------------------------- /kubernetes/pods/monolith.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: monolith 5 | labels: 6 | app: monolith 7 | spec: 8 | containers: 9 | - name: monolith 10 | image: kelseyhightower/monolith:1.0.0 11 | args: 12 | - "-http=0.0.0.0:80" 13 | - "-health=0.0.0.0:81" 14 | - "-secret=secret" 15 | ports: 16 | - name: http 17 | containerPort: 80 18 | - name: health 19 | containerPort: 81 20 | resources: 21 | limits: 22 | cpu: 0.2 23 | memory: "10Mi" 24 | 
-------------------------------------------------------------------------------- /kubernetes/pods/secure-monolith.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: "secure-monolith" 5 | labels: 6 | app: monolith 7 | spec: 8 | containers: 9 | - name: nginx 10 | image: "nginx:1.9.14" 11 | lifecycle: 12 | preStop: 13 | exec: 14 | command: ["/usr/sbin/nginx","-s","quit"] 15 | volumeMounts: 16 | - name: "nginx-proxy-conf" 17 | mountPath: "/etc/nginx/conf.d" 18 | - name: "tls-certs" 19 | mountPath: "/etc/tls" 20 | - name: monolith 21 | image: "kelseyhightower/monolith:1.0.0" 22 | ports: 23 | - name: http 24 | containerPort: 80 25 | - name: health 26 | containerPort: 81 27 | resources: 28 | limits: 29 | cpu: 0.2 30 | memory: "10Mi" 31 | livenessProbe: 32 | httpGet: 33 | path: /healthz 34 | port: 81 35 | scheme: HTTP 36 | initialDelaySeconds: 5 37 | periodSeconds: 15 38 | timeoutSeconds: 5 39 | readinessProbe: 40 | httpGet: 41 | path: /readiness 42 | port: 81 43 | scheme: HTTP 44 | initialDelaySeconds: 5 45 | timeoutSeconds: 1 46 | volumes: 47 | - name: "tls-certs" 48 | secret: 49 | secretName: "tls-certs" 50 | - name: "nginx-proxy-conf" 51 | configMap: 52 | name: "nginx-proxy-conf" 53 | items: 54 | - key: "proxy.conf" 55 | path: "proxy.conf" 56 | -------------------------------------------------------------------------------- /kubernetes/services/auth.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: "auth" 5 | spec: 6 | selector: 7 | app: "auth" 8 | ports: 9 | - protocol: "TCP" 10 | port: 80 11 | targetPort: 80 12 | -------------------------------------------------------------------------------- /kubernetes/services/frontend.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: "frontend" 5 
| spec: 6 | selector: 7 | app: "frontend" 8 | ports: 9 | - protocol: "TCP" 10 | port: 443 11 | targetPort: 443 12 | type: LoadBalancer 13 | -------------------------------------------------------------------------------- /kubernetes/services/hello-blue.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: "hello" 5 | spec: 6 | selector: 7 | app: "hello" 8 | version: 1.0.0 9 | ports: 10 | - protocol: "TCP" 11 | port: 80 12 | targetPort: 80 13 | -------------------------------------------------------------------------------- /kubernetes/services/hello-green.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: hello 5 | spec: 6 | selector: 7 | app: hello 8 | version: 2.0.0 9 | ports: 10 | - protocol: TCP 11 | port: 80 12 | targetPort: 80 13 | -------------------------------------------------------------------------------- /kubernetes/services/hello.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: "hello" 5 | spec: 6 | selector: 7 | app: "hello" 8 | ports: 9 | - protocol: "TCP" 10 | port: 80 11 | targetPort: 80 12 | -------------------------------------------------------------------------------- /kubernetes/services/monolith.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: "monolith" 5 | spec: 6 | selector: 7 | app: "monolith" 8 | secure: "enabled" 9 | ports: 10 | - protocol: "TCP" 11 | port: 443 12 | targetPort: 443 13 | nodePort: 31000 14 | type: NodePort 15 | -------------------------------------------------------------------------------- /kubernetes/tls/ca-key.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | 
MIIEpAIBAAKCAQEA6krl+/g69Et06/cU6601EjypCM5vhB1ghXdhYuqCNZZLWPMB 3 | /B55GkNnHMUPfMCMs6W6ItHVjNquaUabCLbnnXKmdEIWfq8NWKnNy8vUX2QIdKfq 4 | Tigc4pijrcuJekbFNdFtpnCx6ifTdPZMqnsGP2dY32rdVQdZ0DuPp3aJwJNTBAwA 5 | g/naoHLvyWLrZB5FHkxN61iBl78jllLXKv9qdRRg+h3GWA95ObpQlKfo8gSyfqFS 6 | ZlCvVeFhaNbcxd4ZiXLi6H1laWjybcKfUd6cmCw1U4AVnEnmXZGIsW9zv6/qmj7J 7 | 0MNEz65ivr+QDZPzZv63RzijJs2pYHl9oUq/7wIDAQABAoIBAQC2lZHvL/65nQhM 8 | T6x1EfF2+eD9JOuQ+NfcizFQxdKdcjfb5N0aHqFfz0FPEV9FaET+R1vsgLw8Xbtn 9 | /YcaXnfXop6HoW0oYsEy5HmlpX4mrK1ORAF70RTZnfyIl0LXEMnlbAVYnSB5i3nl 10 | /3+1p9QxmxeOXRiJiAX9Gj2UUvN9J5SXgPGfH+94t8l7qY/kvDvJQDWQ6DLCPBca 11 | tUjCEmaJVsbE5ekKh6ENNZxTgIUp7BON3BnUuM9ud3Gmyf3dOlZB6rz5mooNH/41 12 | G5tUqau5ppixo9VgCfsPjy1eKzHInzYppw44Lq2tVPZ+A429qB6tKkOIotcud3zY 13 | /VYufh9JAoGBAP7sRqCxYb/QQG4Eniqc4SP5Ec5RXYbm5i7Sk8HGbahsnguqQZJf 14 | PwuoamMEhP7SJI2/FwJm6y/lRN7Dqo1fuTpyYng0vBsBOP7etVjIiHMoGoP+jKWV 15 | uivICkQRuhiu9LBwaCE2nsC4LsOXFmcNJRkXtO3ygK2HjZTeXWYpnkelAoGBAOtI 16 | Twe2t5wiwsODs7KtQ4NAWIYaHpG34Cwc6SAh+S+7QRDJ4msyW8pwVQSW7xuTML23 17 | zDzQ4gVbDbNUbfmKOPa/XJI9ECSZkc/+vWUrzfeT1hK2UyeqsmEZdFYitJ5/oF8l 18 | VhSPB9Sd0YknzQfF8ZICSuQEg0UJmxRKUgeUM/UDAoGBAORWItUgzVuQX4WsITgu 19 | GQOtvyM8gjepbpiWCb9RyztHPzFXqTBAnCoHCnPywmW1OQS2GxgNs6/M/qlCPewv 20 | x6vwdP8SzUKrD7BLL8h8pqvvSgDc6oIO4RkCLx/VeQlO/OFlbgAB+qTI1SpglLJt 21 | dcNKFsfjpRrKBilIHAS8Vof5AoGAJvWeQIS8+pm27nEMfHW8TCuHfQ0uKqrr7+IJ 22 | qEx32rODHqiPWXjJQkg/i7cCeOpyk7evlhJwmrptFljQrRV6QUGGrqB139meD3b7 23 | HZmXTXupYwfV1SeqyfFRFkJA7k3r3FVuX5EfltFbNP7mMHdSfP7sL72fjvr8Nuvn 24 | kWG1CMkCgYANW0BiYWmxcw4LXjYVrrcgkrrJevL6nQ4ELxeb5+ic60hoTFo7X37D 25 | kvIoCc1OWLyJ/bT12ZgW45SqxZgC+JNQ2XhvorPE6Bq33h1tZzULHEKyde2MyqLW 26 | ze1zfkHPseS5xbXwMir7JfYPB7n2Tt9j+vynyQ8nFsGvRSJgYiJvhg== 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /kubernetes/tls/ca.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | 
MIIDOzCCAiOgAwIBAgIQby02MxEEyeNyWhqHw4a3zDANBgkqhkiG9w0BAQsFADAV 3 | MRMwEQYDVQQKEwpLdWJlcm5ldGVzMB4XDTE2MDQyNjA0MDkwOFoXDTE3MDQyNjA0 4 | MDkwOFowFTETMBEGA1UEChMKS3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQAD 5 | ggEPADCCAQoCggEBAOpK5fv4OvRLdOv3FOutNRI8qQjOb4QdYIV3YWLqgjWWS1jz 6 | AfweeRpDZxzFD3zAjLOluiLR1YzarmlGmwi2551ypnRCFn6vDVipzcvL1F9kCHSn 7 | 6k4oHOKYo63LiXpGxTXRbaZwseon03T2TKp7Bj9nWN9q3VUHWdA7j6d2icCTUwQM 8 | AIP52qBy78li62QeRR5MTetYgZe/I5ZS1yr/anUUYPodxlgPeTm6UJSn6PIEsn6h 9 | UmZQr1XhYWjW3MXeGYly4uh9ZWlo8m3Cn1HenJgsNVOAFZxJ5l2RiLFvc7+v6po+ 10 | ydDDRM+uYr6/kA2T82b+t0c4oybNqWB5faFKv+8CAwEAAaOBhjCBgzAOBgNVHQ8B 11 | Af8EBAMCAqQwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDwYDVR0TAQH/BAUwAwEB/zAd 12 | BgNVHQ4EFgQUDxJum8kqt4i67A/qALy2h/ON4iwwHwYDVR0jBBgwFoAUDxJum8kq 13 | t4i67A/qALy2h/ON4iwwCwYDVR0RBAQwAoIAMA0GCSqGSIb3DQEBCwUAA4IBAQC8 14 | VaLgNIFw66LOQoU5QAoqBJ8i8/L5i0b9cJZoM/YPEGd44OK2wMQEh1vaVsXYhDe2 15 | 6MZcomDzcq4XD7/OsBtdrFEw7F7prwWbVvjlgLqhOdlAtNQyTMFd/Jm3M6pI7NTf 16 | GiOdLdyMyy/sY719sh34IpetTE9W/K0wKSnNJG011qV+LSOAR0g7H3OZyJN+MqVp 17 | WurY/umW7PJF0Xo5YR4jSWf/rdaqFLrTAt4Xn6jIBZeW41E5ABxhvaUA7EcSIBYP 18 | MFF8YS+e/Clg2FeM6EikABTXFBWGv23UlCRALaduYGwYH+M86bNgO45Ba1JJkP6M 19 | 4m0nKCrFlqDnSUyaCsZs 20 | -----END CERTIFICATE----- 21 | -------------------------------------------------------------------------------- /kubernetes/tls/cert.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIDbjCCAlagAwIBAgIQWqMGAhd/4EOo4hFRrvXhrDANBgkqhkiG9w0BAQsFADAV 3 | MRMwEQYDVQQKEwpLdWJlcm5ldGVzMB4XDTE2MDQyNjA0MDkwOFoXDTE3MDQyNjA0 4 | MDkwOFowLTETMBEGA1UEChMKS3ViZXJuZXRlczEWMBQGA1UEAxMNKi5leGFtcGxl 5 | LmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpXqpuNLl6q9eD0 6 | fifuAGjnI0TlvkSdETUwMkA0HMXeGl3a7tfhgkd4gxdOJPQpmwTz1l84ckzi1wPo 7 | MrDDvhAwyAVzvg6rRoZ/jccZQ2DHfaaUkBc3fabQw38stLKbyMenCGm3GmhGcAOW 8 | GrJ2yADwqfELTY/M4MSXSFhvxsOPpNCv6v2kCLwMcHMwHMkgdBlZKwDr1KPeHUd7 9 | 
YEo8T4CYqNrYpckqwH/kuecDSyau2iowkce17LfUDpM1GKeENAiN6Mz5e8yPK1a8 10 | EUEVLFWzZMdxcdIPZ8ZPDkTgn1qH8134FQOzHLtxHBDPRkXa9vs9iieBGBYJBtO6 11 | gTe8wLECAwEAAaOBoTCBnjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYB 12 | BQUHAwEwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUNC2nZTkEz0njbyYs1jkPUy9k 13 | O7wwHwYDVR0jBBgwFoAUDxJum8kqt4i67A/qALy2h/ON4iwwKQYDVR0RBCIwIIIN 14 | Ki5leGFtcGxlLmNvbYIJbG9jYWxob3N0hwR/AAABMA0GCSqGSIb3DQEBCwUAA4IB 15 | AQC3/YnlZaq0CVWSfQBnaO4dPc05aN36GgDhRjxJZ8pAoklhJA28BlvyhAwJPJZv 16 | AtxquUmZPYXBbhqtOaMosaLKYWvvaN5IzV+rfQ8sWUN/1JdgeEPku4U19aiDtj5z 17 | VWqS9C3cU6MTAUP0pvn+vq1moOnMEyGiPA0wEngvO/Bjhg9MFwHByY2ISEgutuD9 18 | 9Iv7sxkx3M4VoQxRfGtHLYiVwqiS1YcbwaGfrRyBThBKoGbUUa8nViFAxr9KPZO+ 19 | 0hxuqdBDfk942lWFfqN8O8GHp5O0KrZTej9vhgBhgx1XkbL5MeKAV6oHLgHl+zj5 20 | h2j/WHCfgs5+or21+Q9M9+WK 21 | -----END CERTIFICATE----- 22 | -------------------------------------------------------------------------------- /kubernetes/tls/key.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEowIBAAKCAQEAyleqm40uXqr14PR+J+4AaOcjROW+RJ0RNTAyQDQcxd4aXdru 3 | 1+GCR3iDF04k9CmbBPPWXzhyTOLXA+gysMO+EDDIBXO+DqtGhn+NxxlDYMd9ppSQ 4 | Fzd9ptDDfyy0spvIx6cIabcaaEZwA5YasnbIAPCp8QtNj8zgxJdIWG/Gw4+k0K/q 5 | /aQIvAxwczAcySB0GVkrAOvUo94dR3tgSjxPgJio2tilySrAf+S55wNLJq7aKjCR 6 | x7Xst9QOkzUYp4Q0CI3ozPl7zI8rVrwRQRUsVbNkx3Fx0g9nxk8OROCfWofzXfgV 7 | A7Mcu3EcEM9GRdr2+z2KJ4EYFgkG07qBN7zAsQIDAQABAoIBACarnHp//+WtzLIC 8 | Z/3fmYpy6iWntrZMQlak8GWe0ATszqMzTURK3+gi2wLgN2XGcc7/fu/RzN5u1+Ly 9 | RIXN0wwrFn8cQK1zBFZ+GC194YekeJoWeHdHbqcr7MDoXVxpM3UcshnqGYzmMVAu 10 | JsoGs3Cijgf4PgmGgUpxEy17p0QGX6AmeqhSzeBwc6pNfJvPeXljJKBfMXTnkxn8 11 | eFpCaZVVMd3MDuAFRajgEoyO428MAGYpjIScuV4Vutd7QDa39hax5JMpc2yNUXPQ 12 | eIHLrj21aHX3bgfh1WE3yhclYf3pGarpOuW27N4UrmeGyb7jAa80CMWsqAILRru7 13 | vEHh50ECgYEA00hRBJuF04MLkHsk5/oX5TllIA/zEsrYgoHYI3agWUqLZMUR/ujE 14 | tNDjdCf5RG8lf9ANcU7eFhQMCZdDiSH2dwwmv6J2/RzUsPJ+675vktl3AoTVD7KA 15 | 
glsuPSChik8k/1PI7ZaYhTs2UxCOVB7hOXsv9qjSoIHgFMZDnkswTnkCgYEA9Sr2 16 | NvVU4OLI1CPCjAltIhnDUMqsJ6crlJFICAR2qeBQcf3SqAUAh/NRZ9NodIRozgOE 17 | bHDjKYD98Hq5v2QHqVnZ3BGNlNJrPEt33I7DcJ6Z6OfNVcWMDL4Gm5JsjnJl2AR7 18 | 2HNXaB91qQzmzcb6YfL3r7TwSZ4GtuwJb2O0lfkCgYBLBCEn9qQ0bhHcEa0P5F85 19 | lwBNuvv+DyGCbOG17bePHIWTmNkD3deBr60in9LENoZk9BThxzPZOPLxMNDczr84 20 | k4rqfZ+rzOHDlcX0o9/vjuDPdyRC94jjP8aSE5Tni6RCN5heqxqqK1Tldzphqbkj 21 | 9JYaCOUH8jUCi0aU3HNhWQKBgFAQGZvU/kT6io8MponIwkTymOAXb6T7aLX5w8Yq 22 | fv327Q5sz5BjIctD4H/BgEkcvIUajPJE40o4f7U6vtILvpzFZOoDKXNCTBbCpn/2 23 | d0id4rE2kc3C13uJyuqfJKhYH34t6KvE7vRn4aq1NeJZaob2K4DL2/SOkK7H4kTo 24 | EJ8xAoGBANGyX2sECsld/+KOWH5xMrAZo2qNkL+TgRQ3o8NHcUcvAT3C23tYstoN 25 | 7LpKSmZxba7IWnV7bYUPXqHVLpNIZAuuMFHOxTnU7hxrp9oVpJVJt+MABR2/1Kay 26 | O0JSjXP6b58VllICqOMjyR/HbtoBwg9qe9LfxFdZXvJOEWg5ErqC 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /labs/configure-networking.md: -------------------------------------------------------------------------------- 1 | # Configuring the Network 2 | 3 | In this lab you will configure the network between node0 and node1 to ensure cross host connectivity. You will also ensure containers can communicate across hosts and reach the internet. 4 | 5 | ## Create network routes between Docker hosts. 
6 | 7 | ### Cloud Shell 8 | 9 | ``` 10 | gcloud compute routes create default-route-10-200-0-0-24 \ 11 | --destination-range 10.200.0.0/24 \ 12 | --next-hop-instance node0 13 | ``` 14 | ``` 15 | gcloud compute routes create default-route-10-200-1-0-24 \ 16 | --destination-range 10.200.1.0/24 \ 17 | --next-hop-instance node1 18 | ``` 19 | 20 | ``` 21 | gcloud compute routes list 22 | ``` 23 | 24 | Allow access to the API server: 25 | 26 | ``` 27 | gcloud compute firewall-rules create default-allow-local-api \ 28 | --allow tcp:8080 \ 29 | --source-ranges 10.200.0.0/16 30 | ``` 31 | 32 | 33 | ## Getting Containers Online 34 | 35 | By default GCE will not route traffic to the internet for the container subnet. In this section we will configure NAT to work around the issue. 36 | 37 | ### node0 38 | 39 | ``` 40 | gcloud compute ssh node0 41 | ``` 42 | 43 | ``` 44 | sudo iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o ens4 -j MASQUERADE 45 | ``` 46 | 47 | ### node1 48 | 49 | ``` 50 | gcloud compute ssh node1 51 | ``` 52 | 53 | ``` 54 | sudo iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o ens4 -j MASQUERADE 55 | ``` 56 | 57 | ## Validating Cross Host Container Networking 58 | 59 | ### Terminal 1 60 | 61 | ``` 62 | gcloud compute ssh node0 63 | ``` 64 | 65 | ``` 66 | sudo docker run -t -i --rm busybox /bin/sh 67 | ``` 68 | 69 | ``` 70 | ip -f inet addr show eth0 71 | ``` 72 | 73 | ### Terminal 2 74 | 75 | ``` 76 | gcloud compute ssh node1 77 | ``` 78 | 79 | ``` 80 | sudo docker run -t -i --rm busybox /bin/sh 81 | ``` 82 | 83 | ``` 84 | ping -c 3 10.200.0.2 85 | ``` 86 | 87 | ``` 88 | ping -c 3 google.com 89 | ``` 90 | 91 | Exit both busybox instances. 92 | -------------------------------------------------------------------------------- /labs/create-gce-account.md: -------------------------------------------------------------------------------- 1 | # Create a GCE Account 2 | 3 | A Google Cloud Platform account is required for this workshop.
You can use an existing GCP account or [sign up](https://cloud.google.com/compute/docs/signup) for a new one with a valid Gmail account. 4 | 5 | > A credit card is required for Google Cloud Platform. 6 | 7 | ## Create a Project 8 | 9 | A GCP project is required for this course. You can use an existing GCP project or [create a new one](https://support.google.com/cloud/answer/6251787). 10 | 11 | > Your project name may be different from your project ID. 12 | 13 | ## Enable Compute Engine and Container Engine APIs 14 | 15 | In order to create the cloud resources required by this workshop, you will need to enable the following APIs using the [Google API Console](https://developers.googleblog.com/2016/03/introducing-google-api-console.html): 16 | 17 | * Compute Engine API 18 | * Container Engine API 19 | -------------------------------------------------------------------------------- /labs/creating-and-managing-deployments.md: -------------------------------------------------------------------------------- 1 | # Creating and Managing Deployments 2 | 3 | Deployments abstract away the low-level details of managing Pods. Pods are tied to the lifetime of the node they are created on. When the node goes away, so does the Pod. ReplicaSets can be used to ensure one or more replicas of a Pod are always running, even when nodes fail. 4 | 5 | Deployments sit on top of ReplicaSets and add the ability to define how updates to Pods should be rolled out. 6 | 7 | In this lab we will combine everything we learned about Pods and Services to break up the monolith application into smaller Services. You will create 3 deployments, one for each service: 8 | 9 | * frontend 10 | * auth 11 | * hello 12 | 13 | You will also define internal services for the `auth` and `hello` deployments and an external service for the `frontend` deployment.
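The deployment manifests used in this lab all share the same basic shape. As a quick orientation, here is a minimal sketch of a Deployment manifest; it is illustrative only — the `apiVersion` reflects the pre-1.9 API these labs assume, and the image name and labels are assumptions, not copied from the files under `deployments/`:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example/hello:1.0.0   # illustrative image, not the lab's actual file
        ports:
        - containerPort: 80
```

The `spec.template` section is an embedded Pod template; the Deployment keeps `replicas` copies of that Pod running through an underlying ReplicaSet.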
14 | 15 | ## Tutorial: Creating Deployments 16 | 17 | ### Create and Expose the Auth Deployment 18 | 19 | ``` 20 | kubectl create -f deployments/auth.yaml 21 | ``` 22 | 23 | ``` 24 | kubectl describe deployments auth 25 | ``` 26 | 27 | ``` 28 | kubectl create -f services/auth.yaml 29 | ``` 30 | 31 | ### Create and Expose the Hello Deployment 32 | 33 | ``` 34 | kubectl create -f deployments/hello.yaml 35 | ``` 36 | 37 | ``` 38 | kubectl describe deployments hello 39 | ``` 40 | 41 | ``` 42 | kubectl create -f services/hello.yaml 43 | ``` 44 | 45 | ### Create and Expose the Frontend Deployment 46 | 47 | 48 | ``` 49 | kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf 50 | ``` 51 | 52 | ``` 53 | kubectl create -f deployments/frontend.yaml 54 | ``` 55 | 56 | ``` 57 | kubectl create -f services/frontend.yaml 58 | ``` 59 | 60 | ## Tutorial: Scaling Deployments 61 | 62 | Behind the scenes Deployments manage ReplicaSets. Each deployment is mapped to one active ReplicaSet. Use the `kubectl get replicasets` command to view the current set of replicas. 63 | 64 | ``` 65 | kubectl get replicasets 66 | ``` 67 | 68 | ReplicaSets are scaled through the Deployment for each service and can be scaled independently. Use the `kubectl scale` command to scale the hello deployment: 69 | 70 | ``` 71 | kubectl scale deployments hello --replicas=3 72 | ``` 73 | 74 | ``` 75 | kubectl describe deployments hello 76 | ``` 77 | 78 | ``` 79 | kubectl get pods 80 | ``` 81 | 82 | ``` 83 | kubectl get replicasets 84 | ``` 85 | 86 | ## Exercise: Scaling Deployments 87 | 88 | In this exercise you will scale the `frontend` deployment using an existing deployment configuration file. 
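For the scaling exercise, the field to edit is `spec.replicas` in the deployment manifest. A sketch of the relevant fragment, with the surrounding fields elided and the value chosen only as an example:

```
spec:
  replicas: 2   # desired number of Pod copies; pick your own target count
```

Applying the updated file causes the Deployment to resize its ReplicaSet to match the new desired count.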
89 | 90 | ### Hints 91 | 92 | ``` 93 | vim deployments/frontend.yaml 94 | ``` 95 | 96 | ``` 97 | kubectl apply -f deployments/frontend.yaml 98 | ``` 99 | 100 | ## Exercise: Interact with the Frontend Service 101 | 102 | ### Hints 103 | 104 | ``` 105 | kubectl get services frontend 106 | ``` 107 | 108 | ``` 109 | curl -k https:// 110 | ``` 111 | 112 | ## Summary 113 | 114 | Deployments are the preferred way to manage application deployments. You learned how to create, expose, and scale deployments. -------------------------------------------------------------------------------- /labs/creating-and-managing-pods.md: -------------------------------------------------------------------------------- 1 | # Creating and managing pods 2 | 3 | At the core of Kubernetes is the Pod. Pods represent a logical application and hold a collection of one or more containers and volumes. In this lab you will learn how to: 4 | 5 | * Write a Pod configuration file 6 | * Create and inspect Pods 7 | * Interact with Pods remotely using kubectl 8 | 9 | In this lab you will create a Pod named `monolith` and interact with it using the kubectl command line tool. 10 | 11 | ## Tutorial: Creating Pods 12 | 13 | Explore the `monolith` pod configuration file: 14 | 15 | ``` 16 | cat pods/monolith.yaml 17 | ``` 18 | 19 | Create the `monolith` pod using kubectl: 20 | 21 | ``` 22 | kubectl create -f pods/monolith.yaml 23 | ``` 24 | 25 | ## Exercise: View Pod details 26 | 27 | Use the `kubectl get` and `kubectl describe` commands to view details for the `monolith` Pod: 28 | 29 | ### Hints 30 | 31 | ``` 32 | kubectl get pods 33 | ``` 34 | 35 | ``` 36 | kubectl describe pods 37 | ``` 38 | 39 | ### Quiz 40 | 41 | * What is the IP address of the `monolith` Pod? 42 | * What node is the `monolith` Pod running on? 43 | * What containers are running in the `monolith` Pod? 44 | * What are the labels attached to the `monolith` Pod? 45 | * What arguments are set on the `monolith` container?
46 | 47 | ## Exercise: Interact with a Pod remotely 48 | 49 | Pods are allocated a private IP address by default and cannot be reached outside of the cluster. Use the `kubectl port-forward` command to map a local port to a port inside the `monolith` pod. 50 | 51 | ### Hints 52 | 53 | Use two terminals. One to run the `kubectl port-forward` command, and the other to issue `curl` commands. 54 | 55 | ``` 56 | kubectl port-forward monolith 10080:80 57 | ``` 58 | 59 | ``` 60 | curl http://127.0.0.1:10080 61 | ``` 62 | 63 | ``` 64 | curl http://127.0.0.1:10080/secure 65 | ``` 66 | 67 | ``` 68 | curl -u user http://127.0.0.1:10080/login 69 | ``` 70 | 71 | > Type "password" at the prompt. 72 | 73 | ``` 74 | curl -H "Authorization: Bearer " http://127.0.0.1:10080/secure 75 | ``` 76 | 77 | > Use the JWT token from the previous login. 78 | 79 | ## Exercise: View the logs of a Pod 80 | 81 | Use the `kubectl logs` command to view the logs for the `monolith` Pod: 82 | 83 | ``` 84 | kubectl logs monolith 85 | ``` 86 | 87 | > Use the -f flag and observe what happens. 88 | 89 | ## Exercise: Run an interactive shell inside a Pod 90 | 91 | Use the `kubectl exec` command to run an interactive shell inside the `monolith` Pod: 92 | 93 | ``` 94 | kubectl exec monolith --stdin --tty -c monolith /bin/sh 95 | ``` 96 | -------------------------------------------------------------------------------- /labs/creating-and-managing-services.md: -------------------------------------------------------------------------------- 1 | # Creating and Managing Services 2 | 3 | Services provide stable endpoints for Pods based on a set of labels. 4 | 5 | In this lab you will create the `monolith` service and "expose" the `secure-monolith` Pod externally. 
You will learn how to: 6 | 7 | * Create a service 8 | * Use label selectors to expose a limited set of Pods externally 9 | 10 | ## Tutorial: Create a Service 11 | 12 | Explore the monolith service configuration file: 13 | 14 | ``` 15 | cat services/monolith.yaml 16 | ``` 17 | 18 | Create the monolith service using kubectl: 19 | 20 | ``` 21 | kubectl create -f services/monolith.yaml 22 | ``` 23 | 24 | Use the `gcloud compute firewall-rules` command to allow traffic to the `monolith` service: 25 | 26 | ``` 27 | gcloud compute firewall-rules create allow-monolith-nodeport \ 28 | --allow=tcp:31000 29 | ``` 30 | 31 | ## Exercise: Interact with the Monolith Service Remotely 32 | 33 | ### Hints 34 | 35 | ``` 36 | gcloud compute instances list 37 | ``` 38 | 39 | ``` 40 | curl -k https://:31000 41 | ``` 42 | 43 | ### Quiz 44 | 45 | * Why are you unable to get a response from the `monolith` service? 46 | 47 | ## Exercise: Explore the monolith Service 48 | 49 | ### Hints 50 | 51 | ``` 52 | kubectl get services monolith 53 | ``` 54 | 55 | ``` 56 | kubectl describe services monolith 57 | ``` 58 | 59 | ### Quiz 60 | 61 | * How many endpoints does the `monolith` service have? 62 | * What labels must a Pod have to be picked up by the `monolith` service? 63 | 64 | ## Tutorial: Add Labels to Pods 65 | 66 | Currently the `monolith` service does not have any endpoints. One way to troubleshoot an issue like this is to use the `kubectl get pods` command with a label query. 67 | 68 | ``` 69 | kubectl get pods -l "app=monolith" 70 | ``` 71 | 72 | ``` 73 | kubectl get pods -l "app=monolith,secure=enabled" 74 | ``` 75 | 76 | > Notice this label query does not print any results 77 | 78 | Use the `kubectl label` command to add the missing `secure=enabled` label to the `secure-monolith` Pod. 
79 | 80 | ``` 81 | kubectl label pods secure-monolith 'secure=enabled' 82 | ``` 83 | 84 | View the list of endpoints on the `monolith` service: 85 | 86 | ``` 87 | kubectl describe services monolith 88 | ``` 89 | 90 | ### Quiz 91 | 92 | * How many endpoints does the `monolith` service have? 93 | 94 | ## Exercise: Interact with the Monolith Service Remotely 95 | 96 | ### Hints 97 | 98 | ``` 99 | gcloud compute instances list 100 | ``` 101 | 102 | ``` 103 | curl -k https://:31000 104 | ``` 105 | 106 | ## Tutorial: Remove Labels from Pods 107 | 108 | In this tutorial you will observe what happens when a required label is removed from a Pod. 109 | 110 | Use the `kubectl label` command to remove the `secure` label from the `secure-monolith` Pod. 111 | 112 | ``` 113 | kubectl label pods secure-monolith secure- 114 | ``` 115 | 116 | View the list of endpoints on the `monolith` service: 117 | 118 | ``` 119 | kubectl describe services monolith 120 | ``` 121 | 122 | ### Quiz 123 | 124 | * How many endpoints does the `monolith` service have? 125 | 126 | ## Summary 127 | 128 | In this lab you learned how to expose Pods using services and labels. 129 | -------------------------------------------------------------------------------- /labs/download-a-kubernetes-release.md: -------------------------------------------------------------------------------- 1 | # Download a Kubernetes Release 2 | 3 | Official Kubernetes releases are hosted on GitHub. We are going to download a copy of the official release from Google Cloud Storage hosted in the EU for performance.
4 | 5 | ``` 6 | wget https://storage.googleapis.com/craft-conf/kubernetes.tar.gz 7 | tar -xvf kubernetes.tar.gz 8 | tar -xvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz 9 | sudo cp kubernetes/server/bin/hyperkube /usr/local/bin/ 10 | sudo cp kubernetes/server/bin/kubectl /usr/local/bin/ 11 | ``` 12 | -------------------------------------------------------------------------------- /labs/enable-and-explore-cloud-shell.md: -------------------------------------------------------------------------------- 1 | # Enable and Explore Google Cloud Shell 2 | 3 | [Google Cloud Shell](https://cloud.google.com/shell/docs) provides you with command-line access to computing resources hosted on Google Cloud Platform and is available now in the Google Cloud Platform Console. Cloud Shell makes it easy for you to manage your Cloud Platform Console projects and resources without having to install the Google Cloud SDK and other tools on your system. With Cloud Shell, the Cloud SDK gcloud command and other utilities you need are always available when you need them. 4 | 5 | ## Explore Google Cloud Shell 6 | 7 | Visit the Google Cloud Shell [getting started guide](https://cloud.google.com/shell/docs/quickstart) and work through the exercises. 8 | 9 | ## Configure Your Cloud Shell Environment 10 | 11 | Create two Cloud Shell sessions and run the following commands: 12 | 13 | To avoid setting the compute zone for each command, pick a zone from the list and set it in your config.
14 | 15 | ``` 16 | gcloud compute zones list 17 | ``` 18 | 19 | ``` 20 | gcloud config set compute/zone europe-west1-d 21 | ``` 22 | -------------------------------------------------------------------------------- /labs/install-and-configure-apiserver.md: -------------------------------------------------------------------------------- 1 | # Install and configure the Kubernetes API Server 2 | 3 | ## node0 4 | 5 | ``` 6 | gcloud compute ssh node0 7 | ``` 8 | 9 | ### Create the kube-apiserver systemd unit file: 10 | 11 | ``` 12 | [Unit] 13 | Description=Kubernetes API Server 14 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 15 | 16 | [Service] 17 | ExecStart=/usr/local/bin/hyperkube \ 18 | apiserver \ 19 | --insecure-bind-address=0.0.0.0 \ 20 | --etcd-servers=http://127.0.0.1:2379 \ 21 | --service-cluster-ip-range 10.32.0.0/24 \ 22 | --allow-privileged=true 23 | Restart=on-failure 24 | RestartSec=5 25 | 26 | [Install] 27 | WantedBy=multi-user.target 28 | ``` 29 | 30 | Start the kube-apiserver service: 31 | 32 | ``` 33 | sudo mv kube-apiserver.service /etc/systemd/system/ 34 | ``` 35 | 36 | ``` 37 | sudo systemctl daemon-reload 38 | sudo systemctl enable kube-apiserver 39 | sudo systemctl start kube-apiserver 40 | ``` 41 | 42 | ### Verify 43 | 44 | ``` 45 | sudo systemctl status kube-apiserver 46 | kubectl version 47 | kubectl get cs 48 | ``` 49 | -------------------------------------------------------------------------------- /labs/install-and-configure-controller-manager.md: -------------------------------------------------------------------------------- 1 | # Install and configure the Kubernetes Controller Manager 2 | 3 | ## node0 4 | 5 | ``` 6 | gcloud compute ssh node0 7 | ``` 8 | 9 | ### Create the kube-controller-manager systemd unit file: 10 | 11 | ``` 12 | [Unit] 13 | Description=Kubernetes Controller Manager 14 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 15 | 16 | [Service] 17 | ExecStart=/usr/local/bin/hyperkube \ 18 
| controller-manager \ 19 | --master=http://127.0.0.1:8080 20 | Restart=on-failure 21 | RestartSec=5 22 | 23 | [Install] 24 | WantedBy=multi-user.target 25 | ``` 26 | 27 | Start the kube-controller-manager service: 28 | 29 | ``` 30 | sudo mv kube-controller-manager.service /etc/systemd/system/ 31 | ``` 32 | 33 | ``` 34 | sudo systemctl daemon-reload 35 | sudo systemctl enable kube-controller-manager 36 | sudo systemctl start kube-controller-manager 37 | ``` 38 | 39 | ### Verify 40 | 41 | ``` 42 | sudo systemctl status kube-controller-manager 43 | kubectl get cs 44 | ``` 45 | -------------------------------------------------------------------------------- /labs/install-and-configure-docker.md: -------------------------------------------------------------------------------- 1 | # Install and configure Docker 2 | 3 | In this lab you will install and configure Docker on node0 and node1. Docker will run the containers created by Kubernetes and provide the API required to inspect them. 4 | 5 | ### node0 6 | 7 | ``` 8 | gcloud compute ssh node0 9 | ``` 10 | 11 | ### Create the Kubernetes Docker Bridge 12 | 13 | By default Docker handles container networking for a Kubernetes cluster. Docker requires at least one bridge to be set up before running any containers. Each Docker host must have a unique bridge IP address to avoid allocating duplicate IP addresses to containers across hosts.
14 | 15 | ``` 16 | sudo ip link add name kubernetes type bridge 17 | sudo ip addr add 10.200.0.1/24 dev kubernetes 18 | sudo ip link set kubernetes up 19 | ``` 20 | 21 | ### Install the Docker Engine 22 | 23 | ``` 24 | wget https://get.docker.com/builds/Linux/x86_64/docker-1.9.1 25 | chmod +x docker-1.9.1 26 | sudo mv docker-1.9.1 /usr/bin/docker 27 | ``` 28 | 29 | ### Create the docker systemd unit file: 30 | 31 | ``` 32 | [Unit] 33 | Description=Docker Application Container Engine 34 | Documentation=http://docs.docker.io 35 | 36 | [Service] 37 | ExecStart=/usr/bin/docker daemon \ 38 | --bridge=kubernetes \ 39 | --iptables=false \ 40 | --ip-masq=false \ 41 | --host=unix:///var/run/docker.sock \ 42 | --log-level=error \ 43 | --storage-driver=overlay 44 | Restart=on-failure 45 | RestartSec=5 46 | 47 | [Install] 48 | WantedBy=multi-user.target 49 | ``` 50 | 51 | Copy the docker unit file into place. 52 | 53 | ``` 54 | sudo mv docker.service /etc/systemd/system/docker.service 55 | ``` 56 | 57 | Start docker: 58 | 59 | ``` 60 | sudo systemctl daemon-reload 61 | sudo systemctl enable docker 62 | sudo systemctl start docker 63 | ``` 64 | 65 | #### Verify 66 | 67 | ``` 68 | sudo systemctl status docker --no-pager 69 | sudo docker version 70 | ``` 71 | 72 | ### node1 73 | 74 | ``` 75 | gcloud compute ssh node1 76 | ``` 77 | 78 | ### Create the Kubernetes Docker Bridge 79 | 80 | ``` 81 | sudo ip link add name kubernetes type bridge 82 | sudo ip addr add 10.200.1.1/24 dev kubernetes 83 | sudo ip link set kubernetes up 84 | ``` 85 | 86 | ### Install the Docker Engine 87 | 88 | ``` 89 | wget https://get.docker.com/builds/Linux/x86_64/docker-1.9.1 90 | chmod +x docker-1.9.1 91 | sudo mv docker-1.9.1 /usr/bin/docker 92 | ``` 93 | 94 | ### Create the docker systemd unit file 95 | 96 | ``` 97 | [Unit] 98 | Description=Docker Application Container Engine 99 | Documentation=http://docs.docker.io 100 | 101 | [Service] 102 | ExecStart=/usr/bin/docker daemon \ 103 | --bridge=kubernetes 
\ 104 | --iptables=false \ 105 | --ip-masq=false \ 106 | --host=unix:///var/run/docker.sock \ 107 | --log-level=error \ 108 | --storage-driver=overlay 109 | Restart=on-failure 110 | RestartSec=5 111 | 112 | [Install] 113 | WantedBy=multi-user.target 114 | ``` 115 | 116 | Copy the docker unit file into place. 117 | 118 | ``` 119 | sudo mv docker.service /etc/systemd/system/docker.service 120 | ``` 121 | 122 | Start docker: 123 | 124 | ``` 125 | sudo systemctl daemon-reload 126 | sudo systemctl enable docker 127 | sudo systemctl start docker 128 | ``` 129 | 130 | #### Verify 131 | 132 | ``` 133 | sudo systemctl status docker --no-pager 134 | sudo docker version 135 | ``` 136 | -------------------------------------------------------------------------------- /labs/install-and-configure-etcd.md: -------------------------------------------------------------------------------- 1 | # Install and configure etcd 2 | 3 | Kubernetes cluster state is stored in etcd. 4 | 5 | ## node0 6 | 7 | ``` 8 | gcloud compute ssh node0 9 | ``` 10 | 11 | ### Download etcd release 12 | 13 | ``` 14 | wget https://storage.googleapis.com/craft-conf/etcd-v2.3.2-linux-amd64.tar.gz 15 | tar -xvf etcd-v2.3.2-linux-amd64.tar.gz 16 | sudo cp etcd-v2.3.2-linux-amd64/etcdctl /usr/local/bin/ 17 | sudo cp etcd-v2.3.2-linux-amd64/etcd /usr/local/bin/ 18 | ``` 19 | 20 | ### Create the etcd systemd unit file: 21 | 22 | ``` 23 | [Unit] 24 | Description=etcd 25 | Documentation=https://github.com/coreos 26 | 27 | [Service] 28 | ExecStart=/usr/local/bin/etcd \ 29 | --data-dir=/var/lib/etcd 30 | Restart=on-failure 31 | RestartSec=5 32 | 33 | [Install] 34 | WantedBy=multi-user.target 35 | ``` 36 | 37 | Start the etcd service: 38 | 39 | ``` 40 | sudo mv etcd.service /etc/systemd/system/ 41 | ``` 42 | 43 | ``` 44 | sudo systemctl daemon-reload 45 | sudo systemctl enable etcd 46 | sudo systemctl start etcd 47 | ``` 48 | 49 | ### Verify 50 | 51 | ``` 52 | sudo systemctl status etcd 53 | ``` 54 | 55 | ``` 56 | etcdctl 
cluster-health 57 | ``` 58 | -------------------------------------------------------------------------------- /labs/install-and-configure-kubectl.md: -------------------------------------------------------------------------------- 1 | # Install and configure the kubectl CLI 2 | 3 | ## Install kubectl 4 | 5 | ### laptop 6 | 7 | #### Linux 8 | 9 | ``` 10 | curl -O https://storage.googleapis.com/bin.kuar.io/linux/kubectl 11 | chmod +x kubectl 12 | sudo cp kubectl /usr/local/bin/kubectl 13 | ``` 14 | 15 | #### OS X 16 | 17 | ``` 18 | curl -O https://storage.googleapis.com/bin.kuar.io/darwin/kubectl 19 | chmod +x kubectl 20 | sudo cp kubectl /usr/local/bin/kubectl 21 | ``` 22 | 23 | ### Configure kubectl 24 | 25 | Download the client credentials and CA cert: 26 | 27 | ``` 28 | gcloud compute copy-files node0:~/admin-key.pem . 29 | gcloud compute copy-files node0:~/admin.pem . 30 | gcloud compute copy-files node0:~/ca.pem . 31 | ``` 32 | 33 | Get the Kubernetes controller external IP: 34 | 35 | ``` 36 | EXTERNAL_IP=$(gcloud compute ssh node0 --command \ 37 | "curl -H 'Metadata-Flavor: Google' \ 38 | http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip") 39 | ``` 40 | 41 | Create the workshop cluster config: 42 | 43 | ``` 44 | kubectl config set-cluster workshop \ 45 | --certificate-authority=ca.pem \ 46 | --embed-certs=true \ 47 | --server=https://${EXTERNAL_IP}:6443 48 | ``` 49 | 50 | Add the admin user credentials: 51 | 52 | ``` 53 | kubectl config set-credentials admin \ 54 | --client-key=admin-key.pem \ 55 | --client-certificate=admin.pem \ 56 | --embed-certs=true 57 | ``` 58 | 59 | Configure the workshop context: 60 | 61 | ``` 62 | kubectl config set-context workshop \ 63 | --cluster=workshop \ 64 | --user=admin 65 | ``` 66 | 67 | ``` 68 | kubectl config use-context workshop 69 | ``` 70 | 71 | ``` 72 | kubectl config view 73 | ``` 74 | 75 | ### Explore the kubectl CLI 76 | 77 | Check the health status of the 
cluster components: 78 | 79 | ``` 80 | kubectl get cs 81 | ``` 82 | 83 | List pods: 84 | 85 | ``` 86 | kubectl get pods 87 | ``` 88 | 89 | List nodes: 90 | 91 | ``` 92 | kubectl get nodes 93 | ``` 94 | 95 | List services: 96 | 97 | ``` 98 | kubectl get services 99 | ``` 100 | -------------------------------------------------------------------------------- /labs/install-and-configure-kubelet.md: -------------------------------------------------------------------------------- 1 | # Install and configure the Kubelet 2 | 3 | ## node1 4 | 5 | ``` 6 | gcloud compute ssh node1 7 | ``` 8 | 9 | ### Download Kubernetes release tar 10 | 11 | See the [Download a Kubernetes release](download-a-kubernetes-release.md) lab. 12 | 13 | ### Create the kubelet systemd unit file: 14 | 15 | ``` 16 | [Unit] 17 | Description=Kubernetes Kubelet 18 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 19 | After=docker.service 20 | Requires=docker.service 21 | 22 | [Service] 23 | ExecStart=/usr/local/bin/hyperkube \ 24 | kubelet \ 25 | --api-servers=http://node0:8080 \ 26 | --allow-privileged=true 27 | Restart=on-failure 28 | RestartSec=5 29 | 30 | [Install] 31 | WantedBy=multi-user.target 32 | ``` 33 | 34 | Start the kubelet service: 35 | 36 | ``` 37 | sudo mv kubelet.service /etc/systemd/system/ 38 | ``` 39 | 40 | ``` 41 | sudo systemctl daemon-reload 42 | sudo systemctl enable kubelet 43 | sudo systemctl start kubelet 44 | ``` 45 | 46 | ### Verify 47 | 48 | ``` 49 | sudo systemctl status kubelet 50 | kubectl --server http://node0:8080 get nodes 51 | ``` 52 | -------------------------------------------------------------------------------- /labs/install-and-configure-scheduler.md: -------------------------------------------------------------------------------- 1 | # Install and configure the Kubernetes Scheduler 2 | 3 | ## node0 4 | 5 | ``` 6 | gcloud compute ssh node0 7 | ``` 8 | 9 | ### Create the kube-scheduler systemd unit file: 10 | 11 | ``` 12 | [Unit] 13 | 
| Description=Kubernetes Scheduler 14 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 15 | 16 | [Service] 17 | ExecStart=/usr/local/bin/hyperkube \ 18 | scheduler \ 19 | --master=http://127.0.0.1:8080 20 | Restart=on-failure 21 | RestartSec=5 22 | 23 | [Install] 24 | WantedBy=multi-user.target 25 | ``` 26 | 27 | Start the kube-scheduler service: 28 | 29 | ``` 30 | sudo mv kube-scheduler.service /etc/systemd/system/ 31 | ``` 32 | 33 | ``` 34 | sudo systemctl daemon-reload 35 | sudo systemctl enable kube-scheduler 36 | sudo systemctl start kube-scheduler 37 | ``` 38 | 39 | ### Verify 40 | 41 | ``` 42 | sudo systemctl status kube-scheduler 43 | kubectl get cs 44 | ``` 45 | -------------------------------------------------------------------------------- /labs/managing-application-configurations-and-secrets.md: -------------------------------------------------------------------------------- 1 | # Managing Application Configurations and Secrets 2 | 3 | Many applications require configuration settings and secrets such as TLS certificates to run in a production environment. In this lab you will learn how to: 4 | 5 | * Create secrets to store sensitive application data 6 | * Create configmaps to store application configuration data 7 | * Expose secrets and configmaps to Pods at runtime 8 | 9 | In this lab we will create a new Pod named `secure-monolith` based on the `healthy-monolith` Pod. The `secure-monolith` Pod secures access to the `monolith` container using [Nginx](http://nginx.org/en), which will act as a reverse proxy serving HTTPS. 10 | 11 | > The nginx container will be deployed in the same pod as the monolith container because they are tightly coupled. 12 | 13 | ## Tutorial: Creating Secrets 14 | 15 | Before we can use the `nginx` container to serve HTTPS traffic, we need some TLS certificates. In this tutorial you will store a set of self-signed TLS certificates in Kubernetes as secrets.
16 | 17 | Create the `tls-certs` secret from the TLS certificates stored under the tls directory: 18 | 19 | ``` 20 | kubectl create secret generic tls-certs --from-file=tls/ 21 | ``` 22 | 23 | Examine the `tls-certs` secret: 24 | 25 | ``` 26 | kubectl describe secrets tls-certs 27 | ``` 28 | 29 | ### Quiz 30 | 31 | * How many items are stored under the `tls-certs` secret? 32 | * What are the key names? 33 | 34 | ## Tutorial: Creating Configmaps 35 | 36 | The nginx container also needs a configuration file to set up the secure reverse proxy. In this tutorial you will create a configmap from the `proxy.conf` nginx configuration file. 37 | 38 | Create the `nginx-proxy-conf` configmap based on the `proxy.conf` nginx configuration file: 39 | 40 | ``` 41 | kubectl create configmap nginx-proxy-conf --from-file=nginx/proxy.conf 42 | ``` 43 | 44 | Examine the `nginx-proxy-conf` configmap: 45 | 46 | ``` 47 | kubectl describe configmaps nginx-proxy-conf 48 | ``` 49 | 50 | ### Quiz 51 | 52 | * How many items are stored under the `nginx-proxy-conf` configmap? 53 | * What are the key names? 54 | 55 | ## Tutorial: Use Configmaps and Secrets 56 | 57 | In this tutorial you will expose the `nginx-proxy-conf` configmap and the `tls-certs` secret to the `secure-monolith` pod at runtime. 58 | 59 | Examine the `secure-monolith` pod configuration file: 60 | 61 | ``` 62 | cat pods/secure-monolith.yaml 63 | ``` 64 | 65 | ### Quiz 66 | 67 | * How are secrets exposed to the `secure-monolith` Pod? 68 | * How are configmaps exposed to the `secure-monolith` Pod?
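As background for the quiz, the general pattern is that secrets and configmaps appear to a Pod as volumes, which containers then mount. A minimal sketch of that shape, using the secret and configmap names from this lab (the mount paths shown are hypothetical, not necessarily those in `secure-monolith.yaml`):

```
spec:
  containers:
    - name: nginx
      image: "nginx:1.9.14"
      volumeMounts:
        - name: tls-certs
          mountPath: /etc/tls          # hypothetical path
        - name: nginx-proxy-conf
          mountPath: /etc/nginx/conf.d  # hypothetical path
  volumes:
    - name: tls-certs
      secret:
        secretName: tls-certs
    - name: nginx-proxy-conf
      configMap:
        name: nginx-proxy-conf
        items:
          - key: proxy.conf
            path: proxy.conf
```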
69 | 70 | Create the `secure-monolith` Pod using kubectl: 71 | 72 | ``` 73 | kubectl create -f pods/secure-monolith.yaml 74 | ``` 75 | 76 | #### Test the HTTPS endpoint 77 | 78 | Forward local port 10443 to port 443 of the `secure-monolith` Pod: 79 | 80 | ``` 81 | kubectl port-forward secure-monolith 10443:443 82 | ``` 83 | 84 | Use the `curl` command to test the HTTPS endpoint: 85 | 86 | ``` 87 | curl --cacert tls/ca.pem https://127.0.0.1:10443 88 | ``` 89 | 90 | Use the `kubectl logs` command to verify traffic to the `secure-monolith` Pod: 91 | 92 | ``` 93 | kubectl logs -c nginx secure-monolith 94 | ``` 95 | 96 | ## Summary 97 | 98 | Secrets and Configmaps allow you to store application secrets and configuration data, then expose them to Pods at runtime. In this lab you learned how to expose Secrets and Configmaps to Pods using volume mounts. You also learned how to run multiple containers in a single Pod. 99 | -------------------------------------------------------------------------------- /labs/monitoring-and-health-checks.md: -------------------------------------------------------------------------------- 1 | # Monitoring and Health Checks 2 | 3 | Kubernetes supports monitoring applications in the form of readiness and liveness probes. Health checks can be performed on each container in a Pod. Readiness probes indicate when a Pod is "ready" to serve traffic. Liveness probes indicate that a container is "alive". If a liveness probe fails multiple times, the container will be restarted. Liveness probes that continue to fail will cause a Pod to enter a crash loop. If a readiness check fails, the container will be marked as not ready and will be removed from any load balancers. 4 | 5 | In this lab you will deploy a new Pod named `healthy-monolith`, which is largely based on the `monolith` Pod with the addition of readiness and liveness probes.
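Probes are declared per container in the Pod spec. As a reference for the shape of that configuration, here is a minimal sketch of HTTP liveness and readiness probes (the endpoint paths and port match the ones used later in this lab; the timing values are illustrative, not necessarily those in `healthy-monolith.yaml`):

```
livenessProbe:
  httpGet:
    path: /healthz
    port: 81
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 15
  timeoutSeconds: 5
readinessProbe:
  httpGet:
    path: /readiness
    port: 81
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 1
```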
6 | 7 | In this lab you will learn how to: 8 | 9 | * Create Pods with readiness and liveness probes 10 | * Troubleshoot failing readiness and liveness probes 11 | 12 | ## Tutorial: Creating Pods with Liveness and Readiness Probes 13 | 14 | Explore the `healthy-monolith` pod configuration file: 15 | 16 | ``` 17 | cat pods/healthy-monolith.yaml 18 | ``` 19 | 20 | Create the `healthy-monolith` pod using kubectl: 21 | 22 | ``` 23 | kubectl create -f pods/healthy-monolith.yaml 24 | ``` 25 | 26 | ## Exercise: View Pod details 27 | 28 | Pods will not be marked ready until the readiness probe returns an HTTP 200 response. Use the `kubectl describe` command to view details for the `healthy-monolith` Pod. 29 | 30 | ### Hints 31 | 32 | ``` 33 | kubectl describe pods 34 | ``` 35 | 36 | ### Quiz 37 | 38 | * How is the readiness of the `healthy-monolith` Pod determined? 39 | * How is the liveness of the `healthy-monolith` Pod determined? 40 | * How often is the readiness probe checked? 41 | * How often is the liveness probe checked? 42 | 43 | > The `healthy-monolith` Pod logs each health check. Use the `kubectl logs` command to view them. 44 | 45 | ## Tutorial: Experiment with Readiness Probes 46 | 47 | In this tutorial you will observe how Kubernetes responds to failed readiness probes. The `monolith` container supports the ability to force failures of its readiness and liveness probes. This will enable us to simulate failures for the `healthy-monolith` Pod. 48 | 49 | Use the `kubectl port-forward` command to forward a local port to the health port of the `healthy-monolith` Pod. 50 | 51 | ``` 52 | kubectl port-forward healthy-monolith 10081:81 53 | ``` 54 | 55 | > You now have access to the /healthz and /readiness HTTP endpoints exposed by the monolith container. 56 | 57 | ### Experiment with Readiness Probes 58 | 59 | Force the `monolith` container readiness probe to fail.
Use the `curl` command to toggle the readiness probe status: 60 | 61 | ``` 62 | curl http://127.0.0.1:10081/readiness/status 63 | ``` 64 | 65 | Wait about 45 seconds and get the status of the `healthy-monolith` Pod using the `kubectl get pods` command: 66 | 67 | ``` 68 | kubectl get pods healthy-monolith 69 | ``` 70 | 71 | Use the `kubectl describe` command to get more details about the failing readiness probe: 72 | 73 | ``` 74 | kubectl describe pods healthy-monolith 75 | ``` 76 | 77 | > Notice the events for the `healthy-monolith` Pod report details about the failing readiness probe. 78 | 79 | Force the `monolith` container readiness probe to pass. Use the `curl` command to toggle the readiness probe status: 80 | 81 | ``` 82 | curl http://127.0.0.1:10081/readiness/status 83 | ``` 84 | 85 | Wait about 15 seconds and get the status of the `healthy-monolith` Pod using the `kubectl get pods` command: 86 | 87 | ``` 88 | kubectl get pods healthy-monolith 89 | ``` 90 | 91 | ## Exercise: Experiment with Liveness Probes 92 | 93 | Building on what you learned in the previous tutorial, use the `kubectl port-forward` and `curl` commands to force the `monolith` container liveness probe to fail. Observe how Kubernetes responds to failing liveness probes. 94 | 95 | ### Hints 96 | 97 | ``` 98 | kubectl port-forward healthy-monolith 10081:81 99 | ``` 100 | 101 | ``` 102 | curl http://127.0.0.1:10081/healthz/status 103 | ``` 104 | 105 | ### Quiz 106 | 107 | * What happened when the liveness probe failed? 108 | * What events were created when the liveness probe failed? 109 | 110 | ## Summary 111 | 112 | In this lab you learned that Kubernetes supports application monitoring using 113 | liveness and readiness probes. You also learned how to add readiness and liveness probes to Pods and what happens when probes fail.
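The restart behavior you observed boils down to a consecutive-failure counter: the kubelet restarts a container once its liveness probe fails a configured number of times in a row. A pure-shell sketch of that decision logic, runnable without a cluster (the threshold of 3 mirrors the Kubernetes default `failureThreshold`; the health sequence is made up):

```shell
# Sketch: how a kubelet-style liveness loop decides to restart a container.
failure_threshold=3   # mirrors the probe's failureThreshold field
failures=0
restarts=0

probe() {
  # Stand-in for an HTTP GET against /healthz:
  # 0 = healthy, 1 = failing.
  return "$1"
}

# Simulated probe results over time: healthy, healthy, then three
# consecutive failures (triggering a restart), then healthy again.
for health in 0 0 1 1 1 0; do
  if probe "$health"; then
    failures=0
  else
    failures=$((failures + 1))
    if [ "$failures" -ge "$failure_threshold" ]; then
      restarts=$((restarts + 1))
      failures=0
    fi
  fi
done

echo "restarts=$restarts failures=$failures"   # prints: restarts=1 failures=0
```

A single transient failure resets as soon as a probe succeeds, which is why the readiness toggle earlier took roughly a probe period to show up in `kubectl get pods`.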
114 | -------------------------------------------------------------------------------- /labs/provision-kubernetes-cluster-with-gke.md: -------------------------------------------------------------------------------- 1 | # Provision a Kubernetes Cluster with GKE 2 | 3 | GKE is a hosted Kubernetes service from Google. GKE clusters can be provisioned using a single command: 4 | 5 | ``` 6 | gcloud container clusters create craft 7 | ``` 8 | 9 | GKE clusters can be customized and support different machine types, node counts, and network settings. 10 | 11 | ## Create a Kubernetes cluster using gcloud 12 | 13 | ``` 14 | gcloud container clusters create craft \ 15 | --disk-size 200 \ 16 | --enable-cloud-logging \ 17 | --enable-cloud-monitoring \ 18 | --machine-type n1-standard-1 \ 19 | --num-nodes 3 20 | ``` 21 | -------------------------------------------------------------------------------- /labs/provisioning-ubuntu-on-gce.md: -------------------------------------------------------------------------------- 1 | # Provisioning Ubuntu 15.10 on Google Compute Engine 2 | 3 | In this lab you will provision two GCE instances running Ubuntu 15.10. These instances will be used to provision a two-node Kubernetes cluster.
4 | 5 | ## Provision 2 GCE instances 6 | 7 | ### Provision Ubuntu using the gcloud CLI 8 | 9 | #### node0 10 | 11 | ``` 12 | gcloud compute instances create node0 \ 13 | --image-project ubuntu-os-cloud \ 14 | --image ubuntu-1510-wily-v20160405 \ 15 | --boot-disk-size 200GB \ 16 | --machine-type n1-standard-1 \ 17 | --can-ip-forward 18 | ``` 19 | 20 | #### node1 21 | 22 | ``` 23 | gcloud compute instances create node1 \ 24 | --image-project ubuntu-os-cloud \ 25 | --image ubuntu-1510-wily-v20160405 \ 26 | --boot-disk-size 200GB \ 27 | --machine-type n1-standard-1 \ 28 | --can-ip-forward 29 | ``` 30 | 31 | #### Verify 32 | 33 | ``` 34 | gcloud compute instances list 35 | ``` 36 | -------------------------------------------------------------------------------- /labs/rolling-out-updates.md: -------------------------------------------------------------------------------- 1 | # Rolling out Updates 2 | 3 | Kubernetes makes it easy to roll out updates to your applications using the built-in rolling update mechanism.
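How aggressively a rolling update proceeds is governed by the Deployment's update strategy. A sketch of the relevant spec fields for reference (the values shown are common illustrative settings, not taken from this repo's manifests):

```
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # Pods that may be unavailable during the update
      maxSurge: 1         # extra Pods that may be created during the update
```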
In this lab you will learn how to: 4 | 5 | * Modify deployments to trigger rolling updates 6 | * Pause and resume an active rolling update 7 | * Roll back a deployment to a previous revision 8 | 9 | ## Tutorial: Roll out a new version of the Auth service 10 | 11 | ``` 12 | kubectl rollout history deployment auth 13 | ``` 14 | 15 | Modify the auth deployment image: 16 | 17 | ``` 18 | vim deployments/auth.yaml 19 | ``` 20 | 21 | ``` 22 | image: "kelseyhightower/auth:2.0.0" 23 | ``` 24 | 25 | ``` 26 | kubectl apply -f deployments/auth.yaml --record 27 | ``` 28 | 29 | ``` 30 | kubectl describe deployments auth 31 | ``` 32 | 33 | ``` 34 | kubectl get replicasets 35 | ``` 36 | 37 | ``` 38 | kubectl rollout history deployment auth 39 | ``` 40 | 41 | ## Tutorial: Pause and Resume an Active Rollout 42 | 43 | ``` 44 | kubectl rollout history deployment hello 45 | ``` 46 | 47 | Modify the hello deployment image: 48 | 49 | ``` 50 | vim deployments/hello.yaml 51 | ``` 52 | 53 | ``` 54 | image: "kelseyhightower/hello:2.0.0" 55 | ``` 56 | 57 | ``` 58 | kubectl apply -f deployments/hello.yaml --record 59 | ``` 60 | 61 | ``` 62 | kubectl describe deployments hello 63 | ``` 64 | 65 | ``` 66 | kubectl rollout pause deployment hello 67 | ``` 68 | 69 | ``` 70 | kubectl rollout resume deployment hello 71 | ``` 72 | 73 | ## Exercise: Roll back the Hello service 74 | 75 | Use the `kubectl rollout undo` command to roll back to a previous deployment of the Hello service. 76 | 77 | ## Summary 78 | 79 | In this lab you learned how to roll out updates to your applications by modifying deployment objects to trigger rolling updates. You also learned how to pause and resume an active rolling update and roll it back using the `kubectl rollout` command. --------------------------------------------------------------------------------