├── ..data ├── .gitignore ├── LICENSE ├── README.md ├── TROUBLESHOOTING.md ├── envoy-front-proxy ├── Dockerfile-frontenvoy ├── Dockerfile-service ├── README.md ├── docker-compose.yml ├── front-envoy.yaml ├── service-envoy.yaml ├── service.py └── start_service.sh ├── exercise-1 ├── README.md ├── README_AWS.md └── img │ ├── IGdemo.png │ ├── IGdemoattach.png │ ├── VPCdemo.png │ └── k8sdesign.png ├── exercise-10 └── README.md ├── exercise-11 └── README.md ├── exercise-12 └── README.md ├── exercise-13 └── README.md ├── exercise-14 └── README.md ├── exercise-15 └── README.md ├── exercise-16 └── README.md ├── exercise-2 ├── README.md └── optional.md ├── exercise-3 └── README.md ├── exercise-4 └── README.md ├── exercise-5 └── README.md ├── exercise-6 └── README.md ├── exercise-7 └── README.md ├── exercise-8 └── README.md ├── exercise-8a └── README.md ├── exercise-9 └── README.md ├── exercise-envoy └── README.md ├── images ├── boost_mode.png ├── cloud_shell.png ├── docker_compose_v0.1.svg ├── docker_file_sharing.png ├── homescreen.png ├── homescreen2.png ├── k8console.png ├── project_name.jpg └── welcomeaccount.png ├── istio ├── deny-guestbook-service.yaml ├── guestbook-gateway.yaml ├── guestbook-service-503-vs.yaml ├── guestbook-service-dest.yaml ├── guestbook-service-retry-vs.yaml ├── guestbook-ui-80p-v1-vs.yaml ├── guestbook-ui-chrome-vs.yaml ├── guestbook-ui-delay-vs.yaml ├── guestbook-ui-dest.yaml ├── guestbook-ui-v1-vs.yaml ├── guestbook-ui-vs.yaml ├── helloworld-service-80p-v1-vs.yaml ├── helloworld-service-dest.yaml ├── helloworld-service-v1-vs.yaml └── rate-limits.yaml ├── kubernetes-v2 ├── guestbook-ui-deployment-v2.yaml └── helloworld-deployment-v2.yaml ├── kubernetes ├── guestbook-deployment.yaml ├── guestbook-service.yaml ├── guestbook-ui-deployment.yaml ├── guestbook-ui-service.yaml ├── helloworld-deployment.yaml ├── helloworld-service.yaml ├── mysql-deployment.yaml ├── mysql-service.yaml ├── redis-deployment.yaml └── redis-service.yaml ├── scripts └── 
add_helm.sh └── setup └── README.md /..data: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/..data -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | hs_err_pid* 3 | target 4 | .idea 5 | 6 | target/ 7 | !.mvn/wrapper/maven-wrapper.jar 8 | 9 | ### STS ### 10 | .apt_generated 11 | .classpath 12 | .factorypath 13 | .project 14 | .settings 15 | .springBeans 16 | 17 | ### IntelliJ IDEA ### 18 | .idea 19 | *.iws 20 | *.iml 21 | *.ipr 22 | 23 | ### NetBeans ### 24 | nbproject/private/ 25 | build/ 26 | nbbuild/ 27 | dist/ 28 | nbdist/ 29 | .nb-gradle/ 30 | *.sw? 31 | .envrc 32 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 
22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. 
If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. 
You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. 
(Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Workshop Setup 2 | - [Setup for the workshop](setup/README.md) 3 | - [Exercise 1 - Startup a Cluster using the Google Kubernetes Engine](exercise-1/README.md) 4 | - [Exercise 1 - AWS Setup](exercise-1/README_AWS.md) 5 | 6 | ## Optional Kubernetes Exercises 7 | 8 | - [Exercise 2 - Deploying a microservice to Kubernetes](exercise-2/README.md) 9 | - [Exercise 3 - Creating a Kubernetes Service](exercise-3/README.md) 10 | - [Exercise 4 - Scaling in and out](exercise-4/README.md) 11 | 12 | ## Creating a Service Mesh with Istio 13 | 14 | - [Exercise 5 - Installing Istio](exercise-5/README.md) 15 | - [Exercise 6 - Creating a Service Mesh with Istio Proxy](exercise-6/README.md) 16 | - [Exercise 7 - Istio Ingress Controller](exercise-7/README.md) 17 | - [Exercise 8 - Telemetry](exercise-8/README.md) 18 | - [Exercise 9 - Distributed 
Tracing](exercise-9/README.md) 19 | - [Exercise 10 - Request Routing and Canary Deployments](exercise-10/README.md) 20 | - [Exercise 11 - Fault Injection and Rate Limiting](exercise-11/README.md) 21 | - [Exercise 12 - Service Isolation Using Mixer](exercise-12/README.md) 22 | 23 | ## Securing Istio 24 | 25 | - [Exercise 13 - Istio Mutual TLS](exercise-13/README.md) 26 | - [Exercise 14 - Ensuring security with iptables](exercise-14/README.md) 27 | - [Exercise 15 - mTLS again, now with 100% more SPIFFE](exercise-15/README.md) 28 | - [Exercise 16 - Istio RBAC](exercise-16/README.md) 29 | 30 | ## Credits 31 | These workshop exercises were built with help from a number of amazing Kubernetes and Istio experts from Google and [Grand Cloud](https://www.grandcloud.com). This content is free to use; we only ask that you keep the original attributions in any future contributions or forks. 32 | 33 | #### Ryan Knight [@knight_cloud](https://twitter.com/knight_cloud) 34 | 35 | #### Ray Tsang [@saturnism](https://twitter.com/saturnism) 36 | 37 | The Kubernetes and Istio exercises are derived from the work of Ray Tsang [@saturnism](https://twitter.com/saturnism). 38 | 39 | Many of the exercises were copied from the [Istio Workshop Google Doc](https://t.co/yDJY1yODzX) 40 | 41 | and from the exercises in these repositories: 42 | 43 | [https://github.com/saturnism/spring-boot-docker](https://github.com/saturnism/spring-boot-docker) 44 | 45 | [https://github.com/saturnism/istio-by-example-java](https://github.com/saturnism/istio-by-example-java) 46 | 47 | #### Zach Butcher [@ZachButcher](https://twitter.com/ZackButcher) 48 | 49 | Zach was instrumental in helping write the Istio tutorials, in particular the Istio Mixer exercises.
50 | 51 | #### Ben Edwards - No Social Media Presence 52 | -------------------------------------------------------------------------------- /TROUBLESHOOTING.md: -------------------------------------------------------------------------------- 1 | ## Google Cloud SDK Troubleshooting 2 | 3 | If kubectl is not connecting to your cluster or not showing pods, try fetching the cluster credentials: 4 | 5 | `gcloud container clusters get-credentials <cluster-name>` 6 | 7 | ## Kubernetes Troubleshooting 8 | 9 | The most important tool to know for debugging problems is the describe resource command. 10 | It can be used to describe any Kubernetes resource. 11 | 12 | `kubectl describe pod helloworld-service-v1-119527584-jwfzh` 13 | 14 | `kubectl describe service helloworld-service` 15 | 16 | `kubectl top` can be used to see resource utilization. A very common problem is that a pod could not be scheduled. 17 | 18 | `kubectl top nodes` 19 | 20 | `kubectl top pods` 21 | 22 | View the logs of a pod by finding the full pod name and specifying the main container, e.g.: 23 | 24 | `kubectl logs guestbook-ui-1872113123-1lw81 -c guestbook-ui` 25 | -------------------------------------------------------------------------------- /envoy-front-proxy/Dockerfile-frontenvoy: -------------------------------------------------------------------------------- 1 | FROM envoyproxy/envoy:latest 2 | 3 | RUN apt-get update && apt-get -q install -y \ 4 | curl 5 | CMD /usr/local/bin/envoy -c /etc/front-envoy.yaml --service-cluster front-proxy 6 | -------------------------------------------------------------------------------- /envoy-front-proxy/Dockerfile-service: -------------------------------------------------------------------------------- 1 | FROM envoyproxy/envoy-alpine:latest 2 | 3 | RUN apk update && apk add python3 bash 4 | RUN python3 --version && pip3 --version 5 | RUN pip3 install -q Flask==0.11.1 requests==2.18.4 6 | RUN mkdir /code 7 | ADD ./service.py /code 8 | ADD ./start_service.sh /usr/local/bin/start_service.sh 9 |
RUN chmod u+x /usr/local/bin/start_service.sh 10 | ENTRYPOINT /usr/local/bin/start_service.sh 11 | -------------------------------------------------------------------------------- /envoy-front-proxy/README.md: -------------------------------------------------------------------------------- 1 | To learn about this sandbox and for instructions on how to run it please head over 2 | to the [envoy docs](https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy.html) 3 | -------------------------------------------------------------------------------- /envoy-front-proxy/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '2' 2 | services: 3 | 4 | front-envoy: 5 | build: 6 | context: . 7 | dockerfile: Dockerfile-frontenvoy 8 | volumes: 9 | - ./front-envoy.yaml:/etc/front-envoy.yaml 10 | networks: 11 | - envoymesh 12 | expose: 13 | - "80" 14 | - "8001" 15 | ports: 16 | - "8000:80" 17 | - "8001:8001" 18 | 19 | service1: 20 | build: 21 | context: . 22 | dockerfile: Dockerfile-service 23 | volumes: 24 | - ./service-envoy.yaml:/etc/service-envoy.yaml 25 | networks: 26 | envoymesh: 27 | aliases: 28 | - service1 29 | environment: 30 | - SERVICE_NAME=1 31 | expose: 32 | - "80" 33 | 34 | service2: 35 | build: 36 | context: . 
37 | dockerfile: Dockerfile-service 38 | volumes: 39 | - ./service-envoy.yaml:/etc/service-envoy.yaml 40 | networks: 41 | envoymesh: 42 | aliases: 43 | - service2 44 | environment: 45 | - SERVICE_NAME=2 46 | expose: 47 | - "80" 48 | 49 | networks: 50 | envoymesh: {} 51 | -------------------------------------------------------------------------------- /envoy-front-proxy/front-envoy.yaml: -------------------------------------------------------------------------------- 1 | static_resources: 2 | listeners: 3 | - address: 4 | socket_address: 5 | address: 0.0.0.0 6 | port_value: 80 7 | filter_chains: 8 | - filters: 9 | - name: envoy.http_connection_manager 10 | config: 11 | codec_type: auto 12 | stat_prefix: ingress_http 13 | route_config: 14 | name: local_route 15 | virtual_hosts: 16 | - name: backend 17 | domains: 18 | - "*" 19 | routes: 20 | - match: 21 | prefix: "/service/1" 22 | route: 23 | cluster: service1 24 | - match: 25 | prefix: "/service/2" 26 | route: 27 | cluster: service2 28 | http_filters: 29 | - name: envoy.router 30 | config: {} 31 | clusters: 32 | - name: service1 33 | connect_timeout: 0.25s 34 | type: strict_dns 35 | lb_policy: round_robin 36 | http2_protocol_options: {} 37 | hosts: 38 | - socket_address: 39 | address: service1 40 | port_value: 80 41 | - name: service2 42 | connect_timeout: 0.25s 43 | type: strict_dns 44 | lb_policy: round_robin 45 | http2_protocol_options: {} 46 | hosts: 47 | - socket_address: 48 | address: service2 49 | port_value: 80 50 | admin: 51 | access_log_path: "/dev/null" 52 | address: 53 | socket_address: 54 | address: 0.0.0.0 55 | port_value: 8001 56 | -------------------------------------------------------------------------------- /envoy-front-proxy/service-envoy.yaml: -------------------------------------------------------------------------------- 1 | static_resources: 2 | listeners: 3 | - address: 4 | socket_address: 5 | address: 0.0.0.0 6 | port_value: 80 7 | filter_chains: 8 | - filters: 9 | - name: 
envoy.http_connection_manager 10 | config: 11 | codec_type: auto 12 | stat_prefix: ingress_http 13 | route_config: 14 | name: local_route 15 | virtual_hosts: 16 | - name: service 17 | domains: 18 | - "*" 19 | routes: 20 | - match: 21 | prefix: "/service" 22 | route: 23 | cluster: local_service 24 | http_filters: 25 | - name: envoy.router 26 | config: {} 27 | clusters: 28 | - name: local_service 29 | connect_timeout: 0.25s 30 | type: strict_dns 31 | lb_policy: round_robin 32 | hosts: 33 | - socket_address: 34 | address: 127.0.0.1 35 | port_value: 8080 36 | admin: 37 | access_log_path: "/dev/null" 38 | address: 39 | socket_address: 40 | address: 0.0.0.0 41 | port_value: 8081 42 | -------------------------------------------------------------------------------- /envoy-front-proxy/service.py: -------------------------------------------------------------------------------- 1 | from flask import Flask 2 | from flask import request 3 | import socket 4 | import os 5 | import sys 6 | import requests 7 | 8 | app = Flask(__name__) 9 | 10 | TRACE_HEADERS_TO_PROPAGATE = [ 11 | 'X-Ot-Span-Context', 12 | 'X-Request-Id', 13 | 14 | # Zipkin headers 15 | 'X-B3-TraceId', 16 | 'X-B3-SpanId', 17 | 'X-B3-ParentSpanId', 18 | 'X-B3-Sampled', 19 | 'X-B3-Flags', 20 | 21 | # Jaeger header (for native client) 22 | "uber-trace-id" 23 | ] 24 | 25 | @app.route('/service/<service_number>') 26 | def hello(service_number): 27 | return ('Hello from behind Envoy (service {})!
hostname: {} resolved' 28 | 'hostname: {}\n'.format(os.environ['SERVICE_NAME'], 29 | socket.gethostname(), 30 | socket.gethostbyname(socket.gethostname()))) 31 | 32 | @app.route('/trace/<service_number>') 33 | def trace(service_number): 34 | headers = {} 35 | # call service 2 from service 1 36 | if int(os.environ['SERVICE_NAME']) == 1 : 37 | for header in TRACE_HEADERS_TO_PROPAGATE: 38 | if header in request.headers: 39 | headers[header] = request.headers[header] 40 | ret = requests.get("http://localhost:9000/trace/2", headers=headers) 41 | return ('Hello from behind Envoy (service {})! hostname: {} resolved' 42 | 'hostname: {}\n'.format(os.environ['SERVICE_NAME'], 43 | socket.gethostname(), 44 | socket.gethostbyname(socket.gethostname()))) 45 | 46 | if __name__ == "__main__": 47 | app.run(host='127.0.0.1', port=8080, debug=True) 48 | -------------------------------------------------------------------------------- /envoy-front-proxy/start_service.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | python3 /code/service.py & 3 | envoy -c /etc/service-envoy.yaml --service-cluster service${SERVICE_NAME} -------------------------------------------------------------------------------- /exercise-1/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 1 - Startup a Kubernetes Cluster using the Google Kubernetes Engine 2 | 3 | ### Enable Google Cloud APIs and set Default Zone and Region 4 | 5 | You should perform all of the lab instructions directly in Cloud Shell. 6 | 7 | 1. Enable Google Cloud APIs that you'll be using from our Kubernetes cluster. 8 | 9 | ```sh 10 | gcloud services enable \ 11 | cloudapis.googleapis.com \ 12 | container.googleapis.com \ 13 | containerregistry.googleapis.com 14 | ``` 15 | 16 | 2.
Set the default zone and region 17 | 18 | ```sh 19 | gcloud config set compute/zone us-central1-f 20 | gcloud config set compute/region us-central1 21 | ``` 22 | 23 | Note: For the lab, use the region/zone recommended by the instructor. Learn more about different zones and regions in [Regions & Zones documentation](https://cloud.google.com/compute/docs/zones). 24 | 25 | ### Create a Kubernetes Cluster using the Google Kubernetes Engine 26 | 27 | 1 - Creating a Kubernetes cluster in Google Cloud Platform is very easy! Use Kubernetes Engine to create a cluster: 28 | 29 | ```sh 30 | gcloud beta container clusters create guestbook \ 31 | --addons=HorizontalPodAutoscaling,HttpLoadBalancing \ 32 | --machine-type=n1-standard-4 \ 33 | --cluster-version=1.12 \ 34 | --enable-stackdriver-kubernetes --enable-ip-alias \ 35 | --enable-autoscaling --min-nodes=3 --max-nodes=5 \ 36 | --enable-autorepair \ 37 | --scopes cloud-platform 38 | ``` 39 | 40 | **Note:** You can specify Istio as one of the addons when you create the cluster. However, for this workshop we will install Istio manually. 41 | 42 | **Note:** The scopes parameter is important for this lab. Scopes determine what Google Cloud Platform resources these newly created instances can access. By default, instances are able to read from Google Cloud Storage, write metrics to Google Cloud Monitoring, etc. For our lab, we add the cloud-platform scope to give us more privileges, such as writing to Cloud Storage as well. 43 | 44 | This will take a few minutes to run. Behind the scenes, it will create Google Compute Engine instances, and configure each instance as a Kubernetes node. These instances don’t include the Kubernetes Master node. In Google Kubernetes Engine, the Kubernetes Master node is a managed service, so you don’t have to worry about it! 45 | 46 | You can see the newly created instances in the Compute Engine → VM Instances page. 47 | 48 | 2 - Grant cluster administrator (admin) permissions to the current user.
To create the necessary RBAC rules for Istio, the current user requires admin permissions. 49 | 50 | ```sh 51 | kubectl create clusterrolebinding cluster-admin-binding \ 52 | --clusterrole=cluster-admin \ 53 | --user=$(gcloud config get-value core/account) 54 | ``` 55 | 56 | Admin permissions are required to install Istio. 57 | 58 | 3 - Verify kubectl 59 | 60 | ```sh 61 | kubectl version 62 | ``` 63 | 64 | 4 - Install Helm 65 | 66 | Helm is a package manager for Kubernetes, similar to Linux package managers like RPM. We will use Helm to install Istio. 67 | 68 | Install Helm using the install script in the workshop script directory: 69 | 70 | ```sh 71 | cd istio-workshop/ 72 | sh scripts/add_helm.sh 73 | ``` 74 | 75 | This script is from Jonathan Campos's blog post on [Installing Helm in Google Kubernetes Engine (GKE)](https://medium.com/google-cloud/installing-helm-in-google-kubernetes-engine-7f07f43c536e). 76 | 77 | 5 - Optional: View the Kubernetes Cluster in the Web Console 78 | 79 | You can view the Kubernetes Cluster in the Web Console by clicking on the hamburger icon and then going to Kubernetes Engine: 80 | 81 | ![Google Cloud Kubernetes Engine](../images/k8console.png) 82 | 83 | 84 | 85 | ## Explanation 86 | #### By Ray Tsang [@saturnism](https://twitter.com/saturnism) 87 | 88 | This will take a few minutes to run. Behind the scenes, it will create Google Compute Engine instances, and configure each instance as a Kubernetes node. These instances don’t include the Kubernetes Master node. In Google Kubernetes Engine, the Kubernetes Master node is a managed service, so you don’t have to worry about it!
89 | 90 | #### [Continue to Exercise 2 - Deploying a microservice to Kubernetes](../exercise-2/README.md) 91 | -------------------------------------------------------------------------------- /exercise-1/README_AWS.md: -------------------------------------------------------------------------------- 1 | # Exercise 1 - Startup a Kubernetes Cluster 2 | 3 | ## Prerequisites 4 | 5 | ### Install Kops 6 | 7 | [Github Kops install](https://github.com/kubernetes/kops/blob/master/docs/install.md) 8 | 9 | #### MacOS 10 | 11 | From Homebrew: 12 | 13 | ```sh 14 | brew update && brew install kops 15 | ``` 16 | 17 | From Github: 18 | 19 | ```sh 20 | wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64 21 | chmod +x ./kops 22 | sudo mv ./kops /usr/local/bin/ 23 | ``` 24 | 25 | You can also [install from source](development/building.md). 26 | 27 | #### Linux 28 | 29 | From Github: 30 | 31 | ```sh 32 | wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64 33 | chmod +x ./kops 34 | sudo mv ./kops /usr/local/bin/ 35 | ``` 36 | 37 | ### AWS User 38 | 39 | Create a user with sufficient privileges: 40 | 41 | ``` 42 | AmazonEC2FullAccess 43 | AmazonRoute53FullAccess 44 | AmazonS3FullAccess 45 | IAMFullAccess 46 | AmazonVPCFullAccess 47 | ``` 48 | You can create the kops user in the console or with the following AWS CLI commands: 49 | 50 | ```sh 51 | aws iam create-group --group-name kops 52 | 53 | aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops 54 | aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops 55 | aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops 56 | aws
iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops 57 | aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops 58 | 59 | aws iam create-user --user-name kops 60 | 61 | aws iam add-user-to-group --user-name kops --group-name kops 62 | 63 | aws iam create-access-key --user-name kops 64 | ``` 65 | 66 | Retrieve the **AWS KEY** 67 | 68 | ### Domain registration on AWS Route 53 69 | 70 | You can create a subdomain or attach to an existing domain. 71 | 72 | Check your domain: 73 | 74 | ```sh 75 | dig ns mydemodomain.domain.cloud 76 | ``` 77 | 78 | ``` 79 | ;; ANSWER SECTION: 80 | mydemodomain.domain.cloud. 866 IN SOA ns-138.awsdns-17.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 81 | ``` 82 | 83 | You need to retrieve the **Hosted Zone ID** from the console or the command line. 84 | 85 | ### S3 Bucket for State Store 86 | 87 | You need to create an S3 bucket to keep the state of your k8s cluster and be able to perform admin tasks. 88 | 89 | ```sh 90 | aws s3api create-bucket \ 91 | --bucket mydemobucket-cloud-state-store \ 92 | --region eu-central-1 93 | ``` 94 | 95 | Retrieve the **S3 address** 96 | 97 | ### Create your dedicated VPC (Optional) 98 | 99 | ![VPC Creation](img/VPCdemo.png) 100 | 101 | Retrieve the **VPC Id** 102 | 103 | ### Create your AWS Internet Gateway (Optional) 104 | 105 | If you have previously created a dedicated VPC, you need to create an AWS Internet Gateway: 106 | 107 | ![VPC Creation](img/IGdemo.png) 108 | 109 | And attach it to your VPC: 110 | 111 | ![VPC Creation](img/IGdemoattach.png) 112 | 113 | ## Create your Cluster 114 | 115 | ### Launch Creation 116 | 117 | Export your AWS credentials and S3 bucket: 118 | 119 | ```sh 120 | export AWS_ACCESS_KEY_ID=XXXXXX 121 | export AWS_SECRET_ACCESS_KEY=XXXXX 122 | export KOPS_STATE_STORE=s3://mydemobucket-cloud-state-store 123 | ``` 124 | 125 | We are now ready to launch our k8s cluster creation.
In this exercise we will create a cluster with a private topology, split across three AZs as below: 126 | 127 | ![AWS K8S Overview](img/k8sdesign.png) 128 | 129 | ```sh 130 | kops create cluster \ 131 | --node-count 3 \ 132 | --zones eu-central-1a,eu-central-1b,eu-central-1c \ 133 | --master-zones eu-central-1a,eu-central-1b,eu-central-1c \ 134 | --dns-zone=YOURHOSTEDZONEID \ 135 | --node-size t2.micro \ 136 | --master-size t2.micro \ 137 | --topology private \ 138 | --networking calico \ 139 | --vpc=YOURVPCID \ 140 | --bastion \ 141 | --kubernetes-version=1.7.6 \ 142 | --name=k8s-main.mydemodomain.domain.cloud \ 143 | --cloud-labels "Name=Test Cluster,Env=Demo" \ 144 | --ssh-public-key ~/.ssh/mykey.pub 145 | ``` 146 | 147 | Launch the installation: 148 | 149 | ```sh 150 | kops update cluster k8s-main.mydemodomain.domain.cloud --yes 151 | ``` 152 | 153 | Check the installation. 154 | 155 | Validate cluster creation on AWS: 156 | 157 | ```sh 158 | kops validate cluster 159 | ``` 160 | 161 | Validate the K8S cluster: 162 | 163 | ```sh 164 | kubectl get nodes 165 | ``` 166 | 167 | ### Install Addons 168 | 169 | Install addons if needed: 170 | 171 | * Add Heapster 172 | * Add Dashboard 173 | * External DNS 174 | * Node Problem Detector 175 | 176 | ### Post Installation 177 | 178 | If you want to use **admission control** you need to edit the existing cluster and add the following lines: 179 | 180 | ```sh 181 | kops edit cluster k8s-main.mydemodomain.domain.cloud 182 | ``` 183 | 184 | ``` 185 | kubeAPIServer: 186 | admissionControl: 187 | - NamespaceLifecycle 188 | - LimitRanger 189 | - ServiceAccount 190 | - PersistentVolumeLabel 191 | - DefaultStorageClass 192 | - ResourceQuota 193 | - DefaultTolerationSeconds 194 | - Initializers 195 | runtimeConfig: 196 | admissionregistration.k8s.io/v1alpha1: "true" 197 | ``` 198 | 199 | Perform the update: 200 | 201 | ```sh 202 | kops update cluster --yes 203 | kops rolling-update cluster --yes 204 | ``` 205 | 206 | Check the result with: 207 | ``` 208 | kubectl api-versions |
grep admi 209 | ;;; 210 | admissionregistration.k8s.io/v1alpha1 211 | ``` 212 | -------------------------------------------------------------------------------- /exercise-1/img/IGdemo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/exercise-1/img/IGdemo.png -------------------------------------------------------------------------------- /exercise-1/img/IGdemoattach.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/exercise-1/img/IGdemoattach.png -------------------------------------------------------------------------------- /exercise-1/img/VPCdemo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/exercise-1/img/VPCdemo.png -------------------------------------------------------------------------------- /exercise-1/img/k8sdesign.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/exercise-1/img/k8sdesign.png -------------------------------------------------------------------------------- /exercise-10/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 10 - Request Routing and Canary Testing 2 | 3 | #### Deploy Guestbook UI 2.0 4 | 5 | We currently have Guestbook UI 1.0 running in the environment. Let's deploy Guestbook UI 2.0, which has a green background. 
6 | 7 | ```sh 8 | kubectl apply -f kubernetes-v2/guestbook-ui-deployment-v2.yaml 9 | ``` 10 | 11 | Once deployed, browse to the Ingress IP from the previous exercise and you should see that every time you reload the Guestbook UI website, it'll switch between the 2 versions (2 different background colors). 12 | 13 | #### Default to V1 14 | 15 | We can default the production traffic to V1. 16 | 17 | 1 - Define Destination Rule 18 | 19 | Istio needs your help to understand which deployments/pods belong to which version. We can use a `DestinationRule` to define `subsets`. A `subset` groups different pods together using Kubernetes label selectors. 20 | 21 | ```sh 22 | kubectl apply -f istio/guestbook-ui-dest.yaml 23 | ``` 24 | 25 | 2 - Define Virtual Service 26 | 27 | Earlier in the lab we used a virtual service to bind a Kubernetes service to the Istio Ingress Gateway. A virtual service can also be used to define routing rules and traffic shifting rules. Let's configure the virtual service so that all the requests go to `v1`. 28 | 29 | ```sh 30 | kubectl apply -f istio/guestbook-ui-v1-vs.yaml 31 | ``` 32 | 33 | If you refresh the Guestbook UI several times, you should see that all of the requests are now served by the V1 application. 34 | 35 | #### Traffic Shifting 36 | 37 | Rather than routing 100% of the traffic to `v1`, you can also configure weights so that you can route `x%` of traffic to `v1`, and `y%` of traffic to `v2`, etc. 38 | 39 | ```sh 40 | kubectl apply -f istio/guestbook-ui-80p-v1-vs.yaml 41 | ``` 42 | 43 | If you refresh the Guestbook UI several times, you should see that most of the requests are now served by the V1 application, and some requests are served by the V2 application. You can adjust the weights and reapply the file to test different probabilities. 44 | 45 | #### Route based on HTTP header 46 | 47 | We can also canary test based on HTTP headers.
For example, if we want to test different versions of the application based on the browser's User-Agent header, we can set up match rules against the header values. 48 | 49 | ```sh 50 | kubectl apply -f istio/guestbook-ui-chrome-vs.yaml 51 | ``` 52 | 53 | Try browsing the Guestbook UI with both Chrome and Firefox. Chrome should show V2 of the application, and Firefox should show V1 of the application. 54 | 55 | #### [Continue to Exercise 11 - Fault Injection and Circuit Breaking](../exercise-11/README.md) 56 | -------------------------------------------------------------------------------- /exercise-11/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 11 - Fault Injection and Circuit Breaking 2 | 3 | In this exercise we will learn how to test the resiliency of an application by injecting faults. 4 | 5 | To test our guestbook application for resiliency, this exercise will inject different levels of delay when the user agent accessing the hello world service is mobile. 6 | 7 | #### Inject a route rule to delay requests 8 | 9 | We can inject delays into the requests. 10 | 11 | ```sh 12 | kubectl apply -f istio/guestbook-ui-delay-vs.yaml 13 | ``` 14 | 15 | Browse to the Guestbook UI, and you'll see that the requests respond much more slowly! 16 | 17 | #### Inject error responses 18 | 19 | We can also inject error responses, such as returning 503 from a service. 20 | 21 | ```sh 22 | kubectl apply -f istio/guestbook-service-503-vs.yaml 23 | ``` 24 | 25 | Visit the Guestbook UI, and you'll see that it is now unable to retrieve any Guestbook messages. Luckily, the application has a graceful fallback to display a nice error message. 26 | 27 | #### Remove the faults 28 | 29 | Remove the annoying 503 errors. 30 | 31 | ```sh 32 | kubectl delete -f istio/guestbook-service-503-vs.yaml 33 | ``` 34 | 35 | Then reset the Guestbook UI virtual service so that it routes all requests to V1.
36 | 37 | ```sh 38 | kubectl apply -f istio/guestbook-ui-v1-vs.yaml 39 | ``` 40 | 41 | #### Circuit Breaking 42 | 43 | There are several circuit breaker rules you can apply in Istio: 44 | * Retries 45 | * Outlier Detection 46 | * Connection pooling 47 | 48 | Retries can be configured in the virtual service. 49 | 50 | ```sh 51 | kubectl apply -f istio/guestbook-service-retry-vs.yaml 52 | ``` 53 | 54 | Outlier detection and connection pooling are configured in the destination rule. 55 | 56 | ```sh 57 | kubectl apply -f istio/guestbook-service-dest.yaml 58 | ``` 59 | 60 | 61 | 62 | 63 | 64 | #### [Continue to Exercise 12 - Service Isolation Using Mixer](../exercise-12/README.md) 65 | -------------------------------------------------------------------------------- /exercise-12/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 12 - Service Isolation Using Mixer 2 | 3 | This has not been tested with Istio 1.2.5! Mixer has changed with recent versions of Istio and this might be out of date! 4 | 5 | #### Service Isolation Using Mixer 6 | 7 | We'll block access to the Guestbook Service by applying the `deny-guestbook-service.yaml` rule below. 8 | 9 | ```sh 10 | kubectl apply -f istio/deny-guestbook-service.yaml 11 | ``` 12 | 13 | Visit the Guestbook UI and see that the Guestbook Service can no longer be accessed. 14 | 15 | To remove the denial, delete the rule. 16 | 17 | ```sh 18 | kubectl delete -f istio/deny-guestbook-service.yaml 19 | ``` 20 | 21 | #### [Continue to Exercise 13 - Istio Mutual TLS](../exercise-13/README.md) 22 | -------------------------------------------------------------------------------- /exercise-13/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 13 - Istio Mutual TLS 2 | 3 | This has not been tested with Istio 1.2.5!
Auth has changed with recent versions of Istio and this might be out of date! 4 | 5 | #### Overview of Istio Mutual TLS 6 | 7 | Istio provides transparent, and frankly magical, mutual TLS to services inside the service mesh when asked. By mutual TLS we mean that both the client and the server authenticate each other's certificates as part of the TLS handshake. 8 | 9 | #### Enable Mutual TLS 10 | 11 | Let the past go. Kill it, if you have to: 12 | ``` 13 | cd ~/istio 14 | kubectl delete all --all 15 | kubectl delete -f install/kubernetes/istio-demo.yaml 16 | ``` 17 | 18 | It's the only way for TLS to be the way it was meant to be: 19 | 20 | ``` 21 | # (from istio install root) 22 | kubectl create -f install/kubernetes/istio-demo-auth.yaml \ 23 | --as=admin \ 24 | --as-group=system:masters 25 | ``` 26 | 27 | We need to (re)create the auto injector. Use the Exercise 6 instructions. 28 | 29 | Finally, enable injection and deploy the thrilling Bookinfo sample. 30 | 31 | ``` 32 | # (from istio install root) 33 | kubectl label namespace default istio-injection=enabled 34 | kubectl create -f samples/bookinfo/platform/kube/bookinfo.yaml 35 | kubectl create -f samples/bookinfo/platform/kube/bookinfo-ingress.yaml 36 | ``` 37 | 38 | #### Take it for a spin 39 | 40 | At this point it might seem like nothing changed, but it has. 41 | Let's disable the webhook in default for a second. 42 | 43 | ``` 44 | kubectl label namespace default istio-injection- 45 | ``` 46 | 47 | Now let's give ourselves a space to play: 48 | 49 | ``` 50 | kubectl run toolbox -l app=toolbox --image centos:7 /bin/sh -- -c 'sleep 84600' 51 | ``` 52 | 53 | First, let's prove to ourselves that we really are doing something with TLS. From here on out, assume names like foo-XXXX need to be replaced with the foo podname you have in your cluster. We pass `-k` to `curl` to convince it to be a bit laxer about cert checking.
54 | 55 | ``` 56 | tb=$(kubectl get po -l app=toolbox -o template --template '{{(index .items 0).metadata.name}}') 57 | kubectl exec -it $tb curl -- https://details:9080/details/0 -k 58 | ``` 59 | 60 | Denied! 61 | 62 | Let's exfiltrate the certificates out of a proxy so we can pretend to be them (incidentally, I hope this serves as a cautionary tale about the importance of locking down pods). 63 | 64 | ``` 65 | pp=$(kubectl get po -l app=productpage -o template --template '{{(index .items 0).metadata.name}}') 66 | mkdir ~/tmp # or wherever you want to stash these certs 67 | cd ~/tmp 68 | fs=(key.pem cert-chain.pem root-cert.pem) 69 | for f in ${fs[@]}; do kubectl exec -c istio-proxy $pp /bin/cat -- /etc/certs/$f >$f; done 70 | ``` 71 | 72 | This should give you the certs. Now let us copy them into our toolbox. 73 | 74 | ``` 75 | for f in ${fs[@]}; do kubectl cp $f default/$tb:$f; done 76 | ``` 77 | 78 | Try once more to talk to the details service, but this time with feeling: 79 | 80 | ``` 81 | kubectl exec -it $tb curl -- https://details:9080/details/0 -v --key ./key.pem --cert ./cert-chain.pem --cacert ./root-cert.pem -k 82 | ``` 83 | 84 | Success! We really are protecting our connections with TLS. Time to enjoy its magic from the inside. Let's enable the webhook and roll our pod: 85 | 86 | ``` 87 | kubectl label namespace default istio-injection=enabled 88 | kubectl delete po $tb 89 | tb=$(kubectl get po -l app=toolbox -o template --template '{{(index .items 0).metadata.name}}') 90 | kubectl exec -it $tb curl -- http://details:9080/details/0 91 | ``` 92 | 93 | Notice the protocol.
94 | 95 | #### [Continue to Exercise 14 - Ensuring security with iptables](../exercise-14/README.md) 96 | -------------------------------------------------------------------------------- /exercise-14/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 14 - Ensuring security with iptables 2 | 3 | #### A digression 4 | 5 | In the last exercise we saw TLS enforced. A question that hasn't been answered yet, and possibly hasn't even been asked yet: how is it possible for traffic to be passing through the proxy when it hasn't been referenced at all? 6 | 7 | #### Enter iptables 8 | 9 | iptables is the Linux firewall. You may have seen many frontends going by different names, but ultimately they manipulate iptables. We are going to take a look at how Istio (ab)uses iptables to make proxying traffic seamless. 10 | 11 | #### Another digression 12 | 13 | Pods mostly don't have root users, which makes taking a look at their network state hard. Fortunately there are ways and means of making them open up to us. 14 | 15 | Let's see where our old buddy productpage is scheduled: 16 | 17 | ``` 18 | kubectl get -o template --template '{{ .status.hostIP }}' po $pp 19 | ``` 20 | 21 | That'll give you the internal IP. Copy the internal IP for the next step. 22 | 23 | Next we want to ssh onto that node. 24 | 1. Navigate to the Compute Engine page of the Google Cloud console 25 | 2. Find the node where the productpage is running by finding the internal IP on that page 26 | 3. Click on the handy ssh button to get the gcloud command to ssh into the box (or whatever method you prefer) 27 | 28 | ``` 29 | gcloud compute --project "sparkcluster-177619" ssh --zone "us-west1-c" "gke-guestbook-default-pool-3bb0f4e2-ldbr" 30 | ``` 31 | 32 | Until I say otherwise, assume that's the shell we are working with. That is, the following commands are run inside the specified node.
33 | 34 | ``` 35 | cid=$(docker ps | grep bookinfo-productpage | awk '{print $1}') 36 | pid=$(docker inspect -f '{{.State.Pid}}' $cid) 37 | ``` 38 | 39 | This gives us the container id of the product page running on the host and its pid. Why are we doing this? Because docker couldn't behave like a good citizen if its existence depended on it, which clearly it doesn't, because it still exists. 40 | 41 | ``` 42 | sudo mkdir -p /var/run/netns 43 | sudo ln -sf /proc/$pid/ns/net /var/run/netns/pp 44 | ``` 45 | 46 | What did this achieve? 47 | 48 | ``` 49 | sudo ip netns exec pp ss -tln 50 | ``` 51 | 52 | That's right, we just ran a command from the host in the network namespace of a container (this gives us the listening TCP ports). 53 | 54 | #### iptables, anyone? 55 | 56 | ``` 57 | sudo ip netns exec pp iptables -t nat -v -L 58 | ``` 59 | 60 | Yields the answer to how the proxy weaves its spell of control 61 | 62 | ``` 63 | Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes) 64 | pkts bytes target prot opt in out source destination 65 | 0 0 ISTIO_REDIRECT all -- any any anywhere anywhere /* istio/install-istio-prerouting */ 66 | 67 | Chain INPUT (policy ACCEPT 0 packets, 0 bytes) 68 | pkts bytes target prot opt in out source destination 69 | 70 | Chain OUTPUT (policy ACCEPT 1222 packets, 114K bytes) 71 | pkts bytes target prot opt in out source destination 72 | 34 2040 ISTIO_OUTPUT tcp -- any any anywhere anywhere /* istio/install-istio-output */ 73 | 74 | Chain POSTROUTING (policy ACCEPT 1222 packets, 114K bytes) 75 | pkts bytes target prot opt in out source destination 76 | 77 | Chain ISTIO_OUTPUT (1 references) 78 | pkts bytes target prot opt in out source destination 79 | 0 0 ISTIO_REDIRECT all -- any lo anywhere !localhost /* istio/redirect-implicit-loopback */ 80 | 34 2040 RETURN all -- any any anywhere anywhere owner UID match 1337 /* istio/bypass-envoy */ 81 | 0 0 RETURN all -- any any anywhere localhost /* istio/bypass-explicit-loopback */ 82 | 0 0
ISTIO_REDIRECT all -- any any anywhere anywhere /* istio/redirect-default-outbound */ 83 | 84 | Chain ISTIO_REDIRECT (3 references) 85 | pkts bytes target prot opt in out source destination 86 | 0 0 REDIRECT tcp -- any any anywhere anywhere /* istio/redirect-to-envoy-port */ redir ports 15001 87 | ``` 88 | 89 | Basically there are a few rules here of interest. A complete explanation is out of the scope of this exercise, but here is a summary: 90 | 91 | 1. In pre-routing we catch the inbound traffic. Anything coming in jumps to our redirect rule. 92 | 2. In output we inspect outbound traffic. This is more interesting. Istio runs under uid 1337. To avoid looping, all traffic from this uid is allowed to egress. Outbound traffic not from this uid also jumps to the redirect. 93 | 3. The redirect bends all traffic from whatever port it was on to 15001. The observant among you might have noticed that port in the output from the `ss` command we ran earlier. 94 | 95 | This effectively forces all traffic to run through the proxy without any cooperation required from the services in the pod. 96 | 97 | #### When did that happen? 98 | 99 | The init container that Istio injects runs a small script to set up these rules with the `CAP_NET_ADMIN` capability. Neat, eh? 100 | 101 | #### [Continue to Exercise 15 - mTLS again, now with 100% more SPIFFE](../exercise-15/README.md) 102 | -------------------------------------------------------------------------------- /exercise-15/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 15 - mTLS again, now with 100% more SPIFFE 2 | 3 | Istio uses SPIFFE to assert the identity of workloads on the cluster. SPIFFE is a very simple standard. It consists of a notion of identity and a method of proving it. A SPIFFE identity consists of an authority part and a path. The meaning of the path in SPIFFE land is implementation-defined. In k8s it takes the form `/ns/$namespace/sa/$service-account` with the expected meaning.
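The mapping just described can be sketched in a couple of lines of shell (the `spiffe_id` helper name is illustrative; the trust domain and service account match the Bookinfo examples later in this exercise set):

```sh
# Sketch: build the SPIFFE ID a Kubernetes workload gets from its
# trust domain, namespace, and service account.
spiffe_id() {
  printf 'spiffe://%s/ns/%s/sa/%s\n' "$1" "$2" "$3"
}

spiffe_id cluster.local default bookinfo-productpage
# spiffe://cluster.local/ns/default/sa/bookinfo-productpage
```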
A SPIFFE identity is embedded in a document. This document in principle can take many forms, but currently the only defined format is x509. Let's see what a SPIFFE x509 certificate looks like. Remember those certificates we stole earlier? Execute the snippet below in the directory where you have the certificates locally, assuming you have `openssl` installed. 4 | 5 | ``` 6 | cd ~/tmp 7 | openssl x509 -in cert-chain.pem -text | less 8 | ``` 9 | 10 | The important thing to notice is that the subject isn't what you'd normally expect. It has no meaning here. What is interesting is the URI SAN (Subject Alternative Name) extension. Note the SPIFFE identity. There is one more part to SPIFFE identity, and that's a signing authority. This is a CA certificate with a SPIFFE identity with _no_ path component. 11 | 12 | ``` 13 | openssl verify -show_chain -CAfile root-cert.pem cert-chain.pem 14 | ``` 15 | 16 | You might need to drop the `-show_chain` argument depending on what version of openssl you have installed. Depending on how long it's been since the certificates were extracted, they might be out of date. Istio rolls certificates aggressively. 17 | 18 | #### Why? 19 | 20 | The Istio proxy uses the SPIFFE identity to establish secure authenticated communication channels over TLS. By providing identities on both the client and server side it establishes mTLS. The service account is also made available as an attribute in Mixer. 21 | 22 | Interested parties can find the SPIFFE specification at https://github.com/spiffe/spiffe. It's very readable. 23 | 24 | #### [Continue to Exercise 16 - Istio RBAC](../exercise-16/README.md) 25 | -------------------------------------------------------------------------------- /exercise-16/README.md: -------------------------------------------------------------------------------- 1 | ## Istio RBAC 2 | 3 | #### Mixer all the way down 4 | 5 | Istio has an RBAC engine implemented as a mixer adapter.
There are a number of things that need to be configured before it can be turned loose on unsuspecting services. 6 | 7 | #### Who did what 8 | 9 | The foundation is an instance of an authorization template. The purpose of the auth template is to select, from the available attribute vocabulary, a subset that will be endowed with specific meaning for RBAC. Consider subject selection: who is making the request? This could be extracted from a header, from the SPIFFE URI, from a cookie, etc. The authorization template is what specifies this. 10 | 11 | Example usage: 12 | 13 | ``` 14 | apiVersion: "config.istio.io/v1alpha2" 15 | kind: authorization 16 | metadata: 17 | name: requestcontext 18 | namespace: istio-system 19 | spec: 20 | subject: 21 | user: source.user | "" 22 | groups: "" 23 | properties: 24 | app: source.labels["app"] | "" 25 | version: source.labels["version"] | "" 26 | namespace: source.namespace | "" 27 | action: 28 | namespace: destination.namespace | "" 29 | service: destination.service | "" 30 | method: request.method | "" 31 | path: request.path | "" 32 | properties: 33 | app: destination.labels["app"] | "" 34 | version: destination.labels["version"] | "" 35 | ``` 36 | 37 | Subject is the who, action is the what. In this instance we will be using the _service account_ as the user, which will allow us to take advantage of the SPIFFE identity asserted by our x509 certificates. We also take advantage of properties to include metadata in our auth decisions, for instance the app label. 38 | 39 | #### The binding of services 40 | 41 | In order to make decisions about what we will allow, we need two more ingredients: roles, and bindings of those roles to eligible inbound requests.
An example role: 42 | 43 | ``` 44 | apiVersion: "config.istio.io/v1alpha2" 45 | kind: ServiceRole 46 | metadata: 47 | name: details-reviews-viewer 48 | namespace: default 49 | spec: 50 | rules: 51 | - services: ["details.default.svc.cluster.local"] 52 | methods: ["GET"] 53 | - services: ["reviews.default.svc.cluster.local"] 54 | methods: ["GET"] 55 | constraints: 56 | - key: "version" 57 | values: ["v2", "v3"] 58 | ``` 59 | 60 | Details to note: 61 | - The role has an applicable namespace 62 | - We define an array of rules, with values to be matched onto the corresponding attributes from our earlier authorization template instance (service / method / path). If path is not supplied it defaults to allowing all paths. 63 | - Finally, constraints can be added via the extra properties defined in the authorization template. 64 | 65 | What does a binding look like? 66 | 67 | ``` 68 | apiVersion: "config.istio.io/v1alpha2" 69 | kind: ServiceRoleBinding 70 | metadata: 71 | name: bind-details-reviews 72 | namespace: default 73 | spec: 74 | subjects: 75 | - user: "cluster.local/ns/default/sa/bookinfo-productpage" 76 | roleRef: 77 | kind: ServiceRole 78 | name: "details-reviews-viewer" 79 | ``` 80 | 81 | This is relatively straightforward: we define an array of subjects we would like this binding to apply to and reference the role that we would like to bind to. 82 | 83 | In summary: roles are about actions, and role-bindings are about subjects. 84 | 85 | #### The final component 86 | 87 | We need some final config for enablement. This is a mixer adapter so we need the usual mixer suspects: instances and handlers.
88 | 89 | ``` 90 | apiVersion: "config.istio.io/v1alpha2" 91 | kind: rbac 92 | metadata: 93 | name: handler 94 | namespace: istio-system 95 | spec: 96 | config_store_url: "k8s://" 97 | --- 98 | apiVersion: "config.istio.io/v1alpha2" 99 | kind: rule 100 | metadata: 101 | name: rbaccheck 102 | namespace: istio-system 103 | spec: 104 | actions: 105 | - handler: handler.rbac 106 | instances: 107 | - requestcontext.authorization 108 | ``` 109 | 110 | #### Switching it on 111 | 112 | Assume all other shell commands are run with the working directory set to the Istio release root. 113 | 114 | ``` 115 | cd ~/istio 116 | kubectl create -f samples/bookinfo/platform/kube/bookinfo-add-serviceaccount.yaml 117 | kubectl create -f samples/bookinfo/platform/kube/rbac/rbac-config-ON.yaml 118 | ``` 119 | 120 | Now we should get denied for all the things: 121 | 122 | ``` 123 | kubectl create -f samples/bookinfo/networking/bookinfo-gateway.yaml 124 | INGRESS_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') 125 | curl http://$INGRESS_IP/productpage 126 | ``` 127 | 128 | We can let ourselves back in with some roles / bindings. 129 | 130 | ``` 131 | kubectl apply -f samples/bookinfo/platform/kube/rbac/namespace-policy.yaml 132 | ``` 133 | 134 | This creates a catch-all rule that allows services in either `istio-system` or `default` to use `GET` on any service. Open the file for the details. Use curl or a browser to hit the previous endpoint and enjoy unfettered access. 135 | 136 | #### Ascending to a higher plane 137 | 138 | Let's delete our overly lax policy: 139 | 140 | ``` 141 | kubectl delete -f samples/bookinfo/platform/kube/rbac/namespace-policy.yaml 142 | ``` 143 | 144 | Let's introduce something a bit more fine-grained. A picture of the architecture will be helpful here. 145 | 146 | ![architecture](https://istio.io/docs/guides/img/bookinfo/withistio.svg) 147 | 148 | We'd like ingress to be able to contact the productpage.
149 | 150 | ``` 151 | kubectl apply -f samples/bookinfo/platform/kube/rbac/productpage-policy.yaml 152 | ``` 153 | 154 | Now if we visit the page in the browser, we can see that we have the first level of the service graph opened up. Again, details are in the file. Currently everything else is showing an error. This is solved by throwing more targeted policy at the problem: 155 | 156 | ``` 157 | kubectl apply -f samples/bookinfo/platform/kube/rbac/details-reviews-policy.yaml 158 | ``` 159 | 160 | This enables reviews (for v2 / v3) and details. Visit the page! You can see how this was done by checking out the file. Note the usage of the SPIFFE identity. 161 | 162 | Lastly: 163 | 164 | ``` 165 | kubectl apply -f samples/bookinfo/platform/kube/rbac/ratings-policy.yaml 166 | ``` 167 | 168 | Ratings should now appear. 169 | 170 | That's a wrap! 171 | -------------------------------------------------------------------------------- /exercise-2/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 2 - Deploying a microservice to Kubernetes 2 | 3 | #### Deploy Hello World 4 | 5 | 1 - Deploy Hello World service to Kubernetes 6 | 7 | ```sh 8 | cd istio-workshop 9 | kubectl apply -f kubernetes/helloworld-deployment.yaml 10 | ``` 11 | 12 | ```sh 13 | kubectl get pods 14 | 15 | NAME READY STATUS RESTARTS AGE 16 | helloworld-service-v1-.... 1/1 Running 0 20s 17 | ``` 18 | 19 | An important detail to note is that READY shows 1/1. That refers to the number of containers in the pod that are ready, and this pod only has 1 container. 20 | 21 | 2 - Note the name of the pod above for use in the command below. Then delete one of the hello world pods. 22 | 23 | ```sh 24 | kubectl delete pod helloworld-service-v1-... 25 | ``` 26 | 27 | 3 - Kubernetes will automatically restart this pod for you. Verify it is restarted: 28 | 29 | ```sh 30 | kubectl get pods 31 | 32 | NAME READY STATUS RESTARTS AGE 33 | helloworld-service-v1-....
1/1 Running 0 20s 34 | ``` 35 | 36 | 4 - All of the container output to STDOUT and STDERR will be accessible as Kubernetes logs: 37 | 38 | ```sh 39 | kubectl logs helloworld-service-v1-... 40 | ``` 41 | 42 | or to follow the log file: 43 | 44 | ```sh 45 | kubectl logs -f helloworld-service-v1-... 46 | ``` 47 | 48 | #### Pod Details 49 | 50 | One of the key tools for troubleshooting issues when creating pods is `kubectl describe`, which shows the pod details: 51 | 52 | ```sh 53 | kubectl describe pods helloworld-service-v1-... 54 | ``` 55 | 56 | This shows all the details of the pod such as status, events, containers, IP and more. 57 | 58 | An important detail to notice, which will be relevant to Istio, is that the pod currently only has a single container: 59 | 60 | ``` 61 | Containers: 62 | helloworld-service: 63 | Container ID: docker://9f6dd8ffeb104541e95dd6cf5d960851840409bb9e683d79b8e604fe1af1045c 64 | Image: retroryan/helloworld:1.0 65 | Image ID: docker-pullable://retroryan/helloworld@sha256:4ab1359b88ed1e5c820c27ae2c475a816e60d4b99b1703e9223ddb4885a4d2e7 66 | Port: 8080/TCP 67 | ``` 68 | 69 | When we deploy with Istio, be sure to notice the additional containers that get added. 70 | 71 | Pods can also have associated labels that are viewed with the following command: 72 | 73 | ```sh 74 | kubectl get pods --show-labels 75 | ``` 76 | 77 | ## Explanation 78 | 79 | #### By Ray Tsang [@saturnism](https://twitter.com/saturnism) 80 | 81 | We will be using yaml files throughout this workshop. Every file describes a resource that needs to be deployed into Kubernetes. We won’t be able to go into details on the contents, but you are definitely encouraged to read them and see how pods, services, and others are declared. 82 | 83 | The pod deploys a microservice: a container whose image contains a self-executing JAR file. The source is available at [istio-by-example-java](https://github.com/saturnism/istio-by-example-java) if you are interested in seeing it.
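For orientation, a Deployment manifest of the kind used here generally looks like the sketch below. The image, container name, and port are taken from the pod details shown above; the `apiVersion` and the rest of the structure are assumptions for a Kubernetes 1.7-era cluster, and the actual file in the repository may differ:

```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: helloworld-service-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld-service
        version: "1.0"
    spec:
      containers:
      - name: helloworld-service
        image: retroryan/helloworld:1.0
        ports:
        - containerPort: 8080
```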
84 | 85 | In this first example we deployed a Kubernetes pod by specifying a deployment using this [helloworld-deployment.yaml](/kubernetes/helloworld-deployment.yaml). 86 | 87 | A Kubernetes pod is a group of containers, tied together for the purposes of administration and networking. It can contain one or more containers. All containers within a single pod will share the same networking interface, IP address, volumes, etc. All containers within the same pod instance will live and die together. It’s especially useful when you have, for example, a container that runs the application, and another container that periodically polls logs/metrics from the application container. 88 | 89 | You can start a single Pod in Kubernetes by creating a Pod resource. However, a Pod created this way would be known as a Naked Pod. If a Naked Pod dies/exits, it will not be restarted by Kubernetes. A better way to start a pod is by using a higher-level construct such as Replication Controller, Replica Set, or a Deployment. 90 | 91 | Prior to Kubernetes 1.2, Replication Controller was the preferred way to deploy and manage your application instances. Kubernetes 1.2 introduced two new concepts - Replica Set, and Deployments. 92 | 93 | Replica Set is the next-generation Replication Controller. The only difference between a Replica Set and a Replication Controller right now is the selector support. Replica Set supports the new set-based selector requirements whereas a Replication Controller only supports equality-based selector requirements. 94 | 95 | For example, Replication Controller can only select pods based on equality, such as "environment = prod", whereas Replica Sets can select using the "in" operator, such as "environment in (prod, qa)". Learn more about the different selectors in the [Labels guide](http://kubernetes.io/docs/user-guide/labels). 96 | 97 | Deployment provides declarative updates for Pods and Replica Sets.
You only need to describe the desired state in a Deployment object, and the Deployment controller will change the actual state to the desired state at a controlled rate for you. You can use deployments to easily: 98 | - Create a Deployment to bring up a Replica Set and Pods. 99 | - Check the status of a Deployment to see if it succeeds or not. 100 | - Later, update that Deployment to recreate the Pods (for example, to use a new image, or configuration). 101 | - Roll back to an earlier Deployment revision if the current Deployment isn’t stable. 102 | - Pause and resume a Deployment. 103 | 104 | In this workshop, because we are working with Kubernetes 1.7+, we will be using Deployments extensively. 105 | 106 | There are other containers running too. The interesting one is the pause container. The atomic unit Kubernetes can manage is actually a Pod, not a container. A Pod can be composed of multiple tightly-coupled containers that are guaranteed to be scheduled onto the same node, will share the same Pod IP address, and can mount the same volumes. What that essentially means is that if you run multiple containers in the same Pod, they will share the same namespaces. 107 | 108 | A pause container is how Kubernetes uses Docker containers to create shared namespaces so that the actual application containers within the same Pod can share resources. 109 | 110 | #### Optional - [Peering under the covers of Kubernetes](optional.md) 111 | 112 | #### [Continue to Exercise 3 - Creating a Kubernetes Service](../exercise-3/README.md) 113 | -------------------------------------------------------------------------------- /exercise-2/optional.md: -------------------------------------------------------------------------------- 1 | ## Exercise 2 - Optional 2 | ## Peering under the covers of a node 3 | 4 | `kubectl get pods -o wide` 5 | 6 | That will list the node the pod is running on.
For example you should see: 7 | 8 | `NODE gke-guestbook-...` 9 | 10 | `gcloud compute ssh ` 11 | 12 | `sudo docker ps` 13 | 14 | `someuser@:~$ exit` 15 | 16 | The Pod name is automatically assigned as the hostname of the container: 17 | 18 | ``` 19 | kubectl exec -ti helloworld-service-v1-... /bin/ash 20 | 21 | root@helloworld-...:/data# hostname 22 | helloworld-service-.... 23 | 24 | root@helloworld-...:/app/src# hostname -i 25 | 10.104.1.5 26 | 27 | root@helloworld-...:/app/src# exit 28 | 29 | root@helloworld-...:/data# hostname -i 30 | 10.104.1.5 31 | ``` 32 | 33 | #### [Continue to Exercise 3 - Creating a Kubernetes Service](../exercise-3/README.md) 34 | -------------------------------------------------------------------------------- /exercise-3/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 3 - Creating a Kubernetes Service 2 | 3 | Each Pod has a unique IP address - but the address is ephemeral. Pod IP addresses are not stable and can change when Pods start and/or restart. A service provides a single access point to a set of pods matching some constraints. A Service IP address is stable. 4 | 5 | In Kubernetes, you can instruct the underlying infrastructure to create an external load balancer, by specifying the Service Type as a LoadBalancer. If you open up [helloworld-service.yaml](/kubernetes/helloworld-service.yaml) you will see that it has `type: LoadBalancer` 6 | 7 | #### Create the Hello World Service “service” 8 | 9 | ```sh 10 | kubectl apply -f kubernetes/helloworld-service.yaml 11 | ``` 12 | 13 | ```sh 14 | kubectl get services 15 | 16 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 17 | helloworld-service LoadBalancer 10.0.0.208 8080:31771/TCP 9s 18 | ``` 19 | 20 | The external IP will start as `<pending>`. After a short period the EXTERNAL-IP column will be populated. This is the external IP of the Load Balancer.
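The Service just created follows this general shape. This is a hedged sketch rather than the exact file contents; the port numbers mirror the output above, and other values are assumptions:

```yaml
# Sketch of a LoadBalancer Service like kubernetes/helloworld-service.yaml.
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  type: LoadBalancer         # asks the cloud provider to provision an external load balancer
  selector:
    app: helloworld-service  # traffic goes to pods whose labels match this selector
  ports:
  - port: 8080               # port exposed on the cluster IP / external IP
    targetPort: 8080         # port the container listens on
```

The `selector` is the key field: the Service continuously tracks whichever pods carry the matching labels, which is what makes it stable while pod IPs come and go.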
21 | 22 | #### Curl the external IP to test the helloworld service: 23 | 24 | ```sh 25 | curl http://[YOUR_SERVICE_EXTERNAL_IP]:8080/hello/world 26 | ``` 27 | 28 | #### Explanation 29 | #### By Ray Tsang [@saturnism](https://twitter.com/saturnism) 30 | 31 | Open [helloworld-service.yaml](/kubernetes/helloworld-service.yaml) to examine the service descriptor. The important part of this file is the selector section. This is how a service knows which pods to route traffic to: by matching the selector labels with the labels of the pods. 32 | 33 | The other important part to notice in this file is that the service type is LoadBalancer. This tells GCE that an externally facing load balancer should be created for this service so that it is accessible from the outside. 34 | 35 | Since we are running two instances of the Hello World Service (one instance per pod), and the IP addresses are not only unique but also ephemeral - how will a client reach our services? We need a way to discover the service. 36 | 37 | In Kubernetes, Service Discovery is a first class citizen. We created a Service that will: 38 | - act as a load balancer to load balance the requests to the pods, and 39 | - provide a stable IP address, allow discovery from the API, and also create a DNS name! 40 | 41 | #### Optional - curl the service using a DNS name 42 | 43 | If you log in to another container you can access the helloworld-service via its DNS name. For example, start a new tutum/curl container to get a shell and curl the service using the service name: 44 | 45 | ```sh 46 | $ kubectl run curl --image=tutum/curl -i --tty --rm 47 | 48 | root@curl-797905165-015t4:/# curl http://helloworld-service:8080/hello/Batman 49 | {"greeting":"Hello Batman from helloworld-service-...
with 1.0","hostname":"helloworld-service-...","version":"1.0"} 50 | 51 | root@curl-797905165-015t4:/# exit 52 | ``` 53 | 54 | #### [Continue to Exercise 4 - Scaling In and Out](../exercise-4/README.md) 55 | -------------------------------------------------------------------------------- /exercise-4/README.md: -------------------------------------------------------------------------------- 1 | # Exercise 4 - Scaling in and out 2 | 3 | ### Scale the number of Hello World service pods 4 | 5 | 1. Scale the number of replicas of your Hello World service by running the following commands: 6 | 7 | ```sh 8 | kubectl get deployment 9 | 10 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE 11 | helloworld-service-v1 1 1 1 1 1m 12 | ``` 13 | 14 | ```sh 15 | kubectl scale deployment helloworld-service-v1 --replicas=4 16 | ``` 17 | 18 | ```sh 19 | kubectl get deployment 20 | 21 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE 22 | helloworld-service-v1 4 4 4 4 1m 23 | ``` 24 | 25 | ```sh 26 | kubectl get pods 27 | 28 | NAME READY STATUS RESTARTS AGE 29 | helloworld-service-v1-... 1/1 Running 0 1m 30 | helloworld-service-v1-... 1/1 Running 0 1m 31 | helloworld-service-v1-... 1/1 Running 0 1m 32 | helloworld-service-v1-... 1/1 Running 0 2m 33 | ``` 34 | 35 | 2. Try scaling out further. 36 | 37 | ``` 38 | kubectl scale deployment helloworld-service-v1 --replicas=25 39 | ``` 40 | 41 | If you look at the pod status, some of the pods will show a `Pending` state. That is because we only have four physical nodes, and the underlying infrastructure has run out of capacity to run the containers with the requested resources. 42 | 43 | 3. Pick a pod name that has a `Pending` state to confirm the lack of resources in the detailed status. 44 | 45 | ``` 46 | kubectl describe pod helloworld-service... 47 | ``` 48 | 49 | 4.
We can easily spin up another Compute Engine instance to add to the cluster. 50 | 51 | ``` 52 | gcloud container clusters resize guestbook --size=5 53 | gcloud compute instances list 54 | ``` 55 | 56 | Open another terminal and run: 57 | 58 | ``` 59 | kubectl get pods -w -o wide 60 | ``` 61 | 62 | This will let you monitor the recovery process. 63 | 64 | 5. Verify that the new instance has joined the Kubernetes cluster; you should be able to see it with these commands: 65 | 66 | ``` 67 | kubectl get nodes 68 | kubectl get pods -o wide 69 | ``` 70 | 71 | 6. IMPORTANT! - Scale back the number of replicas before moving on! 72 | 73 | ``` 74 | kubectl scale deployment helloworld-service-v1 --replicas=2 75 | gcloud container clusters resize guestbook --size=3 76 | ``` 77 | 78 | Kubernetes will only keep 2 of the Hello World instances and terminate the rest. 79 | 80 | #### [Continue to Exercise 5 - Installing Istio](../exercise-5/README.md) 81 | -------------------------------------------------------------------------------- /exercise-5/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 5 - Installing Istio 2 | 3 | #### Clean up 4 | 5 | If you have anything running in Kubernetes from the previous exercises, remove it first. The easiest way is to start with a clean slate and delete all deployed services from the cluster: 6 | 7 | ```sh 8 | kubectl delete all --all 9 | ``` 10 | 11 | #### Install Istio 12 | 13 | We will follow the slightly modified GKE instructions from [installing Istio](https://cloud.google.com/istio/docs/how-to/installing-oss#install_istio) 14 | 15 | 1 - Be sure you are in the home directory: 16 | 17 | ```sh 18 | cd 19 | ``` 20 | 21 | 2 - Run the following command to download and extract the Istio installation file and Istio client.
22 | 23 | ```sh 24 | curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.5 sh - 25 | ``` 26 | 27 | The installation directory contains: 28 | 29 | * Installation .yaml files for Kubernetes in the install/ directory 30 | * Sample applications in the samples/ directory 31 | * The istioctl client binary in the bin/ directory. You can use istioctl to manually inject Envoy as a sidecar proxy and to create routing rules and policies. 32 | * The istio.VERSION configuration file 33 | 34 | 3 - Ensure that you're in the Istio installation's root directory. 35 | 36 | ```sh 37 | cd ~/istio-1.2.5/ 38 | ``` 39 | 40 | 4 - Add the istioctl client to your PATH: 41 | 42 | ```sh 43 | export PATH=$PWD/bin:$PATH 44 | ``` 45 | 46 | 5 - Set up the istio-system namespace for Istio's control plane components: 47 | 48 | ```sh 49 | kubectl create namespace istio-system 50 | ``` 51 | 52 | We will install Istio in the istio-system namespace you just created, and then manage microservices from all other namespaces. The installation includes Istio core components, tools, and samples. 53 | 54 | 6 - Install the Istio Custom Resource Definitions (CRDs) and wait a few seconds for the CRDs to be committed in the Kubernetes API server: 55 | 56 | ```sh 57 | helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system \ 58 | --set grafana.enabled=true --set prometheus.enabled=true \ 59 | --set tracing.enabled=true | kubectl apply -f - 60 | ``` 61 | 62 | 7 - Verify that all 23 Istio CRDs were committed using the following command: 63 | 64 | ``` 65 | kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l 66 | ``` 67 | 68 | It will take a minute to install the CRDs. After a minute you should see in the output: 69 | 23 70 | 71 | 8 - Install Istio with the [Demo Profile](https://istio.io/docs/setup/kubernetes/additional-setup/config-profiles/). This workshop uses the demo profile; although you can choose another profile, the default profile is recommended for production deployments.
72 | 73 | ```sh 74 | helm template install/kubernetes/helm/istio --name istio --namespace istio-system \ 75 | --values install/kubernetes/helm/istio/values-istio-demo.yaml | kubectl apply -f - 76 | ``` 77 | 78 | This deploys the core Istio components: 79 | 80 | * Istio-Pilot, which is responsible for service discovery and for configuring the Envoy sidecar proxies in an Istio service mesh. 81 | * The Mixer components Istio-Policy and Istio-Telemetry, which enforce usage policies and gather telemetry data across the service mesh. 82 | * Istio-Ingressgateway, which provides an ingress point for traffic from outside the cluster. 83 | * Istio-Citadel, which automates key and certificate management for Istio. 84 | 85 | It also deploys the monitoring components. 86 | 87 | 9 - Verify the Istio installation 88 | 89 | Ensure the following Kubernetes Services are deployed: istio-citadel, istio-pilot, istio-ingressgateway, istio-policy, and istio-telemetry (you'll also see the other deployed services): 90 | 91 | ```sh 92 | kubectl get service -n istio-system 93 | ``` 94 | 95 | In the output you should see istio-citadel, istio-galley, istio-ingressgateway, istio-pilot, istio-policy, istio-sidecar-injector, istio-statsd-prom-bridge, istio-telemetry and prometheus.
96 | 97 | 10 - Ensure the corresponding Kubernetes Pods are deployed and all containers are up and running: istio-pilot-*, istio-policy-*, istio-telemetry-*, istio-ingressgateway-*, and istio-citadel-*. 98 | 99 | ```sh 100 | kubectl get pods -n istio-system 101 | ``` 102 | 103 | ``` 104 | Output: 105 | NAME READY STATUS RESTARTS AGE 106 | istio-citadel-54f4678f86-4549b 1/1 Running 0 12m 107 | istio-cleanup-secrets-5pl77 0/1 Completed 0 12m 108 | istio-galley-7bd8b5f88f-nhwlc 1/1 Running 0 12m 109 | istio-ingressgateway-665699c874-l62rg 1/1 Running 0 12m 110 | istio-pilot-68cbbcd65d-l5298 2/2 Running 0 12m 111 | istio-policy-7c5b5bb744-k6vm9 2/2 Running 0 12m 112 | istio-security-post-install-g9l9p 0/1 Completed 3 12m 113 | istio-sidecar-injector-85ccf84984-2hpfm 1/1 Running 0 12m 114 | istio-telemetry-5b6c57fffc-9j4dc 2/2 Running 0 12m 115 | istio-tracing-77f9f94b98-jv8vh 1/1 Running 0 12m 116 | prometheus-7456f56c96-7hrk5 1/1 Running 0 12m 117 | ... 118 | ``` 119 | 120 | 121 | #### Running istioctl 122 | 123 | Istio-related commands need to have `istioctl` in the path. Verify it is available by running: 124 | 125 | ```sh 126 | istioctl -h 127 | ``` 128 | 129 | #### What just happened?! 130 | 131 | Congratulations! You have installed Istio into the Kubernetes cluster. A lot has been installed: 132 | * Istio Controllers and related RBAC rules 133 | * Istio Custom Resource Definitions 134 | * Prometheus and Grafana for Monitoring 135 | * Jaeger for Distributed Tracing 136 | * Istio Sidecar Injector (we'll take a look in the next section) 137 | 138 | #### [Continue to Exercise 6 - Creating a Service Mesh with Istio Proxy](../exercise-6/README.md) 139 | -------------------------------------------------------------------------------- /exercise-6/README.md: -------------------------------------------------------------------------------- 1 | # Exercise 6 - Creating a service mesh with Istio Proxy 2 | 3 | ### What is a service mesh?
4 | 5 | Cloud-native applications require a new approach to managing the communication between each service. This problem is best solved by creating a dedicated infrastructure layer that handles service-to-service communication. With Istio, this infrastructure layer is created by deploying a lightweight proxy alongside each application service. This is done in a way that the application does not need to be aware of the proxy. 6 | 7 | Moving the service communication to a separate layer provides a separation of concerns. The monitoring, management and security of communication can be handled outside of the application logic. 8 | 9 | ### What is a Kubernetes sidecar? 10 | 11 | A Kubernetes pod is a group of containers, tied together for the purposes of administration and networking. Each pod can contain one or more containers. Small containers are often used to provide common utilities for the pod. These sidecar containers extend and enhance the main container. Sidecar containers are a crosscutting concern and can be used across multiple pods. 12 | 13 | ### The Istio Proxy sidecar 14 | 15 | To create a service mesh with Istio, you update the deployment of the pods to add the Istio Proxy (based on the Lyft Envoy Proxy) as a sidecar to each pod. The Proxy is then run as a separate container that manages all communication with that pod. This can be done either manually or, with recent versions of Kubernetes, automatically. 16 | 17 | #### Manual sidecar injection 18 | 19 | The sidecar can be injected manually by running the `istioctl kube-inject` command, which modifies the YAML file before creating the deployments. This injects the Proxy into the deployment by updating the YAML to add the Proxy as a sidecar. When this command is used, the microservices are packaged with a Proxy sidecar that manages incoming and outgoing calls for the service. 20 | 21 | To see how the deployment YAML is modified, run the following: 22 | 23 | 1.
Change to the `istio-workshop` dir: 24 | 25 | ```sh 26 | cd ~/istio-workshop/ 27 | ``` 28 | 29 | 2. Examine the Hello World Deployment: 30 | 31 | ```sh 32 | more kubernetes/helloworld-deployment.yaml 33 | ``` 34 | 35 | 3. Use istioctl to see what manual sidecar injection will add to the deployment: 36 | 37 | ```sh 38 | istioctl kube-inject -f kubernetes/helloworld-deployment.yaml | more 39 | ``` 40 | 41 | This adds the Istio Proxy as an additional container to the Pod and sets up the necessary configuration. Inside the YAML there is now an additional container: 42 | 43 | ``` 44 | spec: 45 | containers: 46 | - image: saturnism/helloworld-service-istio:1.0 47 | ... 48 | - args: 49 | ... 50 | image: docker.io/istio/proxyv2:1.2.2 51 | ... 52 | initContainers: 53 | - args: 54 | ... 55 | image: docker.io/istio/proxy_init:1.2.2 56 | ... 57 | ``` 58 | 59 | Notice that the output has more than just the application container. Specifically, it has an additional istio-proxy container, and an init container. 60 | 61 | The init container is responsible for setting up the iptables rules to intercept incoming and outgoing connections and direct them to the Istio Proxy. The istio-proxy container is the Envoy proxy itself. 62 | 63 | If you want to use manual Istio sidecar injection, then you would always filter your Kubernetes deployment file through the istioctl utility, and deploy the resulting Deployment specification. 64 | 65 | 66 | #### Automatic sidecar injection 67 | 68 | Istio sidecars can also be automatically injected into a pod at creation time using a feature in Kubernetes called a mutating webhook admission controller. Note that unlike manual injection, automatic injection occurs at the pod-level. You won't see any change to the deployment itself. Instead, you'll want to check individual pods (via kubectl describe) to see the injected proxy.
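The machinery behind automatic injection is a webhook registration object. The following is a heavily abridged, illustrative sketch of what such a registration looks like; the real object in your cluster (viewable with `kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml`) has many more fields, and the exact values here are assumptions:

```yaml
# Abridged sketch of a sidecar injector webhook registration; values are illustrative.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: istio-sidecar-injector
webhooks:
- name: sidecar-injector.istio.io
  clientConfig:
    service:
      name: istio-sidecar-injector   # the injector service in istio-system
      namespace: istio-system
      path: /inject
  rules:
  - operations: ["CREATE"]           # fire when pods are created
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  namespaceSelector:
    matchLabels:
      istio-injection: enabled       # only pods in labeled namespaces are mutated
```

The `namespaceSelector` is why labeling a namespace, as done later in this exercise, is all it takes to turn injection on.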
69 | 70 | An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized. Admission controllers may be “validating”, “mutating”, or both. Mutating controllers may modify the objects they admit; validating controllers may not. 71 | 72 | The admission control process proceeds in two phases. In the first phase, mutating admission controllers are run. In the second phase, validating admission controllers are run. 73 | 74 | A MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects, and may change, the objects it admits. 75 | 76 | For Istio, the webhook is the sidecar injector webhook deployment called "istio-sidecar-injector". It will modify a pod before it is started to inject an Istio init container and an Istio proxy container. 77 | 78 | #### Using the Sidecar Injector 79 | 80 | By default, Istio is configured to apply a sidecar injector to namespaces with the label/value of `istio-injection=enabled`. 81 | 82 | Label the default namespace with the `istio-injection` label set to `enabled`. 83 | 84 | ```sh 85 | kubectl label namespace default istio-injection=enabled 86 | ``` 87 | 88 | Check that the label is applied. 89 | 90 | ```sh 91 | kubectl get namespace -L istio-injection 92 | 93 | NAME STATUS AGE ISTIO-INJECTION 94 | default Active 1h enabled 95 | istio-system Active 1h 96 | kube-public Active 1h 97 | kube-system Active 1h 98 | ``` 99 | 100 | ### Deploy Guestbook services 101 | 102 | To demonstrate Istio, we’re going to use [this guestbook example](https://github.com/saturnism/istio-by-example-java/tree/master/spring-boot2-example). This example is built with Spring Boot, with a frontend using Spring MVC and Thymeleaf, and two backend microservices. The three services that we are going to deploy are: 103 | 104 | * Hello World service - A simple service that returns a greeting back to the user.
105 | * Guestbook service - A service that keeps a registry of guests and the messages they left. 106 | * Guestbook UI - The front end to the application that calls the other microservices to get the list of guests, register a new guest, and get the greeting for the user when they register. 107 | 108 | The guestbook example requires MySQL to store guestbook entries and Redis to store session information. There is a storied history to MySQL connectivity issues [documented here](https://github.com/istio/istio/issues/10062). The simple fix for this workshop is: 109 | 110 | ```sh 111 | kubectl delete meshpolicies.authentication.istio.io default 112 | ``` 113 | 114 | 1 - Deploy MySQL, Redis, the Hello World microservices, and the associated Kubernetes Services from the `istio-workshop` dir: 115 | 116 | ```sh 117 | cd ~/istio-workshop 118 | kubectl apply -f kubernetes/ 119 | ``` 120 | 121 | 2 - Notice that each of the pods now has one Istio init container and two running containers. One is the main application container and the second is the istio-proxy container. 122 | 123 | ```sh 124 | kubectl get pod 125 | ``` 126 | 127 | When you get the pods, the READY column should show 2/2, meaning that 2 of 2 containers are in the running state (it might take a minute or two to get to that state). 128 | 129 | Describing a pod shows the details of the additional containers. 130 | 131 | ```sh 132 | kubectl describe pods helloworld-service-v1..... 133 | ``` 134 | 135 | And to view the logs for a container use: 136 | 137 | ```sh 138 | kubectl logs guestbook-service- -c guestbook-service 139 | ``` 140 | 141 | 3 - Verify that the previous deployments are all in a state of AVAILABLE before continuing. **Do not proceed until they are up and running.** 142 | 143 | ```sh 144 | watch kubectl get deployment 145 | ``` 146 | 147 | 4 - Access the guestbook UI in the web browser: 148 | 149 | The Guestbook UI Kubernetes service has a type of LoadBalancer.
This creates an external IP through which the UI can be accessed: 150 | 151 | ```sh 152 | kubectl get svc guestbook-ui 153 | 154 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 155 | guestbook-ui LoadBalancer 10.59.245.13 35.197.94.184 80:30471/TCP 2m 156 | ``` 157 | 158 | You can test access via a web browser and curl. You should be able to access the Guestbook UI in your browser with that IP address. 159 | 160 | 5 - For the curious, you can inspect the details of the Envoy proxy by gaining shell access to the container: 161 | 162 | ``` 163 | kubectl get pods 164 | kubectl exec -it helloworld-service-v1..... -c istio-proxy bash 165 | cd /etc/istio/proxy 166 | more envoy-rev0.json 167 | exit 168 | ``` 169 | 170 | 171 | #### [Continue to Exercise 7 - Istio Ingress controller](../exercise-7/README.md) 172 | -------------------------------------------------------------------------------- /exercise-7/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 7 - Istio Ingress Controller 2 | 3 | The components deployed on the service mesh are not exposed outside the cluster by default. External access to individual services so far has been provided by creating an external load balancer on each service. 4 | 5 | Traditionally in Kubernetes, you would use an Ingress to configure an L7 proxy. However, Istio provides a much richer set of proxy configurations that are not well-defined in Kubernetes Ingress. 6 | Thus, in Istio, we will use the Istio Gateway to define fine-grained control over L7 edge proxy configuration.
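A Gateway of the kind applied later in this exercise generally looks like the following. This is an illustrative sketch; the actual `istio/guestbook-gateway.yaml` may differ in its details:

```yaml
# Sketch of an Istio Gateway exposing plain HTTP on port 80.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: guestbook-gateway
spec:
  selector:
    istio: ingressgateway   # selects the istio-ingressgateway edge proxy pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                   # accept traffic for any host name
```

Note that a Gateway only configures the edge proxy's listener; routing to a backend still requires a VirtualService, which is the subject of the steps below.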
7 | 8 | #### Set the Ingress IP for future exercises 9 | 10 | Find the IP address of the Ingress Gateway: 11 | 12 | ```sh 13 | export INGRESS_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') 14 | echo $INGRESS_IP 15 | ``` 16 | 17 | `curl` the IP address: 18 | 19 | ```sh 20 | curl http://$INGRESS_IP 21 | ``` 22 | 23 | This will return a `connection refused` error. That is because there are no gateways configured to listen for any incoming connections. 24 | 25 | #### Configure Guestbook Ingress 26 | 27 | 1 - Create a new Gateway 28 | 29 | ```sh 30 | kubectl apply -f istio/guestbook-gateway.yaml 31 | ``` 32 | 33 | There is a `selector` block in the gateway definition. The `selector` will be used to find the actual edge proxy that should be configured to accept traffic for the gateway. This example selects pods from any namespace that have the label `istio` with the value `ingressgateway`. 34 | 35 | This is similar to running: 36 | ```sh 37 | kubectl get pods -l istio=ingressgateway --all-namespaces 38 | ``` 39 | 40 | You'll find a single pod that matches this label, which is the Ingress Gateway that we looked at earlier. 41 | ``` 42 | NAMESPACE NAME READY STATUS RESTARTS AGE 43 | istio-system istio-ingressgateway-... 1/1 Running 0 7d 44 | ``` 45 | 46 | After creating the Gateway, the Istio Ingress Gateway will listen on port `80`. 47 | The `hosts` block of the configuration can be used to configure virtual hosting. E.g., the same IP address can be configured to respond to different host names with different routing rules. 48 | 49 | If you `curl` the ingress IP again: 50 | ```sh 51 | curl -v http://$INGRESS_IP 52 | ``` 53 | 54 | Rather than `connection refused`, you should see the server respond with a `404 Not Found` HTTP response. This is because we have not bound any backends to this gateway yet.
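Binding a backend is the job of a VirtualService, which the next step applies. As an illustrative sketch (the actual `istio/guestbook-ui-vs.yaml` may differ in its details):

```yaml
# Sketch of a VirtualService binding the guestbook UI to the gateway.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: guestbook-ui
spec:
  hosts:
  - "*"                     # match any host name
  gateways:
  - guestbook-gateway       # attach these routes to the Gateway created above
  http:
  - route:
    - destination:
        host: guestbook-ui  # the Kubernetes service backing the UI
        port:
          number: 80
```

Listing the Gateway under `gateways` is what turns this from an in-mesh routing rule into an ingress route.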
55 | 56 | 2 - Create a Virtual Service 57 | 58 | To bind a backend to the gateway, we'll need to create a virtual service. 59 | 60 | ```sh 61 | kubectl apply -f istio/guestbook-ui-vs.yaml 62 | ``` 63 | 64 | A virtual service is a logical grouping of routing rules for a given target service. For ingress, we can use a virtual service to bind to a gateway. 65 | 66 | This example binds to the gateway we just created, and will respond to any hostname. Again, if you need to use virtual hosting and respond to different host names, you can specify them in the `hosts` section. 67 | 68 | 3 - Connect via the Ingress Gateway 69 | 70 | Try connecting to the Ingress Gateway again. 71 | 72 | ```sh 73 | curl http://$INGRESS_IP 74 | ``` 75 | 76 | This time you should see the HTTP response! 77 | 78 | 4 - In a web browser, navigate to the Guestbook UI using the Ingress Gateway IP address. 79 | 80 | 5 - Say Hello a few times 81 | 82 | #### Optional - Inspecting the Istio Ingress Gateway 83 | 84 | The ingress controller gets exposed as a normal Kubernetes service load balancer: 85 | 86 | ```sh 87 | kubectl get svc istio-ingressgateway -n istio-system -o yaml 88 | ``` 89 | 90 | Because the Istio Ingress Controller is an Envoy Proxy, you can inspect it using the admin routes. First, look up the ingress proxy pod and forward its admin port: 91 | 92 | 93 | ```sh 94 | kubectl port-forward $(kubectl -n istio-system get pod -l app=istio-ingressgateway \ 95 | -o jsonpath='{.items[0].metadata.name}') \ 96 | -n istio-system 8080:15000 97 | ``` 98 | 99 | From the Cloud Shell, use the Web Preview button in the top right corner.
100 | 101 | You can view the statistics, listeners, routes, clusters and server info for the Envoy proxy by querying the forwarded local port: 102 | 103 | ```sh 104 | curl localhost:8080/help 105 | curl localhost:8080/stats 106 | curl localhost:8080/listeners 107 | curl localhost:8080/routes 108 | curl localhost:8080/clusters 109 | curl localhost:8080/server_info 110 | ``` 111 | 112 | See the [admin docs](https://www.envoyproxy.io/docs/envoy/v1.5.0/operations/admin) for more details. 113 | 114 | #### Optional - Inspecting the Istio Log Files 115 | 116 | It can be helpful to look at the log files of the Istio ingress controller to see which requests are being routed. First find the ingress pod, then output the log files: 117 | 118 | ```sh 119 | kubectl logs istio-ingressgateway-... -n istio-system 120 | ``` 121 | 122 | #### [Exercise 8 - Telemetry](../exercise-8/README.md) 123 | -------------------------------------------------------------------------------- /exercise-8/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 8 - Telemetry 2 | 3 | #### Generate Guestbook Telemetry data 4 | 5 | Generate a small load against the application using either a shell script or fortio: 6 | 7 | With a simple shell script: 8 | 9 | ```sh 10 | while sleep 0.5; do curl http://$INGRESS_IP; done 11 | ``` 12 | 13 | Or, with fortio: 14 | 15 | ```sh 16 | docker run istio/fortio load -t 5m -qps 5 \ 17 | http://$INGRESS_IP 18 | ``` 19 | 20 | ### Grafana 21 | 22 | 23 | Establish a port forward from local port 3000 to the Grafana instance: 24 | ```sh 25 | kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana \ 26 | -o jsonpath='{.items[0].metadata.name}') 3000:3000 27 | ``` 28 | 29 | 30 | 31 | If you are in Cloud Shell, you'll need to use Web Preview and Change Port to `3000`.
32 | 33 | Browse to http://localhost:3000 and navigate to the different Istio Dashboards. On the left select the "Dashboards" logo, then click "Manage", then select the "Istio Mesh Dashboard" and "Istio Performance Dashboard" 34 | 35 | ### Prometheus 36 | ```sh 37 | kubectl -n istio-system port-forward \ 38 | $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') \ 39 | 9090:9090 40 | ``` 41 | 42 | If you are in Cloud Shell, you'll need to use Web Preview and Change Port to `9090`. 43 | 44 | Browse to http://localhost:9090/graph and in the “Expression” input box enter: `istio_request_bytes_count`. Click the Execute button. 45 | 46 | ### Service Graph 47 | 48 | The service graph functionality has been replaced with Kiali. For more details see: 49 | 50 | [Visualizing Your Mesh](https://istio.io/docs/tasks/telemetry/kiali/) 51 | 52 | #### [Continue to Exercise 9 - Distributed Tracing](../exercise-9/README.md) 53 | -------------------------------------------------------------------------------- /exercise-8a/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 8a - Additional Telemetry and Log 2 | 3 | First we need to configure Istio to automatically gather telemetry data for services running in the mesh. 4 | 5 | This is done by adding Istio configuration that instructs Mixer to automatically generate and report a new metric and a new log stream for all traffic within the mesh. 6 | 7 | The added configuration controls three pieces of Mixer functionality: 8 | 9 | * Generation of instances (in this example, metric values and log entries) from Istio attributes 10 | * Creation of handlers (configured Mixer adapters) capable of processing generated instances 11 | * Dispatch of instances to handlers according to a set of rules 12 | 13 | The metrics configuration directs Mixer to send metric values to Prometheus. 
It uses three stanzas (or blocks) of configuration: instance configuration, handler configuration, and rule configuration. 14 | 15 | #### Create a Rule to Collect Telemetry Data 16 | 17 | ```sh 18 | istioctl create -f guestbook/guestbook-telemetry.yaml 19 | ``` 20 | 21 | #### Mixer Log Stream 22 | 23 | The logs configuration directs Mixer to send log entries to stdout. It uses three stanzas (or blocks) of configuration: instance configuration, handler configuration, and rule configuration. 24 | 25 | ```sh 26 | kubectl -n istio-system logs -f $(kubectl -n istio-system get pods -l istio=mixer -o jsonpath='{.items[0].metadata.name}') mixer | grep \"instance\":\"newlog.logentry.istio-system\" 27 | ``` 28 | 29 | #### [Continue to Exercise 9 - Distributed Tracing](../exercise-9/README.md) 30 | -------------------------------------------------------------------------------- /exercise-9/README.md: -------------------------------------------------------------------------------- 1 | ## Exercise 9 - Distributed Tracing 2 | 3 | The sample guestbook application shows how a Spring Java application can be configured to collect trace spans using Zipkin or Jaeger. 4 | 5 | Although Istio proxies are able to automatically send spans, they need help from the application to tie together the entire trace. To do this, applications need to propagate the appropriate HTTP headers so that when the proxies send span information to Zipkin or Jaeger, the spans can be correlated correctly into a single trace. 6 | 7 | To do this, the guestbook application collects and propagates the following headers from the incoming request to any outgoing requests: 8 | 9 | - `x-request-id` 10 | - `x-b3-traceid` 11 | - `x-b3-spanid` 12 | - `x-b3-parentspanid` 13 | - `x-b3-sampled` 14 | - `x-b3-flags` 15 | - `x-ot-span-context` 16 | 17 | Our sample application uses Spring Boot 2 and the Spring Cloud Finchley release.
Spring Cloud has built-in Zipkin header propagation via Spring Cloud Sleuth, which automatically propagates the `x-b3-*` headers. 18 | 19 | To propagate the additional headers, the sample application configures extra propagation keys in `application.properties`, e.g., `spring.sleuth.propagation-keys=x-request-id,x-ot-span-context`. 20 | 21 | #### View Guestbook Traces 22 | 23 | Browse to the Guestbook UI and say Hello a few times. 24 | 25 | ### Jaeger 26 | 27 | The Istio demo configuration installs Jaeger for trace collection (Zipkin can be used interchangeably). Set up a port-forward in the usual way: 28 | 29 | ```sh 30 | kubectl -n istio-system port-forward $(kubectl -n istio-system get po -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686 31 | ``` 32 | 33 | Browse to http://localhost:16686. 34 | 35 | Under *Find Traces*, in the *Service* drop-down select *guestbook-ui*, then click *Find Traces*. 36 | 37 | #### [Continue to Exercise 10 - Request Routing and Canary Testing](../exercise-10/README.md) 38 | -------------------------------------------------------------------------------- /exercise-envoy/README.md: -------------------------------------------------------------------------------- 1 | # Exercise 4a - Running Envoy Manually 2 | 3 | This Envoy exercise is copied from [Front Proxy](https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy.html). 4 | 5 | ### Background on Docker 6 | 7 | If you have problems with Docker, the following commands are helpful: 8 | 9 | List all containers (running and stopped) and find the container ID: 10 | `docker ps -a` 11 | 12 | Force-remove a running container: 13 | `docker rm -f <CONTAINER_ID>` 14 | 15 | ### Docker File Sharing on Mac 16 | 17 | On a Mac you need to share the `envoy-front-proxy` directory. To mount the `envoy-front-proxy` directory inside the Docker container, you first need to set up Docker file sharing.
Under Docker preferences, go to the File Sharing dialog and add the `envoy-front-proxy` directory: 18 | 19 | ![Docker File Sharing](../images/docker_file_sharing.png) 20 | 21 | 22 | ### Docker Compose Overview 23 | 24 | ![Docker Compose Deployment](../images/docker_compose_v0.1.svg) 25 | 26 | ### Running the Sandbox 27 | 28 | The following documentation runs through the setup of an Envoy cluster organized as described in the image above. 29 | 30 | #### Step 1: Install Docker 31 | 32 | Ensure that you have recent versions of docker, docker-compose, and docker-machine installed. 33 | 34 | A simple way to achieve this is via the Docker Toolbox. 35 | 36 | #### Step 2: Start all of our containers 37 | 38 | Inside the istio-workshop directory: 39 | 40 | ```sh 41 | $ cd envoy-front-proxy 42 | $ docker-compose up --build -d 43 | $ docker-compose ps 44 | Name Command State Ports 45 | ------------------------------------------------------------------------------------------------------------- 46 | example_service1_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp 47 | example_service2_1 /bin/sh -c /usr/local/bin/ ... Up 80/tcp 48 | example_front-envoy_1 /bin/sh -c /usr/local/bin/ ... Up 0.0.0.0:8000->80/tcp, 0.0.0.0:8001->8001/tcp 49 | ``` 50 | 51 | #### Step 3: Test Envoy’s routing capabilities 52 | 53 | You can now send a request to both services via the front-envoy. 54 | 55 | For service1: 56 | 57 | ```sh 58 | $ curl -v localhost:8000/service/1 59 | ``` 60 | 61 | For service2: 62 | 63 | ```sh 64 | $ curl -v localhost:8000/service/2 65 | ``` 66 | 67 | #### Step 4: Test Envoy’s load balancing capabilities 68 | 69 | Now let’s scale up our service1 nodes to demonstrate the clustering abilities of Envoy: 70 | 71 | ```sh 72 | $ docker-compose scale service1=3 73 | Creating and starting example_service1_2 ... done 74 | Creating and starting example_service1_3 ... 
done 75 | ``` 76 | 77 | Now if we send a request to service1 multiple times, the front envoy will load-balance the requests in a round-robin across the three service1 containers: 78 | 79 | ```sh 80 | curl -v localhost:8000/service/1 81 | ``` 82 | 83 | #### Step 5: Enter containers and curl services 84 | 85 | In addition to using curl from your host machine, you can also enter the containers themselves and curl from inside them. To enter a container, use `docker-compose exec <container_name> /bin/bash`. For example, we can enter the front-envoy container and curl the services locally: 86 | 87 | ```sh 88 | $ docker-compose exec front-envoy /bin/bash 89 | root@81288499f9d7:/# curl localhost:80/service/1 90 | Hello from behind Envoy (service 1)! hostname: 85ac151715c6 resolvedhostname: 172.19.0.3 91 | root@81288499f9d7:/# curl localhost:80/service/1 92 | Hello from behind Envoy (service 1)! hostname: 20da22cfc955 resolvedhostname: 172.19.0.5 93 | root@81288499f9d7:/# curl localhost:80/service/1 94 | Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6 95 | root@81288499f9d7:/# curl localhost:80/service/2 96 | Hello from behind Envoy (service 2)! hostname: 92f4a3737bbc resolvedhostname: 172.19.0.2 97 | ``` 98 | 99 | #### Step 6: Curl the admin interface 100 | 101 | When Envoy runs, it also exposes an admin interface on a configured port. In the example configs the admin is bound to port 8001. We can curl it for useful information: for example, curl `/server_info` to get information about the Envoy version you are running, or `/stats` to get statistics. From inside the front-envoy container: 102 | 103 | ```sh 104 | $ curl localhost:8001/server_info 105 | envoy 10e00b/RELEASE live 142 142 0 106 | $ curl localhost:8001/stats 107 | cluster.service1.external.upstream_rq_200: 7 108 | ... 109 | cluster.service1.membership_change: 2 110 | cluster.service1.membership_total: 3 111 | ... 112 | cluster.service1.upstream_cx_http2_total: 3 113 | ... 
114 | cluster.service1.upstream_rq_total: 7 115 | ... 116 | cluster.service2.external.upstream_rq_200: 2 117 | ... 118 | cluster.service2.membership_change: 1 119 | cluster.service2.membership_total: 1 120 | ... 121 | cluster.service2.upstream_cx_http2_total: 1 122 | ... 123 | cluster.service2.upstream_rq_total: 2 124 | ... 125 | ``` 126 | 127 | Notice that we can get the number of members of each upstream cluster, the number of requests they have fulfilled, information about HTTP ingress, and a plethora of other useful stats. 128 | 129 | #### [Continue to Exercise 5 - Installing Istio](../exercise-5/README.md) 130 | -------------------------------------------------------------------------------- /images/boost_mode.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/images/boost_mode.png -------------------------------------------------------------------------------- /images/cloud_shell.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/images/cloud_shell.png -------------------------------------------------------------------------------- /images/docker_file_sharing.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/images/docker_file_sharing.png -------------------------------------------------------------------------------- /images/homescreen.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/images/homescreen.png -------------------------------------------------------------------------------- /images/homescreen2.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/images/homescreen2.png -------------------------------------------------------------------------------- /images/k8console.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/images/k8console.png -------------------------------------------------------------------------------- /images/project_name.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/images/project_name.jpg -------------------------------------------------------------------------------- /images/welcomeaccount.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/retroryan/istio-workshop/2f32b0aa6c834e9e6ca89b407545315db1d2a064/images/welcomeaccount.png -------------------------------------------------------------------------------- /istio/deny-guestbook-service.yaml: -------------------------------------------------------------------------------- 1 | # Create a denier that returns a google.rpc.Code 7 (PERMISSION_DENIED) 2 | apiVersion: config.istio.io/v1alpha2 3 | kind: denier 4 | metadata: 5 | name: denyall 6 | namespace: istio-system 7 | spec: 8 | status: 9 | code: 7 10 | message: Not allowed 11 | --- 12 | # The (empty) data handed to denyall at run time 13 | apiVersion: config.istio.io/v1alpha2 14 | kind: checknothing 15 | metadata: 16 | name: denyrequest 17 | namespace: istio-system 18 | spec: 19 | --- 20 | # The rule that uses the denier to deny requests destined for guestbook-service 21 | apiVersion: config.istio.io/v1alpha2 22 | kind: rule 23 | metadata: 24 | name: 
deny-guestbook-service 25 | namespace: istio-system 26 | spec: 27 | match: destination.service=="guestbook-service.default.svc.cluster.local" 28 | actions: 29 | - handler: denyall.denier 30 | instances: 31 | - denyrequest.checknothing 32 | -------------------------------------------------------------------------------- /istio/guestbook-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: guestbook-gateway 5 | spec: 6 | selector: 7 | istio: ingressgateway 8 | servers: 9 | - hosts: 10 | - "*" 11 | port: 12 | number: 80 13 | name: http 14 | protocol: HTTP 15 | -------------------------------------------------------------------------------- /istio/guestbook-service-503-vs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: guestbook-service 5 | spec: 6 | hosts: 7 | - guestbook-service 8 | http: 9 | - route: 10 | - destination: 11 | host: guestbook-service 12 | fault: 13 | abort: 14 | percent: 100 15 | httpStatus: 503 16 | -------------------------------------------------------------------------------- /istio/guestbook-service-dest.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: DestinationRule 3 | metadata: 4 | name: guestbook-service 5 | spec: 6 | host: guestbook-service 7 | trafficPolicy: 8 | connectionPool: 9 | tcp: 10 | maxConnections: 100 11 | http: 12 | maxRequestsPerConnection: 10 13 | http1MaxPendingRequests: 1024 14 | outlierDetection: 15 | consecutiveErrors: 7 16 | interval: 5m 17 | baseEjectionTime: 15m 18 | -------------------------------------------------------------------------------- /istio/guestbook-service-retry-vs.yaml: -------------------------------------------------------------------------------- 1 | 
apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: guestbook-service 5 | spec: 6 | hosts: 7 | - guestbook-service 8 | http: 9 | - route: 10 | - destination: 11 | host: guestbook-service 12 | retries: 13 | attempts: 3 14 | perTryTimeout: 2s 15 | 16 | -------------------------------------------------------------------------------- /istio/guestbook-ui-80p-v1-vs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: guestbook-ui 5 | spec: 6 | hosts: 7 | - "*" 8 | gateways: 9 | - guestbook-gateway 10 | http: 11 | - match: 12 | - uri: 13 | prefix: / 14 | route: 15 | - destination: 16 | host: guestbook-ui 17 | subset: v1 18 | weight: 80 19 | - destination: 20 | host: guestbook-ui 21 | subset: v2 22 | weight: 20 23 | -------------------------------------------------------------------------------- /istio/guestbook-ui-chrome-vs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: guestbook-ui 5 | spec: 6 | hosts: 7 | - "*" 8 | gateways: 9 | - guestbook-gateway 10 | http: 11 | - match: 12 | - uri: 13 | prefix: / 14 | headers: 15 | user-agent: 16 | regex: ".*Chrome.*" 17 | route: 18 | - destination: 19 | host: guestbook-ui 20 | subset: v2 21 | - match: 22 | - uri: 23 | prefix: / 24 | route: 25 | - destination: 26 | host: guestbook-ui 27 | subset: v1 28 | -------------------------------------------------------------------------------- /istio/guestbook-ui-delay-vs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: guestbook-ui 5 | spec: 6 | hosts: 7 | - "*" 8 | gateways: 9 | - guestbook-gateway 10 | http: 11 | - match: 12 | - uri: 13 | prefix: / 14 | route: 15 | - 
destination: 16 | host: guestbook-ui 17 | fault: 18 | delay: 19 | percent: 100 20 | fixedDelay: 5s 21 | -------------------------------------------------------------------------------- /istio/guestbook-ui-dest.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: DestinationRule 3 | metadata: 4 | name: guestbook-ui 5 | spec: 6 | host: guestbook-ui 7 | subsets: 8 | - name: v1 9 | labels: 10 | version: "1.0" 11 | - name: v2 12 | labels: 13 | version: "2.0" 14 | -------------------------------------------------------------------------------- /istio/guestbook-ui-v1-vs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: guestbook-ui 5 | spec: 6 | hosts: 7 | - "*" 8 | gateways: 9 | - guestbook-gateway 10 | http: 11 | - match: 12 | - uri: 13 | prefix: / 14 | route: 15 | - destination: 16 | host: guestbook-ui 17 | subset: v1 18 | -------------------------------------------------------------------------------- /istio/guestbook-ui-vs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: guestbook-ui 5 | spec: 6 | hosts: 7 | - "*" 8 | gateways: 9 | - guestbook-gateway 10 | http: 11 | - match: 12 | - uri: 13 | prefix: / 14 | route: 15 | - destination: 16 | host: guestbook-ui 17 | -------------------------------------------------------------------------------- /istio/helloworld-service-80p-v1-vs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: helloworld-service 5 | spec: 6 | hosts: 7 | - helloworld-service 8 | http: 9 | - route: 10 | - destination: 11 | host: helloworld-service 12 | subset: v1 13 | weight: 80 
14 | - destination: 15 | host: helloworld-service 16 | subset: v2 17 | weight: 20 18 | 19 | -------------------------------------------------------------------------------- /istio/helloworld-service-dest.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: DestinationRule 3 | metadata: 4 | name: helloworld-service 5 | spec: 6 | host: helloworld-service 7 | subsets: 8 | - name: v1 9 | labels: 10 | version: "1.0" 11 | - name: v2 12 | labels: 13 | version: "2.0" 14 | -------------------------------------------------------------------------------- /istio/helloworld-service-v1-vs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: helloworld-service 5 | spec: 6 | hosts: 7 | - helloworld-service 8 | http: 9 | - route: 10 | - destination: 11 | host: helloworld-service 12 | subset: v1 13 | -------------------------------------------------------------------------------- /istio/rate-limits.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: config.istio.io/v1alpha2 2 | kind: memquota 3 | metadata: 4 | name: handler 5 | namespace: istio-system 6 | spec: 7 | quotas: 8 | - name: requestcount.quota.istio-system 9 | maxAmount: 100 10 | validDuration: 1s 11 | overrides: 12 | - dimensions: 13 | source: guestbook-ui 14 | destination: guestbook-service 15 | maxAmount: 1 16 | validDuration: 1s 17 | - dimensions: 18 | destination: helloworld-service 19 | maxAmount: 500 20 | validDuration: 1s 21 | --- 22 | apiVersion: config.istio.io/v1alpha2 23 | kind: quota 24 | metadata: 25 | name: requestcount 26 | namespace: istio-system 27 | spec: 28 | dimensions: 29 | source: source.labels["app"] | source.service | "unknown" 30 | sourceVersion: source.labels["version"] | "unknown" 31 | destination: destination.labels["app"] | 
destination.service | "unknown" 32 | destinationVersion: destination.labels["version"] | "unknown" 33 | --- 34 | apiVersion: "config.istio.io/v1alpha2" 35 | kind: rule 36 | metadata: 37 | name: quota 38 | namespace: istio-system 39 | spec: 40 | actions: 41 | - handler: handler.memquota 42 | instances: 43 | - requestcount.quota 44 | --- 45 | apiVersion: config.istio.io/v1alpha2 46 | kind: QuotaSpec 47 | metadata: 48 | name: request-count 49 | namespace: istio-system 50 | spec: 51 | rules: 52 | - quotas: 53 | - charge: 1 54 | quota: RequestCount 55 | --- 56 | apiVersion: config.istio.io/v1alpha2 57 | kind: QuotaSpecBinding 58 | metadata: 59 | name: request-count 60 | namespace: istio-system 61 | spec: 62 | quotaSpecs: 63 | - name: request-count 64 | namespace: istio-system 65 | services: 66 | - name: guestbook-ui 67 | - name: helloworld-service 68 | - name: guestbook-service 69 | -------------------------------------------------------------------------------- /kubernetes-v2/guestbook-ui-deployment-v2.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | apiVersion: extensions/v1beta1 16 | kind: Deployment 17 | metadata: 18 | name: guestbook-ui-v2 19 | labels: 20 | app: guestbook-ui 21 | version: "2.0" 22 | visualize: "true" 23 | spec: 24 | replicas: 1 25 | selector: 26 | matchLabels: 27 | app: guestbook-ui 28 | serving: "true" 29 | template: 30 | metadata: 31 | labels: 32 | app: guestbook-ui 33 | version: "2.0" 34 | serving: "true" 35 | visualize: "true" 36 | annotations: 37 | visualizer/uses: helloworld-service,guestbook-service,redis 38 | spec: 39 | containers: 40 | - name: guestbook-ui 41 | image: saturnism/guestbook-ui-istio:2.0 42 | env: 43 | - name: BACKEND_GUESTBOOK_SERVICE_URL 44 | value: http://guestbook-service:80/messages 45 | ports: 46 | - name: http 47 | containerPort: 8080 48 | -------------------------------------------------------------------------------- /kubernetes-v2/helloworld-deployment-v2.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | apiVersion: extensions/v1beta1 16 | kind: Deployment 17 | metadata: 18 | name: helloworld-service-v2 19 | labels: 20 | app: helloworld-service-v2 21 | visualize: "true" 22 | version: "2.0" 23 | spec: 24 | replicas: 1 25 | selector: 26 | matchLabels: 27 | app: helloworld-service 28 | serving: "true" 29 | template: 30 | metadata: 31 | labels: 32 | app: helloworld-service 33 | serving: "true" 34 | visualize: "true" 35 | version: "2.0" 36 | spec: 37 | containers: 38 | - name: helloworld-service 39 | image: saturnism/helloworld-service-istio:1.0 40 | env: 41 | - name: version 42 | value: "2.0" 43 | - name: GREETING 44 | value: "Hola $name from $hostname with 2.0" 45 | ports: 46 | - name: http 47 | containerPort: 8080 48 | -------------------------------------------------------------------------------- /kubernetes/guestbook-deployment.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | apiVersion: extensions/v1beta1 16 | kind: Deployment 17 | metadata: 18 | labels: 19 | app: guestbook-service 20 | visualize: "true" 21 | name: guestbook-service 22 | spec: 23 | replicas: 1 24 | selector: 25 | matchLabels: 26 | app: guestbook-service 27 | serving: "true" 28 | template: 29 | metadata: 30 | labels: 31 | app: guestbook-service 32 | serving: "true" 33 | version: "1.0" 34 | visualize: "true" 35 | spec: 36 | containers: 37 | - name: guestbook-service 38 | image: saturnism/guestbook-service-istio:1.0 39 | ports: 40 | - containerPort: 8080 41 | name: http 42 | -------------------------------------------------------------------------------- /kubernetes/guestbook-service.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | kind: Service 16 | apiVersion: v1 17 | metadata: 18 | name: guestbook-service 19 | labels: 20 | app: guestbook-service 21 | visualize: "true" 22 | spec: 23 | ports: 24 | - port: 80 25 | targetPort: 8080 26 | name: http 27 | selector: 28 | app: guestbook-service 29 | serving: "true" 30 | -------------------------------------------------------------------------------- /kubernetes/guestbook-ui-deployment.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 
2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | apiVersion: extensions/v1beta1 16 | kind: Deployment 17 | metadata: 18 | name: guestbook-ui 19 | labels: 20 | app: guestbook-ui 21 | visualize: "true" 22 | spec: 23 | replicas: 1 24 | selector: 25 | matchLabels: 26 | app: guestbook-ui 27 | serving: "true" 28 | template: 29 | metadata: 30 | labels: 31 | app: guestbook-ui 32 | version: "1.0" 33 | serving: "true" 34 | visualize: "true" 35 | annotations: 36 | visualizer/uses: helloworld-service,guestbook-service,redis 37 | spec: 38 | containers: 39 | - name: guestbook-ui 40 | image: saturnism/guestbook-ui-istio:1.0 41 | env: 42 | - name: BACKEND_GUESTBOOK_SERVICE_URL 43 | value: http://guestbook-service:80/messages 44 | ports: 45 | - name: http 46 | containerPort: 8080 47 | -------------------------------------------------------------------------------- /kubernetes/guestbook-ui-service.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 
5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | kind: Service 16 | apiVersion: v1 17 | metadata: 18 | name: guestbook-ui 19 | labels: 20 | app: guestbook-ui 21 | visualize: "true" 22 | spec: 23 | type: LoadBalancer 24 | ports: 25 | - port: 80 26 | targetPort: 8080 27 | name: http 28 | selector: 29 | app: guestbook-ui 30 | serving: "true" 31 | -------------------------------------------------------------------------------- /kubernetes/helloworld-deployment.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | apiVersion: extensions/v1beta1 16 | kind: Deployment 17 | metadata: 18 | name: helloworld-service-v1 19 | labels: 20 | app: helloworld-service-v1 21 | visualize: "true" 22 | version: "1.0" 23 | spec: 24 | replicas: 1 25 | selector: 26 | matchLabels: 27 | app: helloworld-service 28 | serving: "true" 29 | template: 30 | metadata: 31 | labels: 32 | app: helloworld-service 33 | serving: "true" 34 | visualize: "true" 35 | version: "1.0" 36 | spec: 37 | containers: 38 | - name: helloworld-service 39 | image: saturnism/helloworld-service-istio:1.0 40 | env: 41 | - name: version 42 | value: "1.0" 43 | ports: 44 | - name: http 45 | containerPort: 8080 46 | -------------------------------------------------------------------------------- /kubernetes/helloworld-service.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | kind: Service 16 | apiVersion: v1 17 | metadata: 18 | name: helloworld-service 19 | labels: 20 | app: helloworld-service 21 | visualize: "true" 22 | spec: 23 | type: LoadBalancer 24 | ports: 25 | - port: 8080 26 | targetPort: 8080 27 | name: http 28 | selector: 29 | app: helloworld-service 30 | serving: "true" 31 | -------------------------------------------------------------------------------- /kubernetes/mysql-deployment.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | apiVersion: extensions/v1beta1 16 | kind: Deployment 17 | metadata: 18 | name: mysql 19 | labels: 20 | app: mysql 21 | visualize: "true" 22 | spec: 23 | replicas: 1 24 | template: 25 | metadata: 26 | labels: 27 | app: mysql 28 | visualize: "true" 29 | spec: 30 | containers: 31 | - name: mysql 32 | image: mysql:5.6 33 | livenessProbe: 34 | tcpSocket: 35 | port: 3306 36 | env: 37 | - name: MYSQL_ROOT_PASSWORD 38 | # change this 39 | value: yourpassword 40 | - name: MYSQL_DATABASE 41 | value: app 42 | ports: 43 | - containerPort: 3306 44 | name: mysql 45 | -------------------------------------------------------------------------------- /kubernetes/mysql-service.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 
2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | apiVersion: v1 16 | kind: Service 17 | metadata: 18 | name: mysql 19 | labels: 20 | app: mysql 21 | visualize: "true" 22 | spec: 23 | ports: 24 | # the port that this service should serve on 25 | - port: 3306 26 | name: notsql 27 | # label keys and values that must match in order to receive traffic for this service 28 | selector: 29 | app: mysql 30 | -------------------------------------------------------------------------------- /kubernetes/redis-deployment.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Google Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 
15 | apiVersion: extensions/v1beta1
16 | kind: Deployment
17 | metadata:
18 |   name: redis
19 |   labels:
20 |     app: redis
21 |     visualize: "true"
22 | spec:
23 |   replicas: 1
24 |   template:
25 |     metadata:
26 |       labels:
27 |         app: redis
28 |         version: "4.0.2"
29 |         visualize: "true"
30 |     spec:
31 |       containers:
32 |       - name: redis
33 |         image: redis:4.0.2  # pinned to match the version label above
34 |         livenessProbe:
35 |           tcpSocket:
36 |             port: 6379
37 |         ports:
38 |         - name: redis-server
39 |           containerPort: 6379
40 | 
--------------------------------------------------------------------------------
/kubernetes/redis-service.yaml:
--------------------------------------------------------------------------------
1 | # Copyright 2017 Google Inc.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | #     http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 | 
15 | # DO NOT NAME THE PORT REDIS!!! This routes the connection through Istio and breaks the application.
16 | 
17 | kind: Service
18 | apiVersion: v1
19 | metadata:
20 |   name: redis
21 |   labels:
22 |     app: redis
23 |     visualize: "true"
24 | spec:
25 |   ports:
26 |   - port: 6379
27 |   selector:
28 |     app: redis
29 | 
--------------------------------------------------------------------------------
/scripts/add_helm.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | 
3 | # Copied from https://github.com/jonbcampos/kubernetes-series
4 | # Full details at:
5 | # https://medium.com/google-cloud/installing-helm-in-google-kubernetes-engine-7f07f43c536e
6 | 
7 | echo "install helm"
8 | # install helm with bash commands for easier command-line integration
9 | curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
10 | # add a service account within a namespace to segregate tiller
11 | kubectl --namespace kube-system create sa tiller
12 | # create a cluster role binding for tiller
13 | kubectl create clusterrolebinding tiller \
14 |     --clusterrole cluster-admin \
15 |     --serviceaccount=kube-system:tiller
16 | 
17 | echo "initialize helm"
18 | # initialize helm (tiller) using the tiller service account
19 | helm init --service-account tiller
20 | # update the local Helm chart repository index
21 | helm repo update
22 | 
23 | echo "verify helm"
24 | # verify that tiller is deployed in the cluster
25 | kubectl get deploy,svc tiller-deploy -n kube-system
--------------------------------------------------------------------------------
/setup/README.md:
--------------------------------------------------------------------------------
1 | ## Workshop Setup
2 | 
3 | ### Google Cloud Console Setup
4 | 
5 | 1 - The instructor will provide you with a temporary username / password to sign in to the Google Cloud Platform Console.
6 | 
7 | **Important:** To avoid conflicts with your personal account, please open a new incognito window for the rest of this lab.
8 | 
9 | Sign in to the Google Cloud Platform Console: [https://console.cloud.google.com/home](https://console.cloud.google.com/home) with the provided credentials. In the Welcome to your new account dialog, click Accept.
10 | 
11 | ![Welcome to your new account](../images/welcomeaccount.png)
12 | 
13 | 2 - Check I agree and click Agree and Continue.
14 | 
15 | 3 - If you see a top bar with Sign Up for Free Trial - DO NOT SIGN UP FOR THE FREE TRIAL. Click Dismiss, since you'll be using a pre-provisioned lab account. If you are doing this on your own account, you may want the free trial.
16 | 
17 | ![Google Cloud Console Setup](../images/homescreen.png)
18 | 
19 | 4 - Select the only Google Cloud Platform project in the project list. If you don't see a project, let the instructor know!
20 | 
21 | ![Google Cloud Console Setup 2](../images/homescreen2.png)
22 | 
23 | 5 - Take note of the full project name from the URL, such as codeone19-sfo-... :
24 | 
25 | ![Google Cloud Project Name](../images/project_name.jpg)
26 | 
27 | ## Google Cloud Shell
28 | 
29 | You will do most of the work from the [Google Cloud Shell](https://cloud.google.com/developer-shell/#how_do_i_get_started), a command-line environment running in the cloud. This Debian-based virtual machine is loaded with all the development tools you'll need (docker, gcloud, kubectl, and others) and offers a persistent 5 GB home directory. Open the Google Cloud Shell by clicking the icon at the top right of the screen:
30 | 
31 | ![Google Cloud Shell Setup](../images/cloud_shell.png)
32 | 
33 | 1. When prompted, click Start Cloud Shell. You should see the shell prompt at the bottom of the window.
34 | 
35 | 2. Check whether Cloud Shell's Boost mode is enabled.
36 | 
37 | ![Shell Boost Mode](../images/boost_mode.png)
38 | 
39 | 3. If it is not, enable Boost mode for Cloud Shell.
40 | 
41 | ## Download the workshop source into the Cloud Shell:
42 | 
43 | Clone from the current repository, e.g.
44 | 
45 | `git clone https://github.com/retroryan/istio-workshop`
46 | 
47 | ## Optional - Local Setup
48 | 
49 | You can set up the Google Cloud SDK locally, but that is out of the scope of this workshop. See the [Cloud SDK documentation](https://cloud.google.com/sdk/) for more information.
50 | 
51 | #### [Continue to Exercise 1](../exercise-1/README.md)
52 | 
--------------------------------------------------------------------------------
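A note on the mysql-deployment manifest earlier in this repo: it injects `MYSQL_ROOT_PASSWORD` as a plain-text `value:` flagged with a "change this" comment. A common hardening step, sketched here with a hypothetical Secret named `mysql-pass` (not part of the workshop manifests), is to source the password from a Kubernetes Secret instead:

```yaml
# Hypothetical fragment replacing the env section of the mysql container.
# Create the Secret first, for example:
#   kubectl create secret generic mysql-pass --from-literal=password=yourpassword
env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-pass   # hypothetical Secret name
      key: password
- name: MYSQL_DATABASE
  value: app
```

This keeps the credential out of the manifest checked into git while leaving the rest of the Deployment unchanged.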
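The "DO NOT NAME THE PORT REDIS" warning in redis-service.yaml, and the deliberately odd `notsql` port name in mysql-service.yaml, both stem from Istio's port-naming convention: a Service port named `<protocol>` or `<protocol>-<suffix>` (http, http2, grpc, redis, mongo, tcp, ...) tells Istio to apply that protocol's handling, while unnamed or unrecognized names are forwarded as opaque TCP. A hypothetical illustration (not one of the workshop manifests; names are invented):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: example-backend   # hypothetical service, for illustration only
spec:
  ports:
  - port: 8080
    name: http        # "http" prefix: Istio applies HTTP routing to this port
  - port: 6379
    name: tcp-cache   # "tcp" prefix: forwarded as opaque TCP, no Redis handling
  selector:
    app: example-backend
```

The exact set of recognized protocol names depends on the Istio version in use, so when in doubt, prefer an explicit `tcp-` prefix for ports that should bypass protocol-specific handling.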