├── .gitignore ├── LICENSE ├── README.md ├── Vagrantfile ├── demo.sh ├── docker ├── istio │ ├── demo-gateway.yaml │ ├── grafana-gateway.yaml │ ├── jaeger-gateway.yaml │ ├── servicegraph-gateway.yaml │ └── zipkin-gateway.yaml ├── remove.ps1 ├── remove.sh ├── setup.ps1 └── setup.sh ├── istio-talk.bmpr ├── istio-talk.slide ├── media ├── CNCF.png ├── bug.png ├── deathstar.jpg ├── envoy.png ├── gopher.png ├── grafana.png ├── istio-talk-1.png ├── istio-talk-2.png ├── istio-talk-3.png ├── istio-talk-4.png ├── istio-talk-5.png ├── istio.png ├── plane_bg.png ├── resiliency_demo.png ├── sap.png ├── servicegraph.png ├── trafficshifting_demo.png └── zipkin.png ├── resiliency ├── README.md ├── demo.json ├── hurl │ ├── README.md │ ├── hurl.png │ └── kube │ │ └── deployment.yaml ├── istio.png ├── istio.pxm ├── istio │ ├── inject-abort.yaml │ ├── inject-big-delay.yaml │ ├── inject-delay.yaml │ ├── request-timeout.yaml │ └── services.yaml ├── resiliency.bmpr ├── resiliency.png ├── startDemo.sh └── webnull │ ├── README.md │ ├── kube │ └── deployment.yaml │ └── webnull.png ├── restartPods.sh ├── trafficshifting ├── README.md ├── demo.json ├── istio │ ├── service-be-v1-v2.yaml │ ├── service-be-v2-v3.yaml │ ├── service-be-v2.yaml │ ├── service-be-v3.yaml │ ├── service-mt-retry.yaml │ └── services-all-v1.yaml ├── startDemo.sh ├── topdog │ ├── README.md │ ├── dogs.png │ ├── kube │ │ ├── deployment-be-v1.yaml │ │ ├── deployment-be-v2.yaml │ │ ├── deployment-be-v3.yaml │ │ ├── deployment-mt-v1.yaml │ │ └── deployment-ui-v1.yaml │ └── topdog.png ├── trafficshifting.bmpr └── trafficshifting.png ├── vagrant-demo.sh └── vagrant ├── istio ├── demo-gateway.yaml ├── grafana-gateway.yaml ├── ingress.yaml ├── jaeger-gateway.yaml ├── servicegraph-gateway.yaml └── zipkin-gateway.yaml └── setup.sh /.gitignore: -------------------------------------------------------------------------------- 1 | # Vagrant 2 | .vagrant 3 | *.log 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Michael Lore 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # istio-talk 2 | 3 | This repository contains demo code for my talk on [Istio]. 
4 | 5 | To view the slides, run Go's [present](https://godoc.org/golang.org/x/tools/present) tool from the project folder and select `istio-talk.slide`. 6 | 7 | ## Resiliency Demo 8 | 9 | See the [walk-through](resiliency/README.md). 10 | 11 | ![Resiliency Demo](media/resiliency_demo.png) 12 | 13 | To use my [demo tool], follow the setup instructions in the [walk-through](resiliency/README.md) and then run [startDemo.sh](resiliency/startDemo.sh) from the `resiliency` folder. 14 | 15 | ## Traffic Shifting Demo 16 | 17 | See the [walk-through](trafficshifting/README.md). 18 | 19 | ![Traffic Shifting Demo](media/trafficshifting_demo.png) 20 | 21 | To use my [demo tool], follow the setup instructions in the [walk-through](trafficshifting/README.md) and then run [startDemo.sh](trafficshifting/startDemo.sh) from the `trafficshifting` folder. 22 | 23 | ## Vagrant Version 24 | 25 | I've created a Vagrant version to make it easy to spin up an environment for running the demos. Pull this repository and then `vagrant up` and `vagrant ssh`. Then run the demo script from inside the box: 26 | 27 | /vagrant/vagrant-demo.sh 28 | 29 | This will start the `present` tool on http://192.168.99.101:8080/ and the first demo (traffic shifting) on http://192.168.99.101:8081/. After showing the first demo, press return and the second demo (resiliency) will start on the same port. Press return again and the demo and `present` should stop. 30 | 31 | The Vagrant version also has the Istio tools available at: 32 | 33 | * http://jaeger.192.168.99.101.xip.io/zipkin 34 | * http://zipkin.192.168.99.101.xip.io/zipkin 35 | * http://grafana.192.168.99.101.xip.io/ 36 | * http://servicegraph.192.168.99.101.xip.io/dotviz 37 | 38 | And the demo pods can be reached directly at: 39 | 40 | * http://topdog.192.168.99.101.xip.io/ 41 | * http://hurl.192.168.99.101.xip.io/ 42 | * http://webnull.192.168.99.101.xip.io/status 43 | 44 | ## Docker for Desktop Version 45 | 46 | If you have Docker for Desktop, you can enable Kubernetes and then [install Istio](https://istio.io/docs/setup/kubernetes/quick-start/). You will also need to [install Go](https://golang.org/doc/install), making sure it's on your path. 47 | 48 | Install the Go tools as well: 49 | 50 | go get golang.org/x/tools/cmd/present 51 | git clone https://github.com/ancientlore/demon 52 | cd demon && go install && cd - 53 | rm -rf demon 54 | 55 | The following script will set up the demos: 56 | 57 | ./docker/setup.sh 58 | 59 | Then run `demo.sh` to start the demo. 60 | 61 | The Docker version also has the Istio tools available at: 62 | 63 | * http://jaeger.127.0.0.1.xip.io/zipkin 64 | * http://zipkin.127.0.0.1.xip.io/zipkin 65 | * http://grafana.127.0.0.1.xip.io/ 66 | * http://servicegraph.127.0.0.1.xip.io/dotviz 67 | 68 | And the demo pods can be reached directly at: 69 | 70 | * http://topdog.127.0.0.1.xip.io/ 71 | * http://hurl.127.0.0.1.xip.io/ 72 | * http://webnull.127.0.0.1.xip.io/status 73 | 74 | ## Notes 75 | 76 | These demos use other utilities I've created: 77 | 78 | * [topdog], a demo application written in Go. Also see the [topdog Docker image]. 79 | * [webnull], a service that tosses away requests and graphs throughput. Also see the [webnull Docker image]. 80 | * [hurl], a cURL-like application designed to send many parallel HTTP requests to generate load. Also see the [hURL Docker image]. 81 | * [demon], a utility for showing the demos on one unified web page. 
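For reference, each demo is simply [demon] pointed at that folder's `demo.json` with a few environment variables set. A minimal sketch of the resiliency case, mirroring what `demo.sh` computes for you (the `xip.io` URLs assume the Docker for Desktop addresses above):

    ctx=$(kubectl config current-context)
    ns=$(kubectl config view -o jsonpath="{.contexts[?(@.name == \"$ctx\")].context.namespace}")
    hp=$(kubectl get pod -l app=hurl -o name | sed 's/^pod\///')
    cd resiliency
    KUBE_NAMESPACE=${ns:-default} WEBNULL=http://webnull.127.0.0.1.xip.io/status/ \
        HURL=http://hurl.127.0.0.1.xip.io/ POD=$hp demon -addr :8081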
82 | 83 | [Istio]: https://istio.io/ 84 | [topdog]: https://github.com/ancientlore/topdog 85 | [hURL]: https://github.com/ancientlore/hurl 86 | [webnull]: https://github.com/ancientlore/webnull 87 | [topdog Docker image]: https://hub.docker.com/r/ancientlore/topdog/ 88 | [webnull Docker image]: https://hub.docker.com/r/ancientlore/webnull/ 89 | [hURL Docker image]: https://hub.docker.com/r/ancientlore/hurl/ 90 | [demon]: https://github.com/ancientlore/demon 91 | [demo tool]: https://github.com/ancientlore/demon 92 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | Vagrant.configure("2") do |config| 2 | 3 | # Build 4 | config.vm.box = "ubuntu/bionic64" 5 | config.vm.box_check_update = true 6 | 7 | config.vm.network "private_network", ip: "192.168.99.101" 8 | 9 | #config.vm.network "forwarded_port", guest: 8080, host: 8080 10 | #config.vm.network "forwarded_port", guest: 8081, host: 8081 11 | #config.vm.network "forwarded_port", guest: 8082, host: 8082 12 | 13 | config.vm.provider "virtualbox" do |v| 14 | v.name = "istiotalk" 15 | v.memory = "4096" 16 | v.cpus = "2" 17 | end 18 | 19 | config.vm.provision "shell", path: "vagrant/setup.sh" 20 | 21 | end 22 | -------------------------------------------------------------------------------- /demo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Use this to run all the demos 4 | # Port 8080 - The presentation 5 | # Port 8081 - Traffic Shifting Demo 6 | # Port 8081 - Resiliency Demo (after pressing return) 7 | addr=$1 8 | addr=${addr:-127.0.0.1} 9 | 10 | ctx=`kubectl config current-context` 11 | echo kube context is $ctx 12 | ns=`kubectl config view -o jsonpath="{.contexts[?(@.name == \"$ctx\")].context.namespace}"` 13 | ns=${ns:-default} 14 | echo KUBE_NAMESPACE is $ns 15 | 16 | t="http://topdog.$addr.xip.io/" 17 | 18 | w="http://webnull.$addr.xip.io/status/" 19 | 20 | h="http://hurl.$addr.xip.io/" 21 | hp=`kubectl get pod -l app=hurl -o name | sed 's/^pod\///'` 22 | echo HURL pod is $hp 23 | 24 | # run present 25 | present -notes -http $addr:8080 -play=false & 26 | pid=$! 27 | 28 | # traffic shifting demo 29 | echo "*** TRAFFIC SHIFTING DEMO ***" 30 | cd trafficshifting 31 | KUBE_NAMESPACE=$ns TOPDOG=$t demon -addr :8081 32 | cd .. 33 | 34 | # resiliency demo 35 | echo "*** RESILIENCY DEMO ***" 36 | cd resiliency 37 | KUBE_NAMESPACE=$ns WEBNULL=$w HURL=$h POD=$hp demon -addr :8081 38 | cd .. 
39 | 40 | # stop present 41 | kill -9 $pid 42 | 43 | 44 | -------------------------------------------------------------------------------- /docker/istio/demo-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: topdog-gateway 5 | spec: 6 | servers: 7 | - port: 8 | number: 80 9 | name: http 10 | protocol: HTTP2 11 | hosts: 12 | - topdog.127.0.0.1.xip.io 13 | --- 14 | apiVersion: networking.istio.io/v1alpha3 15 | kind: Gateway 16 | metadata: 17 | name: hurl-gateway 18 | spec: 19 | servers: 20 | - port: 21 | number: 80 22 | name: http 23 | protocol: HTTP2 24 | hosts: 25 | - hurl.127.0.0.1.xip.io 26 | --- 27 | apiVersion: networking.istio.io/v1alpha3 28 | kind: Gateway 29 | metadata: 30 | name: webnull-gateway 31 | spec: 32 | servers: 33 | - port: 34 | number: 80 35 | name: http 36 | protocol: HTTP2 37 | hosts: 38 | - webnull.127.0.0.1.xip.io 39 | -------------------------------------------------------------------------------- /docker/istio/grafana-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: grafana-gateway 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 80 12 | name: http 13 | protocol: HTTP2 14 | hosts: 15 | - grafana.127.0.0.1.xip.io 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: grafana 21 | namespace: istio-system 22 | spec: 23 | host: grafana.istio-system.svc.cluster.local 24 | trafficPolicy: 25 | tls: 26 | mode: DISABLE 27 | --- 28 | apiVersion: networking.istio.io/v1alpha3 29 | kind: VirtualService 30 | metadata: 31 | name: grafana 32 | namespace: istio-system 33 | spec: 34 | hosts: 35 | - grafana 36 | - grafana.127.0.0.1.xip.io 37 | gateways: 38 | - grafana-gateway 39 | http: 40 | - match: 41 | - port: 80 42 | route: 43 | - destination: 44 | host: grafana.istio-system.svc.cluster.local 45 | port: 46 | number: 3000 47 | -------------------------------------------------------------------------------- /docker/istio/jaeger-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: jaeger-gateway 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 80 12 | name: http 13 | protocol: HTTP2 14 | hosts: 15 | - jaeger.127.0.0.1.xip.io 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: jaeger 21 | namespace: istio-system 22 | spec: 23 | host: jaeger-query.istio-system.svc.cluster.local 24 | trafficPolicy: 25 | tls: 26 | mode: DISABLE 27 | --- 28 | apiVersion: networking.istio.io/v1alpha3 29 | kind: VirtualService 30 | metadata: 31 | name: jaeger 32 | namespace: istio-system 33 | spec: 34 | hosts: 35 | - jaeger-query 36 | - jaeger.127.0.0.1.xip.io 37 | gateways: 38 | - jaeger-gateway 39 | http: 40 | - match: 41 | - port: 80 42 | route: 43 | - destination: 44 | host: jaeger-query.istio-system.svc.cluster.local 45 | port: 46 | number: 16686 47 | -------------------------------------------------------------------------------- /docker/istio/servicegraph-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: 
networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: servicegraph-gateway 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 80 12 | name: http 13 | protocol: HTTP2 14 | hosts: 15 | - servicegraph.127.0.0.1.xip.io 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: servicegraph 21 | namespace: istio-system 22 | spec: 23 | host: servicegraph.istio-system.svc.cluster.local 24 | trafficPolicy: 25 | tls: 26 | mode: DISABLE 27 | --- 28 | apiVersion: networking.istio.io/v1alpha3 29 | kind: VirtualService 30 | metadata: 31 | name: servicegraph 32 | namespace: istio-system 33 | spec: 34 | hosts: 35 | - servicegraph 36 | - servicegraph.127.0.0.1.xip.io 37 | gateways: 38 | - servicegraph-gateway 39 | http: 40 | - match: 41 | - port: 80 42 | route: 43 | - destination: 44 | host: servicegraph.istio-system.svc.cluster.local 45 | port: 46 | number: 8088 47 | -------------------------------------------------------------------------------- /docker/istio/zipkin-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: zipkin-gateway 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 80 12 | name: http 13 | protocol: HTTP2 14 | hosts: 15 | - zipkin.127.0.0.1.xip.io 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: zipkin 21 | namespace: istio-system 22 | spec: 23 | host: zipkin.istio-system.svc.cluster.local 24 | trafficPolicy: 25 | tls: 26 | mode: DISABLE 27 | --- 28 | apiVersion: networking.istio.io/v1alpha3 29 | kind: VirtualService 30 | metadata: 31 | name: zipkin 32 | namespace: istio-system 33 | spec: 34 | hosts: 35 | - zipkin 36 | - zipkin.127.0.0.1.xip.io 37 | gateways: 38 | - zipkin-gateway 39 | http: 40 | - match: 41 | - port: 80 42 | route: 43 | - destination: 44 | host: zipkin.istio-system.svc.cluster.local 45 | port: 46 | number: 9411 47 | -------------------------------------------------------------------------------- /docker/remove.ps1: -------------------------------------------------------------------------------- 1 | 2 | kubectl delete gateway -n istio-system grafana-gateway 3 | kubectl delete gateway -n istio-system jaeger-gateway 4 | kubectl delete gateway -n istio-system servicegraph-gateway 5 | kubectl delete gateway -n istio-system zipkin-gateway 6 | 7 | kubectl delete destinationrule -n istio-system grafana 8 | kubectl delete destinationrule -n istio-system jaeger 9 | kubectl delete destinationrule -n istio-system servicegraph 10 | kubectl delete destinationrule -n istio-system zipkin 11 | 12 | kubectl delete virtualservice -n istio-system grafana 13 | kubectl delete virtualservice -n istio-system jaeger 14 | kubectl delete virtualservice -n istio-system servicegraph 15 | kubectl delete virtualservice -n istio-system zipkin 16 | 17 | kubectl delete gateway -n default topdog-gateway 18 | kubectl delete gateway -n default webnull-gateway 19 | kubectl delete gateway -n default hurl-gateway 20 | 21 | kubectl delete destinationrule -n default topdogbe 22 | kubectl delete destinationrule -n default topdogmt 23 | kubectl delete destinationrule -n default topdogui 24 | kubectl delete destinationrule -n default webnull 25 | kubectl delete destinationrule -n default hurl 26 | 27 | kubectl delete virtualservice -n 
default topdogbe 28 | kubectl delete virtualservice -n default topdogmt 29 | kubectl delete virtualservice -n default topdogui 30 | kubectl delete virtualservice -n default webnull 31 | kubectl delete virtualservice -n default hurl 32 | 33 | kubectl delete deployment -n default hurl 34 | kubectl delete deployment -n default webnull 35 | kubectl delete deployment -n default topdogbe-v1 36 | kubectl delete deployment -n default topdogbe-v2 37 | kubectl delete deployment -n default topdogbe-v3 38 | kubectl delete deployment -n default topdogmt-v1 39 | kubectl delete deployment -n default topdogui-v1 40 | 41 | kubectl delete service -n default hurl 42 | kubectl delete service -n default webnull 43 | kubectl delete service -n default topdogbe 44 | kubectl delete service -n default topdogmt 45 | kubectl delete service -n default topdogui 46 | -------------------------------------------------------------------------------- /docker/remove.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | kubectl delete gateway -n istio-system grafana-gateway 4 | kubectl delete gateway -n istio-system jaeger-gateway 5 | kubectl delete gateway -n istio-system servicegraph-gateway 6 | kubectl delete gateway -n istio-system zipkin-gateway 7 | 8 | kubectl delete destinationrule -n istio-system grafana 9 | kubectl delete destinationrule -n istio-system jaeger 10 | kubectl delete destinationrule -n istio-system servicegraph 11 | kubectl delete destinationrule -n istio-system zipkin 12 | 13 | kubectl delete virtualservice -n istio-system grafana 14 | kubectl delete virtualservice -n istio-system jaeger 15 | kubectl delete virtualservice -n istio-system servicegraph 16 | kubectl delete virtualservice -n istio-system zipkin 17 | 18 | kubectl delete gateway -n default topdog-gateway 19 | kubectl delete gateway -n default webnull-gateway 20 | kubectl delete gateway -n default hurl-gateway 21 | 22 | kubectl delete destinationrule -n default topdogbe 23 | kubectl delete destinationrule -n default topdogmt 24 | kubectl delete destinationrule -n default topdogui 25 | kubectl delete destinationrule -n default webnull 26 | kubectl delete destinationrule -n default hurl 27 | 28 | kubectl delete virtualservice -n default topdogbe 29 | kubectl delete virtualservice -n default topdogmt 30 | kubectl delete virtualservice -n default topdogui 31 | kubectl delete virtualservice -n default webnull 32 | kubectl delete virtualservice -n default hurl 33 | 34 | kubectl delete deployment -n default hurl 35 | kubectl delete deployment -n default webnull 36 | kubectl delete deployment -n default topdogbe-v1 37 | kubectl delete deployment -n default topdogbe-v2 38 | kubectl delete deployment -n default topdogbe-v3 39 | kubectl delete deployment -n default topdogmt-v1 40 | kubectl delete deployment -n default topdogui-v1 41 | 42 | kubectl delete service -n default hurl 43 | kubectl delete service -n default webnull 44 | kubectl delete service -n default topdogbe 45 | kubectl delete service -n default topdogmt 46 | kubectl delete service -n default topdogui 47 | -------------------------------------------------------------------------------- /docker/setup.ps1: -------------------------------------------------------------------------------- 1 | 2 | Write-Output "*** TEST KUBECTL ***" 3 | kubectl config view 4 | kubectl config use-context docker-desktop 5 | 6 | # Wait for Istio 7 | #echo "*** WAITING ON ISTIO ***" 8 | #wait_on_istio 9 | 10 | # ingress 11 | Write-Output "*** INSTALLING ISTIO 
ADDONS GATEWAYS ***" 12 | kubectl apply -f .\docker\istio\grafana-gateway.yaml 13 | kubectl apply -f .\docker\istio\jaeger-gateway.yaml 14 | kubectl apply -f .\docker\istio\servicegraph-gateway.yaml 15 | kubectl apply -f .\docker\istio\zipkin-gateway.yaml 16 | Start-Sleep 5 17 | 18 | # k8s 19 | Write-Output "*** SETTING KUBERNETES VARIABLES ***" 20 | Set-Variable -name KUBE_NAMESPACE -value default 21 | 22 | # pull images 23 | Write-Output "*** PULLING DOCKER IMAGES FOR LAB ***" 24 | docker pull ancientlore/topdog:v0.1.4 25 | docker pull ancientlore/hurl:v0.1.2 26 | docker pull ancientlore/webnull:v0.1.3 27 | 28 | Write-Output "*** INSTALLING DEMO CONTAINERS ***" 29 | istioctl kube-inject -f .\resiliency\hurl\kube\deployment.yaml > _hurl.yaml 30 | kubectl apply -f _hurl.yaml 31 | istioctl kube-inject -f .\resiliency\webnull\kube\deployment.yaml > _webnull.yaml 32 | kubectl apply -f _webnull.yaml 33 | istioctl kube-inject -f .\trafficshifting\topdog\kube\deployment-be-v1.yaml > _topdogbe1.yaml 34 | kubectl apply -f _topdogbe1.yaml 35 | istioctl kube-inject -f .\trafficshifting\topdog\kube\deployment-be-v2.yaml > _topdogbe2.yaml 36 | kubectl apply -f _topdogbe2.yaml 37 | istioctl kube-inject -f .\trafficshifting\topdog\kube\deployment-be-v3.yaml > _topdogbe3.yaml 38 | kubectl apply -f _topdogbe3.yaml 39 | istioctl kube-inject -f .\trafficshifting\topdog\kube\deployment-mt-v1.yaml > _topdogmt1.yaml 40 | kubectl apply -f _topdogmt1.yaml 41 | istioctl kube-inject -f .\trafficshifting\topdog\kube\deployment-ui-v1.yaml > _topdogui1.yaml 42 | kubectl apply -f _topdogui1.yaml 43 | Remove-Item _*.yaml 44 | 45 | Write-Output "*** INSTALLING DEMO GATEWAYS ***" 46 | kubectl apply -f .\docker\istio\demo-gateway.yaml 47 | 48 | Write-Output "*** ADDING ISTIO VIRTUAL SERVICES ***" 49 | kubectl apply -f .\trafficshifting\istio\services-all-v1.yaml -n default 50 | kubectl apply create -f .\resiliency\istio\services.yaml -n default 51 | 52 | Write-Output 'echo "Run .\demo.sh to start the demo. 
You need the bash shell."' 53 | -------------------------------------------------------------------------------- /docker/setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | wait_on_istio () 4 | { 5 | # wait for istio 6 | until [ `kubectl get pods -n istio-system | egrep -v '(Running|Completed)' | wc -l` -eq 1 ] 7 | do 8 | echo "Waiting for Istio" 9 | sleep 5 10 | done 11 | } 12 | 13 | echo "*** TEST KUBECTL ***" 14 | kubectl config view 15 | kubectl config use-context docker-desktop 16 | 17 | # Wait for Istio 18 | echo "*** WAITING ON ISTIO ***" 19 | wait_on_istio 20 | 21 | # ingress 22 | echo "*** INSTALLING ISTIO ADDONS GATEWAYS ***" 23 | kubectl apply -f ./docker/istio/grafana-gateway.yaml 24 | kubectl apply -f ./docker/istio/jaeger-gateway.yaml 25 | kubectl apply -f ./docker/istio/servicegraph-gateway.yaml 26 | kubectl apply -f ./docker/istio/zipkin-gateway.yaml 27 | sleep 5 28 | 29 | # k8s 30 | echo "*** SETTING KUBERNETES VARIABLES ***" 31 | export KUBE_NAMESPACE=default 32 | 33 | # pull images 34 | echo "*** PULLING DOCKER IMAGES FOR LAB ***" 35 | docker pull ancientlore/topdog:v0.1.4 36 | docker pull ancientlore/hurl:v0.1.2 37 | docker pull ancientlore/webnull:v0.1.3 38 | 39 | echo "*** INSTALLING DEMO CONTAINERS ***" 40 | kubectl apply -f <(istioctl kube-inject -f ./resiliency/hurl/kube/deployment.yaml) 41 | kubectl apply -f <(istioctl kube-inject -f ./resiliency/webnull/kube/deployment.yaml) 42 | kubectl apply -f <(istioctl kube-inject -f ./trafficshifting/topdog/kube/deployment-be-v1.yaml) 43 | kubectl apply -f <(istioctl kube-inject -f ./trafficshifting/topdog/kube/deployment-be-v2.yaml) 44 | kubectl apply -f <(istioctl kube-inject -f ./trafficshifting/topdog/kube/deployment-be-v3.yaml) 45 | kubectl apply -f <(istioctl kube-inject -f ./trafficshifting/topdog/kube/deployment-mt-v1.yaml) 46 | kubectl apply -f <(istioctl kube-inject -f ./trafficshifting/topdog/kube/deployment-ui-v1.yaml) 47 | 48 | echo "*** INSTALLING DEMO GATEWAYS ***" 49 | kubectl apply -f ./docker/istio/demo-gateway.yaml 50 | 51 | echo "*** ADDING ISTIO VIRTUAL SERVICES ***" 52 | kubectl apply -f ./trafficshifting/istio/services-all-v1.yaml -n default 53 | kubectl apply -f ./resiliency/istio/services.yaml -n default 54 | 55 | echo 'echo "Run ./demo.sh to start the demo."' 56 | -------------------------------------------------------------------------------- /istio-talk.bmpr: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/istio-talk.bmpr -------------------------------------------------------------------------------- /istio-talk.slide: -------------------------------------------------------------------------------- 1 | Istio 2 | Introducing the Service Mesh 3 | 18:00 18 Apr 2018 4 | Tags: microservice, devops, kubernetes, istio, service-mesh, proxy, envoy 5 | 6 | Michael Lore 7 | Principal Architect, SAP Concur 8 | @ancientlore 9 | 10 | 11 | 12 | * About Me 13 | 14 | - Central Architecture Team at SAP Concur 15 | - Designed the booking engine powering Concur Travel 16 | - Playing with Go since 2010 (see [[https://github.com/ancientlore/go-avltree][go-avltree]]) 17 | - Interested in concurrent applications in the travel space 18 | - ...which explains my interest in Go, microservices, and service mesh 19 | 20 | .image media/gopher.png 21 | 22 | [[https://twitter.com/ancientlore][@ancientlore]] 23 | .link 
https://github.com/ancientlore/istio-talk 24 | 25 | .background media/plane_bg.png 26 | 27 | 28 | 29 | * Monolith to Microservices 30 | 31 | .image media/deathstar.jpg 32 | 33 | .caption Who isn't doing this? 34 | 35 | .background media/plane_bg.png 36 | 37 | 38 | 39 | * Simple, right? 40 | 41 | .image media/istio-talk-1.png 42 | 43 | .background media/plane_bg.png 44 | 45 | 46 | 47 | * Not so fast... 48 | 49 | .image media/istio-talk-2.png 50 | 51 | .background media/plane_bg.png 52 | 53 | 54 | 55 | * A polyglot world 56 | 57 | .image media/istio-talk-3.png 58 | 59 | .caption The microservices aren't so micro after all. 60 | 61 | .background media/plane_bg.png 62 | 63 | 64 | 65 | * Enter Istio: A Service Mesh 66 | 67 | .image media/istio-talk-4.png 68 | 69 | .background media/plane_bg.png 70 | 71 | 72 | 73 | * What is Istio? 74 | 75 | .link https://istio.io/ 76 | 77 | A complete framework for *connecting*, *securing*, *managing*, and *monitoring* services 78 | 79 | Secure and monitor traffic for microservices *and* legacy services 80 | 81 | An *open*platform* with key contributions from Google, IBM, Lyft, and others 82 | 83 | *Multi-environment*and*multi-platform*, but Kubernetes first 84 | 85 | .image media/istio.png 86 | 87 | .background media/plane_bg.png 88 | 89 | 90 | 91 | * Why are we interested in Istio? 92 | 93 | - [[https://www.eugdpr.org/][General Data Protection Regulation (GDPR)]] 94 | - Zero-trust, automated mutual-TLS 95 | - Deployment and routing strategies (canary) 96 | - Circuit breakers, fault injection 97 | - Observability and tracing 98 | - Mesh expansion to legacy services 99 | - Consistency within a polyglot environment 100 | 101 | .image media/sap.png 32 _ 102 | .caption SAP is a [[https://www.cncf.io/announcement/2017/10/11/cloud-native-computing-foundation-welcomes-sap-platinum-member/][Platinum Member]] of the CNCF. 103 | 104 | .background media/plane_bg.png 105 | 106 | 107 | 108 | * Powered by Envoy 109 | 110 | .link https://www.envoyproxy.io/ 111 | A C++ based L4/L7 proxy created and battle-tested at Lyft 112 | 113 | - Dynamic service discovery 114 | - Load balancing and traffic splitting 115 | - TLS termination 116 | - HTTP/2 first, gRPC proxying 117 | - Circuit breakers and timeout handling 118 | - Health checks, fault and delay injection 119 | 120 | .image media/envoy.png 121 | 122 | .background media/plane_bg.png 123 | 124 | 125 | 126 | * Istio Architecture 127 | 128 | .image media/istio-talk-5.png 129 | 130 | .background media/plane_bg.png 131 | 132 | 133 | 134 | * Istio Resources 135 | 136 | Istio is configured via Kubernetes resources: 137 | 138 | - Virtual Services 139 | - Destination Rules 140 | - Service Entries 141 | - Gateways 142 | 143 | They are visible via `kubectl`: 144 | 145 | kubectl get virtualservices 146 | 147 | You can also use `istioctl`, which provides better validation. 148 | 149 | istioctl get destinationrules 150 | 151 | .background media/plane_bg.png 152 | 153 | 154 | 155 | * Dashboard 156 | 157 | .image media/grafana.png _ 900 158 | 159 | .background media/plane_bg.png 160 | 161 | 162 | 163 | * Service Graph 164 | 165 | .image media/servicegraph.png _ 900 166 | 167 | .background media/plane_bg.png 168 | 169 | 170 | 171 | * Tracing 172 | 173 | .image media/zipkin.png _ 900 174 | 175 | .background media/plane_bg.png 176 | 177 | 178 | 179 | * Things you still need to do 180 | 181 | - Propagate headers between service calls. 
182 | 183 | headersToCopy := []string{"x-request-id", "x-b3-traceid", "x-b3-spanid", "x-b3-parentspanid", 184 | "x-b3-sampled", "x-b3-flags", "x-ot-span-context"} 185 | for _, h := range headersToCopy { 186 | val := fromRequest.Header.Get(h) 187 | if val != "" { 188 | toRequest.Header.Set(h, val) 189 | } 190 | } 191 | 192 | - Handle *504*Gateway*Timeout* from timeouts. 193 | - Handle *503*Service*Unavailable* from circuit breakers and _tune_them_. 194 | - Inject the proxy into your pods using `istioctl`kube-inject` or run the automatic injector. 195 | 196 | .caption Not totally free 197 | 198 | .background media/plane_bg.png 199 | 200 | 201 | 202 | * Demo - Traffic Shifting 203 | 204 | .image trafficshifting/trafficshifting.png 205 | 206 | .background media/plane_bg.png 207 | 208 | 209 | 210 | * Demo - Resiliency 211 | 212 | .image resiliency/resiliency.png 213 | 214 | .background media/plane_bg.png 215 | 216 | 217 | 218 | * Issues we discovered 219 | 220 | .image media/bug.png _ 200 221 | 222 | - Istio is progressing rapidly, but is still alpha. 223 | - We encountered high CPU issues on larger clusters with many changes going on. 224 | - Proxies are not updated instantly - you have to be careful to avoid some communication issues while rules propagate to the Envoy instances. 225 | - We have not tried mesh expansion - it looks like it needs to be fleshed out further. 226 | 227 | But with the release of 1.0, things are getting stable quickly. 228 | 229 | .background media/plane_bg.png 230 | 231 | 232 | 233 | * Other good Istio Presentations 234 | 235 | - Ray Tsang (@saturnism), [[https://speakerdeck.com/saturnism/making-microservices-micro-with-istio-service-mesh][Making Microservices Micro]] 236 | - Matt Klein (@mattklein123), [[https://youtu.be/IeJDjq-COjk][The Mechanics of Deploying Envoy at Lyft]] 237 | 238 | .background media/plane_bg.png 239 | 240 | 241 | 242 | * Resources 243 | 244 | [[https://istio.io/][istio.io]] 245 | 246 | [[https://www.envoyproxy.io/][www.envoyproxy.io]] 247 | 248 | Quick start on GKE: 249 | [[https://istio.io/docs/setup/kubernetes/quick-start-gke-dm.html][istio.io/docs/setup/kubernetes/quick-start-gke-dm.html]] 250 | 251 | This presentation, Kubernetes deployments, Istio code: 252 | [[https://github.com/ancientlore/istio-talk][github.com/ancientlore/istio-talk]] 253 | 254 | Demo code: 255 | [[https://github.com/ancientlore/hurl][github.com/ancientlore/hurl]] 256 | [[https://github.com/ancientlore/webnull][github.com/ancientlore/webnull]] 257 | [[https://github.com/ancientlore/topdog][github.com/ancientlore/topdog]] 258 | 259 | Utility for showing the demos: 260 | [[https://github.com/ancientlore/demon][github.com/ancientlore/demon]] 261 | 262 | .background media/plane_bg.png 263 | -------------------------------------------------------------------------------- /media/CNCF.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/CNCF.png -------------------------------------------------------------------------------- /media/bug.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/bug.png -------------------------------------------------------------------------------- /media/deathstar.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/deathstar.jpg -------------------------------------------------------------------------------- /media/envoy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/envoy.png -------------------------------------------------------------------------------- /media/gopher.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/gopher.png -------------------------------------------------------------------------------- /media/grafana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/grafana.png -------------------------------------------------------------------------------- /media/istio-talk-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/istio-talk-1.png -------------------------------------------------------------------------------- /media/istio-talk-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/istio-talk-2.png -------------------------------------------------------------------------------- /media/istio-talk-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/istio-talk-3.png -------------------------------------------------------------------------------- /media/istio-talk-4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/istio-talk-4.png -------------------------------------------------------------------------------- /media/istio-talk-5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/istio-talk-5.png -------------------------------------------------------------------------------- /media/istio.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/istio.png -------------------------------------------------------------------------------- /media/plane_bg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/plane_bg.png -------------------------------------------------------------------------------- /media/resiliency_demo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/resiliency_demo.png -------------------------------------------------------------------------------- /media/sap.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/sap.png -------------------------------------------------------------------------------- /media/servicegraph.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/servicegraph.png -------------------------------------------------------------------------------- /media/trafficshifting_demo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/trafficshifting_demo.png -------------------------------------------------------------------------------- /media/zipkin.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/media/zipkin.png -------------------------------------------------------------------------------- /resiliency/README.md: -------------------------------------------------------------------------------- 1 | # Resiliency Demo using Istio 2 | 3 | This demo shows a mock service, [webnull], that just throws away requests. Another mock service, [hurl], is really a tool to send parallel HTTP requests to [webnull] to simulate load. We will use [Istio] to test circuit breakers, inject faults, and inject delays. These tools can be used to test how upstream services respond to problems. 4 | 5 | ![service diagram](resiliency.png) 6 | 7 | What is great about [Istio] is that these functions are part of the _infrastructure_ - no special coding is needed in [hurl] or [webnull] to take advantage of these features. [Istio] also has a great dashboard, a service graph, and a trace analyzer. 8 | 9 | > Note: These instructions assume a `bash` shell. On Windows, you can use `git-bash` which should be installed with [git](https://git-scm.com/). 10 | 11 | ## Requirements 12 | 13 | For this demo you need: 14 | 15 | * A Kubernetes cluster (or [Minikube], or [Docker] for Desktop) with [Istio] installed. 16 | * The [kubectl command](https://kubernetes.io/docs/tasks/tools/install-kubectl/) should be installed. 17 | * Optionally, the [istioctl command](https://istio.io/docs/reference/commands/istioctl) can be used. 18 | 19 | ## Setup 20 | 21 | Export the following variables which are used by the commands in this demo: 22 | 23 | * `KUBE_NAMESPACE` - The Kubernetes namespace to deploy to (like `default`, but use yours). 24 | 25 | > Note: Export a variable using, for example, `export KUBE_NAMESPACE=my-kubernetes-namespace`, or assign it when calling the script like `KUBE_NAMESPACE=my-kubernetes-namespace ./script.sh`. 26 | 27 | Deploy [webnull] according to its [instructions](webnull/README.md). Then deploy [hurl] according to its [instructions](hurl/README.md). Be sure to install the version that injects the [Istio] sidecar. 28 | 29 | > Note: The [hurl] user interface will not be available until the `hurl` command is running inside the container. 30 | 31 | Make sure the containers are running: 32 | 33 | $ kubectl get pods 34 | NAME READY STATUS RESTARTS AGE 35 | hurl-4211406881-d8rz5 2/2 Running 0 1m 36 | webnull-3334364612-nlwsr 2/2 Running 0 1m 37 | 38 | Set up the virtual services and destination rules for each service. 
Note that the destination rule for `webnull` has a circuit breaker. 39 | 40 | $ cd istio 41 | $ kubectl apply -f services.yaml -n $KUBE_NAMESPACE 42 | 43 | > Note that you can generally use `kubectl` instead of `istioctl`, but `istioctl` provides additional client-side validation. 44 | 45 | You can check the rules using `istioctl get virtualservices` or `kubectl get virtualservices` (and likewise for `destinationrules`). You can also fetch an individual rule using: 46 | 47 | $ kubectl get virtualservice webnull -n $KUBE_NAMESPACE -o yaml 48 | $ kubectl get destinationrule webnull -n $KUBE_NAMESPACE -o yaml 49 | $ kubectl get virtualservice hurl -n $KUBE_NAMESPACE -o yaml 50 | $ kubectl get destinationrule hurl -n $KUBE_NAMESPACE -o yaml 51 | 52 | You're now ready to proceed with the demo. 53 | 54 | ## Starting the Demo 55 | 56 | Start [hurl] with a basic request set. 57 | 58 | $ kubectl exec -it $(kubectl get pod -l app=hurl -o name | sed 's/^pod\///') /bin/sh 59 | # hurl -conns 10 -loop 2000000 http://webnull:8080/xml 60 | 61 | > Stay logged into the [hurl] container so that you can exercise commands later. In this walk-through, commands shown with `#` are executed in the hurl container, and commands shown with `$` are executed in your normal shell. 62 | 63 | Look at the [webnull] dashboard by running the command below (in a new shell) and browsing to http://localhost:8081/status/ - it shows incoming request volume and bytes received if it was a POST or PUT. This small utility can be used to check what sort of throughput you could achieve, assuming your service had no actual work to do. 64 | 65 | $ kubectl port-forward $(kubectl get pod -l app=webnull -o name | sed 's/^pod\///') 8081:8080 66 | 67 | Now look at the [hurl] dashboard by running the command below (in a new shell) and browsing to http://localhost:8082/. [hurl] was created to work similarly to [curl], except that it sends requests in parallel to load up some other service. The dashboard shows graphs of response codes, bytes received, and request round-trip time. 68 | 69 | $ kubectl port-forward $(kubectl get pod -l app=hurl -o name | sed 's/^pod\///') 8082:8080 70 | 71 | We will be using [hurl] to send load to [webnull], and then checking what happens when we exercise [Istio] features. 72 | 73 | > Note: Run the [Istio] commands from the `istio` folder. 74 | 75 | We have already set up [virtual services and destination rules](istio/services.yaml) in [Istio]: 76 | 77 | $ kubectl get virtualservice webnull -n $KUBE_NAMESPACE -o yaml 78 | $ kubectl get destinationrule webnull -n $KUBE_NAMESPACE -o yaml 79 | $ kubectl get virtualservice hurl -n $KUBE_NAMESPACE -o yaml 80 | $ kubectl get destinationrule hurl -n $KUBE_NAMESPACE -o yaml 81 | 82 | Note that we have a circuit breaker set up for `webnull`. 83 | 84 | At this point, the service is "operating normally". 85 | 86 | > See region A on the diagram below 87 | 88 | ## Circuit Breaker 89 | 90 | To trigger the circuit breaker, restart [hurl] with more connections: 91 | 92 | # hurl -conns 100 -loop 2000000 http://webnull:8080/xml 93 | 94 | Note the large number of HTTP 503 responses from the circuit breaker. Technically, this is triggering connection limits. Part of the circuit breaker's configuration responds to errors coming from the service. 95 | 96 | Now restart [hurl] with a normal number of connections. 97 | 98 | # hurl -conns 10 -loop 2000000 http://webnull:8080/xml 99 | 100 | Things return to normal, although [Istio] may still have the circuit breaker triggered. 
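For reference, the limits that drive this behavior come from the `webnull` destination rule in [services.yaml](istio/services.yaml); the relevant excerpt is:

    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: 50
        http:
          http2MaxRequests: 100
          maxRequestsPerConnection: 2
          http1MaxPendingRequests: 50
      outlierDetection:
        consecutiveErrors: 1
        interval: 1s
        baseEjectionTime: 10s
        maxEjectionPercent: 100

With `maxConnections: 50`, 100 concurrent connections overflow the pool (hence the 503s), while 10 stay well under the limit.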
103 | 104 | > See region C on the diagram below 105 | 106 | ## Fault Injection 107 | 108 | Next, we will inject [faults](istio/inject-abort.yaml) into the system. 109 | 110 | $ kubectl apply -f inject-abort.yaml -n $KUBE_NAMESPACE 111 | 112 | Note the HTTP 400 responses from the fault. This can be used to test how upstream services respond to failures. 113 | 114 | > See region D on the diagram below 115 | 116 | Now remove the fault: 117 | 118 | $ kubectl apply -f services.yaml -n $KUBE_NAMESPACE 119 | 120 | Note how the service returns to normal. 121 | 122 | > See region E on the diagram below 123 | 124 | ## Delay Injection 125 | 126 | We will now inject a [small delay](istio/inject-delay.yaml) into some percentage of the requests. 127 | 128 | $ kubectl apply -f inject-delay.yaml -n $KUBE_NAMESPACE 129 | 130 | Note that even with this small delay, the service doesn't process as many transactions. 131 | 132 | > See region F on the diagram below 133 | 134 | Remove the small delay: 135 | 136 | $ kubectl apply -f services.yaml -n $KUBE_NAMESPACE 137 | 138 | Note how the service returns to normal. 139 | 140 | > See region G on the diagram below 141 | 142 | ## Big Delay Injection 143 | 144 | Now let's inject a [larger delay](istio/inject-big-delay.yaml) on more requests, simulating a potential database performance issue. 145 | 146 | $ kubectl apply -f inject-big-delay.yaml -n $KUBE_NAMESPACE 147 | 148 | Note how the service practically falls over. 149 | 150 | > See region H on the diagram below 151 | 152 | But to be realistic, let's add far more connections to simulate requests from clients piling up. 153 | 154 | # hurl -conns 100 -loop 2000000 http://webnull:8080/xml 155 | 156 | Note that the service improves, but is still hurting. In some cases it might also trigger the circuit breaker. 157 | 158 | > See region I on the diagram below 159 | 160 | "Fix the database" and remove the delay. 161 | 162 | $ kubectl apply -f services.yaml -n $KUBE_NAMESPACE 163 | 164 | Now the circuit breaker is triggering - why? The reason is that we still have many requests in flight. 165 | 166 | > See region J on the diagram below 167 | 168 | In reality the request rate would come back under control once the service speeds up, so let's adjust [hurl] for that. 169 | 170 | # hurl -conns 10 -loop 2000000 http://webnull:8080/xml 171 | 172 | The circuit breaker has a memory and still may trigger, but things are getting back to normal. 173 | 174 | > See region K on the diagram below 175 | 176 | ## Request Timeouts 177 | 178 | To test request timeouts, first try a long-running page without a timeout in place: 179 | 180 | # hurl http://webnull:8080/delay/5000 181 | 182 | Next, set up the [request timeout](istio/request-timeout.yaml) route rule. 183 | 184 | $ kubectl apply -f request-timeout.yaml -n $KUBE_NAMESPACE 185 | 186 | Wait a moment for the Istio proxy to update, then try the request again: 187 | 188 | # hurl http://webnull:8080/delay/5000 189 | 2018/01/25 21:47:47 Failed to post to http://webnull:8080/delay/5000, status 504 Gateway Timeout 190 | 191 | Clean up the rule: 192 | 193 | $ kubectl apply -f services.yaml -n $KUBE_NAMESPACE 194 | 195 | ## Diagram 196 | 197 | ![diagram](istio.png) 198 | 199 | ## Closing Thoughts 200 | 201 | * [Istio] takes several problems that used to be (inconsistently implemented) coding exercises and makes them a consistent part of the infrastructure. 202 | * Testing upstream service behavior is easily accomplished by injecting faults and delays. 
203 | * Circuit breakers can be used to protect upstream (and downstream) services from overload. 204 | * HTTP timeouts are easily controlled, even when the service implementations don't set them explicitly. 205 | * [Istio] can be extended to services outside of Kubernetes. 206 | * [Istio] includes much more, including mutual TLS between services, throttling, and monitoring. 207 | 208 | [webnull]: https://github.com/ancientlore/webnull 209 | [hurl]: https://github.com/ancientlore/hurl 210 | [Istio]: https://istio.io/ 211 | [curl]: https://curl.haxx.se/ 212 | [Docker]: https://www.docker.com/ [Minikube]: https://github.com/kubernetes/minikube 213 | -------------------------------------------------------------------------------- /resiliency/demo.json: -------------------------------------------------------------------------------- 1 | { 2 | "title": "Resiliency Demo", 3 | "stepsPosition": 2, 4 | "sites": { 5 | "webnull": { 6 | "title": "/web/null", 7 | "url": "$WEBNULL", 8 | "position": 4 9 | }, 10 | "hurl": { 11 | "title": "hURL", 12 | "url": "$HURL", 13 | "position": 0 14 | } 15 | }, 16 | "processes": { 17 | "bash": { 18 | "title": "bash", 19 | "command": ["bash"], 20 | "dir": "./istio", 21 | "position": 1, 22 | "exitInput": "exit" 23 | }, 24 | "kubeexec": { 25 | "title": "kubectl exec", 26 | "command": ["kubectl", "exec", "-i", "$POD", "-c", "hurl", "/bin/sh"], 27 | "dir": "", 28 | "position": 3, 29 | "exitInput": "exit" 30 | } 31 | }, 32 | "steps": [ 33 | { 34 | "title": "Check the running containers", 35 | "desc": "The containers were deployed earlier.", 36 | "id": "bash", 37 | "input": "kubectl get pods | egrep '(^NAME|webnull|hurl)'" 38 | }, 39 | { 40 | "title": "Check the virtual services", 41 | "desc": "These virtual services were created earlier.", 42 | "id": "bash", 43 | "input": "kubectl get virtualservices | egrep '(^NAME|webnull|hurl)'" 44 | }, 45 | { 46 | "title": "Check the destination rules", 47 | "desc": "These rules were created earlier.", 48 | "id": "bash", 49 | "input": "kubectl get destinationrules | egrep '(^NAME|webnull|hurl)'" 50 | }, 51 | { 52 | "title": "Create some initial load", 53 | "desc": "Using 10 connections, hurl sends load to webnull.", 54 | "id": "kubeexec", 55 | "input": "hurl -conns 10 -loop 2000000 http://webnull:8080/xml 2> /dev/null & pid=$!" 56 | }, 57 | { 58 | "title": "Test circuit breaker", 59 | "desc": "By bumping the connections to 100, we'll trigger the circuit breaker.", 60 | "id": "kubeexec", 61 | "input": "kill -15 $pid ; sleep 1 ; hurl -conns 100 -loop 2000000 http://webnull:8080/xml 2> /dev/null & pid=$!" 62 | }, 63 | { 64 | "title": "Look at the circuit breaker rule", 65 | "desc": "Let's take a look at the circuit breaker destination rule.", 66 | "id": "bash", 67 | "input": "kubectl get destinationrule webnull -n $KUBE_NAMESPACE -o yaml" 68 | }, 69 | { 70 | "title": "Return to normal load", 71 | "desc": "Restore the connections to 10.", 72 | "id": "kubeexec", 73 | "input": "kill -15 $pid ; sleep 1 ; hurl -conns 10 -loop 2000000 http://webnull:8080/xml 2> /dev/null & pid=$!" 
74 | }, 75 | { 76 | "title": "Test fault injection", 77 | "desc": "Inject a 400 error in 1% of the requests.", 78 | "id": "bash", 79 | "input": "kubectl apply -f inject-abort.yaml -n $KUBE_NAMESPACE" 80 | }, 81 | { 82 | "title": "Look at the fault injection rule", 83 | "desc": "Let's take a look at the fault injection rule.", 84 | "id": "bash", 85 | "input": "kubectl get virtualservice webnull -n $KUBE_NAMESPACE -o yaml" 86 | }, 87 | { 88 | "title": "Remove the fault", 89 | "desc": "Remove the rule that injects faults.", 90 | "id": "bash", 91 | "input": "kubectl apply -f services.yaml -n $KUBE_NAMESPACE" 92 | }, 93 | { 94 | "title": "Test delay injection", 95 | "desc": "Now create a rule that injects a 1s delay in 5% of the requests.", 96 | "id": "bash", 97 | "input": "kubectl apply -f inject-delay.yaml -n $KUBE_NAMESPACE" 98 | }, 99 | { 100 | "title": "Look at the delay injection rule", 101 | "desc": "Let's take a look at the delay injection rule.", 102 | "id": "bash", 103 | "input": "kubectl get virtualservice webnull -n $KUBE_NAMESPACE -o yaml" 104 | }, 105 | { 106 | "title": "Remove the delay", 107 | "desc": "Remove the rule that injects the delay.", 108 | "id": "bash", 109 | "input": "kubectl apply -f services.yaml -n $KUBE_NAMESPACE" 110 | }, 111 | { 112 | "title": "Inject a large delay", 113 | "desc": "Create a rule that injects a 4s delay in 25% of the requests.", 114 | "id": "bash", 115 | "input": "kubectl apply -f inject-big-delay.yaml -n $KUBE_NAMESPACE" 116 | }, 117 | { 118 | "title": "Increase load", 119 | "desc": "To be realistic, requests would back up, increasing the connections.", 120 | "id": "kubeexec", 121 | "input": "kill -15 $pid ; sleep 1 ; hurl -conns 100 -loop 2000000 http://webnull:8080/xml 2> /dev/null & pid=$!" 122 | }, 123 | { 124 | "title": "Fix the delay", 125 | "desc": "Let's \"fix\" the delay by removing the route rule.", 126 | "id": "bash", 127 | "input": "kubectl apply -f services.yaml -n $KUBE_NAMESPACE" 128 | }, 129 | { 130 | "title": "Connections should return to normal", 131 | "desc": "Though a bit artificial, we'll reduce the connections back to normal.", 132 | "id": "kubeexec", 133 | "input": "kill -15 $pid ; sleep 1 ; hurl -conns 10 -loop 2000000 http://webnull:8080/xml 2> /dev/null & pid=$!" 
134 | }, 135 | { 136 | "title": "Try a long request", 137 | "desc": "We're going to check on request timeouts, so let's try a long request.", 138 | "id": "kubeexec", 139 | "input": "kill -15 $pid ; sleep 1 ; hurl http://webnull:8080/delay/5000" 140 | }, 141 | { 142 | "title": "Set up request timeout rule", 143 | "desc": "Configure a request timeout rule.", 144 | "id": "bash", 145 | "input": "kubectl apply -f request-timeout.yaml -n $KUBE_NAMESPACE" 146 | }, 147 | { 148 | "title": "Look at the request timeout rule", 149 | "desc": "Let's take a look at the request timeout route rule.", 150 | "id": "bash", 151 | "input": "kubectl get virtualservice webnull -n $KUBE_NAMESPACE -o yaml" 152 | }, 153 | { 154 | "title": "Try the long request again", 155 | "desc": "This time it should time out after 3s.", 156 | "id": "kubeexec", 157 | "input": "hurl http://webnull:8080/delay/5000" 158 | }, 159 | { 160 | "title": "Clean up the timeout rule", 161 | "desc": "Clean up after ourselves.", 162 | "id": "bash", 163 | "input": "kubectl apply -f services.yaml -n $KUBE_NAMESPACE" 164 | } 165 | ] 166 | } 167 | -------------------------------------------------------------------------------- /resiliency/hurl/README.md: -------------------------------------------------------------------------------- 1 | # hurl 2 | 3 | "[hurl] in a container." 4 | 5 | [hurl] is an open-source utility that works a lot like [curl], except that it is designed to issue many requests concurrently. As such, it's also ideal for easily generating load on servers. It includes a graph that shows information about the requests in flight. 6 | 7 | ![graph](hurl.png) 8 | 9 | > Note: These instructions assume a `bash` shell. On Windows, you can use `git-bash` which should be installed with [git](https://git-scm.com/). 10 | 11 | ## Usage 12 | 13 | To use the [hurl] container you need to `exec` into it using `kubectl`. 14 | 15 | $ kubectl exec -it $(kubectl get pod -l app=hurl -o name | sed 's/^pod\///') /bin/sh 16 | 17 | You then use [hurl] with the appropriate command-line options. 18 | 19 | __ __ ______ __ 20 | / /_ / / / / __ \/ / 21 | / __ \/ / / / /_/ / / 22 | / / / / /_/ / _, _/ /___ 23 | /_/ /_/\____/_/ |_/_____/ 24 | 25 | A tool to fetch over HTTP, slanted towards load generation. 26 | 27 | Usage: 28 | hurl [options] url1 [url2 ... urlN] 29 | hurl [options] @urlFile 30 | 31 | Example: 32 | hurl -method POST -files "*.xml" -conns 10 http://localhost/svc/foo http://localhost/svc/bar 33 | hurl -method POST -files "*.xml" -conns 10 @urls.txt 34 | 35 | Options: 36 | -addr string 37 | HTTP service address for monitoring. (default ":8080") 38 | -conns int 39 | Number of concurrent HTTP connections. (default 2) 40 | -cpu int 41 | Number of CPUs to use. 42 | -cpuprofile string 43 | Write CPU profile to given file. 44 | -discard 45 | Discard received data. 46 | -files string 47 | Pattern of files to post, like *.xml. Comma-separate for multiple patterns. 48 | -hdrdelim string 49 | Delimiter for HTTP headers specified with -header. (default "|") 50 | -headers string 51 | HTTP headers, delimited by -hdrdelim. 52 | -help 53 | Show help. 54 | -loop int 55 | Number of times to loop and repeat. (default 1) 56 | -memprofile string 57 | Write memory profile to given file. 58 | -method string 59 | HTTP method. (default "GET") 60 | -nocompress 61 | Disable HTTP compression. 62 | -nokeepalive 63 | Disable HTTP keep-alives. 64 | -requestid string 65 | Name of header to send a random GUID. 66 | -timeout duration 67 | HTTP timeout. 
(default 10s) 68 | -version 69 | Show version. 70 | -wd string 71 | Set the working directory. 72 | 73 | All of the options can be set via environment variables prefixed with "HURL_" - for instance, 74 | HURL_TIMEOUT can be set to "30s" to increase the default timeout. 75 | 76 | Options can also be specified in a TOML configuration file named "hurl.config". The location 77 | of the file can be overridden with the HURL_CONFIG environment variable. 78 | 79 | In our tests, we'll typically use [hurl] to talk to the [webnull] container as follows: 80 | 81 | # hurl -conns 25 -loop 2000000 http://webnull:8080/xml 82 | 83 | > NOTE: Normally [hurl] does not discard responses. Inside the container, `HURL_DISCARD` is set to `true`, causing it to discard results by default. This makes the command-line options simpler, since we're only trying to induce load. 84 | 85 | ## Deployment 86 | 87 | To deploy the [hurl] container without [Istio] *or* if you have Istio automatic sidecar injection enabled, run: 88 | 89 | $ kubectl apply -f ./kube/deployment.yaml 90 | 91 | To deploy the [hurl] container with [Istio] when automatic sidecar injection is not enabled, run: 92 | 93 | $ kubectl apply -f <(istioctl kube-inject -f ./kube/deployment.yaml) 94 | 95 | You can check the UI by using a port-forward and then browsing to http://localhost:8082/. 96 | 97 | $ hp=`kubectl get pod -l app=hurl -o name | sed 's/^pod\///'` 98 | $ kubectl port-forward $hp 8082:8080 99 | 100 | > Note: The UI is only available while the `hurl` command is running. 101 | 102 | [hurl]: https://github.com/ancientlore/hurl 103 | [curl]: https://curl.haxx.se/ 104 | [webnull]: https://github.com/ancientlore/webnull 105 | [Istio]: https://istio.io/ 106 | -------------------------------------------------------------------------------- /resiliency/hurl/hurl.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/resiliency/hurl/hurl.png -------------------------------------------------------------------------------- /resiliency/hurl/kube/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: hurl 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 | app: hurl 11 | spec: 12 | containers: 13 | - env: 14 | - name: HURL_DISCARD 15 | value: "true" 16 | - name: HURL_CPU 17 | value: "1" 18 | image: ancientlore/hurl:v0.1.2 19 | imagePullPolicy: Always 20 | name: hurl 21 | ports: 22 | - containerPort: 8080 23 | --- 24 | apiVersion: v1 25 | kind: Service 26 | metadata: 27 | name: hurl 28 | labels: 29 | app: hurl 30 | spec: 31 | ports: 32 | - port: 8080 33 | name: http 34 | targetPort: 8080 35 | selector: 36 | app: hurl 37 | -------------------------------------------------------------------------------- /resiliency/istio.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/resiliency/istio.png -------------------------------------------------------------------------------- /resiliency/istio.pxm: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/resiliency/istio.pxm 
-------------------------------------------------------------------------------- /resiliency/istio/inject-abort.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: webnull 5 | spec: 6 | hosts: 7 | - webnull 8 | http: 9 | - route: 10 | - destination: 11 | host: webnull 12 | fault: 13 | abort: 14 | percent: 1 15 | httpStatus: 400 16 | -------------------------------------------------------------------------------- /resiliency/istio/inject-big-delay.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: webnull 5 | spec: 6 | hosts: 7 | - webnull 8 | http: 9 | - route: 10 | - destination: 11 | host: webnull 12 | fault: 13 | delay: 14 | percent: 25 15 | fixedDelay: 4s 16 | -------------------------------------------------------------------------------- /resiliency/istio/inject-delay.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: webnull 5 | spec: 6 | hosts: 7 | - webnull 8 | http: 9 | - route: 10 | - destination: 11 | host: webnull 12 | fault: 13 | delay: 14 | percent: 5 15 | fixedDelay: 1s 16 | -------------------------------------------------------------------------------- /resiliency/istio/request-timeout.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: webnull 5 | spec: 6 | hosts: 7 | - webnull 8 | http: 9 | - route: 10 | - destination: 11 | host: webnull 12 | timeout: 3s 13 | -------------------------------------------------------------------------------- /resiliency/istio/services.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: webnull 5 | spec: 6 | hosts: 7 | - webnull 8 | - webnull.127.0.0.1.xip.io 9 | - webnull.192.168.99.101.xip.io 10 | gateways: 11 | - webnull-gateway 12 | http: 13 | - route: 14 | - destination: 15 | host: webnull 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: webnull 21 | spec: 22 | host: webnull 23 | trafficPolicy: 24 | tls: 25 | mode: ISTIO_MUTUAL 26 | connectionPool: 27 | tcp: 28 | maxConnections: 50 29 | http: 30 | http2MaxRequests: 100 31 | maxRequestsPerConnection: 2 32 | http1MaxPendingRequests: 50 33 | outlierDetection: 34 | consecutiveErrors: 1 35 | interval: 1s 36 | baseEjectionTime: 10s 37 | maxEjectionPercent: 100 38 | --- 39 | apiVersion: networking.istio.io/v1alpha3 40 | kind: VirtualService 41 | metadata: 42 | name: hurl 43 | spec: 44 | hosts: 45 | - hurl 46 | - hurl.127.0.0.1.xip.io 47 | - hurl.192.168.99.101.xip.io 48 | gateways: 49 | - hurl-gateway 50 | http: 51 | - route: 52 | - destination: 53 | host: hurl 54 | --- 55 | apiVersion: networking.istio.io/v1alpha3 56 | kind: DestinationRule 57 | metadata: 58 | name: hurl 59 | spec: 60 | host: hurl 61 | trafficPolicy: 62 | tls: 63 | mode: ISTIO_MUTUAL 64 | -------------------------------------------------------------------------------- /resiliency/resiliency.bmpr: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/resiliency/resiliency.bmpr
--------------------------------------------------------------------------------
/resiliency/resiliency.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/resiliency/resiliency.png
--------------------------------------------------------------------------------
/resiliency/startDemo.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | ctx=`kubectl config current-context`
4 | echo kube context is $ctx
5 | ns=`kubectl config view -o jsonpath="{.contexts[?(@.name == \"$ctx\")].context.namespace}"`
6 | ns=${ns:-default}
7 | echo KUBE_NAMESPACE is $ns
8 | 
9 | w="http://localhost:8081/status/"
10 | h="http://localhost:8082/"
11 | 
12 | # start port-forward to webnull
13 | wp=`kubectl get pod -l app=webnull -o name | sed 's/^pod\///'`
14 | kubectl port-forward $wp 8081:8080 &
15 | wpid=$!
16 | echo WEBNULL pod is $wp
17 | 
18 | # start port-forward to hurl
19 | hp=`kubectl get pod -l app=hurl -o name | sed 's/^pod\///'`
20 | kubectl port-forward $hp 8082:8080 &
21 | hpid=$!
22 | echo HURL pod is $hp
23 | 
24 | KUBE_NAMESPACE=$ns WEBNULL=$w HURL=$h POD=$hp demon
25 | 
26 | kill -15 $hpid
27 | kill -15 $wpid
--------------------------------------------------------------------------------
/resiliency/webnull/README.md:
--------------------------------------------------------------------------------
1 | # webnull
2 | 
3 | This project deploys an instance of a [webnull] container for testing.
4 | 
5 | [webnull] is an open-source project that accepts requests and throws them away. It's like a simpler version of [httpbin], and it has a graph to view requests.
6 | 
7 | ![graph](webnull.png)
8 | 
9 | By using [webnull], you can test networks or connectivity.
10 | 
11 | > Note: These instructions assume a `bash` shell. On Windows, you can use `git-bash`, which should be installed with [git](https://git-scm.com/).
12 | 
13 | ## Endpoints
14 | 
15 | * `/status` shows the graph.
16 | * `/delay/NNN` delays the response by `NNN` milliseconds and then returns `200 OK`.
17 | * `/http/NNN` responds with an HTTP `NNN` code.
18 | * Anything else responds with `200 OK`.
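For example, with a port-forward to the pod on local port 8081 (see Deployment below), you can exercise the endpoints using `curl`:

    $ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8081/anything   # prints 200
    $ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8081/http/503   # prints 503
    $ time curl -s -o /dev/null http://localhost:8081/delay/250                 # takes ~250 ms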
19 | 
20 | ## Deployment
21 | 
22 | To deploy the [webnull] container without [Istio], *or* if you have Istio automatic sidecar injection enabled, run:
23 | 
24 |     $ kubectl apply -f ./kube/deployment.yaml
25 | 
26 | To deploy the [webnull] container with [Istio] when automatic sidecar injection is not enabled, run:
27 | 
28 |     $ kubectl apply -f <(istioctl kube-inject -f ./kube/deployment.yaml)
29 | 
30 | You can check the UI by using a port-forward and then browsing to http://localhost:8081/status.
31 | 
32 |     $ wp=`kubectl get pod -l app=webnull -o name | sed 's/^pods\///'`
33 |     $ kubectl port-forward $wp 8081:8080
34 | 
35 | [webnull]: https://github.com/ancientlore/webnull
36 | [httpbin]: http://httpbin.org/
37 | [Istio]: https://istio.io/
--------------------------------------------------------------------------------
/resiliency/webnull/kube/deployment.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 |   name: webnull
5 | spec:
6 |   replicas: 1
7 |   template:
8 |     metadata:
9 |       labels:
10 |         app: webnull
11 |     spec:
12 |       containers:
13 |       - env:
14 |         - name: WEBNULL_CPU
15 |           value: "1"
16 |         image: ancientlore/webnull:v0.1.3
17 |         imagePullPolicy: Always
18 |         name: webnull
19 |         ports:
20 |         - containerPort: 8080
21 | ---
22 | apiVersion: v1
23 | kind: Service
24 | metadata:
25 |   name: webnull
26 |   labels:
27 |     app: webnull
28 | spec:
29 |   ports:
30 |   - port: 8080
31 |     name: http
32 |     targetPort: 8080
33 |   selector:
34 |     app: webnull
--------------------------------------------------------------------------------
/resiliency/webnull/webnull.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/resiliency/webnull/webnull.png
--------------------------------------------------------------------------------
/restartPods.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | kubectl delete pod -n istio-system --all
4 | sleep 10
5 | kubectl delete pod -l app=webnull
6 | kubectl delete pod -l app=hurl
7 | kubectl delete pod -l app=topdogbe
8 | kubectl delete pod -l app=topdogmt
9 | kubectl delete pod -l app=topdogui
10 | 
--------------------------------------------------------------------------------
/trafficshifting/README.md:
--------------------------------------------------------------------------------
1 | # Traffic Shifting Demo using Istio
2 | 
3 | This project demonstrates how traffic can be shifted between different versions of a service. It also shows how to retry requests using [Istio].
4 | 
5 | What is great about [Istio] is that these functions are part of the _infrastructure_ - no special coding is needed to take advantage of these features. [Istio] also has a great dashboard, a service graph, and a trace analyzer.
6 | 
7 | The demo is based on a simple application called [topdog]. [topdog] has three tiers - a UI, a middle tier, and a backend tier. All three are served by the same [Docker] image, for simplicity.
8 | 
9 | ![topdog tiers](topdog/topdog.png)
10 | 
11 | > Note: These instructions assume a `bash` shell. On Windows, you can use `git-bash`, which should be installed with [git](https://git-scm.com/).
12 | 
13 | ## Requirements
14 | 
15 | For this demo you need:
16 | 
17 | * A Kubernetes cluster (or [Minikube], or [Docker] for Desktop) with [Istio] installed.
18 | * The [kubectl command](https://kubernetes.io/docs/tasks/tools/install-kubectl/) should be installed.
19 | * Optionally, the [istioctl command](https://istio.io/docs/reference/commands/istioctl) can be used.
20 | 
21 | ## Setup
22 | 
23 | Export the following variables, which are used by the commands in this demo:
24 | 
25 | * `KUBE_NAMESPACE` - The Kubernetes namespace to deploy to (like `default`, but use yours).
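If you aren't sure which namespace your current context uses, you can look it up the same way [startDemo.sh](startDemo.sh) does:

    $ ctx=$(kubectl config current-context)
    $ kubectl config view -o jsonpath="{.contexts[?(@.name == \"$ctx\")].context.namespace}"

An empty result means the context uses the `default` namespace.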
26 | 
27 | > Note: Export a variable using, for example, `export KUBE_NAMESPACE=my-kubernetes-namespace`, or assign it when calling the script like `KUBE_NAMESPACE=my-kubernetes-namespace ./script.sh *.json`.
28 | 
29 | Deploy [topdog] according to its [instructions](topdog/README.md). Be sure to install the version that injects the [Istio] sidecar.
30 | 
31 | Make sure the containers are running:
32 | 
33 |     $ kubectl get pods
34 |     NAME                           READY     STATUS    RESTARTS   AGE
35 |     topdogbe-v1-3067989556-267jm   2/2       Running   0          1m
36 |     topdogbe-v2-1117189381-pt85q   2/2       Running   0          1m
37 |     topdogbe-v3-1436324766-r3jxb   2/2       Running   0          1m
38 |     topdogmt-v1-3844409480-6w4dd   2/2       Running   0          1m
39 |     topdogui-v1-3220263237-8x5ht   2/2       Running   0          1m
40 | 
41 | > Note that there are three versions of the backend pod, one version of the midtier pod, and one version of the UI pod. Normally we wouldn't create three versions of the backend at the same time.
42 | 
43 | Set up the virtual services and destination rules for each service:
44 | 
45 |     $ cd istio
46 |     $ kubectl apply -f services-all-v1.yaml -n $KUBE_NAMESPACE
47 | 
48 | > Note that you can generally use `kubectl` instead of `istioctl`, but `istioctl` provides additional client-side validation.
49 | 
50 | You can check the virtual services using `istioctl get virtualservices` or `kubectl get virtualservices`. You can also fetch an individual service using:
51 | 
52 |     $ kubectl get virtualservice topdogui -n $KUBE_NAMESPACE -o yaml
53 |     $ kubectl get virtualservice topdogmt -n $KUBE_NAMESPACE -o yaml
54 |     $ kubectl get virtualservice topdogbe -n $KUBE_NAMESPACE -o yaml
55 | 
56 | You can check the destination rules using `istioctl get destinationrules` or `kubectl get destinationrules`. You can also fetch an individual destination rule using:
57 | 
58 |     $ kubectl get destinationrule topdogui -n $KUBE_NAMESPACE -o yaml
59 |     $ kubectl get destinationrule topdogmt -n $KUBE_NAMESPACE -o yaml
60 |     $ kubectl get destinationrule topdogbe -n $KUBE_NAMESPACE -o yaml
61 | 
62 | You're now ready to proceed with the demo.
63 | 
64 | ![service diagram](trafficshifting.png)
65 | 
66 | ## Starting the Demo
67 | 
68 | The [topdog] service has already been deployed, but if you need to deploy it see the [instructions](topdog/README.md).
69 | 
70 | View the user interface by running the following command (in a new shell) and then browsing to http://localhost:5000/.
71 | 
72 |     $ kubectl port-forward $(kubectl get pod -l app=topdogui -o name | sed 's/^pods\///') 5000
73 | 
74 | > Note: Run the [Istio] commands from the [istio](istio) subfolder.
75 | 
76 | We start out with a [default set of virtual services and destination rules](istio/services-all-v1.yaml) that we created earlier:
77 | 
78 |     $ kubectl apply -f services-all-v1.yaml -n $KUBE_NAMESPACE
79 | 
80 | These rules pass all traffic to the `v1` version of the services.
81 | 
82 | ## First Version
83 | 
84 | The first version works, but the results seem skewed to benefit the original developer.
85 | 
86 | > Never let a candidate create the voting machine.
87 | 
88 | Someone on the team decided to fix this. Let's test their version, routing traffic 50/50 between `v1` and `v2`.
89 | 
90 |     $ kubectl apply -f service-be-v1-v2.yaml -n $KUBE_NAMESPACE
91 | 
92 | This [virtual service](istio/service-be-v1-v2.yaml) defines weighted routes so that arriving traffic is split between `v1` and `v2`.
93 | 
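One way to watch the split take effect is to poll the UI in a loop (a sketch — it assumes the port-forward above is still running and that `/query` reports which backend version answered):

    $ for i in $(seq 1 20); do curl -s http://localhost:5000/query; echo; done

Roughly half of the responses should now be served by `v2`.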
94 | ## Second Version
95 | 
96 | The second version fixed the original bug, but it introduced occasional failures. We noticed that retrying the request works. So, we decided to add retries using [Istio] rather than back out the new service. We can't let the original developer be the top dog.
97 | 
98 |     $ kubectl apply -f service-mt-retry.yaml -n $KUBE_NAMESPACE
99 | 
100 | The [retry logic](istio/service-mt-retry.yaml) fixes the problems, so we're going to move all the traffic to `v2` by [replacing the virtual service](istio/service-be-v2.yaml) and deleting the 50/50 rule.
101 | 
102 |     $ kubectl apply -f service-be-v2.yaml -n $KUBE_NAMESPACE
103 | 
104 | ## Third Version
105 | 
106 | Now another team member decided that we really should fix the problem, even though we have worked around the issue. So let's start moving traffic 50/50 between `v2` and `v3` using a [new route](istio/service-be-v2-v3.yaml).
107 | 
108 |     $ kubectl apply -f service-be-v2-v3.yaml -n $KUBE_NAMESPACE
109 | 
110 | That seems to look good, so we'll route all the traffic to `v3` by [replacing the virtual service](istio/service-be-v3.yaml), thus deleting the 50/50 rule.
111 | 
112 |     $ kubectl apply -f service-be-v3.yaml -n $KUBE_NAMESPACE
113 | 
114 | Do the results still seem skewed?
115 | 
116 | [Istio]: https://istio.io/
117 | [topdog]: https://github.com/ancientlore/topdog
118 | [Docker]: https://www.docker.com/
119 | [Minikube]: https://github.com/kubernetes/minikube
--------------------------------------------------------------------------------
/trafficshifting/demo.json:
--------------------------------------------------------------------------------
1 | {
2 |     "title": "Traffic Shifting Demo",
3 |     "stepsPosition": 1,
4 |     "sites": {
5 |         "topdog": {
6 |             "title": "topdog",
7 |             "url": "$TOPDOG",
8 |             "position": 0
9 |         }
10 |     },
11 |     "processes": {
12 |         "bash": {
13 |             "title": "bash",
14 |             "command": ["bash"],
15 |             "dir": "./istio",
16 |             "position": 2,
17 |             "exitInput": "exit"
18 |         }
19 |     },
20 |     "steps": [
21 |         {
22 |             "title": "Check the running pods",
23 |             "desc": "The topdog pods were deployed earlier. Normally, we wouldn't deploy all three versions at once.",
24 |             "id": "bash",
25 |             "input": "kubectl get pods | egrep '(^NAME|topdog)'"
26 |         },
27 |         {
28 |             "title": "Check the virtual services",
29 |             "desc": "These virtual services were created earlier. They direct all traffic to v1 of the topdog services.",
30 |             "id": "bash",
31 |             "input": "kubectl get virtualservices | egrep '(^NAME|topdog)'"
32 |         },
33 |         {
34 |             "title": "Check the destination rules",
35 |             "desc": "These destination rules were created earlier. They direct all traffic to v1 of the topdog services.",
36 |             "id": "bash",
37 |             "input": "kubectl get destinationrules | egrep '(^NAME|topdog)'"
38 |         },
39 |         {
40 |             "title": "Inspect a virtual service",
41 |             "desc": "The virtual service specifies the subset of the versions to use.",
42 |             "id": "bash",
43 |             "input": "kubectl get virtualservice topdogbe -n $KUBE_NAMESPACE -o yaml"
44 |         },
45 |         {
46 |             "title": "Inspect a destination rule",
47 |             "desc": "The destination rule defines the subsets using Kubernetes labels.",
48 |             "id": "bash",
49 |             "input": "kubectl get destinationrule topdogbe -n $KUBE_NAMESPACE -o yaml"
50 |         },
51 |         {
52 |             "title": "Observe the problem with v1",
53 |             "desc": "Never let a candidate create the voting machine...",
54 |             "id": "bash",
55 |             "input": ""
56 |         },
57 |         {
58 |             "title": "Fix the service with v2",
59 |             "desc": "A team member fixed the unabashed bias. Let's roll it out 50/50 between v1 and v2.",
60 |             "id": "bash",
61 |             "input": "kubectl apply -f service-be-v1-v2.yaml -n $KUBE_NAMESPACE"
62 |         },
63 |         {
64 |             "title": "Look at the traffic shifting rule",
65 |             "desc": "While we wait for it to apply, let's inspect what the traffic shifting rule looks like.",
66 |             "id": "bash",
67 |             "input": "kubectl get virtualservice topdogbe -n $KUBE_NAMESPACE -o yaml"
68 |         },
69 |         {
70 |             "title": "Observe the 50/50 utilization, and a new bug",
71 |             "desc": "Hmmm, sometimes v2 fails. Using zipkin, we find that a retry might fix the issue.",
72 |             "id": "bash",
73 |             "input": ""
74 |         },
75 |         {
76 |             "title": "Fix v2 with an HTTP retry rule",
77 |             "desc": "Retries to the midtier will fix the v2 problem. That's better than reverting in this case.",
78 |             "id": "bash",
79 |             "input": "kubectl apply -f service-mt-retry.yaml -n $KUBE_NAMESPACE"
80 |         },
81 |         {
82 |             "title": "Look at the retry rule",
83 |             "desc": "While we wait for it to apply, let's inspect what the retry rule looks like.",
84 |             "id": "bash",
85 |             "input": "kubectl get virtualservice topdogmt -n $KUBE_NAMESPACE -o yaml"
86 |         },
87 |         {
88 |             "title": "Move all traffic to v2",
89 |             "desc": "Everything looks good, so we'll reset the default route to v2.",
90 |             "id": "bash",
91 |             "input": "kubectl apply -f service-be-v2.yaml -n $KUBE_NAMESPACE"
92 |         },
93 |         {
94 |             "title": "Shift traffic to v3, which fixes the 500 bug",
95 |             "desc": "Now a team member fixes the v2 bug.",
96 |             "id": "bash",
97 |             "input": "kubectl apply -f service-be-v2-v3.yaml -n $KUBE_NAMESPACE"
98 |         },
99 |         {
100 |             "title": "Looking good, so let's move all traffic to v3",
101 |             "desc": "Reset the default route rule to v3.",
102 |             "id": "bash",
103 |             "input": "kubectl apply -f service-be-v3.yaml -n $KUBE_NAMESPACE"
104 |         },
105 |         {
106 |             "title": "Are the results still skewed?",
107 |             "desc": "I still see bias, but it's less obvious.",
108 |             "id": "bash",
109 |             "input": ""
110 |         },
111 |         {
112 |             "title": "Clean up - back to v1",
113 |             "desc": "The demo is done, so let's move everything back to v1.",
114 |             "id": "bash",
115 |             "input": "kubectl apply -f services-all-v1.yaml -n $KUBE_NAMESPACE"
116 |         }
117 |     ]
118 | }
119 | 
--------------------------------------------------------------------------------
/trafficshifting/istio/service-be-v1-v2.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: networking.istio.io/v1alpha3
2 | kind: VirtualService
3 | metadata:
4 |   name: topdogbe
5 | spec:
6 |   hosts:
7 |   - topdogbe
8 |   http:
9 |   - route:
10 |     - destination:
11 |         host: topdogbe
12 |         subset: v1
13 |       weight: 50
14 |     - destination:
15 |         host: topdogbe
16 |         subset: v2
17 |       weight: 50
--------------------------------------------------------------------------------
/trafficshifting/istio/service-be-v2-v3.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: networking.istio.io/v1alpha3
2 | kind: VirtualService
3 | metadata:
4 |   name: topdogbe
5 | spec:
6 |   hosts:
7 |   - topdogbe
8 |   http:
9 |   - route:
10 |     - destination:
11 |         host: topdogbe
12 |         subset: v2
13 |       weight: 50
14 |     - destination:
15 |         host: topdogbe
16 |         subset: v3
17 |       weight: 50
--------------------------------------------------------------------------------
/trafficshifting/istio/service-be-v2.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: networking.istio.io/v1alpha3
2 | kind: VirtualService
3 | metadata:
4 |   name: topdogbe
5 |
spec: 6 | hosts: 7 | - topdogbe 8 | http: 9 | - route: 10 | - destination: 11 | host: topdogbe 12 | subset: v2 13 | -------------------------------------------------------------------------------- /trafficshifting/istio/service-be-v3.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: topdogbe 5 | spec: 6 | hosts: 7 | - topdogbe 8 | http: 9 | - route: 10 | - destination: 11 | host: topdogbe 12 | subset: v3 13 | -------------------------------------------------------------------------------- /trafficshifting/istio/service-mt-retry.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: topdogmt 5 | spec: 6 | hosts: 7 | - topdogmt 8 | http: 9 | - route: 10 | - destination: 11 | host: topdogmt 12 | subset: v1 13 | retries: 14 | attempts: 3 15 | perTryTimeout: 100ms 16 | retryOn: 5xx 17 | -------------------------------------------------------------------------------- /trafficshifting/istio/services-all-v1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: topdogui 5 | spec: 6 | hosts: 7 | - topdogui 8 | - topdog.127.0.0.1.xip.io 9 | - topdog.192.168.99.101.xip.io 10 | gateways: 11 | - topdog-gateway 12 | http: 13 | - route: 14 | - destination: 15 | host: topdogui 16 | subset: v1 17 | --- 18 | apiVersion: networking.istio.io/v1alpha3 19 | kind: DestinationRule 20 | metadata: 21 | name: topdogui 22 | spec: 23 | host: topdogui 24 | trafficPolicy: 25 | tls: 26 | mode: ISTIO_MUTUAL 27 | subsets: 28 | - name: v1 29 | labels: 30 | version: v1 31 | --- 32 | apiVersion: networking.istio.io/v1alpha3 33 | kind: VirtualService 34 | metadata: 35 | name: topdogmt 36 | spec: 37 | hosts: 38 | - topdogmt 39 | http: 40 | - route: 41 | - destination: 42 | host: topdogmt 43 | subset: v1 44 | --- 45 | apiVersion: networking.istio.io/v1alpha3 46 | kind: DestinationRule 47 | metadata: 48 | name: topdogmt 49 | spec: 50 | host: topdogmt 51 | trafficPolicy: 52 | tls: 53 | mode: ISTIO_MUTUAL 54 | subsets: 55 | - name: v1 56 | labels: 57 | version: v1 58 | --- 59 | apiVersion: networking.istio.io/v1alpha3 60 | kind: VirtualService 61 | metadata: 62 | name: topdogbe 63 | spec: 64 | hosts: 65 | - topdogbe 66 | http: 67 | - route: 68 | - destination: 69 | host: topdogbe 70 | subset: v1 71 | --- 72 | apiVersion: networking.istio.io/v1alpha3 73 | kind: DestinationRule 74 | metadata: 75 | name: topdogbe 76 | spec: 77 | host: topdogbe 78 | trafficPolicy: 79 | tls: 80 | mode: ISTIO_MUTUAL 81 | subsets: 82 | - name: v1 83 | labels: 84 | version: v1 85 | - name: v2 86 | labels: 87 | version: v2 88 | - name: v3 89 | labels: 90 | version: v3 91 | -------------------------------------------------------------------------------- /trafficshifting/startDemo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ctx=`kubectl config current-context` 4 | echo kube context is $ctx 5 | ns=`kubectl config view -o jsonpath="{.contexts[?(@.name == \"$ctx\")].context.namespace}"` 6 | ns=${ns:-default} 7 | echo KUBE_NAMESPACE is $ns 8 | 9 | t="http://localhost:8081" 10 | 11 | # start port-forward 12 | hp=`kubectl get pod -l app=topdogui -o name | sed 's/^pods\///'` 13 | kubectl port-forward $hp 8081:5000 & 14 | 
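# $! captures the PID of the port-forward just started in the background;
# demon runs in the foreground below, and the port-forward is cleaned up
# with SIGTERM (kill -15) after demon exits.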
hpid=$!
15 | echo TOPDOG pod is $hp
16 | 
17 | KUBE_NAMESPACE=$ns TOPDOG=$t demon
18 | 
19 | kill -15 $hpid
20 | 
--------------------------------------------------------------------------------
/trafficshifting/topdog/README.md:
--------------------------------------------------------------------------------
1 | # topdog
2 | 
3 | This project deploys a QA instance of the [topdog] application for testing.
4 | 
5 | [topdog] is a simple three-tier application.
6 | 
7 | ![topdog tiers](topdog.png)
8 | 
9 | * The _backend_ returns random, weighted data about who the "top dog" architect is at any given moment.
10 | * The _midtier_ queries the backend for the "top dog".
11 | * The _UI_ queries the midtier and displays realtime results.
12 | 
13 | The deployment includes three versions of the backend. Ordinarily, you would not deploy all three at once, but for this demo that works well. The midtier is essentially a pass-through, but in practice could have other business logic.
14 | 
15 | > Note: Each tier uses the same Docker image. This is just for simplicity.
16 | 
17 | > Note: These instructions assume a `bash` shell. On Windows, you can use `git-bash`, which should be installed with [git](https://git-scm.com/).
18 | 
19 | ## UI Endpoints
20 | 
21 | * `/` shows the UI.
22 | * `/query` gets the "top dog".
23 | * `/static/*` returns static content.
24 | 
25 | ## Midtier Endpoints
26 | 
27 | * `/midtier` returns the "top dog" from the midtier, which has to query the backend to get it.
28 | 
29 | ## Backend Endpoints
30 | 
31 | * `/backend` computes the "top dog" using a random, weighted system.
32 | 
33 | ## Deployment
34 | 
35 | To deploy the [topdog] containers without [Istio], *or* if you have Istio automatic sidecar injection enabled, run:
36 | 
37 |     $ kubectl apply -f ./kube/
38 | 
39 | To deploy the [topdog] containers with [Istio] when automatic sidecar injection is not enabled, run:
40 | 
41 |     $ kubectl apply -f <(istioctl kube-inject -f ./kube/deployment-be-v1.yaml)
42 |     $ kubectl apply -f <(istioctl kube-inject -f ./kube/deployment-be-v2.yaml)
43 |     $ kubectl apply -f <(istioctl kube-inject -f ./kube/deployment-be-v3.yaml)
44 |     $ kubectl apply -f <(istioctl kube-inject -f ./kube/deployment-mt-v1.yaml)
45 |     $ kubectl apply -f <(istioctl kube-inject -f ./kube/deployment-ui-v1.yaml)
46 | 
47 | You can check the UI by using a port-forward and then browsing to http://localhost:5000/.
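Once the port-forward below is running, you can also spot-check the deployment from the command line (a sketch — `/query` is assumed to return a small payload naming the current "top dog", per the endpoint list above):

    $ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5000/   # UI is up when this prints 200
    $ curl -s http://localhost:5000/query                               # fetches the current "top dog"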
48 | 49 | $ hp=`kubectl get pod -l app=topdogui -o name | sed 's/^pods\///'` 50 | $ kubectl port-forward $hp 5000 51 | 52 | ## Brought to you by: 53 | 54 | ![dogs](dogs.png) 55 | 56 | [Istio]: https://istio.io/ 57 | [topdog]: https://github.com/ancientlore/topdog 58 | -------------------------------------------------------------------------------- /trafficshifting/topdog/dogs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/trafficshifting/topdog/dogs.png -------------------------------------------------------------------------------- /trafficshifting/topdog/kube/deployment-be-v1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: topdogbe-v1 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 | app: topdogbe 11 | version: v1 12 | spec: 13 | containers: 14 | - env: 15 | - name: VERSION 16 | value: "1" 17 | image: ancientlore/topdog:v0.1.4 18 | imagePullPolicy: Always 19 | name: topdogbe 20 | ports: 21 | - containerPort: 5000 22 | --- 23 | apiVersion: v1 24 | kind: Service 25 | metadata: 26 | name: topdogbe 27 | labels: 28 | app: topdogbe 29 | spec: 30 | ports: 31 | - port: 5000 32 | name: http 33 | targetPort: 5000 34 | selector: 35 | app: topdogbe 36 | -------------------------------------------------------------------------------- /trafficshifting/topdog/kube/deployment-be-v2.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: topdogbe-v2 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 | app: topdogbe 11 | version: v2 12 | spec: 13 | containers: 14 | - env: 15 | - name: VERSION 16 | value: "2" 17 | image: ancientlore/topdog:v0.1.4 18 | imagePullPolicy: Always 19 | name: topdogbe 20 | ports: 21 | - containerPort: 5000 22 | --- 23 | apiVersion: v1 24 | kind: Service 25 | metadata: 26 | name: topdogbe 27 | labels: 28 | app: topdogbe 29 | spec: 30 | ports: 31 | - port: 5000 32 | name: http 33 | targetPort: 5000 34 | selector: 35 | app: topdogbe 36 | -------------------------------------------------------------------------------- /trafficshifting/topdog/kube/deployment-be-v3.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: topdogbe-v3 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 | app: topdogbe 11 | version: v3 12 | spec: 13 | containers: 14 | - env: 15 | - name: VERSION 16 | value: "3" 17 | image: ancientlore/topdog:v0.1.4 18 | imagePullPolicy: Always 19 | name: topdogbe 20 | ports: 21 | - containerPort: 5000 22 | --- 23 | apiVersion: v1 24 | kind: Service 25 | metadata: 26 | name: topdogbe 27 | labels: 28 | app: topdogbe 29 | spec: 30 | ports: 31 | - port: 5000 32 | name: http 33 | targetPort: 5000 34 | selector: 35 | app: topdogbe 36 | -------------------------------------------------------------------------------- /trafficshifting/topdog/kube/deployment-mt-v1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: topdogmt-v1 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 | app: topdogmt 11 | version: v1 12 | spec: 13 | 
containers: 14 | - env: 15 | - name: VERSION 16 | value: "1" 17 | - name: MIDTIER 18 | value: "http://topdogbe:5000" 19 | - name: BACKEND 20 | value: "http://topdogbe:5000" 21 | image: ancientlore/topdog:v0.1.4 22 | imagePullPolicy: Always 23 | name: topdogmt 24 | ports: 25 | - containerPort: 5000 26 | --- 27 | apiVersion: v1 28 | kind: Service 29 | metadata: 30 | name: topdogmt 31 | labels: 32 | app: topdogmt 33 | spec: 34 | ports: 35 | - port: 5000 36 | name: http 37 | targetPort: 5000 38 | selector: 39 | app: topdogmt 40 | -------------------------------------------------------------------------------- /trafficshifting/topdog/kube/deployment-ui-v1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: topdogui-v1 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | labels: 10 | app: topdogui 11 | version: v1 12 | spec: 13 | containers: 14 | - env: 15 | - name: VERSION 16 | value: "1" 17 | - name: MIDTIER 18 | value: "http://topdogmt:5000" 19 | - name: BACKEND 20 | value: "http://topdogbe:5000" 21 | image: ancientlore/topdog:v0.1.4 22 | imagePullPolicy: Always 23 | name: topdogui 24 | ports: 25 | - containerPort: 5000 26 | --- 27 | apiVersion: v1 28 | kind: Service 29 | metadata: 30 | name: topdogui 31 | labels: 32 | app: topdogui 33 | spec: 34 | ports: 35 | - port: 5000 36 | name: http 37 | targetPort: 5000 38 | selector: 39 | app: topdogui 40 | -------------------------------------------------------------------------------- /trafficshifting/topdog/topdog.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/trafficshifting/topdog/topdog.png -------------------------------------------------------------------------------- /trafficshifting/trafficshifting.bmpr: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/trafficshifting/trafficshifting.bmpr -------------------------------------------------------------------------------- /trafficshifting/trafficshifting.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ancientlore/istio-talk/1e0ac0aa2003b03ffe3f3cc7ec200c365f1b74cf/trafficshifting/trafficshifting.png -------------------------------------------------------------------------------- /vagrant-demo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Use this in the Vagrant box to run all the demos 4 | # Port 8080 - The presentation 5 | # Port 8081 - Traffic Shifting Demo 6 | # Port 8081 - Resiliency Demo (after pressing return) 7 | pushd /vagrant 8 | 9 | addr=192.168.99.101 10 | 11 | ./demo.sh $addr 12 | -------------------------------------------------------------------------------- /vagrant/istio/demo-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: topdog-gateway 5 | spec: 6 | servers: 7 | - port: 8 | number: 80 9 | name: http 10 | protocol: HTTP2 11 | hosts: 12 | - topdog.192.168.99.101.xip.io 13 | --- 14 | apiVersion: networking.istio.io/v1alpha3 15 | kind: Gateway 16 | metadata: 17 | name: hurl-gateway 18 | spec: 19 | servers: 20 | - port: 21 | number: 80 
22 | name: http 23 | protocol: HTTP2 24 | hosts: 25 | - hurl.192.168.99.101.xip.io 26 | --- 27 | apiVersion: networking.istio.io/v1alpha3 28 | kind: Gateway 29 | metadata: 30 | name: webnull-gateway 31 | spec: 32 | servers: 33 | - port: 34 | number: 80 35 | name: http 36 | protocol: HTTP2 37 | hosts: 38 | - webnull.192.168.99.101.xip.io 39 | -------------------------------------------------------------------------------- /vagrant/istio/grafana-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: grafana-gateway 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 80 12 | name: http 13 | protocol: HTTP2 14 | hosts: 15 | - grafana.192.168.99.101.xip.io 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: grafana 21 | namespace: istio-system 22 | spec: 23 | host: grafana.istio-system.svc.cluster.local 24 | trafficPolicy: 25 | tls: 26 | mode: DISABLE 27 | --- 28 | apiVersion: networking.istio.io/v1alpha3 29 | kind: VirtualService 30 | metadata: 31 | name: grafana 32 | namespace: istio-system 33 | spec: 34 | hosts: 35 | - grafana 36 | - grafana.192.168.99.101.xip.io 37 | gateways: 38 | - grafana-gateway 39 | http: 40 | - match: 41 | - port: 80 42 | route: 43 | - destination: 44 | host: grafana.istio-system.svc.cluster.local 45 | port: 46 | number: 3000 47 | -------------------------------------------------------------------------------- /vagrant/istio/ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Ingress 3 | metadata: 4 | name: minikube-ingress 5 | namespace: istio-system 6 | spec: 7 | backend: 8 | serviceName: istio-ingressgateway 9 | servicePort: 80 10 | -------------------------------------------------------------------------------- /vagrant/istio/jaeger-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: jaeger-gateway 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 80 12 | name: http 13 | protocol: HTTP2 14 | hosts: 15 | - jaeger.192.168.99.101.xip.io 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: jaeger 21 | namespace: istio-system 22 | spec: 23 | host: jaeger-query.istio-system.svc.cluster.local 24 | trafficPolicy: 25 | tls: 26 | mode: DISABLE 27 | --- 28 | apiVersion: networking.istio.io/v1alpha3 29 | kind: VirtualService 30 | metadata: 31 | name: jaeger 32 | namespace: istio-system 33 | spec: 34 | hosts: 35 | - jaeger-query 36 | - jaeger.192.168.99.101.xip.io 37 | gateways: 38 | - jaeger-gateway 39 | http: 40 | - match: 41 | - port: 80 42 | route: 43 | - destination: 44 | host: jaeger-query.istio-system.svc.cluster.local 45 | port: 46 | number: 16686 47 | -------------------------------------------------------------------------------- /vagrant/istio/servicegraph-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: servicegraph-gateway 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 80 12 | name: http 
13 | protocol: HTTP2 14 | hosts: 15 | - servicegraph.192.168.99.101.xip.io 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: servicegraph 21 | namespace: istio-system 22 | spec: 23 | host: servicegraph.istio-system.svc.cluster.local 24 | trafficPolicy: 25 | tls: 26 | mode: DISABLE 27 | --- 28 | apiVersion: networking.istio.io/v1alpha3 29 | kind: VirtualService 30 | metadata: 31 | name: servicegraph 32 | namespace: istio-system 33 | spec: 34 | hosts: 35 | - servicegraph 36 | - servicegraph.192.168.99.101.xip.io 37 | gateways: 38 | - servicegraph-gateway 39 | http: 40 | - match: 41 | - port: 80 42 | route: 43 | - destination: 44 | host: servicegraph.istio-system.svc.cluster.local 45 | port: 46 | number: 8088 47 | -------------------------------------------------------------------------------- /vagrant/istio/zipkin-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: zipkin-gateway 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 80 12 | name: http 13 | protocol: HTTP2 14 | hosts: 15 | - zipkin.192.168.99.101.xip.io 16 | --- 17 | apiVersion: networking.istio.io/v1alpha3 18 | kind: DestinationRule 19 | metadata: 20 | name: zipkin 21 | namespace: istio-system 22 | spec: 23 | host: zipkin.istio-system.svc.cluster.local 24 | trafficPolicy: 25 | tls: 26 | mode: DISABLE 27 | --- 28 | apiVersion: networking.istio.io/v1alpha3 29 | kind: VirtualService 30 | metadata: 31 | name: zipkin 32 | namespace: istio-system 33 | spec: 34 | hosts: 35 | - zipkin 36 | - zipkin.192.168.99.101.xip.io 37 | gateways: 38 | - zipkin-gateway 39 | http: 40 | - match: 41 | - port: 80 42 | route: 43 | - destination: 44 | host: zipkin.istio-system.svc.cluster.local 45 | port: 46 | number: 9411 47 | -------------------------------------------------------------------------------- /vagrant/setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | wait_on_istio () 4 | { 5 | # wait for istio 6 | until [ `kubectl get pods -n istio-system | egrep -v '(Running|Completed)' | wc -l` -eq 1 ] 7 | do 8 | echo "Waiting for Istio" 9 | sleep 5 10 | done 11 | } 12 | 13 | wait_on_k8s () 14 | { 15 | # wait for kubernetes 16 | until [ `kubectl get pods -n kube-system | egrep 'Running' | wc -l` -ge 5 ] 17 | do 18 | echo "Waiting for Kubernetes" 19 | sleep 5 20 | done 21 | } 22 | 23 | sudo apt-get update 24 | sudo apt-get upgrade -y 25 | 26 | # Installing Git 27 | sudo apt-get install -y git 28 | sudo apt-get install -y jq 29 | sudo apt-get install -y socat 30 | sudo apt-get install -y gcc 31 | 32 | # Installing Docker 33 | echo "*** INSTALLING DOCKER ***" 34 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 35 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 36 | sudo apt-get update 37 | apt-cache policy docker-ce 38 | sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu 39 | sudo systemctl status docker 40 | sudo usermod -aG docker ubuntu 41 | 42 | # Installing kubectl 43 | echo "*** INSTALLING KUBECTL ***" 44 | sudo snap install kubectl --classic 45 | 46 | # Install minikube 47 | echo "*** INSTALLING MINIKUBE ***" 48 | curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.31.0/minikube-linux-amd64 49 | chmod +x minikube 50 | sudo 
mv minikube /usr/local/bin/ 51 | 52 | # Get latest Istio 53 | echo "*** GETTING ISTIO ***" 54 | curl -L https://git.io/getLatestIstio | sh - 55 | cd istio* 56 | export PATH=$PWD/bin:$PATH 57 | echo "export PATH=$PWD/bin:\$PATH" >> /home/vagrant/.profile 58 | cd .. 59 | 60 | # Start minikube 61 | echo "*** STARTING MINIKUBE ***" 62 | sudo minikube start --vm-driver=none --kubernetes-version=v1.10.3 63 | # \ 64 | # --extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \ 65 | # --extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" 66 | sudo minikube addons enable ingress 67 | sleep 10 # allow k8s components to start up 68 | wait_on_k8s 69 | kubectl get pods -n kube-system 70 | 71 | echo "*** TEST KUBECTL ***" 72 | kubectl config view 73 | kubectl config use-context minikube 74 | 75 | # Install Istio 76 | echo "*** INSTALLING ISTIO ***" 77 | cd istio* 78 | kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml 79 | sleep 10 # allow CRDs to be committed in kube-apiserver 80 | kubectl apply -f install/kubernetes/istio-demo-auth.yaml 81 | #echo "*** INSTALLING ISTIO ADDONS ***" 82 | #kubectl apply -f install/kubernetes/addons/ 83 | cd .. 84 | wait_on_istio 85 | kubectl get pods -n istio-system 86 | 87 | # ingress 88 | echo "*** INSTALLING ISTIO ADDONS GATEWAYS ***" 89 | kubectl apply -f /vagrant/vagrant/istio/grafana-gateway.yaml 90 | kubectl apply -f /vagrant/vagrant/istio/jaeger-gateway.yaml 91 | kubectl apply -f /vagrant/vagrant/istio/servicegraph-gateway.yaml 92 | kubectl apply -f /vagrant/vagrant/istio/zipkin-gateway.yaml 93 | kubectl apply -f /vagrant/vagrant/istio/ingress.yaml 94 | sleep 5 95 | 96 | # k8s 97 | echo "*** SETTING KUBERNETES VARIABLES ***" 98 | export KUBE_NAMESPACE=default 99 | echo "export KUBE_NAMESPACE=default" >> /home/vagrant/.profile 100 | 101 | # pull images 102 | echo "*** PULLING DOCKER IMAGES FOR LAB ***" 103 | sudo docker pull ancientlore/topdog:v0.1.4 104 | sudo docker pull ancientlore/hurl:v0.1.2 105 | sudo docker pull ancientlore/webnull:v0.1.3 106 | 107 | echo "*** INSTALLING DEMO CONTAINERS ***" 108 | kubectl apply -f <(istioctl kube-inject -f /vagrant/resiliency/hurl/kube/deployment.yaml) 109 | kubectl apply -f <(istioctl kube-inject -f /vagrant/resiliency/webnull/kube/deployment.yaml) 110 | kubectl apply -f <(istioctl kube-inject -f /vagrant/trafficshifting/topdog/kube/deployment-be-v1.yaml) 111 | kubectl apply -f <(istioctl kube-inject -f /vagrant/trafficshifting/topdog/kube/deployment-be-v2.yaml) 112 | kubectl apply -f <(istioctl kube-inject -f /vagrant/trafficshifting/topdog/kube/deployment-be-v3.yaml) 113 | kubectl apply -f <(istioctl kube-inject -f /vagrant/trafficshifting/topdog/kube/deployment-mt-v1.yaml) 114 | kubectl apply -f <(istioctl kube-inject -f /vagrant/trafficshifting/topdog/kube/deployment-ui-v1.yaml) 115 | 116 | echo "*** INSTALLING DEMO GATEWAYS ***" 117 | kubectl apply -f /vagrant/vagrant/istio/demo-gateway.yaml 118 | 119 | echo "*** ADDING ISTIO VIRTUAL SERVICES ***" 120 | kubectl apply -f /vagrant/trafficshifting/istio/services-all-v1.yaml -n default 121 | kubectl apply -f /vagrant/resiliency/istio/services.yaml -n default 122 | 123 | # Adjust permissions due to having to run minikube as root 124 | echo "*** UPDATING MINIKUBE CONFIG/PERMISSIONS ***" 125 | sudo mv /root/.kube /home/vagrant/ # this will write over any previous configuration 126 | sudo chown -R vagrant /home/vagrant/.kube 127 | sudo chgrp -R vagrant /home/vagrant/.kube 128 | #sudo 
mv /root/.minikube /home/vagrant/.minikube # this will write over any previous configuration 129 | #sudo chown -R vagrant /home/vagrant/.minikube 130 | #sudo chgrp -R vagrant /home/vagrant/.minikube 131 | sudo chown -R vagrant /root/.minikube 132 | sudo chgrp -R vagrant /root/.minikube 133 | sudo chmod go+rx /root 134 | 135 | # install Go 136 | echo "*** INSTALLING GOLANG ***" 137 | curl https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz > go1.11.4.linux-amd64.tar.gz 138 | sudo tar -C /usr/local -xzf go1.11.4.linux-amd64.tar.gz 139 | export GOPATH=/home/vagrant 140 | echo "export GOPATH=/home/vagrant" >> /home/vagrant/.profile 141 | export PATH="$PATH:/usr/local/go/bin:/home/vagrant/bin" 142 | echo "export PATH=\"$PATH:/usr/local/go/bin:/home/vagrant/bin\"" >> /home/vagrant/.profile 143 | go env 144 | echo "*** INSTALLING GO-based TOOLS ***" 145 | go get golang.org/x/tools/cmd/present 146 | go get golang.org/x/tools/cmd/stringer 147 | git clone https://github.com/ancientlore/binder 148 | git clone https://github.com/ancientlore/hurl 149 | git clone https://github.com/ancientlore/webnull 150 | git clone https://github.com/ancientlore/demon 151 | git clone https://github.com/ancientlore/topdog 152 | cd binder && go install && cd - 153 | cd hurl && go install && cd - 154 | cd webnull && go install && cd - 155 | cd demon && go install && cd - 156 | cd topdog && go install && cd - 157 | 158 | sudo chown -R vagrant /home/vagrant/* 159 | sudo chgrp -R vagrant /home/vagrant/* 160 | 161 | echo 'echo "NOTE:"' >> /home/vagrant/.profile 162 | echo 'echo "Run /vagrant/vagrant-demo.sh to start the demo."' >> /home/vagrant/.profile 163 | --------------------------------------------------------------------------------