2 |
3 |
4 |
5 |
6 |
7 |
12 |
13 |
14 | Awesome Web App !!!!
15 | ....and the demo worked :)
16 | 
17 |
18 |
19 |
20 |
--------------------------------------------------------------------------------
/code/webapp/web.go:
--------------------------------------------------------------------------------
1 | package main
2 |
3 | import (
4 | 	"log"
5 | 	"net/http"
6 | )
7 |
8 | func main() {
9 | 	// Serve the static demo site from /web/static.
10 | 	http.Handle("/", http.FileServer(http.Dir("/web/static")))
11 | 	// ListenAndServe blocks; exit with the error if the server fails to start.
12 | 	log.Fatal(http.ListenAndServe(":3000", nil))
13 | }
14 |
--------------------------------------------------------------------------------
/instructor-notes/notes.md:
--------------------------------------------------------------------------------
1 | # Instructor notes
2 |
3 | Here are some notes to help you give the workshop
4 |
5 | ## Approximate times per module
6 | 1. Introduction
7 | 5 minutes to introduce yourself and the workshop
8 |
9 | 2. Setup
10 | This will take 10 - 15 mins depending on the skill level of the class
11 |
12 | 3. Kubernetes architecture overview
13 | This will take about 5 - 10 mins
14 |
15 | 4. Securing Kubernetes components
16 | Again 5 - 10 mins
17 |
18 | 5. Securing our pods
19 | 25 mins to max 30
20 |
21 | 6. RBAC, namespaces and cluster roles
22 | 25 mins to max 30
23 |
24 | 7. Introduction to Knative
25 | 10 mins
26 |
27 | 8. Securing application communication with Knative
28 | This will take up the remainder, with questions
29 |
30 | ## Known issues
31 |
32 | On Play with Kubernetes I have seen Knative fail to install sometimes due to pods being evicted.
33 | If this happens, fall back to Minikube. I am going to raise this with the guys on the Play with Kubernetes project.
34 |
--------------------------------------------------------------------------------
/introduction-into-istio/intro.md:
--------------------------------------------------------------------------------
1 | # Introduction to Istio
2 |
3 | In this module we will look at Istio and its components.
4 |
5 | # Istio
6 |
7 | Cloud platforms provide a wealth of benefits for the organizations that use them. There’s no denying, however, that adopting the cloud can put strains on DevOps teams. Developers must use microservices to architect for portability, meanwhile operators are managing extremely large hybrid and multi-cloud deployments. Istio lets you connect, secure, control, and observe services.
8 |
9 | At a high level, Istio helps reduce the complexity of these deployments, and eases the strain on your development teams. It is a completely open source service mesh that layers transparently onto existing distributed applications. It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.
10 |
11 | Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without any changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, then configure and manage Istio using its control plane functionality, which includes:
12 |
13 | * Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
14 |
15 | * Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
16 |
17 | * A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
18 |
19 | * Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
20 |
21 | * Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
22 |
23 | ## Pilot and Envoy
24 | The core component used for traffic management in Istio is Pilot, which manages and configures all the Envoy proxy instances deployed in a particular Istio service mesh. Pilot lets you specify which rules you want to use to route traffic between Envoy proxies and configure failure recovery features such as timeouts, retries, and circuit breakers. It also maintains a canonical model of all the services in the mesh and uses this model to let Envoy instances know about the other Envoy
25 | instances in the mesh via its discovery service.
26 |
27 | Each Envoy instance maintains load balancing information based on the information it gets from Pilot and periodic health-checks of other instances in its load-balancing pool, allowing it to intelligently distribute traffic between destination instances while following its specified routing rules.
28 |
29 | Pilot is responsible for the lifecycle of Envoy instances deployed across the Istio service mesh.
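To make this concrete, here is a hedged sketch of the kind of routing rule Pilot turns into Envoy configuration, using Istio's `networking.istio.io/v1alpha3` API; the `reviews` service name is just a placeholder:

```
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-timeouts
spec:
  hosts:
  - reviews                 # placeholder service name
  http:
  - route:
    - destination:
        host: reviews
    timeout: 10s            # fail the request if it takes longer than 10s overall
    retries:
      attempts: 3           # retry a failed request up to 3 times
      perTryTimeout: 2s     # give each attempt at most 2 seconds
EOF
```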
30 |
31 |
32 | ## Communication between services
33 |
34 |
35 | ## Ingress and egress
36 | Istio assumes that all traffic entering and leaving the service mesh transits through Envoy proxies. By deploying an Envoy proxy in front of services, you can conduct A/B testing, deploy canary services, etc. for user-facing services. Similarly, by routing traffic to external web services (for instance, accessing a maps API or a video service API) via the Envoy sidecar, you can add failure recovery features such as timeouts, retries, and circuit breakers and obtain detailed metrics on the connections to these services.
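As a hedged example of the egress side (the external host below is a placeholder), traffic to an outside API can be declared to the mesh with a ServiceEntry so the sidecars know about it and can apply those same policies:

```
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-maps-api
spec:
  hosts:
  - maps.example.com        # placeholder external hostname
  location: MESH_EXTERNAL   # the service lives outside the mesh
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
EOF
```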
37 |
38 |
39 | ## Discovery and load balancing
40 | Istio load balances traffic across instances of a service in a service mesh.
41 |
42 | Istio assumes the presence of a service registry to keep track of the pods/VMs of a service in the application. It also assumes that new instances of a service are automatically registered with the service registry and unhealthy instances are automatically removed. Platforms such as Kubernetes and Mesos already provide such functionality for container-based applications, and many solutions exist for VM-based applications.
43 |
44 | Pilot consumes information from the service registry and provides a platform-independent service discovery interface. Envoy instances in the mesh perform service discovery and dynamically update their load balancing pools accordingly.
45 |
46 |
47 | ## Certificate architecture
48 |
49 | Security in Istio involves multiple components:
50 |
51 | * Citadel for key and certificate management
52 |
53 | * Sidecar and perimeter proxies to implement secure communication between clients and servers
54 |
55 | * Pilot to distribute authentication policies and secure naming information to the proxies
56 |
57 | * Mixer to manage authorization and auditing
58 |
59 |
60 |
61 |
62 |
63 |
64 |
65 |
66 |
67 |
68 |
--------------------------------------------------------------------------------
/kubernetes-architecture/architecture.md:
--------------------------------------------------------------------------------
1 | # Kubernetes architecture
2 |
3 | In this module we are going to have a quick look at the architecture of Kubernetes.
4 | Kubernetes is a container orchestration platform that is made up of multiple services.
5 |
6 | 
7 |
8 | These services have different responsibilities that together make the platform work as a cohesive unit.
9 | Now that we have run the setup we can issue `kubectl get pods --namespace=kube-system` to look at what pods we have running
10 | in the kube-system namespace. Each one of these pods corresponds to a service. Now let's break down what their responsibilities are.
11 |
12 | ## Master node
13 |
14 | ### etcd
15 | a simple, distributed key-value store which is used to store the Kubernetes cluster data (such as number of pods, their state, namespace, etc), API objects and service discovery details. It is only accessible from the API server for security reasons. etcd enables notifications to the cluster about configuration changes with the help of watchers. Notifications are API requests on each etcd cluster node to trigger the update of information in the node’s storage.
16 |
17 | ### Kube api server
18 | Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replication sets/controllers and others), serving as frontend to the cluster. Also, this is the only component that communicates with the etcd cluster, making sure data is stored in etcd and is in agreement with the service details of the deployed pods.
19 |
20 | ### kube controller manager
21 | runs a number of distinct controller processes in the background (for example, replication controller controls number of replicas in a pod, endpoints controller populates endpoint objects like services and pods, and others) to regulate the shared state of the cluster and perform routine tasks. When a change in a service configuration occurs (for example, replacing the image from which the pods are running, or changing parameters in the configuration yaml file), the controller spots the change and starts working towards the new desired state.
22 |
23 | ### cloud controller manager
24 | is responsible for managing controller processes with dependencies on the underlying cloud provider (if applicable). For example, when a controller needs to check if a node was terminated or set up routes, load balancers or volumes in the cloud infrastructure, all that is handled by the cloud-controller-manager.
25 |
26 | ### kube scheduler
27 | helps schedule the pods (a co-located group of containers inside which our application processes are running) on the various nodes based on resource utilization. It reads the service’s operational requirements and schedules it on the best fit node. For example, if the application needs 1GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources. The scheduler runs each time there is a need to schedule pods. The scheduler must know the total resources available as well as resources allocated to existing workloads on each node.
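For example (a hedged sketch; the pod name and image are placeholders), those requirements are expressed as resource requests on the container, and the scheduler will only place the pod on a node that can satisfy them:

```
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo          # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.15          # placeholder image
    resources:
      requests:
        memory: "1Gi"          # scheduler only considers nodes with 1Gi of memory free
        cpu: "2"               # and 2 CPU cores unallocated
```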
28 |
29 | ## Worker node
30 |
31 | ### kubelet
32 | An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
33 | The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.
34 |
35 | ### kube proxy
36 | a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.
37 |
38 |
39 |
--------------------------------------------------------------------------------
/kubernetes-architecture/images/kubernetes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scotty-c/kubernetes-security-workshop/a273a8d0b16bd46e6a55a3105a3f148df799c7bf/kubernetes-architecture/images/kubernetes.png
--------------------------------------------------------------------------------
/rbac-namespaces-clusterroles/namespaces.md:
--------------------------------------------------------------------------------
1 | # RBAC, namespaces and cluster roles
2 |
3 | Now that we have learnt how to secure our pods, it's a good idea to separate our applications from each other via namespaces and give them the appropriate level of access
4 | within the cluster to run. So we will first look at namespaces to separate our applications.
5 |
6 | ## Namespaces
7 |
8 | Let's look at the default namespaces available to us.
9 | We do this by issuing `kubectl get namespaces`
10 | In the last lab we deployed our deployment to the default namespace as we did not specify one.
11 | Kubernetes will place any pods in the default namespace unless another one is specified.
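For example (a hedged sketch, using the `webapp-namespace` we are about to create; the pod name and image are placeholders), a pod can be placed in a specific namespace either with `--namespace` on the command line or directly in its metadata:

```
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod              # placeholder name
  namespace: webapp-namespace   # overrides the default namespace
spec:
  containers:
  - name: webapp
    image: nginx:1.15           # placeholder image
```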
12 |
13 | For the next part of the lab we will create a namespace to use for the remaining exercises. We will do that by issuing
14 | ```
15 | cat < ca.crt`
109 |
110 | Then get the user token from our secret
111 | `USER_TOKEN=$(kubectl get secret --namespace webapp-namespace "${SECRET_NAME}" -o json | jq -r '.data["token"]' | base64 --decode)`
112 |
113 | Now we will set up our kubeconfig file
114 | ```
115 | context=$(kubectl config current-context)
116 | CLUSTER_NAME=$(kubectl config get-contexts "$context" | awk '{print $3}' | tail -n 1)
117 | ENDPOINT=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"${CLUSTER_NAME}\")].cluster.server}")
118 | kubectl config set-cluster "${CLUSTER_NAME}" --kubeconfig=admin.conf --server="${ENDPOINT}" --certificate-authority=ca.crt --embed-certs=true
119 | kubectl config set-credentials "webapp-service-account-webapp-namespace-${CLUSTER_NAME}" --kubeconfig=admin.conf --token="${USER_TOKEN}"
120 | kubectl config set-context "webapp-service-account-webapp-namespace-${CLUSTER_NAME}" --kubeconfig=admin.conf --cluster="${CLUSTER_NAME}" --user="webapp-service-account-webapp-namespace-${CLUSTER_NAME}" --namespace webapp-namespace
121 | kubectl config use-context "webapp-service-account-webapp-namespace-${CLUSTER_NAME}" --kubeconfig=admin.conf
122 | ```
123 | Note: if you want to cheat, there is a shell script [here](scripts/kubectl.sh)
124 |
125 | We will then load the file in our terminal; make sure you use tmux or a separate terminal
126 | `export KUBECONFIG=admin.conf`
127 |
128 | Now let's check our permissions by seeing if we can list pods in the default namespace
129 | `kubectl get pods`
130 |
131 | Now let's check our namespace
132 | `kubectl get pods --namespace=webapp-namespace`
133 |
134 | (Check [here](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects) for more info about rbac subjects)
135 |
136 | Now we have limited the blast radius of our application to only the namespace that it resides in.
137 | So there will be no way that we can leak configmaps or secrets from other applications that are not in this namespace.
138 |
139 | ## Users and Certificates
140 |
141 | ServiceAccounts are for services inside Kubernetes; to authenticate people we can use the "User" subject instead
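A hedged sketch of what that looks like in a RoleBinding (the user and role names here are placeholders; the user name typically comes from the CN of a client certificate):

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: webapp-user-binding      # placeholder name
  namespace: webapp-namespace
subjects:
- kind: User
  name: jane                     # placeholder user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: webapp-role              # placeholder role name
  apiGroup: rbac.authorization.k8s.io
```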
142 |
143 | ```
144 | cat < ca.crt
11 | USER_TOKEN=$(kubectl get secret --namespace webapp-namespace "${SECRET_NAME}" -o json | jq -r '.data["token"]' | base64 --decode)
12 | context=$(kubectl config current-context)
13 | CLUSTER_NAME=$(kubectl config get-contexts "$context" | awk '{print $3}' | tail -n 1)
14 | ENDPOINT=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"${CLUSTER_NAME}\")].cluster.server}")
15 | kubectl config set-cluster "${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}" --server="${ENDPOINT}" --certificate-authority=ca.crt --embed-certs=true
16 | kubectl config set-credentials "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}" --token="${USER_TOKEN}"
17 | kubectl config set-context "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}" --cluster="${CLUSTER_NAME}" --user="${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" --namespace="${NAMESPACE}"
18 | kubectl config use-context "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}"
19 |
--------------------------------------------------------------------------------
/securing-application-communication-with-istio/images/browser.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scotty-c/kubernetes-security-workshop/a273a8d0b16bd46e6a55a3105a3f148df799c7bf/securing-application-communication-with-istio/images/browser.png
--------------------------------------------------------------------------------
/securing-application-communication-with-istio/istio.md:
--------------------------------------------------------------------------------
1 | # Securing application communication with Istio
2 |
3 | # Installing Istio
4 |
5 | To install Istio, run the following from the root of the repo that you have cloned
6 |
7 | ```
8 | ./securing-application-communication-with-istio/scripts/istio.sh
9 | ```
10 |
11 | To view the script click [here](scripts/istio.sh)
12 |
13 | The script will install Istio using Helm and give us a few more options.
14 | We are going to enable Grafana, tracing and [Kiali](https://github.com/kiali/kiali)
15 |
16 | # Checking Istio is installed correctly
17 |
18 | Before we move on we want to make sure that Istio is installed and running correctly.
19 | We will do that by issuing
20 |
21 | `kubectl get svc -n istio-system`
22 |
23 | The output should look like
24 | ### Azure
25 | ```
26 | $ kubectl get svc -n istio-system
27 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
28 | grafana ClusterIP 10.0.161.95 3000/TCP 1d
29 | istio-citadel ClusterIP 10.0.4.134 8060/TCP,9093/TCP 1d
30 | istio-egressgateway ClusterIP 10.0.185.41 80/TCP,443/TCP 1d
31 | istio-galley ClusterIP 10.0.165.249 443/TCP,9093/TCP 1d
32 | istio-ingressgateway LoadBalancer 10.0.127.246 40.114.74.87 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31536/TCP,8060:31217/TCP,853:31479/TCP,15030:31209/TCP,15031:31978/TCP 1d
33 | istio-pilot ClusterIP 10.0.69.62 15010/TCP,15011/TCP,8080/TCP,9093/TCP 1d
34 | istio-policy ClusterIP 10.0.71.79 9091/TCP,15004/TCP,9093/TCP 1d
35 | istio-sidecar-injector ClusterIP 10.0.100.95 443/TCP 1d
36 | istio-telemetry ClusterIP 10.0.35.107 9091/TCP,15004/TCP,9093/TCP,42422/TCP 1d
37 | jaeger-agent ClusterIP None 5775/UDP,6831/UDP,6832/UDP 1d
38 | jaeger-collector ClusterIP 10.0.27.14 14267/TCP,14268/TCP 1d
39 | jaeger-query ClusterIP 10.0.131.110 16686/TCP 1d
40 | kiali ClusterIP 10.0.163.94 20001/TCP 1d
41 | prometheus ClusterIP 10.0.158.188 9090/TCP 1d
42 | tracing ClusterIP 10.0.162.108 80/TCP 1d
43 | zipkin ClusterIP 10.0.221.151 9411/TCP 1d
44 |
45 | ```
46 |
47 | ### Minikube and Play with k8s
48 | ```
49 | $ kubectl get svc -n istio-system
50 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
51 | grafana ClusterIP 10.105.190.50 3000/TCP 19m
52 | istio-citadel ClusterIP 10.108.71.131 8060/TCP,9093/TCP 19m
53 | istio-egressgateway ClusterIP 10.97.180.125 80/TCP,443/TCP 19m
54 | istio-galley ClusterIP 10.110.109.224 443/TCP,9093/TCP 19m
55 | istio-ingressgateway LoadBalancer 10.109.197.49 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31809/TCP,8060:31143/TCP,853:31762/TCP,15030:31549/TCP,15031:31332/TCP 19m
56 | istio-pilot ClusterIP 10.107.101.198 15010/TCP,15011/TCP,8080/TCP,9093/TCP 19m
57 | istio-policy ClusterIP 10.97.27.29 9091/TCP,15004/TCP,9093/TCP 19m
58 | istio-sidecar-injector ClusterIP 10.111.80.115 443/TCP 19m
59 | istio-telemetry ClusterIP 10.96.180.26 9091/TCP,15004/TCP,9093/TCP,42422/TCP 19m
60 | jaeger-agent ClusterIP None 5775/UDP,6831/UDP,6832/UDP 19m
61 | jaeger-collector ClusterIP 10.111.234.21 14267/TCP,14268/TCP 19m
62 | jaeger-query ClusterIP 10.110.117.50 16686/TCP 19m
63 | kiali ClusterIP 10.107.254.72 20001/TCP 19m
64 | prometheus ClusterIP 10.105.165.152 9090/TCP 19m
65 | tracing ClusterIP 10.108.162.245 80/TCP 19m
66 | zipkin ClusterIP 10.103.129.249 9411/TCP 19m
67 | ```
68 | __NOTE:__ Because we are not using an external load balancer, the EXTERNAL-IP for the `istio-ingressgateway` will remain as `<pending>`.
69 |
70 | We should also run `kubectl get pods -n istio-system`
71 | and see all the pods in the Running state
72 | ```
73 | $ kubectl get pods -n istio-system
74 | NAME READY STATUS RESTARTS AGE
75 | grafana-59b787b9b-4k994 1/1 Running 0 1d
76 | istio-citadel-5d8956cc6-gl7hg 1/1 Running 0 1d
77 | istio-egressgateway-f48fc7fbb-f5bv6 1/1 Running 0 1d
78 | istio-galley-6975b6bd45-mqtsn 1/1 Running 0 1d
79 | istio-ingressgateway-c6c4bcdbf-njpn5 1/1 Running 0 1d
80 | istio-pilot-5b6c9b47d5-qw8b4 2/2 Running 0 1d
81 | istio-policy-6b465cd4bf-xgc7r 2/2 Running 0 1d
82 | istio-sidecar-injector-575597f5cf-8m7c6 1/1 Running 0 1d
83 | istio-telemetry-6944cd768-dppbr 2/2 Running 0 1d
84 | istio-tracing-7596597bd7-vffz6 1/1 Running 0 1d
85 | kiali-5fbd6ffb-6qqfn 1/1 Running 0 1d
86 | prometheus-76db5fddd5-s7dth 1/1 Running 0 1d
87 | ```
88 |
89 | We will create a new namespace just for this application. To do that we will use the below command
90 | `kubectl create namespace istio-app`
91 |
92 | Once all our pods are up and running we will set Istio to automatically inject the Envoy sidecar into this namespace
93 | `kubectl label namespace istio-app istio-injection=enabled`
94 |
95 | If you wanted to do that manually with every application deployment you could use this
96 | `istioctl kube-inject -f <your-deployment>.yaml | kubectl apply -f -`
97 |
98 |
99 | # Mutual TLS
100 | I think the bare minimum that we need to class an application as secure is mutual TLS across the back end.
101 | The beauty of istio is that it will handle the heavy lifting for you and implement it without having to change your application.
102 | So let's test this out.
103 |
104 | We will enable mutual TLS across our `istio-app` namespace
105 |
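A namespace-wide mTLS policy in Istio 1.0 looks roughly like the sketch below, pairing an authentication `Policy` with a `DestinationRule` so that servers require mTLS and clients present certificates (treat this as an illustration of the technique rather than the exact manifest for this lab):

```
cat <<EOF | kubectl apply -f -
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default                 # the name "default" makes it namespace-wide
  namespace: istio-app
spec:
  peers:
  - mtls: {}                    # require mutual TLS from callers
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-app
spec:
  host: "*.istio-app.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL        # clients use Istio-issued certs when calling these services
EOF
```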
106 | ```
107 | cat < 9080/TCP 36s
153 | productpage ClusterIP 10.0.254.103 9080/TCP 33s
154 | ratings ClusterIP 10.0.241.132 9080/TCP 35s
155 | reviews ClusterIP 10.0.254.18 9080/TCP 35s
156 | ```
157 |
158 | and our pods with
159 | `kubectl get pods -n istio-app`
160 | with an output of
161 | ```
162 | NAME READY STATUS RESTARTS AGE
163 | details-v1-6764bbc7f7-b9p9j 2/2 Running 0 5m
164 | productpage-v1-54b8b9f55-5k4pq 2/2 Running 0 5m
165 | ratings-v1-7bc85949-mxfp7 2/2 Running 0 5m
166 | reviews-v1-fdbf674bb-plpk9 2/2 Running 0 5m
167 | reviews-v2-5bdc5877d6-9xhvd 2/2 Running 0 5m
168 | reviews-v3-dd846cc78-9fgwj 2/2 Running 0 5m
169 | ```
170 | At this point we have all our services up and running, but Istio is not exposing them to the outside world as we have not defined a gateway and virtual service to route our traffic. To do that we can use a yaml file like the one below.
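A hedged sketch, based on the stock Bookinfo gateway example that ships with Istio 1.0 (only the `/productpage` route is shown here; the full sample also matches the login, logout and API paths):

```
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: istio-app
spec:
  selector:
    istio: ingressgateway       # bind to the default istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: istio-app
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080
EOF
```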
171 | ```
172 | cat < GET /details/0 HTTP/1.1
291 | > Host: details:9080
292 | > User-Agent: curl/7.47.0
293 | > Accept: */*
294 | >
295 | * Recv failure: Connection reset by peer
296 | * Closing connection 0
297 | curl: (56) Recv failure: Connection reset by peer
298 | ```
299 |
300 | Now let's try tcpdump to see what is happening. First we will need the IP address of `eth0`.
301 | You can easily get that from `ifconfig`; my output was
302 | ```
303 | eth0 Link encap:Ethernet HWaddr 16:a0:0d:08:ca:09
304 | inet addr:10.244.0.9 Bcast:0.0.0.0 Mask:255.255.255.0
305 | UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
306 | RX packets:7087 errors:0 dropped:0 overruns:0 frame:0
307 | TX packets:7426 errors:0 dropped:0 overruns:0 carrier:0
308 | collisions:0 txqueuelen:0
309 | RX bytes:1714389 (1.7 MB) TX bytes:67897634 (67.8 MB)
310 | ```
311 | So my ip address is `10.244.0.9`
312 | Now we know the port for the traffic we want to capture is `9080` from the curl above.
313 | So the tcpdump command will be
314 | ```
315 | IP=$(ip addr show eth0 | grep "inet\b" | awk '{print $2}' | cut -d/ -f1)
316 | sudo tcpdump -vvv -A -i eth0 "((dst port 9080) and (net $IP))"  # double quotes so $IP is expanded
317 | ```
318 |
319 | ### Minikube and Azure
320 |
321 | Then in another terminal let's hit our web front end, which will call our details service.
322 | `curl -o /dev/null -s -w "%{http_code}\n" http://$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/productpage`
323 |
324 | ### Play with k8s
325 |
326 | PLEASE NOTE !!! You will need to replace the below URL with the URL that your version of the application is exposed on.
327 |
328 | Then in another terminal let's hit our web front end, which will call our details service.
329 | `curl -o /dev/null -s -w "%{http_code}\n" http://ip172-18-0-4-bivq19oj98i0009oq2g0.direct.workshop.play-with-k8s.com:31380/productpage`
330 |
331 | The output should look something like
332 | ```
333 | ^[22:47:36.978639 IP (tos 0x0, ttl 64, id 19003, offset 0, flags [DF], proto TCP (6), length 60)
334 | 10.244.0.12.50662 > details-v1-6764bbc7f7-x7x99.9080: Flags [S], cksum 0x162b (incorrect -> 0xb758), seq 2995501799, win 29200, options [mss 1460,sackOK,TS val 1887650117 ecr 0,nop,wscale 7], length 0
335 | E.. details-v1-6764bbc7f7-x7x99.9080: Flags [.], cksum 0x1623 (incorrect -> 0x3e8d), seq 2995501800, ack 2809488904, win 229, options [nop,nop,TS val 1887650117 ecr 2183432464], length 0
341 | E..4J<@.@...
342 | ...
343 | .. ..#x.....uf......#.....
344 | p.AE.$..
345 | 22:47:36.978742 IP (tos 0x0, ttl 64, id 19005, offset 0, flags [DF], proto TCP (6), length 254)
346 | 10.244.0.12.50662 > details-v1-6764bbc7f7-x7x99.9080: Flags [P.], cksum 0x16ed (incorrect -> 0xe838), seq 0:202, ack 1, win 229, options [nop,nop,TS val 1887650117 ecr 2183432464], length 202
347 | E...J=@.@...
348 | ...
349 | .. ..#x.....uf............
350 | p.AE.$...............A......}.Q..d.....................+.../... ...../.,.0.
351 | .....5...|........7.5..2outbound|9080||details.istio-app.svc.cluster.local.....#.................................istio.......
352 | ........
353 | 22:47:36.979801 IP (tos 0x0, ttl 64, id 19006, offset 0, flags [DF], proto TCP (6), length 52)
354 | 10.244.0.12.50662 > details-v1-6764bbc7f7-x7x99.9080: Flags [.], cksum 0x1623 (incorrect -> 0x38ac), seq 202, ack 1280, win 251, options [nop,nop,TS val 1887650118 ecr 2183432465], length 0
355 | E..4J>@.@...
356 | ...
357 | .. ..#x.....uk......#.....
358 | p.AF.$..
359 | 22:47:36.981099 IP (tos 0x0, ttl 64, id 19007, offset 0, flags [DF], proto TCP (6), length 1245)
360 | 10.244.0.12.50662 > details-v1-6764bbc7f7-x7x99.9080: Flags [P.], cksum 0x1acc (incorrect -> 0xba00), seq 202:1395, ack 1280, win 251, options [nop,nop,TS val 1887650120 ecr 2183432465], length 1193
361 | E...J?@.@...
362 | ...
363 | .. ..#x.....uk............
364 | p.AH.$......:...6..3..00..,0.............->..._..).....0.. *.H........0.1.0...U.
365 | ..k8s.cluster.local0...190108215913Z..190408215913Z0.1 0...U.
366 | ..0.."0.. *.H.............0..
367 | .......K.....u.k...Fi..w..rr^.}...95.r.8..>...oF.q........G%....y...$?.....F$..c.E.N..#3(..9.`.(..a.4...o.t~.../9"D....~...`..CQ..f..qK.}4....hP.. ...c..cO.=E&...
368 | ...6.A|..C.,..Y..?.%.....Vr.e3...|.To.j`. .&....}.wVy}
369 | #..._............>;."8.. .......T.V#v..(.m.......z0x0...U...........0...U.%..0...+.........+.......0...U.......0.09..U...200..spiffe://cluster.local/ns/istio-app/sa/default0.. *.H..............d.0...B..I[r..3A[....P...,.BW...5$.4....SU.c.G..:..S....r....;.]u...2..[.K......eW+....\..{..zO...!{.C..B...s.....vv...!.....}....h.9a..k....`t2.......n...-yD..M...R...Z.U.p. i.P..V...J..W...AI|.:k.Y. s.1$...../..al5.at...s....'..pxw.........9=..HK......|....%...! ..|...../~L-.....T.>.."....d...4.............uB..g....A..Z..+<....J.....4>95...N.~]y1........6...-..;Kp.&....`1Lj....h...:...{g0.`..J......9..o..l^F(....Ns..;...!P.C?_P.0.#.o....@?..h'.`])1.b.g...!..ccf>......W~.6e@.5M7..G/.....!...|...........t1.c>.#..z/........l..:...b..q........~VO/.
370 | *.....+.).f............(..........7....<....;.....n.......
371 | }....
372 | 22:47:36.981570 IP (tos 0x0, ttl 64, id 19008, offset 0, flags [DF], proto TCP (6), length 985)
373 | 10.244.0.12.50662 > details-v1-6764bbc7f7-x7x99.9080: Flags [P.], cksum 0x19c8 (incorrect -> 0xf225), seq 1395:2328, ack 2370, win 271, options [nop,nop,TS val 1887650120 ecr 2183432467], length 933
374 | E...J@@.@...
375 | ...
376 | .. ..#x...[.uoI...........
377 | p.AH.$..................o.......g.g.....c.K....b......>.,Me.M e..F.vH.... .wO..{....Q{.|.>...f..s<./..k...E[.O..hb..p...2..('.....|....oq....o{....O...CQ2.,..
378 | .).~Y.!.\..".....P.f..O....\..b...!{W.....WY.w........0...g..z.'..XmF.OB.....T..t.
379 | ..G.O...s.....]......B......O"....e..*.8.w./.).a..Xv..r+.......D.y.' ^s..e....Q...J.H.......Wi...!...T{qty.b....^....x..../..xk.el...G.......N...E....>.4..g....{.(....m3It`..>1.b.h..N....&*B~{....F...q2R..c.kz=...>Y.I $...vm.08..L.dx=.....>.....y6.9~5R.....O..-.|...I....>....<71-.H.N...z..&L.S.A#.E0...\(..vO...........2...Da.?../.....Z..........V..&.e.2i... O....Qv.}..@...G.7.B....7.3sKt.........d.Zl..|.A_L..... .........Y..u.;...e>H...8.}..H..;...e..ede...y..53..e..1X.J.b.Ji...k....lD...>.@..k..-4.....C........aR...1[].mP......Y....V.JDn...h......A.....MP...*..zN..'@.N5.....k....W4..W.g.O...6.`.m}..V.'q....CK.3..K...4r.D..1...}..(..U|3=...........Cb....1.-.`.P0g.'..xk.J..D6C..l..`....k.F..
380 | 22:47:36.986198 IP (tos 0x0, ttl 64, id 19009, offset 0, flags [DF], proto TCP (6), length 52)
381 | 10.244.0.12.50662 > details-v1-6764bbc7f7-x7x99.9080: Flags [.], cksum 0x1623 (incorrect -> 0x2a49), seq 2328, ack 2764, win 311, options [nop,nop,TS val 1887650125 ecr 2183432471], length 0
382 | E..4JA@.@...
383 | ...
384 | .. ..#x.....up....7.#.....
385 | p.AM.$..
386 | ```
387 |
388 | As you can see, the traffic is encrypted. All the heavy lifting here is done by Envoy and the sidecar proxy, so you can add mTLS without changing your application code.
389 |
--------------------------------------------------------------------------------
/securing-application-communication-with-istio/scripts/istio.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | if [[ "$OSTYPE" == "linux-gnu" ]]; then
4 | OS="linux"
5 | ARCH="linux-amd64"
6 | elif [[ "$OSTYPE" == "darwin"* ]]; then
7 | OS="osx"
8 | ARCH="darwin-amd64"
9 | fi
10 |
11 | ISTIO_VERSION=1.0.4
12 | HELM_VERSION=2.11.0
13 |
14 | check_tiller () {
15 | POD=$(kubectl get pods --all-namespaces|grep tiller|awk '{print $2}'|head -n 1)
16 | kubectl get pods -n kube-system $POD -o jsonpath="Name: {.metadata.name} Status: {.status.phase}" 2> /dev/null | grep -q Running
17 | }
18 |
19 | pre_reqs () {
20 | curl -sL "https://github.com/istio/istio/releases/download/$ISTIO_VERSION/istio-$ISTIO_VERSION-$OS.tar.gz" | tar xz
21 | if [ ! -f /usr/local/bin/istioctl ]; then
22 | echo "Installing istioctl binary"
23 | chmod +x ./istio-$ISTIO_VERSION/bin/istioctl
24 | sudo mv ./istio-$ISTIO_VERSION/bin/istioctl /usr/local/bin/istioctl
25 | fi
26 |
27 | if [ ! -f /usr/local/bin/helm ]; then
28 | echo "Installing helm binary"
29 | curl -sL "https://storage.googleapis.com/kubernetes-helm/helm-v$HELM_VERSION-$ARCH.tar.gz" | tar xz
30 | chmod +x $ARCH/helm
31 | sudo mv $ARCH/helm /usr/local/bin/
32 | fi
33 | }
34 |
35 | install_tiller () {
36 | echo "Checking if tiller is running"
37 | check_tiller
38 | if [ $? -eq 0 ]; then
39 | echo "Tiller is installed and running"
40 | else
41 | echo "Deploying tiller to the cluster"
42 | cat <.yaml`
9 |
10 | ```
11 | cat <` we don't use any external load balancer so we won't have any IP there.
47 |
48 | Now copy the URL provided on the top of the page:
49 |
50 | 
51 |
52 | The service should then be exposed at the `URL:PORT` combination.
53 |
54 | e.g. `http://ip172-18-0-18-bg231u6fa44000dcjlng.direct.labs.play-with-k8s.com:32107`
55 |
56 | ### Minikube
57 | To find the node port on Minikube we will issue the command `minikube service list`
58 | ```
59 | |-------------|----------------------|-----------------------------|
60 | | NAMESPACE | NAME | URL |
61 | |-------------|----------------------|-----------------------------|
62 | | default | kubernetes | No node port |
63 | | default | webapp-deployment | http://172.16.146.152:30687 |
64 | | kube-system | kube-dns | No node port |
65 | | kube-system | kubernetes-dashboard | No node port |
66 | |-------------|----------------------|-----------------------------|
67 | ```
68 |
69 | In my case I can access the app at `http://172.16.146.152:30687`
70 |
71 | ### Azure
72 | On Azure you can find the endpoint for the webapp with the following
73 | `kubectl get services`
74 |
75 | ## The hack
76 | Now that we have our application running, let's look at a few things.
77 | Firstly we will get our pod name with `kubectl get pods`; mine is `webapp-deployment-865fb4d7c-8c5sv`
78 | We will then exec into the running container with `kubectl exec -it webapp-deployment-865fb4d7c-8c5sv sh`
79 | Then `cd static` and `vim index.html`
80 | and replace the gif link on line 16 with `https://media.giphy.com/media/DBfYJqH5AokgM/giphy.gif`
81 |
82 | Now check your browser !!!
83 |
84 | Lastly, run `whoami` in the container shell
85 |
86 | Now let's exit the pod shell and delete our deployment: `kubectl delete deployments.apps webapp-deployment`
87 |
88 | ## Let's protect our app
89 | Now we are going to look at setting a security context on our deployments, which will control what our pods are allowed to do.
90 | The pod security context is set in your deployment yaml under
91 | ```
92 | spec:
93 | containers:
94 | securityContext:
95 | ```
96 |
97 | The pod security context restricts which user the application runs as inside the pod, whether the pod has a read-only filesystem and, last
98 | but not least, whether privilege escalation is allowed. By default the pod security context is not set. If you want to get a lot of
99 | bang for your buck on the security front, this is the place to start.
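A hedged sketch of what that section can look like on a container (the image name is a placeholder; the values match the kind of hardening this module walks through):

```
spec:
  containers:
  - name: webapp
    image: webapp:latest                 # placeholder image
    securityContext:
      runAsUser: 1000                    # run as a non-root UID
      readOnlyRootFilesystem: true       # block writes to the container filesystem
      allowPrivilegeEscalation: false    # disallow gaining extra privileges
```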
100 |
101 | ### Change the user the application runs as
102 |
103 | We will change our deployment.yaml this time to set the user to an arbitrary non-root user, 1000. This will make sure that we don't have a matching
104 | UID with the underlying host.
105 |
106 | ```
107 | cat <