├── On-prem.md ├── Installation Options ├── In Class.md └── AWS_GCE_Azure.md ├── Labs ├── 1.05 - Deploy Public Image Repo Container.md ├── 1.06 - Scaling Kubernetes.md ├── 1.07 - Upgrading Cluster.md ├── 1.08 - (Optional) RBAC in Kubernetes.md ├── 1.10 - (OPTIONAL) Static multiple Kubernetes.md ├── 1.01.a - PART I Installing Kubernetes.md ├── 1.12 - ML-Kafka Streams.md ├── 1.01.b - PART II - Connecting to K8s.md ├── 1.09 - Using CNCF's Helm.md ├── 1.02 - Make Highly Available.md ├── 1.11 - External Ingress.md ├── 1.04 - Running Apps.md ├── 1.03 - Killing A K8s Node.md └── 1.13 - Fraud Detection.md └── README.md /On-prem.md: -------------------------------------------------------------------------------- 1 | You can find the Ansible installation [https://github.com/dcos-labs/ansible-dcos](https://github.com/dcos-labs/ansible-dcos) 2 | -------------------------------------------------------------------------------- /Installation Options/In Class.md: -------------------------------------------------------------------------------- 1 | Your instructor should have set up Labs and will assign IP addresses. 2 | 3 | S/he will give you the credentials to access the cluster. 4 | -------------------------------------------------------------------------------- /Installation Options/AWS_GCE_Azure.md: -------------------------------------------------------------------------------- 1 | You can install a Kubernetes on DC/OS cluster on AWS, GCE or Azure with few commands by following instructions here [https://github.com/mesosphere/dcos-kubernetes-quickstart](https://github.com/mesosphere/dcos-kubernetes-quickstart) 2 | -------------------------------------------------------------------------------- /Labs/1.05 - Deploy Public Image Repo Container.md: -------------------------------------------------------------------------------- 1 | ## Create an App from Image Repo 2 | 3 | It is possible to create an app from the Kubernetes Dashboard. 4 | 5 | ![](https://i.imgur.com/qQTjTlh.png) 6 | 7 | Can use the hello-server image in Google Repo 8 | 9 | ``` 10 | gcr.io/google-samples/hello-app:1.0 11 | ``` 12 | -------------------------------------------------------------------------------- /Labs/1.06 - Scaling Kubernetes.md: -------------------------------------------------------------------------------- 1 | ## Scaling to More Nodes 2 | 3 | Sometimes you need more Kubernetes infrastructure to run more applications. DC/OS can easily scale the cluster. 4 | 5 | There are several ways to scale Kubernetes in DC/OS. 6 | 7 | ## From the UI 8 | 9 | From the UI, go to Services > Kubernetes and choose "Edit" in top right. 
10 | 11 | Under "kubernetes" in left hand menu, change the number of "node count" to 2 and the "public node count" to 1 12 | 13 | 14 | ![](https://i.imgur.com/0YJxn1r.png) 15 | 16 | ## From the CLI 17 | 18 | Export the current package configuration into a JSON file called config.json 19 | 20 | ``` 21 | $ dcos kubernetes describe > config.json 22 | ``` 23 | 24 | Use an editor such as vi to change the config file and update "node_count" and "public node count" 25 | to the new number of nodes 26 | 27 | ``` 28 | "kubernetes": { 29 | "cloud_provider": "(none)", 30 | "high_availability": true, 31 | "network_provider": "dcos", 32 | "node_count": 2, 33 | "public_node_count": 1, 34 | "reserved_resources": { 35 | "kube_cpus": 2, 36 | "kube_disk": 10240, 37 | "kube_mem": 2048, 38 | "system_cpus": 1, 39 | "system_mem": 1024 40 | } 41 | ``` 42 | 43 | Scale the cluster 44 | 45 | ``` 46 | dcos kubernetes update --options=config.json 47 | ``` 48 | -------------------------------------------------------------------------------- /Labs/1.07 - Upgrading Cluster.md: -------------------------------------------------------------------------------- 1 | Below is information from documentation, which stronly suggest you backup before upgrading. You can find the full documentation on upgrading the cluster [here](https://docs.mesosphere.com/services/kubernetes/1.1.1-1.10.4/upgrade/) 2 | 3 | ### Updating the package version 4 | 5 | View/List available package versions: 6 | ``` 7 | $ dcos package describe kubernetes --package-versions 8 | [ 9 | "1.2.0-1.10.5", 10 | "1.1.1-1.10.4", 11 | "1.1.0-1.10.3", 12 | "1.0.3-1.9.7", 13 | "1.0.2-1.9.6", 14 | "1.0.1-1.9.4", 15 | "1.0.0-1.9.3" 16 | ] 17 | ``` 18 | 19 | ## Update DC/OS Kubernetes CLI 20 | Before starting the update process, it is mandatory to update the CLI to the new version, in our case 1.2.0-1.10.5: 21 | 22 | ``` 23 | $ dcos package install kubernetes --cli --package-version=1.2.0-1.10.5 24 | ``` 25 | 26 | Below is how the user starts the package version update: 27 | 28 | ``` 29 | $ dcos kubernetes update --package-version=1.2.0-1.10.5 30 | About to start an update from version to 31 | 32 | Updating these components means the Kubernetes cluster may experience some 33 | downtime or, in the worst-case scenario, cease to function properly. 34 | Before updating proceed cautiously and always backup your data. 35 | 36 | This operation is long-running and has to run to completion. 37 | Are you sure you want to continue? [yes/no] yes 38 | 39 | 2018/03/01 15:40:14 starting update process... 40 | 2018/03/01 15:40:15 waiting for update to finish... 41 | 2018/03/01 15:41:56 update complete! 42 | ``` 43 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Manage Kubernetes with DCOS Days 2 | 3 | Mesosphere DC/OS transforms any infrastructure into a Kubernetes cloud provider, automating Kubernetes everywhere: 4 | 5 | * Bare metal 6 | * Virtual machines 7 | * Multi-clouds 8 | 9 | # Background 10 | 11 | Your instructor has set up lab enviroments and will hand out IP addresses and credentials. 
After the course, you can install an open source DC/OS cluster 12 | * **Public Cloud Installation** - Kubernetes with DC/OS Quickstart (public cloud) for when go home [HERE](https://github.com/mesosphere/dcos-kubernetes-quickstart) 13 | * **On Premise Installation** - DC/OS with Ansible [HERE](https://github.com/dcos-labs/ansible-dcos) 14 | * **Slack** - Join DC/OS Slack [HERE](https://chat.dcos.io/) 15 | 16 | ![](https://i.imgur.com/rIJ1ZxF.png) 17 | 18 | # Course Material 19 | 20 | The instructor will give you access to the presentation. 21 | 22 | # Automate Kubernetes 23 | 24 | Create and manager your first Kubernetes cluster by following labs [HERE](https://github.com/chrisgaun/Manage-Kubernetes-with-DCOS-Days/tree/master/Labs) 25 | 26 | # Fill Out Survey 27 | 28 | We are always looking to add material to the course. 29 | 30 | Please fill out our survey found [HERE](https://goo.gl/forms/ougjUYkLablAdChQ2) 31 | 32 | # Reference 33 | 34 | * Building a Data Science Pipeline (TensorFlow with Kubeflow) [HERE](https://mesosphere.com/resources/building-data-science-platform/) 35 | * Kubernetes Cheatsheet [HERE](https://mesosphere.com/resources/kubernetes-cheatsheet/) 36 | * Other DC/OS demos [HERE](https://github.com/dcos/demos/) 37 | 38 | * Kubernetes Up & Running [HERE](https://mesosphere.com/resources/running-kubernetes-oreilly-ebook/) 39 | 40 | 41 | 42 | # Sponsors 43 | 44 | 45 | -------------------------------------------------------------------------------- /Labs/1.08 - (Optional) RBAC in Kubernetes.md: -------------------------------------------------------------------------------- 1 | ## Deploying RBAC in New Cluster 2 | 3 | As of version number 1.1.0-1.10.3, the Kubernetes DC/OS has allowed users to enable RBAC during provisioning. 4 | 5 | Since you must enable RBAC when you first provision the Kubernetes cluster, if you have an existing cluster, please delete it (we assume that the name of the cluster is kubernetes but if you used a different name you need to change the name in the flag “--app-id=NAME”) 6 | 7 | ``` 8 | dcos package uninstall --app-id=kubernetes kubernetes 9 | ``` 10 | 11 | Now install the cluster with RBAC enabled by going to the DC/OS catalog and choosing “RBAC” when you provision the cluster: 12 | 13 | ![](https://i.imgur.com/Wx4PyYG.png) 14 | 15 | ## How RBAC Works in Kubernetes 16 | 17 | In Kubernetes there are two types of roles. Kubernetes Roles are used to give users or a Services authorization to Kubernetes namespaces. Meanwhile ClusterRoles is used to give users, most likely adminstrators, global authorization over the entire cluster. 18 | 19 | First let's see the roles (can also be seen in Dashboard) 20 | 21 | ``` 22 | kubectl get clusterroles 23 | ``` 24 | 25 | For an example of how the ClusterRole works, create a role binding for a line of business user on the fraud detection team who does not need admin rights but does need to view the cluster. First, let's use Kubernetes soft-multitentancy to create a namespace: 26 | 27 | ``` 28 | kubectl create namespace fraudteam 29 | ``` 30 | 31 | Then let's bind a users role to clusterole and that namespace 32 | 33 | ``` 34 | kubectl create rolebinding viewbind --clusterrole view --user fraudtm --namespace fraudteam 35 | ``` 36 | 37 | Let's view the roles for the namespace 38 | 39 | ``` 40 | kubectl get rolebindings --namespace fraudteam 41 | ``` 42 | 43 | ## Authorization Rights 44 | 45 | We can now test to determine the access rights of this user on the fraud team within their namespace. 
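Before testing individual permissions, it can help to inspect what the binding created above actually grants (`viewbind` and `fraudteam` are the names used earlier in this lab):

```
# show which ClusterRole and subjects the binding references
kubectl describe rolebinding viewbind --namespace fraudteam
```

The output shows the referenced ClusterRole (`view`) and the subject (`fraudtm`), which is exactly what the checks below exercise.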
Let's see if they can get list of all the pods in the namespace: 46 | 47 | ``` 48 | kubectl auth can-i get pods --namespace fraudteam --as fraudtm 49 | yes 50 | ``` 51 | 52 | However, since this user is from a line of business and had no admin rights in the role binding, they cannot create deployments: 53 | 54 | ``` 55 | kubectl auth can-i create deployments --namespace fraudteam --as fraudtm 56 | no 57 | ``` 58 | 59 | -------------------------------------------------------------------------------- /Labs/1.10 - (OPTIONAL) Static multiple Kubernetes.md: -------------------------------------------------------------------------------- 1 | ## Why Multiple Kubernetes Clusters? 2 | 3 | Having multiple Kubernetes clusters is a service reliability best practice. 4 | 5 | Kubernetes has some multi-tentancy from namespaces, but there are a variety of reason why best practices require any strategy to be combined with multiple Kubernetes clusters: 6 | 7 | * Application owners can control upgrades, scaling and entire lifecyle 8 | * Require multi-node cluster with full API access - e.g. development 9 | * High availability in case of cluster outages 10 | * Developement pipeline (e.g. dev, test, staging, production) 11 | * Mutliple independent workloads 12 | * Different regions or availability zones 13 | * Security and compliance (worried about noisy or nosy neighbors) 14 | 15 | For operations in particular, althought there is no public guidance, individual lines of businesses wishing to have a Kubernetes cluster ofen require on order of 10 clusters to satisfy initial use cases. 16 | 17 | ### Multiple Kubernetes Cluters in DC/OS 18 | 19 | Before you start, make sure you have enough DC/OS nodes. If you want a 2 Kubernetes clusters with 7 nodes (including the Master and etcd nodes) then you will need 14 DC/OS nodes in current release. 20 | 21 | Currently, DC/OS supports multiple Kubernetes clusters with the limiation that there is one Kubernetes worker node per Mesos agent. In the early Fall of 2018, DC/OS will release a high density Kubernetes package that can include multiple kubelets per Mesos agent: 22 | 23 | ![](https://i.imgur.com/5xbyAQK.png) 24 | 25 | Follows is instructions on how to install multiple Kubernetes clusters using DC/OS Placement Constraints. 26 | 27 | ### Provision First Kubernetes Cluster 28 | 29 | If you do not already have a Kubernetes cluster on DC/OS, follow the instructions [here](https://github.com/chrisgaun/Manage-Kubernetes-with-DCOS-Days/blob/chrisgaun-patch-1/Labs/Lab%201%20-%20Installing%20Kubernetes.md) to get the first one up and running. 30 | 31 | ### Provision Multiple Kubernetes Clusters 32 | 33 | Multiple Kubernetes clusters is made possible via Placement Constraints. Before you start, navigate to the DC/OS Nodes tab in the GUI and record the IP addresses of 5 of the unused nodes. 34 | 35 | It is easiest to deploy multiple Kubernetes using the DC/OS GUI. Navigate to the DC/OS Catalog in the GUI (left menu) and search for "Kubernetes". Deploy K8s 1.10.4 using "Review & Run" 36 | 37 | ![](https://i.imgur.com/PphTYDg.png) 38 | 39 | Change the name of the Kubernetes service to something unique. 40 | 41 | ``` 42 | kubernetes-2 43 | ``` 44 | 45 | Under Kubernetes change the CIDR to not overlap. 46 | 47 | ``` 48 | service cidr: 10.200.0.0/16 49 | ``` 50 | 51 | Under Kubernetes -> "control plane placement" and "node placement" add the addresses of the occupied nodes. 
52 | 53 | ``` 54 | [["@hostname", "unlike", "||||"]] 55 | ``` 56 | -------------------------------------------------------------------------------- /Labs/1.01.a - PART I Installing Kubernetes.md: -------------------------------------------------------------------------------- 1 | ### Access the Cluster 2 | 3 | The instructor will give you access to IP address and credentials that you will need to SSH into. 4 | 5 | ### Set Up DC/OS Command Line 6 | 7 | Set up the DC/OS command line by clicking on the top left and choosing "install CLI" 8 | 9 | ![CLI](https://i.imgur.com/p4kqIj6.png) 10 | 11 | Click in the dialogue box too copy the command 12 | 13 | ![Copy Command](https://i.imgur.com/3rQ2Unj.png) 14 | 15 | Paste that command into your Terminal and press enter 16 | 17 | Confirm that dcos is installed and connected to your cluster by running following command 18 | 19 | ``` 20 | dcos node 21 | ``` 22 | 23 | The output should be a list of nodes in the cluster 24 | 25 | ``` 26 | 27 | HOSTNAME IP ID TYPE REGION ZONE 28 | 10.0.0.101 10.0.0.101 94141db5-28df-4194-a1f2-4378214838a7-S0 agent aws/us-west-2 aws/us-west-2a 29 | 10.0.2.100 10.0.2.100 94141db5-28df-4194-a1f2-4378214838a7-S4 agent aws/us-west-2 aws/us-west-2a 30 | ``` 31 | 32 | ### Tour DC/OS Catalog 33 | 34 | Your instructor will give you a tour of DC/OS UI and catalog. 35 | 36 | ### Install Kubernetes 37 | 38 | To install Kubernetes enter this command into your terminal 39 | 40 | ``` 41 | dcos package install kubernetes --package-version=1.1.0-1.10.3 42 | ``` 43 | You will have to agree to the Terms and Conditions and type yes to continue installing. 44 | 45 | ``` 46 | By Deploying, you agree to the Terms and Conditions https://mesosphere.com/catalog-terms-conditions/#certified-services 47 | Kubernetes on DC/OS. 48 | 49 | Documentation: https://docs.mesosphere.com/service-docs/kubernetes 50 | Issues: https://github.com/mesosphere/dcos-kubernetes-quickstart/issues 51 | Continue installing? 
[yes/no] yes 52 | 53 | Installing Marathon app for package [kubernetes] version [1.1.1-1.10.4] 54 | Installing CLI subcommand for package [kubernetes] version [1.1.1-1.10.4] 55 | New command available: dcos kubernetes 56 | ``` 57 | 58 | You can see the installation runbook automation and status of installation of each component with this command 59 | 60 | ``` 61 | dcos kubernetes plan status deploy 62 | ``` 63 | 64 | It should look like this when completed 65 | 66 | ``` 67 | $ dcos kubernetes plan status deploy 68 | deploy (serial strategy) (COMPLETE) 69 | ├─ etcd (serial strategy) (COMPLETE) 70 | │ └─ etcd-0:[peer] (COMPLETE) 71 | ├─ apiserver (parallel strategy) (COMPLETE) 72 | │ └─ kube-apiserver-0:[instance] (COMPLETE) 73 | ├─ mandatory-addons (serial strategy) (COMPLETE) 74 | │ ├─ mandatory-addons-0:[additional-cluster-role-bindings] (COMPLETE) 75 | │ ├─ mandatory-addons-0:[kube-dns] (COMPLETE) 76 | │ ├─ mandatory-addons-0:[metrics-server] (COMPLETE) 77 | │ ├─ mandatory-addons-0:[dashboard] (COMPLETE) 78 | │ └─ mandatory-addons-0:[ark] (COMPLETE) 79 | ├─ kubernetes-api-proxy (parallel strategy) (COMPLETE) 80 | │ └─ kubernetes-api-proxy-0:[install] (COMPLETE) 81 | ├─ controller-manager (parallel strategy) (COMPLETE) 82 | │ └─ kube-controller-manager-0:[instance] (COMPLETE) 83 | ├─ scheduler (parallel strategy) (COMPLETE) 84 | │ └─ kube-scheduler-0:[instance] (COMPLETE) 85 | ├─ node (parallel strategy) (COMPLETE) 86 | │ └─ kube-node-0:[kube-proxy, coredns, kubelet] (COMPLETE) 87 | └─ public-node (parallel strategy) (COMPLETE) 88 | ``` 89 | 90 | When all steps are "COMPLETE", confirm that the "dcos kubernetes" CLI was installed. 91 | 92 | ``` 93 | dcos kubernetes 94 | ``` 95 | 96 | ### Install Kubernetes kubectl Command Line 97 | 98 | Install the Kubernetes command line by following instructions [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/) 99 | 100 | **For Macs** with brew installed the command is 101 | 102 | ``` 103 | brew install kubectl 104 | ``` 105 | 106 | **For Ubuntu** the command is 107 | 108 | ``` 109 | sudo apt-get update && sudo apt-get install -y apt-transport-https 110 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - 111 | sudo touch /etc/apt/sources.list.d/kubernetes.list 112 | echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list 113 | sudo apt-get update 114 | sudo apt-get install -y kubectl 115 | ``` 116 | 117 | Confirm that kubectl is installed and in path /usr/local/bin (it will say it is not connected to dcos cluster yet which is ok) 118 | 119 | ``` 120 | kubectl version 121 | ``` 122 | -------------------------------------------------------------------------------- /Labs/1.12 - ML-Kafka Streams.md: -------------------------------------------------------------------------------- 1 | Credit to Andrew Grzeskowiak for creating this demo for DC/OS 2 | 3 | ## Kafka Streams Demo 4 | 5 | For this demo we will show you how easy it is to run your microservices and dataservices in one single DC/OS cluster using the same resources. 
6 | 7 | ### Use Case 8 | Today we will be simulating a predictive streaming application, more specifically we will be analyzing incoming real-time Airline flight data to predict whether flights are on-time or delayed 9 | 10 | ### Data Set 11 | The raw dataset we will be using in our load generator is from (here)https://github.com/h2oai/h2o-2/wiki/Hacking-Airline-DataSet-with-H2O] 12 | 13 | ### Architecture 14 | 15 | We will simulate an incoming real-time data stream by using the `load-generator.yaml` below that is a docker container built to send data to Confluent Kafka running on DC/OS: 16 | ``` 17 | apiVersion: apps/v1beta1 18 | kind: Deployment 19 | metadata: 20 | name: kafka-streams-workload-generator 21 | spec: 22 | selector: 23 | matchLabels: 24 | app: kafka-streams-workload-generator 25 | replicas: 2 # tells deployment to run 2 pods matching the template 26 | template: # create pods using pod definition in this template 27 | metadata: 28 | labels: 29 | app: kafka-streams-workload-generator 30 | spec: 31 | containers: 32 | - name: kafka-streams-workload-generator 33 | image: greshwalk/kafka-streams-workload-generator:latest 34 | ``` 35 | 36 | The airline prediction microservice `airline-prediction.yaml` below is a docker container built to read directly off of the DC/OS Kafka broker endpoints to make analytic predictions on flight status: 37 | ``` 38 | apiVersion: apps/v1beta1 39 | kind: Deployment 40 | metadata: 41 | name: kafka-streams 42 | spec: 43 | selector: 44 | matchLabels: 45 | app: kafka-streams 46 | replicas: 2 # tells deployment to run 2 pods matching the template 47 | template: # create pods using pod definition in this template 48 | metadata: 49 | labels: 50 | app: kafka-streams 51 | spec: 52 | containers: 53 | - name: kafka-streams 54 | image: greshwalk/kafka-streams:latest 55 | ``` 56 | 57 | ## Getting Started 58 | 59 | First, we need to deploy Confluent Kafka 60 | ``` 61 | dcos package install confluent-kafka 62 | ``` 63 | 64 | Once Confluent Kafka is installed, create two topics called `AirlineOutputTopic` and `AirlineInputTopic` 65 | ``` 66 | dcos confluent-kafka --name confluent-kafka topic create AirlineOutputTopic --partitions 10 --replication 3 67 | dcos confluent-kafka --name confluent-kafka topic create AirlineInputTopic --partitions 10 --replication 3 68 | ``` 69 | 70 | Deploy the load generator service defined in the Architecture section above: 71 | ``` 72 | kubectl create -f load-generator.yaml 73 | ``` 74 | 75 | Deploy the predictive streaming application defined in the Architecture section above: 76 | ``` 77 | kubectl create -f airline-prediction.yaml 78 | ``` 79 | 80 | Navigate to the K8s UI: 81 | 82 | If you haven't already, run: 83 | ``` 84 | kubectl proxy 85 | ``` 86 | 87 | Point your browser to: 88 | ``` 89 | http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ 90 | ``` 91 | 92 | From the Dashboard navigate to pods --> kafka-streams-XXXXX pod --> logs. 
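If you prefer the command line, the same logs can be followed with kubectl (the pod name suffix will differ in your cluster, so look it up first):

```
# find the predictor pods by the label set in airline-prediction.yaml
kubectl get pods -l app=kafka-streams
# then stream the logs of one of them
kubectl logs -f <kafka-streams-pod-name>
```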
Output should resemble below: 93 | ``` 94 | ##################### 95 | Flight Input:1999,10,14,3,741,730,912,849,PS,1451,NA,91,79,NA,23,11,SAN,SFO,447,NA,NA,0,NA,0,NA,NA,NA,NA,NA,YES,YES 96 | Label (aka prediction) is flight departure delayed: NO 97 | Class probabilities: 0.5955978728809052,0.40440212711909485 98 | ##################### 99 | ##################### 100 | Flight Input:1987,10,14,3,741,730,912,849,PS,1451,NA,91,79,NA,23,11,SAN,SFO,447,NA,NA,0,NA,0,NA,NA,NA,NA,NA,YES,YES 101 | Label (aka prediction) is flight departure delayed: YES 102 | Class probabilities: 0.4319916897116479,0.5680083102883521 103 | ##################### 104 | ##################### 105 | Flight Input:1999,10,14,3,741,730,912,849,PS,1451,NA,91,79,NA,23,11,SAN,SFO,447,NA,NA,0,NA,0,NA,NA,NA,NA,NA,YES,YES 106 | Label (aka prediction) is flight departure delayed: NO 107 | Class probabilities: 0.5955978728809052,0.40440212711909485 108 | ##################### 109 | ``` 110 | 111 | ### Congratulations! You just deployed a microservice application on DC/OS that easily connects to a Confluent Kafka dataservice running on the same cluster! 112 | 113 | ## Uninstall ML-Kafka Stream Demo 114 | 115 | To remove the load generator service: 116 | ``` 117 | kubectl delete -f load-generator.yaml 118 | ``` 119 | 120 | To remove the predictive streaming applicaiton: 121 | ``` 122 | kubectl delete -f airline-prediction.yaml 123 | ``` 124 | 125 | To remove Confluent-Kafka from DC/OS 126 | ``` 127 | dcos package uninstall confluent-kafka 128 | ``` 129 | 130 | -------------------------------------------------------------------------------- /Labs/1.01.b - PART II - Connecting to K8s.md: -------------------------------------------------------------------------------- 1 | 2 | ## Connecting kubectl to DC/OS 3 | Deploy Marathon-LB: 4 | ``` 5 | dcos package install marathon-lb 6 | ``` 7 | 8 | Save kubectl-proxy service: 9 | ``` 10 | $ cat < kubectl-proxy.json 11 | { 12 | "id": "/kubectl-proxy", 13 | "instances": 1, 14 | "cpus": 0.001, 15 | "mem": 16, 16 | "cmd": "tail -F /dev/null", 17 | "container": { 18 | "type": "MESOS" 19 | }, 20 | "portDefinitions": [ 21 | { 22 | "protocol": "tcp", 23 | "port": 0 24 | } 25 | ], 26 | "labels": { 27 | "HAPROXY_0_MODE": "http", 28 | "HAPROXY_GROUP": "external", 29 | "HAPROXY_0_SSL_CERT": "/etc/ssl/cert.pem", 30 | "HAPROXY_0_PORT": "6443", 31 | "HAPROXY_0_BACKEND_SERVER_OPTIONS": " server kube-apiserver apiserver.kubernetes.l4lb.thisdcos.directory:6443 ssl verify none\n" 32 | } 33 | } 34 | EOF 35 | ``` 36 | 37 | Deploy kubectl-proxy service: 38 | ``` 39 | dcos marathon app add kubectl-proxy.json 40 | ``` 41 | 42 | Here is how this works: 43 | * Marathon-LB identifies that the application kubeapi-proxy has the HAPROXY_GROUP label set to external (change this if you're using a different HAPROXY_GROUP for your Marathon-LB configuration). 44 | * The instances, cpus, mem, cmd, and container fields basically create a dummy container that takes up minimal space and performs no operation. 45 | * The single port indicates that this application has one "port" (this information is used by Marathon-LB) 46 | * "HAPROXY_0_MODE": "http" indicates to Marathon-LB that the frontend and backend configuration for this particular service should be configured with http. 
47 | * "HAPROXY_0_PORT": "6443" tells Marathon-LB to expose the service on port 6443 (rather than the randomly-generated service port, which is ignored) 48 | * "HAPROXY_0_SSL_CERT": "/etc/ssl/cert.pem" tells Marathon-LB to expose the service with the self-signed Marathon-LB certificate (which has no CN) 49 | * The last label HAPROXY_0_BACKEND_SERVER_OPTIONS indicates that Marathon-LB should forward traffic to the endpoint apiserver.kubernetes.l4lb.thisdcos.directory:6443 rather than to the dummy application, and that the connection should be made using TLS without verification. 50 | 51 | 52 | ### Find Public Node (NOTE: If you are in a Mesosphere led class, the public IP of the node is found on the Google Sheet and you can skip the public node step.) 53 | 54 | Find your public DC/OS agent IP (make sure to have jq installed - on Mac "brew install jq") 55 | 56 | ``` 57 | for id in $(dcos node --json | jq --raw-output '.[] | select(.attributes.public_ip == "true") | .id'); do dcos node ssh --option StrictHostKeyChecking=no --option LogLevel=quiet --master-proxy --mesos-id=$id "curl -s ifconfig.co" ; done 2>/dev/null 58 | 59 | ``` 60 | When multiple public DC/OS are deployed another option to find the IPs would be to use a script like the following: 61 | 62 | ``` 63 | $ cat >> get-dcos-public-agent-ip.sh < 72 | # 73 | 74 | echo 75 | if [ "$1" == "" ] 76 | then 77 | num_pub_agents=2 78 | echo " Using the default number of public agent nodes (2)" 79 | else 80 | num_pub_agents=$1 81 | echo " Using $num_pub_agents public agent node(s)" 82 | fi 83 | 84 | 85 | # get the public IP of the public node if unset 86 | cat < /tmp/get-public-agent-ip.json 87 | { 88 | "id": "/get-public-agent-ip", 89 | "cmd": "curl http://169.254.169.254/latest/meta-data/public-ipv4 && sleep 3600", 90 | "cpus": 0.25, 91 | "mem": 32, 92 | "instances": $num_pub_agents, 93 | "acceptedResourceRoles": [ 94 | "slave_public" 95 | ], 96 | "constraints": [ 97 | [ 98 | "hostname", 99 | "UNIQUE" 100 | ] 101 | ] 102 | } 103 | EOF 104 | 105 | echo 106 | echo ' Starting public-ip.json marathon app' 107 | echo 108 | dcos marathon app add /tmp/get-public-agent-ip.json 109 | 110 | sleep 10 111 | 112 | task_list=`dcos task get-public-agent-ip | grep get-public-agent-ip | awk '{print $5}'` 113 | 114 | for task_id in $task_list; 115 | do 116 | public_ip=`dcos task log $task_id stdout | tail -1` 117 | 118 | echo 119 | echo " Public agent node found: public IP is: $public_ip | http://$public_ip:9090/haproxy?stats " 120 | 121 | done 122 | 123 | sleep 2 124 | 125 | dcos marathon app remove get-public-agent-ip 126 | 127 | rm /tmp/get-public-agent-ip.json 128 | echo 129 | 130 | # end of script 131 | /EOF 132 | ``` 133 | 134 | ### Connecting Using Kubeconfig 135 | 136 | Connect kubectl to DC/OS: 137 | ``` 138 | dcos kubernetes kubeconfig \ 139 | --apiserver-url https://:6443 \ 140 | --insecure-skip-tls-verify 141 | ``` 142 | 143 | Confirm connection: 144 | 145 | ``` 146 | kubectl get nodes 147 | ``` 148 | 149 | ### Kubernetes Dashboard (Official UI of Kubernetes) 150 | 151 | (NOTE: if you are using a bootstrap server to access your cluster then the local proxy will not give you access to the Dashboard.) 
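Before starting the proxy, you can double-check that kubectl is talking to the cluster you just configured:

```
# show the active kubeconfig context and the API server it points at
kubectl config current-context
kubectl cluster-info
```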
152 | 153 | To access the dashboard run: 154 | 155 | ``` 156 | kubectl proxy 157 | ``` 158 | 159 | Point your browser to: 160 | 161 | ``` 162 | http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ 163 | ``` 164 | 165 | -------------------------------------------------------------------------------- /Labs/1.09 - Using CNCF's Helm.md: -------------------------------------------------------------------------------- 1 | # The DC/OS + Kubernetes + Helm - cool beer demo 2 | 3 | ![Model](https://github.com/dcos/demos/raw/master/dcos-k8s-beer-demo/1.10/images/components.png) 4 | 5 | Are you wondering how [Java](http://www.oracle.com/technetwork/java/index.html), [Spring Boot](https://projects.spring.io/spring-boot/), [MySQL](https://www.mysql.com), [Neo4j](https://neo4j.com), [Apache Zeppelin](https://zeppelin.apache.org/), [Apache Spark](https://spark.apache.org/), [Elasticsearch](https://www.elastic.co), [Apache Mesos](https://mesos.apache.org/), [DC/OS](https://dcos.io), [Kubernetes](https://kubernetes.io/) and [Helm](https://helm.sh) can all fit in one demo? Well, we'll show you! This is a cool demo, so grab your favourite beer and enjoy. 🍺 6 | 7 | 8 | ### Prerequisite 9 | 10 | This lab assumes you have a Kubernetes cluster and kubectl with API Server access from the previous labs. 11 | 12 | ### Helm setup 13 | 14 | To deploy `beer-service` we will need [Helm](https://helm.sh). 15 | 16 | To download and install Helm cli run: 17 | ```bash 18 | curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash 19 | ``` 20 | 21 | Then we need to install Helm's server side `Tiller`: 22 | ```bash 23 | helm init 24 | ``` 25 | 26 | Once Tiller is installed, running `helm version` should show you both the client and server version: 27 | ```bash 28 | helm version 29 | Client: &version.Version{SemVer:"v2.8.0", GitCommit:"14af25f1de6832228539259b821949d20069a222", GitTreeState:"clean"} 30 | Server: &version.Version{SemVer:"v2.8.0", GitCommit:"14af25f1de6832228539259b821949d20069a222", GitTreeState:"clean"} 31 | ``` 32 | 33 | Now we need to add DC/OS Labs Helm Charts repo (where this demo Helm charts are published) to Helm: 34 | ```bash 35 | helm repo add dlc https://dcos-labs.github.io/charts/ 36 | helm repo update 37 | ``` 38 | 39 | ## Deploy Backend on DC/OS 40 | 41 | Let's deploy Backend using the `dcos` cli: 42 | ```bash 43 | dcos marathon group add https://raw.githubusercontent.com/dcos/demos/master/dcos-k8s-beer-demo/1.10/marathon-apps/marathon-configuration.json 44 | ``` 45 | 46 | Wait till it gets installed, you can check it's progress in DC/OS Dashboard/Services/beer. 
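You can also follow the backend coming up from the CLI; the Marathon configuration above creates the apps under the `beer` group:

```bash
# list Marathon apps and watch the beer services reach their target instance counts
dcos marathon app list
```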
47 | 48 | ## Deploy Frontend App on Kubernetes and expose it locally 49 | 50 | ### Frontend App 51 | 52 | To deploy Frontend App [chart](https://github.com/dcos-labs/charts/tree/master/stable/beer-service-web) run: 53 | ```bash 54 | helm install --name beer --namespace beer dlc/beer-service-web 55 | ``` 56 | 57 | Check that pods are running: 58 | ```bash 59 | kubectl -n beer get pods 60 | NAME READY STATUS RESTARTS AGE 61 | beer-beer-service-web-1676235277-s9ptv 1/1 Running 0 2m 62 | beer-beer-service-web-1676235277-tskrp 1/1 Running 0 2m 63 | ``` 64 | 65 | ### Accessing Frontend App locally 66 | 67 | To access app locally run: (replace beer-beer-service-web-1676235277-tskrp with the pod name from the previous command) 68 | ```bash 69 | kubectl port-forward -n beer beer-beer-service-web-1676235277-tskrp 8080 70 | ``` 71 | 72 | Now you should be able to check beer at `http://127.0.0.1:8080` 73 | 74 | ## Deploy Frontend App and expose it via Cloudflare Warp on Kubernetes 75 | 76 | ### Frontend App 77 | 78 | To deploy Frontend App [chart](https://github.com/dcos-labs/charts/tree/master/stable/beer-service-web) (do not forget to replace there with your `domain_name`) run: 79 | ```bash 80 | helm upgrade --install beer --namespace beer dlc/beer-service-web \ 81 | --set ingress.enabled="true",ingress.host="beer.mydomain.com" 82 | ``` 83 | 84 | Check that pods are running: 85 | ```bash 86 | kubectl -n beer get pods 87 | NAME READY STATUS RESTARTS AGE 88 | beer-beer-service-web-854fb8dc65-76bk4 2/2 Running 0 2m 89 | beer-beer-service-web-854fb8dc65-d8tm9 2/2 Running 0 2m 90 | ``` 91 | 92 | ### Cloudflare Warp 93 | 94 | The Cloudflare Warp Ingress Controller makes connections between a Kubernetes service and the Cloudflare edge, exposing an application in your cluster to the internet at a hostname of your choice. A quick description of the details can be found at https://warp.cloudflare.com/quickstart/. 95 | 96 | **Note:** Before installing Cloudflare Warp you need to obtain Cloudflare credentials for your domain zone. 97 | The credentials are obtained by logging in to https://www.cloudflare.com/a/warp, selecting the zone where you will be publishing your services, and saving the file locally to `dcos-k8s-beer-demo` folder. 98 | 99 | To deploy Cloudflare Warp Ingress Controller [chart](https://github.com/dcos-labs/charts/tree/master/stable/cloudflare-warp-ingress) run: 100 | ```bash 101 | helm install --name beer-ingress --namespace beer dlc/cloudflare-warp-ingress --set cert=$(cat cloudflare-warp.pem | base64) 102 | ``` 103 | 104 | Check that pods are running: 105 | ```bash 106 | kubectl -n beer get pods 107 | NAME READY STATUS RESTARTS AGE 108 | beer-beer-service-web-57f9bc955c-c9v4q 2/2 Running 0 3m 109 | beer-beer-service-web-57f9bc955c-k4mps 2/2 Running 0 3m 110 | beer-ingress-cloudflare-warp-ingress-775b5965fd-qd4rk 1/1 Running 0 16s 111 | ``` 112 | 113 | ### Testing external access 114 | 115 | Now you should be able to check beer at https://beer.mydomain.com/ 116 | And if you noticed Cloudflare Warp creates `https` connection by default :-) 117 | 118 | ## Conclusion 119 | 120 | Cheers 🍺 121 | -------------------------------------------------------------------------------- /Labs/1.02 - Make Highly Available.md: -------------------------------------------------------------------------------- 1 | ### Making Kubernetes Highly Available through the GUI 2 | 3 | You can choose to make Kubernetes on DC/OS high availability (HA) at the time of deployment. 
However, you can also make non-HA clusters into HA by editing and saving the configuration. 4 | 5 | In the DC/OS UI under Services, click on your Kubernetes cluster (Not kube-proxy). 6 | 7 | In the top right, click Edit. 8 | 9 | ![](https://i.imgur.com/2dYmVLp.png) 10 | 11 | Under Kubernetes in left hand menu, choose the high availability box 12 | 13 | ![](https://i.imgur.com/PkGHHlJ.png) 14 | 15 | Save configuration and watch new components come online with the following command or in the GUI. Output should look like below: 16 | 17 | ``` 18 | $ dcos kubernetes plan status deploy 19 | deploy (serial strategy) (COMPLETE) 20 | ├─ etcd (serial strategy) (COMPLETE) 21 | │ ├─ etcd-0:[peer] (COMPLETE) 22 | │ ├─ etcd-1:[peer] (COMPLETE) 23 | │ └─ etcd-2:[peer] (COMPLETE) 24 | ├─ apiserver (serial strategy) (COMPLETE) 25 | │ ├─ kube-apiserver-0:[instance] (COMPLETE) 26 | │ ├─ kube-apiserver-1:[instance] (COMPLETE) 27 | │ └─ kube-apiserver-2:[instance] (COMPLETE) 28 | ├─ kubernetes-api-proxy (serial strategy) (COMPLETE) 29 | │ └─ kubernetes-api-proxy-0:[install] (COMPLETE) 30 | ├─ controller-manager (serial strategy) (COMPLETE) 31 | │ ├─ kube-controller-manager-0:[instance] (COMPLETE) 32 | │ ├─ kube-controller-manager-1:[instance] (COMPLETE) 33 | │ └─ kube-controller-manager-2:[instance] (COMPLETE) 34 | ├─ scheduler (serial strategy) (COMPLETE) 35 | │ ├─ kube-scheduler-0:[instance] (COMPLETE) 36 | │ ├─ kube-scheduler-1:[instance] (COMPLETE) 37 | │ └─ kube-scheduler-2:[instance] (COMPLETE) 38 | ├─ node (serial strategy) (COMPLETE) 39 | │ ├─ kube-node-0:[kube-proxy] (COMPLETE) 40 | │ ├─ kube-node-0:[coredns] (COMPLETE) 41 | │ └─ kube-node-0:[kubelet] (COMPLETE) 42 | ├─ public-node (serial strategy) (COMPLETE) 43 | └─ mandatory-addons (serial strategy) (COMPLETE) 44 | ├─ mandatory-addons-0:[kube-dns] (COMPLETE) 45 | ├─ mandatory-addons-0:[metrics-server] (COMPLETE) 46 | ├─ mandatory-addons-0:[dashboard] (COMPLETE) 47 | └─ mandatory-addons-0:[ark] (COMPLETE) 48 | ``` 49 | 50 | ### Making Kubernetes Highly Available through the CLI 51 | 52 | Grab the Kubernetes options.json file: 53 | 54 | ``` 55 | dcos kubernetes describe > options.json 56 | ``` 57 | 58 | Edit the options.json file to set "high_availability": true, 59 | 60 | ``` 61 | $ cat options.json 62 | { 63 | "apiserver": { 64 | "cpus": 0.5, 65 | "mem": 1024 66 | }, 67 | "controller_manager": { 68 | "cpus": 0.5, 69 | "mem": 512 70 | }, 71 | "etcd": { 72 | "cpus": 0.5, 73 | "data_disk": 3072, 74 | "disk_type": "ROOT", 75 | "mem": 1024, 76 | "wal_disk": 512 77 | }, 78 | "kube_proxy": { 79 | "cpus": 0.1, 80 | "mem": 512 81 | }, 82 | "kubernetes": { 83 | "cloud_provider": "(none)", 84 | "high_availability": true, 85 | "network_provider": "dcos", 86 | "node_count": 1, 87 | "public_node_count": 0, 88 | "reserved_resources": { 89 | "kube_cpus": 2, 90 | "kube_disk": 10240, 91 | "kube_mem": 2048, 92 | "system_cpus": 1, 93 | "system_mem": 1024 94 | }, 95 | "service_cidr": "10.100.0.0/16" 96 | }, 97 | "scheduler": { 98 | "cpus": 0.5, 99 | "mem": 512 100 | }, 101 | "service": { 102 | "log_level": "INFO", 103 | "name": "kubernetes", 104 | "service_account": "", 105 | "service_account_secret": "", 106 | "sleep": 1000 107 | } 108 | } 109 | ``` 110 | 111 | Update the Kubernetes Framework: 112 | 113 | ``` 114 | dcos kubernetes update --options=options.json 115 | ``` 116 | 117 | Output should resemble below: 118 | 119 | ``` 120 | The components of the cluster will be updated according to the changes in the 121 | options file [options.json]. 
122 | 123 | Updating these components means the Kubernetes cluster may experience some 124 | downtime or, in the worst-case scenario, cease to function properly. 125 | Before updating proceed cautiously and always backup your data. 126 | 127 | This operation is long-running and has to run to completion. 128 | Are you sure you want to continue? [yes/no] yes 129 | 130 | 2018/07/09 10:09:29 starting update process... 131 | 2018/07/09 10:09:29 waiting for update to finish... 132 | 2018/07/09 10:12:50 update complete! 133 | ``` 134 | 135 | Validate update against the plan: 136 | 137 | ``` 138 | $ dcos kubernetes plan status deploy 139 | deploy (serial strategy) (COMPLETE) 140 | ├─ etcd (serial strategy) (COMPLETE) 141 | │ ├─ etcd-0:[peer] (COMPLETE) 142 | │ ├─ etcd-1:[peer] (COMPLETE) 143 | │ └─ etcd-2:[peer] (COMPLETE) 144 | ├─ apiserver (serial strategy) (COMPLETE) 145 | │ ├─ kube-apiserver-0:[instance] (COMPLETE) 146 | │ ├─ kube-apiserver-1:[instance] (COMPLETE) 147 | │ └─ kube-apiserver-2:[instance] (COMPLETE) 148 | ├─ kubernetes-api-proxy (serial strategy) (COMPLETE) 149 | │ └─ kubernetes-api-proxy-0:[install] (COMPLETE) 150 | ├─ controller-manager (serial strategy) (COMPLETE) 151 | │ ├─ kube-controller-manager-0:[instance] (COMPLETE) 152 | │ ├─ kube-controller-manager-1:[instance] (COMPLETE) 153 | │ └─ kube-controller-manager-2:[instance] (COMPLETE) 154 | ├─ scheduler (serial strategy) (COMPLETE) 155 | │ ├─ kube-scheduler-0:[instance] (COMPLETE) 156 | │ ├─ kube-scheduler-1:[instance] (COMPLETE) 157 | │ └─ kube-scheduler-2:[instance] (COMPLETE) 158 | ├─ node (serial strategy) (COMPLETE) 159 | │ ├─ kube-node-0:[kube-proxy] (COMPLETE) 160 | │ ├─ kube-node-0:[coredns] (COMPLETE) 161 | │ └─ kube-node-0:[kubelet] (COMPLETE) 162 | ├─ public-node (serial strategy) (COMPLETE) 163 | └─ mandatory-addons (serial strategy) (COMPLETE) 164 | ├─ mandatory-addons-0:[kube-dns] (COMPLETE) 165 | ├─ mandatory-addons-0:[metrics-server] (COMPLETE) 166 | ├─ mandatory-addons-0:[dashboard] (COMPLETE) 167 | └─ mandatory-addons-0:[ark] (COMPLETE) 168 | ``` 169 | 170 | -------------------------------------------------------------------------------- /Labs/1.11 - External Ingress.md: -------------------------------------------------------------------------------- 1 | [Kubernetes Service Documentation - External Ingress](https://docs.mesosphere.com/services/kubernetes/1.2.0-1.10.5/ingress/) 2 | 3 | ## Adding External Ingress 4 | 5 | If you havent already done so, scale your Kubernetes `"public_node_count": 1,`. Go back to [Scaling Kubernetes lab](https://github.com/chrisgaun/Manage-Kubernetes-with-DCOS-Days/blob/master/Labs/1.06%20-%20Scaling%20Kubernetes.md) for detailed instructions on how to scale/update the Kubernetes Framework 6 | 7 | If you want to expose HTTP/S (L7) apps to the outside world - at least outside the DC/OS cluster - you should create a Kubernetes Ingress resource. The package does not install such controller by default, so we give you the freedom to choose what Ingress controller your organization wants to use. 
8 | Options: 9 | - Traefik 10 | - NGINX 11 | - HAProxy 12 | - Envoy 13 | - Istio 14 | 15 | ### For this Lab we will install Traefik as our Kubernetes Ingress Controller to show a Hello World example 16 | 17 | Save the below Traefik Ingress Controller as `traefik.yaml` 18 | ``` 19 | --- 20 | kind: ClusterRole 21 | apiVersion: rbac.authorization.k8s.io/v1beta1 22 | metadata: 23 | name: traefik-ingress-controller 24 | rules: 25 | - apiGroups: 26 | - "" 27 | resources: 28 | - services 29 | - endpoints 30 | - secrets 31 | verbs: 32 | - get 33 | - list 34 | - watch 35 | - apiGroups: 36 | - extensions 37 | resources: 38 | - ingresses 39 | verbs: 40 | - get 41 | - list 42 | - watch 43 | --- 44 | kind: ClusterRoleBinding 45 | apiVersion: rbac.authorization.k8s.io/v1beta1 46 | metadata: 47 | name: traefik-ingress-controller 48 | roleRef: 49 | apiGroup: rbac.authorization.k8s.io 50 | kind: ClusterRole 51 | name: traefik-ingress-controller 52 | subjects: 53 | - kind: ServiceAccount 54 | name: traefik-ingress-controller 55 | namespace: kube-system 56 | --- 57 | apiVersion: v1 58 | kind: ServiceAccount 59 | metadata: 60 | name: traefik-ingress-controller 61 | namespace: kube-system 62 | --- 63 | kind: Deployment 64 | apiVersion: extensions/v1beta1 65 | metadata: 66 | name: traefik-ingress-controller 67 | namespace: kube-system 68 | labels: 69 | k8s-app: traefik-ingress-lb 70 | spec: 71 | replicas: 1 72 | selector: 73 | matchLabels: 74 | k8s-app: traefik-ingress-lb 75 | template: 76 | metadata: 77 | labels: 78 | k8s-app: traefik-ingress-lb 79 | name: traefik-ingress-lb 80 | spec: 81 | serviceAccountName: traefik-ingress-controller 82 | terminationGracePeriodSeconds: 60 83 | containers: 84 | - image: traefik 85 | name: traefik-ingress-lb 86 | args: 87 | - --web 88 | - --kubernetes 89 | # NOTE: What follows are necessary additions to 90 | # https://docs.traefik.io/user-guide/kubernetes 91 | # Please check below for a detailed explanation 92 | ports: 93 | - containerPort: 80 94 | hostPort: 80 95 | name: http 96 | protocol: TCP 97 | affinity: 98 | podAntiAffinity: 99 | requiredDuringSchedulingIgnoredDuringExecution: 100 | - labelSelector: 101 | matchExpressions: 102 | - key: k8s-app 103 | operator: In 104 | values: 105 | - traefik-ingress-lb 106 | topologyKey: "kubernetes.io/hostname" 107 | nodeSelector: 108 | kubernetes.dcos.io/node-type: public 109 | tolerations: 110 | - key: "node-type.kubernetes.dcos.io/public" 111 | operator: "Exists" 112 | effect: "NoSchedule" 113 | ``` 114 | 115 | 116 | Save the below ingress controller configuation as `traefik-ingress.yaml` 117 | ``` 118 | apiVersion: v1 119 | kind: Service 120 | metadata: 121 | name: traefik-ingress-controller 122 | namespace: kube-system 123 | spec: 124 | selector: 125 | k8s-app: traefik-ingress-controller 126 | ports: 127 | - port: 80 128 | name: http 129 | type: NodePort 130 | ``` 131 | 132 | Deploy your Traefik Ingress Controller 133 | ``` 134 | kubectl create -f traefik.yaml 135 | ``` 136 | 137 | Create Ingress Service: 138 | ``` 139 | kubectl create -f traefik-ingress.yaml 140 | ``` 141 | 142 | Create a Hello World Service and name it `hello-world.yaml`: 143 | ``` 144 | --- 145 | apiVersion: apps/v1 146 | kind: Deployment 147 | metadata: 148 | name: hello-world 149 | labels: 150 | app: hello-world 151 | spec: 152 | replicas: 2 153 | selector: 154 | matchLabels: 155 | app: hello-world 156 | template: 157 | metadata: 158 | labels: 159 | app: hello-world 160 | spec: 161 | containers: 162 | - name: echo-server 163 | image: hashicorp/http-echo 164 
| args: 165 | - -listen=:80 166 | - -text="Hello from Kubernetes running on Mesosphere DC/OS!" 167 | ports: 168 | - containerPort: 80 169 | --- 170 | apiVersion: v1 171 | kind: Service 172 | metadata: 173 | name: hello-world 174 | spec: 175 | selector: 176 | app: hello-world 177 | ports: 178 | - port: 80 179 | targetPort: 80 180 | ``` 181 | 182 | Deploy Hello World Service: 183 | ``` 184 | kubectl create -f hello-world.yaml 185 | ``` 186 | 187 | Create Hello World Service Ingress and name it `hw-ingress.yaml`: 188 | ``` 189 | apiVersion: extensions/v1beta1 190 | kind: Ingress 191 | metadata: 192 | annotations: 193 | kubernetes.io/ingress.class: traefik 194 | name: hello-world 195 | spec: 196 | rules: 197 | - http: 198 | paths: 199 | - path: / 200 | backend: 201 | serviceName: hello-world 202 | servicePort: 80 203 | ``` 204 | 205 | Deploy Hello World Ingress: 206 | ``` 207 | kubectl create -f hw-ingress.yaml 208 | ``` 209 | 210 | Access Hello World Application by accessing `http://` 211 | 212 | -------------------------------------------------------------------------------- /Labs/1.04 - Running Apps.md: -------------------------------------------------------------------------------- 1 | 2 | Note: this is from Kubernetes [documetation](https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment) 3 | 4 | ## Creating and exploring an nginx deployment 5 | 6 | You can run an application by creating a Kubernetes Deployment object, and you 7 | can describe a Deployment in a YAML file. For example, this YAML file describes 8 | a Deployment that runs the nginx:1.7.9 Docker image: 9 | 10 | ``` 11 | apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 12 | kind: Deployment 13 | metadata: 14 | name: nginx-deployment 15 | spec: 16 | selector: 17 | matchLabels: 18 | app: nginx 19 | replicas: 2 # tells deployment to run 2 pods matching the template 20 | template: # create pods using pod definition in this template 21 | metadata: 22 | # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is 23 | # generated from the deployment name 24 | labels: 25 | app: nginx 26 | spec: 27 | containers: 28 | - name: nginx 29 | image: nginx:1.7.9 30 | ports: 31 | - containerPort: 80 32 | ``` 33 | 34 | 35 | 1. Create a Deployment based on the YAML file: 36 | 37 | kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml 38 | 39 | 1. Display information about the Deployment: 40 | 41 | kubectl describe deployment nginx-deployment 42 | 43 | The output is similar to this: 44 | 45 | user@computer:~/website$ kubectl describe deployment nginx-deployment 46 | Name: nginx-deployment 47 | Namespace: default 48 | CreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700 49 | Labels: app=nginx 50 | Annotations: deployment.kubernetes.io/revision=1 51 | Selector: app=nginx 52 | Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable 53 | StrategyType: RollingUpdate 54 | MinReadySeconds: 0 55 | RollingUpdateStrategy: 1 max unavailable, 1 max surge 56 | Pod Template: 57 | Labels: app=nginx 58 | Containers: 59 | nginx: 60 | Image: nginx:1.7.9 61 | Port: 80/TCP 62 | Environment: 63 | Mounts: 64 | Volumes: 65 | Conditions: 66 | Type Status Reason 67 | ---- ------ ------ 68 | Available True MinimumReplicasAvailable 69 | Progressing True NewReplicaSetAvailable 70 | OldReplicaSets: 71 | NewReplicaSet: nginx-deployment-1771418926 (2/2 replicas created) 72 | No events. 73 | 74 | 1. 
List the pods created by the deployment: 75 | 76 | kubectl get pods -l app=nginx 77 | 78 | The output is similar to this: 79 | 80 | NAME READY STATUS RESTARTS AGE 81 | nginx-deployment-1771418926-7o5ns 1/1 Running 0 16h 82 | nginx-deployment-1771418926-r18az 1/1 Running 0 16h 83 | 84 | 1. Display information about a pod: 85 | 86 | kubectl describe pod 87 | 88 | where `` is the name of one of your pods. 89 | 90 | ## Updating the deployment 91 | 92 | You can update the deployment by applying a new YAML file. This YAML file 93 | specifies that the deployment should be updated to use nginx 1.8. 94 | 95 | ``` 96 | apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 97 | kind: Deployment 98 | metadata: 99 | name: nginx-deployment 100 | spec: 101 | selector: 102 | matchLabels: 103 | app: nginx 104 | replicas: 2 105 | template: 106 | metadata: 107 | labels: 108 | app: nginx 109 | spec: 110 | containers: 111 | - name: nginx 112 | image: nginx:1.8 # Update the version of nginx from 1.7.9 to 1.8 113 | ports: 114 | - containerPort: 80 115 | ``` 116 | 117 | 1. Apply the new YAML file: 118 | 119 | kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment-update.yaml 120 | 121 | 1. Watch the deployment create pods with new names and delete the old pods: 122 | 123 | kubectl get pods -l app=nginx 124 | 125 | ## Scaling the application by increasing the replica count 126 | 127 | You can increase the number of pods in your Deployment by applying a new YAML 128 | file. This YAML file sets `replicas` to 4, which specifies that the Deployment 129 | should have four pods: 130 | 131 | ``` 132 | apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 133 | kind: Deployment 134 | metadata: 135 | name: nginx-deployment 136 | spec: 137 | selector: 138 | matchLabels: 139 | app: nginx 140 | replicas: 4 # Update the replicas from 2 to 4 141 | template: 142 | metadata: 143 | labels: 144 | app: nginx 145 | spec: 146 | containers: 147 | - name: nginx 148 | image: nginx:1.8 149 | ports: 150 | - containerPort: 80 151 | ``` 152 | 153 | 1. Apply the new YAML file: 154 | 155 | kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment-scale.yaml 156 | 157 | 1. Verify that the Deployment has four pods: 158 | 159 | kubectl get pods -l app=nginx 160 | 161 | The output is similar to this: 162 | 163 | NAME READY STATUS RESTARTS AGE 164 | nginx-deployment-148880595-4zdqq 1/1 Running 0 25s 165 | nginx-deployment-148880595-6zgi1 1/1 Running 0 25s 166 | nginx-deployment-148880595-fxcez 1/1 Running 0 2m 167 | nginx-deployment-148880595-rwovn 1/1 Running 0 2m 168 | 169 | ## Deleting the application 170 | 171 | Show existing deployments: 172 | ``` 173 | $ kubectl get deployments 174 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE 175 | nginx-deployment 4 4 4 4 10m 176 | ``` 177 | 178 | Delete Deployment: 179 | 180 | ``` 181 | kubectl delete deployment nginx-deployment 182 | ``` 183 | -------------------------------------------------------------------------------- /Labs/1.03 - Killing A K8s Node.md: -------------------------------------------------------------------------------- 1 | ## Automated Self Healing 2 | 3 | Kubernetes with DC/OS includes automated self-healing of Kubernetes infrastructure. 4 | 5 | We can demo this by killing some of the Kubernetes framework components, as well as nodes. 
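As an optional extra (not part of the original lab), keep a watch on the Kubernetes nodes in a second terminal while you kill components below; depending on how quickly DC/OS restarts the kubelet, you may see a node flip to NotReady and then recover:

```
# watch node status continuously; Ctrl+C to stop
kubectl get nodes --watch
```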
6 | 7 | In the command line enter 8 | 9 | ``` 10 | dcos task 11 | ``` 12 | 13 | Output should resemble: 14 | ``` 15 | $ dcos task 16 | NAME HOST USER STATE ID MESOS ID REGION ZONE 17 | etcd-0-peer 10.0.5.79 root R etcd-0-peer__181046fd-6ed0-4ac2-bfaa-9ec15e944226 4f6be22e-f057-4226-9c40-190496ea9218-S16 aws/us-west-2 aws/us-west-2a 18 | etcd-1-peer 10.0.7.236 root R etcd-1-peer__e4695eff-4318-4b26-98d7-b97db91ddfdd 4f6be22e-f057-4226-9c40-190496ea9218-S7 aws/us-west-2 aws/us-west-2a 19 | etcd-2-peer 10.0.7.104 root R etcd-2-peer__0d63eb6c-4fe4-4498-b82c-d53d53b36000 4f6be22e-f057-4226-9c40-190496ea9218-S4 aws/us-west-2 aws/us-west-2a 20 | kube-apiserver-0-instance 10.0.5.243 root R kube-apiserver-0-instance__e780ad3f-2586-4905-8724-f15efd73851f 4f6be22e-f057-4226-9c40-190496ea9218-S5 aws/us-west-2 aws/us-west-2a 21 | kube-apiserver-1-instance 10.0.7.236 root R kube-apiserver-1-instance__64a0f0a3-767b-4494-81e9-74988f0340d0 4f6be22e-f057-4226-9c40-190496ea9218-S7 aws/us-west-2 aws/us-west-2a 22 | kube-apiserver-2-instance 10.0.5.151 root R kube-apiserver-2-instance__40bdca05-77da-44c9-ae83-9e0ecbcae7e3 4f6be22e-f057-4226-9c40-190496ea9218-S1 aws/us-west-2 aws/us-west-2a 23 | kube-controller-manager-0-instance 10.0.6.4 root R kube-controller-manager-0-instance__21fbf104-a8d8-4d0c-951d-6a649fe3ca4b 4f6be22e-f057-4226-9c40-190496ea9218-S14 aws/us-west-2 aws/us-west-2a 24 | kube-controller-manager-1-instance 10.0.5.209 root R kube-controller-manager-1-instance__0036194d-00da-4cf4-a825-5d5eceabf56d 4f6be22e-f057-4226-9c40-190496ea9218-S15 aws/us-west-2 aws/us-west-2a 25 | kube-controller-manager-2-instance 10.0.7.176 root R kube-controller-manager-2-instance__011d0e3c-8082-47e5-98ca-407ee507e754 4f6be22e-f057-4226-9c40-190496ea9218-S3 aws/us-west-2 aws/us-west-2a 26 | kube-node-0-coredns 10.0.4.185 root R kube-node-0-coredns__992099bf-e867-4db7-be4a-aab4ca65edd7 4f6be22e-f057-4226-9c40-190496ea9218-S9 aws/us-west-2 aws/us-west-2a 27 | kube-node-0-kube-proxy 10.0.4.185 root R kube-node-0-kube-proxy__ca862dcd-2fbd-43de-b708-8c96de6109d2 4f6be22e-f057-4226-9c40-190496ea9218-S9 aws/us-west-2 aws/us-west-2a 28 | kube-node-0-kubelet 10.0.4.185 root R kube-node-0-kubelet__967c1659-a6d6-4e9a-a544-020b36dd27cb 4f6be22e-f057-4226-9c40-190496ea9218-S9 aws/us-west-2 aws/us-west-2a 29 | kube-node-1-coredns 10.0.6.170 root R kube-node-1-coredns__db4df874-24fa-4717-9858-62e0cec0e8b1 4f6be22e-f057-4226-9c40-190496ea9218-S10 aws/us-west-2 aws/us-west-2a 30 | kube-node-1-kube-proxy 10.0.6.170 root R kube-node-1-kube-proxy__a1d35522-0d99-454b-b86c-96821be0d36d 4f6be22e-f057-4226-9c40-190496ea9218-S10 aws/us-west-2 aws/us-west-2a 31 | kube-node-1-kubelet 10.0.6.170 root R kube-node-1-kubelet__3ea399b3-f1b1-43d9-8064-2b663a6132bc 4f6be22e-f057-4226-9c40-190496ea9218-S10 aws/us-west-2 aws/us-west-2a 32 | kube-scheduler-0-instance 10.0.5.79 root R kube-scheduler-0-instance__85eee02c-2ffa-41d7-be72-41367d3785e9 4f6be22e-f057-4226-9c40-190496ea9218-S16 aws/us-west-2 aws/us-west-2a 33 | kube-scheduler-1-instance 10.0.6.4 root R kube-scheduler-1-instance__8e9d3974-dad5-4268-8711-c79c86370779 4f6be22e-f057-4226-9c40-190496ea9218-S14 aws/us-west-2 aws/us-west-2a 34 | kube-scheduler-2-instance 10.0.5.243 root R kube-scheduler-2-instance__50807610-679c-4dc2-a55b-cb9055a6b4c9 4f6be22e-f057-4226-9c40-190496ea9218-S5 aws/us-west-2 aws/us-west-2a 35 | kubernetes 10.0.5.243 root R kubernetes.d7696234-839a-11e8-95d4-de01896a7e14 4f6be22e-f057-4226-9c40-190496ea9218-S5 aws/us-west-2 aws/us-west-2a 36 | kubernetes-proxy 10.0.4.167 
root R kubernetes-proxy.65c752a2-8394-11e8-95d4-de01896a7e14 4f6be22e-f057-4226-9c40-190496ea9218-S6 aws/us-west-2 aws/us-west-2a 37 | ``` 38 | 39 | ### Kill etcd 40 | Lets kill an instance of the etcd database to observe auto-healing capabilities: 41 | 42 | First we need to identify the etcd-0 PID value. In the example below the etcd PID value is 3: 43 | ``` 44 | $ dcos task exec -it etcd-0-peer ps ax 45 | PID TTY STAT TIME COMMAND 46 | 1 ? Ss 0:00 /opt/mesosphere/active/mesos/libexec/mesos/mesos-cont 47 | 3 ? Sl 2:04 ./etcd-v3.3.3-linux-amd64/etcd --name=infra0 --cert-f 48 | 7008 pts/0 Ss+ 0:00 /opt/mesosphere/active/mesos/libexec/mesos/mesos-cont 49 | 7009 pts/0 R+ 0:00 ps ax 50 | ``` 51 | 52 | Navigate to the DC/OS UI > Services > Kubernetes tab and open next to the terminal so you can see the components in the DC/OS UI. Use the search bar to search for etcd 53 | 54 | 55 | Kill the etcd manually and watch the UI auto-heal the etcd instance: 56 | ``` 57 | dcos task exec -it etcd-0-peer kill -9 3 58 | ``` 59 | 60 | You may need to refresh brower. Watch as Kubernetes etcd-0 instance is killed and respawned 61 | 62 | ### Kill a Kubelet 63 | Next, lets kill a Kubernetes node to observe auto-healing capabilities: 64 | 65 | First we need to identify the kube-node-0 PID value. Enter etcd PID value associated with the cmd: `sh -c ./bootstrap --resolve=false 2>&1 chmod +x kube`: In the example below the etcd PID value is 3: 66 | 67 | ``` 68 | $ dcos task exec -it kube-node-0-kubelet ps ax 69 | PID TTY STAT TIME COMMAND 70 | 1 ? Ss 0:00 /opt/mesosphere/active/mesos/libexec/mesos/mesos-cont 71 | 3 ? S 0:00 sh -c ./bootstrap --resolve=false 2>&1 chmod +x kube 72 | 10 ? S 0:00 /bin/bash ./kubelet-wrapper.sh 73 | ``` 74 | 75 | Navigate to the DC/OS UI > Services > Kubernetes tab and open next to the terminal so you can see the components in the DC/OS UI. Use the search bar to search for kube-node-0 76 | 77 | Kill the etcd manually and watch the UI auto-heal the etcd instance: 78 | 79 | ``` 80 | dcos task exec -it kube-node-0-kubelet kill -9 3 81 | ``` 82 | 83 | You may need to refresh brower. Watch as Kubernetes Kubelet instance is killed and respawned 84 | -------------------------------------------------------------------------------- /Labs/1.13 - Fraud Detection.md: -------------------------------------------------------------------------------- 1 | # Fast Data: Financial Transaction Processing with Apache Flink and Kubernetes 2 | 3 | During this demo we use Apache Flink and Apache Kafka to setup a high-volume financial transactions pipeline. 4 | Note, this is a derivate of the [Apache Flink Stream Processing](flink/1.11#fast-data-financial-transaction-processing-with-apache-flink) demo, where we use Kubernetes to deploy the generator and viewer microservices. 5 | 6 | - Estimated time for completion: 7 | - Manual install: 15min 8 | - Target audience: Anyone interested in stream data processing and analytics with Apache Kafka and Apache Flink. 9 | 10 | A video of this demo can be found [here](https://www.youtube.com/watch?v=bwPXNlVHTeI). 
--------------------------------------------------------------------------------
/Labs/1.13 - Fraud Detection.md:
--------------------------------------------------------------------------------
1 | # Fast Data: Financial Transaction Processing with Apache Flink and Kubernetes
2 | 
3 | During this demo we use Apache Flink and Apache Kafka to set up a high-volume financial transaction pipeline.
4 | Note that this is a derivative of the [Apache Flink Stream Processing](flink/1.11#fast-data-financial-transaction-processing-with-apache-flink) demo, in which we use Kubernetes to deploy the generator and viewer microservices.
5 | 
6 | - Estimated time for completion:
7 |   - Manual install: 15min
8 | - Target audience: Anyone interested in stream data processing and analytics with Apache Kafka and Apache Flink.
9 | 
10 | A video of this demo can be found [here](https://www.youtube.com/watch?v=bwPXNlVHTeI).
11 | 
12 | **Table of Contents**:
13 | 
14 | - [Architecture](#architecture)
15 | - [Prerequisites](#prerequisites)
16 | - [Install](#install)
17 | - [Use the demo](#use)
18 |   - [Generating transactions](#generating-transactions)
19 |   - [Consuming transactions](#consuming-transactions)
20 |   - [Viewing output](#viewing-output)
21 | 
22 | ## Architecture
23 | 
24 | ![Financial transaction processing demo architecture](https://github.com/chrisgaun/demos/blob/master/flink-k8s/1.11/img/kafka-flink-arch.png?raw=true)
25 | 
26 | This demo implements a data processing infrastructure that is able to spot money laundering. In the context of money laundering, we want to detect amounts larger than $10,000 transferred between two accounts, even if that amount is split into many small batches. See also [US](https://www.fincen.gov/history-anti-money-laundering-laws) and [EU](http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32015L0849) legislation and regulations on this topic for more information.
27 | 
28 | The architecture more or less follows the [SMACK stack architecture](https://mesosphere.com/blog/smack-stack-new-lamp-stack/):
29 | - Events: Events are generated by a small [Golang generator](https://github.com/dcos/demos/blob/master/flink/1.11/generator/generator.go). The events are in the form 'Sunday, 23-Jul-17 01:06:47 UTC;66;26;7810', where the first field '23-Jul-17 01:06:47 UTC' is the (increasing) transaction timestamp, the second field '66' is the sender account, the third field '26' is the receiver account, and the fourth field '7810' is the dollar amount transferred in that transaction.
30 | - Ingestion: The generated events are ingested and buffered by a Kafka queue using the default topic 'transactions'. Being a microservice, the data generator will be deployed on Kubernetes.
31 | - Stream Processing: As we require fast response times, we use Apache Flink as the stream processor running the [FinancialTransactionJob](https://github.com/dcos/demos/tree/master/flink/1.10/flink-job/src/main/java/io/dcos).
32 | - Storage: Here we diverge a bit from the typical SMACK stack setup and do not write the results into a datastore such as Apache Cassandra. Instead, we write the results back into a Kafka stream (default topic: 'fraud'). Note that Kafka also provides persistence for all unprocessed events.
33 | - Actor: To view the results we again use a small [Golang viewer](https://github.com/dcos/demos/blob/master/flink/1.11/actor/actor_viewer.go), which simply reads and displays the results from the output Kafka stream. Being a microservice, the viewer will also be deployed on Kubernetes.
34 | 
35 | 
36 | ## Prerequisites
37 | 
38 | - A running [DC/OS 1.11](https://dcos.io/releases/) or higher cluster with at least 4 private agents and 1 public agent. Each agent should have 2 CPUs and 5 GB of RAM available. The [DC/OS CLI](https://docs.mesosphere.com/latest/cli/install/) also needs to be installed.
39 | 
40 | 
41 | The DC/OS services used in the demo are as follows:
42 | 
43 | - Apache Kafka
44 | - Apache Flink
45 | - Kubernetes
46 | 
47 | ## Install
48 | 
49 | 
50 | #### Kafka
51 | 
52 | Install the Apache Kafka package:
53 | 
54 | ```bash
55 | $ dcos package install kafka
56 | ```
57 | 
58 | Note that if you are unfamiliar with Kafka and its terminology, you can check out the respective [101 example](https://github.com/dcos/examples/tree/master/kafka).
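Kafka takes a few minutes to deploy all of its brokers. Before creating topics it is worth confirming that the service has finished deploying; recent SDK-based versions of the Kafka package expose a plan subcommand for this (a sketch; the subcommand name and output format depend on the package version you installed):

```bash
# Wait until the deploy plan reports COMPLETE before continuing
dcos kafka plan status deploy
```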
59 | 
60 | Next, figure out where the broker is:
61 | 
62 | ```bash
63 | $ dcos kafka endpoints broker
64 | {
65 |   "address": [
66 |     "10.0.2.64:1025",
67 |     "10.0.2.83:1025",
68 |     "10.0.0.161:1025"
69 |   ],
70 |   "dns": [
71 |     "kafka-0-broker.kafka.autoip.dcos.thisdcos.directory:1025",
72 |     "kafka-1-broker.kafka.autoip.dcos.thisdcos.directory:1025",
73 |     "kafka-2-broker.kafka.autoip.dcos.thisdcos.directory:1025"
74 |   ],
75 |   "vip": "broker.kafka.l4lb.thisdcos.directory:9092"
76 | }
77 | ```
78 | 
79 | Note the FQDN of the VIP, in our case `broker.kafka.l4lb.thisdcos.directory:9092`, which is independent of the actual broker locations.
80 | It is possible to use the FQDN of any of the brokers, but using the VIP FQDN gives us load balancing.
81 | 
82 | ##### Create Kafka Topics
83 | 
84 | Fortunately, creating topics is very simple using the DC/OS Kafka CLI. If you installed Kafka from the UI, you might have to
85 | install the CLI extensions using `dcos package install kafka --cli`. If you installed Kafka from the CLI as above, the CLI extensions are installed automatically.
86 | 
87 | We need two Kafka topics, one for the generated transactions and one for the detected fraudulent transactions, which we can create with:
88 | 
89 | `dcos kafka topic create transactions`
90 | and
91 | `dcos kafka topic create fraud`
92 | 
93 | 
94 | 
95 | ### Flink
96 | 
97 | Finally, we can deploy [Apache Flink](https://github.com/dcos/examples/tree/master/flink/):
98 | 
99 | ```bash
100 | $ dcos package install flink
101 | ```
102 | 
103 | At this point we have Kafka and Flink installed. Next, we need Kubernetes and the data generator before we can start the demo.
104 | 
105 | ### Kubernetes
106 | 
107 | As we want to deploy both the generator and the viewer microservice on Kubernetes, we need a Kubernetes cluster. This was already set up in Lab 1, so the installation step is skipped here.
108 | (On a cluster without Kubernetes, it can be installed with a simple `dcos package install kubernetes`.)
109 | 
110 | ### Generator
111 | 
112 | Now, we can deploy the [data generator](https://github.com/dcos/demos/blob/master/flink/1.11/generator/generator.go) using the [flink-demo-generator.yaml deployment definition](https://github.com/dcos/demos/blob/master/flink-k8s/1.11/generator/flink-demo-generator.yaml):
113 | 
114 | ```bash
115 | $ kubectl apply -f https://raw.githubusercontent.com/dcos/demos/master/flink-k8s/1.11/generator/flink-demo-generator.yaml
116 | ```
117 | 
118 | We can check the status of the deployment:
119 | 
120 | ```bash
121 | $ kubectl get deployments
122 | $ kubectl get pods
123 | ```
124 | 
125 | We can also view the log output to make sure it is generating events as expected (you will need to use the actual pod id from the previous command):
126 | 
127 | ```bash
128 | $ kubectl logs flink-demo-generator-655890656-8d1ls
129 | ```
130 | 
131 | 
132 | ### Final View
133 | 
134 | After the install, your DC/OS UI should look as follows:
135 | 
136 | ![All services of the fintrans demo in the DC/OS UI](https://github.com/chrisgaun/demos/raw/master/flink-k8s/1.11/img/services-list.png)
137 | 
138 | ## Use
139 | 
140 | 
141 | The core piece of this demo is the [FinancialTransactionJob](https://github.com/dcos/demos/tree/master/flink/1.11/flink-job/src/main/java/io/dcos), which we will submit to Flink.
142 | 
143 | First we need to upload the [jar file](https://downloads.mesosphere.com/dcos-demo/flink/flink-job-1.0.jar) into Flink. Please note that the jar file is too large to be included in this GitHub repo, but it can be downloaded [here](https://downloads.mesosphere.com/dcos-demo/flink/flink-job-1.0.jar).
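For example, you can fetch the jar to your workstation first and then upload it through the Flink web UI (any download tool works; `wget` is shown here):

```bash
# Download the pre-built Flink job jar referenced above
wget https://downloads.mesosphere.com/dcos-demo/flink/flink-job-1.0.jar
```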
144 | 
145 | In the Services tab of the DC/OS UI, hover over the name of the Flink service and click the link that appears to the right of it. This will open the Flink web UI in a new tab.
146 | 
147 | ![Flink UI](https://github.com/chrisgaun/demos/raw/master/flink-k8s/1.11/img/flink-gui.png)
148 | 
149 | In the Flink web UI, click Submit New Job, then click the Add New button. This will allow you to select the jar file you downloaded and upload it.
150 | 
151 | 
152 | ![Jar file uploaded](https://github.com/chrisgaun/demos/raw/master/flink-k8s/1.11/img/jar-uploaded.png)
153 | 
154 | 
155 | Once we hit Submit, we should see the job begin to run in the Flink web UI.
156 | 
157 | ![Running Flink job](https://github.com/chrisgaun/demos/raw/master/flink-k8s/1.11/img/running-job.png)
158 | 
159 | ### Viewing Output
160 | 
161 | Once the Flink job is running, we only need a way to visualize the results. We do that with another [simple Golang app](https://github.com/dcos/demos/blob/master/flink/1.11/actor/actor_viewer.go), and again we deploy this microservice on Kubernetes using the [flink-demo-actor.yaml deployment definition](https://github.com/dcos/demos/blob/master/flink-k8s/1.11/actor/flink-demo-actor.yaml):
162 | ```bash
163 | $ kubectl apply -f https://raw.githubusercontent.com/dcos/demos/master/flink-k8s/1.11/actor/flink-demo-actor.yaml
164 | ```
165 | We can check the status of the deployment:
166 | 
167 | ```bash
168 | $ kubectl get deployments
169 | $ kubectl get pods
170 | ```
171 | 
172 | We can also view the log output to make sure we are detecting fraud as expected (you will need to use the actual pod id from the previous command):
173 | 
174 | ```bash
175 | $ kubectl logs flink-demo-actor-655890656-8d1ls
176 | Detected Fraud: TransactionAggregate {startTimestamp=0, endTimestamp=1520473325000, totalAmount=23597:
177 | Transaction{timestamp=1520473023000, origin=3, target='7', amount=5857}
178 | Transaction{timestamp=1520473099000, origin=3, target='7', amount=7062}
179 | Transaction{timestamp=1520473134000, origin=3, target='7', amount=9322}
180 | Transaction{timestamp=1520473167000, origin=3, target='7', amount=921}
181 | Transaction{timestamp=1520473325000, origin=3, target='7', amount=435}}
182 | 
183 | Detected Fraud: TransactionAggregate {startTimestamp=0, endTimestamp=1520473387000, totalAmount=47574:
184 | Transaction{timestamp=1520472901000, origin=0, target='2', amount=6955}
185 | Transaction{timestamp=1520472911000, origin=0, target='2', amount=4721}
186 | Transaction{timestamp=1520472963000, origin=0, target='2', amount=3451}
187 | Transaction{timestamp=1520473053000, origin=0, target='2', amount=9361}
188 | Transaction{timestamp=1520473109000, origin=0, target='2', amount=5306}
189 | Transaction{timestamp=1520473346000, origin=0, target='2', amount=4071}
190 | Transaction{timestamp=1520473365000, origin=0, target='2', amount=3974}
191 | Transaction{timestamp=1520473387000, origin=0, target='2', amount=9735}}
192 | 
193 | Detected Fraud: TransactionAggregate {startTimestamp=0, endTimestamp=1520473412000, totalAmount=21402:
194 | Transaction{timestamp=1520472906000, origin=2, target='3', amount=8613}
195 | Transaction{timestamp=1520473004000, origin=2, target='3', amount=5027}
196 | Transaction{timestamp=1520473050000, origin=2, target='3', amount=924}
197 | Transaction{timestamp=1520473177000, origin=2, target='3', amount=1566}
198 | Transaction{timestamp=1520473412000, origin=2, target='3', amount=5272}}
199 | ```
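If you want to try the Helm-based install below, or simply tear the demo down, the generator and viewer created with `kubectl apply` can be removed by deleting the same manifests (a sketch; adjust the paths if you saved the YAML files locally):

```bash
# Remove the generator and viewer deployments
kubectl delete -f https://raw.githubusercontent.com/dcos/demos/master/flink-k8s/1.11/generator/flink-demo-generator.yaml
kubectl delete -f https://raw.githubusercontent.com/dcos/demos/master/flink-k8s/1.11/actor/flink-demo-actor.yaml
```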
200 | 
201 | ### Helm
202 | 
203 | You can also install the `Generator` and `Viewer` with Helm.
204 | 
205 | #### Install Helm
206 | 
207 | Get the latest [Helm release](https://github.com/kubernetes/helm#install).
208 | 
209 | #### Add remote Helm repo
210 | 
211 | You need to add this chart repo:
212 | 
213 | ```bash
214 | $ helm repo add dlc https://dcos-labs.github.io/charts/
215 | $ helm repo update
216 | ```
217 | 
218 | #### Install Flink-demo chart
219 | 
220 | To install the chart, run:
221 | 
222 | ```bash
223 | helm install --name flink-demo --namespace flink dlc/flink-demo
224 | ```
225 | 
226 | Check that the pods are running:
227 | 
228 | ```bash
229 | kubectl -n flink get pods
230 | NAME                                    READY     STATUS    RESTARTS   AGE
231 | flink-demo-actor-555c6d9767-hflvb       1/1       Running   0          26s
232 | flink-demo-generator-7d7785f566-gfzpd   1/1       Running   0          26s
233 | ```
234 | 
235 | Check the viewer logs:
236 | 
237 | ```bash
238 | kubectl logs -n flink flink-demo-actor-555c6d9767-hflvb
239 | ```
240 | 
241 | 
242 | 
--------------------------------------------------------------------------------