├── .gitignore ├── .travis.yml ├── CONTRIBUTING.md ├── Lab 0 └── README.md ├── Lab 1 ├── Dockerfile ├── README.md ├── app.js ├── package.json └── script │ └── script.md ├── Lab 2 ├── Dockerfile ├── README.md ├── app.js ├── healthcheck.yml └── package.json ├── Lab 3 ├── README.md ├── watson-deployment.yml ├── watson-talk │ ├── Dockerfile │ ├── app.js │ └── package.json └── watson │ ├── Dockerfile │ ├── app.js │ └── package.json ├── Lab 4 └── README.md ├── Lab 5 ├── Images │ └── cs_security.png ├── README.md └── script │ └── script.md ├── License.txt ├── MAINTAINERS.md ├── README.md ├── bx_login.sh ├── deploy.sh ├── deploy_rollup.sh ├── images ├── VMvsContainer.png ├── app_deploy_workflow.png ├── cluster_ha_roadmap.png ├── container-pod-node-master-relationship.jpg └── kubernetes_arch.png ├── install_bx.sh └── test_yml.sh /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: script 2 | 3 | env: 4 | global: 5 | CF_TARGET_URL="https://api.ng.bluemix.net" 6 | 7 | services: 8 | - docker 9 | 10 | script: bash -n *.sh 11 | 12 | 13 | deploy: 14 | - provider: script 15 | skip_cleanup: true 16 | script: ./deploy_rollup.sh 17 | on: 18 | all_branches: true 19 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | ## Contributing In General 2 | Our project welcomes external contributions! If you have an itch, please feel free to scratch it. 3 | 4 | 5 | To contribute code or documentation, please submit a pull request to the [GitHub repository](https://github.com/ibm/container-service-getting-started-wt/). 6 | 7 | A good way to familiarize yourself with the codebase and contribution process is to look for and tackle low-hanging fruit in the [issue tracker](https://github.com/ibm/container-service-getting-started-wt/issues). Before embarking on a more ambitious contribution, please quickly get in touch with us via an issue. 8 | 9 | **We appreciate your effort, and want to avoid a situation where a contribution requires extensive rework (by you or by us), sits in the queue for a long time, or cannot be accepted at all!** 10 | 11 | ### Proposing new features 12 | 13 | If you would like to implement a new feature, please [raise an issue](https://github.com/ibm/container-service-getting-started-wt/issues) before sending a pull request so the feature can be discussed. 14 | This is to avoid you spending your valuable time working on a feature that the project developers are not willing to accept into the code base. 15 | 16 | ### Fixing bugs 17 | 18 | If you would like to fix a bug, please [raise an issue](https://github.com/ibm/container-service-getting-started-wt/issues) before sending a pull request so it can be discussed. 19 | If the fix is trivial or non controversial then this is not usually necessary. 20 | 21 | ### Merge approval 22 | 23 | The project maintainers use LGTM (Looks Good To Me) in comments on the code review to 24 | indicate acceptance. A change requires LGTMs from two of the maintainers of each 25 | component affected. Note that if your initial push does not pass TravisCI your change will not be approved. 26 | 27 | For more details, see the [MAINTAINERS](MAINTAINERS) page. 
28 | 29 | -------------------------------------------------------------------------------- /Lab 0/README.md: -------------------------------------------------------------------------------- 1 | # Lab 0: Get the IBM Cloud Kubernetes Service 2 | 3 | 4 | Before you begin learning, you need to install the required CLIs to create and manage your Kubernetes clusters in IBM Cloud Kubernetes Service and to deploy containerized apps to your cluster. 5 | 6 | This lab includes the information for installing the following CLIs and plug-ins: 7 | 8 | * IBM Cloud CLI 9 | * IBM Cloud Kubernetes Service plug-in 10 | * IBM Cloud Container Registry plug-in 11 | * Kubernetes CLI 12 | * Optional: Docker 13 | 14 | If you already have the CLIs and plug-ins, you can skip this lab and proceed to the next one. 15 | 16 | # Install the IBM Cloud command-line interface 17 | 18 | 1. As a prerequisite for the IBM Cloud Kubernetes Service plug-in, install the [IBM Cloud command-line interface](https://cloud.ibm.com/docs/cli?topic=cloud-cli-install-ibmcloud-cli). Once installed, you can access IBM Cloud from your command-line with the prefix `ibmcloud`. 19 | 20 | 21 | 2. Log in to the IBM Cloud CLI: `ibmcloud login`. 22 | 3. Enter your IBM Cloud credentials when prompted. 23 | 24 | **Note:** If you have a federated ID, use `ibmcloud login --sso` to log in to the IBM Cloud CLI. Enter your user name, and use the provided URL in your CLI output to retrieve your one-time passcode. You know you have a federated ID when the login fails without the `--sso` and succeeds with the `--sso` option. 25 | 26 | # Install the IBM Cloud Kubernetes Service plug-in 27 | 28 | 1. To create Kubernetes clusters and manage worker nodes, install the IBM Cloud Kubernetes Service plug-in: 29 | ```ibmcloud plugin install container-service -r 'IBM Cloud'``` 30 | 31 | **Note:** The prefix for running commands by using the IBM Cloud Kubernetes Service plug-in is `ibmcloud ks`. 32 | 33 | 2. To verify that the plug-in is installed properly, run the following command: 34 | ```ibmcloud plugin list``` 35 | 36 | The IBM Cloud Kubernetes Service plug-in is displayed in the results as `container-service`. 37 | 38 | # Download the IBM Cloud Container Registry plug-in 39 | 40 | 1. To manage a private image repository, install the IBM Cloud Container Registry plug-in: 41 | ``` 42 | ibmcloud plugin install container-registry -r 'IBM Cloud' 43 | ``` 44 | 45 | Use this plug-in to set up your own namespace in a multi-tenant, highly available, and scalable private image registry that is hosted by IBM, and to store and share container images with other users. Container images are required to deploy containers into a cluster. 46 | 47 | **Note:** The prefix for running registry commands is `ibmcloud cr`. 48 | 49 | 2. To verify that the plug-in is installed properly, run `ibmcloud plugin list` 50 | 51 | The plug-in is displayed in the results as `container-registry`. 
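For example, once both plug-ins are installed you can create your registry namespace right away and confirm that it exists. This is an optional sanity check, not a required lab step; `my_namespace` is a placeholder for a namespace name of your choice, and the commands assume you are already logged in with `ibmcloud login`:

```
# Create a namespace in IBM Cloud Container Registry (name is a placeholder)
ibmcloud cr namespace-add my_namespace

# List the namespaces in your account to confirm it was created
ibmcloud cr namespaces
```

Lab 1 walks through this step again in context, so you can also defer it until then.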
52 | 53 | # Download the Kubernetes CLI 54 | 55 | To view a local version of the Kubernetes dashboard and to deploy apps into your clusters, you will need to install the Kubernetes CLI that corresponds with your operating system: 56 | 57 | * [OS X](https://storage.googleapis.com/kubernetes-release/release/v1.14.8/bin/darwin/amd64/kubectl) 58 | * [Linux](https://storage.googleapis.com/kubernetes-release/release/v1.14.8/bin/linux/amd64/kubectl) 59 | * [Windows](https://storage.googleapis.com/kubernetes-release/release/v1.14.8/bin/windows/amd64/kubectl.exe) 60 | 61 | **For Windows users:** Install the Kubernetes CLI in the same directory as the IBM Cloud CLI. This setup saves you some filepath changes when you run commands later. 62 | 63 | **For OS X and Linux users:** 64 | 65 | 1. Move the executable file to the `/usr/local/bin` directory using the command `mv //kubectl /usr/local/bin/kubectl` . 66 | 67 | 2. Make sure that `/usr/local/bin` is listed in your PATH system variable. 68 | ``` 69 | $echo $PATH 70 | /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin 71 | ``` 72 | 73 | 3. Convert the binary file to an executable: `chmod +x /usr/local/bin/kubectl` 74 | 75 | # Install Docker 76 | To locally build images and push them to your registry namespace, [install Docker](https://www.docker.com/community-edition#/download). The Docker CLI is used to build apps into images. 77 | 78 | **Note:** The prefix for running commands by using the Docker CLI is `docker`. 79 | -------------------------------------------------------------------------------- /Lab 1/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:9.4.0-alpine 2 | COPY app.js . 3 | COPY package.json . 4 | RUN npm install &&\ 5 | apk update &&\ 6 | apk upgrade 7 | EXPOSE 8080 8 | CMD node app.js 9 | -------------------------------------------------------------------------------- /Lab 1/README.md: -------------------------------------------------------------------------------- 1 | # Lab 1. Set up and deploy your first application 2 | 3 | Learn how to push an image of an application to IBM Cloud Container Registry and deploy a basic application to a cluster. 4 | 5 | # 0. Install Prerequisite CLIs and Provision a Kubernetes Cluster 6 | 7 | If you haven't already: 8 | 1. Install the CLIs, as described in [Lab 0](../Lab%200/README.md). 9 | 2. Provision a cluster: 10 | 11 | ```ibmcloud ks cluster create classic --name ``` 12 | 13 | # 1. Push an image to IBM Cloud Container Registry 14 | 15 | To push an image, we first must have an image to push. We have 16 | prepared several `Dockerfile`s in this repository that will create the 17 | images. We will be running the images, and creating new ones, in the 18 | later labs. 19 | 20 | This lab uses the Container Registry built in to IBM Cloud, but the 21 | image can be created and uploaded to any standard Docker registry to 22 | which your cluster has access. 23 | 24 | 1. Download a copy of this repository: 25 | 26 | 1a. [Clone or download](https://github.com/IBM/container-service-getting-started-wt) the GitHub repository associated with this course 27 | 28 | 2. Change directory to Lab 1: 29 | 30 | ```cd "Lab 1"``` 31 | 32 | 3. Log in to the IBM Cloud CLI: 33 | 34 | ```ibmcloud login``` 35 | 36 | To specify an IBM Cloud region, include the API endpoint. 37 | 38 | **Note:** If you have a federated ID, use `ibmcloud login --sso` to log in to the IBM Cloud CLI. 
You know you have a federated ID when the login fails without the `--sso` and succeeds with the `--sso` option. 39 | 40 | 4. In order to upload images to the IBM Cloud Container Registry, you first need to create a namespace with the following command: 41 | 42 | ```ibmcloud cr namespace-add ``` 43 | 44 | 5. Build the container image with a `1` tag and push the image to the IBM Cloud Registry: 45 | 46 | ```ibmcloud cr build --tag us.icr.io//hello-world:1 .``` 47 | 48 | 6. Verify the image is built: 49 | 50 | ```ibmcloud cr images``` 51 | 52 | 7. If you created your cluster at the beginning of this, make sure it's ready for use. 53 | 1. Run `ibmcloud ks clusters` and make sure that your cluster is in "Normal" state. 54 | 2. Use `ibmcloud ks workers --cluster `, and make sure that all workers are in "Normal" state with "Ready" status. 55 | 3. Make a note of the public IP of the worker. 56 | 57 | You are now ready to use Kubernetes to deploy the hello-world application. 58 | 59 | # 2. Deploy your application 60 | 61 | 1. Run `ibmcloud ks cluster config --cluster `. 62 | 63 | 2. Start by running your image as a deployment: 64 | 65 | ```kubectl create deployment hello-world-deployment --image=us.icr.io//hello-world:1``` 66 | 67 | This action will take a bit of time. To check the status of your deployment, you can use `kubectl get pods`. 68 | 69 | You should see output similar to the following: 70 | 71 | ``` 72 | => kubectl get pods 73 | NAME READY STATUS RESTARTS AGE 74 | hello-world-562211614-0g2kd 0/1 ContainerCreating 0 1m 75 | ``` 76 | 77 | 3. Once the status reads `Running`, expose that deployment as a service, accessed through the IP of the worker nodes. The example for this course listens on port 8080. Run: 78 | 79 | ```kubectl expose deployment/hello-world-deployment --type=NodePort --port=8080 --name=hello-world-service --target-port=8080``` 80 | 81 | 4. To find the port used on that worker node, examine your new service: 82 | 83 | ```kubectl describe service hello-world-service``` 84 | 85 | Take note of the "NodePort:" line as `` 86 | 87 | 5. Run `ibmcloud ks worker ls --cluster `, and note the public IP as ``. 88 | 89 | 6. You can now access your container/service using `curl :` (or your favorite web browser). If you see, "Hello world! Your app is up and running in a cluster!" you're done! 90 | 91 | When you're all done, you can either use this deployment in the [next lab of this course](../Lab%202/README.md), or you can remove the deployment and thus stop taking the course. 92 | 93 | 1. To remove the deployment and service, use `kubectl delete all -l app=hello-world-deployment`. 94 | -------------------------------------------------------------------------------- /Lab 1/app.js: -------------------------------------------------------------------------------- 1 | var express = require('express') 2 | var os = require("os"); 3 | var hostname = os.hostname(); 4 | var app = express() 5 | 6 | app.get('/', function(req, res) { 7 | res.send('Hello world from ' + hostname + '! 
Your app is up and running in a cluster!\n') 8 | }) 9 | app.listen(8080, function() { 10 | console.log('Sample app is listening on port 8080.') 11 | }) 12 | -------------------------------------------------------------------------------- /Lab 1/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "hello-world-demo", 3 | "private": false, 4 | "version": "0.0.1", 5 | "description": "Basic hello world application for Node.js", 6 | "dependencies": { 7 | "express": "4.x" 8 | } 9 | } 10 | -------------------------------------------------------------------------------- /Lab 1/script/script.md: -------------------------------------------------------------------------------- 1 | 2 | # Pod 3 | 4 | In Kubernetes, a group of one or more containers is called a pod. Containers in a pod are deployed together, and are started, stopped, and replicated as a group. The simplest pod definition describes the deployment of a single container. For example, an nginx web server pod might be defined as such: 5 | ``` 6 | apiVersion: v1 7 | kind: Pod 8 | metadata: 9 | name: mynginx 10 | namespace: default 11 | labels: 12 | run: nginx 13 | spec: 14 | containers: 15 | - name: mynginx 16 | image: nginx:latest 17 | ports: 18 | - containerPort: 80 19 | ``` 20 | 21 | # Labels 22 | 23 | In Kubernetes, labels are a system to organize objects into groups. Labels are key-value pairs that are attached to each object. Label selectors can be passed along with a request to the apiserver to retrieve a list of objects which match that label selector. 24 | 25 | To add a label to a pod, add a labels section under metadata in the pod definition: 26 | ``` 27 | apiVersion: v1 28 | kind: Pod 29 | metadata: 30 | labels: 31 | run: nginx 32 | ... 33 | ``` 34 | To label a running pod 35 | ``` 36 | kubectl label pod mynginx type=webserver 37 | pod "mynginx" labeled 38 | ``` 39 | To list pods based on labels 40 | ``` 41 | kubectl get pods -l type=webserver 42 | NAME READY STATUS RESTARTS AGE 43 | mynginx 1/1 Running 0 21m 44 | 45 | ``` 46 | 47 | 48 | # Deployments 49 | 50 | A Deployment provides declarative updates for pods and replicas. You only need to describe the desired state in a Deployment object, and it will change the actual state to the desired state. 
The Deployment object defines the following details:
51 | 
52 | - The elements of a Replication Controller definition
53 | - The strategy for transitioning between deployments
54 | To create a deployment for an nginx web server, edit the nginx-deploy.yaml file as follows:
55 | ```
56 | apiVersion: apps/v1
57 | kind: Deployment
58 | metadata:
59 |   generation: 1
60 |   labels:
61 |     run: nginx
62 |   name: nginx
63 |   namespace: default
64 | spec:
65 |   replicas: 3
66 |   selector:
67 |     matchLabels:
68 |       run: nginx
69 |   strategy:
70 |     rollingUpdate:
71 |       maxSurge: 1
72 |       maxUnavailable: 1
73 |     type: RollingUpdate
74 |   template:
75 |     metadata:
76 |       labels:
77 |         run: nginx
78 |     spec:
79 |       containers:
80 |       - image: nginx:latest
81 |         imagePullPolicy: Always
82 |         name: nginx
83 |         ports:
84 |         - containerPort: 80
85 |           protocol: TCP
86 |       dnsPolicy: ClusterFirst
87 |       restartPolicy: Always
88 |       securityContext: {}
89 |       terminationGracePeriodSeconds: 30
90 | 
91 | ```
92 | and create the deployment:
93 | ```
94 | kubectl create -f nginx-deploy.yaml
95 | deployment "nginx" created
96 | ```
97 | The deployment creates the following objects:
98 | ```
99 | kubectl get all -l run=nginx
100 | 
101 | NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
102 | deploy/nginx   3         3         3            3           4m
103 | 
104 | NAME                 DESIRED   CURRENT   READY     AGE
105 | rs/nginx-664452237   3         3         3         4m
106 | 
107 | NAME                       READY     STATUS    RESTARTS   AGE
108 | po/nginx-664452237-h8dh0   1/1       Running   0          4m
109 | po/nginx-664452237-ncsh1   1/1       Running   0          4m
110 | po/nginx-664452237-vts63   1/1       Running   0          4m
111 | ```
112 | 
113 | # Services
114 | 
115 | Kubernetes pods, like the containers inside them, are ephemeral. Replication Controllers create and destroy pods dynamically, e.g. when scaling up or down or when doing rolling updates. While each pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of pods provides functionality to other pods inside the Kubernetes cluster, how do the consuming pods find out and keep track of which pods are currently in that set?
116 | 
117 | A Kubernetes Service is an abstraction which defines a logical set of pods and a policy by which to access them. The set of pods targeted by a Service is usually determined by a label selector. Kubernetes offers a simple Endpoints API that is updated whenever the set of pods in a service changes.
118 | 
119 | To create a service for our nginx web server, edit the nginx-service.yaml file:
120 | ```
121 | apiVersion: v1
122 | kind: Service
123 | metadata:
124 |   name: nginx
125 |   labels:
126 |     run: nginx
127 | spec:
128 |   selector:
129 |     run: nginx
130 |   ports:
131 |   - protocol: TCP
132 |     port: 8000
133 |     targetPort: 80
134 |   type: ClusterIP
135 | ```
136 | Create the service:
137 | ```
138 | kubectl create -f nginx-service.yaml
139 | service "nginx" created
140 | ```
141 | Check the new service:
142 | ```
143 | kubectl get service -l run=nginx
144 | NAME      CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
145 | nginx     10.254.60.24   <none>        8000/TCP   38s
146 | ```
147 | Describe the service:
148 | ```
149 | kubectl describe service nginx
150 | Name:              nginx
151 | Namespace:         default
152 | Labels:            run=nginx
153 | Selector:          run=nginx
154 | Type:              ClusterIP
155 | IP:                10.254.60.24
156 | Port:              8000/TCP
157 | Endpoints:         172.30.21.3:80,172.30.4.4:80,172.30.53.4:80
158 | Session Affinity:  None
159 | No events.
160 | ```
161 | The above service is associated with our previous nginx pods. Pay attention to the service selector run=nginx field. It tells Kubernetes that all pods with the label run=nginx are associated with this service, and should have traffic distributed amongst them. In other words, the service provides an abstraction layer, and it is the input point to reach all of the associated pods.
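To make that association concrete, you can list the endpoints Kubernetes tracks for the service. A quick check, assuming the nginx deployment and service created above are still running (the addresses shown are the ones from the describe output above; yours will differ):

```
kubectl get endpoints nginx
NAME      ENDPOINTS                                      AGE
nginx     172.30.21.3:80,172.30.4.4:80,172.30.53.4:80   2m
```

Each entry corresponds to one pod selected by run=nginx; as pods are added or removed, the endpoint list is updated automatically.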
163 | 
-------------------------------------------------------------------------------- /Lab 2/Dockerfile: --------------------------------------------------------------------------------
1 | FROM node:9.4.0-alpine
2 | COPY app.js .
3 | COPY package.json .
4 | RUN npm install &&\
5 |     apk update &&\
6 |     apk upgrade
7 | EXPOSE 8080
8 | CMD node app.js
9 | 
-------------------------------------------------------------------------------- /Lab 2/README.md: --------------------------------------------------------------------------------
1 | # Lab 2: Scale and update apps -- services, replica sets, and health checks
2 | 
3 | In this lab, understand how to update the number of replicas a deployment has and how to safely roll out an update on Kubernetes. You'll also learn how to perform a simple health check.
4 | 
5 | For this lab, you need a running deployment with a single replica. At
6 | the end of the previous lab, we cleaned up the running
7 | deployment. Let's first recreate that deployment with:
8 | ```
9 | kubectl run hello-world --image=us.icr.io/<namespace>/hello-world:1
10 | ```
11 | 
12 | # 1. Scale apps with replicas
13 | 
14 | A *replica* is how Kubernetes accomplishes scaling out a deployment. A replica is a copy of a pod that already contains a running service. By having multiple replicas of a pod, you can ensure your deployment has the available resources to handle increasing load on your application.
15 | 
16 | 1. `kubectl` provides a `scale` subcommand to change the size of an
17 |    existing deployment. Let's use it to go from our single running
18 |    instance to 10 instances.
19 | 
20 |     ``` console
21 |     $ kubectl scale --replicas=10 deployment hello-world
22 |     ```
23 | 
24 |     You should see output showing that the deployment has been scaled.
25 | 
26 |     ```
27 |     deployment "hello-world" scaled
28 |     ```
29 | 
30 |     Kubernetes will now act according to the desired state model and
31 |     try to make the condition of 10 replicas true. It will do this
32 |     by starting new pods with the same configuration.
33 | 
34 | 2. To see your changes being rolled out, you can run: `kubectl rollout status deployment/hello-world`.
35 | 
36 |     The rollout might occur so quickly that the following messages might _not_ display:
37 | 
38 |     ```
39 |     => kubectl rollout status deployment/hello-world
40 |     Waiting for rollout to finish: 1 of 10 updated replicas are available...
41 |     Waiting for rollout to finish: 2 of 10 updated replicas are available...
42 |     Waiting for rollout to finish: 3 of 10 updated replicas are available...
43 |     Waiting for rollout to finish: 4 of 10 updated replicas are available...
44 |     Waiting for rollout to finish: 5 of 10 updated replicas are available...
45 |     Waiting for rollout to finish: 6 of 10 updated replicas are available...
46 |     Waiting for rollout to finish: 7 of 10 updated replicas are available...
47 |     Waiting for rollout to finish: 8 of 10 updated replicas are available...
48 |     Waiting for rollout to finish: 9 of 10 updated replicas are available...
49 |     deployment "hello-world" successfully rolled out
50 |     ```
51 | 
52 | 3. Once the rollout has finished, ensure your pods are running by using: `kubectl get pods`.
53 | 54 | You should see output listing 10 replicas of your deployment: 55 | 56 | ``` 57 | => kubectl get pods 58 | NAME READY STATUS RESTARTS AGE 59 | hello-world-562211614-1tqm7 1/1 Running 0 1d 60 | hello-world-562211614-1zqn4 1/1 Running 0 2m 61 | hello-world-562211614-5htdz 1/1 Running 0 2m 62 | hello-world-562211614-6h04h 1/1 Running 0 2m 63 | hello-world-562211614-ds9hb 1/1 Running 0 2m 64 | hello-world-562211614-nb5qp 1/1 Running 0 2m 65 | hello-world-562211614-vtfp2 1/1 Running 0 2m 66 | hello-world-562211614-vz5qw 1/1 Running 0 2m 67 | hello-world-562211614-zksw3 1/1 Running 0 2m 68 | hello-world-562211614-zsp0j 1/1 Running 0 2m 69 | ``` 70 | 71 | **Tip:** Another way to improve availability is to [add clusters and regions](https://cloud.ibm.com/docs/containers?topic=containers-clusters) to your deployment, as shown in the following diagram: 72 | 73 | ![HA with more clusters and regions](../images/cluster_ha_roadmap.png) 74 | 75 | # 2. Update and roll back apps 76 | 77 | Kubernetes allows you to use a rollout to update an app deployment with a new container image. This allows you to easily update the running image and also allows you to easily undo a rollout, if a problem is discovered after deployment. 78 | 79 | In the previous lab, we created an image with a `1` tag. Let's make a version of the image that includes new content and use a `2` tag. This lab also contains a `Dockerfile`. Let's build and push it up to our image registry. 80 | 81 | To update and roll back: 82 | 1. Build the new container image with a `2` tag: 83 | 84 | ```ibmcloud cr build --tag us.icr.io//hello-world:2 .``` 85 | 86 | 2. Using `kubectl`, you can now update your deployment to use the 87 | latest image. `kubectl` allows you to change details about existing 88 | resources with the `set` subcommand. We can use it to change the 89 | image being used. 90 | 91 | ```kubectl set image deployment/hello-world hello-world=us.icr.io//hello-world:2``` 92 | 93 | Note that a pod could have multiple containers, in which case each container will have its own name. Multiple containers can be updated at the same time. ([More information](https://kubernetes.io/docs/user-guide/kubectl/kubectl_set_image/).) 94 | 95 | 3. Run `kubectl rollout status deployment/hello-world` or `kubectl get replicasets` to check the status of the rollout. The rollout might occur so quickly that the following messages might _not_ display: 96 | 97 | ``` 98 | => kubectl rollout status deployment/hello-world 99 | Waiting for rollout to finish: 2 out of 10 new replicas have been updated... 100 | Waiting for rollout to finish: 3 out of 10 new replicas have been updated... 101 | Waiting for rollout to finish: 3 out of 10 new replicas have been updated... 102 | Waiting for rollout to finish: 3 out of 10 new replicas have been updated... 103 | Waiting for rollout to finish: 4 out of 10 new replicas have been updated... 104 | Waiting for rollout to finish: 4 out of 10 new replicas have been updated... 105 | Waiting for rollout to finish: 4 out of 10 new replicas have been updated... 106 | Waiting for rollout to finish: 4 out of 10 new replicas have been updated... 107 | Waiting for rollout to finish: 4 out of 10 new replicas have been updated... 108 | Waiting for rollout to finish: 5 out of 10 new replicas have been updated... 109 | Waiting for rollout to finish: 5 out of 10 new replicas have been updated... 110 | Waiting for rollout to finish: 5 out of 10 new replicas have been updated... 
111 | Waiting for rollout to finish: 6 out of 10 new replicas have been updated... 112 | Waiting for rollout to finish: 6 out of 10 new replicas have been updated... 113 | Waiting for rollout to finish: 6 out of 10 new replicas have been updated... 114 | Waiting for rollout to finish: 7 out of 10 new replicas have been updated... 115 | Waiting for rollout to finish: 7 out of 10 new replicas have been updated... 116 | Waiting for rollout to finish: 7 out of 10 new replicas have been updated... 117 | Waiting for rollout to finish: 7 out of 10 new replicas have been updated... 118 | Waiting for rollout to finish: 8 out of 10 new replicas have been updated... 119 | Waiting for rollout to finish: 8 out of 10 new replicas have been updated... 120 | Waiting for rollout to finish: 8 out of 10 new replicas have been updated... 121 | Waiting for rollout to finish: 8 out of 10 new replicas have been updated... 122 | Waiting for rollout to finish: 9 out of 10 new replicas have been updated... 123 | Waiting for rollout to finish: 9 out of 10 new replicas have been updated... 124 | Waiting for rollout to finish: 9 out of 10 new replicas have been updated... 125 | Waiting for rollout to finish: 1 old replicas are pending termination... 126 | Waiting for rollout to finish: 1 old replicas are pending termination... 127 | Waiting for rollout to finish: 1 old replicas are pending termination... 128 | Waiting for rollout to finish: 9 of 10 updated replicas are available... 129 | Waiting for rollout to finish: 9 of 10 updated replicas are available... 130 | Waiting for rollout to finish: 9 of 10 updated replicas are available... 131 | deployment "hello-world" successfully rolled out 132 | ``` 133 | 134 | ``` 135 | => kubectl get replicasets 136 | NAME DESIRED CURRENT READY AGE 137 | hello-world-1663871401 9 9 9 1h 138 | hello-world-3254495675 2 2 0 139 | => kubectl get replicasets 140 | NAME DESIRED CURRENT READY AGE 141 | hello-world-1663871401 7 7 7 1h 142 | hello-world-3254495675 4 4 2 143 | ... 144 | => kubectl get replicasets 145 | NAME DESIRED CURRENT READY AGE 146 | hello-world-1663871401 0 0 0 1h 147 | hello-world-3254495675 10 10 10 1m 148 | ``` 149 | 150 | 4. Perform a `curl :` to confirm your new code is active. 151 | 152 | 5. If you decide to undo your latest rollout, call: `kubectl rollout undo deployment/`. 153 | 154 | # 3. Check the health of apps 155 | 156 | Kubernetes uses availability checks (liveness probes) to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs. 157 | 158 | Also, Kubernetes uses readiness checks to know when a container is ready to start accepting traffic. A pod is considered ready when all of its containers are ready. One use of this check is to control which pods are used as backends for services. When a pod is not ready, it is removed from load balancers. 159 | 160 | In this example, we have defined a HTTP liveness probe to check health of the container every five seconds. For the first 10-15 seconds the `/healthz` returns a `200` response and will fail afterward. Kubernetes will automatically restart the service. 161 | 162 | 1. Open the `healthcheck.yml` file with a text editor. This configuration script combines a few steps from the previous lesson to create a deployment and a service at the same time. 
App developers can use these scripts when updates are made or to troubleshoot issues by re-creating the pods: 163 | 164 | 1. Update the details for the image in your private registry namespace: 165 | 166 | ``` 167 | image: ".icr.io//hello-world:2" 168 | ``` 169 | 170 | 2. Note the HTTP liveness probe that checks the health of the container every five seconds. 171 | 172 | ``` 173 | livenessProbe: 174 | httpGet: 175 | path: /healthz 176 | port: 8080 177 | initialDelaySeconds: 5 178 | periodSeconds: 5 179 | ``` 180 | 181 | 3. In the **Service** section, note the `NodePort`. Rather than generating a random NodePort like you did in the previous lesson, you can specify a port in the 30000 - 32767 range. This example uses 30072. 182 | 183 | 2. Run the configuration script in the cluster. When the deployment and the service are created, the app is available for anyone to see: 184 | 185 | ``` 186 | kubectl apply -f healthcheck.yml 187 | ``` 188 | 189 | Now that all the deployment work is done, check how everything turned out. You might notice that because more instances are running, things might run a bit slower. 190 | 191 | 3. Open a browser and check out the app. To form the URL, combine the IP with the NodePort that was specified in the configuration script. To get the public IP address for the worker node: 192 | 193 | ``` 194 | ibmcloud ks workers --cluster 195 | ``` 196 | 197 | In a browser, you'll see a success message. If you do not see this text, don't worry. This app is designed to go up and down. 198 | 199 | For the first 10 - 15 seconds, a 200 message is returned, so you know that the app is running successfully. After those 15 seconds, a timeout message is displayed, as is designed in the app. 200 | 201 | 4. Launch your Kubernetes dashboard: 202 | 203 | 1. Get your credentials for Kubernetes. 204 | 205 | ``` 206 | kubectl config view -o jsonpath='{.users[0].user.auth-provider.config.id-token}' 207 | ``` 208 | 209 | 2. Copy the **id-token** value that is shown in the output. 210 | 211 | 3. Set the proxy with the default port number. 212 | 213 | ``` 214 | kubectl proxy 215 | ``` 216 | 217 | Output: 218 | 219 | ``` 220 | Starting to serve on 127.0.0.1:8001 221 | ``` 222 | 223 | 4. Sign in to the dashboard. 224 | 225 | 1. Open the following URL in a web browser. 226 | 227 | ``` 228 | http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ 229 | ``` 230 | 231 | 2. In the sign-on page, select the **Token** authentication method. 232 | 233 | 3. Then, paste the **id-token** value that you previously copied into the **Token** field and click **SIGN IN**. 234 | 235 | In the **Workloads** tab, you can see the resources that you created. From this tab, you can continually refresh and see that the health check is working. In the **Pods** section, you can see how many times the pods are restarted when the containers in them are re-created. You might happen to catch errors in the dashboard, indicating that the health check caught a problem. Give it a few minutes and refresh again. You see the number of restarts changes for each pod. 236 | 237 | 5. Ready to delete what you created before you continue? This time, you can use the same configuration script to delete both of the resources you created. 238 | 239 | ```kubectl delete -f healthcheck.yml``` 240 | 241 | 6. When you are done exploring the Kubernetes dashboard, in your CLI, enter `CTRL+C` to exit the `proxy` command. 242 | 243 | 244 | Congratulations! You deployed the second version of the app. 
You had to use fewer commands, learned how health check works, and edited a deployment, which is great! Lab 2 is now complete. 245 | -------------------------------------------------------------------------------- /Lab 2/app.js: -------------------------------------------------------------------------------- 1 | var express = require('express') 2 | var os = require("os"); 3 | var hostname = os.hostname(); 4 | var app = express() 5 | var startTime = Date.now() 6 | 7 | var delay = 10000 + Math.floor(Math.random() * 5000) 8 | 9 | app.get('/', function(req, res) { 10 | res.send('Hello world from ' + hostname + '! Great job getting the second stage up and running!\n') 11 | }) 12 | app.get('/healthz', function(req, res) { 13 | if ((Date.now() - startTime) > delay) { 14 | res.status(500).send({ 15 | error: 'Timeout, Health check error!' 16 | }) 17 | } else { 18 | res.send('OK!') 19 | } 20 | }) 21 | app.listen(8080, function() { 22 | console.log('Sample app is listening on port 8080.') 23 | }) 24 | -------------------------------------------------------------------------------- /Lab 2/healthcheck.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: hw-demo-deployment 5 | spec: 6 | selector: 7 | matchLabels: 8 | run: hw-demo-health 9 | test: hello-world-demo 10 | replicas: 3 11 | template: 12 | metadata: 13 | name: pod-liveness-http 14 | labels: 15 | run: hw-demo-health 16 | test: hello-world-demo 17 | spec: 18 | containers: 19 | - name: hw-demo-container 20 | image: "us.icr.io//hello-world:2" 21 | imagePullPolicy: Always 22 | livenessProbe: 23 | httpGet: 24 | path: /healthz 25 | port: 8080 26 | initialDelaySeconds: 5 27 | periodSeconds: 5 28 | --- 29 | apiVersion: v1 30 | kind: Service 31 | metadata: 32 | name: hw-demo-service 33 | labels: 34 | run: hw-demo-health 35 | spec: 36 | type: NodePort 37 | selector: 38 | run: hw-demo-health 39 | ports: 40 | - protocol: TCP 41 | port: 8080 42 | nodePort: 30072 43 | -------------------------------------------------------------------------------- /Lab 2/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "hello-world-armada", 3 | "private": false, 4 | "version": "0.0.1", 5 | "description": "Basic hello world application for Node.js", 6 | "dependencies": { 7 | "express": "4.x" 8 | } 9 | } 10 | -------------------------------------------------------------------------------- /Lab 3/README.md: -------------------------------------------------------------------------------- 1 | # Lab 3: Deploy an application with IBM Watson services 2 | 3 | In this lab, set up an application to leverage the Watson Tone Analyzer service. If you have yet to create a cluster, please refer to first lab of this course. 4 | 5 | # 1. Build the Watson images 6 | 7 | 1. Build the `watson` image. 8 | 9 | ```ibmcloud cr build -t us.icr.io//watson ./watson``` 10 | 11 | **Tip:** If you run out of registry space, clean up the previous lab's images with this example command: 12 | ```ibmcloud cr image-rm us.icr.io//hello-world:2``` 13 | 14 | 2. Build the `watson-talk` image. 15 | 16 | ```ibmcloud cr build -t us.icr.io//watson-talk ./watson-talk``` 17 | 18 | 3. In watson-deployment.yml, update the image tag with the registry path to the image you created in the following two sections. 
19 | 20 | ```yml 21 | spec: 22 | containers: 23 | - name: watson 24 | image: "us.icr.io//watson" 25 | # change to the path of the watson image you just pushed 26 | # ex: image: "us.icr.io//watson" 27 | ... 28 | spec: 29 | containers: 30 | - name: watson-talk 31 | image: "us.icr.io//watson-talk" 32 | # change to the path of the watson-talk image you just pushed 33 | # ex: image: "us.icr.io//watson-talk" 34 | ``` 35 | 36 | 37 | # 2. Create an instance of the IBM Watson service via the CLI 38 | 39 | In order to begin using the Watson Tone Analyzer (the IBM Cloud service for this application), you must first request an instance of the Watson service in the org and space where you have set up our cluster. 40 | 41 | 1. If you need to check what space and org you are currently using, simply run `ibmcloud login`. Then use `ibmcloud target --cf` to select the space and org you were using for the previous labs. 42 | 43 | 2. Once you have set your space and org, run `ibmcloud cf create-service tone_analyzer standard tone`, where `tone` is the name you will use for the Watson Tone Analyzer service. 44 | 45 | **Note:** When you add the Tone Analyzer service to your account, a message is displayed that the service is not free. If you [limit your API calls](https://www.ibm.com/watson/developercloud/tone-analyzer.html#pricing-block), this course does not incur charges from the Watson service. 46 | 47 | 3. Run `ibmcloud cf services` to ensure a service named `tone` was created. 48 | 49 | # 3. Bind the Watson service to your cluster 50 | 51 | 1. Run `ibmcloud ks cluster service bind --cluster --namespace default --service tone` to bind the service to your cluster. This command will create a secret for the service. 52 | 53 | 2. Verify the secret was created by running `kubectl get secrets`. 54 | 55 | # 4. Create pods and services 56 | 57 | Now that the service is bound to the cluster, you want to expose the secret to your pod so that it can utilize the service. To do this, create a secret datastore as a part of your deployment configuration. This has been done for you in watson-deployment.yml: 58 | 59 | ```yml 60 | volumeMounts: 61 | - mountPath: /opt/service-bind 62 | name: service-bind-volume 63 | volumes: 64 | - name: service-bind-volume 65 | secret: 66 | defaultMode: 420 67 | secretName: binding-tone 68 | # from the kubectl get secrets command above 69 | ``` 70 | 71 | 1. Build the application using the yaml. 72 | 73 | ```kubectl create -f watson-deployment.yml``` 74 | 75 | 2. Verify the pod has been created: 76 | 77 | ```kubectl get pods``` 78 | 79 | Your secret has now been created. Note that for this lab, this has been done for you. 80 | 81 | # 5. Putting it all together - Run the application and service 82 | 83 | By this time you have created pods, services, and volumes for this lab. 84 | 85 | 1. You can open the Kubernetes dashboard and explore all the new objects created or use the following commands. 86 | 87 | ``` 88 | kubectl get pods 89 | kubectl get deployments 90 | kubectl get services 91 | ``` 92 | 93 | 2. Get the public IP for the worker node to access the application: 94 | 95 | ```ibmcloud ks workers --cluster ``` 96 | 97 | 3. Now that the you have the container IP and port, go to your favorite web browser and launch the following URL to analyze the text and see output. 98 | 99 | ```http://:30080/analyze/"Today is a beautiful day"``` 100 | 101 | If you can see JSON output on your screen, congratulations! You are done with this lab! 
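If you prefer to stay in the terminal, the same check can be done with curl. A minimal sketch, where `<public_IP>` stands for the worker node IP you looked up in the previous step (the sample text is URL-encoded because it contains spaces):

```
curl "http://<public_IP>:30080/analyze/Today%20is%20a%20beautiful%20day"
```

A JSON document describing the detected tones indicates that the watson-talk front end reached the watson back end and that the bound Tone Analyzer credentials are working.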
102 | -------------------------------------------------------------------------------- /Lab 3/watson-deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: watson-pod 5 | spec: 6 | replicas: 1 7 | selector: 8 | matchLabels: 9 | run: watson-demo 10 | template: 11 | metadata: 12 | name: watson-pod 13 | labels: 14 | run: watson-demo 15 | spec: 16 | containers: 17 | - name: watson 18 | image: "us.icr.io//watson" # edit here! 19 | imagePullPolicy: Always 20 | volumeMounts: 21 | - mountPath: /opt/service-bind 22 | name: service-bind-volume 23 | volumes: 24 | - name: service-bind-volume 25 | secret: 26 | defaultMode: 420 27 | secretName: binding-tone 28 | --- 29 | apiVersion: v1 30 | kind: Service 31 | metadata: 32 | name: watson-service 33 | labels: 34 | run: watson-demo 35 | spec: 36 | type: NodePort 37 | selector: 38 | run: watson-demo 39 | ports: 40 | - protocol: TCP 41 | port: 8081 42 | nodePort: 30081 43 | --- 44 | apiVersion: apps/v1 45 | kind: Deployment 46 | metadata: 47 | name: watson-talk-pod 48 | spec: 49 | replicas: 1 50 | selector: 51 | matchLabels: 52 | run: watson-talk-demo 53 | template: 54 | metadata: 55 | name: watson-talk-pod 56 | labels: 57 | run: watson-talk-demo 58 | spec: 59 | containers: 60 | - name: watson-talk 61 | image: "us.icr.io//watson-talk" # edit here! 62 | imagePullPolicy: Always 63 | --- 64 | apiVersion: v1 65 | kind: Service 66 | metadata: 67 | name: watson-talk-service 68 | labels: 69 | run: watson-talk-demo 70 | spec: 71 | type: NodePort 72 | selector: 73 | run: watson-talk-demo 74 | ports: 75 | - protocol: TCP 76 | port: 8080 77 | nodePort: 30080 78 | -------------------------------------------------------------------------------- /Lab 3/watson-talk/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:9.4.0-alpine 2 | COPY app.js . 3 | COPY package.json . 4 | RUN npm install &&\ 5 | apk update &&\ 6 | apk upgrade 7 | EXPOSE 8080 8 | CMD node app.js 9 | -------------------------------------------------------------------------------- /Lab 3/watson-talk/app.js: -------------------------------------------------------------------------------- 1 | 2 | //Add required modules here 3 | var express = require('express'); 4 | var request = require('request'); 5 | var app = express(); 6 | //http://localhost:3000/analyze/hello! 
7 | 8 | app.get('/', function(req, res) { 9 | res.send('Ready to take requests and get them analyzed with Watson Tone Analyzer Service!') 10 | }) 11 | 12 | app.get('/healthz', function(req, res) { 13 | res.send('OK!') 14 | }) 15 | 16 | app.get('/analyze/:string', function(req, res) { 17 | if (!req.params.string) { 18 | res.status(500); 19 | res.send({"Error": "You are not sending a valid string to the Watson Service."}); 20 | console.log("You are not sending a valid string to the Watson Service."); 21 | } 22 | request.get({ url: "http://watson-service:8081/analyze?text=" + req.params.string }, 23 | function(error, response, body) { 24 | if (!error && response.statusCode == 200) { 25 | res.setHeader('Content-Type', 'application/json'); 26 | res.send(body); 27 | } 28 | }); 29 | }); 30 | app.listen(8080, function() { 31 | console.log('tone analyzer frontend is listening on port 8080.') 32 | }); 33 | -------------------------------------------------------------------------------- /Lab 3/watson-talk/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "hello-world-armada", 3 | "private": false, 4 | "version": "0.0.1", 5 | "description": "Basic Watson Tone application for Node.js", 6 | "dependencies": { 7 | "request": "*", 8 | "express": "4.x" 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /Lab 3/watson/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:9.4.0-alpine 2 | COPY app.js . 3 | COPY package.json . 4 | RUN npm install &&\ 5 | apk update &&\ 6 | apk upgrade 7 | EXPOSE 8080 8 | CMD node app.js -------------------------------------------------------------------------------- /Lab 3/watson/app.js: -------------------------------------------------------------------------------- 1 | var express = require('express') 2 | var app = express() 3 | var startTime = Date.now() 4 | var fs = require('fs') 5 | 6 | const ToneAnalyzerV3 = require('ibm-watson/tone-analyzer/v3'); 7 | const { IamAuthenticator } = require('ibm-watson/auth'); 8 | 9 | var binding = JSON.parse(fs.readFileSync('/opt/service-bind/binding', 'utf8')); 10 | 11 | const tone_analyzer = binding.apikey ? 
new ToneAnalyzerV3({ 12 | authenticator: new IamAuthenticator({ 13 | apikey: binding.apikey, 14 | }), 15 | url: binding.url, 16 | version: '2016-05-19' 17 | }) : new ToneAnalyzerV3({ 18 | username: binding.username, 19 | password: binding.password, 20 | url: binding.url, 21 | version: '2016-05-19' 22 | }); 23 | 24 | app.get('/', function(req, res) { 25 | res.send('Ready to analyze Tone!') 26 | }) 27 | 28 | app.get('/healthz', function(req, res) { 29 | res.send('OK!') 30 | }) 31 | 32 | app.get('/analyze', function(req, res) { 33 | tone_analyzer.tone({ toneInput: { 34 | 'text': req.query.text, 35 | }, }, function(err, tone) { 36 | if (err) { 37 | res.status(500).send(err); 38 | } else { 39 | res.send(JSON.stringify(tone, null, 2)); 40 | } 41 | }); 42 | }) 43 | 44 | app.listen(8081, function() { 45 | console.log('Sample app is listening on port 8081.') 46 | }) 47 | -------------------------------------------------------------------------------- /Lab 3/watson/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "hello-world-armada", 3 | "private": false, 4 | "version": "0.0.1", 5 | "description": "Basic Watson Tone application for Node.js", 6 | "dependencies": { 7 | "express": "4.x", 8 | "ibm-watson": "5.2" 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /Lab 4/README.md: -------------------------------------------------------------------------------- 1 | # Lab 4: Deploy highly available apps with IBM Cloud Kubernetes Service 2 | 3 | The goal of this lab is to learn how to deploy a highly available application. It's easier than many think but can be expensive if deploying across multiple availability zones. In order to explore the concepts, this lab shows how to deploy an application across two worker nodes in the same availability zone, which is a basic level of high availability. 4 | 5 | ### Important 6 | 7 | This section requires a paid cluster, and is thus optional for learning purposes. It contains highly useful real-world examples of Kubernetes and is thus valuable, but you will not see questions relating to this lab on the exam. 8 | 9 | # 1. Create a federated Kubernetes cluster: Two clusters running the same application 10 | 11 | 1. To get started, create a paid cluster with two workers and wait for it to provision. If this is your first paid cluster, then you do not need to specify the public vlan and the private vlan. 12 | 13 | ```ibmcloud ks cluster create classic --name --machine-type b2c.4x16 --location --workers 2 --public-vlan --private-vlan ``` 14 | 15 | 2. While waiting, you need to download kubefed, which will allow you to set up a Kubernetes federated cluster. 16 | As this is for lab purposes only, we will be using an IBM Liberty image as a basic template for an imaged application. 17 | 18 | 1. Download the kubefed tarfile using curl: 19 | 20 | ```curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/kubernetes-client-darwin-amd64.tar.gz``` 21 | 22 | 2. Unpack the tarfile: 23 | 24 | ```tar -xzvf kubernetes-client-darwin-amd64.tar.gz``` 25 | 26 | 3. 
Copy the extracted binaries to one of the directories in your `$PATH` and set the executable permission on those binaries: 27 | 28 | `sudo cp kubernetes/client/bin/kubefed /usr/local/bin` 29 | `sudo chmod +x /usr/local/bin/kubefed` 30 | `sudo cp kubernetes/client/bin/kubectl /usr/local/bin` 31 | `sudo chmod +x /usr/local/bin/kubectl` 32 | 33 | # 2. Choose a host cluster 34 | 35 | You need to choose one of your Kubernetes worker node clusters to be the host cluster. The host cluster hosts the components that make up your federation control plane. Ensure that you have a kubeconfig entry in your local kubeconfig that corresponds to the host cluster. 36 | 37 | 1. Verify that you have the required kubeconfig entry by running: 38 | 39 | ```kubectl config get-contexts``` 40 | 41 | The output should contain an entry corresponding to your host cluster, similar to the following: 42 | 43 | ``` 44 | CURRENT NAME CLUSTER AUTHINFO NAMESPACE 45 | * gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1 gke_myproject_asia-east1-b_gce-asia-east1 46 | 47 | ``` 48 | 49 | You’ll need to provide the kubeconfig context (called `name` in the entry above) for your host cluster when you deploy your federation control plane. 50 | 51 | 2. Export the kubeconfig file for the admin user of the cluster: 52 | 53 | ```ibmcloud ks cluster config --cluster --admin``` 54 | 55 | Make a note of it for error handling later. 56 | 57 | # 3. Deploy a federation control plane 58 | 59 | 1. To deploy a federation control plane on your host cluster, run: 60 | 61 | ```kubefed init``` 62 | 63 | When you use `kubefed init`, you must provide the following: 64 | 65 | ``` 66 | Federation name 67 | --host-cluster-context, the kubeconfig context for the host cluster 68 | --dns-provider, one of 'google-clouddns', aws-route53 or coredns 69 | --dns-zone-name, a domain name suffix for your federated services 70 | ``` 71 | 72 | The following example command deploys a federation control plane with the name fellowship, a host cluster context rivendell, and the domain `suffix example.com.`: 73 | 74 | ``` 75 | kubefed init fellowship \ 76 | --host-cluster-context=rivendell \ 77 | --dns-provider="google-clouddns" \ 78 | --dns-zone-name="example.com." 79 | ``` 80 | 81 | The domain suffix specified in `--dns-zone-name` must be an existing domain that you control and that is programmable by your DNS provider. It must also end with a trailing dot. 82 | 83 | 2. Once the federation control plane is initialized, query the namespaces: 84 | 85 | ```kubectl get namespace --context=fellowship``` 86 | 87 | If you do not see the default namespace listed (this is due to a bug). Create it yourself with the following command: 88 | 89 | ```kubectl create namespace default --context=fellowship``` 90 | 91 | 92 | # 4. Check errors 93 | 94 | If you run into an error when using `kubefed init`, like the one below: 95 | 96 | ``` 97 | | => kubefed init fellowship --host-cluster-context=Cluster02 --dns-provider="coredns" --dns-zone-name="example.com." 98 | error: unable to read certificate-authority ca-dal10-Cluster02.pem for Cluster02 due to open ca-dal10-Cluster02.pem: no such file or directory 99 | ``` 100 | 101 | This is due to the symbolic path of the .pem file not being sufficient in the kubeconfig. 
Get the admin kubeconfig file again using: 102 | 103 | ```ibmcloud ks cluster config --cluster --admin``` 104 | 105 | You should get a message like the one below: 106 | 107 | `export KUBECONFIG=/Users/ColemanIBMDevMachine/.bluemix/plugins/container-service/clusters/Cluster02-admin/kube-config-dal10-Cluster02.yml` 108 | 109 | Use a text editor to open the file where the KUBECONFIG is located, (the full path next to KUBECONFIG= in the line above) 110 | 111 | You should see a file that looks similar to the one below: 112 | 113 | ``` yml 114 | apiVersion: v1 115 | clusters: 116 | - name: Cluster02 117 | cluster: 118 | certificate-authority: ca-dal10-Cluster02.pem # This line needs changing to the full path 119 | server: https://169.46.7.238:30705 120 | contexts: 121 | - name: Cluster02 122 | context: 123 | cluster: Cluster02 124 | user: admin 125 | namespace: default 126 | current-context: Cluster02 127 | kind: Config 128 | users: 129 | - name: admin 130 | user: 131 | client-certificate: admin.pem # So does this one... 132 | client-key: admin-key.pem # and this one... 133 | ____________________________________ 134 | ``` 135 | 136 | The .pem file is in the same location as the kube-config.yml for the kubeconfig. Simply swap the local path of the `certificate-authority` to the absolute path you copied from your kubeconfig line. Do the same for `client-certificate` and `client-key`. An example is shown below: 137 | 138 | ``` yml 139 | apiVersion: v1 140 | clusters: 141 | - name: Cluster02 142 | cluster: 143 | certificate-authority: /Users/ColemanIBMDevMachine/.bluemix/plugins/container-service/clusters/Cluster02-admin/ca-dal10-Cluster02.pem 144 | server: https://169.46.7.238:30705 145 | contexts: 146 | - name: Cluster02 147 | context: 148 | cluster: Cluster02 149 | user: admin 150 | namespace: default 151 | current-context: Cluster02 152 | kind: Config 153 | users: 154 | - name: admin 155 | user: 156 | client-certificate: /Users/ColemanIBMDevMachine/.bluemix/plugins/container-service/clusters/Cluster02-admin/admin.pem 157 | client-key: /Users/ColemanIBMDevMachine/.bluemix/plugins/container-service/clusters/Cluster02-admin/admin-key.pem 158 | ____________________________________ 159 | ``` 160 | 161 | Save the file, and then try the `kubefed init` command again. You should see output similar to the text below: 162 | 163 | ``` 164 | ________________________________________________________________________________ 165 | | ~/.bluemix/plugins/container-service/clusters/Cluster02 @ colemans-mbp (ColemanIBMDevMachine) 166 | | => kubefed init fellowship --host-cluster-context=Cluster02 --dns-provider="coredns" --dns-zone-name="example.com." 167 | Creating a namespace federation-system for federation system components... done 168 | Creating federation control plane service...... 169 | ``` 170 | 171 | # 5. Deploy an application to a federated cluster 172 | 173 | The API for federated deployment is compatible with the API for traditional Kubernetes deployment. 174 | 175 | 1. Create a deployment by sending a request to the federation data layer, using `kubectl` by running: 176 | 177 | ```kubectl --context=federation-cluster create -f mydeployment.yaml``` 178 | 179 | The `–context=federation-cluster` flag tells `kubectl` to submit the request to the federation API server, instead of sending it to a Kubernetes cluster. 180 | 181 | This example will deploy a simple nginx image to your federated cluster. 
The configuration file is shown below:
182 | 
183 | ```yml
184 | apiVersion: apps/v1
185 | kind: Deployment
186 | metadata:
187 |   name: nginx-deployment
188 | spec:
189 |   selector:
190 |     matchLabels:
191 |       app: nginx
192 |   replicas: 10 # tells deployment to run 10 pods matching the template
193 |   template: # create pods using pod definition in this template
194 |     metadata:
195 |       # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
196 |       # generated from the deployment name
197 |       labels:
198 |         app: nginx
199 |     spec:
200 |       containers:
201 |       - name: nginx
202 |         image: nginx:1.7.9
203 |         ports:
204 |         - containerPort: 80
205 | ```
206 | 2. Save it locally as deployment.yaml.
207 | 
208 | 3. Once you have saved the nginx configuration file, deploy it to your federated cluster:
209 | 
210 | ```kubectl --context=federation-cluster create -f path/to/deployment.yaml```
211 | 
212 | 
213 | If it works, congratulations! You have successfully deployed a basic application across two worker nodes using federation.
214 | 
-------------------------------------------------------------------------------- /Lab 5/Images/cs_security.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM/container-service-getting-started-wt/e4a57c84a5c309eaea59b6a72c2dcaeaeceb2c88/Lab 5/Images/cs_security.png -------------------------------------------------------------------------------- /Lab 5/README.md: --------------------------------------------------------------------------------
1 | # Lab 5: Set up advanced security with IBM Cloud Kubernetes Service
2 | 
3 | In this lab, get an introduction to Kubernetes-specific security features that are used to limit the attack surface and harden your cluster against network threats. You can use built-in security features for risk analysis and security protection. These features help you protect your cluster infrastructure and network communication, isolate your compute resources, and ensure security compliance across your infrastructure components and container deployments.
4 | 
5 | # 1. Add network policies
6 | In most cases, the default policies do not need to be changed. Only advanced security scenarios might require changes. If you find that you must make changes, install the Calico CLI and create your own network policies.
7 | 
8 | Before you begin:
9 | 
10 | Target the Kubernetes CLI to the cluster. Include the `--admin` option with the `ibmcloud ks cluster config` command, which is used to download the certificates and permission files. This download also includes the keys for the Administrator RBAC role, which you need to run Calico commands.
11 | 
12 | ```ibmcloud ks cluster config --cluster <cluster_name> --admin```
13 | 
14 | **Note:** Calico CLI, Version 1.6.1, is supported.
15 | 
16 | To add network policies:
17 | 
18 | 1. Install the [Calico CLI](https://github.com/projectcalico/calicoctl/releases/tag/v1.6.1).
19 | 
20 | **Tip:** If you are using Windows, install the Calico CLI in the same directory as the IBM Cloud CLI. This setup saves you some filepath changes when you run commands later.
21 | 
22 | 2. For OS X and Linux users, move the executable file to the /usr/local/bin directory:
23 | - Linux:
24 | 
25 | ```mv /<path_to_file>/calicoctl /usr/local/bin/calicoctl```
26 | 
27 | - OS X:
28 | 
29 | ```mv /<path_to_file>/calico-darwin-amd64 /usr/local/bin/calicoctl```
30 | 
31 | 
32 | 3. Make the binary file executable:
33 | 
34 | ```chmod +x /usr/local/bin/calicoctl```
35 | 
36 | 4.
Verify that the Calico commands run properly by checking the Calico CLI client version: 37 | 38 | ```calicoctl version``` 39 | 40 | 41 | # 2. Configure the Calico CLI 42 | 43 | 1. For OS X and Linux, create the /etc/calico directory: 44 | 45 | ```mkdir -p /etc/calico/``` 46 | 47 | For Windows, any directory can be used. 48 | 49 | 2. Create a calicoctl.cfg file. 50 | 51 | OS X and Linux: 52 | 53 | ```sudo vi /etc/calico/calicoctl.cfg``` 54 | 55 | Windows: Create the file with a text editor. 56 | 57 | 3. Enter the following information in the calicoctl.cfg file: 58 | 59 | ``` 60 | apiVersion: v1 61 | kind: calicoApiConfig 62 | metadata: 63 | spec: 64 | etcdEndpoints: 65 | etcdKeyFile: /admin-key.pem 66 | etcdCertFile: /admin.pem 67 | etcdCACertFile: / 68 | ``` 69 | 70 | 4. Retrieve the . 71 | 72 | OS X and Linux: 73 | 74 | ```kubectl get cm -n kube-system calico-config -o yaml | grep "etcd_endpoints:" | awk '{ print $2 }'``` 75 | 76 | Output example: 77 | 78 | ```https://169.1.1.1:30001``` 79 | 80 | Windows: 81 | 1. Get the calico configuration values from the config map: 82 | 83 | ```kubectl get cm -n kube-system calico-config -o yaml``` 84 | 85 | 2. In the data section, locate the etcd_endpoints value. Example: https://169.1.1.1:30001. 86 | 87 | 5. Retrieve the , the directory that the Kubernetes certificates are downloaded in. 88 | 89 | OS X and Linux: 90 | 91 | ```dirname $KUBECONFIG``` 92 | 93 | Output example: 94 | 95 | ```/home/sysadmin/.bluemix/plugins/container-service/clusters/-admin/``` 96 | 97 | Windows: 98 | 99 | ```echo %KUBECONFIG%``` 100 | 101 | Output example: 102 | 103 | ```C:/Users//.bluemix/plugins/container-service/-admin/kube-config-prod--.yml``` 104 | 105 | **Note:** To get the directory path, remove the file name kube-config-prod--.yml from the end of the output. 106 | 107 | 6. Retrieve the ca-*pem_file. 108 | 109 | OS X and Linux: 110 | 111 | ``` 112 | ls `dirname $KUBECONFIG` | grep ca-*.pem 113 | ``` 114 | Windows: 115 | 116 | 1. Open the directory you retrieved in the last step. 117 | 118 | ```C:\Users\.bluemix\plugins\container-service\-admin\``` 119 | 120 | 2. Locate the ca-*pem_file file. 121 | 122 | 7. Verify that the Calico configuration is working correctly. 123 | 124 | OS X and Linux: 125 | 126 | ```calicoctl get nodes``` 127 | 128 | Windows: 129 | 130 | ```calicoctl get nodes --config=/calicoctl.cfg``` 131 | 132 | Output: 133 | ``` 134 | NAME 135 | kube-dal10-crc21191ee3997497ca90c8173bbdaf560-w1.cloud.ibm 136 | kube-dal10-crc21191ee3997497ca90c8173bbdaf560-w2.cloud.ibm 137 | kube-dal10-crc21191ee3997497ca90c8173bbdaf560-w3.cloud.ibm 138 | ``` 139 | 140 | 8. Examine the existing network policies. 141 | 142 | 9. View the Calico host endpoint: 143 | 144 | ```calicoctl get hostendpoint -o yaml``` 145 | 146 | 10. View all of the Calico and Kubernetes network policies that were created for the cluster. This list includes policies that might not be applied to any pods or hosts yet. For a network policy to be enforced, it must find a Kubernetes resource that matches the selector that was defined in the Calico network policy. 147 | 148 | ```calicoctl get policy -o wide``` 149 | 150 | 11. View details for a network policy: 151 | 152 | ```calicoctl get policy -o yaml ``` 153 | 154 | 12. View the details of all network policies for the cluster: 155 | 156 | ```calicoctl get policy -o yaml``` 157 | 158 | # 3. Define a Calico network policy 159 | 160 | Defining a Calico network policy for Kubernetes clusters is simple once the Calico CLI is installed. 
In this part of the lab, walk through using the Calico APIs directly in conjunction with Kubernetes `NetworkPolicy` in order to define more complex network policies. 161 | 162 | 1. Begin by creating a namespace in your Kubernetes cluster: 163 | 164 | ```kubectl create ns advanced-policy-demo``` 165 | 166 | 2. Enable isolation on the namespace: 167 | 168 | ```kubectl annotate ns advanced-policy-demo "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"``` 169 | 170 | 3. Run an nginx service in the namespace that you created: 171 | 172 | ```kubectl run --namespace=advanced-policy-demo nginx --replicas=2 --image=nginx``` 173 | ```kubectl expose --namespace=advanced-policy-demo deployment nginx --port=80``` 174 | 175 | Now that you’ve created a namespace and a set of pods, you should see those objects show up in the Calico API using `calicoctl`. 176 | 177 | You can see that the namespace has a corresponding network profile. 178 | 179 | `calicoctl get profile -o wide` 180 | 181 | ``` 182 | NAME TAGS 183 | k8s_ns.advanced-policy-demo k8s_ns.advanced-policy-demo 184 | k8s_ns.default k8s_ns.default 185 | k8s_ns.kube-system k8s_ns.kube-system 186 | ``` 187 | 188 | Because you’ve enabled isolation on the namespace, the profile denies all ingress traffic and allows all egress traffic. Inspect the YAML file to verify. 189 | 190 | `calicoctl get profile k8s_ns.advanced-policy-demo -o yaml` 191 | 192 | ``` 193 | - apiVersion: v1 194 | kind: profile 195 | metadata: 196 | name: k8s_ns.advanced-policy-demo 197 | tags: 198 | - k8s_ns.advanced-policy-demo 199 | spec: 200 | egress: 201 | - action: allow 202 | destination: {} 203 | source: {} 204 | ingress: 205 | - action: deny 206 | destination: {} 207 | source: {} 208 | ``` 209 | You can see that this is the case by running another pod in the namespace and attempting to access the nginx service. 210 | 211 | ``` 212 | $ kubectl run --namespace=advanced-policy-demo access --rm -ti --image busybox /bin/sh 213 | Waiting for pod advanced-policy-demo/access-472357175-y0m47 to be running, status is Pending, pod ready: false 214 | If you don't see a command prompt, try pressing enter. 215 | / # wget -q --timeout=5 nginx -O - 216 | wget: download timed out 217 | / # 218 | ``` 219 | 220 | You can also see that the two nginx pods are represented as WorkloadEndpoints in the Calico API. 221 | 222 | ``` 223 | calicoctl get workloadendpoint 224 | 225 | NODE ORCHESTRATOR WORKLOAD NAME 226 | k8s-node-01 k8s advanced-policy-demo.nginx-701339712-x1uqe eth0 227 | k8s-node-02 k8s advanced-policy-demo.nginx-701339712-xeeay eth0 228 | k8s-node-01 k8s kube-system.kube-dns-v19-mjd8x eth0 229 | ``` 230 | 231 | Taking a closer look, you can see that they reference the correct profile for the namespace, and that the correct label information has been filled in. Notice that the endpoint also includes a special label calico/k8s_ns, which is automatically populated with the pod’s Kubernetes namespace. 
232 | 233 | ``` 234 | $ calicoctl get wep --workload advanced-policy-demo.nginx-701339712-x1uqe -o yaml 235 | - apiVersion: v1 236 | kind: workloadEndpoint 237 | metadata: 238 | labels: 239 | calico/k8s_ns: advanced-policy-demo 240 | pod-template-hash: "701339712" 241 | run: nginx 242 | name: eth0 243 | node: k8s-node-01 244 | orchestrator: k8s 245 | workload: advanced-policy-demo.nginx-701339712-x1uqe 246 | spec: 247 | interfaceName: cali347609b8bd7 248 | ipNetworks: 249 | - 192.168.44.65/32 250 | mac: 56:b5:54:be:b2:a2 251 | profiles: 252 | - k8s_ns.advanced-policy-demo 253 | 254 | ``` 255 | 256 | 4. Now, create a new Kubernetes config yaml file, this time with `kind: NetworkPolicy`. The following example shows a network policy that allows traffic. 257 | 258 | 5. Create a file named `networkpol.yaml`, and enter the following information into the file: 259 | 260 | ``` 261 | kind: NetworkPolicy 262 | apiVersion: networking.k8s.io/v1 263 | metadata: 264 | name: access-nginx 265 | namespace: advanced-policy-demo 266 | spec: 267 | podSelector: 268 | matchLabels: 269 | run: nginx 270 | ingress: 271 | - from: 272 | - podSelector: 273 | matchLabels: {} 274 | ``` 275 | 276 | 6. Apply the policies to the cluster. 277 | 278 | OS X and Linux: 279 | 280 | ```calicoctl apply -f networkpol.yaml``` 281 | 282 | It now shows up as a policy object in the Calico API. 283 | 284 | ``` 285 | $ calicoctl get policy -o wide 286 | NAME ORDER SELECTOR 287 | advanced-policy-demo.access-nginx 1000 calico/k8s_ns == 'advanced-policy-demo' && run == 'nginx' 288 | k8s-policy-no-match 2000 has(calico/k8s_ns) 289 | ``` 290 | 291 | Congrats! You just defined your first network policy, and entered into your first foray into network security and network hardening. 292 | -------------------------------------------------------------------------------- /Lab 5/script/script.md: -------------------------------------------------------------------------------- 1 | 2 | # Understand the security offerings available to developers to create a highly secure deployment in IBM Cloud Kubernetes Service 3 | 4 | Every Kubernetes cluster is set up with a network plug-in that is called Calico. Default network policies are set up to secure the public network interface of every worker node. You can use Calico and native Kubernetes capabilities to configure more network policies for a cluster when you have unique security requirements. These network policies specify the network traffic that you want to allow or block to and from a pod in a cluster. 5 | 6 | You can choose between Calico and native Kubernetes capabilities to create network policies for your cluster. You might use Kubernetes network policies to get started, but for more robust capabilities, use the Calico network policies. 7 | 8 | Kubernetes network policies External link icon: Some basic options are provided, such as specifying which pods can communicate with each other. Incoming network traffic for pods can be allowed or blocked for a protocol and port based on the labels and Kubernetes namespaces of the pod that is trying to connect to them. 9 | These policies can be applied by using kubectl commands or the Kubernetes APIs. When these policies are applied, they are converted into Calico network policies and Calico enforces these policies. 
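As a concrete illustration, the sketch below shows what such a Kubernetes network policy might look like and how it would be applied with kubectl. The namespace and labels used here (policy-demo, app: nginx, role: frontend) are placeholders chosen for this example only, not values used elsewhere in this lab. The policy allows TCP traffic on port 80 to pods labeled app: nginx only from pods labeled role: frontend in the same namespace; once applied, it is converted into a Calico policy and enforced as described above.

```
# Illustrative sketch only: the namespace and labels below are assumed placeholders.
# Allows TCP port 80 traffic to pods labeled app=nginx, but only from pods labeled role=frontend.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-nginx
  namespace: policy-demo
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
EOF
```

You can then list the policy with `kubectl get networkpolicy -n policy-demo` before checking the converted Calico policy with calicoctl.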
10 | Calico network policies External link icon: These policies are a superset of the Kubernetes network policies and enhance the native Kubernetes capabilities with the following features: 11 | * Allow or block network traffic on specific network interfaces, not only Kubernetes pod traffic. 12 | * Allow or block incoming (ingress) and outgoing (egress) network traffic. 13 | * Allow or block traffic that is based on a source or destination IP address or CIDR. 14 | 15 | These policies are applied by using calicoctl commands. Calico enforces these policies, including any Kubernetes network policies that are converted to Calico policies, by setting up Linux iptables rules on the Kubernetes worker nodes. Iptables rules serve as a firewall for the worker node to define the characteristics that the network traffic must meet to be forwarded to the targeted resource. 16 | Default policy configuration 17 | When a cluster is created, default network policies are automatically set up for the public network interface of each worker node to limit incoming traffic for a worker node from the public internet. These policies do not affect pod to pod traffic and are set up to allow access to the Kubernetes nodeport, load balancer, and Ingress services. 18 | 19 | Default policies are not applied to pods directly; they are applied to the public network interface of a worker node by using a Calico host endpoint External link icon. When a host endpoint is created in Calico, all traffic to and from that worker node's network interface is blocked, unless that traffic is allowed by a policy. 20 | 21 | Note that a policy to allow SSH does not exist, so SSH access by way of the public network interface is blocked, as are all other ports that do not have a policy to open them. SSH access, and other access, is available on the private network interface of each worker node. 22 | 23 | Important: Do not remove policies that are applied to a host endpoint unless you fully understand the policy and know that you do not need the traffic that is being allowed by the policy. 24 | 25 | 26 | 27 | Default policies for each cluster: 28 | ``` 29 | allow-all-outbound //Allows all outbound traffic. 30 | allow-icmp //Allows incoming icmp packets (pings). 31 | allow-kubelet-port //Allows all incoming traffic to port 10250, which is the port that is used by the kubelet. This policy allows kubectl logs and kubectl exec to work properly in the Kubernetes cluster. 32 | allow-node-port-dnat //Allows incoming nodeport, load balancer, and ingress service traffic to the pods that those services are exposing. 33 | allow-sys-mgmt //Allows incoming connections for specific IBM Cloud Infrastructure (SoftLayer) systems that are used to manage the worker nodes. 34 | allow-vrrp //Allow vrrp packets, which are used to monitor and move virtual IP addresses between worker nodes. 35 | ``` 36 | 37 | The lab will help you install, learn, and set up a basic calico network policy to secure your own clusters. 38 | -------------------------------------------------------------------------------- /License.txt: -------------------------------------------------------------------------------- 1 | #************************************************************************ 2 | # Copyright 2017 IBM 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 
6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | #************************************************************************ 15 | -------------------------------------------------------------------------------- /MAINTAINERS.md: -------------------------------------------------------------------------------- 1 | ## Maintainers Guide 2 | 3 | This guide is intended for maintainers — anybody with commit access to one or more Developer Technology repositories. 4 | 5 | ## Maintainers 6 | 7 | | Name | GitHub | email | 8 | |---|---|---| 9 | | Nathan Fritze | nfritze | nfritz@us.ibm.com | 10 | | Nathan LeViere | nathanleviere | nathanleviere@us.ibm.com | 11 | 12 | ## Methodology 13 | 14 | A master branch. This branch MUST be releasable at all times. Commits and merges against this branch MUST contain only bugfixes and/or security fixes. Maintenance releases are tagged against master. 15 | 16 | A develop branch. This branch contains your proposed changes. 17 | 18 | The remainder of this document details how to merge pull requests to the repositories. 19 | 20 | ## Merge approval 21 | 22 | The project maintainers use LGTM (Looks Good To Me) in comments on the code review to 23 | indicate acceptance. A change requires an LGTM from one of the maintainers of each 24 | component affected. 25 | 26 | ## Reviewing Pull Requests 27 | 28 | We recommend reviewing pull requests directly within GitHub. This allows public commentary on changes, providing transparency for all users. When providing feedback, be civil, courteous, and kind. Disagreement is fine, so long as the discourse is carried out politely. If we see a record of uncivil or abusive comments, we will revoke your commit privileges and invite you to leave the project. 29 | 30 | During your review, consider the following points: 31 | 32 | ## Does the change have impact? 33 | 34 | While fixing typos is nice as it adds to the overall quality of the project, merging one typo fix at a time can be a waste of effort. (Merging many typo fixes because somebody reviewed the entire component, however, is useful!) Other examples to be wary of: 35 | 36 | Changes in variable names. Ask whether or not the change will make understanding the code easier, or if it is simply a personal preference on the part of the author. 37 | 38 | Essentially: feel free to close issues that do not have impact. 39 | 40 | ## Do the changes make sense? 41 | 42 | If you do not understand what the changes are or what they accomplish, ask the author for clarification. Ask the author to add comments and/or clarify test case names to make the intentions clear. 43 | 44 | At times, such clarification will reveal that the author may not be using the code correctly, or is unaware of features that accommodate their needs. If you feel this is the case, work up a code sample that would address the issue for them, and feel free to close the issue once they confirm. 45 | 46 | ## Is this a new feature? If so: 47 | 48 | Does the issue contain narrative indicating the need for the feature? If not, ask them to provide that information. Since the issue will be linked in the changelog, this will often be a user's first introduction to it. 
49 | 50 | Are new unit tests in place that test all new behaviors introduced? If not, do not merge the feature until they are! 51 | Is documentation in place for the new feature? (See the documentation guidelines). If not do not merge the feature until it is! 52 | Is the feature necessary for general use cases? Try and keep the scope of any given component narrow. If a proposed feature does not fit that scope, recommend to the user that they maintain the feature on their own, and close the request. You may also recommend that they see if the feature gains traction amongst other users, and suggest they re-submit when they can show such support. 53 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # IBM Cloud Kubernetes Service lab 3 | 4 | # An introduction to containers 5 | 6 | Hey, are you looking for a containers 101 course? Check out our [Docker Essentials](https://cognitiveclass.ai/courses/docker-essentials/). 7 | 8 | Containers allow you to run securely isolated applications with quotas on system resources. Containers started out as an individual feature delivered with the linux kernel. Docker launched with making containers easy to use and developers quickly latched onto that idea. Containers have also sparked an interest in microservice architecture, a design pattern for developing applications in which complex applications are broken down into smaller, composable pieces which work together. 9 | 10 | Watch this [video](https://www.youtube.com/watch?v=wlBhtc31I8c) to learn about production uses of containers. 11 | 12 | # Objectives 13 | 14 | This lab is an introduction to using containers on Kubernetes in the IBM Cloud Kubernetes Service. By the end of the course, you'll achieve these objectives: 15 | * Understand core concepts of Kubernetes 16 | * Build a container image and deploy an application on Kubernetes in the IBM Cloud Kubernetes Service 17 | * Control application deployments, while minimizing your time with infrastructure management 18 | * Add AI services to extend your app 19 | * Secure and monitor your cluster and app 20 | 21 | # Prerequisites 22 | * A Pay-As-You-Go or Subscription [IBM Cloud account](https://cloud.ibm.com/registration/) 23 | 24 | # Virtual machines 25 | 26 | Prior to containers, most infrastructure ran not on bare metal, but atop hypervisors managing multiple virtualized operating systems (OSes). This arrangement allowed isolation of applications from one another on a higher level than that provided by the OS. These virtualized operating systems see what looks like their own exclusive hardware. However, this also means that each of these virtual operating systems are replicating an entire OS, taking up disk space. 27 | 28 | # Containers 29 | 30 | Containers provide isolation similar to VMs, except provided by the OS and at the process level. Each container is a process or group of processes run in isolation. Typical containers explicitly run only a single process, as they have no need for the standard system services. What they usually need to do can be provided by system calls to the base OS kernel. 31 | 32 | The isolation on linux is provided by a feature called 'namespaces'. Each different kind of isolation (IE user, cgroups) is provided by a different namespace. 
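If you want to see this isolation first-hand, the short sketch below is an optional aside; it assumes a Linux host with the standard util-linux tools (`lsns` and `unshare`) available, and starts a shell in its own PID namespace, where `ps` reports only the processes created inside that namespace.

```
# List the namespaces that exist on this host and the processes attached to them.
lsns

# Start a shell in a new PID namespace (with /proc remounted so ps reflects it);
# inside, the shell sees itself as PID 1 and none of the host's other processes.
sudo unshare --pid --fork --mount-proc sh -c 'ps aux'
```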
33 | 34 | This is a list of some of the namespaces that are commonly used and visible to the user: 35 | 36 | * PID - process IDs 37 | * USER - user and group IDs 38 | * UTS - hostname and domain name 39 | * NS - mount points 40 | * NET - network devices, stacks, and ports 41 | * CGROUPS - control limits and monitoring of resources 42 | 43 | # VM vs container 44 | 45 | Traditional applications are run on native hardware. A single application does not typically use the full resources of a single machine. We try to run multiple applications on a single machine to avoid wasting resources. We could run multiple copies of the same application, but to provide isolation we use VMs to run multiple application instances (VMs) on the same hardware. These VMs have full operating system stacks which make them relatively large and inefficient due to duplication both at runtime and on disk. 46 | 47 | ![Containers versus VMs](images/VMvsContainer.png) 48 | 49 | Containers allow you to share the host OS. This reduces duplication while still providing the isolation. Containers also allow you to drop unneeded files such as system libraries and binaries to save space and reduce your attack surface. If SSHD or LIBC are not installed, they cannot be exploited. 50 | 51 | # Get set up 52 | 53 | Before we dive into Kubernetes, you need to provision a cluster for your containerized app. Then you won't have to wait for it to be ready for the subsequent labs. 54 | 55 | 1. You must install the CLIs per https://cloud.ibm.com/docs/containers/cs_cli_install.html. If you do not yet have these CLIs and the Kubernetes CLI, do [lab 0](https://github.com/IBM/container-service-getting-started-wt/tree/master/Lab%200) before starting the course. 56 | 2. If you haven't already, provision a cluster. This can take a few minutes, so let it start first: `ibmcloud ks cluster create classic --name ` 57 | 3. After creation, before using the cluster, make sure it has completed provisioning and is ready for use. Run `ibmcloud ks clusters` and make sure that your cluster is in state "deployed". 58 | 4. Then use `ibmcloud ks workers --cluster ` and make sure that all worker nodes are in state "normal" with Status "Ready". 59 | 60 | # Kubernetes and containers: an overview 61 | 62 | Let's talk about Kubernetes orchestration for containers before we build an application on it. We need to understand the following facts about it: 63 | 64 | * What is Kubernetes, exactly? 65 | * How was Kubernetes created? 66 | * Kubernetes architecture 67 | * Kubernetes resource model 68 | * Kubernetes at IBM 69 | * Let's get started 70 | 71 | # What is Kubernetes? 72 | 73 | Now that we know what containers are, let's define what Kubernetes is. Kubernetes is a container orchestrator to provision, manage, and scale applications. In other words, Kubernetes allows you to manage the lifecycle of containerized applications within a cluster of nodes (which are a collection of worker machines, for example, VMs, physical machines etc.). 74 | 75 | Your applications may need many other resources to run such as Volumes, Networks, and Secrets that will help you to do things such as connect to databases, talk to firewalled backends, and secure keys. Kubernetes helps you add these resources into your application. Infrastructure resources needed by applications are managed declaratively. 76 | 77 | **Fast fact:** Other orchestration technologies are Mesos and Swarm. 78 | 79 | The key paradigm of kubernetes is it’s Declarative model. 
The user provides the "desired state" and Kubernetes will do its best to make it happen. If you need 5 instances, you do not start 5 separate instances on your own but rather tell Kubernetes that you need 5 instances, and Kubernetes will reconcile the state automatically. At this point, you simply need to know that you declare the state you want and Kubernetes makes that happen. If something goes wrong with one of your instances and it crashes, Kubernetes still knows the desired state and creates a new instance on an available node. 80 | 81 | **Fun to know:** Kubernetes goes by many names. Sometimes it is shortened to _k8s_ (losing the internal 8 letters), or _kube_. The word is rooted in ancient Greek and means "Helmsman". A helmsman is the person who steers a ship. We hope you can see the analogy between directing a ship and the decisions made to orchestrate containers on a cluster. 82 | 83 | # How was Kubernetes created? 84 | 85 | Google wanted to open source its knowledge of creating and running the internal tools Borg & Omega. It adopted open governance for Kubernetes by starting the Cloud Native Computing Foundation (CNCF) and giving Kubernetes to that foundation, therefore making it less influenced by Google directly. Many companies such as RedHat, Microsoft, IBM and Amazon quickly joined the foundation. 86 | 87 | The main entry point for the Kubernetes project is at [https://kubernetes.io/](https://kubernetes.io/), and the source code can be found at [https://github.com/kubernetes](https://github.com/kubernetes). 88 | 89 | # Kubernetes architecture 90 | 91 | At its core, Kubernetes is a data store (etcd). The declarative model is stored in the data store as objects: when you say you want 5 instances of a container, that request is stored in the data store. This information change is watched and delegated to Controllers to take action. Controllers then react to the model and attempt to take action to achieve the desired state. The power of Kubernetes is in its simple model. 92 | 93 | As shown, the API server is a simple HTTP server handling create/read/update/delete (CRUD) operations on the data store. The controller then picks up the change you wanted and makes it happen. Controllers are responsible for instantiating the actual resource represented by any Kubernetes resource. These actual resources are what your application needs to allow it to run successfully. 94 | 95 | ![architecture diagram](/images/kubernetes_arch.png) 96 | 97 | # Kubernetes resource model 98 | 99 | The Kubernetes infrastructure defines a resource for every purpose. Each resource is monitored and processed by a controller. When you define your application, it contains a collection of these resources. This collection will then be read by Controllers to build your application's actual backing instances. Some of the resources that you may work with are listed below for your reference; for a full list, go to [https://kubernetes.io/docs/concepts/](https://kubernetes.io/docs/concepts/). In this class we will only use a few of them, like Pod, Deployment, etc. 100 | 101 | * Config Maps hold configuration data for pods to consume. 
102 | * Daemon Sets ensure that each node in the cluster runs a copy of the Pod 103 | * Deployments define a desired state of a deployment object 104 | * Events provide lifecycle events on Pods and other deployment objects 105 | * Endpoints allow inbound connections to reach the cluster services 106 | * Ingress is a collection of rules that allow inbound connections to reach the cluster services 107 | * Jobs create one or more pods, and as they complete successfully the job is marked as completed. 108 | * A Node is a worker machine in Kubernetes 109 | * Namespaces are multiple virtual clusters backed by the same physical cluster 110 | * Pods are the smallest deployable units of computing that can be created and managed in Kubernetes 111 | * Persistent Volumes provide an API for users and administrators that abstracts details of how storage is provided from how it is consumed 112 | * Replica Sets ensure that a specified number of pod replicas are running at any given time 113 | * Secrets are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys 114 | * Service Accounts provide an identity for processes that run in a Pod 115 | * Services are an abstraction that defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. 116 | * Stateful Sets are the workload API objects used to manage stateful applications. 117 | * and more... 118 | 119 | 120 | ![Relationship of pods, nodes, and containers](/images/container-pod-node-master-relationship.jpg) 121 | 122 | Kubernetes does not have the concept of an application. It has simple building blocks that you are required to compose. Kubernetes is a cloud native platform where the internal resource model is the same as the end user resource model. 123 | 124 | # Key resources 125 | 126 | A Pod is the smallest object model that you can create and run. You can add labels to a pod to identify a subset to run operations on. When you are ready to scale your application, you can use the label to tell Kubernetes which Pod you need to scale. A Pod typically represents a process in your cluster. Pods contain at least one container that runs the job, and additionally may have other containers in them called sidecars for monitoring, logging, etc. Essentially, a Pod is a group of containers. 127 | 128 | When we talk about an application, we usually refer to a group of Pods. Although an entire application can be run in a single Pod, we usually build multiple Pods that talk to each other to make a useful application. We will see why separating the application logic and the backend database into separate Pods scales better when we build an application shortly. 129 | 130 | Services define how to expose your app as a DNS entry to have a stable reference. We use a query-based selector to choose which pods are supplying that service. 131 | 132 | The user directly manipulates resources via yaml: 133 | `$ kubectl (create|get|apply|delete) -f myResource.yaml` 134 | 135 | Kubernetes provides us with a client interface through `kubectl`. kubectl commands allow you to manage your applications, your cluster, and cluster resources by modifying the model in the data store. 136 | 137 | # Kubernetes application deployment workflow 138 | 139 | ![deployment workflow](/images/app_deploy_workflow.png) 140 | 141 | 1. The user deploys a new application via "kubectl". kubectl sends the request to the API Server. 142 | 2. The API server receives the request and stores it in the data store (etcd). 
Once the request is written to the data store, the API server is done with the request. 143 | 3. Watchers detect the resource change and send a notification to the controller to act upon it. 144 | 4. The controller detects the new app and creates new pods to match the desired number of instances. Any changes to the stored model will be picked up to create or delete Pods. 145 | 5. The scheduler assigns new pods to a node based on its criteria. The scheduler makes decisions to run Pods on specific nodes in the cluster and modifies the model with the node information. 146 | 6. The kubelet on a node detects a pod with an assignment to itself, and deploys the requested containers via the container runtime (e.g. Docker). Each node watches the storage to see what pods it is assigned to run. It takes the necessary actions on the resources assigned to it, such as creating or deleting Pods. 147 | 7. Kube-proxy manages network traffic for the pods, including service discovery and load balancing. Kube-proxy is responsible for communication between Pods that want to interact. 148 | 149 | 150 | # Lab information 151 | 152 | IBM Cloud provides the capability to run applications in containers on Kubernetes. The IBM Cloud Kubernetes Service runs Kubernetes clusters that deliver the following: 153 | 154 | * Powerful tools 155 | * Intuitive user experience 156 | * Built-in security and isolation to enable rapid delivery of secure applications 157 | * Cloud services including cognitive capabilities from Watson 158 | * Capability to manage dedicated cluster resources for both stateless applications and stateful workloads 159 | 160 | 161 | # Lab overview 162 | 163 | [Lab 0](https://github.com/IBM/container-service-getting-started-wt/tree/master/Lab%200) (Optional): Provides a walkthrough for installing IBM Cloud command-line tools and the Kubernetes CLI. You can skip this lab if you have the IBM Cloud CLI, the container-service plugin, the container-registry plugin, and the kubectl CLI already installed on your machine. 164 | 165 | [Lab 1](https://github.com/IBM/container-service-getting-started-wt/tree/master/Lab%201): This lab walks through creating and deploying a simple "hello world" app in Node.js, then accessing that app. 166 | 167 | [Lab 2](https://github.com/IBM/container-service-getting-started-wt/tree/master/Lab%202): Builds on Lab 1 to expand to a more resilient setup which can survive having containers fail and recover. Lab 2 also walks through the basic services you need to get started with Kubernetes and the IBM Cloud Kubernetes Service. 168 | 169 | [Lab 3](https://github.com/IBM/container-service-getting-started-wt/tree/master/Lab%203): This lab covers adding external services to a cluster. It walks through adding integration to a Watson service, and discusses storing credentials of external services in the cluster. 170 | 171 | [Lab 4](https://github.com/IBM/container-service-getting-started-wt/tree/master/Lab%204) (Under Construction, Paid Only, Optional): This lab will outline how to create a highly available application, and build on the knowledge you have learned in Labs 1 - 3 to deploy clusters simultaneously to multiple availability zones. As this requires a paid IBM Cloud account, skip this lab if you are sticking to the free tier. 
172 | 173 | [Lab 5](https://github.com/IBM/container-service-getting-started-wt/tree/master/Lab%205): This lab walks through securing your cluster and applications using network policies, and will later be expanded to cover leveraging tools like Vulnerability Advisor to secure images and manage security in your image registry. 174 | 175 | -------------------------------------------------------------------------------- /bx_login.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | if [ -z "$CF_ORG" ]; then 4 | CF_ORG="$BLUEMIX_ORG" 5 | fi 6 | if [ -z "$CF_SPACE" ]; then 7 | CF_SPACE="$BLUEMIX_SPACE" 8 | fi 9 | 10 | 11 | if [ -z "$BLUEMIX_API_KEY" ] || [ -z "$BLUEMIX_NAMESPACE" ]; then 12 | echo "Define all required environment variables and rerun the stage." 13 | exit 1 14 | fi 15 | echo "Deploy pods" 16 | 17 | echo "ibmcloud login -a $CF_TARGET_URL" 18 | ibmcloud login -a "$CF_TARGET_URL" -o "$CF_ORG" -s "$CF_SPACE" --apikey "$BLUEMIX_API_KEY" 19 | if [ $? -ne 0 ]; then 20 | echo "Failed to authenticate to IBM Cloud" 21 | exit 1 22 | fi 23 | 24 | # Init container clusters 25 | echo "ibmcloud ks init" 26 | ibmcloud ks init 27 | if [ $? -ne 0 ]; then 28 | echo "Failed to initialize IBM Cloud Kubernetes Service" 29 | exit 1 30 | fi 31 | 32 | # Init container registry 33 | echo "ibmcloud cr login" 34 | ibmcloud cr login 35 | if [ $? -ne 0 ]; then 36 | echo "Failed to log in to the IBM Cloud Container Registry" 37 | exit 1 38 | fi -------------------------------------------------------------------------------- /deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Create Demo Application" 4 | 5 | IP_ADDR=$(ibmcloud ks workers $CLUSTER_NAME | grep normal | awk '{ print $2 }') 6 | if [ -z "$IP_ADDR" ]; then 7 | echo "$CLUSTER_NAME not created or workers not ready" 8 | exit 1 9 | fi 10 | 11 | echo -e "Configuring vars" 12 | exp=$(ibmcloud ks cluster-config $CLUSTER_NAME | grep export) 13 | if [ $? -ne 0 ]; then 14 | echo "Cluster $CLUSTER_NAME not created or not ready." 15 | exit 1 16 | fi 17 | eval "$exp" 18 | 19 | echo -e "Setting up Stage 3 Watson Deployment yml" 20 | cd Stage3/ 21 | # curl --silent "https://raw.githubusercontent.com/IBM/container-service-getting-started-wt/master/Stage3/watson-deployment.yml" > watson-deployment.yml 22 | # 23 | ## WILL NEED FOR LOADBALANCER ### 24 | # #Find the line that has the comment about the load balancer and add the nodeport def after this 25 | # let NU=$(awk '/^ # type: LoadBalancer/{ print NR; exit }' guestbook.yml)+3 26 | # NU=$NU\i 27 | # sed -i "$NU\ \ type: NodePort" guestbook.yml #For OSX: brew install gnu-sed; replace sed references with gsed 28 | 29 | echo -e "Deleting previous version of Watson Deployment if it exists" 30 | kubectl delete --ignore-not-found=true -f watson-deployment.yml 31 | 32 | echo -e "Unbinding previous version of Watson Tone Analyzer if it exists" 33 | ibmcloud service list | grep tone 34 | if [ $? -eq 0 ]; then 35 | ibmcloud ks cluster-service-unbind $CLUSTER_NAME default tone 36 | fi 37 | 38 | echo -e "Deleting previous Watson Tone Analyzer instance if it exists" 39 | ibmcloud service delete tone -f 40 | 41 | echo -e "Creating new instance of Watson Tone Analyzer named tone..." 42 | ibmcloud service create tone_analyzer standard tone 43 | 44 | echo -e "Binding Watson Tone Service to Cluster and Pod" 45 | ibmcloud ks cluster-service-bind $CLUSTER_NAME default tone 46 | 47 | echo -e "Building Watson and Watson-talk images..." 
48 | cd watson/ 49 | docker build -t us.icr.io/contbot/watson . &> buildout.txt 50 | if [ $? -ne 0 ]; then 51 | echo "Could not create the watson image for the build" 52 | cat buildout.txt 53 | exit 1 54 | fi 55 | docker push us.icr.io/contbot/watson 56 | if [ $? -ne 0 ]; then 57 | echo "Could not push the watson image for the build" 58 | exit 1 59 | fi 60 | cd .. 61 | cd watson-talk/ 62 | docker build -t us.icr.io/contbot/watson-talk . &> buildout.txt 63 | if [ $? -ne 0 ]; then 64 | echo "Could not create the watson-talk image for the build" 65 | cat buildout.txt 66 | exit 1 67 | fi 68 | docker push us.icr.io/contbot/watson-talk 69 | if [ $? -ne 0 ] ; then 70 | echo "Could not push the watson image for the build" 71 | exit 1 72 | fi 73 | 74 | echo -e "Injecting image namespace into deployment yamls" 75 | cd .. 76 | sed -i "s//${BLUEMIX_NAMESPACE}/" watson-deployment.yml 77 | if [ $? -ne 0 ] ; then 78 | echo "Could not inject image namespace into deployment yaml" 79 | exit 1 80 | fi 81 | 82 | echo -e "Creating pods" 83 | kubectl create -f watson-deployment.yml 84 | 85 | PORT=$(kubectl get services | grep watson-service | sed 's/.*:\([0-9]*\).*/\1/g') 86 | 87 | echo "" 88 | echo "View the watson talk service at http://$IP_ADDR:$PORT" 89 | -------------------------------------------------------------------------------- /deploy_rollup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "Install IBM Cloud CLI" 3 | . ./install_bx.sh 4 | if [ $? -ne 0 ]; then 5 | echo "Failed to install IBM Cloud Kubernetes Service CLI prerequisites" 6 | exit 1 7 | fi 8 | 9 | echo "Login to IBM Cloud" 10 | . ./bx_login.sh 11 | if [ $? -ne 0 ]; then 12 | echo "Failed to authenticate to IBM Cloud Kubernetes Service" 13 | exit 1 14 | fi 15 | 16 | echo "Testing yml files for generalized namespace" 17 | . ./test_yml.sh 18 | if [ $? -ne 0 ]; then 19 | echo "Failed to find in deployment YAML files" 20 | exit 1 21 | fi 22 | 23 | echo "Deploy pods for Stage 3..." 24 | . ./deploy.sh 25 | if [ $? 
-ne 0 ]; then 26 | echo "Failed to Deploy pods for stage 3 to IBM Cloud Kubernetes Service" 27 | exit 1 28 | fi 29 | -------------------------------------------------------------------------------- /images/VMvsContainer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM/container-service-getting-started-wt/e4a57c84a5c309eaea59b6a72c2dcaeaeceb2c88/images/VMvsContainer.png -------------------------------------------------------------------------------- /images/app_deploy_workflow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM/container-service-getting-started-wt/e4a57c84a5c309eaea59b6a72c2dcaeaeceb2c88/images/app_deploy_workflow.png -------------------------------------------------------------------------------- /images/cluster_ha_roadmap.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM/container-service-getting-started-wt/e4a57c84a5c309eaea59b6a72c2dcaeaeceb2c88/images/cluster_ha_roadmap.png -------------------------------------------------------------------------------- /images/container-pod-node-master-relationship.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM/container-service-getting-started-wt/e4a57c84a5c309eaea59b6a72c2dcaeaeceb2c88/images/container-pod-node-master-relationship.jpg -------------------------------------------------------------------------------- /images/kubernetes_arch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM/container-service-getting-started-wt/e4a57c84a5c309eaea59b6a72c2dcaeaeceb2c88/images/kubernetes_arch.png -------------------------------------------------------------------------------- /install_bx.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Download IBM Cloud CLI" 4 | wget --quiet --output-document=/tmp/Bluemix_CLI_amd64.tar.gz http://public.dhe.ibm.com/cloud/bluemix/cli/bluemix-cli/latest/Bluemix_CLI_amd64.tar.gz 5 | tar -xf /tmp/Bluemix_CLI_amd64.tar.gz --directory=/tmp 6 | 7 | # Create bx alias 8 | echo "#!/bin/sh" >/tmp/Bluemix_CLI/bin/bx 9 | echo "/tmp/Bluemix_CLI/bin/bluemix \"\$@\" " >>/tmp/Bluemix_CLI/bin/bx 10 | chmod +x /tmp/Bluemix_CLI/bin/* 11 | chmod +x /tmp/Bluemix_CLI/bin/cfcli/* 12 | 13 | export PATH="/tmp/Bluemix_CLI/bin:$PATH" 14 | 15 | # Install IBM Cloud CS plugin 16 | echo "Install the IBM Cloud Kubernetes Service plugin" 17 | ibmcloud plugin install container-service -r Bluemix 18 | ibmcloud plugin install container-registry -r Bluemix 19 | 20 | echo "Install kubectl" 21 | wget --quiet --output-document=/tmp/Bluemix_CLI/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl 22 | chmod +x /tmp/Bluemix_CLI/bin/kubectl 23 | 24 | if [ -n "$DEBUG" ]; then 25 | ibmcloud --version 26 | ibmcloud plugin list 27 | fi 28 | -------------------------------------------------------------------------------- /test_yml.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "Testing YAML files for " 3 | ls */*.yml 4 | imageLines=`grep image: */*.yml` 5 | namespaceLines=`grep \ */*.yml` 6 | if [ "$imageLines" = "$namespaceLines" ]; then 7 | 
echo " found as expected in YAML files" 8 | else 9 | echo " NOT FOUND as expected in YAML files" 10 | exit 1 11 | fi 12 | 13 | --------------------------------------------------------------------------------