├── .gitignore ├── Readme.md ├── demo ├── 00-temp-crd.yaml ├── 00-temp-res-gothenburg.yaml ├── 00-temp-res.yaml ├── 00-temp-res0.yaml ├── 01-temp-crd.yaml ├── 02-temp-crd.yaml └── 02-temp-res-failure.yaml ├── lab1-kube-api.md ├── lab2-crd.md ├── lab3-cli-plugin.md ├── lab4-kubebuilder.md └── lab5-kudo.md /.gitignore: -------------------------------------------------------------------------------- 1 | # General 2 | .DS_Store 3 | .AppleDouble 4 | .LSOverride 5 | 6 | # Thumbnails 7 | ._* 8 | 9 | # Swap 10 | [._]*.s[a-v][a-z] 11 | !*.svg # comment out if you don't need vector files 12 | [._]*.sw[a-p] 13 | [._]s[a-rt-v][a-z] 14 | [._]ss[a-gi-z] 15 | [._]sw[a-p] 16 | 17 | # Session 18 | Session.vim 19 | Sessionx.vim 20 | -------------------------------------------------------------------------------- /Readme.md: -------------------------------------------------------------------------------- 1 | # Labs for Extending Kubernetes Workshop 2 | 3 | ## Requirements 4 | 5 | * git 6 | * go 1.13+ 7 | * Kubernetes 1.15+ cluster 8 | * [Kubernetes in Docker (KinD)](https://github.com/kubernetes-sigs/kind) **note:** v0.7.0 used for lab creation 9 | * [Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) 10 | * curl or httpie 11 | 12 | ## Notes: 13 | 14 | Common `alias k=kubectl` 15 | 16 | 17 | ## Labs 18 | 19 | ### [Lab 1:](lab1-kube-api.md) Kubernetes API 20 | 21 | The focus of this lab is to become familar with the kube-api. Through this lab you should have an understanding of the kube-api, how it can be accessed and how the API is formed. 22 | 23 | ### [Lab 2:](lab2-crd.md) Custom Resource Definition (CRDs) 24 | 25 | ### [Lab 3:](lab3-cli-plugin.md) kubectl plugin 26 | 27 | ### [Lab 4:](lab4-kubebuilder.md) Kubebuilder Operator 28 | 29 | ### [Lab 5:](lab5-kudo.md) KUDO Operator 30 | 31 | 32 | ## Recommendations 33 | 34 | Highly recommend [Goland](https://www.jetbrains.com/go) as a Go editor -------------------------------------------------------------------------------- /demo/00-temp-crd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.k8s.io/v1beta1 2 | kind: CustomResourceDefinition 3 | metadata: 4 | name: thermometers.d2iq.com 5 | spec: 6 | group: d2iq.com 7 | version: v1 8 | names: 9 | kind: Thermometer 10 | plural: thermometers 11 | shortNames: 12 | - therm 13 | scope: Namespaced 14 | -------------------------------------------------------------------------------- /demo/00-temp-res-gothenburg.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: d2iq.com/v1 2 | kind: Thermometer 3 | metadata: 4 | name: gothenburg 5 | namespace: sweden 6 | spec: 7 | unit: Celcius 8 | foo: test 9 | -------------------------------------------------------------------------------- /demo/00-temp-res.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: d2iq.com/v1 2 | kind: Thermometer 3 | metadata: 4 | name: stockholm 5 | namespace: sweden 6 | spec: 7 | unit: Celcius 8 | -------------------------------------------------------------------------------- /demo/00-temp-res0.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: d2iq.com/v1 2 | kind: Thermometer 3 | metadata: 4 | name: stlouis 5 | namespace: usa 6 | spec: 7 | unit: Celcius 8 | foo: test 9 | -------------------------------------------------------------------------------- /demo/01-temp-crd.yaml: 
-------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.k8s.io/v1beta1 2 | kind: CustomResourceDefinition 3 | metadata: 4 | name: thermometers.d2iq.com 5 | spec: 6 | group: d2iq.com 7 | version: v1 8 | names: 9 | kind: Thermometer 10 | plural: thermometers 11 | shortNames: 12 | - therm 13 | scope: Namespaced 14 | additionalPrinterColumns: 15 | - name: Unit 16 | type: string 17 | JSONPath: .spec.unit 18 | - name: Temperature 19 | type: string 20 | JSONPath: .status.temperature 21 | -------------------------------------------------------------------------------- /demo/02-temp-crd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.k8s.io/v1beta1 2 | kind: CustomResourceDefinition 3 | metadata: 4 | name: thermometers.d2iq.com 5 | spec: 6 | group: d2iq.com 7 | version: v1 8 | names: 9 | kind: Thermometer 10 | plural: thermometers 11 | shortNames: 12 | - therm 13 | scope: Namespaced 14 | preserveUnknownFields: false 15 | validation: 16 | openAPIV3Schema: 17 | type: object 18 | description: Thermometer is the Schema for the Thermometer API. 19 | properties: 20 | apiVersion: 21 | type: string 22 | kind: 23 | type: string 24 | metadata: 25 | type: object 26 | spec: 27 | description: Thermometer Spec defines the desired state of Thermometer 28 | type: object 29 | properties: 30 | unit: 31 | description: Units in Celcius or Fahrenheit 32 | type: string 33 | anyOf: [{"pattern": "^Celcius"}, {"pattern": "^Fahrenheit"}] 34 | required: ["unit"] 35 | status: 36 | description: Thermometer Status defines the possible status states for Thermometer 37 | type: object 38 | properties: 39 | temperature: 40 | type: number 41 | -------------------------------------------------------------------------------- /demo/02-temp-res-failure.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: d2iq.com/v1 2 | kind: Thermometer 3 | metadata: 4 | name: stockholm 5 | namespace: sweden 6 | spec: 7 | unit: Celcius 8 | foo: bar 9 | -------------------------------------------------------------------------------- /lab1-kube-api.md: -------------------------------------------------------------------------------- 1 | # Lab Kubernetes API 2 | 3 | ## Objective 4 | 5 | The focus of this lab is to become familar with the kube-api. Through this lab you should have an understanding of the kube-api, how it can be accessed and how the API is formed. 6 | 7 | 8 | 1. Start cluster 9 | 10 | `kind create cluster` 11 | 12 | ```bash 13 | kind create cluster 14 | Creating cluster "kind" ... 15 | ✓ Ensuring node image (kindest/node:v1.17.0) 🖼 16 | ✓ Preparing nodes 📦 17 | ✓ Writing configuration 📜 18 | ⠊⠁ Starting control-plane 🕹️ on 19 | ✓ Starting control-plane 🕹️ 20 | ✓ Installing CNI 🔌 21 | ✓ Installing StorageClass 💾 22 | Set kubectl context to "kind-kind" 23 | You can now use your cluster with: 24 | 25 | kubectl cluster-info --context kind-kind 26 | 27 | Have a nice day! 👋 28 | ``` 29 | 30 | 2. 
Access the API via `kubectl`
31 |
32 | Try the following:
33 | `k get --raw /`
34 |
35 | Get namespaces:
36 | `k get --raw /api/v1/namespaces`
37 | `k get --raw /api/v1/namespaces/default`
38 |
39 | ```
40 | k get --raw /api/v1/namespaces/default/
41 | {"kind":"Namespace","apiVersion":"v1","metadata":{"name":"default","selfLink":"/api/v1/namespaces/default","uid":"288cb0cf-4257-4288-9863-d313bc502972","resourceVersion":"146","creationTimestamp":"2020-02-02T03:15:52Z"},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}}
42 | ```
43 |
44 | **tip:** `jq` or Python's `json.tool` can make this easier to read:
45 | `k get --raw /api/v1/namespaces/default | jq .` or
46 | `k get --raw /api/v1/namespaces/default | python -m json.tool`
47 |
48 |
49 | 3. Time for a proxy
50 |
51 | `k proxy`
52 |
53 | ```bash
54 | k proxy
55 | Starting to serve on 127.0.0.1:8001
56 | ```
57 |
58 | `curl localhost:8001`
59 | `curl localhost:8001/api/v1/namespaces/default`
60 |
61 | **note:** exit the proxy with ctrl+c
62 |
63 | 4. api-resources
64 |
65 | Let's get a list of resources from `kubectl`:
66 | `k api-resources`
67 |
68 | Which resources are namespaced, and which are cluster scoped?
69 |
70 | For cluster scoped: `k api-resources --namespaced=false`
71 |
72 | What is the shortname for `PodSecurityPolicy`?
73 |
74 | 5. Explaining Resources
75 |
76 | Explain is a great way to understand the defined structure of a resource or kind. This is accomplished through `k explain <resource>`.
77 |
78 | `k explain ns`
79 |
80 | Almost all resources at this high level report roughly the same apiVersion, kind, metadata, spec, and status information. In order to get the full structure of this kind, use the `--recursive` flag.
81 |
82 | `k explain ns --recursive`
83 |
84 | Notice the status field `phase`. Let's display that as an output.
85 |
86 | `k get ns -o custom-columns=NAME:.metadata.name,PHASE:.status.phase`
87 |
88 | Example output:
89 | ```
90 | NAME                 PHASE
91 | default              Active
92 | kube-node-lease      Active
93 | kube-public          Active
94 | kube-system          Active
95 | local-path-storage   Active
96 | ```
97 |
98 | Explain is incredibly useful in understanding the structure of types deployed in Kubernetes.
99 |
100 | ## Summary
101 |
102 | The Kubernetes API server is the gateway into Kubernetes and is accessed via HTTP. All interactions with Kubernetes go through it. All controllers and operators work through the API to read, update, and control resources.
103 |
104 |
105 | TODO: use `-v=9` tracing
--------------------------------------------------------------------------------
/lab2-crd.md:
--------------------------------------------------------------------------------
1 | # Lab Custom Resource Definition CRD
2 |
3 | ## Objective
4 |
5 | The focus of this lab is to become familiar with custom resource definitions (CRDs). Through this lab you will hand-craft a CRD and add a new type to your Kubernetes cluster. The lab includes a look at OpenAPI v3 schemas with validation and kubectl explain support.
6 |
7 | ## Prerequisites
8 |
9 | * Running Kubernetes 1.15+ cluster
10 |
11 | 1. Create CRD
12 |
13 | You will be creating a new `kind`, a Thermometer, which can have its unit defined and is namespaced.
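Before writing the manifest, it can help to confirm which `apiextensions` API versions your cluster serves, since this lab's CRD manifests use `apiextensions.k8s.io/v1beta1` (a quick check; the output will vary with cluster version):

```bash
# list the apiextensions API versions served by the cluster;
# v1beta1 should be present on the 1.15+ clusters used in this workshop
k api-versions | grep apiextensions
```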
14 |
15 | Example file named: `therm-crd.yaml`
16 | ```yaml
17 | apiVersion: apiextensions.k8s.io/v1beta1
18 | kind: CustomResourceDefinition
19 | metadata:
20 |   name: thermometers.d2iq.com
21 | spec:
22 |   group: d2iq.com
23 |   version: v1
24 |   names:
25 |     kind: Thermometer
26 |     plural: thermometers
27 |     shortNames:
28 |     - therm
29 |   scope: Namespaced
30 | ```
31 |
32 | 2. Installing a CRD
33 |
34 | First, let's see what CRDs exist:
35 |
36 | ```
37 | k get crds
38 | No resources found in default namespace.
39 | ```
40 |
41 | Without a defined CRD, an error is returned if thermometers are queried.
42 |
43 | ```
44 | k get therm
45 | error: the server doesn't have a resource type "therm"
46 | ```
47 |
48 | Let's install therm-crd...
49 |
50 | `k apply -f therm-crd.yaml`
51 |
52 | Now we have a CRD:
53 | ```
54 | k get crd
55 | NAME                    CREATED AT
56 | thermometers.d2iq.com   2020-02-02T05:35:47Z
57 | ```
58 |
59 | Thermometer queries are now successful...
60 |
61 | ```
62 | k get therm
63 | No resources found in default namespace.
64 | ```
65 |
66 | api-resources now includes thermometers:
67 | ```
68 | k api-resources | grep therm
69 | thermometers   therm   d2iq.com   true   Thermometer
70 | ```
71 |
72 | which can be queried via the API:
73 |
74 | `k get --raw /apis/d2iq.com/v1/thermometers`
75 | ```
76 | {"apiVersion":"d2iq.com/v1","items":[],"kind":"ThermometerList","metadata":{"continue":"","resourceVersion":"9154","selfLink":"/apis/d2iq.com/v1/thermometers"}}
77 | ```
78 |
79 | 3. Let's create a Custom Resource (CR)
80 |
81 | With a file named `stockholm.yaml`:
82 | ```
83 | apiVersion: d2iq.com/v1
84 | kind: Thermometer
85 | metadata:
86 |   name: stockholm
87 |   namespace: sweden
88 | spec:
89 |   unit: Celcius
90 | ```
91 |
92 | Create the namespace: `k create ns sweden`
93 |
94 | Create the CR: `k apply -f stockholm.yaml`
95 |
96 | ```
97 | k get therm -A
98 | NAMESPACE   NAME        AGE
99 | sweden      stockholm   13s
100 | ```
101 |
102 | It would be nice if the displayed output included more information regarding the resource. Let's look at that.
103 |
104 | 4. Creating custom print columns
105 |
106 | To the previous CRD (`therm-crd.yaml`) add the following under `spec`:
107 |
108 | ```yaml
109 |   additionalPrinterColumns:
110 |   - name: Unit
111 |     type: string
112 |     JSONPath: .spec.unit
113 |   - name: Temperature
114 |     type: string
115 |     JSONPath: .status.temperature
116 | ```
117 |
118 | Redeploy and retrieve the thermometers again:
119 |
120 | ```bash
121 | # reapply crd
122 | k apply -f therm-crd.yaml
123 |
124 | # retrieve thermometers
125 | k get therm -A
126 | NAMESPACE   NAME        UNIT      TEMPERATURE
127 | sweden      stockholm   Celcius
128 | ```
129 |
130 | 5. Validation
131 |
132 | First, let's try a new CR such as:
133 |
134 | ```yaml
135 | apiVersion: d2iq.com/v1
136 | kind: Thermometer
137 | metadata:
138 |   name: gothenburg
139 |   namespace: sweden
140 | spec:
141 |   unit: Celcius
142 |   foo: test
143 | ```
144 |
145 | Does it successfully apply?
146 |
147 | ```bash
148 | # apply
149 | k apply -f gothenburg.yaml
150 | thermometer.d2iq.com/gothenburg created
151 |
152 | # get therm
153 | k get therm -A
154 | NAMESPACE   NAME         UNIT      TEMPERATURE
155 | sweden      gothenburg   Celcius
156 | sweden      stockholm    Celcius
157 | ```
158 |
159 | Now let's add a schema to the CRD using an OpenAPI v3 schema. Add the following to the CRD under `spec`:
160 |
161 | ```yaml
162 |   validation:
163 |     openAPIV3Schema:
164 |       type: object
165 |       description: Thermometer is the Schema for the Thermometer API.
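      # note (added): the spec.unit property defined further below is constrained by anyOf
      # patterns to values starting with "Celcius" or "Fahrenheit", and the required list
      # makes unit mandatory on every Thermometer resource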
166 |       properties:
167 |         apiVersion:
168 |           type: string
169 |         kind:
170 |           type: string
171 |         metadata:
172 |           type: object
173 |         spec:
174 |           description: Thermometer Spec defines the desired state of Thermometer
175 |           type: object
176 |           properties:
177 |             unit:
178 |               description: Units in Celcius or Fahrenheit
179 |               type: string
180 |               anyOf: [{"pattern": "^Celcius"}, {"pattern": "^Fahrenheit"}]
181 |           required: ["unit"]
182 |         status:
183 |           description: Thermometer Status defines the possible status states for Thermometer
184 |           type: object
185 |           properties:
186 |             temperature:
187 |               type: number
188 | ```
189 |
190 | Delete the CRDs from the cluster with `k delete crd --all` and reapply the CRD with `k apply -f therm-crd.yaml`.
191 |
192 | Try reapplying the gothenburg CR: `k apply -f gothenburg.yaml`
193 | Try the stockholm manifest again.
194 |
195 | 6. Explaining CRDs
196 |
197 | In the CRD manifest, define the following just after `scope: Namespaced`:
198 |
199 | ```yaml
200 |   scope: Namespaced
201 |   preserveUnknownFields: false
202 | ```
203 |
204 | The `preserveUnknownFields` field is not needed for CRD v1, but it is needed for v1beta1. The combination of this field set to false AND the defined schema enables `k explain`, such as:
205 |
206 | ```bash
207 | k explain therm --recursive
208 | KIND:     Thermometer
209 | VERSION:  d2iq.com/v1
210 |
211 | DESCRIPTION:
212 |      Thermometer is the Schema for the Thermometer API.
213 |
214 | FIELDS:
215 |    apiVersion   <string>
216 |    kind         <string>
217 |    metadata     <Object>
218 |       annotations                  <map[string]string>
219 |       clusterName                  <string>
220 |       creationTimestamp            <string>
221 |       deletionGracePeriodSeconds   <integer>
222 |       deletionTimestamp            <string>
223 |       finalizers                   <[]string>
224 |       generateName                 <string>
225 |       generation                   <integer>
226 |       labels                       <map[string]string>
227 |       managedFields                <[]Object>
228 |          apiVersion   <string>
229 |          fieldsType   <string>
230 |          fieldsV1     <map[string]>
231 |          manager      <string>
232 |          operation    <string>
233 |          time         <string>
234 |       name                         <string>
235 |       namespace                    <string>
236 |       ownerReferences              <[]Object>
237 |          apiVersion           <string>
238 |          blockOwnerDeletion   <boolean>
239 |          controller           <boolean>
240 |          kind                 <string>
241 |          name                 <string>
242 |          uid                  <string>
243 |       resourceVersion              <string>
244 |       selfLink                     <string>
245 |       uid                          <string>
246 |    spec         <Object>
247 |       unit   <string>
248 |    status       <Object>
249 |       temperature   <number>
250 | ```
251 |
252 | ## Summary
253 |
254 | Custom Resource Definitions (CRDs) are the mechanism used to add new `kind`s to a Kubernetes cluster. They make the Kubernetes API extensible. Combined with controllers watching the custom resources, they enable a custom declarative experience.
--------------------------------------------------------------------------------
/lab3-cli-plugin.md:
--------------------------------------------------------------------------------
1 | # Lab Kubernetes kubectl CLI plugin
2 |
3 | ## Objective
4 |
5 | The focus of this lab is to become familiar with kubectl plugin development. Through this lab you create a plugin that will interact with a Kubernetes cluster.
6 |
7 | ## Prerequisites
8 |
9 | * Running Kubernetes 1.15+ cluster
10 | * kubectl installed
11 |
12 | ---
13 |
14 | Step 1: Clone https://github.com/codementor/k8s-cli
15 |
16 | `git clone https://github.com/codementor/k8s-cli`
17 |
18 | Or fork and clone.
19 |
20 | Step 2: Compile
21 |
22 | Make sure the environment is set up correctly by running `make cli`, which builds the CLI, or `make`, which builds and tests.
23 |
24 | Step 3: Run the CLI from Go
25 |
26 | `go run cmd/kubectl-example/main.go`
27 |
28 | or
29 |
30 | `go run cmd/kubectl-example/main.go version`
31 | will run the project.
32 |
33 | Step 4: Build and Install
34 |
35 | `make cli-install` will build and install the new example plugin into `$GOPATH/bin`. Assuming that is in your `$PATH`, the following should now be possible.
36 |
37 | ```bash
38 | k example version
39 | Example Version: version.Info{GitVersion:"0.1.0", GitCommit:"64e04046", BuildDate:"2020-01-30T22:00:17Z", GoVersion:"go1.13.7", Compiler:"gc", Platform:"darwin/amd64"}
40 | ```
41 |
42 | Step 5: Test against your cluster
43 |
44 | Running `k example resources` while the kubectl context is set to an active cluster should result in the following:
45 |
46 | ```bash
47 | k example resources
48 | Name                      Namespaced   Kind
49 | replicationcontrollers    true         ReplicationController
50 | namespaces                false        Namespace
51 | resourcequotas            true         ResourceQuota
52 | configmaps                true         ConfigMap
53 | pods                      true         Pod
54 | nodes                     false        Node
55 | services                  true         Service
56 | persistentvolumeclaims    true         PersistentVolumeClaim
57 | secrets                   true         Secret
58 | serviceaccounts           true         ServiceAccount
59 | persistentvolumes         false        PersistentVolume
60 | ...
61 | ```
62 |
63 | Step 6: Explore the Project
64 |
65 | Start with `root.go` and understand how commands in Go are created.
66 | Look at `main.go`, which is the entry point but for this project isn't very interesting.
67 |
68 |
69 | Step 7: Add pod listing code which uses structured object references in Go
70 |
71 | At line 71 of `pod_list.go` replace `fmt.Printf("add pod list code using direct object references\n")` with the following:
72 |
73 | First acquire a kube client and a pods client:
74 |
75 | ```go
76 | client := env.NewClientSet(&Settings)
77 | podsClient := client.CoreV1().Pods(apiv1.NamespaceDefault)
78 | ```
79 |
80 | With the `podsClient` we can query for a list:
81 |
82 | ```go
83 | list, err := podsClient.List(metav1.ListOptions{})
84 | if err != nil {
85 | 	return err
86 | }
87 | ```
88 |
89 | The list that is returned is an object which has type information AND contains a list of the pod objects we are interested in. This is a common pattern in Kubernetes object access. The following code checks to see if there are any objects and prints a line for each pod found.
90 |
91 | ```go
92 | if len(list.Items) == 0 {
93 | 	fmt.Printf("no pods discovered\n")
94 | 	return nil
95 | }
96 | for _, item := range list.Items {
97 | 	fmt.Fprintf(p.out, "pod %v in namespace: %v\n", item.Name, item.Namespace)
98 | }
99 | ```
100 |
101 | Step 8: Command Flags and Pod Status
102 |
103 | Let's add a new command flag for this list command.
104 |
105 | In the `pod_list.go` file, find where it reads `// status boolean` and add `status bool` to the struct.
106 |
107 | Then find where it reads `// status flag` and add the following code to register the flag:
108 |
109 | ```go
110 | f := cmd.Flags()
111 | f.BoolVarP(&pkg.status, "status", "i", true, "display status info")
112 |
113 | ```
114 |
115 | Now let's change the previous step's code so that the flag provides a different output.
116 |
117 | ```go
118 | if p.status {
119 | 	fmt.Fprintf(p.out, "pod %v in namespace: %v, status: %v\n", item.Name, item.Namespace, item.Status.Phase)
120 |
121 | } else {
122 |
123 | ```
124 |
125 | Now a `--status` flag will provide additional output for the list.
126 |
127 | ```bash
128 | go run cmd/kubectl-example/main.go pod list --status
129 | pod at-sample2-pod in namespace: default, status: Succeeded
130 | pod foo in namespace: default, status: Running
131 | ```
132 |
133 | Step 9: Using the REST Client
134 |
135 | Let's add another pod list, but this time using the REST client.
Find the code `fmt.Printf("add pod list code using the rest client\n")` in `pod_list.go` and replace it with the following:
136 |
137 | First let's get a client, but in this case we are going to use the REST client. Look in the `environment.go` file at the differences.
138 |
139 | ```go
140 | client := env.NewRestClient(&Settings)
141 | result := &v1.PodList{}
142 | ```
143 |
144 | The REST API is more generic, and it is coded using the builder pattern.
145 | ```go
146 | err := client.Get().
147 | 	Namespace(apiv1.NamespaceDefault).
148 | 	Resource("pods").
149 | 	Do().
150 | 	Into(result)
151 | if err != nil {
152 | 	return err
153 | }
154 | ```
155 |
156 | At this point the `PodList` object is the same and can be displayed the same way as in the first part of this lab.
157 | ```go
158 | if len(result.Items) == 0 {
159 | 	fmt.Printf("no pods discovered\n")
160 | 	return nil
161 | }
162 | for _, item := range result.Items {
163 | 	fmt.Fprintf(p.out, "pod %v in namespace: %v\n", item.Name, item.Namespace)
164 | }
165 | ```
166 |
167 | Step 10: Adding a Pod
168 |
169 | Up to this point in the lab, you have only queried Kubernetes for information. In this part of the lab, you will add a pod programmatically. You will be working with the `pod_add` file. Find the
170 | code that reads `fmt.Printf("adding a pod\n")` and replace it with the following:
171 |
172 | This step in the lab will require the following additions to the imports:
173 |
174 | ```go
175 | apiv1 "k8s.io/api/core/v1"
176 | metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
177 |
178 | "github.com/codementor/k8s-cli/pkg/example/env"
179 | ```
180 |
181 | Similar to the first pod list, we need a client and a podsClient.
182 |
183 | ```go
184 | client := env.NewClientSet(&Settings)
185 |
186 | podsClient := client.CoreV1().Pods(apiv1.NamespaceDefault)
187 | ```
188 |
189 | Next we need to define the pod using the v1.Pod API.
190 |
191 | ```go
192 | pod := &apiv1.Pod{
193 | 	ObjectMeta: metav1.ObjectMeta{
194 | 		Name:   name,
195 | 		Labels: map[string]string{"app": "demo"},
196 | 	},
197 | 	Spec: apiv1.PodSpec{
198 | 		Containers: []apiv1.Container{
199 | 			{
200 | 				Name:  name,
201 | 				Image: p.image,
202 | 			},
203 | 		},
204 | 	},
205 | }
206 | ```
207 |
208 | Notice the setting of `Name`, `Image`, and `Labels`.
209 |
210 | Now let's create it!
211 | ```go
212 | pp, err := podsClient.Create(pod)
213 | if err != nil {
214 | 	return err
215 | }
216 |
217 | fmt.Fprintf(p.out, "Pod %v created with rev: %v\n", pp.Name, pp.ResourceVersion)
218 | ```
219 |
220 | Notice we get another object back from `Create`, which contains updates to the object we passed in.
221 |
222 | Let's check the cluster: `k get pods`.
223 |
224 |
--------------------------------------------------------------------------------
/lab4-kubebuilder.md:
--------------------------------------------------------------------------------
1 | # Lab Kubebuilder: Creating a CRD and Controller
2 |
3 | ## Objective
4 |
5 | The focus of this lab is to become familiar with kubebuilder. In this lab you will create a CRD through Go structs and automation. You will see how RBAC rules are created through generation from code annotations. Then you will create a controller for working with the CRD.
6 |
7 | This lab and code were inspired by the previous work of https://github.com/programming-kubernetes/cnat, and in particular https://github.com/programming-kubernetes/cnat/tree/master/cnat-kubebuilder; however, that code is out of date for the current versions of controller-runtime and kubebuilder.
Using it as a reference, you will notice some nice improvements to standard patterns in controller-runtime.
8 |
9 | The basics of this operator: we will have an `at` CRD which takes a time and a command. A controller will monitor the `at` resources and Pods, and will create a pod to run the command once the scheduled time has passed. Once the pod has finished running, the controller will update the CR with the pod status.
10 |
11 | ## Prerequisites
12 |
13 | * Running Kubernetes 1.15+ cluster
14 | * kubebuilder 2.2.0 ([install from site](https://github.com/kubernetes-sigs/kubebuilder#installation)) or `brew install kubebuilder`
15 | * go 1.13+
16 |
17 | 1. Creating a Go project
18 |
19 | ```bash
20 | # example GOPATH: /Users/kensipe/projects/go
21 | mkdir $GOPATH/src/github.com/kensipe/at-controller
22 | cd $GOPATH/src/github.com/kensipe/at-controller
23 |
24 | go mod init github.com/kensipe/at-controller
25 | ```
26 |
27 | 2. Initialize kubebuilder
28 |
29 | `kubebuilder init --domain d2iq.com --owner "jfokus"`
30 |
31 | Review the created structure and files, in particular `main.go`. The project has started, but there isn't much there yet.
32 |
33 | 3. Add an API
34 |
35 | ```bash
36 | kubebuilder create api \
37 |   --group cnat \
38 |   --version v1alpha1 \
39 |   --kind At
40 | ```
41 |
42 | Now it's time to review a number of files. `main.go` has changed. Review `at_types.go` and `at_controller.go`.
43 |
44 | 4. Defining the CRD via Go structs
45 |
46 | Start with `at_types.go`. We want to change the `Spec` and `Status`, similar to the CRD lab. This requires changes to `AtSpec` and `AtStatus` respectively. You'll notice a `Foo` field defined in the Spec, which should be removed.
47 |
48 | For `AtSpec`, add `Schedule` and `Command`; both are strings. You will need the `json:"..."` tags and can use the generated `Foo` as an example.
49 |
50 | For `AtStatus`, you need to add a string field named `Phase`.
51 |
52 | To complete the type definitions, and for controller convenience, define the following phases in the `at_types.go` file:
53 |
54 | ```go
55 | const (
56 | 	PhasePending = "PENDING"
57 | 	PhaseRunning = "RUNNING"
58 | 	PhaseDone    = "DONE"
59 | )
60 | ```
61 |
62 | Let's try it out... In a terminal, from the project root, run `make manifests`, which will generate files in the `config` folder, and/or `make install`, which will apply the CRDs to the running Kubernetes cluster.
63 |
64 | ```bash
65 | # after make install the CRD is installed
66 | k get crd
67 | NAME                CREATED AT
68 | ats.cnat.d2iq.com   2020-02-02T09:48:23Z
69 | ```
70 |
71 | Let's create a CR from this CRD...
72 |
73 | ```yaml
74 | apiVersion: cnat.d2iq.com/v1alpha1
75 | kind: At
76 | metadata:
77 |   name: at-sample2
78 | spec:
79 |   schedule: "2020-01-30T10:02:00Z"
80 |   command: "echo YAY"
81 | ```
82 |
83 |
84 | **advanced:** Looking to add the printer columns?
The following kubebuilder markers can be placed before `type At struct`:
85 |
86 | ```go
87 | // +kubebuilder:object:root=true
88 | // +kubebuilder:subresource:status
89 | // +kubebuilder:printcolumn:JSONPath=".spec.schedule", name=Schedule, type=string
90 | // +kubebuilder:printcolumn:JSONPath=".status.phase", name=Phase, type=string
91 | // At is the Schema for the ats API
92 | type At struct {
93 | 	metav1.TypeMeta `json:",inline"`
94 |
95 | ```
96 |
97 | Reinstall the manifests with `make install`.
98 |
99 | Retrieve the CR:
100 |
101 | ```bash
102 | k get at
103 | NAME         SCHEDULE               PHASE
104 | at-sample2   2020-01-30T10:02:00Z
105 | ```
106 |
107 | ----
108 | Now that we have a CRD to work with, let's focus on the controller.
109 |
110 | In the `at_controller.go` file, there are two markers to generate RBAC for the CRD; however, this controller will need permissions for pods as well.
111 |
112 | Find:
113 |
114 | ```go
115 | // +kubebuilder:rbac:groups=cnat.d2iq.com,resources=ats,verbs=get;list;watch;create;update;patch;delete
116 | // +kubebuilder:rbac:groups=cnat.d2iq.com,resources=ats/status,verbs=get;update;patch
117 | ```
118 |
119 | and add:
120 |
121 | ```go
122 | // +kubebuilder:rbac:groups="",resources=pods,verbs=get;list;watch;create;update;patch;delete
123 | // +kubebuilder:rbac:groups="",resources=pods/status,verbs=get;update;patch
124 | ```
125 | For more details on kubebuilder markers read: https://book.kubebuilder.io/reference/markers.html
126 |
127 |
128 | Working in the `func (r *AtReconciler) Reconcile` function, change the logger to use a specific name and some defined structure as follows:
129 |
130 | ```go
131 | logger := r.Log.WithValues("namespace", req.NamespacedName, "at", req.Name)
132 | logger.Info("== Reconciling At")
133 | ```
134 |
135 | Following the logger is a good place to fetch the `At` instance for the request, as follows:
136 |
137 | ```go
138 | // Fetch the At instance
139 | instance := &cnatv1alpha1.At{}
140 | err := r.Get(context.TODO(), req.NamespacedName, instance)
141 | if err != nil {
142 | 	if errors.IsNotFound(err) {
143 | 		// Request object not found, could have been deleted after reconcile request - return and don't requeue:
144 | 		return reconcile.Result{}, nil
145 | 	}
146 | 	// Error reading the object - requeue the request:
147 | 	return reconcile.Result{}, err
148 | }
149 | ```
150 |
151 | Now that we have the instance identified by the request's NamespacedName, let's check to see if it has a phase set; if not, let's initialize it.
152 |
153 | ```go
154 | // If no phase set, default to pending (the initial phase):
155 | if instance.Status.Phase == "" {
156 | 	instance.Status.Phase = cnatv1alpha1.PhasePending
158 | }
159 | ```
160 |
161 | While there is additional logic you will want to add for working an instance through its phases, let's follow this up with a status update, which will define the end of our function just prior to the return.
162 |
163 | ```go
164 | // Update the At instance, setting the status to the respective phase:
165 | err = r.Status().Update(context.TODO(), instance)
166 | if err != nil {
167 | 	return reconcile.Result{}, err
168 | }
169 |
170 | return ctrl.Result{}, nil
171 | ```
172 |
173 | It is now possible to see some work within this controller. You will need to re-run `make install` to set up the new RBAC manifests. Then, to run the controller, run the following:
174 |
175 | `make run`
176 |
177 | If you have an instance of `at`, after running the controller its status should have been updated...
try `k get at` 178 | 179 | Completing the controller requires a couple of support functions for creating the pod and checking the schedule. Add the following functions to the `at_controller.go` file. 180 | 181 | ```go 182 | // newPodForCR returns a busybox pod with the same name/namespace as the cr 183 | func newPodForCR(cr *cnatv1alpha1.At) *corev1.Pod { 184 | labels := map[string]string{ 185 | "app": cr.Name, 186 | } 187 | return &corev1.Pod{ 188 | ObjectMeta: metav1.ObjectMeta{ 189 | Name: cr.Name + "-pod", 190 | Namespace: cr.Namespace, 191 | Labels: labels, 192 | }, 193 | Spec: corev1.PodSpec{ 194 | Containers: []corev1.Container{ 195 | { 196 | Name: "busybox", 197 | Image: "busybox", 198 | Command: strings.Split(cr.Spec.Command, " "), 199 | }, 200 | }, 201 | RestartPolicy: corev1.RestartPolicyOnFailure, 202 | }, 203 | } 204 | } 205 | 206 | // timeUntilSchedule parses the schedule string and returns the time until the schedule. 207 | // When it is overdue, the duration is negative. 208 | func timeUntilSchedule(schedule string) (time.Duration, error) { 209 | now := time.Now().UTC() 210 | layout := "2006-01-02T15:04:05Z" 211 | s, err := time.Parse(layout, schedule) 212 | if err != nil { 213 | return time.Duration(0), err 214 | } 215 | return s.Sub(now), nil 216 | } 217 | ``` 218 | 219 | Finishing the `Reconcile` function, insert the code below after the if body that sets the `instance.Status.Phase = cnatv1alpha1.PhasePending`: 220 | 221 | ```go 222 | // Now let's make the main case distinction: implementing 223 | // the state diagram PENDING -> RUNNING -> DONE 224 | switch instance.Status.Phase { 225 | case cnatv1alpha1.PhasePending: 226 | logger.Info("Phase: PENDING") 227 | // As long as we haven't executed the command yet, we need to check if it's time already to act: 228 | logger.Info("Checking schedule", "Target", instance.Spec.Schedule) 229 | // Check if it's already time to execute the command with a tolerance of 2 seconds: 230 | d, err := timeUntilSchedule(instance.Spec.Schedule) 231 | if err != nil { 232 | logger.Error(err, "Schedule parsing failure") 233 | // Error reading the schedule. Wait until it is fixed. 
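			// note (added): returning a non-nil error causes controller-runtime to requeue
			// this request with backoff, so the schedule string is re-parsed on the next attempt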
234 | return reconcile.Result{}, err 235 | } 236 | logger.Info("Schedule parsing done", "Result", fmt.Sprintf("diff=%v", d)) 237 | if d > 0 { 238 | // Not yet time to execute the command, wait until the scheduled time 239 | return reconcile.Result{RequeueAfter: d}, nil 240 | } 241 | logger.Info("It's time!", "Ready to execute", instance.Spec.Command) 242 | instance.Status.Phase = cnatv1alpha1.PhaseRunning 243 | case cnatv1alpha1.PhaseRunning: 244 | logger.Info("Phase: RUNNING") 245 | pod := newPodForCR(instance) 246 | // Set At instance as the owner and controller 247 | if err := controllerutil.SetControllerReference(instance, pod, r.Scheme); err != nil { 248 | // requeue with error 249 | return reconcile.Result{}, err 250 | } 251 | found := &corev1.Pod{} 252 | err = r.Get(context.TODO(), types.NamespacedName{Name: pod.Name, Namespace: pod.Namespace}, found) 253 | // Try to see if the pod already exists and if not 254 | // (which we expect) then create a one-shot pod as per spec: 255 | if err != nil && errors.IsNotFound(err) { 256 | err = r.Create(context.TODO(), pod) 257 | if err != nil { 258 | // requeue with error 259 | return reconcile.Result{}, err 260 | } 261 | logger.Info("Pod launched", "name", pod.Name) 262 | } else if err != nil { 263 | // requeue with error 264 | return reconcile.Result{}, err 265 | } else if found.Status.Phase == corev1.PodFailed || found.Status.Phase == corev1.PodSucceeded { 266 | logger.Info("Container terminated", "reason", found.Status.Reason, "message", found.Status.Message) 267 | instance.Status.Phase = cnatv1alpha1.PhaseDone 268 | } else { 269 | // don't requeue because it will happen automatically when the pod status changes 270 | return reconcile.Result{}, nil 271 | } 272 | case cnatv1alpha1.PhaseDone: 273 | logger.Info("Phase: DONE") 274 | return reconcile.Result{}, nil 275 | default: 276 | logger.Info("NOP") 277 | return reconcile.Result{}, nil 278 | } 279 | ``` 280 | 281 | If you run the controller again `make run` you should see phase status changing for the CR but it never fully gets to "Done". This is because the controller isn't watching pods yet. 282 | 283 | The final modification needed is in the `SetupWithManager` function. Make the following changes: 284 | 285 | ```go 286 | return ctrl.NewControllerManagedBy(mgr). 287 | For(&cnatv1alpha1.At{}). 288 | Owns(&cnatv1alpha1.At{}). 289 | Owns(&corev1.Pod{}). 290 | Complete(r) 291 | ``` 292 | 293 | --- 294 | **Advanced:** Using the kubernete events 295 | 296 | Take a look a the desription of the at RC using `k describe at at-sample` 297 | 298 | ```bash 299 | k describe at at-sample 300 | Name: at-sample 301 | Namespace: default 302 | Labels: 303 | Annotations: kubectl.kubernetes.io/last-applied-configuration: 304 | {"apiVersion":"cnat.d2iq.com/v1alpha1","kind":"At","metadata":{"annotations":{},"name":"at-sample","namespace":"default"},"spec":{"comman... 305 | API Version: cnat.d2iq.com/v1alpha1 306 | Kind: At 307 | Metadata: 308 | Creation Timestamp: 2020-02-02T11:30:46Z 309 | Generation: 1 310 | Resource Version: 27790 311 | Self Link: /apis/cnat.d2iq.com/v1alpha1/namespaces/default/ats/at-sample 312 | UID: 89d3ef30-8714-47b9-bbf8-664262b00610 313 | Spec: 314 | Command: echo YAY 315 | Schedule: 2020-01-30T10:02:00Z 316 | Status: 317 | Phase: RUNNING 318 | ``` 319 | 320 | Notice there are no "events" against this object. This step of the lab changes that. 
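Once the event recorder is wired up in the steps below, these events can also be queried directly rather than through `describe`; a quick sketch, assuming the CR is still named `at-sample`:

```bash
# list only the events whose involved object is the at-sample custom resource
k get events --field-selector involvedObject.name=at-sample
```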
321 |
322 | Add the `Recorder record.EventRecorder` to the `AtReconciler` struct so that it looks like:
323 |
324 | ```go
325 | // AtReconciler reconciles a At object
326 | type AtReconciler struct {
327 | 	client.Client
328 | 	Log      logr.Logger
329 | 	Scheme   *runtime.Scheme
330 | 	Recorder record.EventRecorder
331 | }
332 | ```
333 |
334 | This struct is initialized in `main.go`. Modify that file to the following:
335 |
336 | ```go
337 | if err = (&controllers.AtReconciler{
338 | 	Client:   mgr.GetClient(),
339 | 	Log:      ctrl.Log.WithName("controllers").WithName("At"),
340 | 	Scheme:   mgr.GetScheme(),
341 | 	Recorder: mgr.GetEventRecorderFor("at-controller"),
342 | ```
343 |
344 | Now modify the `at_controller.go` code to record an event for each transition of the phase status. Below is an example for when the phase is set to "Pending":
345 |
346 | ```go
347 | r.Recorder.Event(instance, "Normal", "PhaseChange", cnatv1alpha1.PhasePending)
348 | ```
349 |
350 | The result of a describe after this modification will now look like:
351 |
352 | ```bash
353 | k describe at at-sample
354 | Name:         at-sample
355 | Namespace:    default
356 | Labels:       <none>
357 | Annotations:  kubectl.kubernetes.io/last-applied-configuration:
358 |                 {"apiVersion":"cnat.d2iq.com/v1alpha1","kind":"At","metadata":{"annotations":{},"name":"at-sample","namespace":"default"},"spec":{"comman...
359 | API Version:  cnat.d2iq.com/v1alpha1
360 | Kind:         At
361 | Metadata:
362 |   Creation Timestamp:  2020-02-02T11:30:46Z
363 |   Generation:          1
364 |   Resource Version:    27790
365 |   Self Link:           /apis/cnat.d2iq.com/v1alpha1/namespaces/default/ats/at-sample
366 |   UID:                 89d3ef30-8714-47b9-bbf8-664262b00610
367 | Spec:
368 |   Command:   echo YAY
369 |   Schedule:  2020-01-30T10:02:00Z
370 | Status:
371 |   Phase:  RUNNING
372 | Events:
373 |   Type    Reason       Age   From           Message
374 |   ----    ------       ----  ----           -------
375 |   Normal  PhaseChange  34s   at-controller  PENDING
376 |   Normal  PhaseChange  34s   at-controller  RUNNING
377 | ```
378 |
379 | ### Notes
380 |
381 | The following are the imports needed in `at_controller.go` for the changes indicated in this lab:
382 |
383 | ```go
384 | import (
385 | 	"context"
386 | 	"fmt"
387 | 	"strings"
388 | 	"time"
389 |
390 | 	"github.com/go-logr/logr"
391 | 	corev1 "k8s.io/api/core/v1"
392 | 	"k8s.io/apimachinery/pkg/api/errors"
393 | 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
394 | 	"k8s.io/apimachinery/pkg/runtime"
395 | 	"k8s.io/apimachinery/pkg/types"
396 | 	"k8s.io/client-go/tools/record"
397 | 	ctrl "sigs.k8s.io/controller-runtime"
398 | 	"sigs.k8s.io/controller-runtime/pkg/client"
399 | 	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
400 | 	"sigs.k8s.io/controller-runtime/pkg/reconcile"
401 |
402 | 	cnatv1alpha1 "github.com/codementor/cnat/api/v1alpha1"
403 | )
404 | ```
405 |
--------------------------------------------------------------------------------
/lab5-kudo.md:
--------------------------------------------------------------------------------
1 | # Lab KUDO
2 |
3 | ## Objective
4 |
5 | The focus of this lab is to become familiar with KUDO. Through this lab you will hand-craft an operator, create an operator repository, and install the operator into your Kubernetes cluster.
6 |
7 | ## Prerequisites
8 |
9 | * Running Kubernetes 1.15+ cluster
10 |
11 | ---
12 |
13 | ## Create an Operator from Scratch
14 |
15 | This is a step-by-step walk-through of creating an operator, using the KUDO CLI to generate the KUDO operator structure.
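Before generating anything, it is worth confirming that the KUDO CLI plugin is installed and on your `PATH` (installing the plugin itself is not covered in this lab); the versions reported will vary:

```bash
# verify the kudo kubectl plugin is available
kubectl kudo version
```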
16 | 17 | 18 | ### Create the Core Operator Structure 19 | 20 | ```bash 21 | # create operator folder 22 | mkdir first-operator 23 | cd first-operator 24 | kubectl kudo package new first-operator 25 | ``` 26 | 27 | This creates the main structure of the operator which can be viewed using the `tree` command: 28 | 29 | ```bash 30 | $ tree . 31 | . 32 | └── operator 33 | ├── operator.yaml 34 | └── params.yaml 35 | ``` 36 | 37 | ::: tip Note 38 | Use the `-i` flag with `kubectl kudo package new` to be prompted interactively for operator details. 39 | ::: 40 | 41 | ### Add a Maintainer 42 | 43 | `kubectl kudo package add maintainer "your name" your@email.com` 44 | 45 | ### Add a Task 46 | 47 | `kubectl kudo package add task` 48 | 49 | This command uses an interactive prompt to construct the details of the task. Here is an example interaction: 50 | 51 | ```bash 52 | $ kubectl kudo package add task 53 | Task Name: app 54 | ✔ Apply 55 | Task Resource: deployment 56 | ✗ Add another Resource: 57 | ``` 58 | 59 | ### Add a Plan 60 | 61 | `kubectl kudo package add plan` 62 | 63 | This command uses an interactive prompt to construct the details of the plan. Here is an example interaction: 64 | 65 | ```bash 66 | $ kubectl kudo package add plan 67 | ✔ Plan Name: deploy 68 | ✔ serial 69 | Phase 1 name: main 70 | ✔ parallel 71 | Step 1 name: everything 72 | ✔ app 73 | ✗ Add another Task: 74 | ✗ Add another Step: 75 | ✗ Add another Phase: 76 | ``` 77 | 78 | ### Add a Parameter 79 | 80 | `kubectl kudo package add parameter` 81 | 82 | This command uses an interactive prompt to construct the details of the parameter. Here is an example interaction: 83 | 84 | ```bash 85 | $ kubectl kudo package add parameter 86 | Parameter Name: replicas 87 | Default Value: 2 88 | Display Name: 89 | Description: Number of replicas that should be run as part of the deployment 90 | ✔ false 91 | ✗ Add Trigger Plan: 92 | ``` 93 | 94 | These steps have created the entirety of the first-operator with the exception of the details in the `template/deployment.yaml` file. To complete this operator execute the following: 95 | 96 | ```bash 97 | cat << EOF > operator/templates/deployment.yaml 98 | apiVersion: apps/v1 99 | kind: Deployment 100 | metadata: 101 | name: nginx-deployment 102 | spec: 103 | selector: 104 | matchLabels: 105 | app: nginx 106 | replicas: {{ .Params.replicas }} 107 | template: 108 | metadata: 109 | labels: 110 | app: nginx 111 | spec: 112 | containers: 113 | - name: nginx 114 | image: nginx:1.7.9 115 | ports: 116 | - containerPort: 80 117 | EOF 118 | ``` 119 | 120 | --- 121 | 122 | ## Package a KUDO operator 123 | 124 | In order to distribute a KUDO operator the files are packaged together in a compressed tarball. The KUDO CLI provides a mechanism to create this package format while verifying the integrity of the operator. 125 | 126 | ### Package KUDO Operator 127 | 128 | ```bash 129 | rm -rf ~/repo 130 | mkdir -p ~/repo 131 | kubectl kudo package create repository/first-operator/operator/ --destination=~/repo 132 | ``` 133 | 134 | ::: warning Potential Data Loss 135 | You may want to check the contents of the `~/repo` folder prior to deleting it. 
136 | ::: 137 | 138 | The output looks like: 139 | 140 | ```bash 141 | kubectl kudo package create repository/first-operator/operator/ --destination=~/repo 142 | package is valid 143 | Package created: /Users/kensipe/repo/first-operator-0.2.0.tgz 144 | ``` 145 | 146 | ### Check to see the operator is built 147 | 148 | ```bash 149 | ls ~/repo 150 | first-operator-0.2.0.tgz 151 | ``` 152 | 153 | --- 154 | 155 | ## Initialize KUDO in a Cluster 156 | 157 | The objective of this lab is to initialize KUDO in a Kubernetes cluster. 158 | 159 | ### Initialize KUDO 160 | 161 | `kubectl kudo init --wait` 162 | 163 | This results in: 164 | 165 | 1. the deployment of KUDO CRDs 166 | 2. the creation of kudo-system namespace 167 | 3. deployment of the kudo controller 168 | 169 | Output of a KUDO init will look like the following: 170 | 171 | ```bash 172 | $ kubectl kudo init 173 | ✅ installed crds 174 | ✅ installed service accounts and other requirements for controller to run 175 | ✅ installed kudo controller 176 | ``` 177 | 178 | ### Check to see KUDO Manager is running 179 | 180 | The installation of KUDO is verified by confirming that the `kudo-controller-manager-0` is in a running status. 181 | 182 | ```bash 183 | $ kubectl get -n kudo-system pod 184 | NAME READY STATUS RESTARTS AGE 185 | kudo-controller-manager-0 1/1 Running 0 11m 186 | ``` 187 | 188 | --- 189 | 190 | ## Host an Operator in a local repository 191 | 192 | This lab explains how to host an operator repository on your local system. 193 | 194 | ### Build Local Index File 195 | 196 | `kubectl kudo repo index ~/repo` 197 | 198 | ### Run Repository HTTP Server 199 | 200 | ```bash 201 | cd ~/repo 202 | python -m http.server 80 203 | ``` 204 | 205 | ### Add the local repository to KUDO client 206 | 207 | `kubectl kudo repo add local http://localhost` 208 | 209 | ### Set the local repository to default KUDO context 210 | 211 | `kubectl kudo repo context local` 212 | 213 | ### Confirm KUDO context 214 | 215 | ```bash 216 | $ kubectl kudo repo list 217 | NAME URL 218 | community https://kudo-repository.storage.googleapis.com/0.10.0 219 | *local http://localhost 220 | ``` 221 | 222 | ::: tip Note 223 | The `*` next to local indicates that it is the default context for the KUDO client. 224 | ::: 225 | 226 | ### Verify you are using the local repository for an installation 227 | 228 | Using the verbose CLI output flag (`-v`) with KUDO it is possible to trace from where an operator is being installed from. 
229 | 230 | `kubectl kudo install first-operator -v 9` 231 | 232 | The output should look like: 233 | 234 | ```bash 235 | $ kubectl kudo install first-operator -v 9 236 | repo configs: { name:community, url:https://kudo-repository.storage.googleapis.com/0.10.0 },{ name:local, url:http://localhost } 237 | 238 | repository used { name:local, url:http://localhost } 239 | configuration from "/Users/kensipe/.kube/config" finds host https://127.0.0.1:32768 240 | acquiring kudo client 241 | getting package crds 242 | no local operator discovered, looking for http 243 | no http discovered, looking for repository 244 | getting package reader for first-operator, _ 245 | repository using: { name:local, url:http://localhost } 246 | attempt to retrieve package from url: http://localhost/first-operator-0.2.0.tgz 247 | first-operator is a repository package from { name:local, url:http://localhost } 248 | operator name: first-operator 249 | operator version: 0.2.0 250 | parameters in use: map[] 251 | operator.kudo.dev/first-operator unchanged 252 | instance first-operator-instance created in namespace default 253 | instance.kudo.dev/v1beta1/first-operator-instance created 254 | ``` 255 | 256 | You will also see in the terminal running python http.server the following: 257 | 258 | ```bash 259 | 127.0.0.1 - - [14/Jan/2020 07:59:24] "GET /index.yaml HTTP/1.1" 200 - 260 | 127.0.0.1 - - [14/Jan/2020 07:59:24] "GET /first-operator-0.2.0.tgz HTTP/1.1" 200 - 261 | ``` 262 | --------------------------------------------------------------------------------