├── .gitignore ├── LICENSE ├── README.md ├── demos ├── dashboard │ └── ingress.yaml ├── loadbalancing │ ├── traefik-common.yaml │ ├── traefik-ngrok.yaml │ └── traefik-publicip.yaml ├── monitoring │ ├── custom-metrics.yaml │ ├── influx-grafana.yaml │ ├── metrics-server.yaml │ ├── prometheus-operator.yaml │ ├── sample-metrics-app.yaml │ └── sample-prometheus-instance.yaml ├── sample-apiserver │ ├── my-flunder.yaml │ └── wardle.yaml ├── sample-webservice │ └── nginx.yaml └── storage │ └── hostpath │ ├── kubeadm.yaml │ └── storageclass.yaml ├── images ├── autoscaling │ ├── Dockerfile │ ├── Makefile │ └── server.js ├── k8s-prometheus-adapter │ └── Makefile ├── nginx │ └── Makefile ├── ngrok │ ├── Dockerfile │ └── Makefile ├── nodejs │ └── Makefile ├── prometheus-operator │ ├── Dockerfile │ └── Makefile ├── prometheus │ ├── Dockerfile │ └── Makefile ├── tiller │ └── Makefile ├── traefik │ ├── Dockerfile │ └── Makefile └── wardle-apiserver │ ├── Dockerfile │ └── Makefile ├── init.sh ├── install.sh ├── kubeadm.yaml ├── pictures ├── 404-traefik.png ├── basicauth.png ├── cluster.jpg ├── custom-metrics-architecture.png ├── dashboard.png └── grafana.png └── tools └── manifest-list-converter └── Makefile /.gitignore: -------------------------------------------------------------------------------- 1 | manifest-tool 2 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2017 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 8 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ### Workshop: 2 | 3 | ## Building a multi-platform Kubernetes cluster on bare metal with `kubeadm` 4 | 5 | Hi and welcome to this tutorial and demonstration of how to build a bare-metal Kubernetes cluster with kubeadm! 6 | 7 | I'm one of the main kubeadm developers and very excited about bare metal as well, 8 | so I thought showing some of the things you can do with Kubernetes/kubeadm would be a great fit! 
9 | 10 | This workshop is a part of my talk at KubeCon Berlin: [Autoscaling a Multi-Platform Kubernetes Cluster Built with kubeadm [I] - Lucas Käldström - YouTube](https://youtu.be/ZdzKQwMjg2w) 11 | 12 | My slides for the presentation are here: http://slides.com/lucask/kubecon-berlin 13 | 14 | ### Highlights 15 | 16 | * Showcases what you can do on bare-metal, even behind a firewall with no public IP address 17 | * Demonstrates usage of cutting-edge technologies like Persistent Storage running on-cluster, Autoscaling based on Custom Metrics and Aggregated API Servers 18 | 19 | What's more, the Kubernetes yaml manifests included in this repository are multi-architecture and work on ARM, both 32- and 64-bit! 20 | 21 | My own setup at home consists of this hardware: 22 | - 2x Up Board, 4 cores @ 1.44 GHz, 2 GB RAM, 1 GbE, 16 GB eMMC, amd64, [Link](http://up-shop.org/up-boards/2-up-board-2gb-16-gb-emmc-memory.html) 23 | - 2x Odroid C2, 4 cores @ 1.5 GHz, 2 GB RAM, 1 GbE, 16 GB eMMC, arm64, [Link](http://www.hardkernel.com/main/products/prdt_info.php) 24 | - 3x Raspberry Pi, 4 cores @ 1.2 GHz, 1 GB RAM, 100 MbE, 16 GB SD Card, arm/arm64, [Link](https://www.raspberrypi.org/products/raspberry-pi-3-model-b/) 25 | 26 | ![Picture of the cluster](pictures/cluster.jpg) 27 | 28 | So, no more small talk then, let's dive right in! 29 | 30 | ### Contents 31 | 32 | This workshop is divided into these parts: 33 | 34 | * Installing kubeadm on all the machines you want in your cluster 35 | * Setting up your Kubernetes master 36 | * Setting up the worker nodes 37 | * Deploying the Pod networking layer 38 | * Deploying the Dashboard and Heapster 39 | * Deploying an Ingress Controller for exposing HTTP services 40 | * Deploying a persistent storage layer on top of Kubernetes with Rook 41 | * Deploying InfluxDB and Grafana for storing and visualizing CPU and memory metrics 42 | * Deploying an extension API Server for extending the Kubernetes API 43 | * Deploying the Prometheus Operator for monitoring Pods in the cluster 44 | * Deploying a sample custom metrics API Server 45 | * Deploying and autoscaling a sample node.js application based on custom metrics 46 | 47 | ### Installing kubeadm on all the machines you want in your cluster 48 | 49 | > WARNING: This workshop uses alpha technologies in order to be on the edge, and the cluster can't be upgraded. 50 | > This means the features used and demonstrated here might work differently in v1.7, and backwards-compatibility isn't guaranteed in any way. 51 | 52 | **Note:** The first part that describes how to install kubeadm is just copied from the [official kubeadm documentation](https://kubernetes.io/docs/getting-started-guides/kubeadm/) 53 | 54 | **Note:** It's expected that you have basic knowledge about how Kubernetes and kubeadm work, because quite advanced concepts are covered in this workshop. 55 | 56 | **Note:** This guide has been tested on Ubuntu Xenial, Yakkety and Zesty 57 | 58 | You can install kubeadm easily this way: 59 | 60 | ```bash 61 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - 62 | cat <<EOF > /etc/apt/sources.list.d/kubernetes.list 63 | deb http://apt.kubernetes.io/ kubernetes-xenial main 64 | EOF 65 | apt-get update 66 | apt-get install -y docker.io kubeadm 67 | ``` 68 | 69 | You should do this on all machines you're planning to include in your cluster, and these commands are exactly the same regardless of which architecture you're on.
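If you want to sanity-check the installation on each machine before moving on, something like this should do it (the exact version output will of course vary with what you installed):

```console
$ kubeadm version
$ docker --version
```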
70 | 71 | ### Setting up your Kubernetes master 72 | 73 | SSH into your master node, and switch to the `root` account of the machine or use `sudo` everywhere below. 74 | 75 | As mentioned earlier, experimental features of different kinds will be used in this tutorial to show off the latest and greatest features in Kubernetes. 76 | 77 | kubeadm, for example, can take options from a configuration file in order to be customized easily. 78 | But the API exposed right now is _not_ stable, and under heavy development. So this will definitely change (for the better) in time for v1.7. 79 | 80 | The configuration file we'll use here looks like this in `kubeadm.yaml`: 81 | 82 | ```yaml 83 | kind: MasterConfiguration 84 | apiVersion: kubeadm.k8s.io/v1alpha1 85 | controllerManagerExtraArgs: 86 | horizontal-pod-autoscaler-use-rest-clients: "true" 87 | horizontal-pod-autoscaler-sync-period: "10s" 88 | node-monitor-grace-period: "10s" 89 | apiServerExtraArgs: 90 | runtime-config: "api/all=true" 91 | kubernetesVersion: "stable-1.8" 92 | ``` 93 | 94 | A brief walkthrough of what the statements mean: 95 | - `horizontal-pod-autoscaler-use-rest-clients: "true"` tells the controller manager to look for the [custom metrics API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/custom-metrics-api.md) - `horizontal-pod-autoscaler-sync-period: "10s"` and `node-monitor-grace-period: "10s"` make the autoscaler and the node controller react faster than the defaults, which is handy for a demo - `runtime-config: "api/all=true"` enables all API groups, including the alpha ones used later in this workshop - `kubernetesVersion: "stable-1.8"` pins the control plane to the latest stable v1.8 release 96 | 97 | You can now go ahead and initialize the master node with this command (assuming you're `root`, prepend `sudo` if not): 98 | 99 | ```console 100 | $ kubeadm init --config kubeadm.yaml 101 | ``` 102 | 103 | Make sure you have kubeadm v1.8.0-beta.1 or higher and Docker 1.12.x. 104 | In order to control your cluster securely, you need to set the `KUBECONFIG` variable so that `kubectl` knows where to look for the admin credentials. 105 | Here is an example of how to do it as a regular user: 106 | 107 | ```bash 108 | sudo cp /etc/kubernetes/admin.conf $HOME/ 109 | sudo chown $(id -u):$(id -g) $HOME/admin.conf 110 | export KUBECONFIG=$HOME/admin.conf 111 | ``` 112 | 113 | #### Make the `kube-proxy` DaemonSet multi-platform 114 | 115 | Since `kube-proxy` runs in a DaemonSet, it will be scheduled on all nodes. By default, an image with the architecture that `kubeadm init` is run on 116 | is used in the DaemonSet, so if you ran `kubeadm init` on an `arm64` machine, the `kube-proxy` image will be `gcr.io/google_containers/kube-proxy-arm64`. 117 | 118 | To make it possible to add nodes with other architectures, we have to switch the image to a manifest list 119 | that references the architecture-specific images: 120 | 121 | ```console 122 | $ kubectl -n kube-system set image daemonset/kube-proxy kube-proxy=luxas/kube-proxy:v1.8.0-beta.1 123 | ``` 124 | 125 | With this change, `kube-proxy` will come up successfully on whatever node you bring to your cluster. 126 | 127 | #### Deploying the Pod networking layer 128 | 129 | The networking layer in Kubernetes is extensible, and you may pick the networking solution that fits you the best. 130 | I've tested this with Weave Net, but it should work with any other compliant provider.
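Right after `kubeadm init`, the master node will report a `NotReady` status until a Pod network has been deployed; that's expected. You can keep an eye on it like this (an illustrative check, your node name and versions will differ):

```console
$ kubectl get nodes -o wide
```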
131 | 132 | Here's how to use Weave Net as the networking provider, the really easy way: 133 | 134 | ```console 135 | $ kubectl apply -f https://git.io/weave-kube-1.6 136 | ``` 137 | 138 | **OR** you can run these two commands if you want to encrypt the communication between nodes: 139 | 140 | ```console 141 | $ kubectl create secret -n kube-system generic weave-passwd --from-literal=weave-passwd=$(hexdump -n 16 -e '4/4 "%08x" 1 "\n"' /dev/random) 142 | $ kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&password-secret=weave-passwd" 143 | ``` 144 | 145 | ### Setting up the worker nodes 146 | 147 | `kubeadm init` above will print out a `kubeadm join` command for you to paste for joining the other nodes in your cluster to the master. 148 | 149 | **Note:** Make sure you join all nodes before you arch-taint the nodes (if you do)! 150 | 151 | ```console 152 | $ kubeadm join --token <token> <master-ip>:<master-port> 153 | ``` 154 | 155 | #### Taints and tolerations 156 | 157 | [`Taints and Tolerations`](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/taint-toleration-dedicated.md) 158 | is a concept of dedicated nodes. Simply put, if you taint a node with a key/value pair and the effect `NoSchedule`, it will reject all Pods 159 | that don't have the same key/value set in the `Tolerations` field of the `PodSpec`. 160 | 161 | By default, the master is tainted with the `node-role.kubernetes.io/master=""` key/value pair, which will make it only allow the `kube-dns` Deployment, 162 | the `kube-proxy` DaemonSet and most often the CNI network provider's DaemonSet, because they have the toleration. 163 | 164 | In case you only have one node available for testing and want to run normal workloads on the master as well (allow all workloads on the master), 165 | run this command: 166 | 167 | ```console 168 | $ kubectl taint nodes --all node-role.kubernetes.io/master- 169 | ``` 170 | 171 | If you want the default architecture to be `amd64`, and you know you might deploy workloads that aren't multi-platform, it's best to taint the 172 | "special" nodes of another architecture and explicitly tolerate ARM (32- and 64-bit) on the workloads that support it. 173 | 174 | You can taint your arm and arm64 nodes with these commands: 175 | 176 | ```console 177 | $ kubectl taint node <arm-node-name> beta.kubernetes.io/arch=arm:NoSchedule 178 | $ kubectl taint node <arm64-node-name> beta.kubernetes.io/arch=arm64:NoSchedule 179 | ``` 180 | 181 | ### Deploying the Dashboard and Heapster 182 | 183 | I really like visualizing the cluster resources in the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) (although I'm mostly a CLI guy). 184 | 185 | You can install the dashboard with this command: 186 | 187 | ```console 188 | $ curl -sSL https://git.io/kube-dashboard | sed "s|image:.*|image: luxas/kubernetes-dashboard:v1.6.3|" | kubectl apply -f - 189 | serviceaccount "dashboard" created 190 | clusterrolebinding "dashboard-admin" created 191 | deployment "kubernetes-dashboard" created 192 | service "kubernetes-dashboard" created 193 | ``` 194 | 195 | You probably want some monitoring as well; if you install [Heapster](https://github.com/kubernetes/heapster), you can easily keep track of the CPU and 196 | memory usage in your cluster. Those stats will also be shown in the dashboard!
197 | 198 | ```console 199 | $ kubectl apply -f demos/monitoring/heapster.yaml 200 | serviceaccount "heapster" created 201 | clusterrolebinding "heapster" created 202 | deployment "heapster" created 203 | service "heapster" created 204 | ``` 205 | 206 | You should now see some Services in the `kube-system` namespace: 207 | 208 | ```console 209 | $ kubectl -n kube-system get svc 210 | NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE 211 | heapster 10.104.142.79 <none> 80/TCP 5s 212 | kube-dns 10.96.0.10 <none> 53/UDP,53/TCP 42s 213 | kubernetes-dashboard 10.97.73.205 <none> 80/TCP 11s 214 | ``` 215 | 216 | After `heapster` is up and running (check with `kubectl -n kube-system get pods`), you should be able to see the 217 | CPU and memory usage of the nodes in the cluster and for individual Pods: 218 | 219 | ```console 220 | $ kubectl top nodes 221 | NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 222 | test-node 131m 1% 9130Mi 30% 223 | ``` 224 | 225 | ### Deploying an Ingress Controller for exposing HTTP services 226 | 227 | Now that you have created the dashboard and heapster Deployments and Services, how can you access them? 228 | 229 | One solution might be making your Services of the NodePort type, but that's not a good long-term solution. 230 | 231 | Instead, there is the Ingress object in Kubernetes that lets you create rules for how Services in your cluster should be exposed to the world. 232 | Before one can create Ingress rules, you need an Ingress Controller that watches for rules, applies them and forwards requests as specified. 233 | 234 | One Ingress Controller provider is [Traefik](https://traefik.io), and I'm using that one here. 235 | 236 | In this demo I go a step further. Normally, exposing an app you have locally to the internet requires that one of your machines has a public Internet 237 | address. We can work around this very smoothly in a Kubernetes cluster by letting [Ngrok](https://ngrok.io) forward requests from a public subdomain of `ngrok.io` to the 238 | Traefik Ingress Controller that's running in our cluster. 239 | 240 | Using ngrok here is perfect for hybrid clusters where you have no control over the network you're connected to... you just have internet access. 241 | Also, this method can be used in nearly any environment and will behave the same. But for production deployments (which we aren't dealing with here), 242 | you should of course expose a real loadbalancer node with a public IP. 243 | 244 | ```console 245 | $ kubectl apply -f demos/loadbalancing/traefik-common.yaml 246 | clusterrole "traefik-ingress-controller" created 247 | serviceaccount "traefik-ingress-controller" created 248 | clusterrolebinding "traefik-ingress-controller" created 249 | configmap "traefik-cfg" created 250 | 251 | $ kubectl apply -f demos/loadbalancing/traefik-ngrok.yaml 252 | deployment "traefik-ingress-controller" created 253 | service "traefik-ingress-controller" created 254 | service "traefik-web" created 255 | configmap "ngrok-cfg" created 256 | deployment "ngrok" created 257 | service "ngrok" created 258 | 259 | $ curl -sSL $(kubectl -n kube-system get svc ngrok -o template --template "{{.spec.clusterIP}}")/api/tunnels | jq ".tunnels[].public_url" | sed 's/"//g;/http:/d' 260 | https://foobarxyz.ngrok.io 261 | ``` 262 | 263 | You can now try to access the ngrok URL that was output by the above command. It will first ask you for a password and then return a 404 due to the absence of Ingress 264 | rules.
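If you'd rather check this from the command line, a quick sketch with `curl` (substitute the URL you got from the previous command for the placeholder below):

```console
$ curl -i https://foobarxyz.ngrok.io/
```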
265 | 266 | ![Authenticate to Traefik](pictures/basicauth.png) 267 | 268 | ![404 with no Ingress rules](pictures/404-traefik.png) 269 | 270 | Let's change that by creating an Ingress rule! 271 | 272 | #### Exposing the Dashboard via the Ingress Controller 273 | 274 | We want to expose the dashboard to our newly-created public URL, under the `/dashboard` path. 275 | 276 | That's easily achievable using this command: 277 | 278 | ```console 279 | $ kubectl apply -f demos/dashboard/ingress.yaml 280 | ingress "kubernetes-dashboard" created 281 | ``` 282 | 283 | The Traefik Ingress Controller is set up to require basic auth before one can access the services. 284 | 285 | I've set the username to `kubernetes` and the password to `rocks!`. You can obviously change this if you want by editing the `traefik-common.yaml` before deploying 286 | the Ingress Controller. 287 | 288 | When you've signed in to `https://{ngrok url}/dashboard/` (note the `/` in the end, it's required), you'll see a dashboard like this: 289 | 290 | ![The Kubernetes Dashboard](pictures/dashboard.png) 291 | 292 | ### Deploying a persistent storage layer on top of Kubernetes with Rook 293 | 294 | Stateless services are cool, but deploying stateful applications on your Kubernetes cluster is even more fun. 295 | 296 | For that you need somewhere to store persistent data, and that's not easy to achieve on bare metal. [Rook](https://github.com/rook/rook) is a promising project 297 | aiming to solve this by building a Kubernetes integration layer upon the battle-tested Ceph storage solution. 298 | 299 | Rook is using `ThirdPartyResources` for knowing how to set up your storage solution, and has an [operator](https://github.com/rook/rook/tree/master/cmd/rook-operator) 300 | that is listening for these TPRs. 301 | 302 | Here is how to create a default Rook cluster by deploying the operator, a controller that will listen for PersistentVolumeClaims that need binding, a Rook Cluster 303 | ThirdPartyResource and finally a StorageClass. 
304 | 305 | ```console 306 | $ kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-operator.yaml 307 | clusterrole "rook-operator" created 308 | serviceaccount "rook-operator" created 309 | clusterrolebinding "rook-operator" created 310 | deployment "rook-operator" created 311 | 312 | $ kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-cluster.yaml 313 | cluster "my-rook" created 314 | 315 | $ kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-storageclass.yaml 316 | pool "replicapool" created 317 | storageclass "rook-block" created 318 | 319 | $ # Repeat this step for all namespaces you want to deploy PersistentVolumes with Rook in 320 | $ kubectl get secret rook-rook-user -oyaml | sed "/resourceVer/d;/uid/d;/self/d;/creat/d;/namespace/d" | kubectl -n kube-system apply -f - 321 | secret "rook-rook-user" created 322 | 323 | $ # In order to make Rook the default Storage Provider by making the `rook-block` Storage Class the default, run this: 324 | $ kubectl patch storageclass rook-block -p '{"metadata":{"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' 325 | storageclass "rook-block" patched 326 | 327 | $ apt-get update && apt-get install ceph-common -y 328 | ``` 329 | 330 | One limitation with v0.3.0 is that you can't control to which namespaces the rook authentication Secret should be deployed, so if you want to create 331 | `PersistentVolumes` in another namespace than `default`, run the above `kubectl` command for that namespace as well. 332 | 333 | ### Deploying InfluxDB and Grafana for storing and visualizing CPU and memory metrics 334 | 335 | Now that we have persistent storage in our cluster, we can deploy some stateful services. For example, we can store monitoring data aggregated by Heapster 336 | in an InfluxDB database and visualize that data with a Grafana dashboard. 337 | 338 | You must do this if you want to keep CPU/memory data from Heapster for a longer time; by default, Heapster just saves data from the latest couple of minutes. 339 | 340 | ```console 341 | $ kubectl apply -f demos/monitoring/influx-grafana.yaml 342 | persistentvolumeclaim "grafana-pv-claim" created 343 | persistentvolumeclaim "influxdb-pv-claim" created 344 | deployment "monitoring-grafana" created 345 | service "monitoring-grafana" created 346 | deployment "monitoring-influxdb" created 347 | service "monitoring-influxdb" created 348 | ingress "monitoring-grafana" created 349 | ``` 350 | 351 | Note that an Ingress rule was created for Grafana automatically. You can access your Grafana instance at the `https://{ngrok url}/grafana/` URL. 352 | 353 | ![Grafana dashboard](pictures/grafana.png) 354 | 355 | ### Sample API Server 356 | 357 | The core API Server is great, but what if you want to write your own, extended API server that contains more high-level features built on top of Kubernetes, 358 | while still being able to control those high-level features from kubectl?
This is now possible using the API Aggregation feature that will make it into beta in v1.7 359 | 360 | First, let's check which API groups are available normally: 361 | 362 | ```console 363 | $ kubectl api-versions 364 | apiregistration.k8s.io/v1beta1 365 | apps/v1beta1 366 | authentication.k8s.io/v1 367 | authentication.k8s.io/v1beta1 368 | authorization.k8s.io/v1 369 | authorization.k8s.io/v1beta1 370 | autoscaling/v1 371 | autoscaling/v2alpha1 372 | batch/v1 373 | batch/v2alpha1 374 | certificates.k8s.io/v1beta1 375 | extensions/v1beta1 376 | policy/v1beta1 377 | rbac.authorization.k8s.io/v1alpha1 378 | rbac.authorization.k8s.io/v1beta1 379 | rook.io/v1beta1 380 | settings.k8s.io/v1alpha1 381 | storage.k8s.io/v1 382 | storage.k8s.io/v1beta1 383 | v1 384 | ``` 385 | 386 | It's pretty straightforward to write your own API server now with the break-out of [`k8s.io/apiserver`](https://github.com/kubernetes/apiserver). 387 | The `sig-api-machinery` team has also given us a sample implementation: [`k8s.io/sample-apiserver`](https://github.com/kubernetes/sample-apiserver). 388 | 389 | The sample API Server called wardle, contains one API group: `wardle.k8s.io/v1alpha1` and one API resource in that group: `Flunder` 390 | This guide shows how easy it will be to extend the Kubernetes API in the future. 391 | 392 | The sample API Server saves its data to a separate etcd instance running in-cluster. Notice the PersistentVolume that is created for etcd for that purpose. 393 | Note that in the future, the etcd Operator should probably be used for running etcd instead of running it manually like now. 394 | 395 | ```console 396 | $ kubectl apply -f demos/sample-apiserver/wardle.yaml 397 | namespace "wardle" created 398 | persistentvolumeclaim "etcd-pv-claim" created 399 | serviceaccount "apiserver" created 400 | clusterrolebinding "wardle:system:auth-delegator" created 401 | rolebinding "wardle-auth-reader" created 402 | deployment "wardle-apiserver" created 403 | service "api" created 404 | apiservice "v1alpha1.wardle.k8s.io" created 405 | 406 | $ kubectl get secret rook-rook-user -oyaml | sed "/resourceVer/d;/uid/d;/self/d;/creat/d;/namespace/d" | kubectl -n wardle apply -f - 407 | secret "rook-rook-user" created 408 | ``` 409 | 410 | After a few minutes, when the extended API server is up and running, `kubectl` will auto-discover that API group and it will be possible to 411 | create, list and delete Flunder objects just as any other API object. 412 | 413 | ```console 414 | $ kubectl api-versions 415 | apiregistration.k8s.io/v1beta1 416 | apps/v1beta1 417 | authentication.k8s.io/v1 418 | authentication.k8s.io/v1beta1 419 | authorization.k8s.io/v1 420 | authorization.k8s.io/v1beta1 421 | autoscaling/v1 422 | autoscaling/v2alpha1 423 | batch/v1 424 | batch/v2alpha1 425 | certificates.k8s.io/v1beta1 426 | extensions/v1beta1 427 | policy/v1beta1 428 | rbac.authorization.k8s.io/v1alpha1 429 | rbac.authorization.k8s.io/v1beta1 430 | rook.io/v1beta1 431 | settings.k8s.io/v1alpha1 432 | storage.k8s.io/v1 433 | storage.k8s.io/v1beta1 434 | v1 435 | ***wardle.k8s.io/v1alpha1*** 436 | 437 | $ # There is no foobarbaz resource, but the flunders resource does now exist 438 | $ kubectl get foobarbaz 439 | the server doesn't have a resource type "foobarbaz" 440 | 441 | $ kubectl get flunders 442 | No resources found. 
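$ # Optional extra check: the wardle APIService object should now exist, and describing it shows whether it reports as Available
$ kubectl describe apiservice v1alpha1.wardle.k8s.io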
443 | 444 | $ kubectl apply -f demos/sample-apiserver/my-flunder.yaml 445 | flunder "my-first-flunder" created 446 | ``` 447 | 448 | If you want to make sure this is real, you can check the etcd database running in-cluster with this command: 449 | 450 | ```console 451 | $ kubectl -n wardle exec -it $(kubectl -n wardle get po -l app=wardle-apiserver -otemplate --template "{{ (index .items 0).metadata.name}}") -c etcd /bin/sh -- -c "ETCDCTL_API=3 etcdctl get /registry/wardle.kubernetes.io/registry/wardle.kubernetes.io/wardle.k8s.io/flunders/my-first-flunder" | grep -v /registry/wardle | jq . 452 | { 453 | "kind": "Flunder", 454 | "apiVersion": "wardle.k8s.io/v1alpha1", 455 | "metadata": { 456 | "name": "my-first-flunder", 457 | "uid": "bef75e16-2c5b-11e7-999c-1602732a5d02", 458 | "creationTimestamp": "2017-04-28T21:43:41Z", 459 | "labels": { 460 | "sample-label": "true" 461 | }, 462 | "annotations": { 463 | "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"wardle.k8s.io/v1alpha1\",\"kind\":\"Flunder\",\"metadata\":{\"annotations\":{},\"labels\":{\"sample-label\":\"true\"},\"name\":\"my-first-flunder\",\"namespace\":\"default\"}}\n" 464 | } 465 | }, 466 | "spec": {}, 467 | "status": {} 468 | } 469 | ``` 470 | 471 | Conclusion, the Flunder object we created was saved in the separate etcd instance! 472 | 473 | ### Deploying the Prometheus Operator for monitoring Services in the cluster 474 | 475 | [Prometheus](prometheus.io) is a great monitoring solution, and combining it with Kubernetes makes it even more awesome. 476 | 477 | These commands will first deploy the [Prometheus operator](https://github.com/coreos/prometheus-operator) as well as one Prometheus instance by creating a `Prometheus` 478 | ThirdPartyResource. 479 | 480 | A lightweight nodejs application is deployed as well, which exports the `http_requests_total` metric at `/metrics`. 481 | A `ServiceMonitor` ThirdPartyResource is created that match the sample metrics app by the `app=sample-metrics-app` label. 482 | 483 | The ServiceMonitor will make the Prometheus instance scrape metrics from the sample metrics web app. 484 | 485 | You can access the Prometheus web UI via the NodePort or the internal Service. 486 | 487 | ```console 488 | $ kubectl apply -f demos/monitoring/prometheus-operator.yaml 489 | clusterrole "prometheus-operator" created 490 | serviceaccount "prometheus-operator" created 491 | clusterrolebinding "prometheus-operator" created 492 | deployment "prometheus-operator" created 493 | 494 | $ kubectl apply -f demos/monitoring/sample-prometheus-instance.yaml 495 | clusterrole "prometheus" created 496 | serviceaccount "prometheus" created 497 | clusterrolebinding "prometheus" created 498 | prometheus "sample-metrics-prom" created 499 | service "sample-metrics-prom" created 500 | 501 | $ kubectl get svc 502 | NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE 503 | kubernetes 10.96.0.1 443/TCP 30m 504 | prometheus-operated None 9090/TCP 4m 505 | sample-metrics-prom 10.108.71.184 9090:30999/TCP 4m 506 | ``` 507 | 508 | ### Deploying a custom metrics API Server and a sample app 509 | 510 | In v1.6, the Horizontal Pod Autoscaler controller can now consume custom metrics for autoscaling. 511 | For this to work, one needs to have enabled the `autoscaling/v2alpha1` API group which makes it possible 512 | to create Horizontal Pod Autoscaler resources of the new version. 
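Since the `kubeadm.yaml` used for `kubeadm init` earlier sets `runtime-config: "api/all=true"`, this group should already be enabled in this cluster; a quick, illustrative way to double-check:

```console
$ kubectl api-versions | grep autoscaling
```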
513 | 514 | Also, one must have API aggregation enabled (which is the case in this demo) and an extension API Server that 515 | provides the `custom-metrics.metrics.k8s.io/v1alpha1` API group/version. 516 | 517 | There won't be an "official" one-size-fits-all custom metrics API server; instead, there will be a boilerplate 518 | people can use as the base for creating custom monitoring solutions. 519 | 520 | I've built an example custom metrics server that queries a Prometheus instance for metrics data and exposes them 521 | in the custom metrics Kubernetes API. You can think of this custom metrics server as a shim/conversion layer between 522 | Prometheus data and the Horizontal Pod Autoscaling API for Kubernetes. 523 | 524 | Here is a diagram of how this works at a high level: 525 | 526 | ![Custom Metrics Architecture](pictures/custom-metrics-architecture.png) 527 | 528 | You can also read the full custom metrics API proposal [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/custom-metrics-api.md). 529 | 530 | ```console 531 | $ kubectl apply -f demos/monitoring/custom-metrics.yaml 532 | namespace "custom-metrics" created 533 | serviceaccount "custom-metrics-apiserver" created 534 | clusterrolebinding "custom-metrics:system:auth-delegator" created 535 | rolebinding "custom-metrics-auth-reader" created 536 | clusterrole "custom-metrics-read" created 537 | clusterrolebinding "custom-metrics-read" created 538 | deployment "custom-metrics-apiserver" created 539 | service "api" created 540 | apiservice "v1alpha1.custom-metrics.metrics.k8s.io" created 541 | clusterrole "custom-metrics-server-resources" created 542 | clusterrolebinding "hpa-controller-custom-metrics" created 543 | ``` 544 | 545 | If you want to be able to `curl` the custom metrics API server easily (i.e. allow anyone to access the Custom Metrics API), you can 546 | run this `kubectl` command: 547 | 548 | ```console 549 | $ kubectl create clusterrolebinding allowall-cm --clusterrole custom-metrics-server-resources --user system:anonymous 550 | clusterrolebinding "allowall-cm" created 551 | ``` 552 | Next, deploy the sample metrics app; the same manifest also creates its Service, ServiceMonitor, HorizontalPodAutoscaler and Ingress rule: 553 | ```console 554 | $ kubectl apply -f demos/monitoring/sample-metrics-app.yaml 555 | deployment "sample-metrics-app" created 556 | service "sample-metrics-app" created 557 | servicemonitor "sample-metrics-app" created 558 | horizontalpodautoscaler "sample-metrics-app-hpa" created 559 | ingress "sample-metrics-app" created 560 | ``` 561 | 562 | Now that we have our sample app, we should generate some load against it! 563 | If you don't have [rakyll's](https://github.com/rakyll) excellent [hey](https://github.com/rakyll/hey) load generator already, you can install it this way: 564 | 565 | ```console 566 | $ # Install hey 567 | $ docker run -it -v /usr/local/bin:/go/bin golang:1.8 go get github.com/rakyll/hey 568 | 569 | $ export APP_ENDPOINT=$(kubectl get svc sample-metrics-app -o template --template {{.spec.clusterIP}}); echo ${APP_ENDPOINT} 570 | $ hey -n 50000 -c 1000 http://${APP_ENDPOINT} 571 | ``` 572 | 573 | Then you can go and check out the Custom Metrics API; it should show that a lot of requests have been served recently.
574 | 575 | ```console 576 | $ curl -sSLk https://10.96.0.1/apis/custom-metrics.metrics.k8s.io/v1alpha1/namespaces/default/services/sample-metrics-app/http_requests 577 | { 578 | "kind": "MetricValueList", 579 | "apiVersion": "custom-metrics.metrics.k8s.io/v1alpha1", 580 | "metadata": { 581 | "selfLink": "/apis/custom-metrics.metrics.k8s.io/v1alpha1/namespaces/default/services/sample-metrics-app/http_requests" 582 | }, 583 | "items": [ 584 | { 585 | "describedObject": { 586 | "kind": "Service", 587 | "name": "sample-metrics-app", 588 | "apiVersion": "/__internal" 589 | }, 590 | "metricName": "http_requests", 591 | "timestamp": "2017-06-30T20:56:34Z", 592 | "value": "501484m" 593 | } 594 | ] 595 | } 596 | ``` 597 | 598 | You can query custom metrics for individual Pods as well: 599 | 600 | ```console 601 | $ kubectl get po 602 | NAME READY STATUS RESTARTS AGE 603 | prometheus-operator-815607840-zknhk 1/1 Running 0 38m 604 | prometheus-sample-metrics-prom-0 2/2 Running 0 33m 605 | rook-operator-3393217773-sglsv 1/1 Running 0 28m 606 | sample-metrics-app-3083280453-3hbd8 1/1 Running 0 33m 607 | sample-metrics-app-3083280453-fbds8 1/1 Running 0 1m 608 | 609 | 610 | $ curl -sSLk https://10.96.0.1/apis/custom-metrics.metrics.k8s.io/v1alpha1/namespaces/default/pods/sample-metrics-app-3083280453-3hbd8/http_requests 611 | { 612 | "kind": "MetricValueList", 613 | "apiVersion": "custom-metrics.metrics.k8s.io/v1alpha1", 614 | "metadata": { 615 | "selfLink": "/apis/custom-metrics.metrics.k8s.io/v1alpha1/namespaces/default/pods/sample-metrics-app-3083280453-3hbd8/http_requests" 616 | }, 617 | "items": [ 618 | { 619 | "describedObject": { 620 | "kind": "Pod", 621 | "name": "sample-metrics-app-3083280453-3hbd8", 622 | "apiVersion": "/__internal" 623 | }, 624 | "metricName": "http_requests", 625 | "timestamp": "2017-06-30T21:00:46Z", 626 | "value": "433m" 627 | } 628 | ] 629 | } 630 | ``` 631 | 632 | #### Install `helm` 633 | 634 | [Helm](https://github.com/kubernetes/helm) is a package manager for applications running on top of Kubernetes. 635 | You can read more about Helm in the official repository, for now we're just gonna install it. 636 | 637 | Below you'll see the famous `curl | bash` pattern for installing an application, and yes, I know it's discouraged. 638 | But I'm doing it this way here to keep the tutorial short and concise, hardening the helm installation is left as an excercise to the user. 639 | 640 | Then we're running `helm init` that will deploy its server side component and set up local cache at `~/.helm`. Make sure the `KUBECONFIG` 641 | environment variable is set to point to the kubeconfig file for your kubeadm cluster. 642 | 643 | ```console 644 | $ curl -sSL https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash 645 | $ helm init 646 | ``` 647 | 648 | `tiller` is the server-side component of the Helm ecosystem, and handles installation, upgrades and more. 649 | By default in version v2.3.x, `helm init` installs tiller without any RBAC privileges. This means tiller won't be able to install any apps unless 650 | we give it some RBAC permissions. However, knowing what `tiller` is gonna install for the user in beforehand is very hard, so the only way to make it 651 | work in all cases is to give it full privileges (root access) to the cluster. We're doing this by binding the `tiller` ServiceAccount here to the 652 | very powerful `cluster-admin` ClusterRole. 
653 | 654 | ```console 655 | $ kubectl -n kube-system create serviceaccount tiller 656 | $ kubectl -n kube-system patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccountName":"tiller"}}}}' 657 | $ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller 658 | $ kubectl -n kube-system set image deploy/tiller-deploy tiller=luxas/tiller:v2.5.1 659 | ``` 660 | 661 | #### Deploying the Service Catalog 662 | 663 | The [Service Catalog](https://github.com/kubernetes-incubator/service-catalog) Kubernetes project is super-interesting and promising. 664 | 665 | If you're interested in the concept, watch these two KubeCon talks: 666 | - Steward, the Kubernetes-Native Service Broker [A] - Gabe Monroy, Deis: [YouTube video](https://youtu.be/PNPVDKrbgsE?list=PLj6h78yzYM2PAavlbv0iZkod4IVh_iGqV) 667 | - The Open Service Broker API and the Kubernetes Service Catalog [B] - Paul Morie & Chip Childers: [YouTube video](https://youtu.be/p35hOAAsxrQ?list=PLj6h78yzYM2PAavlbv0iZkod4IVh_iGqV) 668 | 669 | Anyway, here's how to install the Service Catalog on your `kubeadm` cluster: 670 | 671 | ```console 672 | $ git clone https://github.com/luxas/service-catalog -b workshop 673 | $ # First install the Service Catalog API Server and Controller Manager and then a sample Broker 674 | $ helm install service-catalog/charts/catalog --name catalog --namespace catalog 675 | $ helm install service-catalog/charts/ups-broker --name ups-broker --namespace ups-broker 676 | ``` 677 | 678 | I highly recommend this [Service Catalog Walkthrough](https://github.com/kubernetes-incubator/service-catalog/blob/master/docs/walkthrough.md). 679 | 680 | TL;DR: Now that our Service Catalog API Server is there, we can `kubectl get` the resources: 681 | 682 | ```console 683 | $ kubectl get instances,bindings,serviceclasses,brokers 684 | ... 685 | ``` 686 | 687 | You can, for example, make an Instance and a Binding to the sample `ups-broker` you installed above like this: 688 | 689 | ```console 690 | $ kubectl apply -f demos/service-catalog/example.yaml 691 | namespace "test-ns" created 692 | instance "ups-instance" created 693 | binding "ups-binding" created 694 | 695 | $ # Since the binding referenced a new Secret called "my-secret", the Service Catalog should now have created it for you: 696 | $ kubectl -n test-ns get secret my-secret 697 | TODO 698 | ``` 699 | ### Manifest list images 700 | 701 | All the source for building the images used in this demo is available under `images/`. 702 | 703 | You simply need to `cd` into the directory and run `REGISTRY=foo make push`, setting the `REGISTRY` 704 | variable to, for example, your Docker Hub account, where you have push rights. 705 | 706 | All pushed images follow the pattern `REGISTRY/IMAGE-ARCH:VERSION`, plus a manifest list of the form 707 | `REGISTRY/IMAGE:VERSION` that references the architecture-specific images. 708 | 709 | Currently, images are pushed for `amd64`, `arm` and `arm64`. 710 | 711 | ### Acknowledgements / More reference 712 | 713 | I'd like to thank some people that have been very helpful to me while putting together this workshop. 714 | 715 | **David Eads** ([@deads2k](https://github.com/deads2k)) has been very helpful to me and answered my questions about API aggregation, RBAC, etc. 716 | 717 | **Solly Ross** ([@DirectXMan12](https://github.com/DirectXMan12)) has worked on the custom metrics API and helped me quickly understand 718 | the essential parts of it.
He also uploaded a [custom metrics API Server boilerplate](https://github.com/DirectXMan12/custom-metrics-boilerplate) 720 | which I've used as the base for my custom metrics implementation. 721 | 722 | Also, these I want to thank the maintainers of the great projects below. Let's be grateful for all the 723 | really nice projects that are open sourced on Github. 724 | 725 | **Prometheus Operator by CoreOS**: The Prometheus is an integral part of the custom metrics service in 726 | this workshop, it made it super-easy to create managed Prometheus instances with the TPR! 727 | 728 | **Prometheus by CNCF**: Some projects are just rock-solid. The Prometheus core is such a project. 729 | Monitoring made available for everyone, simply. 730 | 731 | **Rook by Quantum**: Rook is a very interesting and promising project and I'm excited to see how this 732 | project can be brought into something stable and reliable in the future. 733 | 734 | **Traefik by Containous**: Traefik is a powerful loadbalancer, and I love the Kubernetes integration it has. 735 | Also, with the Prometheus exporter integration in v1.2, it got even cooler. 736 | 737 | **Weave by Weaveworks**: Weave is a distributed networking system that plays very well with Kubernetes, it also 738 | is CNI-compliant, which is a good thing. 739 | 740 | 741 | ### Future work / contributing 742 | 743 | This workshop uses my own custom-built images under the `luxas` Docker Hub user. 744 | This is only a temporary solution while I carry patches I had to make in order to get it working, 745 | I will work to upstream these changes eventually though. 746 | 747 | Feel free to contribute and help me improve things here and I'd be very thankful ;) 748 | 749 | I use the Github tracker for tracking the improvements I want to make to this repository 750 | 751 | ### License 752 | 753 | MIT 754 | -------------------------------------------------------------------------------- /demos/dashboard/ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Ingress 3 | metadata: 4 | name: kubernetes-dashboard 5 | namespace: kube-system 6 | annotations: 7 | traefik.frontend.rule.type: PathPrefixStrip 8 | spec: 9 | rules: 10 | - http: 11 | paths: 12 | - path: /dashboard 13 | backend: 14 | serviceName: kubernetes-dashboard 15 | servicePort: 443 16 | -------------------------------------------------------------------------------- /demos/loadbalancing/traefik-common.yaml: -------------------------------------------------------------------------------- 1 | kind: ClusterRole 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: traefik-ingress-controller 5 | rules: 6 | - apiGroups: 7 | - "" 8 | resources: 9 | - services 10 | - endpoints 11 | - secrets 12 | verbs: 13 | - get 14 | - list 15 | - watch 16 | - apiGroups: 17 | - extensions 18 | resources: 19 | - ingresses 20 | verbs: 21 | - get 22 | - list 23 | - watch 24 | --- 25 | apiVersion: v1 26 | kind: ServiceAccount 27 | metadata: 28 | name: traefik-ingress-controller 29 | namespace: kube-system 30 | --- 31 | kind: ClusterRoleBinding 32 | apiVersion: rbac.authorization.k8s.io/v1 33 | metadata: 34 | name: traefik-ingress-controller 35 | roleRef: 36 | apiGroup: rbac.authorization.k8s.io 37 | kind: ClusterRole 38 | name: traefik-ingress-controller 39 | subjects: 40 | - kind: ServiceAccount 41 | name: traefik-ingress-controller 42 | namespace: kube-system 43 | --- 44 | apiVersion: v1 45 | data: 46 | auth: 
a3ViZXJuZXRlczokYXByMSRVNDlTVllISiQzNnZVelFhQktTNzRtY3lpT0V6MUkuCg== 47 | kind: Secret 48 | metadata: 49 | name: traefik-basic-auth 50 | namespace: kube-system 51 | type: Opaque 52 | --- 53 | apiVersion: v1 54 | data: 55 | auth: a3ViZXJuZXRlczokYXByMSRVNDlTVllISiQzNnZVelFhQktTNzRtY3lpT0V6MUkuCg== 56 | kind: Secret 57 | metadata: 58 | name: traefik-basic-auth 59 | namespace: default 60 | type: Opaque 61 | --- 62 | kind: ConfigMap 63 | apiVersion: v1 64 | metadata: 65 | name: traefik-cfg 66 | namespace: kube-system 67 | labels: 68 | app: traefik 69 | data: 70 | traefik.toml: | 71 | defaultEntryPoints = ["http"] 72 | InsecureSkipVerify = true 73 | [entryPoints] 74 | [entryPoints.http] 75 | address = ":80" 76 | 77 | # Enable the kubernetes integration 78 | [kubernetes] 79 | 80 | [web] 81 | address = ":8080" 82 | 83 | [web.statistics] 84 | 85 | [web.metrics.prometheus] 86 | buckets=[0.1,0.3,1.2,5.0] 87 | 88 | traefik-acme.toml: | 89 | defaultEntryPoints = ["http", "https"] 90 | InsecureSkipVerify = true 91 | [entryPoints] 92 | [entryPoints.http] 93 | address = ":80" 94 | [entryPoints.http.redirect] 95 | entryPoint = "https" 96 | [entryPoints.https] 97 | address = ":443" 98 | [entryPoints.https.tls] 99 | [acme] 100 | email = "example@example.com" 101 | storage = "acme.json" 102 | onHostRule = true 103 | entryPoint = "https" 104 | [acme.httpChallenge] 105 | entryPoint = "http" 106 | 107 | # Enable the kubernetes integration 108 | [kubernetes] 109 | 110 | [web] 111 | address = ":8080" 112 | 113 | [web.statistics] 114 | 115 | [web.metrics.prometheus] 116 | buckets=[0.1,0.3,1.2,5.0] 117 | -------------------------------------------------------------------------------- /demos/loadbalancing/traefik-ngrok.yaml: -------------------------------------------------------------------------------- 1 | kind: Deployment 2 | apiVersion: apps/v1 3 | metadata: 4 | name: traefik-ingress-controller 5 | namespace: kube-system 6 | labels: 7 | app: traefik-ingress-controller 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: traefik-ingress-controller 13 | template: 14 | metadata: 15 | labels: 16 | app: traefik-ingress-controller 17 | spec: 18 | tolerations: 19 | - key: beta.kubernetes.io/arch 20 | value: arm 21 | effect: NoSchedule 22 | - key: beta.kubernetes.io/arch 23 | value: arm64 24 | effect: NoSchedule 25 | serviceAccountName: traefik-ingress-controller 26 | containers: 27 | - image: luxas/traefik:v1.5.4 28 | name: traefik-ingress-controller 29 | resources: 30 | limits: 31 | cpu: 200m 32 | memory: 30Mi 33 | requests: 34 | cpu: 100m 35 | memory: 20Mi 36 | ports: 37 | - name: http 38 | containerPort: 80 39 | - name: web 40 | containerPort: 8080 41 | args: 42 | - --configfile=/etc/traefik/traefik.toml 43 | volumeMounts: 44 | - name: traefik-cfg 45 | mountPath: /etc/traefik/traefik.toml 46 | volumes: 47 | - name: traefik-cfg 48 | configMap: 49 | name: traefik-cfg 50 | --- 51 | apiVersion: v1 52 | kind: Service 53 | metadata: 54 | name: traefik-ingress-controller 55 | labels: 56 | app: traefik-ingress-controller 57 | namespace: kube-system 58 | spec: 59 | ports: 60 | - port: 80 61 | targetPort: 80 62 | selector: 63 | app: traefik-ingress-controller 64 | --- 65 | apiVersion: v1 66 | kind: Service 67 | metadata: 68 | name: traefik-web 69 | labels: 70 | app: traefik-ingress-controller 71 | namespace: kube-system 72 | spec: 73 | ports: 74 | - port: 80 75 | targetPort: 8080 76 | selector: 77 | app: traefik-ingress-controller 78 | --- 79 | kind: ConfigMap 80 | apiVersion: v1 81 | metadata: 82 | name: 
ngrok-cfg 83 | namespace: kube-system 84 | labels: 85 | app: ngrok 86 | data: 87 | ngrok.yaml: | 88 | web_addr: 0.0.0.0:4040 89 | log: stdout 90 | log_level: debug 91 | log_format: logfmt 92 | tunnels: 93 | traefik: 94 | proto: http 95 | addr: traefik-ingress-controller.kube-system:80 96 | --- 97 | kind: Deployment 98 | apiVersion: apps/v1 99 | metadata: 100 | name: ngrok 101 | namespace: kube-system 102 | labels: 103 | app: ngrok 104 | spec: 105 | replicas: 1 106 | selector: 107 | matchLabels: 108 | app: ngrok 109 | template: 110 | metadata: 111 | labels: 112 | app: ngrok 113 | spec: 114 | tolerations: 115 | - key: beta.kubernetes.io/arch 116 | value: arm 117 | effect: NoSchedule 118 | - key: beta.kubernetes.io/arch 119 | value: arm64 120 | effect: NoSchedule 121 | containers: 122 | - image: luxas/ngrok:v2.1.18 123 | name: ngrok 124 | ports: 125 | - name: web 126 | containerPort: 4040 127 | args: 128 | - start 129 | - -config=/etc/ngrok/ngrok.yaml 130 | - traefik 131 | volumeMounts: 132 | - name: ngrok-cfg 133 | mountPath: /etc/ngrok/ 134 | volumes: 135 | - name: ngrok-cfg 136 | configMap: 137 | name: ngrok-cfg 138 | --- 139 | apiVersion: v1 140 | kind: Service 141 | metadata: 142 | name: ngrok 143 | namespace: kube-system 144 | spec: 145 | ports: 146 | - port: 80 147 | # Run this command in order to get the public URL for this ingress controller 148 | # curl -sSL $(kubectl -n kube-system get svc ngrok -o template --template "{{.spec.clusterIP}}")/api/tunnels | jq ".tunnels[].public_url" | sed 's/"//g;/http:/d' 149 | targetPort: 4040 150 | selector: 151 | app: ngrok 152 | -------------------------------------------------------------------------------- /demos/loadbalancing/traefik-publicip.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: traefik-ingress-controller 5 | namespace: kube-system 6 | labels: 7 | app: traefik-ingress-controller 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: traefik-ingress-controller 13 | template: 14 | metadata: 15 | labels: 16 | app: traefik-ingress-controller 17 | spec: 18 | nodeSelector: 19 | ingress-controller: "true" 20 | serviceAccountName: traefik-ingress-controller 21 | containers: 22 | - image: luxas/traefik:v1.5.4 23 | name: traefik-ingress-controller 24 | resources: 25 | limits: 26 | cpu: 200m 27 | memory: 30Mi 28 | requests: 29 | cpu: 100m 30 | memory: 20Mi 31 | ports: 32 | - name: http 33 | containerPort: 80 34 | hostPort: 80 35 | - name: https 36 | containerPort: 443 37 | hostPort: 443 38 | args: 39 | - --configfile=/etc/traefik/traefik-acme.toml 40 | volumeMounts: 41 | - name: traefik-cfg 42 | mountPath: /etc/traefik/ 43 | volumes: 44 | - name: traefik-cfg 45 | configMap: 46 | name: traefik-cfg 47 | -------------------------------------------------------------------------------- /demos/monitoring/custom-metrics.yaml: -------------------------------------------------------------------------------- 1 | kind: Namespace 2 | apiVersion: v1 3 | metadata: 4 | name: custom-metrics 5 | --- 6 | kind: ServiceAccount 7 | apiVersion: v1 8 | metadata: 9 | name: custom-metrics-apiserver 10 | namespace: custom-metrics 11 | --- 12 | apiVersion: rbac.authorization.k8s.io/v1 13 | kind: ClusterRoleBinding 14 | metadata: 15 | name: custom-metrics:system:auth-delegator 16 | roleRef: 17 | apiGroup: rbac.authorization.k8s.io 18 | kind: ClusterRole 19 | name: system:auth-delegator 20 | subjects: 21 | - kind: ServiceAccount 22 | name: 
custom-metrics-apiserver 23 | namespace: custom-metrics 24 | --- 25 | apiVersion: rbac.authorization.k8s.io/v1 26 | kind: RoleBinding 27 | metadata: 28 | name: custom-metrics-auth-reader 29 | namespace: kube-system 30 | roleRef: 31 | apiGroup: rbac.authorization.k8s.io 32 | kind: Role 33 | name: extension-apiserver-authentication-reader 34 | subjects: 35 | - kind: ServiceAccount 36 | name: custom-metrics-apiserver 37 | namespace: custom-metrics 38 | --- 39 | apiVersion: rbac.authorization.k8s.io/v1 40 | kind: ClusterRole 41 | metadata: 42 | name: custom-metrics-resource-reader 43 | rules: 44 | - apiGroups: 45 | - "" 46 | resources: 47 | - namespaces 48 | - pods 49 | - services 50 | verbs: 51 | - get 52 | - list 53 | --- 54 | apiVersion: rbac.authorization.k8s.io/v1 55 | kind: ClusterRoleBinding 56 | metadata: 57 | name: custom-metrics-apiserver-resource-reader 58 | roleRef: 59 | apiGroup: rbac.authorization.k8s.io 60 | kind: ClusterRole 61 | name: custom-metrics-resource-reader 62 | subjects: 63 | - kind: ServiceAccount 64 | name: custom-metrics-apiserver 65 | namespace: custom-metrics 66 | --- 67 | apiVersion: rbac.authorization.k8s.io/v1 68 | kind: ClusterRole 69 | metadata: 70 | name: custom-metrics-getter 71 | rules: 72 | - apiGroups: 73 | - custom.metrics.k8s.io 74 | resources: 75 | - "*" 76 | verbs: 77 | - "*" 78 | --- 79 | apiVersion: rbac.authorization.k8s.io/v1 80 | kind: ClusterRoleBinding 81 | metadata: 82 | name: hpa-custom-metrics-getter 83 | roleRef: 84 | apiGroup: rbac.authorization.k8s.io 85 | kind: ClusterRole 86 | name: custom-metrics-getter 87 | subjects: 88 | - kind: ServiceAccount 89 | name: horizontal-pod-autoscaler 90 | namespace: kube-system 91 | --- 92 | apiVersion: apps/v1 93 | kind: Deployment 94 | metadata: 95 | name: custom-metrics-apiserver 96 | namespace: custom-metrics 97 | labels: 98 | app: custom-metrics-apiserver 99 | spec: 100 | replicas: 1 101 | selector: 102 | matchLabels: 103 | app: custom-metrics-apiserver 104 | template: 105 | metadata: 106 | labels: 107 | app: custom-metrics-apiserver 108 | spec: 109 | tolerations: 110 | - key: beta.kubernetes.io/arch 111 | value: arm 112 | effect: NoSchedule 113 | - key: beta.kubernetes.io/arch 114 | value: arm64 115 | effect: NoSchedule 116 | serviceAccountName: custom-metrics-apiserver 117 | containers: 118 | - name: custom-metrics-server 119 | image: luxas/k8s-prometheus-adapter:v0.2.0-beta.0 120 | args: 121 | - --prometheus-url=http://sample-metrics-prom.default.svc:9090 122 | - --metrics-relist-interval=30s 123 | - --rate-interval=60s 124 | - --v=10 125 | - --logtostderr=true 126 | ports: 127 | - containerPort: 443 128 | securityContext: 129 | runAsUser: 0 130 | --- 131 | apiVersion: v1 132 | kind: Service 133 | metadata: 134 | name: api 135 | namespace: custom-metrics 136 | spec: 137 | ports: 138 | - port: 443 139 | targetPort: 443 140 | selector: 141 | app: custom-metrics-apiserver 142 | --- 143 | apiVersion: apiregistration.k8s.io/v1 144 | kind: APIService 145 | metadata: 146 | name: v1beta1.custom.metrics.k8s.io 147 | spec: 148 | insecureSkipTLSVerify: true 149 | group: custom.metrics.k8s.io 150 | groupPriorityMinimum: 1000 151 | versionPriority: 5 152 | service: 153 | name: api 154 | namespace: custom-metrics 155 | version: v1beta1 156 | --- 157 | apiVersion: rbac.authorization.k8s.io/v1 158 | kind: ClusterRole 159 | metadata: 160 | name: custom-metrics-server-resources 161 | rules: 162 | - apiGroups: 163 | - custom-metrics.metrics.k8s.io 164 | resources: ["*"] 165 | verbs: ["*"] 166 | --- 167 | 
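# The ClusterRoleBinding below gives the controller manager's horizontal-pod-autoscaler
# ServiceAccount access to the custom metrics resources granted by the ClusterRole above.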
apiVersion: rbac.authorization.k8s.io/v1 168 | kind: ClusterRoleBinding 169 | metadata: 170 | name: hpa-controller-custom-metrics 171 | roleRef: 172 | apiGroup: rbac.authorization.k8s.io 173 | kind: ClusterRole 174 | name: custom-metrics-server-resources 175 | subjects: 176 | - kind: ServiceAccount 177 | name: horizontal-pod-autoscaler 178 | namespace: kube-system 179 | -------------------------------------------------------------------------------- /demos/monitoring/influx-grafana.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: grafana-pv-claim 5 | namespace: kube-system 6 | labels: 7 | k8s-app: grafana 8 | task: monitoring 9 | spec: 10 | accessModes: 11 | - ReadWriteOnce 12 | resources: 13 | requests: 14 | storage: 1Gi 15 | storageClassName: rook-block 16 | --- 17 | apiVersion: v1 18 | kind: PersistentVolumeClaim 19 | metadata: 20 | name: influxdb-pv-claim 21 | namespace: kube-system 22 | labels: 23 | k8s-app: influxdb 24 | task: monitoring 25 | spec: 26 | accessModes: 27 | - ReadWriteOnce 28 | resources: 29 | requests: 30 | storage: 5Gi 31 | storageClassName: rook-block 32 | --- 33 | apiVersion: apps/v1 34 | kind: Deployment 35 | metadata: 36 | name: monitoring-grafana 37 | labels: 38 | k8s-app: grafana 39 | task: monitoring 40 | namespace: kube-system 41 | spec: 42 | replicas: 1 43 | selector: 44 | matchLabels: 45 | k8s-app: grafana 46 | task: monitoring 47 | template: 48 | metadata: 49 | labels: 50 | k8s-app: grafana 51 | task: monitoring 52 | spec: 53 | tolerations: 54 | - key: beta.kubernetes.io/arch 55 | value: arm 56 | effect: NoSchedule 57 | - key: beta.kubernetes.io/arch 58 | value: arm64 59 | effect: NoSchedule 60 | containers: 61 | - name: grafana 62 | image: luxas/heapster-grafana:v4.4.3 63 | ports: 64 | - containerPort: 3000 65 | protocol: TCP 66 | env: 67 | - name: INFLUXDB_HOST 68 | value: monitoring-influxdb 69 | - name: GRAFANA_PORT 70 | value: "3000" 71 | # The following env variables are required to make Grafana accessible via 72 | # the kubernetes api-server proxy. On production clusters, we recommend 73 | # removing these env variables, setup auth for grafana, and expose the grafana 74 | # service using a LoadBalancer or a public IP. 
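# In this demo, the settings below simply disable basic auth and let anonymous users in with the Admin role.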
75 | - name: GF_AUTH_BASIC_ENABLED 76 | value: "false" 77 | - name: GF_AUTH_ANONYMOUS_ENABLED 78 | value: "true" 79 | - name: GF_AUTH_ANONYMOUS_ORG_ROLE 80 | value: Admin 81 | - name: GF_SERVER_ROOT_URL 82 | value: /grafana 83 | volumeMounts: 84 | - mountPath: /var/lib/grafana 85 | name: grafana-storage 86 | volumes: 87 | - name: grafana-storage 88 | persistentVolumeClaim: 89 | claimName: grafana-pv-claim 90 | --- 91 | apiVersion: v1 92 | kind: Service 93 | metadata: 94 | labels: 95 | k8s-app: grafana 96 | task: monitoring 97 | kubernetes.io/cluster-service: "true" 98 | kubernetes.io/name: monitoring-grafana 99 | name: monitoring-grafana 100 | namespace: kube-system 101 | spec: 102 | ports: 103 | - port: 80 104 | targetPort: 3000 105 | selector: 106 | k8s-app: grafana 107 | --- 108 | apiVersion: apps/v1 109 | kind: Deployment 110 | metadata: 111 | name: monitoring-influxdb 112 | labels: 113 | task: monitoring 114 | k8s-app: influxdb 115 | namespace: kube-system 116 | spec: 117 | replicas: 1 118 | selector: 119 | matchLabels: 120 | task: monitoring 121 | k8s-app: influxdb 122 | template: 123 | metadata: 124 | labels: 125 | task: monitoring 126 | k8s-app: influxdb 127 | spec: 128 | tolerations: 129 | - key: beta.kubernetes.io/arch 130 | value: arm 131 | effect: NoSchedule 132 | - key: beta.kubernetes.io/arch 133 | value: arm64 134 | effect: NoSchedule 135 | containers: 136 | - name: influxdb 137 | image: luxas/heapster-influxdb:v1.3.3 138 | volumeMounts: 139 | - mountPath: /data 140 | name: influxdb-storage 141 | volumes: 142 | - name: influxdb-storage 143 | persistentVolumeClaim: 144 | claimName: influxdb-pv-claim 145 | --- 146 | apiVersion: v1 147 | kind: Service 148 | metadata: 149 | labels: 150 | k8s-app: influxdb 151 | task: monitoring 152 | kubernetes.io/cluster-service: "true" 153 | kubernetes.io/name: monitoring-influxdb 154 | name: monitoring-influxdb 155 | namespace: kube-system 156 | spec: 157 | ports: 158 | - port: 8086 159 | targetPort: 8086 160 | selector: 161 | k8s-app: influxdb 162 | --- 163 | apiVersion: extensions/v1beta1 164 | kind: Ingress 165 | metadata: 166 | name: monitoring-grafana 167 | namespace: kube-system 168 | annotations: 169 | traefik.frontend.rule.type: PathPrefixStrip 170 | ingress.kubernetes.io/auth-type: "basic" 171 | ingress.kubernetes.io/auth-secret: "traefik-basic-auth" 172 | spec: 173 | rules: 174 | - http: 175 | paths: 176 | - path: /grafana 177 | backend: 178 | serviceName: monitoring-grafana 179 | servicePort: 80 180 | -------------------------------------------------------------------------------- /demos/monitoring/metrics-server.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: metrics-server 5 | namespace: kube-system 6 | --- 7 | apiVersion: rbac.authorization.k8s.io/v1 8 | kind: ClusterRoleBinding 9 | metadata: 10 | name: metrics-server:system:auth-delegator 11 | roleRef: 12 | apiGroup: rbac.authorization.k8s.io 13 | kind: ClusterRole 14 | name: system:auth-delegator 15 | subjects: 16 | - kind: ServiceAccount 17 | name: metrics-server 18 | namespace: kube-system 19 | --- 20 | apiVersion: rbac.authorization.k8s.io/v1 21 | kind: RoleBinding 22 | metadata: 23 | name: metrics-server-auth-reader 24 | namespace: kube-system 25 | roleRef: 26 | apiGroup: rbac.authorization.k8s.io 27 | kind: Role 28 | name: extension-apiserver-authentication-reader 29 | subjects: 30 | - kind: ServiceAccount 31 | name: metrics-server 32 | namespace: kube-system 33 | 
--- 34 | apiVersion: apps/v1 35 | kind: Deployment 36 | metadata: 37 | name: metrics-server 38 | namespace: kube-system 39 | labels: 40 | k8s-app: metrics-server 41 | spec: 42 | selector: 43 | matchLabels: 44 | k8s-app: metrics-server 45 | template: 46 | metadata: 47 | name: metrics-server 48 | labels: 49 | k8s-app: metrics-server 50 | annotations: 51 | scheduler.alpha.kubernetes.io/critical-pod: '' 52 | spec: 53 | serviceAccountName: metrics-server 54 | containers: 55 | - name: metrics-server 56 | image: gcr.io/google_containers/metrics-server-amd64:v0.2.1 57 | imagePullPolicy: Always 58 | resources: 59 | requests: 60 | memory: 100Mi 61 | command: 62 | - /metrics-server 63 | - --source=kubernetes.summary_api:'' 64 | ports: 65 | - containerPort: 443 66 | name: https 67 | protocol: TCP 68 | tolerations: 69 | - key: "CriticalAddonsOnly" 70 | operator: "Exists" 71 | --- 72 | apiVersion: v1 73 | kind: Service 74 | metadata: 75 | name: metrics-server 76 | namespace: kube-system 77 | labels: 78 | kubernetes.io/name: "Metrics-server" 79 | spec: 80 | selector: 81 | k8s-app: metrics-server 82 | ports: 83 | - port: 443 84 | protocol: TCP 85 | targetPort: https 86 | --- 87 | apiVersion: apiregistration.k8s.io/v1 88 | kind: APIService 89 | metadata: 90 | name: v1beta1.metrics.k8s.io 91 | spec: 92 | service: 93 | name: metrics-server 94 | namespace: kube-system 95 | group: metrics.k8s.io 96 | version: v1beta1 97 | insecureSkipTLSVerify: true 98 | groupPriorityMinimum: 100 99 | versionPriority: 100 100 | --- 101 | apiVersion: rbac.authorization.k8s.io/v1 102 | kind: ClusterRole 103 | metadata: 104 | name: system:metrics-server 105 | rules: 106 | - apiGroups: 107 | - "" 108 | resources: 109 | - pods 110 | - nodes 111 | - namespaces 112 | verbs: 113 | - get 114 | - list 115 | - watch 116 | - apiGroups: 117 | - "extensions" 118 | resources: 119 | - deployments 120 | verbs: 121 | - get 122 | - list 123 | - watch 124 | --- 125 | apiVersion: rbac.authorization.k8s.io/v1 126 | kind: ClusterRoleBinding 127 | metadata: 128 | name: system:metrics-server 129 | roleRef: 130 | apiGroup: rbac.authorization.k8s.io 131 | kind: ClusterRole 132 | name: system:metrics-server 133 | subjects: 134 | - kind: ServiceAccount 135 | name: metrics-server 136 | namespace: kube-system 137 | -------------------------------------------------------------------------------- /demos/monitoring/prometheus-operator.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: prometheus-operator 5 | rules: 6 | - apiGroups: 7 | - extensions 8 | resources: 9 | - thirdpartyresources 10 | verbs: 11 | - create 12 | - apiGroups: 13 | - apiextensions.k8s.io 14 | resources: 15 | - customresourcedefinitions 16 | verbs: 17 | - "*" 18 | - apiGroups: 19 | - monitoring.coreos.com 20 | resources: 21 | - alertmanagers 22 | - prometheuses 23 | - servicemonitors 24 | verbs: 25 | - "*" 26 | - apiGroups: 27 | - apps 28 | resources: 29 | - statefulsets 30 | verbs: ["*"] 31 | - apiGroups: [""] 32 | resources: 33 | - configmaps 34 | - secrets 35 | verbs: ["*"] 36 | - apiGroups: [""] 37 | resources: 38 | - pods 39 | verbs: ["list", "delete"] 40 | - apiGroups: [""] 41 | resources: 42 | - services 43 | - endpoints 44 | verbs: ["get", "create", "update"] 45 | - apiGroups: [""] 46 | resources: 47 | - nodes 48 | verbs: ["list", "watch"] 49 | - apiGroups: [""] 50 | resources: 51 | - namespaces 52 | verbs: ["list"] 53 | --- 54 | apiVersion: v1 55 | kind: 
ServiceAccount 56 | metadata: 57 | name: prometheus-operator 58 | --- 59 | apiVersion: rbac.authorization.k8s.io/v1 60 | kind: ClusterRoleBinding 61 | metadata: 62 | name: prometheus-operator 63 | roleRef: 64 | apiGroup: rbac.authorization.k8s.io 65 | kind: ClusterRole 66 | name: prometheus-operator 67 | subjects: 68 | - kind: ServiceAccount 69 | name: prometheus-operator 70 | namespace: default 71 | --- 72 | apiVersion: apps/v1 73 | kind: Deployment 74 | metadata: 75 | name: prometheus-operator 76 | labels: 77 | operator: prometheus 78 | spec: 79 | replicas: 1 80 | selector: 81 | matchLabels: 82 | operator: prometheus 83 | template: 84 | metadata: 85 | labels: 86 | operator: prometheus 87 | spec: 88 | serviceAccountName: prometheus-operator 89 | containers: 90 | - name: prometheus-operator 91 | image: luxas/prometheus-operator:v0.17.0 92 | resources: 93 | requests: 94 | cpu: 100m 95 | memory: 50Mi 96 | limits: 97 | cpu: 200m 98 | memory: 100Mi 99 | -------------------------------------------------------------------------------- /demos/monitoring/sample-metrics-app.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | labels: 5 | app: sample-metrics-app 6 | name: sample-metrics-app 7 | spec: 8 | replicas: 2 9 | selector: 10 | matchLabels: 11 | app: sample-metrics-app 12 | template: 13 | metadata: 14 | labels: 15 | app: sample-metrics-app 16 | spec: 17 | tolerations: 18 | - key: beta.kubernetes.io/arch 19 | value: arm 20 | effect: NoSchedule 21 | - key: beta.kubernetes.io/arch 22 | value: arm64 23 | effect: NoSchedule 24 | - key: node.alpha.kubernetes.io/unreachable 25 | operator: Exists 26 | effect: NoExecute 27 | tolerationSeconds: 0 28 | - key: node.alpha.kubernetes.io/notReady 29 | operator: Exists 30 | effect: NoExecute 31 | tolerationSeconds: 0 32 | containers: 33 | - image: luxas/autoscale-demo:v0.1.2 34 | name: sample-metrics-app 35 | ports: 36 | - name: web 37 | containerPort: 8080 38 | readinessProbe: 39 | httpGet: 40 | path: / 41 | port: 8080 42 | initialDelaySeconds: 3 43 | periodSeconds: 5 44 | livenessProbe: 45 | httpGet: 46 | path: / 47 | port: 8080 48 | initialDelaySeconds: 3 49 | periodSeconds: 5 50 | --- 51 | apiVersion: v1 52 | kind: Service 53 | metadata: 54 | name: sample-metrics-app 55 | labels: 56 | app: sample-metrics-app 57 | spec: 58 | ports: 59 | - name: web 60 | port: 80 61 | targetPort: 8080 62 | selector: 63 | app: sample-metrics-app 64 | --- 65 | apiVersion: monitoring.coreos.com/v1 66 | kind: ServiceMonitor 67 | metadata: 68 | name: sample-metrics-app 69 | labels: 70 | service-monitor: sample-metrics-app 71 | spec: 72 | selector: 73 | matchLabels: 74 | app: sample-metrics-app 75 | endpoints: 76 | - port: web 77 | --- 78 | kind: HorizontalPodAutoscaler 79 | apiVersion: autoscaling/v2beta1 80 | metadata: 81 | name: sample-metrics-app-hpa 82 | spec: 83 | scaleTargetRef: 84 | apiVersion: apps/v1 85 | kind: Deployment 86 | name: sample-metrics-app 87 | minReplicas: 2 88 | maxReplicas: 10 89 | metrics: 90 | - type: Object 91 | object: 92 | target: 93 | kind: Service 94 | name: sample-metrics-app 95 | metricName: http_requests 96 | targetValue: 100 97 | --- 98 | apiVersion: extensions/v1beta1 99 | kind: Ingress 100 | metadata: 101 | name: sample-metrics-app 102 | namespace: default 103 | annotations: 104 | traefik.frontend.rule.type: PathPrefixStrip 105 | spec: 106 | rules: 107 | - http: 108 | paths: 109 | - path: /sample-app 110 | backend: 111 | serviceName: 
sample-metrics-app 112 | servicePort: 80 113 | -------------------------------------------------------------------------------- /demos/monitoring/sample-prometheus-instance.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: prometheus 5 | rules: 6 | - apiGroups: 7 | - "" 8 | resources: 9 | - nodes 10 | - services 11 | - endpoints 12 | - pods 13 | verbs: 14 | - get 15 | - list 16 | - watch 17 | --- 18 | apiVersion: v1 19 | kind: ServiceAccount 20 | metadata: 21 | name: prometheus 22 | --- 23 | apiVersion: rbac.authorization.k8s.io/v1 24 | kind: ClusterRoleBinding 25 | metadata: 26 | name: prometheus 27 | roleRef: 28 | apiGroup: rbac.authorization.k8s.io 29 | kind: ClusterRole 30 | name: prometheus 31 | subjects: 32 | - kind: ServiceAccount 33 | name: prometheus 34 | namespace: default 35 | --- 36 | apiVersion: monitoring.coreos.com/v1 37 | kind: Prometheus 38 | metadata: 39 | name: sample-metrics-prom 40 | labels: 41 | app: sample-metrics-prom 42 | prometheus: sample-metrics-prom 43 | spec: 44 | replicas: 1 45 | baseImage: luxas/prometheus 46 | version: v2.2.1 47 | serviceAccountName: prometheus 48 | serviceMonitorSelector: 49 | matchLabels: 50 | service-monitor: sample-metrics-app 51 | resources: 52 | requests: 53 | memory: 300Mi 54 | retention: 7d 55 | storage: 56 | class: "" 57 | selector: {} 58 | resources: {} 59 | volumeClaimTemplate: 60 | spec: 61 | resources: 62 | requests: 63 | storage: 3Gi 64 | --- 65 | apiVersion: v1 66 | kind: Service 67 | metadata: 68 | name: sample-metrics-prom 69 | labels: 70 | app: sample-metrics-prom 71 | prometheus: sample-metrics-prom 72 | spec: 73 | type: NodePort 74 | ports: 75 | - name: web 76 | nodePort: 30999 77 | port: 9090 78 | targetPort: web 79 | selector: 80 | prometheus: sample-metrics-prom 81 | -------------------------------------------------------------------------------- /demos/sample-apiserver/my-flunder.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: wardle.k8s.io/v1alpha1 2 | kind: Flunder 3 | metadata: 4 | name: my-first-flunder 5 | labels: 6 | sample-label: "true" 7 | -------------------------------------------------------------------------------- /demos/sample-apiserver/wardle.yaml: -------------------------------------------------------------------------------- 1 | kind: Namespace 2 | apiVersion: v1 3 | metadata: 4 | name: wardle 5 | --- 6 | apiVersion: v1 7 | kind: PersistentVolumeClaim 8 | metadata: 9 | name: etcd-pv-claim 10 | namespace: wardle 11 | labels: 12 | app: wardle-apiserver 13 | spec: 14 | accessModes: 15 | - ReadWriteOnce 16 | resources: 17 | requests: 18 | storage: 1Gi 19 | storageClassName: rook-block 20 | --- 21 | kind: ServiceAccount 22 | apiVersion: v1 23 | metadata: 24 | name: apiserver 25 | namespace: wardle 26 | --- 27 | apiVersion: rbac.authorization.k8s.io/v1 28 | kind: ClusterRoleBinding 29 | metadata: 30 | name: wardle:system:auth-delegator 31 | roleRef: 32 | apiGroup: rbac.authorization.k8s.io 33 | kind: ClusterRole 34 | name: system:auth-delegator 35 | subjects: 36 | - kind: ServiceAccount 37 | name: apiserver 38 | namespace: wardle 39 | --- 40 | apiVersion: rbac.authorization.k8s.io/v1 41 | kind: RoleBinding 42 | metadata: 43 | name: wardle-auth-reader 44 | namespace: kube-system 45 | roleRef: 46 | apiGroup: rbac.authorization.k8s.io 47 | kind: Role 48 | name: extension-apiserver-authentication-reader 49 | subjects: 50 | - 
kind: ServiceAccount 51 | name: apiserver 52 | namespace: wardle 53 | --- 54 | apiVersion: apps/v1 55 | kind: Deployment 56 | metadata: 57 | name: wardle-apiserver 58 | namespace: wardle 59 | labels: 60 | app: wardle-apiserver 61 | spec: 62 | replicas: 1 63 | selector: 64 | matchLabels: 65 | app: wardle-apiserver 66 | template: 67 | metadata: 68 | labels: 69 | app: wardle-apiserver 70 | spec: 71 | tolerations: 72 | - key: beta.kubernetes.io/arch 73 | value: arm 74 | effect: NoSchedule 75 | - key: beta.kubernetes.io/arch 76 | value: arm64 77 | effect: NoSchedule 78 | serviceAccountName: apiserver 79 | containers: 80 | - name: wardle-server 81 | image: luxas/sample-apiserver:v0.3.0 82 | args: 83 | - --etcd-servers=http://localhost:2379 84 | - name: etcd 85 | image: luxas/etcd:3.0.17 86 | args: 87 | - etcd 88 | - --data-dir=/var/lib/etcd 89 | volumeMounts: 90 | - mountPath: /var/lib/etcd 91 | name: etcd-storage 92 | volumes: 93 | - name: etcd-storage 94 | persistentVolumeClaim: 95 | claimName: etcd-pv-claim 96 | --- 97 | apiVersion: v1 98 | kind: Service 99 | metadata: 100 | name: api 101 | namespace: wardle 102 | spec: 103 | ports: 104 | - port: 443 105 | targetPort: 443 106 | selector: 107 | app: wardle-apiserver 108 | --- 109 | apiVersion: apiregistration.k8s.io/v1 110 | kind: APIService 111 | metadata: 112 | name: v1alpha1.wardle.k8s.io 113 | spec: 114 | insecureSkipTLSVerify: true 115 | group: wardle.k8s.io 116 | groupPriorityMinimum: 1000 117 | versionPriority: 5 118 | service: 119 | name: api 120 | namespace: wardle 121 | version: v1alpha1 122 | -------------------------------------------------------------------------------- /demos/sample-webservice/nginx.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | labels: 5 | run: my-nginx 6 | name: my-nginx 7 | spec: 8 | replicas: 2 9 | selector: 10 | matchLabels: 11 | run: my-nginx 12 | template: 13 | metadata: 14 | labels: 15 | run: my-nginx 16 | spec: 17 | tolerations: 18 | - key: beta.kubernetes.io/arch 19 | value: arm 20 | effect: NoSchedule 21 | - key: beta.kubernetes.io/arch 22 | value: arm64 23 | effect: NoSchedule 24 | containers: 25 | - image: luxas/nginx:1.11.10 26 | name: my-nginx 27 | ports: 28 | - containerPort: 80 29 | --- 30 | apiVersion: v1 31 | kind: Service 32 | metadata: 33 | name: my-nginx 34 | spec: 35 | ports: 36 | - port: 80 37 | targetPort: 80 38 | selector: 39 | run: my-nginx 40 | -------------------------------------------------------------------------------- /demos/storage/hostpath/kubeadm.yaml: -------------------------------------------------------------------------------- 1 | kind: MasterConfiguration 2 | apiVersion: kubeadm.k8s.io/v1alpha1 3 | controllerManagerExtraArgs: 4 | enable-hostpath-provisioner: "true" 5 | -------------------------------------------------------------------------------- /demos/storage/hostpath/storageclass.yaml: -------------------------------------------------------------------------------- 1 | kind: StorageClass 2 | apiVersion: storage.k8s.io/v1beta1 3 | metadata: 4 | name: hostpath 5 | annotations: 6 | storageclass.beta.kubernetes.io/is-default-class: "true" 7 | provisioner: kubernetes.io/host-path 8 | -------------------------------------------------------------------------------- /images/autoscaling/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM BASEIMAGE 2 | COPY server.js /server.js 3 | CMD ["node", "/server.js"] 4 | 
-------------------------------------------------------------------------------- /images/autoscaling/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=autoscale-demo 3 | TEMP_DIR:=$(shell mktemp -d) 4 | ARCH?=amd64 5 | ALL_ARCH=amd64 arm arm64 6 | ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64 7 | 8 | VERSION?=v0.1.2 9 | BASEIMAGE=luxas/node-$(ARCH):latest 10 | 11 | build: 12 | cp Dockerfile *.js $(TEMP_DIR) 13 | cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile 14 | docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR) 15 | 16 | push-%: 17 | $(MAKE) ARCH=$* build 18 | docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION) 19 | 20 | push: ./manifest-tool $(addprefix push-,$(ALL_ARCH)) 21 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION) 22 | 23 | ./manifest-tool: 24 | curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.7.0/manifest-tool-linux-amd64 > manifest-tool 25 | chmod +x manifest-tool 26 | -------------------------------------------------------------------------------- /images/autoscaling/server.js: -------------------------------------------------------------------------------- 1 | var http = require('http'); 2 | var os = require('os'); 3 | 4 | var totalrequests = 0; 5 | 6 | http.createServer(function(request, response) { 7 | totalrequests += 1 8 | 9 | response.writeHead(200); 10 | 11 | if (request.url == "/metrics") { 12 | response.end("# HELP http_requests_total The amount of requests served by the server in total\n# TYPE http_requests_total counter\nhttp_requests_total " + totalrequests + "\n"); 13 | return; 14 | } 15 | response.end("Hello! My name is " + os.hostname() + ". 
I have served "+ totalrequests + " requests so far.\n"); 16 | }).listen(8080) 17 | -------------------------------------------------------------------------------- /images/k8s-prometheus-adapter/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=k8s-prometheus-adapter 3 | VERSION?=v0.2.0 4 | TEMP_DIR:=$(shell mktemp -d) 5 | 6 | all: build 7 | download: 8 | git clone https://github.com/DirectXMan12/k8s-prometheus-adapter $(TEMP_DIR) -b $(VERSION) 9 | 10 | build: download 11 | make -C $(TEMP_DIR) build REGISTRY=$(REGISTRY) VERSION=$(VERSION) IMAGE=$(IMAGE) VENDOR_DOCKERIZED=1 12 | 13 | push: download 14 | make -C $(TEMP_DIR) push REGISTRY=$(REGISTRY) VERSION=$(VERSION) IMAGE=$(IMAGE) VENDOR_DOCKERIZED=1 15 | -------------------------------------------------------------------------------- /images/nginx/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=nginx 3 | TEMP_DIR:=$(shell mktemp -d) 4 | ARCH?=amd64 5 | ALL_ARCH=amd64 arm arm64 6 | ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64 7 | 8 | ifeq ($(ARCH),arm) 9 | BASEIMAGE?=arm32v6\\/alpine:3.6 10 | QEMUARCH=arm 11 | endif 12 | ifeq ($(ARCH),arm64) 13 | BASEIMAGE?=arm64v8\\/alpine:3.6 14 | QEMUARCH=arm64v8 15 | endif 16 | 17 | VERSION?=$(shell curl -sSL https://raw.githubusercontent.com/nginxinc/docker-nginx/master/mainline/alpine/Dockerfile | grep "ENV NGINX_VERSION" | awk '{print $$3}') 18 | 19 | all: build 20 | build: 21 | 22 | ifeq ($(ARCH),amd64) 23 | docker pull nginx:alpine 24 | docker tag nginx:alpine $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) 25 | else 26 | curl -sSL https://raw.githubusercontent.com/nginxinc/docker-nginx/master/mainline/alpine/Dockerfile > $(TEMP_DIR)/Dockerfile 27 | curl -sSL https://raw.githubusercontent.com/nginxinc/docker-nginx/master/mainline/alpine/nginx.conf > $(TEMP_DIR)/nginx.conf 28 | curl -sSL https://raw.githubusercontent.com/nginxinc/docker-nginx/master/mainline/alpine/nginx.vh.default.conf > $(TEMP_DIR)/nginx.vh.default.conf 29 | cd $(TEMP_DIR) && sed -e "s/alpine:[0-9.]*/$(BASEIMAGE)\nCOPY qemu-$(QEMUARCH)-static \\/usr\\/bin\\//" -i Dockerfile 30 | 31 | # Register /usr/bin/qemu-ARCH-static as the handler for ARM binaries in the kernel 32 | docker run --rm --privileged multiarch/qemu-user-static:register --reset 33 | curl -sSL --retry 5 https://github.com/multiarch/qemu-user-static/releases/download/v2.7.0/x86_64_qemu-${QEMUARCH}-static.tar.gz | tar -xz -C ${TEMP_DIR} 34 | 35 | docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR) 36 | endif 37 | 38 | rm -r $(TEMP_DIR) 39 | 40 | push-%: 41 | $(MAKE) ARCH=$* build 42 | docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION) 43 | docker tag $(REGISTRY)/$(IMAGE)-$*:$(VERSION) $(REGISTRY)/$(IMAGE)-$*:latest 44 | docker push $(REGISTRY)/$(IMAGE)-$*:latest 45 | 46 | push: ./manifest-tool $(addprefix push-,$(ALL_ARCH)) 47 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION) 48 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:latest --target $(REGISTRY)/$(IMAGE):latest 49 | 50 | ./manifest-tool: 51 | curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.7.0/manifest-tool-linux-amd64 > manifest-tool 52 | chmod +x manifest-tool 53 | -------------------------------------------------------------------------------- /images/ngrok/Dockerfile: 
-------------------------------------------------------------------------------- 1 | FROM scratch 2 | COPY ngrok / 3 | ENTRYPOINT ["/ngrok"] 4 | -------------------------------------------------------------------------------- /images/ngrok/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=ngrok 3 | TEMP_DIR:=$(shell mktemp -d) 4 | ARCH?=amd64 5 | ALL_ARCH=amd64 arm arm64 6 | ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64 7 | 8 | VERSION?=v2.1.18 9 | VERSION_HASH?=4VmDzA7iaHb 10 | URL?=https://bin.equinox.io/c/$(VERSION_HASH)/ngrok-stable-linux-$(ARCH).zip 11 | 12 | all: build 13 | build: 14 | curl -sSL $(URL) > $(TEMP_DIR)/ngrok.zip 15 | cd $(TEMP_DIR) && unzip ngrok.zip 16 | cp Dockerfile $(TEMP_DIR) 17 | 18 | docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR) 19 | rm -r $(TEMP_DIR) 20 | 21 | push-%: 22 | $(MAKE) ARCH=$* build 23 | docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION) 24 | 25 | push: ./manifest-tool $(addprefix push-,$(ALL_ARCH)) 26 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION) 27 | 28 | ./manifest-tool: 29 | curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.7.0/manifest-tool-linux-amd64 > manifest-tool 30 | chmod +x manifest-tool 31 | -------------------------------------------------------------------------------- /images/nodejs/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=node 3 | TEMP_DIR:=$(shell mktemp -d) 4 | ARCH?=amd64 5 | ALL_ARCH=amd64 arm arm64 6 | ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64 7 | VERSION?=$(shell curl -sSL https://raw.githubusercontent.com/nodejs/docker-node/master/7.7/alpine/Dockerfile | grep "ENV NODE_VERSION" | awk '{print $$3}') 8 | 9 | ifeq ($(ARCH),arm) 10 | BASEIMAGE?=arm32v6\\/alpine:3.6 11 | QEMUARCH=arm 12 | endif 13 | ifeq ($(ARCH),arm64) 14 | BASEIMAGE?=arm64v8\\/alpine:3.6 15 | QEMUARCH=arm64v8 16 | endif 17 | 18 | all: build 19 | build: 20 | 21 | ifeq ($(ARCH),amd64) 22 | docker pull node:alpine 23 | docker tag node:alpine $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) 24 | else 25 | curl -sSL https://raw.githubusercontent.com/nodejs/docker-node/master/7.7/alpine/Dockerfile > $(TEMP_DIR)/Dockerfile 26 | cd $(TEMP_DIR) && sed -e "s/alpine:[0-9.]*/$(BASEIMAGE)\nCOPY qemu-$(QEMUARCH)-static \\/usr\\/bin\\//" -i Dockerfile 27 | 28 | # Register /usr/bin/qemu-ARCH-static as the handler for ARM binaries in the kernel 29 | docker run --rm --privileged multiarch/qemu-user-static:register --reset 30 | curl -sSL --retry 5 https://github.com/multiarch/qemu-user-static/releases/download/v2.7.0/x86_64_qemu-${QEMUARCH}-static.tar.gz | tar -xz -C ${TEMP_DIR} 31 | 32 | docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR) 33 | endif 34 | 35 | rm -r $(TEMP_DIR) 36 | 37 | push-%: 38 | $(MAKE) ARCH=$* build 39 | docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION) 40 | docker tag $(REGISTRY)/$(IMAGE)-$*:$(VERSION) $(REGISTRY)/$(IMAGE)-$*:latest 41 | docker push $(REGISTRY)/$(IMAGE)-$*:latest 42 | 43 | push: ./manifest-tool $(addprefix push-,$(ALL_ARCH)) 44 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION) 45 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:latest --target $(REGISTRY)/$(IMAGE):latest 46 | 47 | 
./manifest-tool: 48 | curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.7.0/manifest-tool-linux-amd64 > manifest-tool 49 | chmod +x manifest-tool 50 | -------------------------------------------------------------------------------- /images/prometheus-operator/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM BASEIMAGE 2 | COPY operator /bin/ 3 | ENTRYPOINT ["/bin/operator"] 4 | -------------------------------------------------------------------------------- /images/prometheus-operator/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=prometheus-operator 3 | TEMP_DIR:=$(shell mktemp -d) 4 | ARCH?=amd64 5 | ALL_ARCH=amd64 arm arm64 6 | ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64 7 | 8 | VERSION?=v0.17.0 9 | 10 | ifeq ($(ARCH),amd64) 11 | BASEIMAGE?=busybox 12 | endif 13 | ifeq ($(ARCH),arm) 14 | BASEIMAGE?=arm32v7/busybox 15 | endif 16 | ifeq ($(ARCH),arm64) 17 | BASEIMAGE?=arm64v8/busybox 18 | endif 19 | 20 | all: build 21 | build: 22 | cp Dockerfile $(TEMP_DIR) 23 | cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile 24 | 25 | docker run -it -v $(TEMP_DIR):/build -e GOARCH=$(ARCH) golang:1.10 /bin/bash -c "\ 26 | git clone --branch $(VERSION) https://github.com/coreos/prometheus-operator /go/src/github.com/coreos/prometheus-operator && \ 27 | CGO_ENABLED=0 go build -a -tags netgo -o /build/operator github.com/coreos/prometheus-operator/cmd/operator" 28 | 29 | docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR) 30 | rm -r $(TEMP_DIR) 31 | 32 | push-%: 33 | $(MAKE) ARCH=$* build 34 | docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION) 35 | 36 | push: ./manifest-tool $(addprefix push-,$(ALL_ARCH)) 37 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION) 38 | 39 | ./manifest-tool: 40 | curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.7.0/manifest-tool-linux-amd64 > manifest-tool 41 | chmod +x manifest-tool 42 | -------------------------------------------------------------------------------- /images/prometheus/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM BASEIMAGE 2 | 3 | COPY prometheus promtool /bin/ 4 | COPY prometheus.yml /etc/prometheus/prometheus.yml 5 | COPY console_libraries/ /usr/share/prometheus/console_libraries/ 6 | COPY consoles/ /usr/share/prometheus/consoles/ 7 | COPY console_libraries/ /etc/prometheus/console_libraries/ 8 | COPY consoles/ /etc/prometheus/consoles/ 9 | 10 | ENTRYPOINT ["/bin/prometheus"] 11 | -------------------------------------------------------------------------------- /images/prometheus/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=prometheus 3 | TEMP_DIR:=$(shell mktemp -d) 4 | ARCH?=amd64 5 | ALL_ARCH=amd64 arm arm64 6 | ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64 7 | 8 | PROMARCH=$(ARCH) 9 | ifeq ($(ARCH),amd64) 10 | BASEIMAGE=busybox 11 | endif 12 | ifeq ($(ARCH),arm) 13 | PROMARCH=armv7 14 | BASEIMAGE=arm32v7/busybox 15 | endif 16 | ifeq ($(ARCH),arm64) 17 | BASEIMAGE=arm64v8/busybox 18 | endif 19 | 20 | VERSION_SEMVER=2.2.1 21 | VERSION?=v$(VERSION_SEMVER) 22 | URL?=https://github.com/prometheus/prometheus/releases/download/$(VERSION)/prometheus-$(VERSION_SEMVER).linux-$(PROMARCH).tar.gz 23 | 24 | all: build 25 | build: 26 | curl -sSL 
$(URL) | tar -xz -C $(TEMP_DIR) --strip-component=1 27 | cp Dockerfile $(TEMP_DIR) 28 | cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile 29 | 30 | docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR) 31 | rm -r $(TEMP_DIR) 32 | 33 | push-%: 34 | $(MAKE) ARCH=$* build 35 | docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION) 36 | 37 | push: ./manifest-tool $(addprefix push-,$(ALL_ARCH)) 38 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION) 39 | 40 | ./manifest-tool: 41 | curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.7.0/manifest-tool-linux-amd64 > manifest-tool 42 | chmod +x manifest-tool 43 | -------------------------------------------------------------------------------- /images/tiller/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | VERSION?=v2.6.1 3 | TEMP_DIR:=$(shell mktemp -d) 4 | 5 | all: push 6 | download: 7 | git clone https://github.com/luxas/helm $(TEMP_DIR) -b backport_crossbuild_261 8 | 9 | push: download 10 | make -C $(TEMP_DIR) bootstrap-dockerized docker-push DOCKER_REGISTRY=docker.io IMAGE_PREFIX=$(REGISTRY) VERSION=$(VERSION) 11 | -------------------------------------------------------------------------------- /images/traefik/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM scratch 2 | COPY traefik / 3 | ADD https://raw.githubusercontent.com/containous/traefik/master/script/ca-certificates.crt /etc/ssl/certs/ 4 | ENTRYPOINT ["/traefik"] 5 | -------------------------------------------------------------------------------- /images/traefik/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=traefik 3 | TEMP_DIR:=$(shell mktemp -d) 4 | ARCH?=amd64 5 | ALL_ARCH=amd64 arm arm64 6 | ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64 7 | 8 | VERSION?=v1.5.4 9 | URL?=https://github.com/containous/traefik/releases/download/$(VERSION)/traefik_linux-$(ARCH) 10 | 11 | all: build 12 | build: 13 | curl -sSL $(URL) > $(TEMP_DIR)/traefik 14 | chmod +x $(TEMP_DIR)/traefik 15 | cp Dockerfile $(TEMP_DIR) 16 | 17 | docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR) 18 | rm -r $(TEMP_DIR) 19 | 20 | push-%: 21 | $(MAKE) ARCH=$* build 22 | docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION) 23 | 24 | push: ./manifest-tool $(addprefix push-,$(ALL_ARCH)) 25 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION) 26 | 27 | ./manifest-tool: 28 | curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.7.0/manifest-tool-linux-amd64 > manifest-tool 29 | chmod +x manifest-tool 30 | -------------------------------------------------------------------------------- /images/wardle-apiserver/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM BASEIMAGE 2 | COPY sample-apiserver / 3 | ENTRYPOINT ["/sample-apiserver"] -------------------------------------------------------------------------------- /images/wardle-apiserver/Makefile: -------------------------------------------------------------------------------- 1 | REGISTRY?=luxas 2 | IMAGE?=sample-apiserver 3 | TEMP_DIR:=$(shell mktemp -d) 4 | ARCH?=amd64 5 | ALL_ARCH=amd64 arm arm64 6 | ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64 7 | 8 | 
COMMIT?=aa1ce319891e390026f7f7759dceb03c99156a4b 9 | VERSION?=v0.3.0 10 | 11 | ifeq ($(ARCH),amd64) 12 | BASEIMAGE?=busybox 13 | endif 14 | ifeq ($(ARCH),arm) 15 | BASEIMAGE?=arm32v7/busybox 16 | endif 17 | ifeq ($(ARCH),arm64) 18 | BASEIMAGE?=arm64v8/busybox 19 | endif 20 | 21 | all: build 22 | build: 23 | cp Dockerfile $(TEMP_DIR) 24 | cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile 25 | 26 | docker run -it -v $(TEMP_DIR):/build -e GOARCH=$(ARCH) golang:1.10 /bin/bash -c "\ 27 | git clone https://github.com/kubernetes/sample-apiserver /go/src/k8s.io/sample-apiserver && \ 28 | cd /go/src/k8s.io/sample-apiserver && git checkout $(COMMIT) && \ 29 | CGO_ENABLED=0 go build -a -tags netgo -o /build/sample-apiserver k8s.io/sample-apiserver" 30 | 31 | docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) $(TEMP_DIR) 32 | 33 | push-%: 34 | $(MAKE) ARCH=$* build 35 | docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION) 36 | 37 | push: ./manifest-tool $(addprefix push-,$(ALL_ARCH)) 38 | ./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION) 39 | 40 | ./manifest-tool: 41 | curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.7.0/manifest-tool-linux-amd64 > manifest-tool 42 | chmod +x manifest-tool 43 | -------------------------------------------------------------------------------- /init.sh: -------------------------------------------------------------------------------- 1 | 2 | kubeadm init --config kubeadm.yaml 3 | 4 | export KUBECONFIG=/etc/kubernetes/admin.conf 5 | 6 | kubectl create secret -n kube-system generic weave-passwd --from-literal=weave-passwd=$(hexdump -n 16 -e '4/4 "%08x" 1 "\n"' /dev/random) 7 | kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&password-secret=weave-passwd" 8 | kubectl taint nodes --all node-role.kubernetes.io/master- 9 | 10 | # Deploy the Dashboard 11 | curl -sSL https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml | sed "s|gcr.io/google_containers/kubernetes-dashboard-amd64:.*|luxas/kubernetes-dashboard:v1.7.1|" | kubectl apply -f - 12 | kubectl apply -f demos/dashboard/ingress.yaml 13 | 14 | # Deploy metrics-server and Heapster 15 | kubectl apply -f demos/monitoring/metrics-server.yaml 16 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml 17 | curl -sSL https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml | \ 18 | sed "s|image:.*|image: luxas/heapster:v1.4.0|" | kubectl apply -f - 19 | 20 | kubectl apply -f demos/loadbalancing/traefik-common.yaml 21 | kubectl apply -f demos/loadbalancing/traefik-ngrok.yaml 22 | 23 | # Install the Rook and Prometheus operators 24 | ROOK_BRANCH=${ROOK_BRANCH:-"release-0.5"} 25 | kubectl apply -f https://raw.githubusercontent.com/rook/rook/${ROOK_BRANCH}/cluster/examples/kubernetes/rook-operator.yaml 26 | kubectl apply -f demos/monitoring/prometheus-operator.yaml 27 | 28 | echo "Waiting for the Rook and Prometheus operators to create the TPRs/CRDs" 29 | while [[ $(kubectl get cluster; echo $?) == 1 ]]; do sleep 1; done 30 | while [[ $(kubectl get prometheus; echo $?) 
== 1 ]]; do sleep 1; done 31 | 32 | # Requires the Rook and Prometheus API groups 33 | kubectl apply -f https://raw.githubusercontent.com/rook/rook/${ROOK_BRANCH}/cluster/examples/kubernetes/rook-cluster.yaml 34 | kubectl apply -f https://raw.githubusercontent.com/rook/rook/${ROOK_BRANCH}/cluster/examples/kubernetes/rook-storageclass.yaml 35 | kubectl patch storageclass rook-block -p '{"metadata":{"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' 36 | 37 | echo "Waiting for rook to create a Secret" 38 | while [[ $(kubectl get secret rook-rook-user; echo $?) == 1 ]]; do sleep 1; done 39 | 40 | # Set up the rook Secret for other namespaces than default 41 | kubectl get secret rook-rook-user -oyaml | sed "/resourceVer/d;/uid/d;/self/d;/creat/d;/namespace/d" | kubectl -n kube-system apply -f - 42 | kubectl create ns wardle 43 | kubectl get secret rook-rook-user -oyaml | sed "/resourceVer/d;/uid/d;/self/d;/creat/d;/namespace/d" | kubectl -n wardle apply -f - 44 | 45 | 46 | kubectl apply -f demos/monitoring/influx-grafana.yaml 47 | 48 | # Demo the autoscaling based on custom metrics feature 49 | kubectl apply -f demos/monitoring/sample-prometheus-instance.yaml 50 | kubectl apply -f demos/monitoring/sample-metrics-app.yaml 51 | kubectl apply -f demos/monitoring/custom-metrics.yaml 52 | 53 | # Set up helm and install tiller 54 | helm init 55 | kubectl -n kube-system create serviceaccount tiller 56 | kubectl -n kube-system set serviceaccount deploy tiller-deploy tiller 57 | kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller 58 | kubectl -n kube-system set image deploy/tiller-deploy tiller=luxas/tiller:v2.6.1 59 | 60 | # Demo an aggregated API server 61 | kubectl apply -f demos/sample-apiserver/wardle.yaml 62 | while [[ $(kubectl get flunder; echo $?) == 1 ]]; do sleep 1; done 63 | kubectl apply -f demos/sample-apiserver/my-flunder.yaml 64 | -------------------------------------------------------------------------------- /install.sh: -------------------------------------------------------------------------------- 1 | 2 | # Add the kubernetes apt repo 3 | apt-get update && apt-get install -y apt-transport-https 4 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - 5 | cat <<EOF >/etc/apt/sources.list.d/kubernetes.list 6 | deb http://apt.kubernetes.io/ kubernetes-xenial main 7 | EOF 8 | 9 | # Install docker and kubeadm 10 | apt-get update && apt-get install -y docker.io kubeadm ceph-common 11 | 12 | # Set current arch 13 | ARCH=${ARCH:-"amd64"} 14 | 15 | # Enable hostPort support using CNI & Weave 16 | mkdir -p /etc/cni/net.d/ 17 | cat > /etc/cni/net.d/10-mynet.conflist < manifest-tool 56 | chmod +x manifest-tool 57 | --------------------------------------------------------------------------------
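
To sanity-check the custom-metrics autoscaling demo once `init.sh` has finished, you can query the aggregated custom-metrics API directly and watch the HPA react. This is only a rough sketch: the `/sample-app` route and the `sample-metrics-app-hpa` name come from the manifests above, while the `custom.metrics.k8s.io/v1beta1` path and the `$NODE_IP` placeholder are assumptions you may need to adapt to your cluster.

```bash
# Generate some traffic against the sample app through the Traefik ingress
# ($NODE_IP is a placeholder for a node reachable on port 80)
for i in $(seq 1 500); do curl -s http://$NODE_IP/sample-app > /dev/null; done

# Ask the custom-metrics API (served by the Prometheus adapter) for the
# http_requests metric that the HorizontalPodAutoscaler consumes
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/sample-metrics-app/http_requests"

# Watch the HPA scale the sample-metrics-app Deployment up and down
kubectl get hpa sample-metrics-app-hpa --watch
```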