├── .gitignore ├── README.md ├── acbuild.sh ├── app.go ├── bridge.png ├── control-loop.png ├── dashboard-ingress.png ├── flower.png ├── host-mode.png ├── image-standards.png ├── ipvlan.png ├── k8s-overview.png ├── macvlan.png ├── os-procs.png ├── pattern-adapter.png ├── pattern-ambassador.png ├── pattern-leader.png ├── pattern-scatter-gather.png ├── pattern-sidecar.png ├── pattern-work-queue.png ├── pod-apps.png ├── pod-net-canal.png ├── pod-net-fabric.png ├── pod-net-weave.png ├── pod-net.png ├── ptp.png ├── redis-service.png ├── rkt-horizontal-color.png ├── rkt-vs-docker-fetch.png ├── rkt-vs-docker-process-model.png ├── rkt8s.png ├── traefik-ingress.png ├── vagrant │   ├── Vagrantfile │   ├── apiserver.service │   ├── bashrc │   ├── controller-manager.service │   ├── etc-wait-for │   │   ├── apiserver │   │   ├── etcd │   │   └── kubelet │   ├── etcd.service │   ├── host-rkt │   ├── install-rkt.sh │   ├── k8s.conf │   ├── kubelet.service │   ├── proxy.service │   ├── resolv.conf │   ├── rkt-api.service │   ├── scheduler.service │   ├── selinux.config │   ├── skydns.yaml │   ├── traefik.yaml │   ├── wait-for │   └── wait-for@.service ├── workshop.slide └── you.png /.gitignore: -------------------------------------------------------------------------------- 1 | vagrant/.vagrant 2 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # rktnetes-workshop 2 | 3 | This repo contains a Vagrantfile and the assets needed to launch a [Kubernetes](https://github.com/kubernetes/kubernetes) cluster with [rkt](https://github.com/coreos/rkt) as the container runtime. 4 | 5 | ## Workshop 6 | 7 | The workshop slides can be viewed at http://go-talks.appspot.com/github.com/coreos/rktnetes-workshop/workshop.slide. 8 | 9 | ## Vagrant up your rktnetes cluster 10 | 11 | ```shell 12 | $ cd vagrant 13 | $ vagrant up 14 | [...]
15 | 16 | $ vagrant ssh 17 | 18 | [vagrant@localhost ~]$ kubectl describe node 127.0.0.1 19 | Name: 127.0.0.1 20 | Labels: beta.kubernetes.io/arch=amd64 21 | beta.kubernetes.io/os=linux 22 | kubernetes.io/hostname=127.0.0.1 23 | Taints: 24 | CreationTimestamp: Mon, 27 Jun 2016 21:23:07 +0000 25 | Phase: 26 | Conditions: 27 | Type Status LastHeartbeatTime LastTransitionTime Reason Message 28 | ---- ------ ----------------- ------------------ ------ ------- 29 | OutOfDisk False Mon, 27 Jun 2016 21:25:10 +0000 Mon, 27 Jun 2016 21:23:07 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available 30 | MemoryPressure False Mon, 27 Jun 2016 21:25:10 +0000 Mon, 27 Jun 2016 21:23:07 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available 31 | Ready True Mon, 27 Jun 2016 21:25:10 +0000 Mon, 27 Jun 2016 21:23:07 +0000 KubeletReady kubelet is posting ready status 32 | Addresses: 127.0.0.1,127.0.0.1 33 | Capacity: 34 | alpha.kubernetes.io/nvidia-gpu: 0 35 | cpu: 2 36 | memory: 2048712Ki 37 | pods: 110 38 | Allocatable: 39 | alpha.kubernetes.io/nvidia-gpu: 0 40 | cpu: 2 41 | memory: 2048712Ki 42 | pods: 110 43 | System Info: 44 | Machine ID: 508619e7e68e446c84d1fcdf7e0dc577 45 | System UUID: B488DBD6-3470-4E69-BF62-DE81AD069083 46 | Boot ID: 1ea277cf-87e2-4383-bbea-305339975de9 47 | Kernel Version: 4.5.5-300.fc24.x86_64 48 | OS Image: Fedora 24 (Cloud Edition) 49 | Operating System: linux 50 | Architecture: amd64 51 | Container Runtime Version: rkt://1.9.1 52 | Kubelet Version: v1.3.0-beta.2 53 | Kube-Proxy Version: v1.3.0-beta.2 54 | ExternalID: 127.0.0.1 55 | Non-terminated Pods: (0 in total) 56 | Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits 57 | --------- ---- ------------ ---------- --------------- ------------- 58 | Allocated resources: 59 | (Total limits may be over 100 percent, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md) 60 | CPU Requests CPU Limits Memory Requests Memory Limits 61 | ------------ ---------- --------------- ------------- 62 | 0 (0%) 0 (0%) 0 (0%) 0 (0%) 63 | Events: 64 | FirstSeen LastSeen Count From SubobjectPath Type Reason Message 65 | --------- -------- ----- ---- ------------- -------- ------ ------- 66 | 2m 2m 1 {kube-proxy 127.0.0.1} Normal Starting Starting kube-proxy. 67 | 2m 2m 1 {kubelet 127.0.0.1} Normal Starting Starting kubelet. 68 | 2m 2m 7 {kubelet 127.0.0.1} Normal NodeHasSufficientDisk Node 127.0.0.1 status is now: NodeHasSufficientDisk 69 | 2m 2m 7 {kubelet 127.0.0.1} Normal NodeHasSufficientMemory Node 127.0.0.1 status is now: NodeHasSufficientMemory 70 | 2m 2m 1 {controllermanager } Normal RegisteredNode Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController 71 | 72 | ``` 73 | 74 | Now we can start playing with the rktnetes cluster! 
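If the node never shows a `Ready` condition, a reasonable first check is the set of systemd units that the provisioner installed (`etcd.service`, `apiserver.service`, `kubelet.service`, and friends from the `vagrant/` directory):

```shell
[vagrant@localhost ~]$ systemctl status etcd apiserver kubelet
[vagrant@localhost ~]$ sudo journalctl -u kubelet.service | tail -n 20
```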
75 | 76 | ```shell 77 | [vagrant@localhost ~]$ kubectl run redis --image=redis 78 | deployment "redis" created 79 | 80 | [vagrant@localhost ~]$ kubectl get pods 81 | NAME READY STATUS RESTARTS AGE 82 | redis-4282552436-bwue8 1/1 Running 0 13s 83 | 84 | [vagrant@localhost ~]$ rkt list 85 | UUID APP IMAGE NAME STATE CREATED STARTED NETWORKS 86 | 2a827036 hyperkube quay.io/coreos/hyperkube:v1.3.0-beta.2_coreos.0 running 14 minutes ago 14 minutes ago 87 | 72210e51 etcd quay.io/coreos/etcd:v2.3.7 running 14 minutes ago 14 minutes ago 88 | 928296f0 hyperkube quay.io/coreos/hyperkube:v1.3.0-beta.2_coreos.0 running 14 minutes ago 14 minutes ago 89 | 97c2bef9 hyperkube quay.io/coreos/hyperkube:v1.3.0-beta.2_coreos.0 running 14 minutes ago 14 minutes ago 90 | edee7641 redis registry-1.docker.io/library/redis:latest running 1 minute ago 1 minute ago rkt.kubernetes.io:ip4=10.1.0.2, default-restricted:ip4=172.16.28.2 91 | ``` 92 | 93 | Add the Kubernetes Dashboard: 94 | 95 | ```shell 96 | [vagrant@localhost ~]$ kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml 97 | ``` 98 | 99 | Add an Ingress controller based on [traefik](https://traefik.io/): 100 | 101 | ```shell 102 | [vagrant@localhost ~]$ kubectl create -f /vagrant/traefik.yaml 103 | ``` 104 | 105 | Modify your local `/etc/hosts`, adding an entry that points `traefik.rktnetes` at the Vagrant VM's IP address: 106 | ``` 107 | [vagrant@localhost ~]$ ip addr show dev eth1 108 | 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 109 | link/ether 52:54:00:90:3f:9a brd ff:ff:ff:ff:ff:ff 110 | inet 172.28.128.126/24 brd 172.28.128.255 scope global dynamic eth1 111 | valid_lft 3221sec preferred_lft 3221sec 112 | inet6 fe80::5054:ff:fe90:3f9a/64 scope link 113 | valid_lft forever preferred_lft forever 114 | 115 | [vagrant@localhost ~]$ exit 116 | 117 | $ vi /etc/hosts 118 | ... 119 | 172.28.128.126 traefik.rktnetes 120 | ``` 121 | 122 | Open http://traefik.rktnetes.
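If the browser can't resolve the name yet, the same path can be checked from a terminal by pinning the `Host` header to the VM address reported by `ip addr` (substitute your own address for the one from the example):

```shell
$ curl -H "Host: traefik.rktnetes" http://172.28.128.126/
```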
123 | -------------------------------------------------------------------------------- /acbuild.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -e 3 | set -x 4 | 5 | name=app 6 | os=linux 7 | version=0.0.1 8 | arch=amd64 9 | 10 | acbuildend () { 11 | export EXIT=$?; 12 | acbuild --debug end && exit $EXIT; 13 | } 14 | 15 | acbuild --debug begin 16 | trap acbuildend EXIT 17 | 18 | export GOOS="${os}" 19 | export GOARCH="${arch}" 20 | CGO_ENABLED=0 go build 21 | 22 | acbuild set-name workshop/app # HL 23 | acbuild copy "${name}" /"${name}" 24 | acbuild set-exec /"${name}" 25 | acbuild port add www tcp 8080 26 | acbuild label add version "${version}" 27 | acbuild label add arch "${arch}" 28 | acbuild label add os "${os}" 29 | acbuild annotation add authors "Your Name <your@email.com>" 30 | acbuild write --overwrite "${name}"-"${version}"-"${os}"-"${arch}".aci 31 | 32 | gpg2 --yes --batch \ 33 | --armor \ 34 | --output "${name}"-"${version}"-"${os}"-"${arch}".aci.asc \ 35 | --detach-sign "${name}"-"${version}"-"${os}"-"${arch}".aci -------------------------------------------------------------------------------- /app.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "log" 6 | "net/http" 7 | "net/http/httputil" 8 | ) 9 | 10 | func handler(w http.ResponseWriter, r *http.Request) { 11 | b, _ := httputil.DumpRequest(r, false) 12 | log.Println(string(b)) 13 | fmt.Fprintln(w, "Hello World") 14 | } 15 | 16 | func main() { 17 | http.HandleFunc("/", handler) 18 | log.Fatal(http.ListenAndServe(":8080", nil)) 19 | } 20 | -------------------------------------------------------------------------------- /bridge.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/bridge.png -------------------------------------------------------------------------------- /control-loop.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/control-loop.png -------------------------------------------------------------------------------- /dashboard-ingress.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/dashboard-ingress.png -------------------------------------------------------------------------------- /flower.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/flower.png -------------------------------------------------------------------------------- /host-mode.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/host-mode.png -------------------------------------------------------------------------------- /image-standards.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/image-standards.png -------------------------------------------------------------------------------- /ipvlan.png:
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/ipvlan.png -------------------------------------------------------------------------------- /k8s-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/k8s-overview.png -------------------------------------------------------------------------------- /macvlan.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/macvlan.png -------------------------------------------------------------------------------- /os-procs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/os-procs.png -------------------------------------------------------------------------------- /pattern-adapter.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pattern-adapter.png -------------------------------------------------------------------------------- /pattern-ambassador.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pattern-ambassador.png -------------------------------------------------------------------------------- /pattern-leader.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pattern-leader.png -------------------------------------------------------------------------------- /pattern-scatter-gather.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pattern-scatter-gather.png -------------------------------------------------------------------------------- /pattern-sidecar.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pattern-sidecar.png -------------------------------------------------------------------------------- /pattern-work-queue.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pattern-work-queue.png -------------------------------------------------------------------------------- /pod-apps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pod-apps.png -------------------------------------------------------------------------------- /pod-net-canal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pod-net-canal.png 
-------------------------------------------------------------------------------- /pod-net-fabric.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pod-net-fabric.png -------------------------------------------------------------------------------- /pod-net-weave.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pod-net-weave.png -------------------------------------------------------------------------------- /pod-net.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/pod-net.png -------------------------------------------------------------------------------- /ptp.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/ptp.png -------------------------------------------------------------------------------- /redis-service.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/redis-service.png -------------------------------------------------------------------------------- /rkt-horizontal-color.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/rkt-horizontal-color.png -------------------------------------------------------------------------------- /rkt-vs-docker-fetch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/rkt-vs-docker-fetch.png -------------------------------------------------------------------------------- /rkt-vs-docker-process-model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/rkt-vs-docker-process-model.png -------------------------------------------------------------------------------- /rkt8s.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/rkt8s.png -------------------------------------------------------------------------------- /traefik-ingress.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/traefik-ingress.png -------------------------------------------------------------------------------- /vagrant/Vagrantfile: -------------------------------------------------------------------------------- 1 | Vagrant.configure("2") do |config| 2 | config.vm.box = "fedora/24-cloud-base" 3 | 4 | config.vm.provider "virtualbox" do |vb| 5 | vb.gui = false 6 | vb.memory = "2048" 7 | vb.cpus = "2" 8 | end 9 | 10 | config.vm.network "private_network", type: "dhcp" 11 | config.vm.provision :shell, :privileged => true, :path => "install-rkt.sh" 12 | end 13 | 
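Because every piece of guest setup lives in `install-rkt.sh`, the provisioner can be re-run against an already-created VM; this is plain Vagrant behavior and handy when tweaking the install script:

```shell
$ vagrant provision              # re-run install-rkt.sh inside the running VM
$ vagrant reload --provision     # restart the VM, then provision again
```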
-------------------------------------------------------------------------------- /vagrant/apiserver.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | After=network.target 3 | 4 | After=wait-for@etcd.service 5 | Requires=wait-for@etcd.service 6 | 7 | [Service] 8 | ExecStartPre=/usr/bin/mkdir -p /var/run/kubernetes 9 | ExecStart=/usr/bin/rkt run \ 10 | --net=host \ 11 | --volume etc-kubernetes,kind=host,source=/etc/kubernetes \ 12 | --volume var-run-kubernetes,kind=host,source=/var/run/kubernetes \ 13 | --mount volume=etc-kubernetes,target=/etc/kubernetes \ 14 | --mount volume=var-run-kubernetes,target=/var/run/kubernetes \ 15 | quay.io/coreos/hyperkube:v1.4.3_coreos.0 \ 16 | --exec=/hyperkube \ 17 | -- \ 18 | apiserver \ 19 | --v=3 \ 20 | --cert-dir=/var/run/kubernetes \ 21 | --service-account-key-file=/etc/kubernetes/kube-serviceaccount.key \ 22 | --service-account-lookup=false \ 23 | --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ 24 | --insecure-bind-address=0.0.0.0 \ 25 | --insecure-port=8080 \ 26 | --advertise-address=0.0.0.0 \ 27 | --etcd-servers=http://127.0.0.1:4001 \ 28 | --service-cluster-ip-range=10.0.0.0/24 \ 29 | --cloud-provider= \ 30 | \'--cors-allowed-origins=/127.0.0.1(:[0-9]+)?$,/localhost(:[0-9]+)?$\' 31 | 32 | [Install] 33 | WantedBy=multi-user.target 34 | 35 | -------------------------------------------------------------------------------- /vagrant/bashrc: -------------------------------------------------------------------------------- 1 | # .bashrc 2 | 3 | # Source global definitions 4 | if [ -f /etc/bashrc ]; then 5 | . /etc/bashrc 6 | fi 7 | 8 | # Uncomment the following line if you don't like systemctl's auto-paging feature: 9 | # export SYSTEMD_PAGER= 10 | 11 | # User specific aliases and functions 12 | source <(kubectl completion bash) 13 | 14 | export GOPATH=$HOME/gopath 15 | -------------------------------------------------------------------------------- /vagrant/controller-manager.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | After=network.target 3 | 4 | After=wait-for@apiserver.service 5 | Requires=wait-for@apiserver.service 6 | 7 | [Service] 8 | ExecStart=/usr/bin/rkt run \ 9 | --net=host \ 10 | --volume etc-kubernetes,kind=host,source=/etc/kubernetes \ 11 | --volume var-run-kubernetes,kind=host,source=/var/run/kubernetes \ 12 | --mount volume=etc-kubernetes,target=/etc/kubernetes \ 13 | --mount volume=var-run-kubernetes,target=/var/run/kubernetes \ 14 | quay.io/coreos/hyperkube:v1.4.3_coreos.0 \ 15 | --exec=/hyperkube \ 16 | -- \ 17 | controller-manager \ 18 | --v=3 \ 19 | --service-account-private-key-file=/etc/kubernetes/kube-serviceaccount.key \ 20 | --root-ca-file=/var/run/kubernetes/apiserver.crt \ 21 | --enable-hostpath-provisioner=false \ 22 | --pvclaimbinder-sync-period=15s \ 23 | --cloud-provider= \ 24 | --master=0.0.0.0:8080 25 | 26 | [Install] 27 | WantedBy=multi-user.target 28 | 29 | -------------------------------------------------------------------------------- /vagrant/etc-wait-for/apiserver: -------------------------------------------------------------------------------- 1 | http://localhost:8080/healthz 1 180 2 | -------------------------------------------------------------------------------- /vagrant/etc-wait-for/etcd: -------------------------------------------------------------------------------- 1 | http://localhost:4001/version 1 180 2 | 
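Each file under `etc-wait-for` (apiserver, etcd, and kubelet alike) holds three whitespace-separated fields (the URL to poll, the seconds to sleep between attempts, and the maximum number of attempts), which the `wait-for` script further below word-splits into its positional parameters. A manual run, assuming the files are installed to `/etc/wait-for` as `install-rkt.sh` does, looks like:

```shell
$ wait-for etcd   # polls http://localhost:4001/version every 1s, up to 180 times
```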
-------------------------------------------------------------------------------- /vagrant/etc-wait-for/kubelet: -------------------------------------------------------------------------------- 1 | http://localhost:10248/healthz 1 180 2 | -------------------------------------------------------------------------------- /vagrant/etcd.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | After=network.target 3 | 4 | [Service] 5 | Type=notify 6 | ExecStart=/usr/bin/rkt run \ 7 | --net=host \ 8 | --volume var-lib-etcd,kind=host,source=/var/lib/etcd \ 9 | --mount volume=var-lib-etcd,target=/var/lib/etcd \ 10 | quay.io/coreos/etcd:v2.3.7 \ 11 | --exec=/etcd \ 12 | -- \ 13 | --data-dir=/var/lib/etcd 14 | 15 | [Install] 16 | WantedBy=multi-user.target 17 | -------------------------------------------------------------------------------- /vagrant/host-rkt: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # This is bind mounted into the kubelet rootfs and all rkt shell-outs go 3 | # through this rkt wrapper. It essentially enters the host mount namespace 4 | # (which it is already in) only for the purpose of breaking out of the chroot 5 | # before calling rkt. It makes things like rkt gc work and avoids bind mounting 6 | # certain rkt filesystem dependencies into the kubelet rootfs. This can 7 | # eventually go away once the write API lands upstream and rkt gc goes 8 | # through the api-service. Related issue: 9 | # https://github.com/coreos/rkt/issues/2878 10 | exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@" 11 | -------------------------------------------------------------------------------- /vagrant/install-rkt.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | set -x 4 | 5 | cd $(mktemp -d) 6 | 7 | rkt_version="1.17.0" 8 | k8s_version="v1.4.3_coreos.0" 9 | acbuild_version="0.4.0" 10 | 11 | dnf -y install \ 12 | openssl \ 13 | systemd-container \ 14 | go \ 15 | git \ 16 | rng-tools 17 | 18 | curl -O -L https://github.com/containers/build/releases/download/v"${acbuild_version}"/acbuild-v"${acbuild_version}".tar.gz 19 | tar -xzf acbuild-v"${acbuild_version}".tar.gz 20 | install -Dm755 acbuild-v"${acbuild_version}"/acbuild /usr/bin/acbuild 21 | install -Dm755 acbuild-v"${acbuild_version}"/acbuild-chroot /usr/bin/acbuild-chroot 22 | install -Dm755 acbuild-v"${acbuild_version}"/acbuild-script /usr/bin/acbuild-script 23 | 24 | kurl="https://storage.googleapis.com/kubernetes-release/release/v1.4.3/bin/linux/amd64" 25 | curl -O -L "${kurl}"/kubectl 26 | install -Dm755 kubectl /usr/bin/kubectl 27 | 28 | curl -O -L https://github.com/coreos/rkt/releases/download/v"${rkt_version}"/rkt-"${rkt_version}"-1.x86_64.rpm 29 | rpm -Uvh rkt-"${rkt_version}"-1.x86_64.rpm 30 | install -Dm755 /vagrant/host-rkt /usr/bin/host-rkt 31 | 32 | for unit in etcd.service apiserver.service controller-manager.service kubelet.service scheduler.service proxy.service; do 33 | install -Dm644 /vagrant/${unit} /usr/lib/systemd/system/${unit} 34 | done 35 | 36 | gpasswd -a vagrant rkt 37 | gpasswd -a vagrant rkt-admin 38 | 39 | cp /vagrant/selinux.config /etc/selinux/config 40 | setenforce 0 41 | 42 | mkdir --parents /etc/kubernetes 43 | mkdir --parents /var/lib/docker 44 | mkdir --parents /var/lib/kubelet 45 | mkdir --parents /run/kubelet 46 | mkdir --parents /var/run/kubernetes 47 | mkdir --parents /etc/rkt/net.d 48 | mkdir --parents /etc/cni/net.d 49 | mkdir
--parents /var/lib/etcd 50 | 51 | cp /vagrant/resolv.conf /etc/kubernetes/resolv.conf 52 | cp /vagrant/k8s.conf /etc/rkt/net.d 53 | cp /vagrant/k8s.conf /etc/cni/net.d 54 | cp /vagrant/bashrc /home/vagrant/.bashrc 55 | chown vagrant:vagrant /home/vagrant/.bashrc 56 | 57 | cp -r /vagrant/etc-wait-for /etc/wait-for 58 | install -Dm644 /vagrant/wait-for@.service /usr/lib/systemd/system/wait-for@.service 59 | install -Dm755 /vagrant/wait-for /usr/bin/wait-for 60 | 61 | openssl genrsa -out /etc/kubernetes/kube-serviceaccount.key 2048 62 | 63 | rkt trust --trust-keys-from-https --prefix "quay.io/coreos" 64 | 65 | rkt fetch quay.io/coreos/hyperkube:"${k8s_version}" 66 | rkt fetch quay.io/coreos/etcd:v2.3.7 67 | 68 | systemctl daemon-reload 69 | 70 | systemctl enable rkt-api 71 | systemctl start rkt-api 72 | 73 | systemctl enable rngd 74 | systemctl start rngd 75 | 76 | install -d --group=vagrant --owner=vagrant /home/vagrant/gopath /home/vagrant/gopath/src /home/vagrant/gopath/bin /home/vagrant/gopath/pkg 77 | -------------------------------------------------------------------------------- /vagrant/k8s.conf: -------------------------------------------------------------------------------- 1 | { 2 | "name": "rkt.kubernetes.io", 3 | "type": "bridge", 4 | "bridge": "mybridge", 5 | "mtu": 1460, 6 | "addIf": "true", 7 | "isGateway": true, 8 | "ipMasq": true, 9 | "ipam": { 10 | "type": "host-local", 11 | "subnet": "10.1.0.0/16", 12 | "gateway": "10.1.0.1", 13 | "routes": [ 14 | { 15 | "dst": "0.0.0.0/0" 16 | } 17 | ] 18 | } 19 | } 20 | -------------------------------------------------------------------------------- /vagrant/kubelet.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | After=network.target 3 | 4 | After=wait-for@apiserver.service 5 | Requires=wait-for@apiserver.service 6 | 7 | [Service] 8 | ExecStart=/usr/bin/rkt run \ 9 | --volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=false \ 10 | --mount volume=etc-kubernetes,target=/etc/kubernetes \ 11 | --volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true \ 12 | --mount volume=etc-ssl-certs,target=/etc/ssl/certs \ 13 | --volume var-lib-docker,kind=host,source=/var/lib/docker,readOnly=false \ 14 | --mount volume=var-lib-docker,target=/var/lib/docker \ 15 | --volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,readOnly=false \ 16 | --mount volume=var-lib-kubelet,target=/var/lib/kubelet \ 17 | --volume os-release,kind=host,source=/usr/lib/os-release,readOnly=true \ 18 | --mount volume=os-release,target=/etc/os-release \ 19 | --volume run,kind=host,source=/run \ 20 | --mount volume=run,target=/run \ 21 | --volume dns,kind=host,source=/etc/resolv.conf \ 22 | --mount volume=dns,target=/etc/resolv.conf \ 23 | --volume rkt,kind=host,source=/usr/bin/host-rkt \ 24 | --mount volume=rkt,target=/usr/bin/rkt \ 25 | --volume var-lib-rkt,kind=host,source=/var/lib/rkt \ 26 | --mount volume=var-lib-rkt,target=/var/lib/rkt \ 27 | --volume stage,kind=host,source=/tmp \ 28 | --mount volume=stage,target=/tmp \ 29 | --volume var-log,kind=host,source=/var/log \ 30 | --mount volume=var-log,target=/var/log \ 31 | --volume etc-cni-netd,kind=host,source=/etc/cni/net.d \ 32 | --mount volume=etc-cni-netd,target=/etc/cni/net.d \ 33 | --trust-keys-from-https \ 34 | --stage1-from-dir=stage1-fly.aci \ 35 | quay.io/coreos/hyperkube:v1.4.3_coreos.0 \ 36 | --exec=/hyperkube \ 37 | -- \ 38 | kubelet \ 39 | --v=3 \ 40 | --container-runtime=rkt \ 41 | --rkt-path=/usr/bin/rkt \ 42 |
--hostname-override=127.0.0.1 \ 43 | --address=0.0.0.0 \ 44 | --port=10250 \ 45 | --api-servers=127.0.0.1:8080 \ 46 | --cluster-dns=10.0.0.10 \ 47 | --cluster-domain=cluster.local \ 48 | --network-plugin=cni \ 49 | --cni-conf-dir=/etc/cni/net.d \ 50 | --resolv-conf=/etc/kubernetes/resolv.conf \ 51 | --register-schedulable=true \ 52 | --allow-privileged=true \ 53 | --rkt-stage1-image=coreos.com/rkt/stage1-coreos 54 | 55 | [Install] 56 | WantedBy=multi-user.target 57 | -------------------------------------------------------------------------------- /vagrant/proxy.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | After=network.target 3 | 4 | After=wait-for@apiserver.service 5 | Requires=wait-for@apiserver.service 6 | 7 | [Service] 8 | ExecStart=/usr/bin/rkt run \ 9 | --trust-keys-from-https \ 10 | --stage1-from-dir=stage1-fly.aci \ 11 | --volume var-run-dbus,kind=host,source=/var/run/dbus \ 12 | --mount volume=var-run-dbus,target=/var/run/dbus \ 13 | quay.io/coreos/hyperkube:v1.4.3_coreos.0 \ 14 | --exec=/hyperkube \ 15 | -- \ 16 | proxy \ 17 | --v=3 \ 18 | --hostname-override=127.0.0.1 \ 19 | --master=http://0.0.0.0:8080 20 | 21 | [Install] 22 | WantedBy=multi-user.target 23 | -------------------------------------------------------------------------------- /vagrant/resolv.conf: -------------------------------------------------------------------------------- 1 | nameserver 8.8.8.8 2 | nameserver 8.8.4.4 3 | -------------------------------------------------------------------------------- /vagrant/rkt-api.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=rkt api service 3 | Documentation=http://github.com/coreos/rkt 4 | After=network.target 5 | 6 | [Service] 7 | ExecStart=/usr/bin/rkt api-service --listen=127.0.0.1:15441 8 | 9 | [Install] 10 | WantedBy=multi-user.target 11 | -------------------------------------------------------------------------------- /vagrant/scheduler.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | After=network.target 3 | 4 | After=wait-for@apiserver.service 5 | Requires=wait-for@apiserver.service 6 | 7 | [Service] 8 | ExecStart=/usr/bin/rkt run \ 9 | --net=host \ 10 | --volume etc-kubernetes,kind=host,source=/etc/kubernetes \ 11 | --volume var-run-kubernetes,kind=host,source=/var/run/kubernetes \ 12 | --mount volume=etc-kubernetes,target=/etc/kubernetes \ 13 | --mount volume=var-run-kubernetes,target=/var/run/kubernetes \ 14 | quay.io/coreos/hyperkube:v1.4.3_coreos.0 \ 15 | --exec=/hyperkube \ 16 | -- \ 17 | scheduler \ 18 | --v=3 \ 19 | --master=http://0.0.0.0:8080 20 | 21 | [Install] 22 | WantedBy=multi-user.target 23 | 24 | -------------------------------------------------------------------------------- /vagrant/selinux.config: -------------------------------------------------------------------------------- 1 | # This file controls the state of SELinux on the system. 2 | # SELINUX= can take one of these three values: 3 | # enforcing - SELinux security policy is enforced. 4 | # permissive - SELinux prints warnings instead of enforcing. 5 | # disabled - No SELinux policy is loaded. 6 | SELINUX=disabled 7 | # SELINUXTYPE= can take one of these three values: 8 | # targeted - Targeted processes are protected, 9 | # minimum - Modification of targeted policy. Only selected processes are protected. 10 | # mls - Multi Level Security protection. 
11 | SELINUXTYPE=targeted 12 | 13 | -------------------------------------------------------------------------------- /vagrant/skydns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: kube-dns 5 | namespace: kube-system 6 | labels: 7 | k8s-app: kube-dns 8 | kubernetes.io/cluster-service: "true" 9 | kubernetes.io/name: "KubeDNS" 10 | spec: 11 | selector: 12 | k8s-app: kube-dns 13 | clusterIP: 10.0.0.10 14 | ports: 15 | - name: dns 16 | port: 53 17 | protocol: UDP 18 | - name: dns-tcp 19 | port: 53 20 | protocol: TCP 21 | --- 22 | apiVersion: v1 23 | kind: ReplicationController 24 | metadata: 25 | name: kube-dns-v17 26 | namespace: kube-system 27 | labels: 28 | k8s-app: kube-dns 29 | version: v17 30 | kubernetes.io/cluster-service: "true" 31 | spec: 32 | replicas: 1 33 | selector: 34 | k8s-app: kube-dns 35 | version: v17 36 | template: 37 | metadata: 38 | labels: 39 | k8s-app: kube-dns 40 | version: v17 41 | kubernetes.io/cluster-service: "true" 42 | spec: 43 | containers: 44 | - name: kubedns 45 | image: gcr.io/google_containers/kubedns-amd64:1.5 46 | resources: 47 | # TODO: Set memory limits when we've profiled the container for large 48 | # clusters, then set request = limit to keep this container in 49 | # guaranteed class. Currently, this container falls into the 50 | # "burstable" category so the kubelet doesn't backoff from restarting it. 51 | limits: 52 | cpu: 100m 53 | memory: 200Mi 54 | requests: 55 | cpu: 100m 56 | memory: 50Mi 57 | livenessProbe: 58 | httpGet: 59 | path: /healthz 60 | port: 8080 61 | scheme: HTTP 62 | initialDelaySeconds: 60 63 | timeoutSeconds: 5 64 | successThreshold: 1 65 | failureThreshold: 5 66 | readinessProbe: 67 | httpGet: 68 | path: /readiness 69 | port: 8081 70 | scheme: HTTP 71 | # we poll on pod startup for the Kubernetes master service and 72 | # only setup the /readiness HTTP server once that's available. 73 | initialDelaySeconds: 30 74 | timeoutSeconds: 5 75 | args: 76 | # command = "/kube-dns" 77 | - --domain=cluster.local. 78 | - --dns-port=10053 79 | ports: 80 | - containerPort: 10053 81 | name: dns-local 82 | protocol: UDP 83 | - containerPort: 10053 84 | name: dns-tcp-local 85 | protocol: TCP 86 | - name: dnsmasq 87 | image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3 88 | args: 89 | - --cache-size=1000 90 | - --no-resolv 91 | - --server=127.0.0.1#10053 92 | ports: 93 | - containerPort: 53 94 | name: dns 95 | protocol: UDP 96 | - containerPort: 53 97 | name: dns-tcp 98 | protocol: TCP 99 | - name: healthz 100 | image: gcr.io/google_containers/exechealthz-amd64:1.0 101 | resources: 102 | # keep request = limit to keep this container in guaranteed class 103 | limits: 104 | cpu: 10m 105 | memory: 20Mi 106 | requests: 107 | cpu: 10m 108 | memory: 20Mi 109 | args: 110 | - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null 111 | - -port=8080 112 | - -quiet 113 | ports: 114 | - containerPort: 8080 115 | protocol: TCP 116 | dnsPolicy: Default # Don't use cluster DNS. 
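The `vagrant/` directory is shared into the VM at `/vagrant`, so this manifest can be applied the same way the README applies `traefik.yaml` (a sketch; the generated pod name suffix will differ on your cluster):

```shell
[vagrant@localhost ~]$ kubectl create -f /vagrant/skydns.yaml
[vagrant@localhost ~]$ kubectl --namespace=kube-system get pods
```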
117 | -------------------------------------------------------------------------------- /vagrant/traefik.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | labels: 5 | app: traefik-ingress 6 | name: traefik-ingress 7 | spec: 8 | type: ClusterIP 9 | ports: 10 | - port: 8080 11 | targetPort: 8080 12 | selector: 13 | app: traefik-ingress 14 | --- 15 | apiVersion: extensions/v1beta1 16 | kind: Ingress 17 | metadata: 18 | name: traefik 19 | spec: 20 | rules: 21 | - host: traefik.rktnetes 22 | http: 23 | paths: 24 | - path: / 25 | backend: 26 | serviceName: traefik-ingress 27 | servicePort: 8080 28 | --- 29 | apiVersion: v1 30 | kind: ReplicationController 31 | metadata: 32 | name: traefik-ingress 33 | labels: 34 | app: traefik-ingress 35 | spec: 36 | replicas: 1 37 | selector: 38 | app: traefik-ingress 39 | template: 40 | metadata: 41 | labels: 42 | app: traefik-ingress 43 | name: traefik-ingress 44 | spec: 45 | terminationGracePeriodSeconds: 60 46 | containers: 47 | - image: traefik 48 | name: traefik-ingress 49 | imagePullPolicy: Always 50 | ports: 51 | - containerPort: 80 52 | hostPort: 80 53 | - containerPort: 443 54 | hostPort: 443 55 | - containerPort: 8080 56 | args: 57 | - --web 58 | - --kubernetes 59 | - --logLevel=DEBUG 60 | -------------------------------------------------------------------------------- /vagrant/wait-for: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | wait_for() { 4 | local url=$1 5 | local wait=${2:-1} 6 | local times=${3:-30} 7 | 8 | echo url $url wait $wait times $times 9 | 10 | which curl >/dev/null || { 11 | echo "curl must be installed" 12 | exit 1 13 | } 14 | 15 | local i 16 | for i in $(seq 1 $times); do 17 | echo "trying ${i}/${times}" 18 | local out 19 | if out=$(curl -fs $url 2>/dev/null); then 20 | echo "On try ${i}: ${out}" 21 | return 0 22 | fi 23 | sleep ${wait} 24 | done 25 | echo "Timed out waiting to answer at ${url}; tried ${times} waiting ${wait} between each" 26 | return 1 27 | } 28 | 29 | argsfile=/etc/wait-for/"${1}" 30 | 31 | if [[ ! 
-f $argsfile ]]; then 32 | echo ${argsfile} does not exist 33 | exit 1 34 | fi 35 | 36 | wait_for $(< $argsfile) 37 | -------------------------------------------------------------------------------- /vagrant/wait-for@.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | After=network.target 3 | 4 | Requires=%i.service 5 | After=%i.service 6 | 7 | [Service] 8 | Type=oneshot 9 | ExecStart=/usr/bin/wait-for %i 10 | 11 | [Install] 12 | WantedBy=multi-user.target 13 | -------------------------------------------------------------------------------- /workshop.slide: -------------------------------------------------------------------------------- 1 | Basics with rkt, the container engine by CoreOS 2 | 3 | Oct 20th 2016 4 | 5 | Sergiusz Urbaniak 6 | rkt Engineer, CoreOS 7 | sur@coreos.com 8 | @_surbaniak 9 | 10 | * Overview 11 | 12 | .image rkt8s.png _ 600 13 | 14 | - Learn about rkt 15 | - Use rkt 16 | - Learn about Kubernetes 17 | - Use Kubernetes+rkt = rktnetes 18 | 19 | Requirements: 20 | 21 | - Vagrant 22 | - VirtualBox 23 | 24 | * Setup 25 | 26 | We will: 27 | 28 | - use Linux Fedora 24 29 | - install [[http://github.com/coreos/rkt][rkt]] 30 | - install [[https://github.com/containers/build][acbuild]] 31 | 32 | git clone https://github.com/coreos/rktnetes-workshop 33 | cd vagrant 34 | vagrant up 35 | vagrant ssh 36 | 37 | * Starting nginx 38 | 39 | A simple one-shot command: 40 | 41 | sudo rkt run --insecure-options=image docker://nginx 42 | 43 | In another terminal: 44 | 45 | $ rkt list 46 | UUID APP ... NETWORKS 47 | 0e32f69d nginx ... default:ip4=172.16.28.2 48 | 49 | $ curl 172.16.28.2 50 | <!DOCTYPE html> 51 | <html> 52 | <head> 53 | <title>Welcome to nginx!</title> 54 | </head> 55 | ... 56 | 57 | 58 | Quit by hitting `Ctrl-]` three times 59 | 60 | * Starting nginx 61 | 62 | Fetch first, then run: 63 | 64 | rkt fetch quay.io/josh_wood/caddy 65 | sudo rkt run quay.io/josh_wood/caddy 66 | 67 | Prepare, then run: 68 | 69 | sudo rkt prepare quay.io/josh_wood/caddy
71 | 8978e12d-6687-47e3-9480-69e3a155295c 72 | 73 | 74 | 75 | sudo rkt run-prepared 8978e12d-6687-47e3-9480-69e3a155295c 76 | 77 | * Starting an interactive busybox 78 | 79 | $ sudo rkt run --insecure-options=image --interactive docker://progrium/busybox 80 | 81 | image: using image from local store for image name coreos.com/rkt/stage1-coreos:1.17.0 82 | image: using image from local store for url docker://progrium/busybox 83 | networking: loading networks from /etc/rkt/net.d 84 | networking: loading network default with type ptp 85 | / # 86 | 87 | 88 | 89 | / # ping www.google.de 90 | ping: bad address 'www.google.de' 91 | 92 | Note: pod doesn't have DNS by default 93 | 94 | Start the pod above using `--dns=8.8.8.8` 95 | 96 | * Operations on pods 97 | 98 | 99 | $ rkt list --format=json 100 | [{"name":"f64282eb-9f76-4b5d-8d48-bd2b3c7c41e6","state":"running","networks":[{"netName":"default","netConf":"net/99-default.conf","pluginPath":"stage1/rootfs/usr/lib/rkt/plugins/net/ptp","ifName":"eth0","ip":"172.16.28.3","args":"","mask":"255.255.255.0"}],"app_names":["nginx"]}] 101 | 102 | 103 | 104 | $ rkt status f64282eb-9f76-4b5d-8d48-bd2b3c7c41e6 105 | state=running 106 | created=2016-10-20 12:37:34.069 +0000 UTC 107 | started=2016-10-20 12:37:34.163 +0000 UTC 108 | networks=default:ip4=172.16.28.3 109 | pid=8113 110 | 111 | 112 | $ sudo rkt stop f64282eb-9f76-4b5d-8d48-bd2b3c7c41e6 113 | "f64282eb-9f76-4b5d-8d48-bd2b3c7c41e6" 114 | 115 | 116 | 117 | $ rkt cat-manifest f64282eb-9f76-4b5d-8d48-bd2b3c7c41e6 118 | { 119 | "acVersion": "1.17.0", 120 | "acKind": "PodManifest", 121 | ... 122 | 123 | * Operations on images 124 | 125 | Listing images 126 | 127 | $ rkt image list 128 | ID NAME SIZE IMPORT TIME LAST USED 129 | sha512-3214c5b3dad1 quay.io/coreos/hyperkube:v1.4.3_coreos.0 1.2GiB 9 minutes ago 9 minutes ago 130 | sha512-06ef01473d5f quay.io/coreos/etcd:v2.3.7 62MiB 9 minutes ago 9 minutes ago 131 | sha512-bd4c342830c8 coreos.com/rkt/stage1-coreos:1.17.0 179MiB 8 minutes ago 8 minutes ago 132 | sha512-ec6258f8adf2 coreos.com/rkt/stage1-fly:1.17.0 17MiB 4 minutes ago 4 minutes ago 133 | sha512-1e315d546ce1 registry-1.docker.io/library/nginx:latest 356MiB 2 minutes ago 2 minutes ago 134 | 135 | Deleting an image 136 | 137 | $ sudo rkt image rm quay.io/josh_wood/caddy 138 | 139 | * Cleaning up 140 | 141 | rkt does not have a daemon, hence cleaning up is done manually, or in a cron-like job: 142 | 143 | $ sudo rkt gc --grace-period=0s 144 | 145 | * Starting rkt in the background 146 | 147 | i.e. 148 | 149 | $ sudo rkt run docker://nginx & 150 | 151 | not a good idea, rather start a transient systemd unit: 152 | 153 | $ sudo systemd-run rkt run docker://nginx 154 | Running as unit run-r267cd42efd35471aaa1deb65ede22f25.service. 
155 | 156 | inspect logs using: 157 | 158 | $ sudo journalctl -u run-r267cd42efd35471aaa1deb65ede22f25.service 159 | 160 | stop the unit using: 161 | 162 | $ sudo systemctl stop run-r267cd42efd35471aaa1deb65ede22f25.service 163 | 164 | * rkt Documentation 165 | 166 | We have quite nice man pages: 167 | 168 | man rkt 169 | man rkt_run 170 | man rkt_prepare 171 | man rkt_gc 172 | man rkt_fetch 173 | man rkt_image 174 | man rkt_image_rm 175 | 176 | .link https://github.com/coreos/rkt/tree/master/Documentation 177 | 178 | * Let's step back 179 | 180 | .image os-procs.png _ 200 181 | 182 | In a classical "OS" setup we have: 183 | 184 | - A supervisor, aka "init daemon", aka PID1 185 | - Not only one process, but many processes 186 | - Processes that work together, via localhost networking or IPC 187 | - Processes that communicate with the outside world 188 | 189 | * rkt - Pods 190 | 191 | .image pod-apps.png _ 300 192 | 193 | - Grouping of applications executing in a shared context (network, namespaces, volumes) 194 | - Shared fate 195 | - The _only_ execution primitive: single applications are modelled as singleton pods 196 | 197 | * rkt - Sample Pod: micro-service talking to Redis 198 | 199 | .image redis-service.png _ 230 200 | 201 | sudo rkt run \ 202 | --insecure-options=image \ 203 | docker://redis \ 204 | s-urbaniak.github.io/images/redisservice:0.0.2 205 | 206 | .link https://github.com/s-urbaniak/redis-service 207 | 208 | * Pods - Patterns, patterns everywhere 209 | 210 | Container/App Design Patterns 211 | 212 | - Kubernetes enables new design patterns 213 | - Similar to OO patterns 214 | - Key difference: technologically agnostic 215 | 216 | .link http://blog.kubernetes.io/2016/06/container-design-patterns.html 217 | .link https://www.usenix.org/system/files/conference/hotcloud16/hotcloud16_burns.pdf 218 | 219 | * Pods - Sidecar pattern 220 | 221 | .image pattern-sidecar.png _ 400 222 | 223 | - Auxiliary app 224 | - Extends and enhances the main app 225 | 226 | Pros: 227 | 228 | - Separate packaging units 229 | - Each app contained in a separate failure boundary 230 | - Potentially different technologies/languages 231 | 232 | * Pods - Ambassador pattern 233 | 234 | .image pattern-ambassador.png _ 400 235 | 236 | - Proxies communication 237 | - Separation of concerns 238 | - Main app has a simplified view 239 | 240 | * Pods - Adapter pattern 241 | 242 | .image pattern-adapter.png _ 400 243 | 244 | - Exposes the interface of an existing app as a different interface 245 | - Very useful for legacy apps, translating protocols 246 | 247 | * Pods - Leader election pattern 248 | 249 | .image pattern-leader.png _ 400 250 | 251 | - Separate the leader logic from the election logic 252 | - Swappable algorithms/technologies/environments 253 | 254 | Ready-to-use generic leader elector: 255 | 256 | .link http://blog.kubernetes.io/2016/01/simple-leader-election-with-Kubernetes.html 257 | 258 | * Pods - Work queue pattern 259 | 260 | .image pattern-work-queue.png _ 400 261 | 262 | - Separates app logic from queue enqueueing/dequeuing 263 | 264 | * Pods - Scatter gather pattern 265 | 266 | .image pattern-scatter-gather.png _ 400 267 | 268 | - Main app sends a simple request 269 | - Auxiliary app implements complex scatter/gather logic 270 | - Fan-out/fan-in requests separate from main app 271 | 272 | * Building pods - a small web app 273 | 274 | $ cat /home/vagrant/gopath/src/app/app.go 275 | 276 | .code app.go 277 | 278 | * Create a GPG key 279 | 280 | 1. Create a key (if you don't have one already) 281 | 282 | $ gpg2 --full-gen-key 283 | 284 | 2.
Trust the key 285 | 286 | $ gpg2 --armor --export your@email.com >public.asc 287 | $ sudo rkt trust --prefix=workshop ./public.asc 288 | 289 | * Create an ACI image 290 | 291 | $ cat /home/vagrant/gopath/src/app/acbuild.sh 292 | 293 | .code acbuild.sh 1,20 294 | 295 | * Create an ACI image 296 | 297 | .code acbuild.sh 21, 298 | 299 | $ sudo rkt run ./app-0.0.1-linux-amd64.aci 300 | 301 | * What if I have Docker images? 302 | 303 | - No need to convert them 304 | - Just use: 305 | 306 | rkt run --insecure-options=image docker://app 307 | 308 | * rkt - Networking 309 | 310 | The CNI (Container Network Interface) 311 | 312 | .image pod-net.png _ 300 313 | 314 | - Abstraction layer for network configuration 315 | - Single API to multiple, extensible networks 316 | - Narrow, simple API 317 | - Plugins for third-party implementations 318 | 319 | * rkt - Networking - Host Mode 320 | 321 | .image host-mode.png _ 300 322 | 323 | rkt run --net=host ... 324 | 325 | - Inherits the network namespace of the process that is invoking rkt. 326 | - Pod apps are able to access everything associated with the host's network interfaces. 327 | 328 | *Workshop*time* 329 | 330 | 1. Start nginx using `--net=host` 331 | 332 | * rkt - Networking - Default Mode (CNI ptp) 333 | 334 | .image ptp.png _ 300 335 | 336 | rkt run --net ... 337 | rkt run --net=default ... 338 | 339 | .link https://github.com/containernetworking/cni/blob/master/Documentation/ptp.md 340 | 341 | - Creates a virtual ethernet pair 342 | - One end placed in the pod 343 | - The other end placed on the host 344 | 345 | * rkt - Networking - CNI bridge 346 | 347 | .image bridge.png _ 300 348 | 349 | .link https://github.com/containernetworking/cni/blob/master/Documentation/bridge.md 350 | 351 | - Creates a virtual ethernet pair 352 | - One end placed in the pod 353 | - The other end placed on the host 354 | - Host veth plugged into a Linux bridge 355 | 356 | * rkt - Networking - CNI macvlan 357 | 358 | .image macvlan.png _ 300 359 | 360 | .link https://github.com/containernetworking/cni/blob/master/Documentation/macvlan.md 361 | 362 | - Functions like a switch 363 | - Pods get different MAC addresses 364 | - Pods share the same physical device 365 | 366 | * rkt - Networking - CNI ipvlan 367 | 368 | .image ipvlan.png _ 300 369 | 370 | .link https://github.com/containernetworking/cni/blob/master/Documentation/ipvlan.md 371 | 372 | - Functions like a switch 373 | - Pods share the same MAC address 374 | - Pods get different IPs 375 | - Pods share the same physical device 376 | 377 | * rkt - Networking - SDN (software-defined networking) 378 | 379 | .image pod-net-canal.png 300 _ 380 | 381 | - Communicate with pods across different _hosts_ 382 | - Each pod across all hosts gets its own IP 383 | - Virtual overlay network 384 | 385 | * rkt - Networking 386 | 387 | Example: bridge two pods so that they can see each other 388 | 389 | $ cat /etc/rkt/net.d/bridge.conf 390 | { 391 | "name": "pod-bridge", 392 | "type": "bridge", 393 | "bridge": "rkt-bridge-nat", 394 | "ipMasq": true, 395 | "isGateway": true, 396 | "ipam": { 397 | "type": "host-local", 398 | "subnet": "10.2.0.0/24", 399 | "routes": [ 400 | { "dst": "0.0.0.0/0" } 401 | ] 402 | } 403 | } 404 | 405 | $ sudo rkt run --net=pod-bridge docker://nginx 406 | 407 | $ sudo rkt run --net=pod-bridge --interactive docker://progrium/busybox 408 | $ wget 10.2.0.2 409 | 410 | * Kubernetes - Overview 411 | 412 | .image flower.png 413 | 414 | .link https://github.com/kubernetes/kubernetes 415 | 416 | - Open Source project initiated by
Google 417 | - Cluster-level container orchestration 418 | 419 | Handles: 420 | 421 | - Scheduling/Upgrades 422 | - Failure recovery 423 | - Scaling 424 | 425 | * k8s - Components - API Server 426 | 427 | - Validates, configures, and persists all Kubernetes API objects 428 | - Provides REST-based operations via JSON 429 | - Uses etcd as its database 430 | 431 | * k8s - Components - Controller Manager 432 | 433 | .image control-loop.png _ 300 434 | 435 | Embeds the core control loops: 436 | Is _stateless_ 437 | Is decoupled from etcd via the API server 438 | 439 | Just a daemon talking to the API server 440 | 441 | * k8s - Components - Controller Manager 442 | 443 | Token Controller 444 | Endpoint Controller 445 | Replication, Replica Set Controller 446 | Node Controller 447 | Resource Quota Controller 448 | Namespace Controller 449 | Horizontal Scaling Controller 450 | Daemon Sets, Pet Sets Controller 451 | Job Controller, Scheduled Job Controller 452 | Deployment Controller 453 | Disruption Controller 454 | Persistent Volumes Controller 455 | Attach/Detach Controller 456 | Certificate Controller 457 | Service Accounts Controller 458 | Garbage Collector 459 | 460 | * k8s - Components - Scheduler 461 | 462 | Schedules pods on nodes 463 | 464 | - Policy-rich 465 | - Topology-aware 466 | - Workload-specific 467 | - Considers resource requirements, QoS, HW/SW/policy constraints, ... 468 | 469 | Again ... just a daemon talking to the API server 470 | 471 | * k8s - Components - Kubelet 472 | 473 | - Primary node agent 474 | - Starts/Stops/Supervises pods on its node 475 | 476 | Sigh ... just a daemon talking to the API server and to *rkt* !!! 477 | 478 | Instructs rkt to: 479 | 480 | - Fetch pods 481 | - Start and stop pods 482 | - Exec into pods 483 | - Tail pod logs 484 | 485 | PS: We are working very hard to make *rkt* a first-class runtime in Kubernetes. 486 | 487 | * k8s - Components - Kube Proxy 488 | 489 | Primary network proxy on each node. 490 | 491 | - Configures `iptables` to reflect services 492 | - Forwards TCP/UDP streams across a set of backends 493 | 494 | * k8s - The big picture 495 | 496 | .image k8s-overview.png 350 _ 497 | 498 | 1. Watches created pods, assigns them to nodes 499 | 2. Runs controllers (node ctl, replication ctl, endpoint ctl, service account ctl, ...) 500 | 3. Watches for pods assigned to its node, runs *rkt*! 501 | 4. Manipulates network rules (iptables) for services on a node, does connection forwarding. 502 | 503 | * k8s - Let's do it!
504 | 505 | $ systemctl start etcd apiserver controller-manager scheduler kubelet proxy 506 | 507 | Verify startup 508 | 509 | $ systemctl status etcd apiserver controller-manager scheduler kubelet proxy 510 | $ kubectl get node 511 | 512 | * rktnetes - nginx Deployment 513 | 514 | apiVersion: extensions/v1beta1 515 | kind: Deployment 516 | metadata: 517 | name: nginx-deployment 518 | spec: 519 | replicas: 1 520 | template: 521 | metadata: 522 | labels: 523 | app: nginx 524 | spec: 525 | containers: 526 | - name: nginx 527 | image: nginx:1.7.9 528 | ports: 529 | - containerPort: 80 530 | 531 | 532 | 533 | kubectl create -f nginx.yaml 534 | 535 | * rktnetes - busybox pod 536 | 537 | *Workshop*time* 538 | 539 | apiVersion: v1 540 | kind: Pod 541 | metadata: 542 | name: busybox 543 | spec: 544 | containers: 545 | - name: busybox 546 | image: progrium/busybox 547 | args: 548 | - sleep 549 | - "1000000" 550 | 551 | 552 | 553 | kubectl create -f busybox.yaml 554 | 555 | Exec into the pod: 556 | 557 | kubectl exec -ti busybox /bin/sh 558 | -------------------------------------------------------------------------------- /you.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/coreos/rktnetes-workshop/1ca7beb802c23d4e7eb5552e09c3c8abc965f43c/you.png --------------------------------------------------------------------------------