├── .gitignore ├── README.md ├── README.pt-br.md ├── Vagrantfile ├── container └── apache-auth │ ├── .dockerignore │ ├── Containerfile │ └── entrypoint.sh ├── files ├── auth.ini ├── check.sh ├── couchdb-check.sh ├── couchdb-import.sh ├── httpd.conf ├── id_ed25519 ├── id_ed25519.pub └── tasks.sh ├── manifests ├── 2-pod.yml ├── 3-deployment.yml ├── 4-deployment.yml ├── 5-daemonset.yml ├── 6-pod.yml ├── 7-pod.yml ├── 8-statefulset.yml ├── README.md └── solve.sh └── provision ├── challenge.sh ├── control.sh ├── k8s.sh ├── provision.sh ├── storage.sh ├── worker-steps.sh └── worker.sh /.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant 2 | *.swp 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # What is this? 2 | 3 | This is a repository that can provision a [Kubernetes](https://kubernetes.io/) cluster using [Vagrant](https://www.vagrantup.com/). 4 | 5 | You can enable a **challenge** that will mess up some components so you can test your infrastructure knowledge, try to complete the **tasks** to test your operations knowledge, or both. 6 | 7 | To use it, you need to [install Vagrant](https://developer.hashicorp.com/vagrant/docs/installation) and also a [hypervisor](https://pt.wikipedia.org/wiki/Hipervisor) like [VirtualBox](https://www.virtualbox.org/) or [Libvirt](https://libvirt.org/); unfortunately, [HyperV](https://pt.wikipedia.org/wiki/Hyper-V) support is limited, since Vagrant cannot create networks inside it. 8 | 9 | Four machines will be created; ensure that you have enough free memory: 10 | 11 | | Machine | IP | CPU | Memory | 12 | |---------|---------------|-----|--------| 13 | | control | 192.168.56.10 | 2 | 2048 | 14 | | worker1 | 192.168.56.20 | 1 | 1024 | 15 | | worker2 | 192.168.56.30 | 1 | 1024 | 16 | | storage | 192.168.56.40 | 1 | 512 | 17 | 18 | You can change the default memory/CPU of each virtual machine by changing the hash named `vms` inside the `Vagrantfile`: 19 | 20 | ```ruby 21 | vms = { 22 | 'control' => {'memory' => '2048', 'cpus' => 2, 'ip' => '10', 'provision' => 'control.sh'}, 23 | 'worker1' => {'memory' => '1024', 'cpus' => 1, 'ip' => '20', 'provision' => 'worker.sh'}, 24 | 'worker2' => {'memory' => '1024', 'cpus' => 1, 'ip' => '30', 'provision' => 'worker.sh'}, 25 | 'storage' => {'memory' => '512', 'cpus' => 1, 'ip' => '40', 'provision' => 'storage.sh'} 26 | } 27 | ``` 28 | 29 | If you are starting with Kubernetes, I recommend taking a look at [minikube](https://minikube.sigs.k8s.io/docs/), because this repository is focused on people who want to understand its infrastructure. 30 | 31 | ## Provisioning 32 | 33 | Install Vagrant - and maybe some [plugin](https://vagrant-lists.github.io/) - and a hypervisor, then download or clone the repository and execute `vagrant up`: 34 | 35 | ```bash 36 | git clone git@github.com:hector-vido/kubernetes.git --config core.autocrlf=false 37 | cd kubernetes 38 | vagrant up 39 | ``` 40 | 41 | > **Important:** the option `--config core.autocrlf=false` configures Git on Windows not to add `\r` at line ends.
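If you have already cloned the repository without that flag, a minimal sketch of applying the same setting afterwards is shown below; note that `git reset --hard` discards uncommitted local changes, so only run it on a fresh clone:

```bash
git config core.autocrlf false    # stop line-ending conversion in this clone only
git rm --cached -r . > /dev/null  # remove every file from the index
git reset --hard                  # check the files out again with their original LF endings
```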
42 | 43 | After the provisioning, every command should be executed from **control** as the `root` user: 44 | 45 | ```bash 46 | vagrant ssh control 47 | sudo -i 48 | kubectl get nodes 49 | 50 | # Output: 51 | # 52 | # NAME STATUS ROLES AGE VERSION 53 | # control Ready control-plane 82m v1.31.0 54 | # worker1 Ready <none> 82m v1.31.0 55 | # worker2 Ready <none> 82m v1.31.0 56 | ``` 57 | 58 | # Challenge 59 | 60 | The **challenge** can be activated by executing the following command: 61 | 62 | ```bash 63 | k8s-challenge 64 | ``` 65 | 66 | This will mess up some components and create an outage in the cluster; it is up to you to fix it. 67 | 68 | # Tasks 69 | 70 | The **tasks** are a list of things you should do inside Kubernetes. 71 | 72 | You can see the list here or execute `k8s-tasks`; you can check whether you successfully completed a task by executing `k8s-check`. 73 | 74 | ``` 75 | 1 - Fix the communication problem between the machines: 76 | 1.1 - control....192.168.56.10 77 | 1.2 - worker1....192.168.56.20 78 | 1.3 - worker2....192.168.56.30 79 | Note: Do not use kubeadm, do not reset the cluster. 80 | SSH with "root" user is allowed between the mentioned machines. 81 | The namespace should always be "default" unless specified. 82 | 83 | 2 - Provision a pod called "apache" with the image "httpd:alpine". 84 | 85 | 3 - Create a Deployment called "cgi" with the image "hectorvido/sh-cgi" and a service: 86 | 3.1 - The Deployment should have 4 replicas; 87 | 3.2 - Create a service called "cgi" for the "cgi" Deployment; 88 | 3.3 - The service will respond internally on port 9090. 89 | 90 | 4 - Create a Deployment called "nginx" based on "nginx:alpine": 91 | 4.1 - Update the Deployment to the "nginx:perl" image; 92 | 4.2 - Roll back to the previous version. 93 | 94 | 5 - Create a "memcached:alpine" pod for each worker in the cluster: 95 | 5.1 - If a new node is added to the cluster, a replica 96 | of this pod needs to be automatically provisioned inside the new node; 97 | 98 | 6 - Create a pod with the image "hectorvido/apache-auth" called "auth": 99 | 6.1 - Create a Secret called "httpd-auth" based on the file "files/auth.ini"; 100 | 6.2 - Create two environment variables in the pod: 101 | HTPASSWD_USER and HTPASSWD_PASS with the respective values of "httpd-auth"; 102 | 6.4 - Create a ConfigMap called "httpd-conf" with the contents of "files/httpd.conf"; 103 | 6.5 - Mount it inside the pod at "/etc/apache2/httpd.conf" using "subpath"; 104 | 6.6 - The page should only be displayed by executing the following command: 105 | curl -u developer:secret 106 | Otherwise an unauthorized message should appear. 107 | Note: No extra configuration is required, Secret and ConfigMap take care of 108 | the entire configuration process. 109 | 110 | 7 - Create a pod called "tools": 111 | 7.1 - The pod should use the "busybox" image; 112 | 7.2 - The pod must be static; 113 | 7.3 - The pod should only be present in "worker1".
114 | 115 | 8 - Create a StatefulSet called "couchdb" with the image "couchdb" 116 | inside the "database" namespace: 117 | 8.1 - Create the "database" namespace; 118 | 8.2 - The "headless service" should be called "couchdb" and listen on port 5984; 119 | 8.3 - Create the "/srv/couchdb" directory on the "worker2" machine; 120 | 8.4 - Create a persistent volume that uses the above directory; 121 | 8.5 - The pod can only go to the "worker2" machine; 122 | 8.6 - The connection user must be "developer" and the password "secret"; 123 | 8.7 - Persist the couchdb data on the volume created above; 124 | Note: The directory used by couchdb to persist data is "/opt/couchdb/data". 125 | ``` 126 | 127 | ## Solving 128 | 129 | If you want to see everything working and also test the environment to ensure that nothing is wrong, you can execute `k8s-solve`; this command will create everything necessary to solve the tasks. It runs as if nothing had been attempted, so if you have already done some of the tasks you will probably see some errors. 130 | 131 | # Shared Files 132 | 133 | This repository uses a lot of files shared between the host and the guest machines; make sure a folder named `/vagrant` with the content of this repo is present on all machines. 134 | 135 | Vagrant does this in different ways: it can simply copy everything, mount an NFS share or use more advanced features of other hypervisors. 136 | 137 | ## VirtualBox 138 | 139 | When Vagrant provisions a machine with VirtualBox, the content of `/vagrant` is populated with `rsync`. 140 | 141 | ## Libvirt 142 | 143 | When Vagrant provisions a machine with Libvirt, the content of `/vagrant` can be populated with `nfs` or `virtiofs`. 144 | 145 | If you use a RHEL-based Linux and have issues with SELinux, or don't want to type your sudo password each time you execute `vagrant up` to mount the NFS share, you can use `virtiofs` to share directories: 146 | 147 | **~/.vagrant.d/Vagrantfile** 148 | 149 | ```ruby 150 | Vagrant.configure("2") do |config| 151 | config.vm.provider :libvirt do |libvirt| 152 | libvirt.memorybacking :access, :mode => "shared" 153 | libvirt.qemu_use_session = false 154 | libvirt.system_uri = 'qemu:///system' 155 | end 156 | config.vm.synced_folder "./", "/vagrant", type: "virtiofs" 157 | end 158 | ``` 159 | 160 | > **Important:** The option `qemu_use_session` is `false` because a common user session cannot create networks. 161 | -------------------------------------------------------------------------------- /README.pt-br.md: -------------------------------------------------------------------------------- 1 | # O que é isso? 2 | 3 | Este é um repositório que provisiona um cluster [Kubernetes](https://kubernetes.io/) usando o [Vagrant](https://www.vagrantup.com/). 4 | 5 | É possível ativar um **desafio** que mexerá em alguns componentes para que você possa testar seu conhecimento sobre infraestrutura, tentar concluir as **tarefas** para testar seu conhecimento sobre operações ou ambos. 6 | 7 | Para usá-lo, você precisa [instalar o Vagrant](https://developer.hashicorp.com/vagrant/docs/installation) e também um [hypervisor](https://pt.wikipedia.org/wiki/Hipervisor) como [VirtualBox](https://www.virtualbox.org/) ou [Libvirt](https://libvirt.org/), infelizmente o suporte ao [HyperV](https://pt.wikipedia.org/wiki/Hyper-V) é limitado, pois o Vagrant não pode criar redes dentro dele.
8 | 9 | Serão criadas quatro máquinas; certifique-se de que você tenha memória livre suficiente: 10 | 11 | | Máquina | IP | CPU | Memória | 12 | |---------|---------------|-----|---------| 13 | | control | 192.168.56.10 | 2 | 2048 | 14 | | worker1 | 192.168.56.20 | 1 | 1024 | 15 | | worker2 | 192.168.56.30 | 1 | 1024 | 16 | | storage | 192.168.56.40 | 1 | 512 | 17 | 18 | Você pode alterar a memória/CPU padrão de cada máquina virtual, alterando o hash denominado `vms` dentro do `Vagrantfile`: 19 | 20 | ```ruby 21 | vms = { 22 | 'control' => {'memory' => '2048', 'cpus' => 2, 'ip' => '10', 'provision' => 'control.sh'}, 23 | 'worker1' => {'memory' => '1024', 'cpus' => 1, 'ip' => '20', 'provision' => 'worker.sh'}, 24 | 'worker2' => {'memory' => '1024', 'cpus' => 1, 'ip' => '30', 'provision' => 'worker.sh'}, 25 | 'storage' => {'memory' => '512', 'cpus' => 1, 'ip' => '40', 'provision' => 'storage.sh'} 26 | } 27 | ``` 28 | 29 | Se estiver começando no Kubernetes, recomendo dar uma olhada no [minikube](https://minikube.sigs.k8s.io/docs/) porque esse repositório é voltado para pessoas que querem entender sua infraestrutura. 30 | 31 | ## Provisionamento 32 | 33 | Instale o Vagrant - e talvez algum [plugin](https://vagrant-lists.github.io/) - e um hypervisor, clone o repositório e execute `vagrant up`: 34 | 35 | ```bash 36 | git clone git@github.com:hector-vido/kubernetes.git --config core.autocrlf=false 37 | cd kubernetes 38 | vagrant up 39 | ``` 40 | 41 | > **Importante:** a opção `--config core.autocrlf=true` configura o Windows para não adicionar `\r` aos finais de linha. 42 | 43 | Após o provisionamento, todos os comandos devem ser executados a partir do **control** como usuário `root`: 44 | 45 | ```bash 46 | vagrant ssh control 47 | sudo -i 48 | kubectl get nodes 49 | 50 | # Saída: 51 | # 52 | # NAME STATUS ROLES AGE VERSION 53 | # control Ready control-plane 82m v1.31.0 54 | # worker1 Ready 82m v1.31.0 55 | # worker2 Ready 82m v1.31.0 56 | ``` 57 | 58 | # Desafio 59 | 60 | O **desafio** pode ser ativado com a execução do seguinte comando: 61 | 62 | ```bash 63 | k8s-challenge 64 | ``` 65 | 66 | Isso desconfigurará alguns componentes e criará uma interrupção no cluster, cabe a você corrigir. 67 | 68 | # Tarefas 69 | 70 | As **tarefas** são uma lista de coisas que você deve fazer no Kubernetes. 71 | 72 | Você pode ver a lista aqui ou pode executar `k8s-tasks`, também pode verificar se concluiu com êxito uma tarefa executando o `k8s-check`. 73 | 74 | ``` 75 | 1 - Corrigir o problema de comunicação entre as máquinas: 76 | 1.1 - control....192.168.56.10 77 | 1.2 - worker1....192.168.56.20 78 | 1.3 - worker2....192.168.56.30 79 | Observação: Não use o kubeadm, não reinicie o cluster. 80 | O SSH com o usuário "root" é permitido entre as máquinas mencionadas. 81 | O namespace deve ser sempre "default", a menos que seja especificado. 82 | 83 | 2 - Provisione um pod chamado "apache" com a imagem "httpd:alpine". 84 | 85 | 3 - Crie um Deployment chamado "cgi" com a imagem "hectorvido/sh-cgi" e um Service: 86 | 3.1 - O Deployment deve ter 4 réplicas; 87 | 3.2 - Crie um Service chamado "cgi" para a implantação "cgi"; 88 | 3.3 - O Service responderá internamente na porta 9090. 89 | 90 | 4 - Crie um Deployment chamado "nginx" com base em "nginx:alpine": 91 | 4.1 - Atualize o Deployment para a imagem "nginx:perl"; 92 | 4.2 - Reverta para a versão anterior. 
93 | 94 | 5 - Crie um pod "memcached:alpine" para cada worker no cluster: 95 | 5.1 - Se um novo nó for adicionado ao cluster, uma réplica 96 | desse pod precisa ser provisionada automaticamente dentro do novo nó; 97 | 98 | 6 - Crie um pod com a imagem "hectorvido/apache-auth" chamado "auth": 99 | 6.1 - Crie um Secret chamado "httpd-auth" com base no arquivo "files/auth.ini"; 100 | 6.2 - Crie duas variáveis de ambiente no pod: 101 | HTPASSWD_USER e HTPASSWD_PASS com os respectivos valores de "httpd-auth"; 102 | 6.4 - Crie um ConfigMap chamado "httpd-conf" com o conteúdo de "files/httpd.conf"; 103 | 6.5 - Monte-o dentro do pod em "/etc/apache2/httpd.conf" usando "subpath"; 104 | 6.6 - A página só deve ser exibida com a execução do seguinte comando: 105 | curl -u developer:secret 106 | Caso contrário, uma mensagem sobre autorização deverá ser exibida. 107 | Observação: não é necessária nenhuma configuração extra, o Secret e o ConfigMap cuidam de 108 | todo o processo de configuração. 109 | 110 | 7 - Crie um pod chamado "tools": 111 | 7.1 - O pod deve usar a imagem "busybox"; 112 | 7.2 - O pod deve ser estático; 113 | 7.3 - O pod deve estar presente apenas em "worker1". 114 | 115 | 8 - Crie um StatefulSet chamado "couchdb" com a imagem "couchdb" 116 | dentro do namespace "database": 117 | 8.1 - Crie o namespace "database"; 118 | 8.2 - O "headless service" deve se chamar "couchdb" e escutar na porta 5984; 119 | 8.3 - Crie o diretório "/srv/couchdb" na máquina "worker2"; 120 | 8.4 - Crie um volume persistente que use o diretório acima; 121 | 8.5 - O pod só pode ir para a máquina "worker2"; 122 | 8.6 - O usuário de conexão deve ser "developer" e a senha "secret"; 123 | 8.7 - Persista os dados do couchdb no volume criado acima; 124 | Observação: o diretório usado pelo couchdb para persistir os dados é "/opt/couchdb/data". 125 | ``` 126 | 127 | ## Resolvendo 128 | 129 | Se quiser ver tudo funcionando e também testar o ambiente para garantir que nada esteja errado, você pode executar o `k8s-solve`, esse comando criará tudo o que for necessário para resolver as tarefas. Esse comando será executado como se nenhuma tentativa tivesse sido feita, se você já tiver executado algumas das tarefas, provavelmente verá alguns erros. 130 | 131 | # Arquivos compartilhados 132 | 133 | Esse repositório usa muitos arquivos compartilhados entre o host e as máquinas convidadas. Certifique-se de que uma pasta chamada `/vagrant` com o conteúdo desse repositório esteja presente em todas as máquinas. 134 | 135 | O Vagrant faz isso de diferentes maneiras: ele pode simplesmente copiar tudo, montar um NFS ou usar recursos mais avançados de outros hypervisors. 136 | 137 | ## VirtualBox 138 | 139 | Quando o Vagrant provisiona uma máquina com o VirtualBox, o conteúdo do `/vagrant` será populado com um `rsync`. 140 | 141 | ## Libvirt 142 | 143 | Quando o Vagrant provisiona uma máquina com o Libvirt, o conteúdo de `/vagrant` pode ser populado com `nfs` ou `virtiofs`. 
144 | 145 | Se você usa um Linux baseado em RHEL e tem alguns problemas com o SELinux ou não quer digitar sua senha do sudo toda vez que executar o `vagrant up` para montar o NFS, você pode usar o `virtiofs` para compartilhar diretórios: 146 | 147 | **~/.vagrant.d/Vagrantfile** 148 | 149 | ```ruby 150 | Vagrant.configure("2") do |config| 151 | config.vm.provider :libvirt do |libvirt| 152 | libvirt.memorybacking :access, :mode => "shared" 153 | libvirt.qemu_use_session = false 154 | libvirt.system_uri = 'qemu:///system' 155 | end 156 | config.vm.synced_folder "./", "/vagrant", type: "virtiofs" 157 | end 158 | ``` 159 | 160 | > **Importante:** A opção `qemu_use_session` é `false` porque uma sessão de usuário comum não pode criar redes. 161 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | vms = { 5 | 'control' => {'memory' => '2048', 'cpus' => 2, 'ip' => '10', 'provision' => 'control.sh'}, 6 | 'worker1' => {'memory' => '1024', 'cpus' => 1, 'ip' => '20', 'provision' => 'worker.sh'}, 7 | 'worker2' => {'memory' => '1024', 'cpus' => 1, 'ip' => '30', 'provision' => 'worker.sh'}, 8 | 'storage' => {'memory' => '512', 'cpus' => 1, 'ip' => '40', 'provision' => 'storage.sh'} 9 | } 10 | 11 | Vagrant.configure('2') do |config| 12 | 13 | config.vm.box = 'debian/bookworm64' 14 | config.vm.box_check_update = false 15 | 16 | vms.each do |name, conf| 17 | config.vm.define "#{name}" do |k| 18 | k.vm.hostname = "#{name}.k8s.local" 19 | k.vm.network 'private_network', ip: "192.168.56.#{conf['ip']}" 20 | k.vm.provider 'virtualbox' do |vb| 21 | vb.memory = conf['memory'] 22 | vb.cpus = conf['cpus'] 23 | end 24 | k.vm.provider 'libvirt' do |lv| 25 | lv.memory = conf['memory'] 26 | lv.cpus = conf['cpus'] 27 | lv.cputopology :sockets => 1, :cores => conf['cpus'], :threads => '1' 28 | end 29 | k.vm.provision 'shell', path: "provision/#{conf['provision']}", args: "#{conf['ip']}" 30 | end 31 | end 32 | 33 | config.vm.provision 'shell', path: 'provision/provision.sh' 34 | end 35 | -------------------------------------------------------------------------------- /container/apache-auth/.dockerignore: -------------------------------------------------------------------------------- 1 | Dockerfile 2 | .dockerignore 3 | -------------------------------------------------------------------------------- /container/apache-auth/Containerfile: -------------------------------------------------------------------------------- 1 | FROM alpine 2 | 3 | ENV HTPASSWD_USER=apache 4 | ENV HTPASSWD_PASS=123 5 | 6 | RUN apk add --no-cache apache2 apache2-utils && \ 7 | echo '

<h1>:)</h1>
' > /var/www/localhost/htdocs/index.html 8 | 9 | COPY entrypoint.sh / 10 | 11 | CMD ["sh", "/entrypoint.sh"] 12 | -------------------------------------------------------------------------------- /container/apache-auth/entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | htpasswd -bc /etc/apache2/htpasswd $HTPASSWD_USER $HTPASSWD_PASS > /dev/null 2>&1 4 | 5 | echo 'No useful logs this time...' 6 | 7 | httpd -DFOREGROUND 8 | -------------------------------------------------------------------------------- /files/auth.ini: -------------------------------------------------------------------------------- 1 | HTPASSWD_USER=developer 2 | HTPASSWD_PASS=secret 3 | -------------------------------------------------------------------------------- /files/check.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | function echo_fail { 4 | echo -e "\033[0;31m$1\033[0m" 5 | exit 1 6 | } 7 | 8 | function echo_warning { 9 | echo -e "\033[0;33m$1\033[0m" 10 | } 11 | 12 | function echo_success { 13 | echo -e "\033[0;32m$1\033[0m" 14 | } 15 | 16 | echo 'Task 1 - Communication between nodes:' 17 | test ! -x /usr/bin/kubectl && echo_fail 'Cannot execute kubectl command.' 18 | test "3" -ne "$(kubectl get nodes --no-headers | grep -wi ready | wc -l)" && echo_fail 'Not all three nodes are responding.' 19 | echo_success 'All three nodes are responding!\n' 20 | 21 | echo 'Task 2 - Pod called apache:' 22 | kubectl -n default describe pod/apache > /tmp/task 2> /dev/null 23 | test -z "$(cat /tmp/task)" && echo_fail 'Could not find pod named "apache".' 24 | echo_success 'Pod found...' 25 | grep -E 'State: *Running' /tmp/task > /dev/null 26 | test "0" -ne "$?" && echo_fail "The pod doesn't seem to be running." 27 | grep -E 'Image:.* (docker.io/library/)?httpd:alpine' /tmp/task > /dev/null 28 | test "0" -ne "$?" && echo_fail 'The pod image is not httpd:alpine.' 29 | echo_success 'The pod is configured correctly!\n' 30 | 31 | echo 'Task 3 - Deployment and Service called cgi:' 32 | kubectl -n default describe deploy/cgi > /tmp/task 2> /dev/null 33 | test -z "$(cat /tmp/task)" && echo_fail 'Could not find a deployment called "cgi".' 34 | echo_success 'The deployment was found...' 35 | grep -E 'Image: *(docker.io/)?hectorvido/sh-cgi$' /tmp/task > /dev/null 36 | test "0" -ne "$?" && echo_fail 'The image used is not hectorvido/sh-cgi.' 37 | echo_success 'The image is correct...' 38 | grep -Ew 'Replicas:.*4 available' /tmp/task > /dev/null 39 | test "0" -ne "$?" && echo_fail "It looks like there aren't 4 replicas working." 40 | echo_success 'The 4 replicas were found...' 41 | kubectl -n default describe svc cgi > /tmp/task 2> /dev/null 42 | test -z "$(cat /tmp/task)" && echo_fail 'A service called "cgi" could not be found.' 43 | echo_success 'The service was found...' 44 | grep -E 'Port:.*9090/TCP' /tmp/task > /dev/null 45 | test "0" -ne "$?" && echo_fail 'Port 9090 does not seem to be open in the "cgi" service.' 46 | echo_success 'The service is listening on port 9090...' 47 | curl --connect-timeout 2 -s $(grep 'IP:' /tmp/task | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}'):9090 > /dev/null 48 | test "0" -ne "$?" && echo_fail 'The connection to the application on port 9090 seems to be having problems.' 
49 | echo_success 'The pod and service are configured correctly!\n' 50 | 51 | echo 'Task 4 - Deployment called nginx:' 52 | kubectl -n default describe deploy nginx > /tmp/task 2> /dev/null 53 | test -z "$(cat /tmp/task)" && echo_fail 'Could not find a deployment named "nginx".' 54 | echo_success 'Deployment nginx found...' 55 | kubectl -n default get rs | grep nginx- | cut -d' ' -f1 | xargs kubectl describe rs | grep -E 'Image: *(docker.io/library/)?nginx:alpine$' > /dev/null 56 | test "0" -ne "$?" && echo_fail 'Version with nginx:alpine not found.' 57 | echo_success 'Version with nginx:alpine found...' 58 | kubectl -n default get rs | grep nginx- | cut -d' ' -f1 | xargs kubectl describe rs | grep -E 'Image: *(docker.io/library/)?nginx:perl$' > /dev/null 59 | test "0" -ne "$?" && echo_fail 'Version with nginx:perl not found.' 60 | echo_success 'Version with nginx:perl found...' 61 | grep -E 'Image: *(docker.io/library/)?nginx:alpine$' /tmp/task > /dev/null 62 | test "0" -ne "$?" && echo_fail 'The image currently used is not nginx:alpine.' 63 | grep -E 'kubectl rollout undo deploy/?\s?nginx' /root/.bash_history > /dev/null 2>&1 64 | test "0" -ne "$?" && echo_fail 'The "rollout" command was not used.' 65 | echo_success 'The deployment was updated and recovered correctly!\n' 66 | 67 | echo 'Task 5 - Pods on all workers:' 68 | kubectl -n default describe ds > /tmp/task 2> /dev/null 69 | test -z "$(cat /tmp/task)" && echo_fail 'The ideal object for the pods was not found.' 70 | echo_success 'DaemonSet found...' 71 | grep -E 'Image: *(docker.io/library/)?memcached:alpine$' /tmp/task > /dev/null 72 | test "0" -ne "$?" && echo_fail 'The pods based on "memcached:alpine" were not found.' 73 | echo_success 'The daemonset was created correctly!\n' 74 | 75 | echo 'Task 6 - Apache with authentication:' 76 | kubectl -n default describe secret httpd-auth > /tmp/task 2> /dev/null 77 | test -z "$(cat /tmp/task)" && echo_fail 'Secret "httpd-auth" was not found.' 78 | echo_success 'Secret found...' 79 | test "$(grep -E '^HTPASSWD_USER|^HTPASSWD_PASS' /tmp/task | wc -l)" -ne 2 && echo_fail 'Secret does not have USER and PASS keys.' 80 | echo_success 'USER and PASS keys found in Secret...' 81 | kubectl -n default describe cm httpd-conf > /tmp/task 2> /dev/null 82 | test -z "$(cat /tmp/task)" && echo_fail 'ConfigMap "httpd-conf" was not found.' 83 | echo_success 'ConfigMap found...' 84 | kubectl describe pod auth -n default > /tmp/task 2> /dev/null 85 | test -z "$(cat /tmp/task)" && echo_fail 'Pod "auth" was not found.' 86 | echo_success 'Pod found...' 87 | grep -E 'Image: *(docker.io/)?hectorvido/apache-auth$' /tmp/task > /dev/null 88 | test "0" -ne "$?" && echo_fail 'The image currently used is not hectorvido/apache-auth.' 89 | ENV_FROM="$(grep -E 'httpd-auth\s*Secret' /tmp/task)" 90 | ENV_REF="$(grep -E 'HTPASSWD_USER:|HTPASSWD_PASS:' /tmp/task | wc -l)" 91 | if [ -z "$ENV_FROM" ] && [ "$ENV_REF" -ne 2 ]; then echo_fail 'The variables inside the "auth" pod were not configured.'; fi 92 | echo_success 'Variables defined inside the pod...' 93 | grep -E '/etc/apache2/httpd.conf.*path="httpd.conf"' /tmp/task > /dev/null 94 | test "0" -ne "$?" && echo_fail 'ConfigMap "httpd-conf" was not mounted inside pod "auth" with subPath.' 95 | POD_IP=$(grep -Eo '^IP: *([0-9]{1,3}\.){3}[0-9]{1,3}$' /tmp/task | sed 's/IP: *//') 96 | HTTP_STATUS="$(curl -u developer:secret -s -o /dev/null -w '%{http_code}' $POD_IP)" 97 | test "$HTTP_STATUS" -ne "200" && echo_fail "Could not access pod at address $POD_IP."
98 | echo_success 'The "auth" pod was created correctly!\n' 99 | 100 | echo 'Task 7 - Static pod:' 101 | kubectl -n default describe pod tools-worker1 > /tmp/task 2> /dev/null 102 | test -z "$(cat /tmp/task)" && echo_fail 'The pod "tools-worker1" was not found.' 103 | echo_success 'The static pod was found...' 104 | ssh -o stricthostkeychecking=no 192.168.56.20 "ls /etc/kubernetes/manifests/*.y*ml > /dev/null 2>&1" 105 | test "0" -ne "$?" && echo_fail 'The manifest was not found on worker1.' 106 | echo_success 'Manifest found...' 107 | ssh 192.168.56.20 "grep -roEz 'metadata:\s*name: *tools.*image: (docker.io/library/)?busybox' /etc/kubernetes/manifests > /dev/null 2>&1" 108 | test "0" -ne "$?" && echo_fail 'The manifest definitions in "worker1" are wrong.' 109 | echo_success 'The manifest definitions are correct...' 110 | grep -E 'State: *Running' /tmp/task > /dev/null 111 | test "0" -ne "$?" && echo_fail 'The pod does not appear to be running.' 112 | echo_success 'The static pod was created correctly!\n' 113 | 114 | echo 'Task 8 - Persistence with hostPath:' 115 | kubectl get ns | grep database > /dev/null 116 | test "0" -ne "$?" && echo_fail 'The namespace "database" was not found.' 117 | echo_success 'The namespace was found...' 118 | kubectl describe svc/couchdb -n database > /tmp/task 2>&1 119 | test "0" -ne "$?" && echo_fail 'The "couchdb" service was not found.' 120 | echo_success 'The service was found...' 121 | grep -E 'Port:.*5984' /tmp/task > /dev/null 122 | test "0" -ne "$?" && echo_fail 'The "couchdb" service is not listening on port 5984.' 123 | echo_success 'The service is listening on port 5984...' 124 | ssh -o stricthostkeychecking=no 192.168.56.30 stat /srv/couchdb > /dev/null 2>&1 125 | test "0" -ne "$?" && echo_fail 'The directory "/srv/couchdb" was not found in "worker2".' 126 | echo_success 'Directory "/srv/couchdb" found...' 127 | kubectl describe pv | grep -zEo 'HostPath.*/srv/couchdb' > /dev/null 128 | test "0" -ne "$?" && echo_fail 'No volume of type "HostPath" was found using "/srv/couchdb".' 129 | echo_success 'PersistentVolume found...' 130 | kubectl -n database describe statefulset/couchdb > /tmp/task 2> /dev/null 131 | test -z "$(cat /tmp/task)" && echo_fail 'The statefulset "couchdb" was not found.' 132 | kubectl describe pod/couchdb-0 -n database > /tmp/task 2> /dev/null 133 | grep -E 'Ready\s*True' /tmp/task > /dev/null 134 | test "0" -ne "$?" && echo_fail 'The pod is not running.' 135 | echo_success 'The pod is running...' 136 | kubectl get pods -o wide -n database | grep couchdb | grep worker2 > /dev/null 2>&1 137 | test "0" -ne "$?" && echo_fail 'The pod is not in "worker2".' 138 | echo_success 'The pod is in "worker2"...' 139 | kubectl -n database describe statefulset/couchdb > /tmp/task 140 | grep -zoE 'Mounts:.*/opt/couchdb/data\s.*Volume Claims:.*Name:' /tmp/task > /dev/null 141 | test "0" -ne "$?" && echo_fail 'The pod is not using a PVC mounted on "/opt/couchdb/data".' 142 | echo_success 'The volume is mounted correctly...' 143 | kubectl exec -ti couchdb-0 -n database -- curl -u developer:secret couchdb:5984 > /tmp/task 2> /dev/null 144 | grep -i welcome /tmp/task > /dev/null 145 | test "0" -ne "$?" && echo_fail 'The user or password is wrong.' 146 | echo_success 'Correct user and password...' 147 | echo_warning 'Cleaning and importing Roberto Carlos songs into couchdb...' 148 | bash /vagrant/files/couchdb-import.sh 149 | echo_warning 'Destroying pod to test persistence...' 
150 | kubectl delete pod couchdb-0 -n database 151 | bash /vagrant/files/couchdb-check.sh 152 | test "0" -ne "$?" && echo_fail 'Not all data was found, is the volume correct?' 153 | echo_success 'The statefulset worked, all Roberto Carlos songs persisted!\n' 154 | -------------------------------------------------------------------------------- /files/couchdb-check.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | function echo_warning { 4 | echo -e "\033[0;33m$1\033[0m" 5 | } 6 | 7 | while [ -z "$(kubectl get pod couchdb-0 -n database | grep Running)" ]; do 8 | echo_warning "Waiting for pod to initialize..." 9 | sleep 2 10 | done 11 | 12 | while [ -z "$(kubectl exec -ti couchdb-0 -n database -- curl --connect-timeout 1 -u developer:secret couchdb:5984 2> /dev/null | grep -i welcome)" ]; do 13 | echo_warning "Waiting for application..." 14 | sleep 2 15 | done 16 | 17 | for X in 'Amigo' 'Detalhes' 'Cavalgada' 'Vivendo por Viver' 'Ternura' 'Como Dois e Dois' 'Ilegal, Imoral ou Engorda' 'Quando' 'Eu Te Amo, Te Amo, Te Amo'; do 18 | kubectl exec -ti couchdb-0 -n database -- curl -u developer:secret -H 'Content-Type: application/json' -d "{\"selector\": {\"nome\": \"$X\"}}" couchdb:5984/robertocarlos/_find | grep "$X" > /dev/null 19 | if [ "$?" -eq 0 ]; then 20 | echo "Song \"$X\" found!" 21 | else 22 | exit 1 23 | fi 24 | done 25 | -------------------------------------------------------------------------------- /files/couchdb-import.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | kubectl exec -ti couchdb-0 -n database -- curl -X DELETE -u developer:secret couchdb:5984/robertocarlos > /dev/null 4 | 5 | kubectl exec -ti couchdb-0 -n database -- curl -X PUT -u developer:secret couchdb:5984/robertocarlos > /dev/null 6 | 7 | for X in 'Amigo' 'Detalhes' 'Cavalgada' 'Vivendo por Viver' 'Ternura' 'Como Dois e Dois' 'Ilegal, Imoral ou Engorda' 'Quando' 'Eu Te Amo, Te Amo, Te Amo'; do 8 | kubectl exec -ti couchdb-0 -n database -- curl -u developer:secret -H 'Content-Type: application/json' -d "{\"nome\": \"$X\", \"seed\": \"$RANDOM\"}" couchdb:5984/robertocarlos > /dev/null 9 | echo "Song \"$X\" inserted!"
10 | done 11 | -------------------------------------------------------------------------------- /files/httpd.conf: -------------------------------------------------------------------------------- 1 | ServerRoot /var/www 2 | 3 | LoadModule mpm_prefork_module modules/mod_mpm_prefork.so 4 | LoadModule authn_file_module modules/mod_authn_file.so 5 | LoadModule authn_core_module modules/mod_authn_core.so 6 | LoadModule authz_user_module modules/mod_authz_user.so 7 | LoadModule authz_core_module modules/mod_authz_core.so 8 | LoadModule access_compat_module modules/mod_access_compat.so 9 | LoadModule auth_basic_module modules/mod_auth_basic.so 10 | LoadModule mime_module modules/mod_mime.so 11 | LoadModule unixd_module modules/mod_unixd.so 12 | LoadModule autoindex_module modules/mod_autoindex.so 13 | LoadModule dir_module modules/mod_dir.so 14 | LoadModule negotiation_module modules/mod_negotiation.so 15 | 16 | Listen 80 17 | LogLevel warn 18 | ServerTokens OS 19 | ServerSignature On 20 | ErrorLog logs/error.log 21 | ServerName k8s-challenge 22 | DirectoryIndex index.html 23 | ServerAdmin you@example.com 24 | 25 | 26 | User apache 27 | Group apache 28 | 29 | 30 | 31 | AllowOverride none 32 | Require all denied 33 | 34 | 35 | DocumentRoot "/var/www/localhost/htdocs" 36 | 37 | Options Indexes FollowSymLinks 38 | AllowOverride None 39 | #Require all granted 40 | AuthType Basic 41 | AuthName "Restricted Content" 42 | AuthUserFile /etc/apache2/htpasswd 43 | Require valid-user 44 | 45 | 46 | IncludeOptional /etc/apache2/conf.d/*.conf 47 | -------------------------------------------------------------------------------- /files/id_ed25519: -------------------------------------------------------------------------------- 1 | -----BEGIN OPENSSH PRIVATE KEY----- 2 | b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW 3 | QyNTUxOQAAACBKf9GCpJ5WFhGq6arl/XYMW3S9bW5nSNqhLE7keRIvFwAAAJB6PpaXej6W 4 | lwAAAAtzc2gtZWQyNTUxOQAAACBKf9GCpJ5WFhGq6arl/XYMW3S9bW5nSNqhLE7keRIvFw 5 | AAAEAu3D6HoX5eY7YaZqNSMf6Rs2q0BoebO0tOa7DYXeIa9kp/0YKknlYWEarpquX9dgxb 6 | dL1tbmdI2qEsTuR5Ei8XAAAADWhlY3RvckBmZWRvcmE= 7 | -----END OPENSSH PRIVATE KEY----- 8 | -------------------------------------------------------------------------------- /files/id_ed25519.pub: -------------------------------------------------------------------------------- 1 | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEp/0YKknlYWEarpquX9dgxbdL1tbmdI2qEsTuR5Ei8X kubernetes@k8s 2 | -------------------------------------------------------------------------------- /files/tasks.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | cat < 35 | Otherwise an unauthorized message should appear. 36 | Note: No extra configuration is required, Secret and ConfigMap take care of 37 | the entire configuration process. 38 | 39 | 7 - Create a pod called "tools": 40 | 7.1 - The pod should use the "busybox" image; 41 | 7.2 - The pod must be static; 42 | 7.3 - The pod should only be present in "worker1". 
43 | 44 | 8 - Create a statefulSet called "couchdb" with the image "couchdb" 45 | inside the "database" namespace: 46 | 8.1 - Create the "database" namespace; 47 | 8.2 - The "headless service" should be called "couchdb" and listen on port 5984; 48 | 8.3 - Create the "/srv/couchdb" directory on the "worker2" machine; 49 | 8.4 - Create a persistent volume that uses the above directory; 50 | 8.5 - The pod can only go to the "worker2" machine; 51 | 8.6 - The connection user must be "developer" and the password "secret"; 52 | 8.7 - Persist the couchdb data on the volume created above; 53 | Note: The directory used by couchdb to persist data is "/opt/couchdb/data". 54 | EOF 55 | -------------------------------------------------------------------------------- /manifests/2-pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: apache 5 | spec: 6 | containers: 7 | - name: apache 8 | image: docker.io/library/httpd:alpine 9 | imagePullPolicy: IfNotPresent 10 | -------------------------------------------------------------------------------- /manifests/3-deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | labels: 5 | app: cgi 6 | name: cgi 7 | namespace: default 8 | spec: 9 | replicas: 4 10 | selector: 11 | matchLabels: 12 | app: cgi 13 | template: 14 | metadata: 15 | labels: 16 | app: cgi 17 | spec: 18 | containers: 19 | - name: sh-cgi 20 | image: docker.io/hectorvido/sh-cgi 21 | imagePullPolicy: IfNotPresent 22 | 23 | --- 24 | apiVersion: v1 25 | kind: Service 26 | metadata: 27 | labels: 28 | app: cgi 29 | name: cgi 30 | namespace: default 31 | spec: 32 | ports: 33 | - port: 9090 34 | protocol: TCP 35 | targetPort: 8080 36 | selector: 37 | app: cgi 38 | type: ClusterIP 39 | -------------------------------------------------------------------------------- /manifests/4-deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | labels: 5 | app: nginx 6 | name: nginx 7 | namespace: default 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: nginx 13 | template: 14 | metadata: 15 | labels: 16 | app: nginx 17 | spec: 18 | containers: 19 | - name: nginx 20 | image: docker.io/library/nginx:alpine 21 | imagePullPolicy: IfNotPresent 22 | -------------------------------------------------------------------------------- /manifests/5-daemonset.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: DaemonSet 3 | metadata: 4 | name: memcached 5 | spec: 6 | selector: 7 | matchLabels: 8 | name: memcached 9 | template: 10 | metadata: 11 | labels: 12 | name: memcached 13 | spec: 14 | containers: 15 | - name: memcached 16 | image: docker.io/library/memcached:alpine 17 | imagePullPolicy: IfNotPresent 18 | -------------------------------------------------------------------------------- /manifests/6-pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: auth 5 | spec: 6 | containers: 7 | - name: apache 8 | image: docker.io/hectorvido/apache-auth 9 | imagePullPolicy: IfNotPresent 10 | envFrom: 11 | - secretRef: 12 | name: httpd-auth 13 | volumeMounts: 14 | - name: conf 15 | mountPath: /etc/apache2/httpd.conf 16 | subPath: httpd.conf 17 | volumes: 18 | - name: 
conf 19 | configMap: 20 | name: httpd-conf 21 | -------------------------------------------------------------------------------- /manifests/7-pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: tools 5 | spec: 6 | containers: 7 | - name: busybox 8 | image: docker.io/library/busybox 9 | imagePullPolicy: IfNotPresent 10 | tty: true 11 | stdin: true 12 | -------------------------------------------------------------------------------- /manifests/8-statefulset.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: database 5 | --- 6 | apiVersion: v1 7 | kind: Service 8 | metadata: 9 | name: couchdb 10 | namespace: database 11 | labels: 12 | app: couchdb 13 | spec: 14 | ports: 15 | - port: 5984 16 | name: web 17 | clusterIP: None 18 | selector: 19 | app: couchdb 20 | --- 21 | apiVersion: apps/v1 22 | kind: StatefulSet 23 | metadata: 24 | name: couchdb 25 | namespace: database 26 | spec: 27 | selector: 28 | matchLabels: 29 | app: couchdb 30 | serviceName: "couchdb" 31 | replicas: 1 32 | template: 33 | metadata: 34 | labels: 35 | app: couchdb 36 | spec: 37 | nodeName: worker2 38 | containers: 39 | - name: couchdb 40 | image: docker.io/library/couchdb 41 | imagePullPolicy: IfNotPresent 42 | env: 43 | - name: COUCHDB_USER 44 | value: developer 45 | - name: COUCHDB_PASSWORD 46 | value: secret 47 | volumeMounts: 48 | - name: data 49 | mountPath: /opt/couchdb/data 50 | volumeClaimTemplates: 51 | - metadata: 52 | name: data 53 | spec: 54 | accessModes: [ "ReadWriteOnce" ] 55 | resources: 56 | requests: 57 | storage: 1Gi 58 | --- 59 | apiVersion: v1 60 | kind: PersistentVolume 61 | metadata: 62 | name: couchdb 63 | spec: 64 | capacity: 65 | storage: 1Gi 66 | volumeMode: Filesystem 67 | accessModes: 68 | - ReadWriteOnce 69 | hostPath: 70 | path: /srv/couchdb 71 | -------------------------------------------------------------------------------- /manifests/README.md: -------------------------------------------------------------------------------- 1 | # Manifests 2 | 3 | Don't read these manifests if you want to solve the problems by your own. 4 | -------------------------------------------------------------------------------- /manifests/solve.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | chmod +x /usr/bin/kubectl 4 | sed -i 's,/etc/kubernete/manifest,/etc/kubernetes/manifests,' /var/lib/kubelet/config.yaml 5 | systemctl restart kubelet 6 | 7 | ssh -o stricthostkeychecking=no root@192.168.56.20 hostname 8 | scp /lib/systemd/system/kubelet.service root@192.168.56.20:/lib/systemd/system/kubelet.service 9 | ssh root@192.168.56.20 'systemctl daemon-reload && systemctl restart kubelet' 10 | 11 | ssh -o stricthostkeychecking=no root@192.168.56.30 hostname 12 | ssh root@192.168.56.30 "sed -i 's/ca\.pem/ca\.crt/' /var/lib/kubelet/config.yaml && systemctl restart kubelet" 13 | ssh root@192.168.56.30 'mkdir -p /srv/couchdb' 14 | 15 | while [ "`curl -sk https://localhost:6443/healthz`" != 'ok' ]; do 16 | echo 'Waiting for kubernetes api...' 
17 | sleep 1 18 | done 19 | 20 | mkdir -p /tmp/manifests 21 | cp /vagrant/manifests/*.yml /tmp/manifests 22 | scp /tmp/manifests/7-pod.yml root@192.168.56.20:/etc/kubernetes/manifests 23 | rm /tmp/manifests/7-pod.yml 24 | kubectl create -f /tmp/manifests/ 25 | 26 | while [ "`kubectl get deploy/nginx | grep '1/1'`" == "" ]; do 27 | echo Waiting for nginx deployment... 28 | sleep 5 29 | done 30 | 31 | kubectl patch deploy nginx -p \ 32 | '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx:perl"}]}}}}' 33 | 34 | kubectl rollout undo deploy/nginx 35 | echo 'kubectl rollout undo deploy/nginx' >> ~/.bash_history 36 | 37 | kubectl create secret generic httpd-auth --from-env-file /vagrant/files/auth.ini 38 | kubectl create configmap httpd-conf --from-file /vagrant/files/httpd.conf 39 | 40 | while [ "`kubectl get pods --no-headers -A | grep -v Running`" != '' ]; do 41 | echo Waiting for all pods to be ready... 42 | sleep 5 43 | done 44 | 45 | kubectl delete pods --all -n kube-system 46 | 47 | while [ "`kubectl get pods --no-headers -A | grep -v Running`" != '' ]; do 48 | echo Waiting for all pods to be ready... 49 | sleep 5 50 | done 51 | -------------------------------------------------------------------------------- /provision/challenge.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # WARNING: DON'T READ THIS FILE 4 | 5 | # The content of this script creates one of the tasks to solve. 6 | # If you want to learn how to solve the cluster problem, do not read this file 7 | # before you try to solve the communication problem between the nodes. 8 | 9 | while [ "`kubectl get nodes --no-headers | wc -l`" -lt 3 ]; do 10 | echo Waiting for the 2 workers... 11 | sleep 1 12 | done 13 | 14 | while [ "`kubectl get pods --no-headers -A | grep -v Running`" != '' ]; do 15 | echo Waiting for all pods to be ready... 
16 | sleep 5 17 | done 18 | 19 | ssh-keyscan -H 192.168.56.20 >> ~/.ssh/known_hosts 2>/dev/null 20 | ssh root@192.168.56.20 'systemctl stop kubelet && systemctl daemon-reload && rm -rf /lib/systemd/system/kubelet.service' 21 | ssh-keyscan -H 192.168.56.30 >> ~/.ssh/known_hosts 2>/dev/null 22 | ssh root@192.168.56.30 'sed -i "s/ca\.crt/ca\.pem/" /var/lib/kubelet/config.yaml && systemctl restart kubelet' 23 | 24 | sed -i 's,/etc/kubernetes/manifests,/etc/kubernete/manifest,' /var/lib/kubelet/config.yaml 25 | systemctl restart kubelet 26 | chmod -x `which kubectl` 27 | -------------------------------------------------------------------------------- /provision/control.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | bash /vagrant/provision/k8s.sh 4 | 5 | # The presence of ~/.kube indicate to workers that they 6 | # can download the packages from control 7 | mkdir -p ~/.kube 8 | 9 | echo "KUBELET_EXTRA_ARGS='--node-ip=192.168.56.$1'" > /etc/default/kubelet 10 | 11 | # Ensure pause image is the same for kubeadm and crictl 12 | PAUSE=`kubeadm config images list | grep pause` 13 | sed -Ei "s,(signature.*),\1\npause_image = \"$PAUSE\"," /etc/crio/crio.conf.d/10-crio.conf 14 | systemctl restart crio 15 | 16 | kubeadm init --apiserver-advertise-address=192.168.56.10 --pod-network-cidr=10.244.0.0/16 17 | 18 | # The presence of ~/.kube/config indicate to workers that they 19 | # can join the cluster 20 | cp /etc/kubernetes/admin.conf ~/.kube/config 21 | 22 | # Change from docker.io to quay.io to avoid Docker Hub constraints 23 | curl -sL https://docs.projectcalico.org/manifests/calico.yaml | sed 's,docker,quay,' > calico.yml 24 | kubectl create -f calico.yml 25 | 26 | # Enable colorful "ls" 27 | sed -Ei 's/# (export LS)/\1/' /root/.bashrc 28 | sed -Ei 's/# (eval ")/\1/' /root/.bashrc 29 | sed -Ei 's/# (alias ls=)/\1/' /root/.bashrc 30 | # Force bash to save history after each command 31 | echo "export PROMPT_COMMAND='history -a'" >> /root/.bashrc 32 | 33 | install --mode=755 /vagrant/files/check.sh /usr/local/bin/k8s-check 34 | install --mode=755 /vagrant/files/tasks.sh /usr/local/bin/k8s-tasks 35 | install --mode=755 /vagrant/manifests/solve.sh /usr/local/bin/k8s-solve 36 | install --mode=755 /vagrant/provision/challenge.sh /usr/local/bin/k8s-challenge 37 | -------------------------------------------------------------------------------- /provision/k8s.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo 'br_netfilter' > /etc/modules-load.d/k8s.conf 4 | modprobe br_netfilter 5 | 6 | echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/k8s.conf 7 | sysctl --system 8 | 9 | apt-get update 10 | apt-get install -y apt-transport-https ca-certificates curl gnupg2 vim telnet nfs-common 11 | 12 | export K8S_VERSION='v1.31' 13 | export CRIO_VERSION='v1.30' 14 | 15 | # Kubernetes 16 | curl -fsSL https://pkgs.k8s.io/core:/stable:/$K8S_VERSION/deb/Release.key | 17 | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg 18 | 19 | echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$K8S_VERSION/deb/ /" | 20 | tee /etc/apt/sources.list.d/kubernetes.list 21 | 22 | # CRI-O 23 | curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/deb/Release.key | 24 | gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg 25 | 26 | echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] 
https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/deb/ /" | 27 | tee /etc/apt/sources.list.d/cri-o.list 28 | 29 | apt-get update 30 | apt-get install -y cri-o kubelet kubeadm kubectl 31 | apt-mark hold kubelet kubeadm kubectl 32 | 33 | systemctl enable --now crio 34 | -------------------------------------------------------------------------------- /provision/provision.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | mkdir -p /root/.ssh 4 | cp /vagrant/files/id_ed25519* /root/.ssh 5 | chmod 400 /root/.ssh/id_ed25519* 6 | cp /vagrant/files/id_ed25519.pub /root/.ssh/authorized_keys 7 | 8 | HOSTS=$(head -n7 /etc/hosts) 9 | echo -e "$HOSTS" > /etc/hosts 10 | echo '192.168.56.10 control.k8s.local' >> /etc/hosts 11 | echo '192.168.56.20 node1.k8s.local' >> /etc/hosts 12 | echo '192.168.56.30 node2.k8s.local' >> /etc/hosts 13 | echo '192.168.56.40 storage.k8s.local' >> /etc/hosts 14 | 15 | if [ "$HOSTNAME" != "storage" ]; then 16 | sed -Ei 's/(.*swap.*)/#\1/g' /etc/fstab 17 | swapoff -a 18 | fi 19 | -------------------------------------------------------------------------------- /provision/storage.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | apt-get update 4 | apt-get install -y vim nfs-kernel-server 5 | 6 | mkdir -p /srv/nfs/v{0,1,2,3,4,5,6,7,8,9} 7 | 8 | cat > /etc/exports < /etc/default/kubelet 10 | 11 | while [ -z "$(ssh 192.168.56.10 stat /root/.kube/config)" ]; do 12 | sleep 5 13 | done 14 | 15 | $(ssh 192.168.56.10 kubeadm token create --print-join-command) 16 | apt-get clean 17 | -------------------------------------------------------------------------------- /provision/worker.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # The workers privision are splitted in two steps 4 | # so we can release the terminal much earlier 5 | # and also avoid the log polution on screen 6 | 7 | bash /vagrant/provision/worker-steps.sh $1 > /tmp/provision.log 2>&1 & 8 | --------------------------------------------------------------------------------
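The worker provisioning above is intentionally backgrounded by `worker.sh`, with its output redirected to `/tmp/provision.log`. A small convenience sketch, not part of the repository, for following that log from the host:

```bash
# Follow the background provisioning of a worker from the host;
# the log path comes from provision/worker.sh.
vagrant ssh worker1 -c 'tail -f /tmp/provision.log'
```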