├── demo.gif ├── destroy-cluster.sh ├── kind-config.yaml ├── metallb-config.yaml ├── create-clusters.sh ├── icanhazcluster.sh ├── create-cluster.sh ├── setup.sh └── README.md /demo.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kabisa/k8s-workshop-in-a-box/HEAD/demo.gif -------------------------------------------------------------------------------- /destroy-cluster.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -e 3 | USER="$1" 4 | 5 | kind delete cluster --name $USER 6 | userdel "$USER" --remove -------------------------------------------------------------------------------- /kind-config.yaml: -------------------------------------------------------------------------------- 1 | kind: Cluster 2 | apiVersion: kind.x-k8s.io/v1alpha4 3 | 4 | nodes: 5 | - role: control-plane 6 | - role: worker 7 | - role: worker -------------------------------------------------------------------------------- /metallb-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | namespace: metallb-system 5 | name: config 6 | data: 7 | config: | 8 | address-pools: 9 | - name: default 10 | protocol: layer2 11 | addresses: 12 | - 172.17.255.1-172.17.255.250 -------------------------------------------------------------------------------- /create-clusters.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -e 3 | if [ "$#" -ne 1 ]; then 4 | echo "Missing number of clusters to create" 5 | exit 1 6 | fi 7 | 8 | for i in {1..$(seq $1)} 9 | do 10 | USER=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13) 11 | $(dirname $0)/create-cluster.sh "$USER" 12 | 13 | echo "$USER" >> /home/icanhazcluster/clusters.txt 14 | done 15 | 16 | chown icanhazcluster:icanhazcluster /home/icanhazcluster/clusters.txt 17 | -------------------------------------------------------------------------------- /icanhazcluster.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | CLUSTER=$(( 3 | flock -x 200 4 | CLUSTER=$(head -n 1 ~/clusters.txt) 5 | sed -i '1d' ~/clusters.txt 6 | echo $CLUSTER 7 | ) 200>/tmp/clusters-lock) 8 | 9 | if [ -n "$CLUSTER" ]; then 10 | echo -e "Cluster available! 🙌\n" 11 | 12 | echo "Use the following command to SSH into your shell and start using kubectl:" 13 | echo -e "$(tput setaf 2)ssh $CLUSTER@$(curl -s v4.icanhazip.com)$(tput sgr0)\n" 14 | else 15 | echo -e "$(tput setaf 1)Uh oh, no more clusters available!$(tput sgr0) 😱\n" 16 | exit 1 17 | fi 18 | -------------------------------------------------------------------------------- /create-cluster.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -e 3 | USER="$1" 4 | 5 | # Setup OS user 6 | useradd -m "$USER" -s /bin/bash 7 | passwd -d "$USER" # passwordless login 8 | 9 | # Temp allow Docker to create kind cluster 10 | usermod -aG docker $USER 11 | 12 | sudo su $USER -c "kind create cluster --name $USER --config $(dirname $0)/kind-config.yaml" 13 | echo -n "$(tput setaf 2)Configuring cluster... $(tput sgr0)" 14 | 15 | sudo su $USER -c "kubectl cluster-info --context kind-$USER" 16 | sudo su $USER -c "kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml" 17 | 18 | # Assign a unique subnet to each cluster, counting down from 255 19 | NUM_CLUSTERS=$(kind get clusters | wc -l) 20 | CLUSTER_SUBNET=$(expr 255 - $NUM_CLUSTERS) 21 | cat $(dirname $0)/metallb-config.yaml | sed "s/172.17.255.1-172.17.255.250/172.17.$CLUSTER_SUBNET.1-172.17.$CLUSTER_SUBNET.250/" > /tmp/metallb-config.yaml 22 | sudo su $USER -c "kubectl apply -f /tmp/metallb-config.yaml" 23 | 24 | # Start/Stop node scripts 25 | 26 | cat << EOF > /home/$USER/kill-node-1 27 | #!/usr/bin/env bash 28 | docker stop $USER-worker 29 | EOF 30 | cat << EOF > /home/$USER/kill-node-2 31 | #!/usr/bin/env bash 32 | docker stop $USER-worker2 33 | EOF 34 | 35 | cat << EOF > /home/$USER/start-node-1 36 | #!/usr/bin/env bash 37 | docker start $USER-worker 38 | EOF 39 | cat << EOF > /home/$USER/start-node-2 40 | #!/usr/bin/env bash 41 | docker start $USER-worker2 42 | EOF 43 | 44 | # Allow read and exec by users only 45 | 46 | chmod 705 /home/$USER/kill-node-1 47 | chmod 705 /home/$USER/kill-node-2 48 | chmod 705 /home/$USER/start-node-1 49 | chmod 705 /home/$USER/start-node-2 50 | 51 | # Allow passwordless sudo 52 | 53 | cat << EOF > /etc/sudoers.d/$USER 54 | $USER ALL=(ALL) NOPASSWD: /home/$USER/kill-node-1 55 | $USER ALL=(ALL) NOPASSWD: /home/$USER/start-node-1 56 | $USER ALL=(ALL) NOPASSWD: /home/$USER/kill-node-2 57 | $USER ALL=(ALL) NOPASSWD: /home/$USER/start-node-2 58 | EOF 59 | 60 | # Cleanup 61 | gpasswd -d $USER docker 62 | rm /tmp/metallb-config.yaml 63 | -------------------------------------------------------------------------------- /setup.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | apt-get install -y \ 3 | apt-transport-https \ 4 | ca-certificates \ 5 | curl \ 6 | gnupg-agent \ 7 | software-properties-common \ 8 | jq \ 9 | vim 10 | 11 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 12 | 13 | add-apt-repository \ 14 | "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ 15 | $(lsb_release -cs) \ 16 | stable" 17 | 18 | apt-get install -y docker-ce docker-ce-cli containerd.io 19 | 20 | curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64 \ 21 | && chmod +x ./kind && mv kind /usr/local/bin/ 22 | 23 | curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.2/bin/linux/amd64/kubectl \ 24 | && chmod +x ./kubectl && mv kubectl /usr/local/bin/ 25 | 26 | curl -L https://github.com/derailed/k9s/releases/download/v0.19.2/k9s_Linux_x86_64.tar.gz \ 27 | | tar -zxvf - -C /usr/local/bin/ k9s 28 | 29 | # https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files 30 | echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.d/90-inotify.conf 31 | echo "fs.inotify.max_user_instances=1024" >> /etc/sysctl.d/90-inotify.conf 32 | sysctl --load /etc/sysctl.d/90-inotify.conf 33 | 34 | # Allow passwordless SSH 35 | sudo sed -i 's/nullok_secure/nullok/' /etc/pam.d/common-auth 36 | sed -i "s/.*PasswordAuthentication.*/PasswordAuthentication yes/g" /etc/ssh/sshd_config 37 | cat << EOF >> /etc/ssh/sshd_config 38 | PermitEmptyPasswords yes 39 | EOF 40 | 41 | # Setup icanhazcluster user 42 | 43 | useradd -m -s /bin/bash icanhazcluster 44 | passwd -d icanhazcluster 45 | 46 | cp $(dirname $0)/icanhazcluster.sh /home/icanhazcluster/icanhazcluster.sh 47 | chmod +x /home/icanhazcluster/icanhazcluster.sh 48 | chown icanhazcluster:icanhazcluster /home/icanhazcluster/icanhazcluster.sh 49 | touch /home/icanhazcluster/clusters.txt 50 | 51 | cat << EOF >> /etc/ssh/sshd_config 52 | Match User icanhazcluster 53 | ForceCommand ~/icanhazcluster.sh 54 | EOF 55 | 56 | /etc/init.d/ssh reload 57 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # K8s workshop in a box 2 | 3 | This is a setup for running Kubernetes workshops using 'real' clusters, 4 | without attendees having to install anything on their personal computers. 5 | 6 | ## How does it work? 7 | 8 | This setup uses [Kind](https://kind.sigs.k8s.io/) to run a number of clusters isolated in Docker containers, on one big (cloud) server. Every participant of the workshop gets a 3 node Kubernetes cluster (one master two workers), running in 3 Docker containers per cluster. 9 | 10 | The clusters are fully isolated, with their own internal IP space. 11 | 12 | ## Requirements 13 | 14 | * A beefy (cloud) server. 15 | * I used an AWS `m5.8xlarge` instance for a 15 person workshop, we had quite a bit of capacity left. Initialising the clusters is most resource intensive. Besides that it depends mostly on the workloads you're running during the workshop. 16 | * Running a three hour workshop costs about ~ 5$ this way. That's even cheaper than a pizza 🍕! 17 | * Ubuntu 18.04 host OS. It may work with other Linux flavours, but the setup was only tested with Ubuntu 18.04 so far. 18 | * SSH port open on the host node. 19 | 20 | ## Setup 21 | 22 | 1. Provision a host node, as described in Requirements above. 23 | 2. SSH into the node, clone this repository (`git clone https://github.com/kabisa/k8s-workshop-in-a-box.git`). 24 | 3. Run the `setup.sh` script with root privileges. 25 | 4. Create the desired number of clusters using `./create-clusters.sh `. Best run this from screen/tmux as it will take a while (few minutes per cluster). 26 | 5. Done! 🙌 27 | 28 | ## Accessing the clusters 29 | 30 | After the clusters are created, participants can claim their own cluster via SSH: 31 | 32 | ``` 33 | ssh icanhazcluster@ 34 | ``` 35 | 36 | An SSH command to connect to your personal cluster will be printed. 37 | Here it is in action: 38 | 39 | ![demo](./demo.gif) 40 | 41 | ## Caveats 42 | 43 | * This is horribly insecure, using unauthenticated SSH (for ease of use). It's probably best to only allow SSH access from the IP address of your workshop venue. 44 | * This was tested with up to 15 clusters. 45 | * Use at your own risk :-) 46 | --------------------------------------------------------------------------------