├── .github ├── ISSUE_TEMPLATE │ └── bug_report.md └── workflows │ └── ci.yml ├── .travis.yml ├── README.md └── labs ├── 0_kubeadm ├── README.md ├── create.sh ├── env.sh ├── exercice-kubeadm.md ├── reset.sh ├── resource │ ├── backup_master.sh │ ├── centos │ │ ├── prereq.sh │ │ ├── upgrade_master.sh │ │ └── upgrade_worker.sh │ ├── env.sh │ ├── init.sh │ ├── kubeadm-config.yaml │ ├── reset.sh │ ├── tokens.csv │ ├── ubuntu │ │ ├── prereq.sh │ │ ├── upgrade_master.sh │ │ └── upgrade_worker.sh │ └── uncordon.sh └── upgrade.sh ├── 1_internals ├── ci.sh ├── ex1.sh ├── ex2-backup.sh └── ex3_staticpods.sh ├── 2_authorization ├── 1_RBAC_sa.sh ├── 2_0_RBAC_simple.sh ├── 2_RBAC_role.sh ├── 3_RBAC_clusterrole.sh ├── 4.0_RBAC_gencert.sh ├── 4.1_RBAC_useraccount.sh ├── 5_install-dashboard.sh ├── A_rbac_tools_demo.cast ├── A_rbac_tools_demo.sh ├── README.md ├── audit-example.TODO.txt ├── ci.sh ├── kubectl-proxy.yaml ├── kubiscan │ ├── README.txt │ ├── ca.crt │ └── token ├── manifest │ ├── local-storage.yaml │ ├── pod.yaml │ ├── pvc.yaml │ ├── role-deployment-manager-nopod-rbac.yaml │ ├── role-deployment-manager-pvc.yaml │ ├── role-deployment-manager.yaml │ ├── rolebinding-deployment-manager.yaml │ ├── sa_dashboard.yaml │ └── service-reader.yaml └── sec-issue.sh ├── 3_policies ├── README.md ├── ci.sh ├── ex1-securitycontext.sh ├── ex2-demo-podsecurity.sh ├── ex2-podsecurity.sh ├── ex2-psp.sh.deprecated ├── ex3-psp.sh.deprecated ├── ex4-network.sh ├── manifests │ ├── pod-add-settime-capability.yaml │ ├── pod-as-user-guest.yaml │ ├── pod-drop-chown-capability.yaml │ ├── pod-privileged.yaml │ ├── pod-run-as-non-root.yaml │ ├── pod-with-host-network.yaml │ ├── pod-with-host-pid-and-ipc.yaml │ ├── pod-with-hostport.yaml │ ├── pod-with-readonly-filesystem.yaml │ └── pod-with-shared-volume-fsgroup.yaml ├── rego │ ├── sol1.rego │ ├── sol2.rego.nok │ └── sol3.rego └── resource │ ├── allow-dns-access.yaml │ ├── curl_loop.sh │ ├── default-deny.yaml │ ├── egress-www-db.yaml │ ├── ingress-external-ipblock.yaml │ ├── ingress-external.yaml │ ├── ingress-www-db.yaml │ ├── pod-security-policy.yaml │ ├── psp-capabilities.yaml │ ├── psp-must-run-as.yaml │ ├── psp-volumes.yaml │ └── role-use-psp.yaml ├── 4_computational_resources ├── TODO ├── ci.sh ├── ex1.sh ├── ex2-quota.sh ├── ex3-limitrange.sh └── manifest │ └── local-storage-class.yaml └── conf.version.sh /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Expected behavior** 21 | A clear and concise description of what you expected to happen. 
22 | 
23 | **Operator Logs**
24 | ```
25 | 
26 | 
27 | ```
28 | 
--------------------------------------------------------------------------------
/.github/workflows/ci.yml:
--------------------------------------------------------------------------------
1 | name: "Integration tests"
2 | on:
3 |   push:
4 |   pull_request:
5 |     branches:
6 |       - master
7 | jobs:
8 |   main:
9 |     name: Run k8s advanced exercises
10 |     runs-on: ubuntu-20.04
11 |     steps:
12 |       - name: Checkout code
13 |         uses: actions/checkout@v2
14 |       - name: Stop apparmor
15 |         run: |
16 |           sudo /etc/init.d/apparmor stop
17 |       - uses: actions/setup-go@v3
18 |         with:
19 |           go-version: '^1.19.2'
20 |       - name: Create k8s/kind cluster
21 |         run: |
22 |           go install github.com/k8s-school/ktbx@v1.1.4-rc6
23 |           ktbx install helm
24 |           ktbx create -c cilium
25 |       - name: Install ink
26 |         run: |
27 |           go install github.com/k8s-school/ink@v0.0.1-rc3
28 |       - name: Run test on internals
29 |         run: |
30 |           ./labs/1_internals/ci.sh
31 |       - name: Run test on authorization
32 |         run: |
33 |           ./labs/2_authorization/ci.sh
34 |       - name: Run test on policies
35 |         run: |
36 |           ./labs/3_policies/ci.sh
37 |       - name: Run test on computational resources
38 |         run: |
39 |           ./labs/4_computational_resources/ci.sh
40 | 
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | sudo: required
2 | language: go
3 | dist: bionic
4 | 
5 | go:
6 |   - 1.13.5
7 | 
8 | before_script:
9 |   - git clone --depth 1 -b "k8s-v1.20.2-1" --single-branch https://github.com/k8s-school/kind-helper.git
10 |   - sudo ./kind-helper/helm-install.sh
11 |   - ./kind-helper/k8s-create.sh -p -c calico
12 |   - curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
13 |   - chmod 700 get_helm.sh
14 |   - ./get_helm.sh --version v3.5.4
15 | 
16 | script:
17 |   - ./1_internals/ci.sh
18 |   - ./2_authorization/ci.sh
19 |   - ./3_policies/ci.sh
20 |   - ./4_computational_resources/ci.sh
21 |   - ./B_prometheus/install.sh
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [K8s-school logo, Kubernetes expertise and training](https://k8s-school.fr)
2 | 
3 | [![CI](https://github.com/k8s-school/k8s-advanced/actions/workflows/ci.yml/badge.svg)](https://github.com/k8s-school/k8s-advanced/actions/workflows/ci.yml)
4 | 
5 | # Kubernetes advanced course
6 | 
7 | ## Slides and materials
8 | 
9 | [Shared framapad](https://annuel.framapad.org/p/k8s-school?lang=en)
10 | 
11 | All slides are [on our website](https://www.k8s-school.fr/pdf)
12 | 
13 | Happy k8s!
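14 | 
15 | ## Quickstart
16 | 
17 | A minimal sketch for running a lab locally, assuming a Linux box with Docker, Go and kubectl available; it mirrors what `.github/workflows/ci.yml` does in CI:
18 | 
19 | ```shell
20 | # Create a disposable kind cluster with the ktbx helper used by the CI
21 | go install github.com/k8s-school/ktbx@v1.1.4-rc6
22 | ktbx install helm
23 | ktbx create -c cilium
24 | 
25 | # Run a lab end to end
26 | ./labs/1_internals/ci.sh
27 | ```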
28 | 
--------------------------------------------------------------------------------
/labs/0_kubeadm/README.md:
--------------------------------------------------------------------------------
1 | # Pre-requisites
2 | 
3 | Get 3 or 4 GCE instances
4 | 
5 | # Upgrade using a kubeadm configuration file
6 | 
7 | ```shell
8 | sudo kubeadm upgrade apply --config /etc/kubeadm/kubeadm-config.yaml
9 | ```
10 | 
11 | # Set up access to GCE instances
12 | 
13 | ```shell
14 | # This will not work with kubectl for api-server access,
15 | # because of SSL certs (localhost is not recognized),
16 | # but it can be used with a 'port-forward' to a pod
17 | NODE=clus0-0
18 | gcloud compute ssh --ssh-flag="-L 3000:localhost:3000" "$NODE"
19 | 
20 | # This will not work with kubectl for api-server access,
21 | # because of SSL certs (the external and internal addresses of the instance are different)
22 | gcloud compute firewall-rules create apiserver --allow tcp:6443
23 | mkdir -p $HOME/.kube
24 | gcloud compute scp $NODE:~/.kube/config $HOME/.kube/config
25 | ```
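26 | 
27 | Once the SSH tunnel is up, it can relay traffic to a pod. A sketch, assuming a hypothetical pod `my-pod` listening on port 3000:
28 | 
29 | ```shell
30 | # On the node: forward the pod port to the node's localhost:3000
31 | kubectl port-forward pod/my-pod 3000:3000
32 | # On the workstation: the SSH tunnel relays localhost:3000 to the node
33 | curl http://localhost:3000
34 | ```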
"$DIR/env.sh" 37 | 38 | echo "Copy scripts to all nodes" 39 | echo "-------------------------" 40 | parallel --tag -- $SCP --recurse "$DIR/resource" $USER@{}:/tmp ::: "$MASTER" $NODES 41 | 42 | echo "Install prerequisites" 43 | echo "---------------------" 44 | parallel -vvv --tag -- "gcloud compute ssh $USER@{} -- sudo bash /tmp/resource/$DISTRIB/prereq.sh" ::: "$MASTER" $NODES 45 | 46 | echo "Initialize master" 47 | echo "-----------------" 48 | $SSH "$USER@$MASTER" -- bash /tmp/resource/init.sh 49 | 50 | echo "Join nodes" 51 | echo "----------" 52 | # TODO test '-ttl' option 53 | JOIN_CMD=$($SSH "$USER@$MASTER" -- 'sudo kubeadm token create --print-join-command') 54 | # Remove trailing carriage return 55 | JOIN_CMD=$(echo "$JOIN_CMD" | grep 'kubeadm join' | sed -e 's/[\r\n]//g') 56 | echo "Join command: $JOIN_CMD" 57 | parallel -vvv --tag -- "$SSH $USER@{} -- sudo '$JOIN_CMD'" ::: $NODES 58 | -------------------------------------------------------------------------------- /labs/0_kubeadm/env.sh: -------------------------------------------------------------------------------- 1 | # DISTRIB="centos" 2 | DISTRIB="ubuntu" 3 | 4 | 5 | MASTER="clus0-0" 6 | NODES="clus0-1" 7 | 8 | USER=fabrice_jammes_gmail_com 9 | # USER=k8sstudent_gmail_com 10 | 11 | gcloud config set project "coastal-sunspot-206412" 12 | ZONE="us-central1-a" 13 | gcloud config set compute/zone $ZONE 14 | 15 | SCP="gcloud compute scp" 16 | SSH="gcloud compute ssh" 17 | -------------------------------------------------------------------------------- /labs/0_kubeadm/exercice-kubeadm.md: -------------------------------------------------------------------------------- 1 | # EXERCICE: Installer automatiquement un cluster k8s avec kubeadm 2 | 3 | ## Lire la documentation d'installation simplifiée: https://www.k8s-school.fr/resources/fr/blog/kubeadm/ 4 | 5 | ## Vérifier la connection SSH depuis la toolbox: 6 | 7 | ``` 8 | ssh clusX-0 9 | ssh clusX-1 10 | ssh clusX-2 11 | ``` 12 | 13 | ## Récupérer la zone des VMs: 14 | 15 | ``` 16 | $ gcloud compute instances list 17 | clus0-0 asia-east1-c n1-standard-2 10.140.15.225 35.229.139.210 RUNNING 18 | clus0-1 asia-east1-c n1-standard-2 10.140.15.227 35.201.237.31 RUNNING 19 | clus0-2 asia-east1-c n1-standard-2 10.140.15.226 104.199.223.107 RUNNING 20 | clus1-0 asia-east2-c n1-standard-2 10.170.0.55 34.96.183.117 RUNNING 21 | clus1-1 asia-east2-c n1-standard-2 10.170.0.57 34.92.1.180 RUNNING 22 | clus1-2 asia-east2-c n1-standard-2 10.170.0.56 34.92.89.8 RUNNING 23 | clus2-0 asia-northeast1-c n1-standard-2 10.146.0.49 34.84.172.255 RUNNING 24 | clus2-1 asia-northeast1-c n1-standard-2 10.146.0.48 34.84.234.214 RUNNING 25 | clus2-2 asia-northeast1-c n1-standard-2 10.146.0.50 34.84.252.155 RUNNING 26 | clus3-0 asia-northeast2-c n1-standard-2 10.174.0.44 34.97.102.250 RUNNING 27 | clus3-1 asia-northeast2-c n1-standard-2 10.174.0.45 34.97.223.13 RUNNING 28 | clus3-2 asia-northeast2-c n1-standard-2 10.174.0.43 34.97.247.194 RUNNING 29 | clus4-0 asia-southeast1-c n1-standard-2 10.148.0.39 34.126.77.71 RUNNING 30 | clus4-1 asia-southeast1-c n1-standard-2 10.148.0.38 35.197.153.195 RUNNING 31 | clus4-2 asia-southeast1-c n1-standard-2 10.148.0.37 34.87.110.164 RUNNING 32 | ``` 33 | 34 | ## Copier puis paramétrer ce fichier dans la toolbox 35 | 36 | `env.sh`: 37 | ```shell 38 | # DISTRIB="centos" 39 | DISTRIB="ubuntu" 40 | 41 | 42 | MASTER="clusX-0" 43 | NODES="clusX-1 clusX-2" 44 | 45 | # USER=fabrice_jammes_clermont_in2p3_fr 46 | USER=k8sstudent_gmail_com 47 | 48 | ZONE="asia-east1-c" 49 | gcloud 
50 | 
51 | SCP="gcloud compute scp"
52 | SSH="gcloud compute ssh"
53 | ```
54 | 
55 | ## Copy this file to the toolbox
56 | 
57 | `create.sh`:
58 | ```shell
59 | #!/bin/sh
60 | 
61 | # Create an up and running k8s cluster
62 | 
63 | set -e
64 | set -x
65 | 
66 | usage() {
67 |   cat << EOD
68 | Usage: $(basename "$0") [options]
69 | Available options:
70 |   -h           This message
71 | 
72 | Init k8s master
73 | 
74 | EOD
75 | }
76 | 
77 | # Get the options
78 | while getopts h c ; do
79 |     case $c in
80 |         h) usage ; exit 0 ;;
81 |         \?) usage ; exit 2 ;;
82 |     esac
83 | done
84 | shift "$((OPTIND-1))"
85 | 
86 | if [ $# -ne 0 ] ; then
87 |     usage
88 |     exit 2
89 | fi
90 | 
91 | DIR=$(cd "$(dirname "$0")"; pwd -P)
92 | 
93 | . "$DIR/env.sh"
94 | 
95 | echo "Copy scripts to all nodes"
96 | echo "-------------------------"
97 | parallel --tag -- $SCP --recurse "$DIR/resource" $USER@{}:/tmp ::: "$MASTER" $NODES
98 | 
99 | echo "Install prerequisites"
100 | echo "---------------------"
101 | parallel -vvv --tag -- "gcloud compute ssh $USER@{} -- sudo bash /tmp/resource/$DISTRIB/prereq.sh" ::: "$MASTER" $NODES
102 | 
103 | echo "Initialize master"
104 | echo "-----------------"
105 | $SSH "$USER@$MASTER" -- bash /tmp/resource/init.sh
106 | 
107 | echo "Join nodes"
108 | echo "----------"
109 | # TODO test '-ttl' option
110 | JOIN_CMD=$($SSH "$USER@$MASTER" -- 'sudo kubeadm token create --print-join-command')
111 | # Remove trailing carriage return
112 | JOIN_CMD=$(echo "$JOIN_CMD" | grep 'kubeadm join' | sed -e 's/[\r\n]//g')
113 | echo "Join command: $JOIN_CMD"
114 | parallel -vvv --tag -- "$SSH $USER@{} -- sudo '$JOIN_CMD'" ::: $NODES
115 | ```
116 | 
117 | ## Write the resource/prereq.sh and resource/init.sh scripts (a skeleton is sketched at the end of this document)
118 | 
119 | ## Resetting the cluster
120 | 
121 | `reset.sh`:
122 | ```shell
123 | #!/bin/sh
124 | 
125 | # Reset a k8s cluster on all nodes
126 | 
127 | set -e
128 | 
129 | DIR=$(cd "$(dirname "$0")"; pwd -P)
130 | . "$DIR/env.sh"
131 | 
132 | echo "Copy scripts to all nodes"
133 | echo "-------------------------"
134 | parallel --tag -- $SCP --recurse "$DIR/resource" $USER@{}:/tmp ::: "$MASTER" $NODES
135 | 
136 | echo "Reset all nodes"
137 | echo "---------------"
138 | parallel -vvv --tag -- "$SSH {} -- sh /tmp/resource/reset.sh" ::: $NODES
139 | $SSH "$USER@$MASTER" -- sh /tmp/resource/reset.sh "$MASTER"
140 | ```
141 | 
142 | and `resource/reset.sh`:
143 | ```
144 | #!/bin/sh
145 | 
146 | # Reset k8s cluster
147 | 
148 | set -e
149 | 
150 | sudo -- kubeadm reset -f
151 | sudo -- sh -c "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X"
152 | sudo -- ipvsadm --clear
153 | echo "Reset succeeded"
154 | echo "-------------"
155 | ```
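156 | 
157 | ## Hint: skeleton for resource/prereq.sh and resource/init.sh
158 | 
159 | A minimal sketch of the steps the two scripts have to cover; full working versions live in this repository under `labs/0_kubeadm/resource/`:
160 | 
161 | ```shell
162 | # prereq.sh (run on every node):
163 | #   1. install and configure the containerd runtime
164 | #   2. add the Kubernetes package repository
165 | #   3. install kubelet, kubeadm and kubectl, then hold their versions
166 | #   4. load the required kernel modules and sysctl settings (overlay, br_netfilter)
167 | 
168 | # init.sh (run on the master only):
169 | sudo kubeadm init --config=/etc/kubeadm/kubeadm-config.yaml
170 | mkdir -p $HOME/.kube
171 | sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
172 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
173 | # then install a CNI plugin (e.g. Calico)
174 | ```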
"$DIR/env.sh" 9 | 10 | echo "Copy scripts to all nodes" 11 | echo "-------------------------" 12 | parallel --tag -- $SCP --recurse "$DIR/resource" $USER@{}:/tmp ::: "$MASTER" $NODES 13 | 14 | echo "Reset all nodes" 15 | echo "---------------" 16 | parallel -vvv --tag -- "$SSH {} -- sh /tmp/resource/reset.sh" ::: $NODES 17 | $SSH "$USER@$MASTER" -- sh /tmp/resource/reset.sh "$MASTER" 18 | 19 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/backup_master.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # Backup k8s 4 | # see https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/ 5 | 6 | set -e 7 | set -x 8 | 9 | DIR=$(cd "$(dirname "$0")"; pwd -P) 10 | 11 | DATE=$(date -u +%Y%m%d-%H%M%S) 12 | BACKUP_DIR="$HOME/backup-$DATE" 13 | mkdir -p "$BACKUP_DIR" 14 | 15 | # Backup certificates 16 | sudo cp -r /etc/kubernetes/pki "$BACKUP_DIR" 17 | 18 | # Make etcd snapshot 19 | # 20 | 21 | INSTALL_DIR="/usr/local/etcd" 22 | if [ ! -f "$INSTALL_DIR"/etcdctl ]; then 23 | rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz 24 | # Install etcdctl 25 | ETCD_VER=v3.4.1 26 | # choose either URL 27 | # WARN: Google does not work on 2019-09-30 28 | GOOGLE_URL=https://storage.googleapis.com/etcd 29 | GITHUB_URL=https://github.com/etcd-io/etcd/releases/download 30 | DOWNLOAD_URL=${GITHUB_URL} 31 | sudo rm -rf "$INSTALL_DIR" 32 | sudo mkdir -p "$INSTALL_DIR" 33 | 34 | curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz 35 | sudo tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C "$INSTALL_DIR" --strip-components=1 36 | rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz 37 | fi 38 | 39 | export ETCDCTL_API=3 40 | 41 | sudo "$INSTALL_DIR"/etcdctl --endpoints=https://127.0.0.1:2379 \ 42 | --cacert=/etc/kubernetes/pki/etcd/ca.crt \ 43 | --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \ 44 | --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \ 45 | snapshot save "$BACKUP_DIR/etcd-snapshot.db" 46 | 47 | # Get snapshot status 48 | sudo "$INSTALL_DIR"/etcdctl snapshot status "$BACKUP_DIR/etcd-snapshot.db" 49 | 50 | # Backup kubeadm-config 51 | sudo cp /etc/kubeadm/kubeadm-config.yaml "$BACKUP_DIR" 52 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/centos/prereq.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | set -eux 4 | 5 | DIR=$(cd "$(dirname "$0")"; pwd -P) 6 | . 
"$DIR/../env.sh" 7 | 8 | yum update -y 9 | yum install -y git wget 10 | 11 | # Install containerd 12 | ## Set up the repository 13 | ### Install required packages 14 | yum install -y yum-utils device-mapper-persistent-data lvm2 15 | 16 | ### Add docker repository 17 | yum-config-manager \ 18 | --add-repo \ 19 | https://download.docker.com/linux/centos/docker-ce.repo 20 | 21 | ## Install containerd 22 | yum update -y 23 | yum install -y containerd.io 24 | 25 | # Configure containerd 26 | mkdir -p /etc/containerd 27 | containerd config default > /etc/containerd/config.toml 28 | 29 | # Restart containerd 30 | systemctl restart containerd 31 | 32 | # Configure crictl client 33 | cat > /etc/crictl.yaml < /etc/yum.repos.d/kubernetes.repo 42 | [kubernetes] 43 | name=Kubernetes 44 | baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 45 | enabled=1 46 | gpgcheck=1 47 | repo_gpgcheck=1 48 | gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 49 | EOF 50 | 51 | # Set SELinux in permissive mode (effectively disabling it) 52 | setenforce 0 53 | sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config 54 | yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes 55 | systemctl enable --now kubelet 56 | yum install -y ipvsadm 57 | 58 | # containerd 59 | ## 60 | 61 | ## Pre-requisites 62 | cat > /etc/modules-load.d/containerd.conf < /etc/sysctl.d/99-kubernetes-cri.conf < /etc/sysctl.d/k8s.conf 79 | net.bridge.bridge-nf-call-ip6tables = 1 80 | net.bridge.bridge-nf-call-iptables = 1 81 | EOF 82 | sysctl --system 83 | 84 | 85 | # Helm 86 | # 87 | HELM_VERSION="3.0.2" 88 | wget -O /tmp/helm.tgz \ 89 | https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz 90 | cd /tmp 91 | tar zxvf /tmp/helm.tgz 92 | chmod +x /tmp/linux-amd64/helm 93 | mv /tmp/linux-amd64/helm /usr/local/bin/helm-${HELM_VERSION} 94 | ln -sf /usr/local/bin/helm-${HELM_VERSION} /usr/local/bin/helm 95 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/centos/upgrade_master.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | set -e 4 | set -x 5 | 6 | DIR=$(cd "$(dirname "$0")"; pwd -P) 7 | . "$DIR/../env.sh" 8 | 9 | sudo cp -f $DIR/kubeadm-config.yaml /etc/kubeadm 10 | 11 | # On whole control plane 12 | sudo apt-mark unhold kubeadm 13 | sudo apt-get install -y kubeadm="$BUMP_KUBEADM" 14 | sudo apt-mark hold kubeadm 15 | kubeadm version 16 | 17 | # On master node only 18 | kubectl wait --for=condition=ready node clus0-0 19 | sudo kubeadm upgrade plan "$BUMP_K8S" 20 | sudo kubeadm upgrade apply -y "$BUMP_K8S" 21 | 22 | # On whole control plane 23 | sudo apt-mark unhold kubelet kubectl 24 | sudo apt-get update -q 25 | sudo apt-get install -y kubelet="$BUMP_KUBEADM" kubectl="$BUMP_KUBEADM" 26 | sudo apt-mark hold kubelet kubectl 27 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/centos/upgrade_worker.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # Upgrade a worker node 4 | 5 | set -e 6 | 7 | DIR=$(cd "$(dirname "$0")"; pwd -P) 8 | . 
"$DIR/../env.sh" 9 | 10 | sudo kubeadm upgrade node config --kubelet-version "$BUMP_K8S" 11 | 12 | sudo apt-get update -q 13 | sudo apt-mark unhold kubeadm kubelet kubectl 14 | sudo apt-get install -y kubectl="$BUMP_KUBEADM" kubelet="$BUMP_KUBEADM" \ 15 | kubeadm="$BUMP_KUBEADM" 16 | sudo apt-mark hold kubeadm kubelet kubectl 17 | 18 | sudo systemctl restart kubelet 19 | sudo systemctl status kubelet 20 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/env.sh: -------------------------------------------------------------------------------- 1 | K8S_VERSION="v1.29" 2 | KUBEADM_VERSION="1.29.3-1.1" 3 | 4 | # Current version 5 | # KUBEADM_VERSION="1.25.2-00" 6 | # Use: 7 | # apt-cache madison kubeadm 8 | 9 | # Required for update procedure 10 | BUMP_KUBEADM="1.29.4-2.1" 11 | BUMP_K8S="v1.29.4" 12 | 13 | # Remove debconf messages 14 | export TERM="linux" 15 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/init.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euxo pipefail 4 | 5 | DIR=$(cd "$(dirname "$0")"; pwd -P) 6 | 7 | usage() { 8 | cat << EOD 9 | 10 | Initialize k8s master with kubeadm 11 | 12 | Usage: $(basename "$0") [options] 13 | Available options: 14 | -h This message 15 | 16 | Init k8s master 17 | 18 | EOD 19 | } 20 | 21 | # Get the options 22 | while getopts h c ; do 23 | case $c in 24 | h) usage ; exit 0 ;; 25 | \?) usage ; exit 2 ;; 26 | esac 27 | done 28 | shift "$((OPTIND-1))" 29 | 30 | if [ $# -ne 0 ] ; then 31 | usage 32 | exit 2 33 | fi 34 | 35 | # Move token file to k8s master 36 | TOKEN_DIR=/etc/kubernetes/auth 37 | sudo mkdir -p $TOKEN_DIR 38 | sudo chmod 600 $TOKEN_DIR 39 | sudo cp -f "$DIR/tokens.csv" $TOKEN_DIR 40 | 41 | sudo mkdir -p /etc/kubeadm 42 | sudo cp -f $DIR/kubeadm-config*.yaml /etc/kubeadm 43 | 44 | if [ ! 
-d "$HOME/k8s-advanced" ] 45 | then 46 | git clone https://github.com/k8s-school/k8s-advanced.git $HOME/k8s-advanced 47 | else 48 | cd "$HOME/k8s-advanced" 49 | git pull 50 | fi 51 | 52 | KUBEADM_CONFIG="/etc/kubeadm/kubeadm-config.yaml" 53 | 54 | # Init cluster using configuration file 55 | sudo kubeadm init --config="$KUBEADM_CONFIG" 56 | 57 | # Manage kubeconfig 58 | mkdir -p $HOME/.kube 59 | sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config 60 | sudo chown $(id -u):$(id -g) $HOME/.kube/config 61 | 62 | # Enable auto-completion 63 | echo 'source <(kubectl completion bash)' >> ~/.bashrc 64 | 65 | # Install CNI plugin 66 | # See https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart 67 | kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml 68 | kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml 69 | 70 | kubectl wait --for=condition=ready --timeout=-1s nodes $(hostname) 71 | 72 | # Update kubeconfig with users alice and bob 73 | USER=alice 74 | kubectl config set-credentials "$USER" --token=02b50b05283e98dd0fd71db496ef01e8 75 | kubectl config set-context $USER --cluster=kubernetes --user=$USER 76 | 77 | USER=bob 78 | kubectl config set-credentials "$USER" --token=492f5cd80d11c00e91f45a0a5b963bb6 79 | kubectl config set-context $USER --cluster=kubernetes --user=$USER 80 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/kubeadm-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kubeadm.k8s.io/v1beta3 2 | kind: ClusterConfiguration 3 | apiServer: 4 | extraArgs: 5 | enable-admission-plugins: AlwaysPullImages,DefaultStorageClass,LimitRanger,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota 6 | token-auth-file: /etc/kubernetes/auth/tokens.csv 7 | extraVolumes: 8 | - hostPath: /etc/kubernetes/auth 9 | mountPath: /etc/kubernetes/auth 10 | name: tokens 11 | timeoutForControlPlane: 4m0s 12 | certificatesDir: /etc/kubernetes/pki 13 | clusterName: kubernetes 14 | kubernetesVersion: v1.29.3 15 | # Current version: 16 | # kubernetesVersion: v1.26.1 17 | networking: 18 | # See https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart 19 | podSubnet: 192.168.0.0/16 20 | serviceSubnet: 10.96.0.0/12 21 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/reset.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # Reset k8s cluster 4 | 5 | set -e 6 | 7 | sudo -- kubeadm reset -f 8 | sudo -- sh -c "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X" 9 | sudo -- ipvsadm --clear 10 | echo "Reset succeed" 11 | echo "-------------" 12 | -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/tokens.csv: -------------------------------------------------------------------------------- 1 | 02b50b05283e98dd0fd71db496ef01e8,alice,10001,"system:k8s-school" 2 | 492f5cd80d11c00e91f45a0a5b963bb6,bob,10002,"system:k8s-school" -------------------------------------------------------------------------------- /labs/0_kubeadm/resource/ubuntu/prereq.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euxo pipefail 4 | 5 | DIR=$(cd "$(dirname "$0")"; pwd -P) 6 | . 
"$DIR/../env.sh" 7 | 8 | 9 | # This file might block apt-update 10 | sudo rm -f /etc/apt/sources.list.d/kubernetes.list 11 | sudo apt-get update -q 12 | 13 | cat < /dev/null 50 | 51 | ## Install containerd 52 | sudo apt-get update -q 53 | sudo apt-get install -y containerd.io 54 | 55 | # Configure containerd 56 | sudo mkdir -p /etc/containerd 57 | containerd config default | sudo tee /etc/containerd/config.toml 58 | 59 | # Restart containerd 60 | sudo systemctl restart containerd 61 | 62 | # kubeadm 63 | ## 64 | sudo apt-get update 65 | sudo apt-get install -y apt-transport-https ca-certificates curl gpg 66 | sudo mkdir -p /etc/apt/keyrings 67 | sudo rm -f /etc/apt/keyrings/kubernetes-apt-keyring.gpg 68 | curl -fsSL https://pkgs.k8s.io/core:/stable:/"$K8S_VERSION"/deb/Release.key | sudo gpg --batch --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg 69 | # This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list 70 | echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/'"$K8S_VERSION"'/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list 71 | 72 | sudo apt-get update 73 | apt-get install -y --allow-downgrades --allow-change-held-packages \ 74 | kubelet="$KUBEADM_VERSION" kubeadm="$KUBEADM_VERSION" kubectl="$KUBEADM_VERSION" 75 | sudo apt-mark hold kubelet kubeadm kubectl 76 | 77 | 78 | sudo apt-get install -y ipvsadm 79 | 80 | # Configure crictl client 81 | sudo cat > /etc/crictl.yaml < /tmp/pod.yaml 20 | kubectl apply -f "/tmp/pod.yaml" 21 | kubectl label pod curl-custom-sa "RBAC=sa" 22 | 23 | # Wait for pod to be in running state 24 | kubectl wait --for=condition=Ready pods --timeout=180s curl-custom-sa 25 | 26 | # Inspect the token mounted into the pod’s container(s) 27 | kubectl exec -it curl-custom-sa -c main -- \ 28 | cat /var/run/secrets/kubernetes.io/serviceaccount/token 29 | echo 30 | 31 | kubectl delete pod curl-custom-sa 32 | 33 | # Create secret for this SA (no longer needed since k8S 1.24) 34 | ink -y "WARNING: security exposure of persisting a non-expiring token credential in a readable API object" 35 | FOO_TOKEN="foo-token" 36 | cat < /tmp/pod.yaml 51 | kubectl apply -f "/tmp/pod.yaml" 52 | kubectl label pod curl-custom-sa "RBAC=sa" 53 | 54 | # Wait for pod to be in running state 55 | kubectl wait --for=condition=Ready pods --timeout=180s curl-custom-sa 56 | 57 | # Inspect the token mounted into the pod’s container(s) 58 | kubectl exec -it curl-custom-sa -c main -- \ 59 | cat /var/run/secrets/kubernetes.io/serviceaccount/token 60 | echo 61 | 62 | # Talk to the API server with custom ServiceAccount 'foo' 63 | # (tip: use 'main' container inside 'curl-custom-sa' pod) 64 | # If RBAC is enabled, it should not be able to list anything 65 | kubectl exec -it curl-custom-sa -c main -- curl localhost:8001/api/v1/pods 66 | 67 | ink "Non mandatory, check expiry date on https://jwt.io/" 68 | kubectl create token foo 69 | -------------------------------------------------------------------------------- /labs/2_authorization/2_0_RBAC_simple.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euxo pipefail 4 | 5 | DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 6 | 7 | . 
"$DIR"/../conf.version.sh 8 | 9 | # if ink is not defined, define it 10 | if [ -z "$(command -v ink)" ]; then 11 | ink() { 12 | echo -e "$@" 13 | } 14 | fi 15 | 16 | ID=$(whoami) 17 | 18 | FOO_NAMESPACE="foo-${ID}" 19 | BAR_NAMESPACE="bar-${ID}" 20 | 21 | PROXY_POD="curl-custom-sa" 22 | 23 | ink "Cleanup" 24 | kubectl delete namespace -l lab=rbac 25 | 26 | ink "Creating namespaces..." 27 | kubectl create namespace "$FOO_NAMESPACE" 28 | kubectl create namespace "$BAR_NAMESPACE" 29 | kubectl label namespace "$FOO_NAMESPACE" lab=rbac 30 | kubectl label namespace "$BAR_NAMESPACE" lab=rbac 31 | 32 | ink "Deploying kubectl-proxy pod in $FOO_NAMESPACE..." 33 | 34 | # Download the kubectl-proxy pod definition 35 | curl -s -o kubectl-proxy.yaml https://raw.githubusercontent.com/k8s-school/k8s-advanced/master/labs/2_authorization/kubectl-proxy.yaml 36 | 37 | # Replace the service account name in the pod definition 38 | sed -i "s/serviceAccountName: foo/serviceAccountName: default/" kubectl-proxy.yaml 39 | 40 | kubectl apply -f kubectl-proxy.yaml -n "$FOO_NAMESPACE" 41 | 42 | ink "Creating services in $FOO_NAMESPACE and $BAR_NAMESPACE..." 43 | kubectl create service clusterip foo-service --tcp=80:80 -n "$FOO_NAMESPACE" || true 44 | kubectl create service clusterip bar-service --tcp=80:80 -n "$BAR_NAMESPACE" || true 45 | 46 | ink "Waiting for kubectl-proxy pod to be ready..." 47 | kubectl wait --for=condition=ready pod -n "$FOO_NAMESPACE" --timeout=60s $PROXY_POD 48 | 49 | ink "Creating RBAC (Role and RoleBinding) in $FOO_NAMESPACE..." 50 | kubectl apply -f - <:/api/v1/namespaces/foo/services 51 | kubectl exec -it shell -- curl localhost:8001/api/v1/namespaces/foo/services 52 | 53 | # Study and create role manifest/service-reader.yaml in ns 'foo' 54 | kubectl apply -f "$DIR/manifest/service-reader.yaml" 55 | 56 | # Create role service-reader.yaml in ns 'bar' 57 | # Use 'kubectl create role' command instead of yaml 58 | kubectl create role service-reader --verb=get --verb=list --resource=services -n bar 59 | 60 | # Create a rolebindind 'service-reader-rb' to bind role foo:service-reader 61 | # to sa (i.e. serviceaccount) foo:default 62 | kubectl create rolebinding service-reader-rb --role=service-reader --serviceaccount=foo:default 63 | 64 | # List service in ns 'foo' from foo:shell 65 | kubectl exec -it -n foo shell -- curl localhost:8001/api/v1/namespaces/foo/services 66 | 67 | # List service in ns 'foo' from bar:shell 68 | kubectl exec -it -n bar shell -- curl localhost:8001/api/v1/namespaces/foo/services 69 | 70 | # Use the patch command, and jsonpatch syntax to add bind foo:service-reader to sa bar.default 71 | # See http://jsonpatch.com for examples 72 | kubectl patch rolebindings.rbac.authorization.k8s.io -n foo service-reader-rb --type='json' \ 73 | -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount","name": "default","namespace": "bar"} }]' 74 | 75 | # List service in ns 'foo' from bar:shell 76 | kubectl exec -it -n bar shell -- curl localhost:8001/api/v1/namespaces/foo/services 77 | -------------------------------------------------------------------------------- /labs/2_authorization/3_RBAC_clusterrole.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | set -e 4 | set -x 5 | 6 | # RBAC clusterrole 7 | # see "kubernetes in action" p362 8 | 9 | DIR=$(cd "$(dirname "$0")"; pwd -P) 10 | 11 | . 
--------------------------------------------------------------------------------
/labs/2_authorization/3_RBAC_clusterrole.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | 
3 | set -e
4 | set -x
5 | 
6 | # RBAC clusterrole
7 | # see "kubernetes in action" p362
8 | 
9 | DIR=$(cd "$(dirname "$0")"; pwd -P)
10 | 
11 | . $DIR/../conf.version.sh
12 | 
13 | NS="baz"
14 | 
15 | # Delete all namespaces, clusterroles, clusterrolebindings and pvs
16 | # with label 'RBAC=clusterrole' to make the current script idempotent
17 | kubectl delete ns -l RBAC=clusterrole
18 | kubectl delete pv,ns,clusterrole,clusterrolebinding -l RBAC=clusterrole
19 | 
20 | # Create namespace '$NS' in yaml, with label "RBAC=clusterrole"
21 | cat <<EOF >/tmp/ns_$NS.yaml
22 | apiVersion: v1
23 | kind: Namespace
24 | metadata:
25 |   name: $NS
26 |   labels:
27 |     RBAC: clusterrole
28 | EOF
29 | kubectl apply -f "/tmp/ns_$NS.yaml"
30 | 
31 | # Use namespace 'baz' in current context
32 | kubectl config set-context --current --namespace=$NS
33 | 
34 | # Create local storage class
35 | kubectl apply -f "$DIR/manifest/local-storage.yaml"
36 | 
37 | # Create a local PersistentVolume on kind-worker:/data/disk1
38 | # with label "RBAC=clusterrole"
39 | # see https://kubernetes.io/docs/concepts/storage/volumes/#local
40 | # WARN: Directory kind-worker:/data/disk1 must exist;
41 | # for the next exercise, also create kind-worker:/data/disk2
42 | NODE="kind-worker"
43 | cat <<EOF >/tmp/pv-1.yaml
44 | apiVersion: v1
45 | kind: PersistentVolume
46 | metadata:
47 |   name: pv-1
48 |   labels:
49 |     RBAC: clusterrole
50 | spec:
51 |   capacity:
52 |     storage: 1Gi
53 |   volumeMode: Filesystem
54 |   accessModes:
55 |   - ReadWriteOnce
56 |   persistentVolumeReclaimPolicy: Retain
57 |   storageClassName: local-storage
58 |   local:
59 |     path: /data/disk1
60 |   nodeAffinity:
61 |     required:
62 |       nodeSelectorTerms:
63 |       - matchExpressions:
64 |         - key: kubernetes.io/hostname
65 |           operator: In
66 |           values:
67 |           - $NODE
68 | EOF
69 | kubectl apply -f "/tmp/pv-1.yaml"
70 | 
71 | # Create clusterrole 'pv-reader' which can get and list resource 'persistentvolumes'
72 | kubectl create clusterrole pv-reader --verb=get,list --resource=persistentvolumes
73 | 
74 | # Add label "RBAC=clusterrole"
75 | kubectl label clusterrole pv-reader "RBAC=clusterrole"
76 | 
77 | # Create pod using image 'k8sschool/kubectl-proxy', named 'shell', in ns '$NS'
78 | kubectl run shell --image=k8sschool/kubectl-proxy:$KUBECTL_PROXY_VERSION -n $NS
79 | 
80 | # Wait for $NS:shell to be in running state
81 | while true
82 | do
83 |     sleep 2
84 |     STATUS=$(kubectl get pods -n $NS shell -o jsonpath="{.status.phase}")
85 |     if [ "$STATUS" = "Running" ]; then
86 |         break
87 |     fi
88 | done
89 | 
90 | # List persistentvolumes at the cluster scope, with user "system:serviceaccount:$NS:default"
91 | kubectl exec -it -n $NS shell -- curl localhost:8001/api/v1/persistentvolumes
92 | 
93 | # Create rolebinding 'pv-reader' which binds clusterrole 'pv-reader' to sa $NS:default
94 | kubectl create rolebinding pv-reader --clusterrole=pv-reader --serviceaccount=$NS:default -n $NS
95 | 
96 | # List again persistentvolumes at the cluster scope, with user "system:serviceaccount:$NS:default"
97 | kubectl exec -it -n $NS shell -- curl localhost:8001/api/v1/persistentvolumes
98 | 
99 | # Why does it not work? Find the solution.
100 | # (Hint: a RoleBinding grants a ClusterRole's rules only inside its own namespace,
101 | # so it cannot open access to cluster-scoped resources such as persistentvolumes;
102 | # a ClusterRoleBinding is needed.)
103 | kubectl delete rolebinding pv-reader -n $NS
104 | kubectl create clusterrolebinding pv-reader --clusterrole=pv-reader --serviceaccount=$NS:default
105 | kubectl label clusterrolebinding pv-reader "RBAC=clusterrole"
106 | 
107 | # List again persistentvolumes at the cluster scope, with user "system:serviceaccount:$NS:default"
108 | kubectl exec -it -n $NS shell -- curl localhost:8001/api/v1/persistentvolumes
109 | 
--------------------------------------------------------------------------------
/labs/2_authorization/4.0_RBAC_gencert.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | 
3 | set -e
4 | set -x
5 | 
6 | DIR=$(cd "$(dirname "$0")"; pwd -P)
7 | 
8 | # ~/src/k8s-school/homefs/.certs
9 | CERT_DIR="$HOME/.certs"
10 | mkdir -p "$CERT_DIR"
11 | 
12 | ORG="k8s-school"
13 | 
14 | # Follow "Use case 1" with ns foo instead of office
15 | # in the certificate subject, CN is the user name and O the group
16 | openssl genrsa -out "$CERT_DIR/employee.key" 2048
17 | openssl req -new -key "$CERT_DIR/employee.key" -out "$CERT_DIR/employee.csr" \
18 |     -subj "/CN=employee/O=$ORG"
19 | 
20 | # Get the CA key pair from the kind cluster:
21 | docker cp kind-control-plane:/etc/kubernetes/pki/ca.crt "$CERT_DIR"
22 | docker cp kind-control-plane:/etc/kubernetes/pki/ca.key "$CERT_DIR"
23 | # Or on clus0-0@gcp:
24 | # sudo cp /etc/kubernetes/pki/ca.crt $HOME/.certs/ && sudo chown $USER $HOME/.certs/ca.crt
25 | # sudo cp /etc/kubernetes/pki/ca.key $HOME/.certs/ && sudo chown $USER $HOME/.certs/ca.key
26 | 
27 | openssl x509 -req -in "$CERT_DIR/employee.csr" -CA "$CERT_DIR/ca.crt" \
28 |     -CAkey "$CERT_DIR/ca.key" -CAcreateserial -out "$CERT_DIR/employee.crt" -days 500
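29 | 
30 | # Verification sketch: inspect the signed certificate's subject and validity
31 | # (standard openssl, nothing k8s-specific):
32 | # openssl x509 -in "$CERT_DIR/employee.crt" -noout -subject -dates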
"$DIR/manifest/rolebinding-deployment-manager.yaml" 49 | 50 | kubectl --context=employee-context run --image=ubuntu -- sleep infinity 51 | kubectl --context=employee-context get pods 52 | 53 | kubectl --context=employee-context get pods --namespace=default || \ 54 | >&2 echo "EXPECTED ERROR: failed to get pods" 55 | 56 | # With employee user, try to run a shell in a pod in ns 'office' 57 | kubectl --context=employee-context run -it --image=busybox shell sh || \ 58 | >&2 echo "EXPECTED ERROR: failed to start shell" 59 | 60 | # Create a local PersistentVolume on kube-node-1:/data/disk2 61 | # with label "RBAC=user" 62 | # see https://kubernetes.io/docs/concepts/storage/volumes/#local 63 | # WARN: Directory kube-node-1:/data/disk2, must exist 64 | 65 | cat </tmp/task-pv.yaml 66 | apiVersion: v1 67 | kind: PersistentVolume 68 | metadata: 69 | name: task-pv 70 | labels: 71 | RBAC: user 72 | spec: 73 | capacity: 74 | storage: 10Gi 75 | volumeMode: Filesystem 76 | accessModes: 77 | - ReadWriteOnce 78 | persistentVolumeReclaimPolicy: Retain 79 | storageClassName: local-storage 80 | local: 81 | path: /data/disk2 82 | nodeAffinity: 83 | required: 84 | nodeSelectorTerms: 85 | - matchExpressions: 86 | - key: kubernetes.io/hostname 87 | operator: In 88 | values: 89 | - $PV_NODE 90 | EOF 91 | kubectl apply -f "/tmp/task-pv.yaml" 92 | 93 | kubectl describe pv task-pv 94 | 95 | # With employee user, create a PersistentVolumeClaim which use pv-1 in ns 'foo' 96 | # See https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim 97 | kubectl --context=employee-context apply -f "$DIR/manifest/pvc.yaml" || 98 | >&2 echo "EXPECTED ERROR: failed to create pvc" 99 | 100 | # Edit role-deployment-manager.yaml to enable pvc management 101 | kubectl apply -f "$DIR/manifest/role-deployment-manager-pvc.yaml" 102 | 103 | 104 | # WHICH CONTEXT???? employee or admin???? 105 | # Use context employee-context 106 | kubectl config use-context employee-context 107 | 108 | # Try again to create a PersistentVolumeClaim which use pv-1 in ns 'foo' 109 | kubectl --context=employee-context apply -f "$DIR/manifest/pvc.yaml" 110 | 111 | # Launch the nginx pod which attach the pvc 112 | kubectl apply -n office -f https://k8s.io/examples/pods/storage/pv-pod.yaml 113 | 114 | # Wait for office:task-pv-pod to be in running state 115 | kubectl wait --for=condition=Ready -n office --timeout=180s pods task-pv-pod 116 | 117 | # Launch a command in task-pv-pod 118 | kubectl exec -it task-pv-pod -- echo "SUCCESS in lauching command in task-pv-pod" 119 | 120 | # Switch back to context kubernetes-admin@kubernetes 121 | kubectl config use-context "$KIND_CONTEXT" 122 | 123 | # Try to get pv using 'employee-context' 124 | kubectl --context=employee-context get pv || 125 | >&2 echo "EXPECTED ERROR: failed to get pv" 126 | 127 | # Create a 'clusterrolebinding' between clusterrole=pv-reader and group=$ORG 128 | kubectl create clusterrolebinding "pv-reader-$ORG" --clusterrole=pv-reader --group="$ORG" 129 | kubectl label clusterrolebinding "pv-reader-$ORG" "RBAC=user" 130 | 131 | # Try to get pv using 'employee-context' 132 | kubectl --context=employee-context get pv 133 | 134 | # Exercice: remove pod resource for deployment-manager role and check what happen when creating a deployment, then a pod? 
135 | # kubectl apply -f manifest/role-deployment-manager-nopod-rbac.yaml
136 | # Works ok:
137 | # kubectl --context=employee-context create deployment --image=nginx nginx
138 | # Does not work:
139 | # kubectl --context=employee-context run --image=nginx nginx
140 | # Error from server (Forbidden): pods is forbidden: User "employee" cannot create resource "pods" in API group "" in the namespace "office"
141 | # => Answer: the deployment runs ok, but it is not possible to create a pod directly (think of the controllers' role)
142 | 
--------------------------------------------------------------------------------
/labs/2_authorization/5_install-dashboard.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | set -euxo pipefail
4 | 
5 | # Install dashboard and set up RBAC
6 | # see https://github.com/kubernetes/dashboard
7 | 
8 | DIR=$(cd "$(dirname "$0")"; pwd -P)
9 | 
10 | kubectl delete ns -l "RBAC=dashboard"
11 | 
12 | kubectl create ns kubernetes-dashboard
13 | kubectl label ns kubernetes-dashboard "RBAC=dashboard"
14 | 
15 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
16 | kubectl apply -f "$DIR"/manifest/sa_dashboard.yaml
17 | 
18 | echo "Get token"
19 | kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
20 | 
21 | 
22 | echo -e "Run:\n \
23 |     $ kubectl proxy \n\
24 |     Now access Dashboard at: \n\
25 |     http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/."
26 | 
--------------------------------------------------------------------------------
/labs/2_authorization/A_rbac_tools_demo.cast:
--------------------------------------------------------------------------------
1 | {"version": 2, "width": 55, "height": 28, "timestamp": 1675080743, "env": {"SHELL": "/usr/bin/zsh", "TERM": "xterm-256color"}}
2 | [0.188793, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"]
3 | [0.189324, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"]
4 | [0.311866, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"]
5 | [0.311965, "o", "\u001b[?1h\u001b="]
6 | [0.312024, "o", "\u001b[?2004h"]
7 | [4.848356, "o", "\u001b[7m# Install rbac-tool\u001b[27m\r\r\n\u001b[K"]
8 | [5.612787, "o", "\u001b[A\u001b[18C\u001b[27m#\u001b[27m \u001b[27mI\u001b[27mn\u001b[27ms\u001b[27mt\u001b[27ma\u001b[27ml\u001b[27ml\u001b[27m \u001b[27mr\u001b[27mb\u001b[27ma\u001b[27mc\u001b[27m-\u001b[27mt\u001b[27mo\u001b[27mo\u001b[27ml\u001b[1B\r\u001b[K"]
9 | [5.612961, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"]
10 | [5.613963, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"]
11 | [5.614138, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"]
12 | [5.679356, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"]
13 | [5.679404, "o", "\u001b[?1h\u001b=\u001b[?2004h"]
14 | [12.553343, "o", "\u001b[7mcurl 
https://raw.githubusercontent.co\u001b[7mm\u001b[7m/alcideio/rbac-tool/master/download.sh | bash\u001b[27m\u001b[K\r\r\n\u001b[K"] 15 | [13.39313, "o", "\u001b[A\u001b[A\u001b[18C\u001b[27mc\u001b[27mu\u001b[27mr\u001b[27ml\u001b[27m \u001b[27mh\u001b[27mt\u001b[27mt\u001b[27mp\u001b[27ms\u001b[27m:\u001b[27m/\u001b[27m/\u001b[27mr\u001b[27ma\u001b[27mw\u001b[27m.\u001b[27mg\u001b[27mi\u001b[27mt\u001b[27mh\u001b[27mu\u001b[27mb\u001b[27mu\u001b[27ms\u001b[27me\u001b[27mr\u001b[27mc\u001b[27mo\u001b[27mn\u001b[27mt\u001b[27me\u001b[27mn\u001b[27mt\u001b[27m.\u001b[27mc\u001b[27mom\u001b[27m/\u001b[27ma\u001b[27ml\u001b[27mc\u001b[27mi\u001b[27md\u001b[27me\u001b[27mi\u001b[27mo\u001b[27m/\u001b[27mr\u001b[27mb\u001b[27ma\u001b[27mc\u001b[27m-\u001b[27mt\u001b[27mo\u001b[27mo\u001b[27ml\u001b[27m/\u001b[27mm\u001b[27ma\u001b[27ms\u001b[27mt\u001b[27me\u001b[27mr\u001b[27m/\u001b[27md\u001b[27mo\u001b[27mw\u001b[27mn\u001b[27ml\u001b[27mo\u001b[27ma\u001b[27md\u001b[27m.\u001b[27ms\u001b[27mh\u001b[27m \u001b[27m|\u001b[27m \u001b[27mb\u001b[27ma\u001b[27ms\u001b[27mh\u001b[1B\r\u001b[K"] 16 | [13.393343, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"] 17 | [13.393734, "o", "\u001b]2;curl https://raw.githubusercontent.com/alcideio/rbac-tool/master/download.sh\u0007\u001b]1;curl\u0007"] 18 | [13.400888, "o", " % Total % Receiv"] 19 | [13.400921, "o", "ed % Xferd Average Speed "] 20 | [13.401109, "o", " Time Time Time Current\r\n Dload Upload"] 21 | [13.401163, "o", " Total Spent Left Speed\r\n\r "] 22 | [13.401242, "o", " 0 0 0 0 0 0 0"] 23 | [13.4013, "o", " 0 --:--:-- --:--:-- --:--:-- 0"] 24 | [13.485975, "o", "\r100 9446 100 9446 "] 25 | [13.486085, "o", " 0 0 108k 0 --:--:-- --:--:-- --:--:-- 108k\r\n"] 26 | [13.500904, "o", "alcideio/rbac-tool info checking GitHub for latest tag\r\n"] 27 | [13.893077, "o", "alcideio/rbac-tool info found version: 1.14.0 for v1.14.0/linux/amd64\r\n"] 28 | [15.122013, "o", "alcideio/rbac-tool info installed ./bin/rbac-tool\r\n"] 29 | [15.126195, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 30 | [15.126301, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 31 | [15.16822, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 32 | [15.168342, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 33 | [18.891193, "o", "\u001b[7m# Add rbac-tool to PATH\u001b[27m\r\r\n\u001b[K"] 34 | [19.757917, "o", "\u001b[A\u001b[18C\u001b[27m#\u001b[27m \u001b[27mA\u001b[27md\u001b[27md\u001b[27m \u001b[27mr\u001b[27mb\u001b[27ma\u001b[27mc\u001b[27m-\u001b[27mt\u001b[27mo\u001b[27mo\u001b[27ml\u001b[27m \u001b[27mt\u001b[27mo\u001b[27m \u001b[27mP\u001b[27mA\u001b[27mT\u001b[27mH\u001b[1B\r\u001b[K"] 35 | [19.757973, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"] 36 | [19.758477, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007"] 37 | [19.758494, "o", "\u001b]1;..authorization\u0007"] 38 | [19.821885, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 39 | [19.821919, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 40 | 
[23.560059, "o", "\u001b[7mexport PATH=$PWD/bin:$PATH\u001b[27m\r\r\n\u001b[K"] 41 | [24.544043, "o", "\u001b[A\u001b[18C\u001b[27me\u001b[27mx\u001b[27mp\u001b[27mo\u001b[27mr\u001b[27mt\u001b[27m \u001b[27mP\u001b[27mA\u001b[27mT\u001b[27mH\u001b[27m=\u001b[27m$\u001b[27mP\u001b[27mW\u001b[27mD\u001b[27m/\u001b[27mb\u001b[27mi\u001b[27mn\u001b[27m:\u001b[27m$\u001b[27mP\u001b[27mA\u001b[27mT\u001b[27mH\u001b[1B\r\u001b[K"] 42 | [24.544158, "o", "\u001b[?1l\u001b>"] 43 | [24.544236, "o", "\u001b[?2004l\r\r\n"] 44 | [24.545741, "o", "\u001b]2;export PATH=$PWD/bin:$PATH \u0007\u001b]1;export\u0007"] 45 | [24.547503, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 46 | [24.547798, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 47 | [24.630498, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 48 | [24.630604, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 49 | [28.429147, "o", "\u001b[7m# Generate bash completion\u001b[27m\r\r\n\u001b[K"] 50 | [29.206037, "o", "\u001b[A\u001b[18C\u001b[27m#\u001b[27m \u001b[27mG\u001b[27me\u001b[27mn\u001b[27me\u001b[27mr\u001b[27ma\u001b[27mt\u001b[27me\u001b[27m \u001b[27mb\u001b[27ma\u001b[27ms\u001b[27mh\u001b[27m \u001b[27mc\u001b[27mo\u001b[27mm\u001b[27mp\u001b[27ml\u001b[27me\u001b[27mt\u001b[27mi\u001b[27mo\u001b[27mn\u001b[1B\r\u001b[K\u001b[?1l\u001b>"] 51 | [29.206124, "o", "\u001b[?2004l\r\r\n"] 52 | [29.207093, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 53 | [29.207299, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 54 | [29.310041, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 55 | [29.310215, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 56 | [32.719941, "o", "\u001b[7msource <(rbac-tool bash-completion)\u001b[27m\r\r\n\u001b[K"] 57 | [33.591137, "o", "\u001b[A\u001b[18C\u001b[27ms\u001b[27mo\u001b[27mu\u001b[27mr\u001b[27mc\u001b[27me\u001b[27m \u001b[27m<\u001b[27m(\u001b[27mr\u001b[27mb\u001b[27ma\u001b[27mc\u001b[27m-\u001b[27mt\u001b[27mo\u001b[27mo\u001b[27ml\u001b[27m \u001b[27mb\u001b[27ma\u001b[27ms\u001b[27mh\u001b[27m-\u001b[27mc\u001b[27mo\u001b[27mm\u001b[27mp\u001b[27ml\u001b[27me\u001b[27mt\u001b[27mi\u001b[27mo\u001b[27mn\u001b[27m)\u001b[1B\r\u001b[K"] 58 | [33.59162, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"] 59 | [33.592849, "o", "\u001b]2;source <(rbac-tool bash-completion)\u0007"] 60 | [33.593115, "o", "\u001b]1;source\u0007"] 61 | [33.735011, "o", "/proc/self/fd/11:type:834: bad option: -t\r\n"] 62 | [33.735592, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 63 | [33.735677, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 64 | [33.782653, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 65 | [33.782793, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 66 | [37.542627, 
"o", "\u001b[7m# Analyze RBAC permissions of the clu\u001b[7ms\u001b[7mter pointed by current context\u001b[27m\u001b[K\r\r\n\u001b[K"] 67 | [38.464827, "o", "\u001b[A\u001b[A\u001b[18C\u001b[27m#\u001b[27m \u001b[27mA\u001b[27mn\u001b[27ma\u001b[27ml\u001b[27my\u001b[27mz\u001b[27me\u001b[27m \u001b[27mR\u001b[27mB\u001b[27mA\u001b[27mC\u001b[27m \u001b[27mp\u001b[27me\u001b[27mr\u001b[27mm\u001b[27mi\u001b[27ms\u001b[27ms\u001b[27mi\u001b[27mo\u001b[27mn\u001b[27ms\u001b[27m \u001b[27mo\u001b[27mf\u001b[27m \u001b[27mt\u001b[27mh\u001b[27me\u001b[27m \u001b[27mc\u001b[27ml\u001b[27mus\u001b[27mt\u001b[27me\u001b[27mr\u001b[27m \u001b[27mp\u001b[27mo\u001b[27mi\u001b[27mn\u001b[27mt\u001b[27me\u001b[27md\u001b[27m \u001b[27mb\u001b[27my\u001b[27m \u001b[27mc\u001b[27mu\u001b[27mr\u001b[27mr\u001b[27me\u001b[27mn\u001b[27mt\u001b[27m \u001b[27mc\u001b[27mo\u001b[27mn\u001b[27mt\u001b[27me\u001b[27mx\u001b[27mt\u001b[1B\r\u001b[K"] 68 | [38.465113, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"] 69 | [38.465777, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 70 | [38.465974, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 71 | [38.572549, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 72 | [38.5726, "o", "\u001b[?1h\u001b="] 73 | [38.572749, "o", "\u001b[?2004h"] 74 | [43.588048, "o", "\u001b[7mrbac-tool analysis | less\u001b[27m\r\r\n\u001b[K"] 75 | [44.794795, "o", "\u001b[A\u001b[18C\u001b[27mr\u001b[27mb\u001b[27ma\u001b[27mc\u001b[27m-\u001b[27mt\u001b[27mo\u001b[27mo\u001b[27ml\u001b[27m \u001b[27ma\u001b[27mn\u001b[27ma\u001b[27ml\u001b[27my\u001b[27ms\u001b[27mi\u001b[27ms\u001b[27m \u001b[27m|\u001b[27m \u001b[27ml\u001b[27me\u001b[27ms\u001b[27ms\u001b[1B\r\u001b[K"] 76 | [44.795037, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"] 77 | [44.795883, "o", "\u001b]2;rbac-tool analysis | less\u0007\u001b]1;rbac-tool\u0007"] 78 | [44.808334, "o", "\u001b[?1049h\u001b[22;0;0t\u001b[?1h\u001b=\r"] 79 | [45.096695, "o", "AnalysisConfigInfo:\u001b[m\r\n Description: Rapid7 InsightCloudSec default RBAC anal\u001b[m \bysis rules\u001b[m\r\n Name: InsightCloudSec\u001b[m\r\n Uuid: 9371719c-1031-468c-91ed-576fdc9e9f59\u001b[m\r\nCreatedOn: \"2023-01-30T13:13:08+01:00\"\u001b[m\r\nFindings:\u001b[m\r\n- Finding:\u001b[m\r\n Message: Capture principals that can read secrets\u001b[m\r\n Recommendation: |-\u001b[m\r\n Review the policy rules for 'kafka/kafka-cluster-\u001b[m \bentity-operator' (ServiceAccount) by running 'rbac-tool\u001b[m \b policy-rules -e kafka-cluster-entity-operator'.\u001b[m\r\n You can visualize the RBAC policy by running 'rba\u001b[m \bc-tool viz --include-subjects=kafka-cluster-entity-oper\u001b[m \bator'\u001b[m\r\n References: []\u001b[m\r\n RuleName: Secret Readers\u001b[m\r\n RuleUuid: 3c942117-f4ff-423a-83d4-f7d6b75a6b78\u001b[m\r\n Severity: HIGH\u001b[m\r\n Subject:\u001b[m\r\n kind: ServiceAccount\u001b[m\r\n name: kafka-cluster-entity-operator\u001b[m\r\n namespace: kafka\u001b[m\r\n- Finding:\u001b[m\r\n Message: Capture principals that can read secrets\u001b[m\r\n Recommendation: |-\u001b[m\r\n:\u001b[K"] 80 | [63.002688, "o", "\r\u001b[K\u001b[?1l\u001b>\u001b[?1049l\u001b[23;0;0t"] 81 | [63.02312, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 82 | [63.023263, 
"o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 83 | [63.070653, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 84 | [63.070812, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 85 | [69.718199, "o", "\u001b[7m# Get ClusterRole used by ServiceAcco\u001b[7mu\u001b[7mnt system:serviceaccount:kube-system:replicaset-contro\u001b[7ml\u001b[7mler\u001b[27m\u001b[K\r\r\n\u001b[7m# Use regex to match service account name\u001b[27m\u001b[K\r\r\n\u001b[K"] 86 | [70.84196, "o", "\u001b[4A\u001b[18C\u001b[27m#\u001b[27m \u001b[27mG\u001b[27me\u001b[27mt\u001b[27m \u001b[27mC\u001b[27ml\u001b[27mu\u001b[27ms\u001b[27mt\u001b[27me\u001b[27mr\u001b[27mR\u001b[27mo\u001b[27ml\u001b[27me\u001b[27m \u001b[27mu\u001b[27ms\u001b[27me\u001b[27md\u001b[27m \u001b[27mb\u001b[27my\u001b[27m \u001b[27mS\u001b[27me\u001b[27mr\u001b[27mv\u001b[27mi\u001b[27mc\u001b[27me\u001b[27mA\u001b[27mc\u001b[27mc\u001b[27mou\u001b[27mn\u001b[27mt\u001b[27m \u001b[27ms\u001b[27my\u001b[27ms\u001b[27mt\u001b[27me\u001b[27mm\u001b[27m:\u001b[27ms\u001b[27me\u001b[27mr\u001b[27mv\u001b[27mi\u001b[27mc\u001b[27me\u001b[27ma\u001b[27mc\u001b[27mc\u001b[27mo\u001b[27mu\u001b[27mn\u001b[27mt\u001b[27m:\u001b[27mk\u001b[27mu\u001b[27mb\u001b[27me\u001b[27m-\u001b[27ms\u001b[27my\u001b[27ms\u001b[27mt\u001b[27me\u001b[27mm\u001b[27m:\u001b[27mr\u001b[27me\u001b[27mp\u001b[27ml\u001b[27mi\u001b[27mc\u001b[27ma\u001b[27ms\u001b[27me\u001b[27mt\u001b[27m-\u001b[27mc\u001b[27mo\u001b[27mn\u001b[27mt\u001b[27mr\u001b[27mol\u001b[27ml\u001b[27me\u001b[27mr\u001b[1B\r\u001b[27m#\u001b[27m \u001b[27mU\u001b[27ms\u001b[27me\u001b[27m \u001b[27mr\u001b[27me\u001b[27mg\u001b[27me\u001b[27mx\u001b[27m \u001b[27mt\u001b[27mo\u001b[27m \u001b[27mm\u001b[27ma\u001b[27mt\u001b[27mc\u001b[27mh\u001b[27m \u001b[27ms\u001b[27me\u001b[27mr\u001b[27mv\u001b[27mi\u001b[27mc\u001b[27me\u001b[27m \u001b[27ma\u001b[27mc\u001b[27mc\u001b[27mo\u001b[27mu\u001b[27mn\u001b[27mt\u001b[27m \u001b[27mn\u001b[27ma\u001b[27mm\u001b[27me\u001b[1B\r\u001b[K"] 87 | [70.842214, "o", "\u001b[?1l\u001b>"] 88 | [70.842405, "o", "\u001b[?2004l\r\r\n"] 89 | [70.843013, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 90 | [70.84321, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 91 | [70.926873, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 92 | [70.927008, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 93 | [75.550459, "o", "\u001b[7mrbac-tool lookup -e \".*replicaset.*\"\u001b[27m\r\r\n\u001b[K"] 94 | [76.577238, "o", "\u001b[A\u001b[18C\u001b[27mr\u001b[27mb\u001b[27ma\u001b[27mc\u001b[27m-\u001b[27mt\u001b[27mo\u001b[27mo\u001b[27ml\u001b[27m \u001b[27ml\u001b[27mo\u001b[27mo\u001b[27mk\u001b[27mu\u001b[27mp\u001b[27m \u001b[27m-\u001b[27me\u001b[27m \u001b[27m\"\u001b[27m.\u001b[27m*\u001b[27mr\u001b[27me\u001b[27mp\u001b[27ml\u001b[27mi\u001b[27mc\u001b[27ma\u001b[27ms\u001b[27me\u001b[27mt\u001b[27m.\u001b[27m*\u001b[27m\"\u001b[1B\r\u001b[K\u001b[?1l\u001b>"] 95 | [76.577286, "o", "\u001b[?2004l\r\r\n"] 96 | [76.577769, 
"o", "\u001b]2;rbac-tool lookup -e \".*replicaset.*\"\u0007\u001b]1;rbac-tool\u0007"] 97 | [76.727944, "o", " SUBJECT | SUBJECT TYPE | SCOPE | NAMESPACE | ROLE \r\n"] 98 | [76.728055, "o", "+-----------------------+----------------+-------------+-----------+-----------------------------------------+\r\n replicaset-controller | ServiceAccount | ClusterRole | "] 99 | [76.728128, "o", " | system:controller:replicaset-controller \r\n"] 100 | [76.730927, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 101 | [76.731044, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 102 | [76.773991, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 103 | [76.774023, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 104 | [84.085616, "o", "\u001b[7m# List Policy Rules For ServiceAccoun\u001b[7mt\u001b[7m replicaset-controller\u001b[27m\u001b[K\r\r\n\u001b[7m# It can create and delete Pods and update replicasets,\u001b[7m \u001b[7mthis is consistent\u001b[27m\u001b[K\r\r\n\u001b[K"] 105 | [85.735272, "o", "\u001b[4A\u001b[18C\u001b[27m#\u001b[27m \u001b[27mL\u001b[27mi\u001b[27ms\u001b[27mt\u001b[27m \u001b[27mP\u001b[27mo\u001b[27ml\u001b[27mi\u001b[27mc\u001b[27my\u001b[27m \u001b[27mR\u001b[27mu\u001b[27ml\u001b[27me\u001b[27ms\u001b[27m \u001b[27mF\u001b[27mo\u001b[27mr\u001b[27m \u001b[27mS\u001b[27me\u001b[27mr\u001b[27mv\u001b[27mi\u001b[27mc\u001b[27me\u001b[27mA\u001b[27mc\u001b[27mc\u001b[27mo\u001b[27mu\u001b[27mnt\u001b[27m \u001b[27mr\u001b[27me\u001b[27mp\u001b[27ml\u001b[27mi\u001b[27mc\u001b[27ma\u001b[27ms\u001b[27me\u001b[27mt\u001b[27m-\u001b[27mc\u001b[27mo\u001b[27mn\u001b[27mt\u001b[27mr\u001b[27mo\u001b[27ml\u001b[27ml\u001b[27me\u001b[27mr\u001b[1B\r\u001b[27m#\u001b[27m \u001b[27mI\u001b[27mt\u001b[27m \u001b[27mc\u001b[27ma\u001b[27mn\u001b[27m \u001b[27mc\u001b[27mr\u001b[27me\u001b[27ma\u001b[27mt\u001b[27me\u001b[27m \u001b[27ma\u001b[27mn\u001b[27md\u001b[27m \u001b[27md\u001b[27me\u001b[27ml\u001b[27me\u001b[27mt\u001b[27me\u001b[27m \u001b[27mP\u001b[27mo\u001b[27md\u001b[27ms\u001b[27m \u001b[27ma\u001b[27mn\u001b[27md\u001b[27m \u001b[27mu\u001b[27mp\u001b[27md\u001b[27ma\u001b[27mt\u001b[27me\u001b[27m \u001b[27mr\u001b[27me\u001b[27mp\u001b[27ml\u001b[27mi\u001b[27mc\u001b[27ma\u001b[27ms\u001b[27me\u001b[27mt\u001b[27ms\u001b[27m, \u001b[27mt\u001b[27mh\u001b[27mi\u001b[27ms\u001b[27m \u001b[27mi\u001b[27ms\u001b[27m \u001b[27mc\u001b[27mo\u001b[27mn\u001b[27ms\u001b[27mi\u001b[27ms\u001b[27mt\u001b[27me\u001b[27mn\u001b[27mt\u001b[1B\r\u001b[K\u001b[?1l\u001b>"] 106 | [85.73538, "o", "\u001b[?2004l\r\r\n"] 107 | [85.735733, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 108 | [85.735824, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 109 | [85.781361, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 110 | [85.78146, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 111 | [91.941013, "o", "\u001b[7mrbac-tool policy-rules -e \"replicaset\u001b[7m-\u001b[7mcontroller\"\u001b[27m\u001b[K\r\r\n\u001b[K"] 112 | [93.294917, 
"o", "\u001b[A\u001b[A\u001b[18C\u001b[27mr\u001b[27mb\u001b[27ma\u001b[27mc\u001b[27m-\u001b[27mt\u001b[27mo\u001b[27mo\u001b[27ml\u001b[27m \u001b[27mp\u001b[27mo\u001b[27ml\u001b[27mi\u001b[27mc\u001b[27my\u001b[27m-\u001b[27mr\u001b[27mu\u001b[27ml\u001b[27me\u001b[27ms\u001b[27m \u001b[27m-\u001b[27me\u001b[27m \u001b[27m\"\u001b[27mr\u001b[27me\u001b[27mp\u001b[27ml\u001b[27mi\u001b[27mc\u001b[27ma\u001b[27ms\u001b[27me\u001b[27mt-\u001b[27mc\u001b[27mo\u001b[27mn\u001b[27mt\u001b[27mr\u001b[27mo\u001b[27ml\u001b[27ml\u001b[27me\u001b[27mr\u001b[27m\"\u001b[1B\r\u001b[K"] 113 | [93.295058, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"] 114 | [93.296534, "o", "\u001b]2;rbac-tool policy-rules -e \"replicaset-controller\"\u0007\u001b]1;rbac-tool\u0007"] 115 | [93.47672, "o", " TYPE |"] 116 | [93.476809, "o", " SUBJECT | VERBS | NAMESPACE | API GROUP | KIND | NAMES | NONRESOURCEURI | ORIGINATED FROM \r\n+----------------+-----------------------+--------+-----------+---------------+------------------------+-------+----------------+-------------------------------------------------------+\r\n ServiceAccount | replicaset-controller "] 117 | [93.476846, "o", "| create | * | core | events | | | ClusterRoles>>system:controller:replicaset-controller "] 118 | [93.476875, "o", "\r\n ServiceAccount | replicaset-controller | "] 119 | [93.476922, "o", "create | * | core | pods | "] 120 | [93.476975, "o", " | | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount "] 121 | [93.477018, "o", "| replicaset-controller | create | "] 122 | [93.477061, "o", "* | events.k8s.io | events | |"] 123 | [93.477149, "o", " | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | delete | * | core | pods | |"] 124 | [93.477219, "o", " | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | get | * | apps | replicasets | |"] 125 | [93.477262, "o", " | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | get "] 126 | [93.477307, "o", " | * | extensions | replicasets | | | "] 127 | [93.477346, "o", "ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | list | "] 128 | [93.477383, "o", "* | apps | replicasets | | "] 129 | [93.477421, "o", "| ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | list "] 130 | [93.477458, "o", "| * | core | pods | | "] 131 | [93.477496, "o", " | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | list "] 132 | [93.47754, "o", " | * | extensions | replicasets | | |"] 133 | [93.47758, "o", " ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | patch "] 134 | [93.477618, "o", "| * | core | events | | "] 135 | [93.477664, "o", " | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | patch | "] 136 | [93.477719, "o", "* | core | pods | | | ClusterRoles>>system:controller:replicaset-controller \r\n "] 137 | [93.477758, "o", "ServiceAccount | replicaset-controller | patch | * | events.k8s.io | "] 138 | [93.477798, "o", "events | | | ClusterRoles>>system:controller:replicaset-controller \r\n "] 139 | [93.477836, "o", "ServiceAccount | replicaset-controller | update | * | apps |"] 140 | [93.477869, "o", " replicasets | | | ClusterRoles>>system:controller:replicaset-controller"] 141 | [93.477901, "o", " \r\n 
ServiceAccount | replicaset-controller | update |"] 142 | [93.477932, "o", " * | apps | replicasets/finalizers | "] 143 | [93.477964, "o", " | | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount "] 144 | [93.477978, "o", "| replicaset-controller | update "] 145 | [93.478018, "o", "| * | apps | replicasets/status | | "] 146 | [93.478052, "o", " | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller "] 147 | [93.478083, "o", "| update | * | core | events "] 148 | [93.478139, "o", " | | | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | update"] 149 | [93.478183, "o", " | * | events.k8s.io | events | | |"] 150 | [93.478225, "o", " ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | update |"] 151 | [93.478267, "o", " * | extensions | replicasets | | | "] 152 | [93.478301, "o", "ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | update"] 153 | [93.478332, "o", " | * | extensions | replicasets/finalizers | "] 154 | [93.478364, "o", " | | ClusterRoles>>system:controller:replicaset-controller \r\n "] 155 | [93.478396, "o", "ServiceAccount | replicaset-controller | update | * | "] 156 | [93.478429, "o", "extensions | replicasets/status | | | "] 157 | [93.478465, "o", "ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount | replicaset-controller | watch "] 158 | [93.478498, "o", "| * | apps | replicasets | "] 159 | [93.47853, "o", "| | ClusterRoles>>system:controller:replicaset-controller \r\n ServiceAccount |"] 160 | [93.478562, "o", " replicaset-controller | watch | * | core "] 161 | [93.478593, "o", "| pods | | | ClusterRoles>>system:controller:replicaset-controller "] 162 | [93.478628, "o", " \r\n ServiceAccount | replicaset-controller | watch |"] 163 | [93.478671, "o", " * | extensions | replicasets | | | "] 164 | [93.478762, "o", "ClusterRoles>>system:controller:replicaset-controller \r\n"] 165 | [93.481429, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 166 | [93.481514, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007"] 167 | [93.481576, "o", "\u001b]1;..authorization\u0007"] 168 | [93.524031, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 169 | [93.52417, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 170 | [109.916234, "o", "\u001b[7m# Shows which subjects (user/group/se\u001b[7mr\u001b[7mviceaccount) have RBAC permissions to perform an actio\u001b[7mn\u001b[27m\u001b[K\r\r\n\u001b[K"] 171 | [110.838394, "o", "\u001b[3A\u001b[18C\u001b[27m#\u001b[27m \u001b[27mS\u001b[27mh\u001b[27mo\u001b[27mw\u001b[27ms\u001b[27m \u001b[27mw\u001b[27mh\u001b[27mi\u001b[27mc\u001b[27mh\u001b[27m \u001b[27ms\u001b[27mu\u001b[27mb\u001b[27mj\u001b[27me\u001b[27mc\u001b[27mt\u001b[27ms\u001b[27m \u001b[27m(\u001b[27mu\u001b[27ms\u001b[27me\u001b[27mr\u001b[27m/\u001b[27mg\u001b[27mr\u001b[27mo\u001b[27mu\u001b[27mp\u001b[27m/\u001b[27ms\u001b[27mer\u001b[27mv\u001b[27mi\u001b[27mc\u001b[27me\u001b[27ma\u001b[27mc\u001b[27mc\u001b[27mo\u001b[27mu\u001b[27mn\u001b[27mt\u001b[27m)\u001b[27m \u001b[27mh\u001b[27ma\u001b[27mv\u001b[27me\u001b[27m \u001b[27mR\u001b[27mB\u001b[27mA\u001b[27mC\u001b[27m 
\u001b[27mp\u001b[27me\u001b[27mr\u001b[27mm\u001b[27mi\u001b[27ms\u001b[27ms\u001b[27mi\u001b[27mo\u001b[27mn\u001b[27ms\u001b[27m \u001b[27mt\u001b[27mo\u001b[27m \u001b[27mp\u001b[27me\u001b[27mr\u001b[27mf\u001b[27mo\u001b[27mr\u001b[27mm\u001b[27m \u001b[27ma\u001b[27mn\u001b[27m \u001b[27ma\u001b[27mc\u001b[27mt\u001b[27mi\u001b[27mon\u001b[1B\r\u001b[K"] 172 | [110.839002, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"] 173 | [110.840195, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 174 | [110.840644, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007\u001b]1;..authorization\u0007"] 175 | [110.911721, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 176 | [110.91179, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 177 | [114.627506, "o", "\u001b[7mrbac-tool who-can create pods\u001b[27m\r\r\n\u001b[K"] 178 | [115.570791, "o", "\u001b[A\u001b[18C\u001b[27mr\u001b[27mb\u001b[27ma\u001b[27mc\u001b[27m-\u001b[27mt\u001b[27mo\u001b[27mo\u001b[27ml\u001b[27m \u001b[27mw\u001b[27mh\u001b[27mo\u001b[27m-\u001b[27mc\u001b[27ma\u001b[27mn\u001b[27m \u001b[27mc\u001b[27mr\u001b[27me\u001b[27ma\u001b[27mt\u001b[27me\u001b[27m \u001b[27mp\u001b[27mo\u001b[27md\u001b[27ms\u001b[1B\r\u001b[K"] 179 | [115.570939, "o", "\u001b[?1l\u001b>\u001b[?2004l\r\r\n"] 180 | [115.571959, "o", "\u001b]2;rbac-tool who-can create pods\u0007\u001b]1;rbac-tool\u0007"] 181 | [115.738841, "o", " TYPE | SUBJECT | NAMESPACE \r\n+----------------+----------------------------------------+--------------------+\r\n Group "] 182 | [115.738958, "o", "| system:masters | \r\n ServiceAccount | argo-workflows-workflow-controller | default \r\n ServiceAccount | daemon-set-controller | kube-system \r\n ServiceAccount | job-controller | kube-system \r\n "] 183 | [115.739007, "o", "ServiceAccount | local-path-provisioner-service-account | local-path-storage \r\n ServiceAccount | persistent-volume-binder | kube-system "] 184 | [115.739062, "o", " \r\n ServiceAccount | replicaset-controller | kube-system \r\n ServiceAccount | replication-controller | "] 185 | [115.739115, "o", "kube-system \r\n ServiceAccount | spark | default \r\n ServiceAccount"] 186 | [115.739172, "o", " | statefulset-controller | kube-system \r\n ServiceAccount | strimzi-cluster-operator | kafka \r\n"] 187 | [115.741988, "o", "\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r"] 188 | [115.742131, "o", "\u001b]2;fjammes@clrinfopo18:~/src/k8s-advanced/2_authorization\u0007"] 189 | [115.7422, "o", "\u001b]1;..authorization\u0007"] 190 | [115.787483, "o", "\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b[39m\u001b[0m\u001b[49m\u001b[40m\u001b[39m fjammes@clrinfopo18 \u001b[44m\u001b[30m\u001b[30m ~/src/k8s-advanced/2_authorization \u001b[43m\u001b[34m\u001b[30m  master ± \u001b[49m\u001b[33m\u001b[39m \u001b[K"] 191 | [115.787515, "o", "\u001b[?1h\u001b=\u001b[?2004h"] 192 | [118.688784, "o", "\u001b[?2004l\r\r\n"] 193 | -------------------------------------------------------------------------------- /labs/2_authorization/A_rbac_tools_demo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # See https://github.com/NorfairKing/autorecorder 4 | 5 | set -euxo pipefail 6 | 7 | # Install rbac-tool 8 | curl 
https://raw.githubusercontent.com/alcideio/rbac-tool/master/download.sh | bash 9 | 10 | # Add rbac-tool to PATH 11 | export PATH=$PWD/bin:$PATH 12 | 13 | # Generate bash completion 14 | source <(rbac-tool bash-completion) 15 | 16 | # Analyze RBAC permissions of the cluster pointed to by the current context 17 | rbac-tool analysis | less 18 | 19 | # Get ClusterRole used by ServiceAccount system:serviceaccount:kube-system:replicaset-controller 20 | # Use regex to match service account name 21 | rbac-tool lookup -e ".*replicaset.*" 22 | 23 | # List Policy Rules For ServiceAccount replicaset-controller 24 | # It can create and delete Pods and update replicasets, which is consistent 25 | rbac-tool policy-rules -e "replicaset-controller" 26 | 27 | # Show which subjects (user/group/serviceaccount) have RBAC permissions to perform an action 28 | rbac-tool who-can create pods 29 | 30 | -------------------------------------------------------------------------------- /labs/2_authorization/README.md: -------------------------------------------------------------------------------- 1 | # Pre-requisite 2 | 3 | Run on kind cluster >= v0.5.1 4 | 5 | # Create user 6 | 7 | Documentation: 8 | https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#use-case-1-create-user-with-limited-namespace-access 9 | -------------------------------------------------------------------------------- /labs/2_authorization/audit-example.TODO.txt: -------------------------------------------------------------------------------- 1 | # rakkess 2 | 3 | ## Install 4 | kubectl krew install access-matrix 5 | 6 | ## Example 7 | 8 | $ kubectl access-matrix resource pod --namespace kube-system 9 | NAME KIND SA-NAMESPACE LIST CREATE UPDATE DELETE 10 | attachdetach-controller ServiceAccount kube-system ✔ ✖ ✖ ✖ 11 | coredns ServiceAccount kube-system ✔ ✖ ✖ ✖ 12 | cronjob-controller ServiceAccount kube-system ✔ ✖ ✖ ✔ 13 | daemon-set-controller ServiceAccount kube-system ✔ ✔ ✖ ✔ 14 | deployment-controller ServiceAccount kube-system ✔ ✖ ✔ ✖ 15 | endpoint-controller ServiceAccount kube-system ✔ ✖ ✖ ✖ 16 | endpointslice-controller ServiceAccount kube-system ✔ ✖ ✖ ✖ 17 | generic-garbage-collector ServiceAccount kube-system ✔ ✖ ✔ ✔ 18 | horizontal-pod-autoscaler ServiceAccount kube-system ✔ ✖ ✖ ✖ 19 | job-controller ServiceAccount kube-system ✔ ✔ ✖ ✔ 20 | local-path-provisioner-service-account ServiceAccount local-path-storage ✔ ✔ ✔ ✔ 21 | namespace-controller ServiceAccount kube-system ✔ ✖ ✖ ✔ 22 | node-controller ServiceAccount kube-system ✔ ✖ ✖ ✔ 23 | persistent-volume-binder ServiceAccount kube-system ✔ ✔ ✖ ✔ 24 | pod-garbage-collector ServiceAccount kube-system ✔ ✖ ✖ ✔ 25 | prometheus-stack-kube-prom-operator ServiceAccount monitoring ✔ ✖ ✖ ✔ 26 | prometheus-stack-kube-prom-prometheus ServiceAccount monitoring ✔ ✖ ✖ ✖ 27 | prometheus-stack-kube-state-metrics ServiceAccount monitoring ✔ ✖ ✖ ✖ 28 | pvc-protection-controller ServiceAccount kube-system ✔ ✖ ✖ ✖ 29 | replicaset-controller ServiceAccount kube-system ✔ ✔ ✖ ✔ 30 | replication-controller ServiceAccount kube-system ✔ ✔ ✖ ✔ 31 | resourcequota-controller ServiceAccount kube-system ✔ ✖ ✖ ✖ 32 | statefulset-controller ServiceAccount kube-system ✔ ✔ ✔ ✔ 33 | system:kube-controller-manager User ✔ ✖ ✖ ✖ 34 | system:kube-scheduler User ✔ ✖ ✖ ✔ 35 | system:masters Group ✔ ✔ ✔ ✔ 36 | 37 | 38 | 39 | # who-can 40 | Install: kubectl krew install who-can 41 | 42 | Usage: kubectl who-can VERB [Flags] 43 | 44 | Help: kubectl who-can --help 45 | 46 | # rback 47 | Install: 48 |
curl -sL https://github.com/team-soteria/rback/releases/download/v0.4.0/linux_rback -o rback 49 | chmod +x rback 50 | 51 | Usage: 52 | kubectl get sa,roles,rolebindings,clusterroles,clusterrolebindings --all-namespaces -o json | ./rback > result.dot 53 | -------------------------------------------------------------------------------- /labs/2_authorization/ci.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euo pipefail 4 | 5 | DIR=$(cd "$(dirname "$0")"; pwd -P) 6 | 7 | FILES=$DIR/*.sh 8 | for f in $FILES 9 | do 10 | if echo "$f" | grep "ci\.sh"; then 11 | echo 12 | ink "+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++" 13 | ink "NOT processing $f" 14 | ink "+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++" 15 | else 16 | echo 17 | ink "+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++" 18 | ink "Processing $f" 19 | ink "+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++" 20 | sh -c "$f" 21 | fi 22 | done 23 | -------------------------------------------------------------------------------- /labs/2_authorization/kubectl-proxy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: curl-custom-sa 5 | spec: 6 | serviceAccountName: default 7 | containers: 8 | - name: main 9 | image: curlimages/curl 10 | command: ["sleep", "9999999"] 11 | - name: ambassador 12 | image: k8sschool/kubectl-proxy:1.27.3 13 | -------------------------------------------------------------------------------- /labs/2_authorization/kubiscan/README.txt: -------------------------------------------------------------------------------- 1 | kubectl apply -f - << EOF 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: kubiscan-sa 6 | namespace: default 7 | --- 8 | kind: ClusterRoleBinding 9 | apiVersion: rbac.authorization.k8s.io/v1 10 | metadata: 11 | name: kubiscan-clusterrolebinding 12 | subjects: 13 | - kind: ServiceAccount 14 | name: kubiscan-sa 15 | namespace: default 16 | apiGroup: "" 17 | roleRef: 18 | kind: ClusterRole 19 | name: kubiscan-clusterrole 20 | apiGroup: "rbac.authorization.k8s.io" 21 | --- 22 | kind: ClusterRole 23 | apiVersion: rbac.authorization.k8s.io/v1 24 | metadata: 25 | name: kubiscan-clusterrole 26 | rules: 27 | - apiGroups: ["*"] 28 | resources: ["roles", "clusterroles", "rolebindings", "clusterrolebindings", "pods"] 29 | verbs: ["get", "list"] 30 | EOF 31 | 32 | kubectl get secrets $(kubectl get sa kubiscan-sa -o json | jq -r '.secrets[0].name') -o json | jq -r '.data.token' | base64 -d > token 33 | 34 | docker cp kind-control-plane:/etc/kubernetes/pki/ca.crt .
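# Note (editor's addition, not in the original lab): on Kubernetes >= 1.24, token Secrets are
# no longer auto-created for ServiceAccounts, so the 'jq .secrets[0].name' lookup above may
# return null. In that case, request a short-lived token explicitly instead:
# kubectl create token kubiscan-sa > token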
35 | docker run -it --rm -v $PWD/token:/token -v $PWD/ca.crt:/ca.crt --net=host cyberark/kubiscan 36 | # Get API server IP inside ~/.kube/config 37 | kubiscan -ho 127.0.0.1:38803 -t /token -c /ca.crt -rs 38 | -------------------------------------------------------------------------------- /labs/2_authorization/kubiscan/ca.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl 3 | cm5ldGVzMB4XDTIwMTEyNDA5MTk0MVoXDTMwMTEyMjA5MTk0MVowFTETMBEGA1UE 4 | AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKvT 5 | 6jTyRlxPhB2fNAFZdZjPDQARkl9B33rmhsDAVEZgf93cw6/sEKLrJd+O9T89/uoI 6 | K3J8JfDL1+XoNB6z2WCwTQgxvHlsmn/IEfBqsRQIQRRMxP5/RAtD77jiUSAqinJ7 7 | c7V34OcBmJa9oSgAbzg2MBFbcbaVjqGwzSEhimEBAiLiBI9k6Xz0jgIhCcW8R9YB 8 | L6QWBCPFpTXyhyubbyGVvgf/nngUNMmOlL8XzbHNV9NV6IB+D45cUtwQ8sl3MuYq 9 | JQjRUN0D0lUoxAIiO97fkSwIbetCI73mfsOVT/txRlUvYOSzXtaaY3kK0zVem+NJ 10 | iH6RTcf9zZFYG71RuoECAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB 11 | /wQFMAMBAf8wHQYDVR0OBBYEFFArWn4XPJ01ZTeB6gZo29ohsiZGMA0GCSqGSIb3 12 | DQEBCwUAA4IBAQAk1NjFXeoCNUwovS3fvBpvxAUoc706moS6WDq/EDPRC1CzSSLK 13 | OBnxcMEau8gJpd5KLsDM1hTezkvNUFTv+ULPbegKynEYobyUHFhCDLXx2gTa3SBm 14 | KV30lxLZ1LgkgkiwUeXyKXta2GpoIpryqbIYOBtKE8E6m/RMGgLAEeZyC8sJYOEa 15 | cpQpBP1b5q2UETCq1Ol6FMByTpJUMXkXe3W9Vs8xkaIzeO6aqMROEBwhci/ajqG/ 16 | tvLxnECYS4Cqh0HTytG38tnZAETxvj606YPSz87teR4imxHh7pwIdFNoYqF+jn38 17 | 67UMhqWKOvxpzITjkayVz5jRD/SR2GC8piE/ 18 | -----END CERTIFICATE----- 19 | -------------------------------------------------------------------------------- /labs/2_authorization/kubiscan/token: -------------------------------------------------------------------------------- 1 | eyJhbGciOiJSUzI1NiIsImtpZCI6IlFadlNlSFFpUHVCUElRMlV0M3R2RzdnU0IwWHNwRFZvdHdCWlM0MzJXQlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmlzY2FuLXNhLXRva2VuLXNzcHRsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmlzY2FuLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWQwNzJhNWQtOTkwMC00NmZjLTkxZTQtNzllYmZjNDk1Njg4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6a3ViaXNjYW4tc2EifQ.khQjxVgdDMEaNsCCXVqkUZPriPTgPHAc2Wyja1Sh6DsZTxzoGKhNFc4iQOnn4Oyzy5X-Y058MvgZRPAO4I3VXXd3MUCan42kfAroWgdUfDW_1fRmVALqkSAo-s69J_PXkxjO1MmDIKiOfl9wY_teXeOB6VdMT-GJV1HFmnDq_D814SvYrVyAGKHZrjHy0p44Oj_gzC3jlxFgBleU0hf4ofIW3Z-HYYWQLgrBeDelWssAZ8q0pvbdnGNFaERe3BI_Um3rksIrQ9e_fqf-WXxizSeJnXfktFIko4I0lfA7oV5cXncs9vbx0Mf3tJ8r3wysmO9Nl8okkaqGy9Cg29u2rg -------------------------------------------------------------------------------- /labs/2_authorization/manifest/local-storage.yaml: -------------------------------------------------------------------------------- 1 | kind: StorageClass 2 | apiVersion: storage.k8s.io/v1 3 | metadata: 4 | name: local-storage 5 | provisioner: kubernetes.io/no-provisioner 6 | volumeBindingMode: WaitForFirstConsumer 7 | -------------------------------------------------------------------------------- /labs/2_authorization/manifest/pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: curl-custom-sa 5 | spec: 6 | containers: 7 | - name: main 8 | image: curlimages/curl 9 | command: ["sleep", "9999999"] 10 | - name: ambassador 11 | image: k8sschool/kubectl-proxy:1.27.3 12 | 
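A minimal way to exercise this ambassador pod once it is running (a sketch, assuming the kubectl-proxy sidecar listens on kubectl proxy's default port 8001; both containers share the pod's network namespace, so 'main' can reach it via localhost):

kubectl exec -it curl-custom-sa -c main -- curl -s localhost:8001/api/v1/namespaces/default/pods

The API response reflects the RBAC permissions of the pod's ServiceAccount, which is exactly what the exercises in this lab probe.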
-------------------------------------------------------------------------------- /labs/2_authorization/manifest/pvc.yaml: -------------------------------------------------------------------------------- 1 | kind: PersistentVolumeClaim 2 | apiVersion: v1 3 | metadata: 4 | namespace: office 5 | name: task-pv-claim 6 | labels: 7 | RBAC: user 8 | spec: 9 | storageClassName: local-storage 10 | accessModes: 11 | - ReadWriteOnce 12 | resources: 13 | requests: 14 | storage: 10Gi -------------------------------------------------------------------------------- /labs/2_authorization/manifest/role-deployment-manager-nopod-rbac.yaml: -------------------------------------------------------------------------------- 1 | kind: Role 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | namespace: office 5 | name: deployment-manager 6 | rules: 7 | - apiGroups: ["", "extensions", "apps"] 8 | resources: ["deployments", "replicasets", "persistentvolumeclaims"] 9 | verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # You can also use ["*"] 10 | -------------------------------------------------------------------------------- /labs/2_authorization/manifest/role-deployment-manager-pvc.yaml: -------------------------------------------------------------------------------- 1 | kind: Role 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | namespace: office 5 | name: deployment-manager 6 | rules: 7 | - apiGroups: ["", "extensions", "apps"] 8 | resources: ["deployments", "replicasets", "pods", "pods/log", "pods/attach", "pods/exec", "persistentvolumeclaims"] 9 | verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # You can also use ["*"] 10 | -------------------------------------------------------------------------------- /labs/2_authorization/manifest/role-deployment-manager.yaml: -------------------------------------------------------------------------------- 1 | kind: Role 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | namespace: office 5 | name: deployment-manager 6 | rules: 7 | - apiGroups: ["", "extensions", "apps"] 8 | resources: ["deployments", "replicasets", "pods"] 9 | verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # You can also use ["*"] 10 | -------------------------------------------------------------------------------- /labs/2_authorization/manifest/rolebinding-deployment-manager.yaml: -------------------------------------------------------------------------------- 1 | kind: RoleBinding 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: deployment-manager-binding 5 | namespace: office 6 | subjects: 7 | - kind: User 8 | name: employee 9 | apiGroup: "rbac.authorization.k8s.io" 10 | roleRef: 11 | kind: Role 12 | name: deployment-manager 13 | apiGroup: "rbac.authorization.k8s.io" 14 | -------------------------------------------------------------------------------- /labs/2_authorization/manifest/sa_dashboard.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: admin-user 5 | namespace: kubernetes-dashboard 6 | --- 7 | apiVersion: rbac.authorization.k8s.io/v1 8 | kind: ClusterRoleBinding 9 | metadata: 10 | name: admin-user 11 | roleRef: 12 | apiGroup: rbac.authorization.k8s.io 13 | kind: ClusterRole 14 | name: cluster-admin 15 | subjects: 16 | - kind: ServiceAccount 17 | name: admin-user 18 | namespace: kubernetes-dashboard 19 | --------------------------------------------------------------------------------
/labs/2_authorization/manifest/service-reader.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: Role 3 | metadata: 4 | namespace: foo 5 | name: service-reader 6 | rules: 7 | - apiGroups: [""] 8 | verbs: ["get", "list"] 9 | resources: ["services"] 10 | -------------------------------------------------------------------------------- /labs/2_authorization/sec-issue.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Inject a security issue inside a k8s cluster 4 | 5 | set -euxo pipefail 6 | 7 | NS="monitor" 8 | 9 | kubectl create namespace "$NS" 10 | kubectl create clusterrolebinding cluster-monitoring --clusterrole=cluster-admin --serviceaccount=monitor:default 11 | 12 | # To find it, use "rbac-tool analysis", and then: 13 | # Same command as "kubens monitor" 14 | kubectl config set-context $(kubectl config current-context) --namespace="$NS" 15 | SERVICE_ACCOUNT_NAME="default" 16 | kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath="{range .items[?(@.subjects[0].name=='$SERVICE_ACCOUNT_NAME')]}[role: {.roleRef.kind},{.roleRef.name}, binding: {.metadata.name}]{end}" 17 | -------------------------------------------------------------------------------- /labs/3_policies/README.md: -------------------------------------------------------------------------------- 1 | # Set-up platform 2 | 3 | ## Pre-requisite 4 | 5 | Get access to 3 Google Cloud nodes 6 | 7 | ## Create an up and running k8s cluster with PSP enabled 8 | 9 | ```shell 10 | WORKDIR="../0_kubeadm" 11 | "$WORKDIR"/create.sh -p 12 | ``` 13 | -------------------------------------------------------------------------------- /labs/3_policies/ci.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euxo pipefail 4 | 5 | DIR=$(cd "$(dirname "$0")"; pwd -P) 6 | 7 | $DIR/ex1-securitycontext.sh 8 | $DIR/ex2-podsecurity.sh 9 | export EX4_NETWORK_FULL=true 10 | $DIR/ex4-network.sh 11 | -------------------------------------------------------------------------------- /labs/3_policies/ex1-securitycontext.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euxo pipefail 4 | 5 | DIR=$(cd "$(dirname "$0")"; pwd -P) 6 | 7 | . 
$DIR/../conf.version.sh 8 | 9 | NS="security-context" 10 | kubectl delete ns -l "$NS=true" 11 | kubectl create ns "$NS" 12 | kubectl label ns "$NS" "$NS=true" 13 | 14 | kubectl config set-context $(kubectl config current-context) --namespace="$NS" 15 | 16 | POD="pod-with-host-network" 17 | kubectl apply -f "$DIR/manifests/$POD.yaml" 18 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 19 | kubectl exec "$POD" -- ifconfig 20 | 21 | kubectl apply -f "$DIR/manifests/pod-with-hostport.yaml" 22 | # Run 'curl http://localhost:9000' on host 23 | 24 | POD="pod-with-host-pid-and-ipc" 25 | kubectl apply -f "$DIR/manifests/$POD.yaml" 26 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 27 | kubectl exec "$POD" -- ps aux 28 | 29 | # RUNNING A POD WITHOUT SPECIFYING A SECURITY CONTEXT 30 | POD="pod-with-defaults" 31 | kubectl run "$POD" --restart=Never --image alpine:$ALPINE_VERSION -- /bin/sleep 999999 32 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 33 | kubectl exec "$POD" -- id 34 | 35 | POD="pod-as-user-guest" 36 | kubectl apply -f "$DIR/manifests/$POD.yaml" 37 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 38 | kubectl exec "$POD" -- id 39 | kubectl exec "$POD" -- cat /etc/passwd 40 | 41 | kubectl apply -f "$DIR/manifests/pod-run-as-non-root.yaml" 42 | kubectl get po pod-run-as-non-root 43 | 44 | POD="pod-privileged" 45 | kubectl apply -f "$DIR/manifests/$POD.yaml" 46 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 47 | kubectl exec "$POD" -- ls /dev 48 | kubectl exec -it pod-with-defaults -- ls /dev 49 | kubectl exec -it pod-with-defaults -- date +%T -s "12:00:00" || ink -y "EXPECTED ERROR: cannot set the date without CAP_SYS_TIME" 50 | 51 | POD="pod-add-settime-capability" 52 | kubectl apply -f "$DIR/manifests/$POD.yaml" 53 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 54 | # WARN: might break the cluster 55 | # kubectl exec "$POD" -- date +%T -s "12:00:00" 56 | 57 | # Dropping capabilities from a container 58 | kubectl exec pod-with-defaults -- chown guest /tmp 59 | kubectl exec pod-with-defaults -- ls -la / | grep tmp 60 | 61 | POD="pod-drop-chown-capability" 62 | kubectl apply -f "$DIR/manifests/$POD.yaml" 63 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 64 | kubectl exec "$POD" -- chown guest /tmp && ink -r "ERROR this command should have failed" || ink -y "EXPECTED ERROR: CHOWN capability was dropped" 65 | 66 | # 13.2.6 Preventing processes from writing to the container’s filesystem 67 | POD="pod-with-readonly-filesystem" 68 | kubectl apply -f "$DIR/manifests/$POD.yaml" 69 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 70 | kubectl exec "$POD" -- touch /new-file && ink -r "ERROR this command should have failed" || ink -y "EXPECTED ERROR: root filesystem is read-only" 71 | kubectl exec -it "$POD" -- touch /volume/newfile 72 | kubectl exec -it "$POD" -- ls -la /volume/newfile 73 | 74 | # 13.2.7 Sharing volumes when containers run as different users 75 | POD="pod-with-shared-volume-fsgroup" 76 | kubectl apply -f "$DIR/manifests/$POD.yaml" 77 | kubectl wait --timeout=60s --for=condition=Ready pods "$POD" 78 | kubectl exec -it "$POD" -c first -- sh -c "id && \ 79 | ls -l / | grep volume && \ 80 | echo foo > /volume/foo && \ 81 | ls -l /volume && \ 82 | echo foo > /tmp/foo && \ 83 | ls -l /tmp" 84 | -------------------------------------------------------------------------------- /labs/3_policies/ex2-demo-podsecurity.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euo pipefail 4 | 5 | DIR=$(cd "$(dirname "$0")"; pwd -P) 6 | 7 | . 
$DIR/../conf.version.sh 8 | 9 | 10 | readonly DIR=$(cd "$(dirname "$0")"; pwd -P) 11 | 12 | NS="demo-pod-security" 13 | kubectl delete namespace -l "kubernetes.io/metadata.name=$NS" 14 | 15 | kubectl create namespace "$NS" 16 | kubectl config set-context $(kubectl config current-context) --namespace=$NS 17 | ink -y "Set pod-security to warn=restricted" 18 | ink -y "###################################" 19 | ink -y "label namespace $NS 'pod-security.kubernetes.io/warn=restricted'" 20 | set -x 21 | kubectl label namespace "$NS" "pod-security.kubernetes.io/warn=restricted" 22 | set +x 23 | ink "Create privileged pod 1" 24 | set -x 25 | kubectl run privileged-pod1 --image=busybox:$BUSYBOX_VERSION --privileged 26 | kubectl get pod privileged-pod1 27 | set +x 28 | 29 | ink -y "Set pod-security to enforce=restricted" 30 | ink -y "######################################" 31 | set -x 32 | ink -y "label namespace $NS 'pod-security.kubernetes.io/enforce=restricted'" 33 | kubectl label namespace "$NS" "pod-security.kubernetes.io/enforce=restricted" 34 | set +x 35 | ink "Delete privileged pod 1" 36 | set -x 37 | kubectl delete pod privileged-pod1 --now 38 | set +x 39 | ink "Create privileged pod 2" 40 | set -x 41 | if ! kubectl run privileged-pod2 --image=busybox:$BUSYBOX_VERSION --privileged 42 | then 43 | set +x 44 | ink -r "EXPECTED ERROR: Privileged pod 2 not allowed" 45 | else 46 | set +x 47 | ink -r "ERROR Privileged pod allowed" 48 | exit 1 49 | fi -------------------------------------------------------------------------------- /labs/3_policies/ex2-podsecurity.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euxo pipefail 4 | shopt -s expand_aliases 5 | 6 | DIR=$(cd "$(dirname "$0")"; pwd -P) 7 | 8 | . 
$DIR/../conf.version.sh 9 | 10 | 11 | readonly DIR=$(cd "$(dirname "$0")"; pwd -P) 12 | 13 | # See https://kubernetes.io/blog/2021/12/09/pod-security-admission-beta/#hands-on-demo for details 14 | 15 | kubectl delete namespace -l "podsecurity=enabled" 16 | NS="verify-pod-security" 17 | 18 | ink "Confirm Pod Security is enabled v1" 19 | # API_SERVER_POD=$(kubectl get pods -n openshift-kube-apiserver -l apiserver=true -o jsonpath='{.items[0].metadata.name}') 20 | # kubectl -n openshift-kube-apiserver exec "$API_SERVER_POD" -t -- kube-apiserver -h | grep "default enabled plugins" | grep "PodSecurity" 21 | kubectl -n kube-system exec kube-apiserver-kind-control-plane -t -- kube-apiserver -h | grep "default enabled plugins" | grep "PodSecurity" 22 | 23 | 24 | ink "Confirm Pod Security is enabled v2" 25 | kubectl create namespace "$NS" 26 | kubectl label ns "$NS" "podsecurity=enabled" 27 | kubectl label namespace "$NS" pod-security.kubernetes.io/enforce=restricted 28 | # The following command does NOT create a workload (--dry-run=server) 29 | kubectl -n "$NS" run test --dry-run=server --image=busybox:$BUSYBOX_VERSION --privileged || ink -y "EXPECTED ERROR" 30 | kubectl delete namespace "$NS" 31 | 32 | kubectl create namespace "$NS" 33 | kubectl label ns "$NS" "podsecurity=enabled" 34 | 35 | ink "Enforces a \"restricted\" security policy and audits on restricted" 36 | kubectl label --overwrite ns verify-pod-security \ 37 | pod-security.kubernetes.io/enforce=restricted \ 38 | pod-security.kubernetes.io/audit=restricted 39 | 40 | ink "Next, try to deploy a privileged workload in the namespace $NS" 41 | -------------------------------------------------------------------------------- /labs/3_policies/ex2-psp.sh.deprecated: -------------------------------------------------------------------------------- 30 | cat <<EOF > /tmp/pause.yaml 31 | apiVersion: v1 32 | kind: Pod 33 | metadata: 34 | name: pause 35 | spec: 36 | containers: 37 | - name: pause 38 | image: k8s.gcr.io/pause 39 | EOF 40 | 41 | if kubectl-user create -f /tmp/pause.yaml 42 | then 43 | >&2 echo "ERROR: User 'fake-user' should not be able to create pod" 44 | else 45 | >&2 echo "EXPECTED ERROR: User 'fake-user' cannot create pod" 46 | fi 47 | 48 | kubectl-user auth can-i use podsecuritypolicy/example || 49 | >&2 echo "EXPECTED ERROR" 50 | 51 | # kubectl-admin create role psp:unprivileged \ 52 | # --verb=use \ 53 | # --resource=podsecuritypolicy \ 54 | # --resource-name=example 55 | 56 | kubectl apply -n psp-example -f "$DIR"/resource/role-use-psp.yaml 57 | 58 | kubectl-admin create rolebinding fake-user:psp:unprivileged \ 59 | --role=psp:unprivileged \ 60 | --serviceaccount=psp-example:fake-user 61 | 62 | kubectl-user auth can-i use podsecuritypolicy/example 63 | 64 | kubectl-user create -f /tmp/pause.yaml 65 | 66 | cat <<EOF > /tmp/priv-pause.yaml 67 | apiVersion: v1 68 | kind: Pod 69 | metadata: 70 | name: privileged 71 | spec: 72 | containers: 73 | - name: pause 74 | image: k8s.gcr.io/pause 75 | securityContext: 76 | privileged: true 77 | EOF 78 | 79 | kubectl-user create -f /tmp/priv-pause.yaml || 80 | >&2 echo "EXPECTED ERROR: User 'fake-user' cannot create privileged container" 81 | 82 | kubectl-user delete pod pause 83 | 84 | kubectl-user create deployment pause --image=k8s.gcr.io/pause 85 | kubectl-user get pods 86 | kubectl-user get events | head -n 2 87 | kubectl-admin create rolebinding default:psp:unprivileged \ 88 | --role=psp:unprivileged \ 89 | --serviceaccount=psp-example:default 90 | # Wait for deployment to recreate the pod 91 | sleep 5 92 | kubectl wait --timeout=60s --for=condition=Ready pods -l app=pause -n psp-example 93 | kubectl-user get pods 94 | -------------------------------------------------------------------------------- 
/labs/3_policies/ex3-psp.sh.deprecated: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | set -e 4 | set -x 5 | 6 | DIR=$(cd "$(dirname "$0")"; pwd -P) 7 | 8 | # Run on kubeadm cluster 9 | # see "kubernetes in action" p391 10 | 11 | # Delete psp 'restricted', installed during kind install, so that the fake user cannot create pods 12 | kubectl delete psp -l restricted 13 | 14 | NS="psp-advanced" 15 | 16 | kubectl delete ns,psp,clusterrolebindings -l "policies=$NS" 17 | 18 | kubectl create namespace "$NS" 19 | kubectl label ns "$NS" "policies=$NS" 20 | 21 | KUBIA_DIR="/tmp/kubernetes-in-action" 22 | if [ ! -d "$KUBIA_DIR" ]; then 23 | git clone https://github.com/k8s-school/kubernetes-in-action.git /tmp/kubernetes-in-action 24 | 25 | fi 26 | 27 | # Exercise: define a default policy 28 | kubectl apply -f $DIR/../0_kubeadm/resource/psp/default-psp-with-rbac.yaml 29 | 30 | # Exercise: enable alice to create pods 31 | kubectl create rolebinding alice:edit \ 32 | --clusterrole=edit \ 33 | --user=alice \ 34 | --namespace "$NS" 35 | 36 | # Check 37 | alias kubectl-user="kubectl --as=alice --namespace '$NS'" 38 | 39 | kubectl-user run --restart=Never -it ubuntu --image=ubuntu -- id 40 | 41 | # Remark: cluster-admin has access to all psp (see cluster-admin role), and uses the most permissive in each section 42 | 43 | cd "$KUBIA_DIR"/Chapter13 44 | 45 | # 13.3.1 Introducing the PodSecurityPolicy resource 46 | kubectl apply -f "$DIR"/resource/pod-security-policy.yaml 47 | kubectl-user create --namespace "$NS" -f pod-privileged.yaml || 48 | >&2 echo "EXPECTED ERROR: User 'alice' cannot create privileged pod" 49 | 50 | # 13.3.2 Understanding runAsUser, fsGroup, and supplementalGroups 51 | # policies 52 | kubectl apply -f "$DIR"/resource/psp-must-run-as.yaml 53 | # DEPLOYING A POD WITH RUN_AS USER OUTSIDE OF THE POLICY’S RANGE 54 | kubectl-user create --namespace "$NS" -f pod-as-user-guest.yaml || 55 | >&2 echo "EXPECTED ERROR: Cannot deploy a pod with 'run_as user' outside of the policy’s range" 56 | # DEPLOYING A POD WITH A CONTAINER IMAGE WITH AN OUT-OF-RANGE USER ID 57 | kubectl-user run --restart=Never --namespace "$NS" run-as-5 --image luksa/kubia-run-as-user-5 58 | kubectl wait --timeout=60s -n "$NS" --for=condition=Ready pods run-as-5 59 | kubectl exec --namespace "$NS" run-as-5 -- id 60 | 61 | kubectl apply -f "$DIR"/resource/psp-capabilities.yaml 62 | kubectl-user create -f pod-add-sysadmin-capability.yaml || 63 | >&2 echo "EXPECTED ERROR: Cannot deploy a pod with capability 'sysadmin'" 64 | kubectl apply -f "$DIR"/resource/psp-volumes.yaml 65 | 66 | # 13.3.5 Assigning different PodSecurityPolicies to different users 67 | # and groups 68 | # Enable bob to create pods 69 | kubectl create rolebinding bob:edit \ 70 | --clusterrole=edit \ 71 | --user=bob \ 72 | --namespace "$NS" 73 | # WARN: book says 'psp-privileged', p.398 74 | kubectl create clusterrolebinding psp-bob --clusterrole=psp:privileged --user=bob 75 | kubectl label clusterrolebindings psp-bob "policies=$NS" 76 | 77 | # --as works even if the user does not exist; this is not the case for '--user' 78 | kubectl --namespace "$NS" --as alice create -f pod-privileged.yaml || 79 | >&2 echo "EXPECTED ERROR: User 'alice' cannot create a privileged pod" 80 | kubectl --namespace "$NS" --as bob create -f pod-privileged.yaml 81 | -------------------------------------------------------------------------------- /labs/3_policies/ex4-network.sh:
-------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -euo pipefail 4 | shopt -s expand_aliases 5 | 6 | DIR=$(cd "$(dirname "$0")"; pwd -P) 7 | 8 | . $DIR/../conf.version.sh 9 | 10 | EX4_NETWORK_FULL="${EX4_NETWORK_FULL:-false}" 11 | 12 | usage() { 13 | cat << EOD 14 | 15 | Usage: `basename $0` [options] 16 | 17 | Available options: 18 | -h this message 19 | -s run exercise and solution 20 | 21 | Run network exercise 22 | EOD 23 | } 24 | 25 | # get the options 26 | while getopts hs c ; do 27 | case $c in 28 | h) usage ; exit 0 ;; 29 | s) EX4_NETWORK_FULL=true ;; 30 | \?) usage ; exit 2 ;; 31 | esac 32 | done 33 | shift `expr $OPTIND - 1` 34 | 35 | if [ $# -ne 0 ] ; then 36 | usage 37 | exit 2 38 | fi 39 | 40 | ID="$(whoami)" 41 | NS="network-$ID" 42 | 43 | 44 | # Run on kubeadm cluster 45 | # see "kubernetes in action" p391 46 | kubectl delete ns -l "kubernetes.io/metadata.name=$NS" 47 | kubectl create namespace "$NS" 48 | 49 | set +x 50 | ink 'Install one postgresql pod with helm and add label "tier:database"' 51 | ink "Disable data persistence" 52 | set -x 53 | if ! helm delete pgsql --namespace "$NS" 54 | then 55 | set +x 56 | ink -y "WARN pgsql instance not found" 57 | set -x 58 | fi 59 | 60 | if ! helm repo add bitnami https://charts.bitnami.com/bitnami 61 | then 62 | set +x 63 | ink -y "WARN Failed to add bitnami repo" 64 | set -x 65 | fi 66 | helm repo update 67 | 68 | set +x 69 | ink "Install postgresql database with helm" 70 | set -x 71 | helm install --version 11.9.1 --namespace "$NS" pgsql bitnami/postgresql --set primary.podLabels.tier="database",persistence.enabled="false" 72 | 73 | set +x 74 | ink "Create external pod" 75 | set -x 76 | kubectl run -n "$NS" external --image=nginx:$NGINX_VERSION -l "app=external" 77 | set +x 78 | ink "Create webserver pod" 79 | set -x 80 | kubectl run -n "$NS" webserver --image=nginx:$NGINX_VERSION -l "tier=webserver" 81 | 82 | kubectl wait --timeout=60s -n "$NS" --for=condition=Ready pods external 83 | 84 | kubectl expose -n "$NS" pod external --type=NodePort --port 80 --name=external 85 | set +x 86 | ink "Install netcat, ping, netstat and ps in these pods" 87 | set -x 88 | kubectl exec -n "$NS" -it external -- \ 89 | sh -c "apt-get update && apt-get install -y dnsutils inetutils-ping netcat-traditional net-tools" 90 | 91 | kubectl wait --timeout=60s -n "$NS" --for=condition=Ready pods webserver 92 | kubectl exec -n "$NS" -it webserver -- \ 93 | sh -c "apt-get update && apt-get install -y dnsutils inetutils-ping netcat-traditional net-tools" 94 | 95 | set +x 96 | ink "Wait for pgsql pods to be ready" 97 | set -x 98 | kubectl wait --for=condition=Ready -n "$NS" pods -l app.kubernetes.io/instance=pgsql 99 | 100 | set +x 101 | ink "Check what happens with no network policies defined" 102 | ink -b "++++++++++++++++++++" 103 | ink -b "NO NETWORK POLICIES" 104 | ink -b "++++++++++++++++++++" 105 | set -x 106 | EXTERNAL_IP=$(kubectl get pods -n "$NS" external -o jsonpath='{.status.podIP}') 107 | PGSQL_IP=$(kubectl get pods -n "$NS" pgsql-postgresql-0 -o jsonpath='{.status.podIP}') 108 | set +x 109 | ink "webserver to database" 110 | set -x 111 | kubectl exec -n "$NS" webserver -- netcat -q 2 -nzv ${PGSQL_IP} 5432 112 | set +x 113 | ink "webserver to database, using DNS name" 114 | set -x 115 | kubectl exec -n "$NS" webserver -- netcat -q 2 -zv pgsql-postgresql 5432 116 | set +x 117 | ink "webserver to external pod" 118 | set -x 119 | kubectl exec -n "$NS" webserver -- netcat -q 2 -nzv
$EXTERNAL_IP 80 120 | set +x 121 | ink "external to outside world" 122 | set -x 123 | kubectl exec -n "$NS" external -- netcat -w 2 -zv www.k8s-school.fr 443 124 | 125 | set +x 126 | ink -b "EXERCISE: Secure communication between webserver and database, and validate it (webserver, database, external, outside)" 127 | set -x 128 | if [ "$EX4_NETWORK_FULL" = false ] 129 | then 130 | exit 0 131 | fi 132 | 133 | set +x 134 | ink "Enable DNS access, see https://docs.projectcalico.org/v3.7/security/advanced-policy#5-allow-dns-egress-traffic" 135 | set -x 136 | # See https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-policies 137 | kubectl apply -n "$NS" -f $DIR/resource/default-deny.yaml 138 | # Edit original file, replace app with tier 139 | kubectl apply -n "$NS" -f $DIR/resource/ingress-www-db.yaml 140 | # Edit original file, replace app with tier 141 | kubectl apply -n "$NS" -f $DIR/resource/egress-www-db.yaml 142 | ink "Set default deny network policies" 143 | 144 | kubectl apply -n "$NS" -f $DIR/resource/allow-dns-access.yaml 145 | 146 | 147 | 148 | set +x 149 | ink "Check what happens with network policies defined" 150 | ink -b "+---------------------+" 151 | ink -b "WITH NETWORK POLICIES" 152 | ink -b "+---------------------+" 153 | set -x 154 | set +x 155 | ink "webserver to database" 156 | set -x 157 | kubectl exec -n "$NS" webserver -- netcat -q 2 -nzv ${PGSQL_IP} 5432 158 | set +x 159 | ink "webserver to database, using DNS name" 160 | set -x 161 | kubectl exec -n "$NS" webserver -- netcat -q 2 -zv pgsql-postgresql 5432 162 | set +x 163 | ink "webserver to external pod" 164 | set -x 165 | if kubectl exec -n "$NS" webserver -- netcat -w 3 -nzv $EXTERNAL_IP 80 166 | then 167 | set +x 168 | ink -r "ERROR this connection should have failed" 169 | exit 1 170 | set -x 171 | else 172 | set +x 173 | ink -y "Connection failed" 174 | set -x 175 | fi 176 | set +x 177 | ink "external pod to database" 178 | set -x 179 | if kubectl exec -n "$NS" external -- netcat -w 3 -zv pgsql-postgresql 5432 180 | then 181 | set +x 182 | ink -r "ERROR this connection should have failed" 183 | exit 1 184 | set -x 185 | else 186 | set +x 187 | ink -y "Connection failed" 188 | set -x 189 | fi 190 | set +x 191 | ink "external pod to outside world" 192 | set -x 193 | if kubectl exec -n "$NS" external -- netcat -w 3 -zv www.k8s-school.fr 80 194 | then 195 | set +x 196 | ink -r "ERROR this connection should have failed" 197 | exit 1 198 | set -x 199 | else 200 | set +x 201 | ink -y "Connection failed" 202 | set -x 203 | fi 204 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-add-settime-capability.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-add-settime-capability 5 | spec: 6 | containers: 7 | - name: main 8 | image: alpine:3.19 9 | command: ["/bin/sleep", "999999"] 10 | securityContext: 11 | capabilities: 12 | add: 13 | - SYS_TIME 14 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-as-user-guest.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-as-user-guest 5 | spec: 6 | containers: 7 | - name: main 8 | image: alpine:3.19 9 | command: ["/bin/sleep", "999999"] 10 | securityContext: 11 | runAsUser: 405 12 |
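To verify the effective identity this manifest produces (a quick check, assuming the pod is running; in Alpine's default /etc/passwd, uid 405 is the 'guest' user):

kubectl exec pod-as-user-guest -- id

Expected output is along the lines of 'uid=405(guest) gid=100(users)', matching the runAsUser value above.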
-------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-drop-chown-capability.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-drop-chown-capability 5 | spec: 6 | containers: 7 | - name: main 8 | image: alpine:3.19 9 | command: ["/bin/sleep", "999999"] 10 | securityContext: 11 | capabilities: 12 | drop: 13 | - CHOWN 14 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-privileged.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-privileged 5 | spec: 6 | containers: 7 | - name: main 8 | image: alpine:3.19 9 | command: ["/bin/sleep", "999999"] 10 | securityContext: 11 | privileged: true 12 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-run-as-non-root.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-run-as-non-root 5 | spec: 6 | containers: 7 | - name: main 8 | image: alpine:3.19 9 | command: ["/bin/sleep", "999999"] 10 | securityContext: 11 | runAsNonRoot: true 12 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-with-host-network.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-with-host-network 5 | spec: 6 | hostNetwork: true 7 | containers: 8 | - name: main 9 | image: alpine:3.19 10 | command: ["/bin/sleep", "999999"] 11 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-with-host-pid-and-ipc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-with-host-pid-and-ipc 5 | spec: 6 | hostPID: true 7 | hostIPC: true 8 | containers: 9 | - name: main 10 | image: alpine:3.19 11 | command: ["/bin/sleep", "999999"] 12 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-with-hostport.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: nginx-hostport 5 | spec: 6 | containers: 7 | - image: nginx:1.25.3 8 | name: nginx 9 | ports: 10 | - containerPort: 8080 11 | hostPort: 9000 12 | protocol: TCP 13 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-with-readonly-filesystem.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-with-readonly-filesystem 5 | spec: 6 | containers: 7 | - name: main 8 | image: alpine:3.19 9 | command: ["/bin/sleep", "999999"] 10 | securityContext: 11 | readOnlyRootFilesystem: true 12 | volumeMounts: 13 | - name: my-volume 14 | mountPath: /volume 15 | readOnly: false 16 | volumes: 17 | - name: my-volume 18 | emptyDir: 19 | -------------------------------------------------------------------------------- /labs/3_policies/manifests/pod-with-shared-volume-fsgroup.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | 
metadata: 4 | name: pod-with-shared-volume-fsgroup 5 | spec: 6 | securityContext: 7 | fsGroup: 555 8 | supplementalGroups: [666, 777] 9 | containers: 10 | - name: first 11 | image: alpine:3.19 12 | command: ["/bin/sleep", "999999"] 13 | securityContext: 14 | runAsUser: 1111 15 | volumeMounts: 16 | - name: shared-volume 17 | mountPath: /volume 18 | readOnly: false 19 | - name: second 20 | image: alpine:3.19 21 | command: ["/bin/sleep", "999999"] 22 | securityContext: 23 | runAsUser: 2222 24 | volumeMounts: 25 | - name: shared-volume 26 | mountPath: /volume 27 | readOnly: false 28 | volumes: 29 | - name: shared-volume 30 | emptyDir: 31 | -------------------------------------------------------------------------------- /labs/3_policies/rego/sol1.rego: -------------------------------------------------------------------------------- 1 | # Image Safety 2 | # ------------ 3 | # 4 | 5 | # Check images do not have "latest" tag, whether it is explicit or implicit 6 | 7 | # FIXME: handle "toto.com:9090/mysql:latest" 8 | 9 | package kubernetes.validating.images 10 | 11 | import future.keywords.in 12 | 13 | # If the image does not contain ":", return [image, "latest"] 14 | split_image(image) = [image, "latest"] { 15 | not contains(image, ":") 16 | } 17 | 18 | # If the image contains ":", return [image_name, tag] 19 | split_image(image) = [image_name, tag] { 20 | [image_name, tag] = split(image, ":") 21 | } 22 | 23 | deny[msg] { 24 | input.request.kind.kind == "Pod" 25 | some container in input.request.object.spec.containers 26 | [image_name, "latest"] = split_image(container.image) 27 | msg = sprintf("%s in the Pod %s has an image, %s, using the latest tag", [container.name, input.request.object.metadata.name, image_name]) 28 | } 29 | -------------------------------------------------------------------------------- /labs/3_policies/rego/sol2.rego.nok: -------------------------------------------------------------------------------- 1 | # Image Safety 2 | # ------------ 3 | # 4 | 5 | # Check images do not have "latest" tag, whether it is explicit or implicit 6 | 7 | # FIXME: make it work! 8 | 9 | package kubernetes.validating.images 10 | 11 | import future.keywords.in 12 | 13 | requested_images = {img | img := input.request.object.spec.containers[_].image} 14 | 15 | deny[msg] { 16 | input.request.kind.kind == "Pod" 17 | msg := sprintf("Pod %v could not be created because it uses images that are tagged latest or images with no tags",[input.request.object.metadata.name]) 18 | } 19 | 20 | ensure { 21 | # Is the image tag "latest"? This should violate the policy 22 | has_string(":latest",requested_images) 23 | } 24 | ensure { 25 | # OR is this a naked (untagged) image? This should also violate the policy 26 | not has_string(":",requested_images) 27 | 28 | } 29 | has_string(str,arr){ 30 | contains(arr[_],str) 31 | } 32 | -------------------------------------------------------------------------------- /labs/3_policies/rego/sol3.rego: -------------------------------------------------------------------------------- 1 | package kubernetes.validating.images 2 | 3 | import future.keywords.contains 4 | import future.keywords.if 5 | import future.keywords.in 6 | 7 | deny contains msg if { 8 | input.request.kind.kind == "Pod" 9 | 10 | # The `some` keyword declares local variables. This rule declares a variable 11 | # called `container`, with the value any of the input request's spec's container 12 | # objects. It then checks whether the container's image 13 | # ends with the ":latest" tag.
14 | some container in input.request.object.spec.containers 15 | endswith(container.image, ":latest") 16 | msg := sprintf("Tag 'latest' is forbidden for image %v", [container.image]) 17 | } 18 | 19 | deny contains msg if { 20 | input.request.kind.kind == "Pod" 21 | some container2 in input.request.object.spec.containers 22 | not contains(container2.image, ":") 23 | msg := sprintf("Image must contain a tag for image %v", [container2.image]) 24 | } 25 | -------------------------------------------------------------------------------- /labs/3_policies/resource/allow-dns-access.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: allow-dns-access 5 | spec: 6 | podSelector: 7 | matchLabels: {} 8 | policyTypes: 9 | - Egress 10 | egress: 11 | - to: 12 | - namespaceSelector: 13 | matchLabels: 14 | kubernetes.io/metadata.name: kube-system 15 | ports: 16 | - protocol: UDP 17 | port: 53 -------------------------------------------------------------------------------- /labs/3_policies/resource/curl_loop.sh: -------------------------------------------------------------------------------- 1 | # tcpdump port 30657 -i any 2 | while true 3 | do 4 | curl http://clus0-1:30657 5 | done -------------------------------------------------------------------------------- /labs/3_policies/resource/default-deny.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: default-deny 5 | spec: 6 | podSelector: {} 7 | policyTypes: 8 | - Ingress 9 | - Egress -------------------------------------------------------------------------------- /labs/3_policies/resource/egress-www-db.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: egress-www-db 5 | spec: 6 | podSelector: 7 | matchLabels: 8 | tier: webserver 9 | egress: 10 | - to: 11 | - podSelector: 12 | matchLabels: 13 | tier: database 14 | ports: 15 | - port: 5432 16 | -------------------------------------------------------------------------------- /labs/3_policies/resource/ingress-external-ipblock.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: allow-external-ipblock 5 | spec: 6 | podSelector: 7 | matchLabels: 8 | app: external 9 | ingress: 10 | - from: 11 | - ipBlock: 12 | # flannel address of node 13 | # run 'ip addr show' on the node 14 | cidr: 10.132.0.37/32 15 | - ipBlock: 16 | cidr: 10.132.0.38/32 17 | 18 | -------------------------------------------------------------------------------- /labs/3_policies/resource/ingress-external.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: allow-external 5 | spec: 6 | podSelector: 7 | matchLabels: 8 | app: external 9 | ingress: 10 | - {} 11 | policyTypes: 12 | - Ingress -------------------------------------------------------------------------------- /labs/3_policies/resource/ingress-www-db.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: ingress-www-db 5 | spec: 6 | podSelector: 7 | matchLabels: 8 | tier: database 9 |
  ingress:
10 |   - from:
11 |     - podSelector:
12 |         matchLabels:
13 |           tier: webserver
14 |     ports:
15 |     - port: 5432
16 |   policyTypes:
17 |   - Ingress
18 | 
-------------------------------------------------------------------------------- /labs/3_policies/resource/pod-security-policy.yaml: --------------------------------------------------------------------------------
1 | apiVersion: policy/v1beta1
2 | kind: PodSecurityPolicy
3 | metadata:
4 |   name: default
5 | spec:
6 |   hostIPC: false
7 |   hostPID: false
8 |   hostNetwork: false
9 |   hostPorts:
10 |   - min: 10000
11 |     max: 11000
12 |   - min: 13000
13 |     max: 14000
14 |   privileged: false
15 |   readOnlyRootFilesystem: true
16 |   runAsUser:
17 |     rule: RunAsAny
18 |   fsGroup:
19 |     rule: RunAsAny
20 |   supplementalGroups:
21 |     rule: RunAsAny
22 |   seLinux:
23 |     rule: RunAsAny
24 |   volumes:
25 |   - '*'
26 | 
-------------------------------------------------------------------------------- /labs/3_policies/resource/psp-capabilities.yaml: --------------------------------------------------------------------------------
1 | apiVersion: policy/v1beta1
2 | kind: PodSecurityPolicy
3 | metadata:
4 |   name: default
5 | spec:
6 |   allowedCapabilities:
7 |   - SYS_TIME
8 |   defaultAddCapabilities:
9 |   - CHOWN
10 |   requiredDropCapabilities:
11 |   - SYS_ADMIN
12 |   - SYS_MODULE
13 |   hostIPC: false
14 |   hostPID: false
15 |   hostNetwork: false
16 |   hostPorts:
17 |   - min: 10000
18 |     max: 11000
19 |   - min: 13000
20 |     max: 14000
21 |   privileged: false
22 |   readOnlyRootFilesystem: true
23 |   runAsUser:
24 |     rule: RunAsAny
25 |   fsGroup:
26 |     rule: RunAsAny
27 |   supplementalGroups:
28 |     rule: RunAsAny
29 |   seLinux:
30 |     rule: RunAsAny
31 |   volumes:
32 |   - '*'
33 | 
-------------------------------------------------------------------------------- /labs/3_policies/resource/psp-must-run-as.yaml: --------------------------------------------------------------------------------
1 | apiVersion: policy/v1beta1
2 | kind: PodSecurityPolicy
3 | metadata:
4 |   name: default
5 | spec:
6 |   hostIPC: false
7 |   hostPID: false
8 |   hostNetwork: false
9 |   hostPorts:
10 |   - min: 10000
11 |     max: 11000
12 |   - min: 13000
13 |     max: 14000
14 |   privileged: false
15 |   readOnlyRootFilesystem: true
16 |   runAsUser:
17 |     rule: MustRunAs
18 |     ranges:
19 |     - min: 2
20 |       max: 2
21 |   fsGroup:
22 |     rule: MustRunAs
23 |     ranges:
24 |     - min: 2
25 |       max: 10
26 |     - min: 20
27 |       max: 30
28 |   supplementalGroups:
29 |     rule: MustRunAs
30 |     ranges:
31 |     - min: 2
32 |       max: 10
33 |     - min: 20
34 |       max: 30
35 |   seLinux:
36 |     rule: RunAsAny
37 |   volumes:
38 |   - '*'
39 | 
-------------------------------------------------------------------------------- /labs/3_policies/resource/psp-volumes.yaml: --------------------------------------------------------------------------------
1 | apiVersion: policy/v1beta1
2 | kind: PodSecurityPolicy
3 | metadata:
4 |   name: default
5 | spec:
6 |   runAsUser:
7 |     rule: RunAsAny
8 |   fsGroup:
9 |     rule: RunAsAny
10 |   supplementalGroups:
11 |     rule: RunAsAny
12 |   seLinux:
13 |     rule: RunAsAny
14 |   volumes:
15 |   - emptyDir
16 |   - configMap
17 |   - secret
18 |   - downwardAPI
19 |   - persistentVolumeClaim
20 | 
-------------------------------------------------------------------------------- /labs/3_policies/resource/role-use-psp.yaml: --------------------------------------------------------------------------------
1 | apiVersion: rbac.authorization.k8s.io/v1
2 | kind: Role
3 | metadata:
4 |   name: psp:unprivileged
5 | rules:
6 | - apiGroups: ['policy']
7 |   resources: ['podsecuritypolicies']
8 |   verbs: ['use']
9 |   resourceNames: ['example']
10 | 
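# Note: this Role only takes effect once it is bound to users, groups or
# service accounts through a RoleBinding; the PodSecurityPolicy admission
# plugin then authorizes those subjects' pods to "use" the PSP named in
# resourceNames. 'example' must match the name of an existing
# PodSecurityPolicy object.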
-------------------------------------------------------------------------------- /labs/4_computational_resources/TODO: --------------------------------------------------------------------------------
1 | Add exercise to slides!
2 | 
-------------------------------------------------------------------------------- /labs/4_computational_resources/ci.sh: --------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | set -euxo pipefail
4 | 
5 | DIR=$(cd "$(dirname "$0")"; pwd -P)
6 | 
7 | $DIR/ex1.sh
8 | $DIR/ex2-quota.sh
9 | $DIR/ex3-limitrange.sh
10 | 
-------------------------------------------------------------------------------- /labs/4_computational_resources/ex1.sh: --------------------------------------------------------------------------------
1 | #!/bin/sh
2 | 
3 | set -e
4 | set -x
5 | 
6 | NS="compute"
7 | 
8 | # GCP specific:
9 | # NODE="clus0-1"
10 | 
11 | NODE="kind-worker"
12 | 
13 | DIR=$(cd "$(dirname "$0")"; pwd -P)
14 | 
15 | KIND_CONTEXT="kind-kind"
16 | # Switch back to context kubernetes-admin@kubernetes
17 | kubectl config use-context "$KIND_CONTEXT"
18 | 
19 | # Run on a Kubernetes cluster with the LimitRange admission control plugin enabled
20 | # see "kubernetes in action" p405
21 | kubectl delete ns -l "compute=true"
22 | kubectl create namespace "$NS"
23 | kubectl label ns "$NS" "compute=true"
24 | 
25 | kubectl config set-context $(kubectl config current-context) --namespace=$NS
26 | 
27 | KUBIA_DIR="/tmp/kubernetes-in-action"
28 | if [ ! -d "$KUBIA_DIR" ]; then
29 |     git clone https://github.com/k8s-school/kubernetes-in-action.git /tmp/kubernetes-in-action
30 | 
31 | fi
32 | 
33 | cd "$KUBIA_DIR/Chapter14"
34 | POD="requests-pod"
35 | kubectl apply -f "$KUBIA_DIR"/Chapter14/"$POD".yaml
36 | kubectl wait --for=condition=Ready pods "$POD"
37 | 
38 | if timeout --foreground 3 kubectl exec -it "$POD" -- top
39 | then
40 |     ink -y "WARN: 'top' has exited for an unknown reason"
41 | else
42 |     ink "Exiting from 'top' command"
43 | fi
44 | 
45 | # INSPECTING A NODE’S CAPACITY
46 | # Exercise: flood the cluster CPU capacity by creating several CPU-hungry pods
47 | 
48 | for i in 2 3 4
49 | do
50 |     POD="requests-pod-$i"
51 |     cat <<EOF >/tmp/$POD.yaml
52 | apiVersion: v1
53 | kind: Pod
54 | metadata:
55 |   name: $POD
56 | spec:
57 |   containers:
58 |   - image: busybox
59 |     command: ["dd", "if=/dev/zero", "of=/dev/null"]
60 |     name: main
61 |     resources:
62 |       requests:
63 |         cpu: 1500m
64 |         memory: 20Mi
65 | EOF
66 |     kubectl apply -f "/tmp/requests-pod-$i.yaml"
67 |     if [ $i -ne 4 ]
68 |     then
69 |         kubectl wait --for=condition=Ready pods "$POD"
70 |         kubectl describe pod "$POD"
71 |         kubectl describe node "$NODE"
72 |     fi
73 | done
74 | 
75 | kubectl describe po requests-pod-4
76 | kubectl describe node "$NODE"
77 | kubectl delete po requests-pod-3
78 | kubectl get po
79 | kubectl delete pods requests-pod-4
80 | 
81 | POD="limited-pod"
82 | kubectl apply -f "$KUBIA_DIR"/Chapter14/"$POD".yaml
83 | kubectl wait --for=condition=Ready pods "$POD"
84 | kubectl describe pod "$POD"
85 | if timeout 3 kubectl exec -it "$POD" -- top
86 | then
87 |     ink -y "WARN: 'top' has exited for an unknown reason"
88 | else
89 |     ink "Exiting from 'top' command"
90 | fi
91 | 
92 | # LimitRange
93 | kubectl apply -f $DIR/manifest/local-storage-class.yaml
94 | kubectl apply -f "$KUBIA_DIR"/Chapter14/limits.yaml
95 | kubectl apply -f "$KUBIA_DIR"/Chapter14/limits-pod-too-big.yaml && \
96 |     ink -r "ERROR this command should have failed"
97 | kubectl apply -f "$KUBIA_DIR"/Chapter03/kubia-manual.yaml
98 | 
99 | # ResourceQuota
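# A ResourceQuota caps the aggregate resources (CPU, memory, storage, object
# counts) that a namespace may consume; once a compute quota is active, pods
# must declare requests/limits for the covered resources or the API server
# rejects them. Illustrative manifest (hypothetical name and values, not a
# file from this repo):
#
#   apiVersion: v1
#   kind: ResourceQuota
#   metadata:
#     name: cpu-mem-quota
#   spec:
#     hard:
#       requests.cpu: "1"
#       requests.memory: 1Gi
#       limits.cpu: "2"
#       limits.memory: 2Gi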
100 | kubectl apply -f "$KUBIA_DIR"/Chapter14/quota-cpu-memory.yaml
101 | kubectl describe quota
102 | # requests.storage is the overall max limit
103 | # https://kubernetes.io/docs/concepts/policy/resource-quotas/#storage-resource-quota
104 | # so there is an inconsistency in the example
105 | kubectl apply -f "$KUBIA_DIR"/Chapter14/quota-storage.yaml
106 | kubectl apply -f "$KUBIA_DIR"/Chapter14/quota-object-count.yaml
107 | kubectl apply -f "$KUBIA_DIR"/Chapter14/quota-scoped.yaml
108 | 
109 | kubectl config set-context $(kubectl config current-context) --namespace=default
110 | 
-------------------------------------------------------------------------------- /labs/4_computational_resources/ex2-quota.sh: --------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | set -euxo pipefail
4 | 
5 | kubectl delete ns -l "quota=true"
6 | NS="quota-object-example"
7 | kubectl create namespace "$NS"
8 | kubectl label ns "$NS" "quota=true"
9 | 
10 | kubectl apply -f https://k8s.io/examples/admin/resource/quota-objects.yaml --namespace="$NS"
11 | kubectl get resourcequota object-quota-demo --namespace="$NS" --output=yaml
12 | kubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc.yaml --namespace="$NS"
13 | kubectl get persistentvolumeclaims --namespace="$NS"
14 | kubectl apply -f https://k8s.io/examples/admin/resource/quota-objects-pvc-2.yaml --namespace="$NS" ||
15 |     ink -y "EXPECTED ERROR: PVC rejected because it would exceed the object quota"
16 | kubectl get persistentvolumeclaims --namespace="$NS"
17 | 
-------------------------------------------------------------------------------- /labs/4_computational_resources/ex3-limitrange.sh: --------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # See https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/
4 | 
5 | set -euxo pipefail
6 | 
7 | kubectl delete ns -l "limitrange=true"
8 | NS="constraints-cpu-example"
9 | kubectl create namespace "$NS"
10 | kubectl label ns "$NS" "limitrange=true"
11 | 
12 | kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints.yaml --namespace="$NS"
13 | kubectl get limitrange cpu-min-max-demo-lr --output=yaml --namespace="$NS"
14 | kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod.yaml --namespace="$NS"
15 | kubectl get pod constraints-cpu-demo --namespace="$NS"
16 | kubectl get pod constraints-cpu-demo --output=yaml --namespace="$NS"
17 | kubectl delete pod constraints-cpu-demo --namespace="$NS"
18 | 
19 | # Attempt to create a Pod that exceeds the maximum CPU constraint
20 | kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-2.yaml --namespace="$NS" || \
21 |     ink -y "EXPECTED ERROR: pod cpu limit exceeds the limitrange maximum"
22 | 
23 | # Attempt to create a Pod that does not meet the minimum CPU request
24 | kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-3.yaml --namespace="$NS" || \
25 |     ink -y "EXPECTED ERROR: pod cpu request is below the limitrange minimum"
26 | 
27 | # Create a Pod that does not specify any CPU request or limit
28 | kubectl apply -f https://k8s.io/examples/admin/resource/cpu-constraints-pod-4.yaml --namespace="$NS"
29 | kubectl get pod constraints-cpu-demo-4 --namespace="$NS" --output=yaml
30 | 
31 | 
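# Note: constraints-cpu-demo-4 declares no CPU request or limit, so the
# LimitRange admission plugin injects the namespace defaults; inspect
# .spec.containers[0].resources in the YAML output above to see the values
# that were added automatically.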
-------------------------------------------------------------------------------- /labs/4_computational_resources/manifest/local-storage-class.yaml: --------------------------------------------------------------------------------
1 | kind: StorageClass
2 | apiVersion: storage.k8s.io/v1
3 | metadata:
4 |   name: local-storage
5 | provisioner: kubernetes.io/no-provisioner
6 | volumeBindingMode: WaitForFirstConsumer
7 | 
-------------------------------------------------------------------------------- /labs/conf.version.sh: --------------------------------------------------------------------------------
1 | # Pinning image versions allows the use of imagePullPolicy=IfNotPresent,
2 | # which avoids hitting Docker Hub pull quotas
3 | ALPINE_VERSION="3.19"
4 | BUSYBOX_VERSION="1.36"
5 | KUBECTL_PROXY_VERSION="1.27.3"
6 | NGINX_VERSION="1.25.3"
7 | 
--------------------------------------------------------------------------------
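For reference, a minimal sketch of how a lab script can consume the pinned versions from conf.version.sh; the pod name `version-demo` is an illustrative assumption, not something the labs actually create:

```bash
#!/bin/bash
# Sketch: source the pinned versions so a locally cached image is reused
# thanks to imagePullPolicy=IfNotPresent (the pod name is hypothetical).
set -euxo pipefail

DIR=$(cd "$(dirname "$0")"; pwd -P)
. "$DIR/conf.version.sh"

kubectl run version-demo \
    --image="alpine:$ALPINE_VERSION" \
    --image-pull-policy=IfNotPresent \
    --restart=Never \
    -- sleep 3600
```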