├── .github └── workflows │ └── build-images.yaml ├── README.md ├── Vagrantfile ├── calico.yaml ├── charts ├── README.md ├── egg │ ├── .helmignore │ ├── Chart.yaml │ ├── templates │ │ ├── _helpers.tpl │ │ ├── configmap.yaml │ │ ├── deployment.yaml │ │ ├── networkpolicy.yaml │ │ └── service.yaml │ └── values.yaml ├── entry │ ├── .helmignore │ ├── Chart.yaml │ ├── templates │ │ ├── _helpers.tpl │ │ ├── configmap.yaml │ │ ├── deployment.yaml │ │ ├── hpa.yaml │ │ ├── ingress.yaml │ │ ├── networkpolicy.yaml │ │ ├── secret.yaml │ │ ├── service.yaml │ │ └── serviceaccount.yaml │ └── values.yaml ├── honk │ ├── .helmignore │ ├── Chart.yaml │ ├── templates │ │ ├── _helpers.tpl │ │ ├── configmap.yaml │ │ ├── deployment.yaml │ │ └── service.yaml │ └── values.yaml ├── joker │ ├── .helmignore │ ├── Chart.yaml │ ├── templates │ │ ├── _helpers.tpl │ │ ├── configmap.yaml │ │ ├── deployment.yaml │ │ ├── ingress.yaml │ │ ├── networkpolicy.yaml │ │ └── service.yaml │ └── values.yaml └── ssh │ ├── .helmignore │ ├── Chart.yaml │ ├── templates │ ├── _helpers.tpl │ ├── configmap.yaml │ ├── deployment.yaml │ ├── hpa.yaml │ ├── networkpolicy.yaml │ └── service.yaml │ └── values.yaml ├── config ├── create-cluster.sh ├── datadog-metric-np.yaml ├── docker-images ├── egg │ ├── Dockerfile │ ├── README.md │ └── server.js ├── entry │ ├── Dockerfile │ ├── README.md │ ├── etc │ │ └── ssh │ │ │ └── ssh_config │ └── web │ │ ├── hacker.css │ │ ├── index.html │ │ ├── index.php │ │ └── three-easter-eggs-svgrepo-com.svg └── ssh │ ├── Dockerfile │ ├── README.md │ └── sshd_config ├── install-with-helm.sh ├── k8s-easter-ctf.drawio ├── k8s-easter-ctf.png ├── key ├── ssh └── ssh.pub └── uninstall-with-helm.sh /.github/workflows/build-images.yaml: -------------------------------------------------------------------------------- 1 | name: Build images and push to dockerhub 2 | # This workflow is triggered on pushes to the repository. 
3 | on: [push] 4 | 5 | jobs: 6 | build: 7 | name: Build-all-and-Push-to-DH 8 | runs-on: ubuntu-latest 9 | steps: 10 | - uses: actions/checkout@v2 11 | - name: Build entry 12 | id: entry 13 | uses: docker/build-push-action@v1 14 | with: 15 | username: ${{ secrets.DOCKER_USERNAME }} 16 | password: ${{ secrets.DOCKER_PASSWORD }} 17 | path: docker-images/entry/ 18 | build_args: GHA_EASTER_EGG=${{ secrets.GHA_EASTER_EGG }} 19 | repository: nodyd/e20-entry 20 | tags: latest 21 | 22 | - name: Build ssh 23 | id: ssh 24 | uses: docker/build-push-action@v1 25 | with: 26 | username: ${{ secrets.DOCKER_USERNAME }} 27 | password: ${{ secrets.DOCKER_PASSWORD }} 28 | path: docker-images/ssh/ 29 | repository: nodyd/e20-ssh 30 | tags: latest 31 | 32 | - name: Build egg 33 | id: token 34 | uses: docker/build-push-action@v1 35 | with: 36 | username: ${{ secrets.DOCKER_USERNAME }} 37 | password: ${{ secrets.DOCKER_PASSWORD }} 38 | path: docker-images/egg/ 39 | repository: nodyd/e20-egg 40 | tags: latest 41 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes CTF 2 | 3 | These are all resources used to set up the Kubernetes Easter CTF. The CTF was hosted at http://k8s-ctf.rocks/ and ended with the end of Easter. The CTF itself ran on Amazon EKS. This repository contains a [Vagrantfile](Vagrantfile) (for [HashiCorp Vagrant](https://www.vagrantup.com/)) that allows you to set up the CTF locally. There might be some parts that are undocumented or not working perfectly. Feel free to reach out and we can fix it!
:-) 4 | 5 | ![Setup](k8s-easter-ctf.png) 6 | 7 | ## Setup with Vagrant 8 | 9 | To simplify the installation, a [Vagrantfile](Vagrantfile) is supplied that bootstraps the CTF locally on an Ubuntu VM with k3s. 10 | 11 | You can start it with: 12 | ```bash 13 | vagrant up 14 | ``` 15 | Even once the VM is up, the cluster needs some time to pull all images. The status of the deployment can be checked with the following commands: 16 | ```bash 17 | # Connect to vm 18 | vagrant ssh 19 | 20 | # Get status of pods 21 | kubectl get pods --all-namespaces 22 | ``` 23 | As soon as the status is `Running` or `Completed`, the cluster can be accessed at [http://localhost:8080](http://localhost:8080). 24 | 25 | ## Configuration 26 | 27 | Most of the configuration can be adjusted in the [config](config) file. The Vagrant setup depends on k3s and, according to the [documentation](https://rancher.com/docs/k3s/latest/en/installation/network-options/), needs some manual adjustment of the Calico deployment. 28 | 29 | ## Install k3s 30 | 31 | In case you want to deploy it on an existing machine, k3s can be installed as follows: 32 | 33 | ```bash 34 | . ./config 35 | curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--cluster-cidr=$POD_SUBNET --service-cidr=$SVC_SUBNET --write-kubeconfig-mode=644 --no-flannel" sudo -E sh - 36 | sleep 5 37 | kubectl apply -f calico.yaml 38 | mkdir -p ~/.kube 39 | ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config 40 | ``` 41 | 42 | ## Install Helm 43 | 44 | The Kubernetes resources are written as Helm 3 charts. The following commands install Helm 3: 45 | 46 | ```bash 47 | curl -fsSL -o ~/get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 48 | chmod 700 ~/get_helm.sh 49 | ~/get_helm.sh 50 | rm ~/get_helm.sh 51 | helm repo add stable https://kubernetes-charts.storage.googleapis.com/ 52 | ``` 53 | 54 | ## Deploy CTF 55 | 56 | The CTF can also be deployed with Helm 3 to an existing cluster using the bundled install script.
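The contents of `install-with-helm.sh` are not reproduced in this README, but given the `charts/` layout it plausibly boils down to one `helm install` per chart. The following is only a hedged sketch (chart names come from `charts/`; the loop and flags are assumptions, not the actual script):

```shell
#!/bin/sh
# Hypothetical sketch of what install-with-helm.sh might do; the real script
# lives in the repo root. HELM defaults to "echo helm" so this only prints
# the commands -- set HELM=helm to actually install the releases.
HELM="${HELM:-echo helm}"
for chart in egg entry honk joker ssh; do
  $HELM install "$chart" "charts/$chart"   # one release per challenge chart
done
```

In any case, the supported entry point is the bundled script: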
57 | 58 | ```bash 59 | ./install-with-helm.sh 60 | ``` 61 | 62 | And don't forget to adjust the configuration in [config](config) first. 63 | 64 | ## Docker Images 65 | 66 | The Dockerfiles are stored in the [docker-images](docker-images/) directory. The images are built automatically by GitHub Actions and published on Docker Hub: 67 | 68 | * [nodyd/e20-entry](https://hub.docker.com/r/nodyd/e20-entry) 69 | * [nodyd/e20-ssh](https://hub.docker.com/r/nodyd/e20-ssh) 70 | * [nodyd/e20-egg](https://hub.docker.com/r/nodyd/e20-egg) 71 | 72 | ## Fixed issues 73 | 74 | - Helm 3 stores all details about the different deployments in Kubernetes Secrets. Since I stored one EGG in the Kubernetes Secrets API, the Helm release secrets were available as well. According to [Issue #6409](https://github.com/helm/helm/issues/6409) you can decode the complete deployment with 2x base64 decode + gunzip (`kubectl get secrets -o json | jq .data.release -r | base64 --decode | base64 --decode | gunzip -`) and all the Kubernetes magic was gone. xD I deleted the Secrets manually during the CTF to avoid the info leak. For now, I have relocated the Helm meta information to another namespace. 75 | - I deployed [Datadog Cloud Monitoring](https://www.datadoghq.com/) for the very first time on a cluster. As an operator it is nice to have fancy charts and stats, to name one advantage. Another advantage, for the CTF participants, was the service `kube-state-metrics`, which exposed the whole log of my overall deployment. After deploying an additional [NetworkPolicy](datadog-metric-np.yaml), the service was no longer available. 76 | 77 | 78 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | 2 | ################################################################################################ 3 | # Don't forget to define EGGS in config !!!!!
4 | ################################################################################################ 5 | 6 | Vagrant.configure("2") do |config| 7 | # define system 8 | config.vm.box = "bento/ubuntu-18.04" 9 | if Vagrant::Util::Platform.darwin? 10 | config.vm.provider "vmware_desktop" do |v| 11 | v.clone_directory = "~/Documents/Virtual\ Machines.localized/" 12 | end 13 | end 14 | 15 | # Copy repo 16 | config.vm.provision "file", 17 | source: ".", 18 | destination: "/home/vagrant/" 19 | 20 | # install k3s 21 | config.vm.provision "shell", 22 | privileged: false, 23 | inline: <<-SHELL 24 | . ./config 25 | curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--cluster-cidr=$POD_SUBNET --service-cidr=$SVC_SUBNET --write-kubeconfig-mode=644 --no-flannel" sudo -E sh - 26 | mkdir -p ~/.kube 27 | ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config 28 | sleep 3 29 | kubectl apply -f calico.yaml 30 | SHELL 31 | 32 | # install helm 33 | config.vm.provision "shell", 34 | privileged: false, 35 | inline: <<-SHELL 36 | curl -fsSL -o ~/get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 37 | chmod 700 ~/get_helm.sh 38 | ~/get_helm.sh 39 | rm ~/get_helm.sh 40 | SHELL 41 | 42 | 43 | # deploy challenges 44 | config.vm.provision "shell", 45 | privileged: false, 46 | inline: <<-SHELL 47 | chmod +x ./install-with-helm.sh 48 | ./install-with-helm.sh 49 | SHELL 50 | 51 | # Make challenges accessible 52 | config.vm.network "forwarded_port", 53 | guest: 80, 54 | host: 8080 55 | 56 | end 57 | 58 | -------------------------------------------------------------------------------- /calico.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Source: calico/templates/calico-config.yaml 3 | # This ConfigMap is used to configure a self-hosted Calico installation. 4 | kind: ConfigMap 5 | apiVersion: v1 6 | metadata: 7 | name: calico-config 8 | namespace: kube-system 9 | data: 10 | # Typha is disabled. 
11 | typha_service_name: "none" 12 | # Configure the backend to use. 13 | calico_backend: "bird" 14 | 15 | # Configure the MTU to use 16 | veth_mtu: "1440" 17 | 18 | # The CNI network configuration to install on each node. The special 19 | # values in this config will be automatically populated. 20 | cni_network_config: |- 21 | { 22 | "name": "k8s-pod-network", 23 | "cniVersion": "0.3.1", 24 | "plugins": [ 25 | { 26 | "type": "calico", 27 | "log_level": "info", 28 | "datastore_type": "kubernetes", 29 | "nodename": "__KUBERNETES_NODE_NAME__", 30 | "mtu": __CNI_MTU__, 31 | "ipam": { 32 | "type": "calico-ipam" 33 | }, 34 | "policy": { 35 | "type": "k8s" 36 | }, 37 | "kubernetes": { 38 | "kubeconfig": "__KUBECONFIG_FILEPATH__" 39 | }, 40 | "container_settings": { 41 | "allow_ip_forwarding": true 42 | } 43 | }, 44 | { 45 | "type": "portmap", 46 | "snat": true, 47 | "capabilities": {"portMappings": true} 48 | } 49 | ] 50 | } 51 | 52 | --- 53 | # Source: calico/templates/kdd-crds.yaml 54 | apiVersion: apiextensions.k8s.io/v1beta1 55 | kind: CustomResourceDefinition 56 | metadata: 57 | name: felixconfigurations.crd.projectcalico.org 58 | spec: 59 | scope: Cluster 60 | group: crd.projectcalico.org 61 | version: v1 62 | names: 63 | kind: FelixConfiguration 64 | plural: felixconfigurations 65 | singular: felixconfiguration 66 | --- 67 | 68 | apiVersion: apiextensions.k8s.io/v1beta1 69 | kind: CustomResourceDefinition 70 | metadata: 71 | name: ipamblocks.crd.projectcalico.org 72 | spec: 73 | scope: Cluster 74 | group: crd.projectcalico.org 75 | version: v1 76 | names: 77 | kind: IPAMBlock 78 | plural: ipamblocks 79 | singular: ipamblock 80 | 81 | --- 82 | 83 | apiVersion: apiextensions.k8s.io/v1beta1 84 | kind: CustomResourceDefinition 85 | metadata: 86 | name: blockaffinities.crd.projectcalico.org 87 | spec: 88 | scope: Cluster 89 | group: crd.projectcalico.org 90 | version: v1 91 | names: 92 | kind: BlockAffinity 93 | plural: blockaffinities 94 | singular: blockaffinity 95 | 
96 | --- 97 | 98 | apiVersion: apiextensions.k8s.io/v1beta1 99 | kind: CustomResourceDefinition 100 | metadata: 101 | name: ipamhandles.crd.projectcalico.org 102 | spec: 103 | scope: Cluster 104 | group: crd.projectcalico.org 105 | version: v1 106 | names: 107 | kind: IPAMHandle 108 | plural: ipamhandles 109 | singular: ipamhandle 110 | 111 | --- 112 | 113 | apiVersion: apiextensions.k8s.io/v1beta1 114 | kind: CustomResourceDefinition 115 | metadata: 116 | name: ipamconfigs.crd.projectcalico.org 117 | spec: 118 | scope: Cluster 119 | group: crd.projectcalico.org 120 | version: v1 121 | names: 122 | kind: IPAMConfig 123 | plural: ipamconfigs 124 | singular: ipamconfig 125 | 126 | --- 127 | 128 | apiVersion: apiextensions.k8s.io/v1beta1 129 | kind: CustomResourceDefinition 130 | metadata: 131 | name: bgppeers.crd.projectcalico.org 132 | spec: 133 | scope: Cluster 134 | group: crd.projectcalico.org 135 | version: v1 136 | names: 137 | kind: BGPPeer 138 | plural: bgppeers 139 | singular: bgppeer 140 | 141 | --- 142 | 143 | apiVersion: apiextensions.k8s.io/v1beta1 144 | kind: CustomResourceDefinition 145 | metadata: 146 | name: bgpconfigurations.crd.projectcalico.org 147 | spec: 148 | scope: Cluster 149 | group: crd.projectcalico.org 150 | version: v1 151 | names: 152 | kind: BGPConfiguration 153 | plural: bgpconfigurations 154 | singular: bgpconfiguration 155 | 156 | --- 157 | 158 | apiVersion: apiextensions.k8s.io/v1beta1 159 | kind: CustomResourceDefinition 160 | metadata: 161 | name: ippools.crd.projectcalico.org 162 | spec: 163 | scope: Cluster 164 | group: crd.projectcalico.org 165 | version: v1 166 | names: 167 | kind: IPPool 168 | plural: ippools 169 | singular: ippool 170 | 171 | --- 172 | 173 | apiVersion: apiextensions.k8s.io/v1beta1 174 | kind: CustomResourceDefinition 175 | metadata: 176 | name: hostendpoints.crd.projectcalico.org 177 | spec: 178 | scope: Cluster 179 | group: crd.projectcalico.org 180 | version: v1 181 | names: 182 | kind: HostEndpoint 183 
| plural: hostendpoints 184 | singular: hostendpoint 185 | 186 | --- 187 | 188 | apiVersion: apiextensions.k8s.io/v1beta1 189 | kind: CustomResourceDefinition 190 | metadata: 191 | name: clusterinformations.crd.projectcalico.org 192 | spec: 193 | scope: Cluster 194 | group: crd.projectcalico.org 195 | version: v1 196 | names: 197 | kind: ClusterInformation 198 | plural: clusterinformations 199 | singular: clusterinformation 200 | 201 | --- 202 | 203 | apiVersion: apiextensions.k8s.io/v1beta1 204 | kind: CustomResourceDefinition 205 | metadata: 206 | name: globalnetworkpolicies.crd.projectcalico.org 207 | spec: 208 | scope: Cluster 209 | group: crd.projectcalico.org 210 | version: v1 211 | names: 212 | kind: GlobalNetworkPolicy 213 | plural: globalnetworkpolicies 214 | singular: globalnetworkpolicy 215 | 216 | --- 217 | 218 | apiVersion: apiextensions.k8s.io/v1beta1 219 | kind: CustomResourceDefinition 220 | metadata: 221 | name: globalnetworksets.crd.projectcalico.org 222 | spec: 223 | scope: Cluster 224 | group: crd.projectcalico.org 225 | version: v1 226 | names: 227 | kind: GlobalNetworkSet 228 | plural: globalnetworksets 229 | singular: globalnetworkset 230 | 231 | --- 232 | 233 | apiVersion: apiextensions.k8s.io/v1beta1 234 | kind: CustomResourceDefinition 235 | metadata: 236 | name: networkpolicies.crd.projectcalico.org 237 | spec: 238 | scope: Namespaced 239 | group: crd.projectcalico.org 240 | version: v1 241 | names: 242 | kind: NetworkPolicy 243 | plural: networkpolicies 244 | singular: networkpolicy 245 | 246 | --- 247 | 248 | apiVersion: apiextensions.k8s.io/v1beta1 249 | kind: CustomResourceDefinition 250 | metadata: 251 | name: networksets.crd.projectcalico.org 252 | spec: 253 | scope: Namespaced 254 | group: crd.projectcalico.org 255 | version: v1 256 | names: 257 | kind: NetworkSet 258 | plural: networksets 259 | singular: networkset 260 | --- 261 | # Source: calico/templates/rbac.yaml 262 | 263 | # Include a clusterrole for the kube-controllers 
component, 264 | # and bind it to the calico-kube-controllers serviceaccount. 265 | kind: ClusterRole 266 | apiVersion: rbac.authorization.k8s.io/v1 267 | metadata: 268 | name: calico-kube-controllers 269 | rules: 270 | # Nodes are watched to monitor for deletions. 271 | - apiGroups: [""] 272 | resources: 273 | - nodes 274 | verbs: 275 | - watch 276 | - list 277 | - get 278 | # Pods are queried to check for existence. 279 | - apiGroups: [""] 280 | resources: 281 | - pods 282 | verbs: 283 | - get 284 | # IPAM resources are manipulated when nodes are deleted. 285 | - apiGroups: ["crd.projectcalico.org"] 286 | resources: 287 | - ippools 288 | verbs: 289 | - list 290 | - apiGroups: ["crd.projectcalico.org"] 291 | resources: 292 | - blockaffinities 293 | - ipamblocks 294 | - ipamhandles 295 | verbs: 296 | - get 297 | - list 298 | - create 299 | - update 300 | - delete 301 | # Needs access to update clusterinformations. 302 | - apiGroups: ["crd.projectcalico.org"] 303 | resources: 304 | - clusterinformations 305 | verbs: 306 | - get 307 | - create 308 | - update 309 | --- 310 | kind: ClusterRoleBinding 311 | apiVersion: rbac.authorization.k8s.io/v1 312 | metadata: 313 | name: calico-kube-controllers 314 | roleRef: 315 | apiGroup: rbac.authorization.k8s.io 316 | kind: ClusterRole 317 | name: calico-kube-controllers 318 | subjects: 319 | - kind: ServiceAccount 320 | name: calico-kube-controllers 321 | namespace: kube-system 322 | --- 323 | # Include a clusterrole for the calico-node DaemonSet, 324 | # and bind it to the calico-node serviceaccount. 325 | kind: ClusterRole 326 | apiVersion: rbac.authorization.k8s.io/v1 327 | metadata: 328 | name: calico-node 329 | rules: 330 | # The CNI plugin needs to get pods, nodes, and namespaces. 
331 | - apiGroups: [""] 332 | resources: 333 | - pods 334 | - nodes 335 | - namespaces 336 | verbs: 337 | - get 338 | - apiGroups: [""] 339 | resources: 340 | - endpoints 341 | - services 342 | verbs: 343 | # Used to discover service IPs for advertisement. 344 | - watch 345 | - list 346 | # Used to discover Typhas. 347 | - get 348 | - apiGroups: [""] 349 | resources: 350 | - nodes/status 351 | verbs: 352 | # Needed for clearing NodeNetworkUnavailable flag. 353 | - patch 354 | # Calico stores some configuration information in node annotations. 355 | - update 356 | # Watch for changes to Kubernetes NetworkPolicies. 357 | - apiGroups: ["networking.k8s.io"] 358 | resources: 359 | - networkpolicies 360 | verbs: 361 | - watch 362 | - list 363 | # Used by Calico for policy information. 364 | - apiGroups: [""] 365 | resources: 366 | - pods 367 | - namespaces 368 | - serviceaccounts 369 | verbs: 370 | - list 371 | - watch 372 | # The CNI plugin patches pods/status. 373 | - apiGroups: [""] 374 | resources: 375 | - pods/status 376 | verbs: 377 | - patch 378 | # Calico monitors various CRDs for config. 379 | - apiGroups: ["crd.projectcalico.org"] 380 | resources: 381 | - globalfelixconfigs 382 | - felixconfigurations 383 | - bgppeers 384 | - globalbgpconfigs 385 | - bgpconfigurations 386 | - ippools 387 | - ipamblocks 388 | - globalnetworkpolicies 389 | - globalnetworksets 390 | - networkpolicies 391 | - networksets 392 | - clusterinformations 393 | - hostendpoints 394 | - blockaffinities 395 | verbs: 396 | - get 397 | - list 398 | - watch 399 | # Calico must create and update some CRDs on startup. 400 | - apiGroups: ["crd.projectcalico.org"] 401 | resources: 402 | - ippools 403 | - felixconfigurations 404 | - clusterinformations 405 | verbs: 406 | - create 407 | - update 408 | # Calico stores some configuration information on the node. 
409 | - apiGroups: [""] 410 | resources: 411 | - nodes 412 | verbs: 413 | - get 414 | - list 415 | - watch 416 | # These permissions are only required for upgrade from v2.6, and can 417 | # be removed after upgrade or on fresh installations. 418 | - apiGroups: ["crd.projectcalico.org"] 419 | resources: 420 | - bgpconfigurations 421 | - bgppeers 422 | verbs: 423 | - create 424 | - update 425 | # These permissions are required for Calico CNI to perform IPAM allocations. 426 | - apiGroups: ["crd.projectcalico.org"] 427 | resources: 428 | - blockaffinities 429 | - ipamblocks 430 | - ipamhandles 431 | verbs: 432 | - get 433 | - list 434 | - create 435 | - update 436 | - delete 437 | - apiGroups: ["crd.projectcalico.org"] 438 | resources: 439 | - ipamconfigs 440 | verbs: 441 | - get 442 | # Block affinities must also be watchable by confd for route aggregation. 443 | - apiGroups: ["crd.projectcalico.org"] 444 | resources: 445 | - blockaffinities 446 | verbs: 447 | - watch 448 | # The Calico IPAM migration needs to get daemonsets. These permissions can be 449 | # removed if not upgrading from an installation using host-local IPAM. 450 | - apiGroups: ["apps"] 451 | resources: 452 | - daemonsets 453 | verbs: 454 | - get 455 | --- 456 | apiVersion: rbac.authorization.k8s.io/v1 457 | kind: ClusterRoleBinding 458 | metadata: 459 | name: calico-node 460 | roleRef: 461 | apiGroup: rbac.authorization.k8s.io 462 | kind: ClusterRole 463 | name: calico-node 464 | subjects: 465 | - kind: ServiceAccount 466 | name: calico-node 467 | namespace: kube-system 468 | 469 | --- 470 | # Source: calico/templates/calico-node.yaml 471 | # This manifest installs the calico-node container, as well 472 | # as the CNI plugins and network config on 473 | # each master and worker node in a Kubernetes cluster.
474 | kind: DaemonSet 475 | apiVersion: apps/v1 476 | metadata: 477 | name: calico-node 478 | namespace: kube-system 479 | labels: 480 | k8s-app: calico-node 481 | spec: 482 | selector: 483 | matchLabels: 484 | k8s-app: calico-node 485 | updateStrategy: 486 | type: RollingUpdate 487 | rollingUpdate: 488 | maxUnavailable: 1 489 | template: 490 | metadata: 491 | labels: 492 | k8s-app: calico-node 493 | annotations: 494 | # This, along with the CriticalAddonsOnly toleration below, 495 | # marks the pod as a critical add-on, ensuring it gets 496 | # priority scheduling and that its resources are reserved 497 | # if it ever gets evicted. 498 | scheduler.alpha.kubernetes.io/critical-pod: '' 499 | spec: 500 | nodeSelector: 501 | beta.kubernetes.io/os: linux 502 | hostNetwork: true 503 | tolerations: 504 | # Make sure calico-node gets scheduled on all nodes. 505 | - effect: NoSchedule 506 | operator: Exists 507 | # Mark the pod as a critical add-on for rescheduling. 508 | - key: CriticalAddonsOnly 509 | operator: Exists 510 | - effect: NoExecute 511 | operator: Exists 512 | serviceAccountName: calico-node 513 | # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force 514 | # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. 515 | terminationGracePeriodSeconds: 0 516 | priorityClassName: system-node-critical 517 | initContainers: 518 | # This container performs upgrade from host-local IPAM to calico-ipam. 519 | # It can be deleted if this is a fresh installation, or if you have already 520 | # upgraded to use calico-ipam. 
521 | - name: upgrade-ipam 522 | image: calico/cni:v3.11.1 523 | command: ["/opt/cni/bin/calico-ipam", "-upgrade"] 524 | env: 525 | - name: KUBERNETES_NODE_NAME 526 | valueFrom: 527 | fieldRef: 528 | fieldPath: spec.nodeName 529 | - name: CALICO_NETWORKING_BACKEND 530 | valueFrom: 531 | configMapKeyRef: 532 | name: calico-config 533 | key: calico_backend 534 | volumeMounts: 535 | - mountPath: /var/lib/cni/networks 536 | name: host-local-net-dir 537 | - mountPath: /host/opt/cni/bin 538 | name: cni-bin-dir 539 | securityContext: 540 | privileged: true 541 | # This container installs the CNI binaries 542 | # and CNI network config file on each node. 543 | - name: install-cni 544 | image: calico/cni:v3.11.1 545 | command: ["/install-cni.sh"] 546 | env: 547 | # Name of the CNI config file to create. 548 | - name: CNI_CONF_NAME 549 | value: "10-calico.conflist" 550 | # The CNI network config to install on each node. 551 | - name: CNI_NETWORK_CONFIG 552 | valueFrom: 553 | configMapKeyRef: 554 | name: calico-config 555 | key: cni_network_config 556 | # Set the hostname based on the k8s node name. 557 | - name: KUBERNETES_NODE_NAME 558 | valueFrom: 559 | fieldRef: 560 | fieldPath: spec.nodeName 561 | # CNI MTU Config variable 562 | - name: CNI_MTU 563 | valueFrom: 564 | configMapKeyRef: 565 | name: calico-config 566 | key: veth_mtu 567 | # Prevents the container from sleeping forever. 568 | - name: SLEEP 569 | value: "false" 570 | volumeMounts: 571 | - mountPath: /host/opt/cni/bin 572 | name: cni-bin-dir 573 | - mountPath: /host/etc/cni/net.d 574 | name: cni-net-dir 575 | securityContext: 576 | privileged: true 577 | # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes 578 | # to communicate with Felix over the Policy Sync API. 
579 | - name: flexvol-driver 580 | image: calico/pod2daemon-flexvol:v3.11.1 581 | volumeMounts: 582 | - name: flexvol-driver-host 583 | mountPath: /host/driver 584 | securityContext: 585 | privileged: true 586 | containers: 587 | # Runs calico-node container on each Kubernetes node. This 588 | # container programs network policy and routes on each 589 | # host. 590 | - name: calico-node 591 | image: calico/node:v3.11.1 592 | env: 593 | # Use Kubernetes API as the backing datastore. 594 | - name: DATASTORE_TYPE 595 | value: "kubernetes" 596 | # Wait for the datastore. 597 | - name: WAIT_FOR_DATASTORE 598 | value: "true" 599 | # Set based on the k8s node name. 600 | - name: NODENAME 601 | valueFrom: 602 | fieldRef: 603 | fieldPath: spec.nodeName 604 | # Choose the backend to use. 605 | - name: CALICO_NETWORKING_BACKEND 606 | valueFrom: 607 | configMapKeyRef: 608 | name: calico-config 609 | key: calico_backend 610 | # Cluster type to identify the deployment type 611 | - name: CLUSTER_TYPE 612 | value: "k8s,bgp" 613 | # Auto-detect the BGP IP address. 614 | - name: IP 615 | value: "autodetect" 616 | # Enable IPIP 617 | - name: CALICO_IPV4POOL_IPIP 618 | value: "Always" 619 | # Set MTU for tunnel device used if ipip is enabled 620 | - name: FELIX_IPINIPMTU 621 | valueFrom: 622 | configMapKeyRef: 623 | name: calico-config 624 | key: veth_mtu 625 | # The default IPv4 pool to create on startup if none exists. Pod IPs will be 626 | # chosen from this range. Changing this value after installation will have 627 | # no effect. This should fall within `--cluster-cidr`. 628 | - name: CALICO_IPV4POOL_CIDR 629 | value: "10.1.0.0/16" 630 | # Disable file logging so `kubectl logs` works. 631 | - name: CALICO_DISABLE_FILE_LOGGING 632 | value: "true" 633 | # Set Felix endpoint to host default action to ACCEPT. 634 | - name: FELIX_DEFAULTENDPOINTTOHOSTACTION 635 | value: "ACCEPT" 636 | # Disable IPv6 on Kubernetes. 
637 | - name: FELIX_IPV6SUPPORT 638 | value: "false" 639 | # Set Felix logging to "info" 640 | - name: FELIX_LOGSEVERITYSCREEN 641 | value: "info" 642 | - name: FELIX_HEALTHENABLED 643 | value: "true" 644 | securityContext: 645 | privileged: true 646 | resources: 647 | requests: 648 | cpu: 250m 649 | livenessProbe: 650 | exec: 651 | command: 652 | - /bin/calico-node 653 | - -felix-live 654 | - -bird-live 655 | periodSeconds: 10 656 | initialDelaySeconds: 10 657 | failureThreshold: 6 658 | readinessProbe: 659 | exec: 660 | command: 661 | - /bin/calico-node 662 | - -felix-ready 663 | - -bird-ready 664 | periodSeconds: 10 665 | volumeMounts: 666 | - mountPath: /lib/modules 667 | name: lib-modules 668 | readOnly: true 669 | - mountPath: /run/xtables.lock 670 | name: xtables-lock 671 | readOnly: false 672 | - mountPath: /var/run/calico 673 | name: var-run-calico 674 | readOnly: false 675 | - mountPath: /var/lib/calico 676 | name: var-lib-calico 677 | readOnly: false 678 | - name: policysync 679 | mountPath: /var/run/nodeagent 680 | volumes: 681 | # Used by calico-node. 682 | - name: lib-modules 683 | hostPath: 684 | path: /lib/modules 685 | - name: var-run-calico 686 | hostPath: 687 | path: /var/run/calico 688 | - name: var-lib-calico 689 | hostPath: 690 | path: /var/lib/calico 691 | - name: xtables-lock 692 | hostPath: 693 | path: /run/xtables.lock 694 | type: FileOrCreate 695 | # Used to install CNI. 696 | - name: cni-bin-dir 697 | hostPath: 698 | path: /opt/cni/bin 699 | - name: cni-net-dir 700 | hostPath: 701 | path: /etc/cni/net.d 702 | # Mount in the directory for host-local IPAM allocations. This is 703 | # used when upgrading from host-local to calico-ipam, and can be removed 704 | # if not using the upgrade-ipam init container. 
705 | - name: host-local-net-dir 706 | hostPath: 707 | path: /var/lib/cni/networks 708 | # Used to create per-pod Unix Domain Sockets 709 | - name: policysync 710 | hostPath: 711 | type: DirectoryOrCreate 712 | path: /var/run/nodeagent 713 | # Used to install Flex Volume Driver 714 | - name: flexvol-driver-host 715 | hostPath: 716 | type: DirectoryOrCreate 717 | path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds 718 | --- 719 | 720 | apiVersion: v1 721 | kind: ServiceAccount 722 | metadata: 723 | name: calico-node 724 | namespace: kube-system 725 | 726 | --- 727 | # Source: calico/templates/calico-kube-controllers.yaml 728 | 729 | # See https://github.com/projectcalico/kube-controllers 730 | apiVersion: apps/v1 731 | kind: Deployment 732 | metadata: 733 | name: calico-kube-controllers 734 | namespace: kube-system 735 | labels: 736 | k8s-app: calico-kube-controllers 737 | spec: 738 | # The controllers can only have a single active instance. 739 | replicas: 1 740 | selector: 741 | matchLabels: 742 | k8s-app: calico-kube-controllers 743 | strategy: 744 | type: Recreate 745 | template: 746 | metadata: 747 | name: calico-kube-controllers 748 | namespace: kube-system 749 | labels: 750 | k8s-app: calico-kube-controllers 751 | annotations: 752 | scheduler.alpha.kubernetes.io/critical-pod: '' 753 | spec: 754 | nodeSelector: 755 | beta.kubernetes.io/os: linux 756 | tolerations: 757 | # Mark the pod as a critical add-on for rescheduling. 758 | - key: CriticalAddonsOnly 759 | operator: Exists 760 | - key: node-role.kubernetes.io/master 761 | effect: NoSchedule 762 | serviceAccountName: calico-kube-controllers 763 | priorityClassName: system-cluster-critical 764 | containers: 765 | - name: calico-kube-controllers 766 | image: calico/kube-controllers:v3.11.1 767 | env: 768 | # Choose which controllers to run. 
769 | - name: ENABLED_CONTROLLERS 770 | value: node 771 | - name: DATASTORE_TYPE 772 | value: kubernetes 773 | readinessProbe: 774 | exec: 775 | command: 776 | - /usr/bin/check-status 777 | - -r 778 | 779 | --- 780 | 781 | apiVersion: v1 782 | kind: ServiceAccount 783 | metadata: 784 | name: calico-kube-controllers 785 | namespace: kube-system 786 | --- 787 | # Source: calico/templates/calico-etcd-secrets.yaml 788 | 789 | --- 790 | # Source: calico/templates/calico-typha.yaml 791 | 792 | --- 793 | # Source: calico/templates/configure-canal.yaml 794 | 795 | 796 | -------------------------------------------------------------------------------- /charts/README.md: -------------------------------------------------------------------------------- 1 | # Challenges 2 | 3 | ## entry 4 | 5 | - Web frontend with RCE 6 | - Resources: 7 | - deployment nodyd/entry 8 | - service `entry` 9 | - ingress `*` 10 | - secret ssh-key /var/run/secrets/kubernetes.io/ssh 11 | - egg /entry/cfg/config.php 12 | 13 | 14 | ## ssh 15 | 16 | ## token 17 | 18 | ## joker -------------------------------------------------------------------------------- /charts/egg/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /charts/egg/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: egg 3 | description: A Helm chart for Kubernetes 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed. 13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | version: 0.1.0 18 | 19 | # This is the version number of the application being deployed. This version number should be 20 | # incremented each time you make changes to the application. 21 | appVersion: 1.16.0 22 | -------------------------------------------------------------------------------- /charts/egg/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "egg.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 
11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If release name contains chart name it will be used as a full name. 13 | */}} 14 | {{- define "egg.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 29 | */}} 30 | {{- define "egg.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Common labels 36 | */}} 37 | {{- define "egg.labels" -}} 38 | helm.sh/chart: {{ include "egg.chart" . }} 39 | {{ include "egg.selectorLabels" . }} 40 | {{- if .Chart.AppVersion }} 41 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 42 | {{- end }} 43 | app.kubernetes.io/managed-by: {{ .Release.Service }} 44 | {{- end -}} 45 | 46 | {{/* 47 | Selector labels 48 | */}} 49 | {{- define "egg.selectorLabels" -}} 50 | app.kubernetes.io/name: {{ include "egg.name" . }} 51 | app.kubernetes.io/instance: {{ .Release.Name }} 52 | {{- end -}} 53 | 54 | {{/* 55 | Create the name of the service account to use 56 | */}} 57 | {{- define "egg.serviceAccountName" -}} 58 | {{- if .Values.serviceAccount.create -}} 59 | {{ default (include "egg.fullname" .) 
.Values.serviceAccount.name }} 60 | {{- else -}} 61 | {{ default "default" .Values.serviceAccount.name }} 62 | {{- end -}} 63 | {{- end -}} 64 | -------------------------------------------------------------------------------- /charts/egg/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: "{{ .Chart.Name }}-egg" 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | app: {{ template "egg.fullname" . }} 8 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 9 | release: "{{ .Release.Name }}" 10 | heritage: "{{ .Release.Service }}" 11 | data: 12 | EGG: {{ .Values.egg.env | quote }} 13 | -------------------------------------------------------------------------------- /charts/egg/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "egg.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "egg.labels" . | nindent 4 }} 8 | spec: 9 | replicas: {{ .Values.replicaCount }} 10 | selector: 11 | matchLabels: 12 | {{- include "egg.selectorLabels" . | nindent 6 }} 13 | template: 14 | metadata: 15 | labels: 16 | {{- include "egg.selectorLabels" . 
| nindent 8 }} 17 | spec: 18 | securityContext: 19 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 20 | containers: 21 | - name: {{ .Chart.Name }} 22 | securityContext: 23 | {{- toYaml .Values.securityContext | nindent 12 }} 24 | image: "{{ .Values.image.repository }}" 25 | imagePullPolicy: {{ .Values.image.pullPolicy }} 26 | ports: 27 | - name: http 28 | containerPort: 8080 29 | protocol: TCP 30 | envFrom: 31 | - configMapRef: 32 | name: "{{ .Chart.Name }}-egg" 33 | resources: 34 | {{- toYaml .Values.resources | nindent 12 }} 35 | -------------------------------------------------------------------------------- /charts/egg/templates/networkpolicy.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: networking.k8s.io/v1 3 | metadata: 4 | name: {{ template "egg.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | app: {{ template "egg.name" . }} 8 | chart: {{ template "egg.chart" . }} 9 | release: {{ .Release.Name }} 10 | heritage: {{ .Release.Service }} 11 | spec: 12 | podSelector: 13 | matchLabels: 14 | app.kubernetes.io/name: {{ template "egg.name" . }} 15 | policyTypes: 16 | - Ingress 17 | ingress: 18 | - from: 19 | - podSelector: 20 | matchLabels: 21 | app.kubernetes.io/name: ssh 22 | ports: 23 | - protocol: TCP 24 | port: 8080 25 | -------------------------------------------------------------------------------- /charts/egg/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "egg.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "egg.labels" . | nindent 4 }} 8 | spec: 9 | ports: 10 | - port: 80 11 | targetPort: http 12 | protocol: TCP 13 | name: http 14 | selector: 15 | {{- include "egg.selectorLabels" . 
| nindent 4 }} 16 | -------------------------------------------------------------------------------- /charts/egg/values.yaml: -------------------------------------------------------------------------------- 1 | replicaCount: 1 2 | 3 | image: 4 | repository: docker.io/nodyd/e20-egg:latest 5 | pullPolicy: Always 6 | 7 | podSecurityContext: {} 8 | # fsGroup: 2000 9 | 10 | securityContext: {} 11 | # capabilities: 12 | # drop: 13 | # - ALL 14 | # readOnlyRootFilesystem: true 15 | # runAsNonRoot: true 16 | # runAsUser: 1000 17 | 18 | egg: 19 | env: "11111111" 20 | 21 | namespace: "foo" 22 | 23 | resources: 24 | limits: 25 | cpu: 100m 26 | memory: 128Mi 27 | requests: 28 | cpu: 100m 29 | memory: 128Mi 30 | -------------------------------------------------------------------------------- /charts/entry/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /charts/entry/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: entry 3 | description: The entry for the Easter challenge 4 | type: application 5 | version: 0.1.0 6 | appVersion: 1.16.0 7 | -------------------------------------------------------------------------------- /charts/entry/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 
4 | */}} 5 | {{- define "entry.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If release name contains chart name it will be used as a full name. 13 | */}} 14 | {{- define "entry.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 29 | */}} 30 | {{- define "entry.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Common labels 36 | */}} 37 | {{- define "entry.labels" -}} 38 | helm.sh/chart: {{ include "entry.chart" . }} 39 | {{ include "entry.selectorLabels" . }} 40 | {{- if .Chart.AppVersion }} 41 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 42 | {{- end }} 43 | app.kubernetes.io/managed-by: {{ .Release.Service }} 44 | {{- end -}} 45 | 46 | {{/* 47 | Selector labels 48 | */}} 49 | {{- define "entry.selectorLabels" -}} 50 | app.kubernetes.io/name: {{ include "entry.name" . }} 51 | app.kubernetes.io/instance: {{ .Release.Name }} 52 | {{- end -}} 53 | 54 | {{/* 55 | Create the name of the service account to use 56 | */}} 57 | {{- define "entry.serviceAccountName" -}} 58 | {{- if .Values.serviceAccount.create -}} 59 | {{ default (include "entry.fullname" .) 
.Values.serviceAccount.name }} 60 | {{- else -}} 61 | {{ default "default" .Values.serviceAccount.name }} 62 | {{- end -}} 63 | {{- end -}} 64 | -------------------------------------------------------------------------------- /charts/entry/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: "{{ .Chart.Name }}-ssh-key" 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | app: {{ template "entry.fullname" . }} 8 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 9 | release: "{{ .Release.Name }}" 10 | heritage: "{{ .Release.Service }}" 11 | data: 12 | config.php: {{ printf "%s\n" .Values.egg.config | quote }} 29 | 30 | -------------------------------------------------------------------------------- /charts/entry/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "entry.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "entry.labels" . | nindent 4 }} 8 | spec: 9 | replicas: {{ .Values.replicaCount }} 10 | selector: 11 | matchLabels: 12 | {{- include "entry.selectorLabels" . | nindent 6 }} 13 | template: 14 | metadata: 15 | labels: 16 | {{- include "entry.selectorLabels" . | nindent 8 }} 17 | spec: 18 | serviceAccountName: {{ include "entry.serviceAccountName" . 
}} 19 | securityContext: 20 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 21 | containers: 22 | - name: {{ .Chart.Name }} 23 | securityContext: 24 | {{- toYaml .Values.securityContext | nindent 12 }} 25 | image: "{{ .Values.image.repository }}" 26 | imagePullPolicy: {{ .Values.image.pullPolicy }} 27 | ports: 28 | - name: http 29 | containerPort: 80 30 | protocol: TCP 31 | volumeMounts: 32 | - name: ssh-key 33 | mountPath: /var/run/secrets/kubernetes.io/ssh 34 | readOnly: true 35 | - name: entry-config 36 | mountPath: /entry/cfg 37 | readOnly: true 38 | resources: 39 | {{- toYaml .Values.resources | nindent 12 }} 40 | livenessProbe: 41 | httpGet: 42 | path: / 43 | port: http 44 | volumes: 45 | - name: ssh-key 46 | configMap: 47 | name: "{{ .Chart.Name }}-ssh-key" 48 | - name: entry-config 49 | configMap: 50 | name: "{{ .Chart.Name }}-config-egg" 51 | -------------------------------------------------------------------------------- /charts/entry/templates/hpa.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.autoscaling.enabled }} 2 | apiVersion: autoscaling/v2beta1 3 | kind: HorizontalPodAutoscaler 4 | metadata: 5 | name: {{ include "entry.fullname" . }} 6 | namespace: {{ .Values.namespace | quote }} 7 | labels: 8 | {{- include "entry.labels" . | nindent 4 }} 9 | spec: 10 | scaleTargetRef: 11 | apiVersion: apps/v1 12 | kind: Deployment 13 | name: {{ include "entry.fullname" . }} 14 | minReplicas: {{ .Values.autoscaling.minReplicas }} 15 | maxReplicas: {{ .Values.autoscaling.maxReplicas }} 16 | metrics: 17 | {{- with .Values.autoscaling.targetCPUUtilizationPercentage }} 18 | - type: Resource 19 | resource: 20 | name: cpu 21 | targetAverageUtilization: {{ . }} 22 | {{- end }} 23 | {{- with .Values.autoscaling.targetMemoryUtilizationPercentage }} 24 | - type: Resource 25 | resource: 26 | name: memory 27 | targetAverageUtilization: {{ . 
}} 28 | {{- end }} 29 | {{- end }} 30 | -------------------------------------------------------------------------------- /charts/entry/templates/ingress.yaml: -------------------------------------------------------------------------------- 1 | {{- $fullName := include "entry.fullname" . -}} 2 | apiVersion: networking.k8s.io/v1beta1 3 | # apiVersion: extensions/v1beta1 4 | kind: Ingress 5 | metadata: 6 | name: {{ $fullName }} 7 | namespace: {{ .Values.namespace | quote }} 8 | labels: 9 | {{- include "entry.labels" . | nindent 4 }} 10 | spec: 11 | rules: 12 | - host: 13 | http: 14 | paths: 15 | - path: / 16 | backend: 17 | serviceName: {{ $fullName }} 18 | servicePort: 80 19 | 20 | -------------------------------------------------------------------------------- /charts/entry/templates/networkpolicy.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: networking.k8s.io/v1 3 | metadata: 4 | name: {{ template "entry.name" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "entry.labels" . | nindent 4 }} 8 | spec: 9 | podSelector: 10 | matchLabels: 11 | app.kubernetes.io/name: {{ template "entry.name" . }} 12 | policyTypes: 13 | - Egress 14 | egress: 15 | - to: 16 | - ipBlock: 17 | cidr: 0.0.0.0/0 18 | except: 19 | - 169.254.169.254/32 20 | ports: 21 | - protocol: TCP 22 | port: 4444 23 | - to: 24 | - ipBlock: 25 | cidr: 10.0.2.15/32 26 | - to: 27 | - ipBlock: 28 | cidr: {{ .Values.network.svc }} 29 | - to: 30 | - ipBlock: 31 | cidr: {{ .Values.network.pod }} 32 | 33 | --- 
 -------------------------------------------------------------------------------- /charts/entry/templates/secret.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: "{{ .Chart.Name }}-secret-egg" 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | app: {{ template "entry.fullname" . 
}} 8 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 9 | release: "{{ .Release.Name }}" 10 | heritage: "{{ .Release.Service }}" 11 | type: Opaque 12 | data: 13 | EGG: {{ .Values.egg.secret | b64enc | quote }} 14 | 15 | -------------------------------------------------------------------------------- /charts/entry/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "entry.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "entry.labels" . | nindent 4 }} 8 | spec: 9 | ports: 10 | - port: 80 11 | targetPort: http 12 | protocol: TCP 13 | name: http 14 | selector: 15 | {{- include "entry.selectorLabels" . | nindent 4 }} 16 | -------------------------------------------------------------------------------- /charts/entry/templates/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: {{ include "entry.serviceAccountName" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "entry.labels" . | nindent 4 }} 8 | 9 | --- 10 | 11 | apiVersion: rbac.authorization.k8s.io/v1 12 | kind: Role 13 | metadata: 14 | name: secret-reader 15 | namespace: {{ .Values.namespace | quote }} 16 | labels: 17 | {{- include "entry.labels" . | nindent 4 }} 18 | rules: 19 | - apiGroups: [""] 20 | resources: ["secrets"] 21 | verbs: ["get", "watch", "list"] 22 | 23 | --- 24 | 25 | apiVersion: rbac.authorization.k8s.io/v1 26 | kind: RoleBinding 27 | metadata: 28 | name: secret-reader-binding 29 | namespace: {{ .Values.namespace | quote }} 30 | labels: 31 | {{- include "entry.labels" . 
| nindent 4 }} 32 | roleRef: 33 | apiGroup: rbac.authorization.k8s.io 34 | kind: Role 35 | name: secret-reader 36 | subjects: 37 | - kind: ServiceAccount 38 | apiGroup: "" 39 | name: {{ include "entry.serviceAccountName" . }} 40 | -------------------------------------------------------------------------------- /charts/entry/values.yaml: -------------------------------------------------------------------------------- 1 | replicaCount: 1 2 | 3 | autoscaling: 4 | enabled: true 5 | minReplicas: 1 6 | maxReplicas: 10 7 | targetCPUUtilizationPercentage: 50 8 | targetMemoryUtilizationPercentage: 50 9 | 10 | serviceAccount: 11 | create: true 12 | name: egg-thief 13 | 14 | image: 15 | repository: docker.io/nodyd/e20-entry:latest 16 | pullPolicy: Always 17 | 18 | nameOverride: "entry" 19 | fullnameOverride: "entry" 20 | 21 | podSecurityContext: {} 22 | # fsGroup: 2000 23 | 24 | securityContext: {} 25 | # capabilities: 26 | # drop: 27 | # - ALL 28 | # readOnlyRootFilesystem: true 29 | # runAsNonRoot: true 30 | # runAsUser: 1000 31 | 32 | ssh: 33 | private_key: "" 34 | public_key: "" 35 | 36 | egg: 37 | config: "huiiiiiiiiiiiii" 38 | secret: "44444444444" 39 | 40 | namespace: "foo" 41 | 42 | resources: 43 | limits: 44 | cpu: 100m 45 | memory: 256Mi 46 | requests: 47 | cpu: 100m 48 | memory: 256Mi 49 | 50 | 51 | 52 | -------------------------------------------------------------------------------- /charts/honk/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /charts/honk/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: honk 3 | description: A Helm chart for Kubernetes 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed. 13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | version: 0.1.0 18 | 19 | # This is the version number of the application being deployed. This version number should be 20 | # incremented each time you make changes to the application. 21 | appVersion: 1.16.0 22 | -------------------------------------------------------------------------------- /charts/honk/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "honk.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 
11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If release name contains chart name it will be used as a full name. 13 | */}} 14 | {{- define "honk.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 29 | */}} 30 | {{- define "honk.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Common labels 36 | */}} 37 | {{- define "honk.labels" -}} 38 | helm.sh/chart: {{ include "honk.chart" . }} 39 | {{ include "honk.selectorLabels" . }} 40 | {{- if .Chart.AppVersion }} 41 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 42 | {{- end }} 43 | app.kubernetes.io/managed-by: {{ .Release.Service }} 44 | {{- end -}} 45 | 46 | {{/* 47 | Selector labels 48 | */}} 49 | {{- define "honk.selectorLabels" -}} 50 | app.kubernetes.io/name: {{ include "honk.name" . }} 51 | app.kubernetes.io/instance: {{ .Release.Name }} 52 | {{- end -}} 53 | 54 | {{/* 55 | Create the name of the service account to use 56 | */}} 57 | {{- define "honk.serviceAccountName" -}} 58 | {{- if .Values.serviceAccount.create -}} 59 | {{ default (include "honk.fullname" .) 
.Values.serviceAccount.name }} 60 | {{- else -}} 61 | {{ default "default" .Values.serviceAccount.name }} 62 | {{- end -}} 63 | {{- end -}} 64 | -------------------------------------------------------------------------------- /charts/honk/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: "{{ .Chart.Name }}-token" 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | app: {{ template "honk.fullname" . }} 8 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 9 | release: "{{ .Release.Name }}" 10 | heritage: "{{ .Release.Service }}" 11 | data: 12 | EGG: {{ .Values.egg.env | quote }} 13 | -------------------------------------------------------------------------------- /charts/honk/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "honk.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "honk.labels" . | nindent 4 }} 8 | spec: 9 | replicas: {{ .Values.replicaCount }} 10 | selector: 11 | matchLabels: 12 | {{- include "honk.selectorLabels" . | nindent 6 }} 13 | template: 14 | metadata: 15 | labels: 16 | {{- include "honk.selectorLabels" . 
| nindent 8 }} 17 | spec: 18 | securityContext: 19 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 20 | containers: 21 | - name: {{ .Chart.Name }} 22 | securityContext: 23 | {{- toYaml .Values.securityContext | nindent 12 }} 24 | image: "{{ .Values.image.repository }}" 25 | imagePullPolicy: {{ .Values.image.pullPolicy }} 26 | ports: 27 | - name: http 28 | containerPort: 8080 29 | protocol: TCP 30 | envFrom: 31 | - configMapRef: 32 | name: "{{ .Chart.Name }}-token" 33 | resources: 34 | {{- toYaml .Values.resources | nindent 12 }} 35 | -------------------------------------------------------------------------------- /charts/honk/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "honk.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "honk.labels" . | nindent 4 }} 8 | spec: 9 | ports: 10 | - port: 80 11 | targetPort: http 12 | protocol: TCP 13 | name: http 14 | selector: 15 | {{- include "honk.selectorLabels" . 
| nindent 4 }} 16 | -------------------------------------------------------------------------------- /charts/honk/values.yaml: -------------------------------------------------------------------------------- 1 | replicaCount: 1 2 | 3 | image: 4 | repository: docker.io/nodyd/e20-egg:latest 5 | pullPolicy: Always 6 | 7 | 8 | podSecurityContext: {} 9 | # fsGroup: 2000 10 | 11 | securityContext: {} 12 | # capabilities: 13 | # drop: 14 | # - ALL 15 | # readOnlyRootFilesystem: true 16 | # runAsNonRoot: true 17 | # runAsUser: 1000 18 | 19 | egg: 20 | env: "11111111" 21 | 22 | namespace: "bar" 23 | 24 | 25 | resources: 26 | limits: 27 | cpu: 100m 28 | memory: 128Mi 29 | requests: 30 | cpu: 100m 31 | memory: 128Mi 32 | -------------------------------------------------------------------------------- /charts/joker/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /charts/joker/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: joker 3 | description: A Helm chart for Kubernetes 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. 
They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed. 13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | version: 0.1.0 18 | 19 | # This is the version number of the application being deployed. This version number should be 20 | # incremented each time you make changes to the application. 21 | appVersion: 1.16.0 22 | -------------------------------------------------------------------------------- /charts/joker/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "joker.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If release name contains chart name it will be used as a full name. 13 | */}} 14 | {{- define "joker.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 
29 | */}} 30 | {{- define "joker.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Common labels 36 | */}} 37 | {{- define "joker.labels" -}} 38 | helm.sh/chart: {{ include "joker.chart" . }} 39 | {{ include "joker.selectorLabels" . }} 40 | {{- if .Chart.AppVersion }} 41 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 42 | {{- end }} 43 | app.kubernetes.io/managed-by: {{ .Release.Service }} 44 | {{- end -}} 45 | 46 | {{/* 47 | Selector labels 48 | */}} 49 | {{- define "joker.selectorLabels" -}} 50 | app.kubernetes.io/name: {{ include "joker.name" . }} 51 | app.kubernetes.io/instance: {{ .Release.Name }} 52 | {{- end -}} 53 | 54 | {{/* 55 | Create the name of the service account to use 56 | */}} 57 | {{- define "joker.serviceAccountName" -}} 58 | {{- if .Values.serviceAccount.create -}} 59 | {{ default (include "joker.fullname" .) .Values.serviceAccount.name }} 60 | {{- else -}} 61 | {{ default "default" .Values.serviceAccount.name }} 62 | {{- end -}} 63 | {{- end -}} 64 | -------------------------------------------------------------------------------- /charts/joker/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: "{{ .Chart.Name }}-egg" 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | app: {{ template "joker.fullname" . }} 8 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 9 | release: "{{ .Release.Name }}" 10 | heritage: "{{ .Release.Service }}" 11 | data: 12 | EGG: {{ .Values.egg.env | quote }} 13 | -------------------------------------------------------------------------------- /charts/joker/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "joker.fullname" . 
}} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "joker.labels" . | nindent 4 }} 8 | spec: 9 | replicas: {{ .Values.replicaCount }} 10 | selector: 11 | matchLabels: 12 | {{- include "joker.selectorLabels" . | nindent 6 }} 13 | template: 14 | metadata: 15 | labels: 16 | {{- include "joker.selectorLabels" . | nindent 8 }} 17 | spec: 18 | securityContext: 19 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 20 | containers: 21 | - name: {{ .Chart.Name }} 22 | securityContext: 23 | {{- toYaml .Values.securityContext | nindent 12 }} 24 | image: "{{ .Values.image.repository }}" 25 | imagePullPolicy: {{ .Values.image.pullPolicy }} 26 | ports: 27 | - name: http 28 | containerPort: 8080 29 | protocol: TCP 30 | envFrom: 31 | - configMapRef: 32 | name: "{{ .Chart.Name }}-egg" 33 | resources: 34 | {{- toYaml .Values.resources | nindent 12 }} 35 | -------------------------------------------------------------------------------- /charts/joker/templates/ingress.yaml: -------------------------------------------------------------------------------- 1 | {{- $fullName := include "joker.fullname" . -}} 2 | apiVersion: networking.k8s.io/v1beta1 3 | # apiVersion: extensions/v1beta1 4 | kind: Ingress 5 | metadata: 6 | name: {{ $fullName }} 7 | namespace: {{ .Values.namespace | quote }} 8 | labels: 9 | {{- include "joker.labels" . 
| nindent 4 }} 10 | spec: 11 | rules: 12 | - host: "joker" 13 | http: 14 | paths: 15 | - path: / 16 | backend: 17 | serviceName: {{ $fullName }} 18 | servicePort: 80 19 | - host: "joker.{{ .Values.dnsZone }}" 20 | http: 21 | paths: 22 | - path: / 23 | backend: 24 | serviceName: {{ $fullName }} 25 | servicePort: 80 26 | 27 | -------------------------------------------------------------------------------- /charts/joker/templates/networkpolicy.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: networking.k8s.io/v1 3 | metadata: 4 | name: {{ template "joker.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | app: {{ template "joker.name" . }} 8 | chart: {{ template "joker.chart" . }} 9 | release: {{ .Release.Name }} 10 | heritage: {{ .Release.Service }} 11 | spec: 12 | podSelector: 13 | matchLabels: 14 | app.kubernetes.io/name: {{ template "joker.name" . }} 15 | policyTypes: 16 | - Ingress 17 | ingress: 18 | - from: 19 | - namespaceSelector: 20 | matchLabels: {} 21 | podSelector: 22 | matchLabels: 23 | app: traefik 24 | ports: 25 | - protocol: TCP 26 | port: 8080 27 | -------------------------------------------------------------------------------- /charts/joker/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "joker.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "joker.labels" . | nindent 4 }} 8 | spec: 9 | ports: 10 | - port: 80 11 | targetPort: http 12 | protocol: TCP 13 | name: http 14 | selector: 15 | {{- include "joker.selectorLabels" . 
| nindent 4 }} 16 | -------------------------------------------------------------------------------- /charts/joker/values.yaml: -------------------------------------------------------------------------------- 1 | replicaCount: 1 2 | 3 | image: 4 | repository: docker.io/nodyd/e20-egg:latest 5 | pullPolicy: Always 6 | 7 | 8 | podSecurityContext: {} 9 | # fsGroup: 2000 10 | 11 | securityContext: {} 12 | # capabilities: 13 | # drop: 14 | # - ALL 15 | # readOnlyRootFilesystem: true 16 | # runAsNonRoot: true 17 | # runAsUser: 1000 18 | 19 | dnsZone: "k8s.home" 20 | 21 | egg: 22 | env: "11111111" 23 | 24 | namespace: "foo" 25 | 26 | 27 | resources: 28 | limits: 29 | cpu: 100m 30 | memory: 128Mi 31 | requests: 32 | cpu: 100m 33 | memory: 128Mi 34 | -------------------------------------------------------------------------------- /charts/ssh/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /charts/ssh/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: ssh 3 | description: A Helm chart for Kubernetes 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. 
They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed. 13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | version: 0.1.0 18 | 19 | # This is the version number of the application being deployed. This version number should be 20 | # incremented each time you make changes to the application. 21 | appVersion: 1.16.0 22 | -------------------------------------------------------------------------------- /charts/ssh/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "ssh.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If release name contains chart name it will be used as a full name. 13 | */}} 14 | {{- define "ssh.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 
29 | */}} 30 | {{- define "ssh.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Common labels 36 | */}} 37 | {{- define "ssh.labels" -}} 38 | helm.sh/chart: {{ include "ssh.chart" . }} 39 | {{ include "ssh.selectorLabels" . }} 40 | {{- if .Chart.AppVersion }} 41 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 42 | {{- end }} 43 | app.kubernetes.io/managed-by: {{ .Release.Service }} 44 | {{- end -}} 45 | 46 | {{/* 47 | Selector labels 48 | */}} 49 | {{- define "ssh.selectorLabels" -}} 50 | app.kubernetes.io/name: {{ include "ssh.name" . }} 51 | app.kubernetes.io/instance: {{ .Release.Name }} 52 | {{- end -}} 53 | 54 | {{/* 55 | Create the name of the service account to use 56 | */}} 57 | {{- define "ssh.serviceAccountName" -}} 58 | {{- if .Values.serviceAccount.create -}} 59 | {{ default (include "ssh.fullname" .) .Values.serviceAccount.name }} 60 | {{- else -}} 61 | {{ default "default" .Values.serviceAccount.name }} 62 | {{- end -}} 63 | {{- end -}} 64 | -------------------------------------------------------------------------------- /charts/ssh/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: "{{ .Chart.Name }}-authorized-keys" 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | app: {{ template "ssh.fullname" . }} 8 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 9 | release: "{{ .Release.Name }}" 10 | heritage: "{{ .Release.Service }}" 11 | data: 12 | authorized_keys: {{ .Values.ssh.public_key | quote }} 13 | 14 | --- 15 | 16 | apiVersion: v1 17 | kind: ConfigMap 18 | metadata: 19 | name: "{{ .Chart.Name }}-egg" 20 | namespace: {{ .Values.namespace | quote }} 21 | labels: 22 | app: {{ template "ssh.fullname" . 
}} 23 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 24 | release: "{{ .Release.Name }}" 25 | heritage: "{{ .Release.Service }}" 26 | data: 27 | EGG: {{ .Values.egg.env | quote }} 28 | -------------------------------------------------------------------------------- /charts/ssh/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "ssh.fullname" . }} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "ssh.labels" . | nindent 4 }} 8 | spec: 9 | replicas: {{ .Values.replicaCount }} 10 | selector: 11 | matchLabels: 12 | {{- include "ssh.selectorLabels" . | nindent 6 }} 13 | template: 14 | metadata: 15 | labels: 16 | {{- include "ssh.selectorLabels" . | nindent 8 }} 17 | spec: 18 | securityContext: 19 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 20 | containers: 21 | - name: {{ .Chart.Name }} 22 | securityContext: 23 | {{- toYaml .Values.securityContext | nindent 12 }} 24 | image: "{{ .Values.image.repository }}" 25 | imagePullPolicy: {{ .Values.image.pullPolicy }} 26 | ports: 27 | - name: ssh 28 | containerPort: 6667 29 | protocol: TCP 30 | volumeMounts: 31 | - name: ssh-pub 32 | mountPath: /home/user/.ssh/authorized 33 | readOnly: true 34 | - name: ssh-key 35 | mountPath: /home/user/.ssh/key 36 | envFrom: 37 | - configMapRef: 38 | name: "{{ .Chart.Name }}-egg" 39 | resources: 40 | {{- toYaml .Values.resources | nindent 12 }} 41 | volumes: 42 | - name: ssh-pub 43 | configMap: 44 | name: "{{ .Chart.Name }}-authorized-keys" 45 | - name: ssh-key 46 | emptyDir: {} 47 | -------------------------------------------------------------------------------- /charts/ssh/templates/hpa.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.autoscaling.enabled }} 2 | apiVersion: autoscaling/v2beta1 3 | kind: HorizontalPodAutoscaler 4 | metadata: 5 | name: {{ include 
"ssh.fullname" . }} 6 | namespace: {{ .Values.namespace | quote }} 7 | labels: 8 | {{- include "ssh.labels" . | nindent 4 }} 9 | spec: 10 | scaleTargetRef: 11 | apiVersion: apps/v1 12 | kind: Deployment 13 | name: {{ include "ssh.fullname" . }} 14 | minReplicas: {{ .Values.autoscaling.minReplicas }} 15 | maxReplicas: {{ .Values.autoscaling.maxReplicas }} 16 | metrics: 17 | {{- with .Values.autoscaling.targetCPUUtilizationPercentage }} 18 | - type: Resource 19 | resource: 20 | name: cpu 21 | targetAverageUtilization: {{ . }} 22 | {{- end }} 23 | {{- with .Values.autoscaling.targetMemoryUtilizationPercentage }} 24 | - type: Resource 25 | resource: 26 | name: memory 27 | targetAverageUtilization: {{ . }} 28 | {{- end }} 29 | {{- end }} 30 | -------------------------------------------------------------------------------- /charts/ssh/templates/networkpolicy.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: networking.k8s.io/v1 3 | metadata: 4 | name: {{ template "ssh.name" . }}-deny-metadata-server 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "ssh.labels" . | nindent 4 }} 8 | spec: 9 | podSelector: 10 | matchLabels: 11 | app.kubernetes.io/name: {{ template "ssh.name" . }} 12 | policyTypes: 13 | - Egress 14 | egress: 15 | - to: 16 | - ipBlock: 17 | cidr: 0.0.0.0/0 18 | except: 19 | - 169.254.169.254/32 20 | ports: 21 | - protocol: TCP 22 | port: 4444 23 | - to: 24 | - ipBlock: 25 | cidr: 10.0.2.15/32 26 | - to: 27 | - ipBlock: 28 | cidr: {{ .Values.network.svc }} 29 | - to: 30 | - ipBlock: 31 | cidr: {{ .Values.network.pod }} 32 | 33 | -------------------------------------------------------------------------------- /charts/ssh/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "ssh.fullname" . 
}} 5 | namespace: {{ .Values.namespace | quote }} 6 | labels: 7 | {{- include "ssh.labels" . | nindent 4 }} 8 | spec: 9 | ports: 10 | - port: 6667 11 | targetPort: ssh 12 | protocol: TCP 13 | name: ssh 14 | selector: 15 | {{- include "ssh.selectorLabels" . | nindent 4 }} 16 | -------------------------------------------------------------------------------- /charts/ssh/values.yaml: -------------------------------------------------------------------------------- 1 | replicaCount: 1 2 | 3 | autoscaling: 4 | enabled: true 5 | minReplicas: 1 6 | maxReplicas: 10 7 | targetCPUUtilizationPercentage: 50 8 | targetMemoryUtilizationPercentage: 50 9 | 10 | image: 11 | repository: docker.io/nodyd/e20-ssh:latest 12 | pullPolicy: Always 13 | 14 | resources: 15 | limits: 16 | cpu: 100m 17 | memory: 128Mi 18 | requests: 19 | cpu: 100m 20 | memory: 128Mi 21 | 22 | securityContext: 23 | runAsUser: 1000 24 | runAsGroup: 1000 25 | readOnlyRootFilesystem: true 26 | # capabilities: 27 | # drop: 28 | # - ALL 29 | 30 | podSecurityContext: 31 | fsGroup: 1000 32 | 33 | ssh: 34 | public_key: "" 35 | 36 | egg: 37 | env: "" 38 | 39 | namespace: "foo" 40 | 41 | 42 | -------------------------------------------------------------------------------- /config: -------------------------------------------------------------------------------- 1 | # Define tokens for the challenges 2 | 3 | ## DNS zone 4 | DNS_ZONE="k8s.local" 5 | 6 | ## Datadog API key 7 | DATADOG_API_KEY="" 8 | 9 | ## The namespaces for the challenges 10 | NS1="foo" 11 | NS2="bar" 12 | 13 | ## Hosted on AWS!? 
14 | IAAS="Vagrant" 15 | 16 | # K3s configuration for Vagrant 17 | POD_SUBNET="10.1.0.0/16" # <<<---- Has to be changed in calico.yaml 18 | SVC_SUBNET="10.100.0.0/16" 19 | 20 | ## Stored in a Kubernetes Secret, readable via the API -->> `kubectl get secrets -n foo entry-secret-egg -o go-template='{{ index .data "EGG" | base64decode }}'` 21 | EGG1="HUHU-WHAT-AN-AMAZING-K8S-SECRETS-EGG" 22 | 23 | ## Hard coded in Dockerfile of image 'entry' -->> https://github.com/NodyHub/k8s-ctf-rocks/blob/master/docker-images/entry/Dockerfile#L17 24 | EGG2="DEFAULT-ARG-EASTER-EGG" 25 | 26 | ## Hard coded in GitHub Actions -->> `docker pull nodyd/e20-entry && docker image history nodyd/e20-entry --no-trunc|grep /EGG | head -1 | cut -d '|' -f 2 | cut -d ' ' -f 2` 27 | EGG3="WHOOO-A-SECRET-GITHUB-ACTION-BUILDARG-EGG" 28 | 29 | ## "hidden" on 'entry' -->> `cat cfg/config.php` 30 | EGG4="AMAZING-CONFIG-FILE-EGG" 31 | 32 | ## Egg for env of ssh -->> `ssh -i /var/run/secrets/kubernetes.io/ssh/private_key -l user -p 6667 ssh -- cat /proc/1/environ | tr '\0' '\n'|grep 'EGG='` 33 | EGG5="SUPER-SECRET-ENV-EGG" 34 | 35 | ## Egg for webpage -->> `ssh -p 6667 -i /run/secrets/kubernetes.io/ssh/private_key -l user ssh.foo -- curl http://egg` 36 | EGG6="WOOOOOOOT-AN-EGG-FROM-A-HIDDEN-WEBPAGE-EGG" 37 | 38 | ## Egg from ingress `curl -H "Host: joker" http://` 39 | EGG7="NANANA-ACCESSIBLE-ONLY-OVER-INGRESS-LOL-EGG" 40 | 41 | ## Egg from other namespace -->> `curl http://honk.$NS2` 42 | EGG8="HONK-HONK-HONK-HONK-HONK-HONK-HONK-HONK-HONK-HONK-HONK-HONK-HONK-EGG" 43 | 44 | ## Egg from Dockerfile of image 'egg' -->> https://github.com/NodyHub/k8s-ctf-rocks/blob/master/docker-images/egg/Dockerfile#L3 45 | EGG9="OMG-FANCY-DEFAULT-WHO-WOULD-EXPECT-THIS-EGG" 46 | 47 | -------------------------------------------------------------------------------- /create-cluster.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | NAME="easter-ctf" 4 | REGION="eu-central-1" 5 | 
NODE_GROUP_NAME="easter-ctf-nodes" 6 | NODE_SIZE="t2.medium" 7 | SSH_KEY="" 8 | AWS_PROFILE= 9 | 10 | 11 | eksctl create cluster \ 12 | --name $NAME \ 13 | --region $REGION \ 14 | --nodegroup-name $NODE_GROUP_NAME \ 15 | --node-type $NODE_SIZE \ 16 | --nodes 2 \ 17 | --nodes-min 1 \ 18 | --nodes-max 4 \ 19 | --ssh-access \ 20 | --ssh-public-key $SSH_KEY \ 21 | --managed 22 | 23 | exit 0 24 | -------------------------------------------------------------------------------- /datadog-metric-np.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: networking.k8s.io/v1 3 | metadata: 4 | name: allow-same-namespace 5 | namespace: default 6 | spec: 7 | podSelector: 8 | matchLabels: 9 | app.kubernetes.io/name: kube-state-metrics 10 | policyTypes: 11 | - Ingress 12 | ingress: 13 | - from: 14 | - namespaceSelector: 15 | matchLabels: 16 | ns: default 17 | ports: 18 | - protocol: TCP 19 | port: 8080 20 | - from: 21 | - namespaceSelector: 22 | matchLabels: 23 | ns: kube-system 24 | ports: 25 | - protocol: TCP 26 | port: 8080 27 | 28 | -------------------------------------------------------------------------------- /docker-images/egg/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:6.14.2 2 | EXPOSE 8080 3 | ENV EGG="OMG-FANCY-DEFAULT-WHO-WOULD-EXPECT-THIS-EGG" 4 | COPY server.js . 5 | CMD node server.js 6 | -------------------------------------------------------------------------------- /docker-images/egg/README.md: -------------------------------------------------------------------------------- 1 | # Egg 2 | 3 | This is the build context of the container image [nodyd/e20-egg](https://hub.docker.com/r/nodyd/e20-egg). 
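The `EGG` this image answers with comes from the `ENV` default in the Dockerfile above. As a quick illustration of how such a build-time default can be recovered from a Dockerfile alone, here is a minimal sketch in plain shell (no Docker required; the `ENV` line is copied from `docker-images/egg/Dockerfile`, and the temp file stands in for a checked-out copy):

```shell
# Write the relevant Dockerfile line to a temp file for illustration.
df="$(mktemp)"
printf 'FROM node:6.14.2\nENV EGG="OMG-FANCY-DEFAULT-WHO-WOULD-EXPECT-THIS-EGG"\n' > "$df"

# The default is the quoted value of the ENV instruction.
egg=$(grep '^ENV EGG=' "$df" | cut -d '"' -f 2)
echo "$egg"   # -> OMG-FANCY-DEFAULT-WHO-WOULD-EXPECT-THIS-EGG
rm -f "$df"
```

The same idea extends to the published image via `docker image history --no-trunc`, as hinted in `config`.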
4 | -------------------------------------------------------------------------------- /docker-images/egg/server.js: -------------------------------------------------------------------------------- 1 | var os = require('os') 2 | var http = require('http'); 3 | 4 | var handleRequest = function(request, response) { 5 | console.log('Received request for URL: ' + request.url); 6 | response.writeHead(200); 7 | response.end(process.env.EGG+'\n'); 8 | }; 9 | var www = http.createServer(handleRequest); 10 | www.listen(8080); 11 | -------------------------------------------------------------------------------- /docker-images/entry/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM php:7.1-apache 2 | 3 | # Install supporting tools 4 | RUN apt-get update && apt-get install -y \ 5 | ncat \ 6 | nmap \ 7 | ssh-client \ 8 | socat \ 9 | python \ 10 | iputils-ping \ 11 | && rm -rf /var/lib/apt/lists/* \ 12 | && curl -L -o /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \ 13 | && chmod +x /usr/local/bin/kubectl 14 | 15 | 16 | # Come and get me :p 17 | ARG GHA_EASTER_EGG="DEFAULT-ARG-EASTER-EGG" 18 | RUN echo $GHA_EASTER_EGG > /EGG 19 | RUN rm /EGG 20 | 21 | 22 | # Define SSH client defaults 23 | COPY etc /etc 24 | 25 | # Prepare webpage 26 | ENV APACHE_DOCUMENT_ROOT /entry 27 | COPY web /entry 28 | RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf && \ 29 | sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf 30 | 31 | WORKDIR /entry 32 | 33 | # go for it 34 | CMD ["apache2-foreground"] 35 | 36 | EXPOSE 80 37 | -------------------------------------------------------------------------------- /docker-images/entry/README.md: 
-------------------------------------------------------------------------------- 1 | # Entry 2 | 3 | This is the build context of the container image [nodyd/e20-entry](https://hub.docker.com/r/nodyd/e20-entry). 4 | -------------------------------------------------------------------------------- /docker-images/entry/etc/ssh/ssh_config: -------------------------------------------------------------------------------- 1 | Host * 2 | StrictHostKeyChecking no 3 | CheckHostIP no 4 | SendEnv LANG LC_* 5 | HashKnownHosts yes 6 | GSSAPIAuthentication yes 7 | UserKnownHostsFile=/dev/null 8 | -------------------------------------------------------------------------------- /docker-images/entry/web/hacker.css: -------------------------------------------------------------------------------- 1 | .btn-primary { 2 | background-color: #EA2F83; 3 | border-color: #EA2F83; 4 | } 5 | 6 | h1 { 7 | font-size: 4.5em; 8 | } 9 | 10 | h1 > span { 11 | font-size: 0.3em; 12 | } 13 | 14 | h2 { 15 | color: #0597D5; 16 | } 17 | 18 | .alert-info { 19 | border-color: #42DA43; 20 | background-color: #BBFFC2; 21 | } 22 | 23 | .gray { 24 | color: #AAAAAA; 25 | } 26 | 27 | -------------------------------------------------------------------------------- /docker-images/entry/web/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Kubernetes Easter CTF 5 | 9 | 10 | 11 | 12 | 13 |
14 |
15 |
16 | 18 |

Mini Kubernetes Easter CTF by @NodyTweet

19 |
20 |
21 |
22 |

23 | Thanks to everyone who participated in the CTF. 24 | Easter 2020 is over, and with it the game. 25 | It was a lot of fun to see people approaching the Kubernetes cluster and I learned a lot from you. 26 |

27 |

28 | Stay safe and stay home during these terrible times; I wish you all the best! 29 |

30 |
31 | 32 |
33 |

Scoreboard

34 |
35 |
36 |
37 | Following people got all 9 EGGS: 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 |
#NameHandle
1Artem Rootman@ArtemRootman
2Daniel@22dh22
3Yuval Kohavi@KohaviYuval
40x07027@0x07027
5Michael Shen@faiyafrower
6KiuBy@Kiu_By
79 |
80 |
81 |
82 |

Introduction

83 |
84 |
85 |

86 | Since COVID19 started spreading around the world, a lot of people are sitting at home, wondering what to do with their free time. 87 | As a security guy, I am absolutely excited to learn new technologies and techniques. 88 | Due to the lock-down you may have even more time, since common Easter practices, e.g., egg hunting out in nature, are not allowed. 89 | This is a free Kubernetes Easter CTF dedicated to hackers, security engineers, Kubernetes administrators and developers who want to take a look into a Kubernetes cluster and practice and improve their hacking skills. 90 | This CTF is without any benefits, non-commercial and just for fun. 91 | A scoreboard will stay online, but the cluster will shut down after Easter. 92 | In case you find any bugs, the service goes down, or you hit any other issues, feel free to reach out to me and we will figure it out. 93 |

94 |
95 |
96 |

Mission // TL;DR

97 |
98 |
99 |

100 | This is the entry page to a Kubernetes CTF. 101 | All commands submitted via the input box are executed within a container running on AWS EKS. 102 | Your mission is to find 9 EGGs in, or maybe outside, the cluster. 103 | If you want to be listed on the Scoreboard, reach out to me on Twitter @NodyTweet after you have got all EGGs. 104 |

105 |
106 |
107 |

Input

108 |
109 |
110 | 111 |
112 | 114 |
115 | 116 |
117 |
118 |
119 |
120 |

Rules & Facts

121 |
122 |
123 |
124 |
125 |
    126 |
  • The AWS metadata service as well as other AWS services are out of scope.
  • 127 |
  • Container Breakouts are not part of the game.
  • 128 |
  • Outbound communication is permitted only on port 4444.
  • 129 |
  • Do not abuse the cluster for malicious purposes.
  • 130 |
  • Scope:
  • 131 |
      132 |
    • The container and services inside the cluster
    • 133 |
    • The Dockerfiles on GitHub
    • 134 |
    • The container images on DockerHub (prefix e20)
    • 135 |
    136 |
  • An example of how an EGG may look:
    THIS-IS-JUST-AN-EXAMPLE-FOR-AN-EGG-EGG
  • 137 |
  • EGGs are located in common juicy spots of Kubernetes pentests.
  • 138 |
  • Please do not DoS the CTF, ty :)
  • 139 |
  • Enjoy the hunt!
  • 140 |
141 |
142 |
143 |
144 |
145 |
146 | Kudos for supporting me with the CTF to 147 | 148 | Jonas & 149 | Matthias 150 |
151 |
152 |
153 | 154 | 155 | 156 | -------------------------------------------------------------------------------- /docker-images/entry/web/index.php: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Kubernetes Easter CTF 5 | 9 | 10 | 11 | 12 | 13 |
14 |
15 |
16 | 18 |

Mini Kubernetes Easter CTF by @NodyTweet

19 |
20 |
21 |
22 |

23 | Covid19 is neither red nor blue, and this Kubernetes CTF is an Easter challenge just for you! 24 |

25 |
26 | 27 |
28 |

Introduction

29 |
30 |
31 |

32 | Since COVID19 started spreading around the world, a lot of people are sitting at home, wondering what to do with their free time. 33 | As a security guy, I am absolutely excited to learn new technologies and techniques. 34 | Due to the lock-down you may have even more time, since common Easter practices, e.g., egg hunting out in nature, are not allowed. 35 | This is a free Kubernetes Easter CTF dedicated to hackers, security engineers, Kubernetes administrators and developers who want to take a look into a Kubernetes cluster and practice and improve their hacking skills. 36 | This CTF is without any benefits, non-commercial and just for fun. 37 | A scoreboard will stay online, but the cluster will shut down after Easter. 38 | In case you find any bugs, the service goes down, or you hit any other issues, feel free to reach out to me and we will figure it out. 39 |

40 |
41 |
42 |

Mission // TL;DR

43 |
44 |
45 |

46 | This is the entry page to a Kubernetes CTF. 47 | All commands submitted via the input box are executed within a container running on AWS EKS. 48 | Your mission is to find 9 EGGs in, or maybe outside, the cluster. 49 | If you want to be listed on the Scoreboard, reach out to me on Twitter @NodyTweet after you have got all EGGs. 50 |

51 |
52 |
53 |

Input

54 |
55 |
56 | 57 |
58 | 60 |
61 | 62 |
63 |
64 |
65 | 68 |
69 |

Output

70 |
71 |
72 |
73 |
 74 | &1");
 76 | ?>
 77 |         
78 |
79 |
80 | 83 |
84 |

Rules & Facts

85 |
86 |
87 |
88 |
89 |
    90 |
  • The AWS metadata service as well as other AWS services are out of scope.
  • 91 |
  • Container Breakouts are not part of the game.
  • 92 |
  • Outbound communication is permitted only on port 4444.
  • 93 |
  • Do not abuse the cluster for malicious purposes.
  • 94 |
  • Scope:
  • 95 |
      96 |
    • The container and services inside the cluster
    • 97 |
    • The Dockerfiles on GitHub
    • 98 |
    • The container images on DockerHub (prefix e20)
    • 99 |
    100 |
  • An example of how an EGG may look:
    THIS-IS-JUST-AN-EXAMPLE-FOR-AN-EGG-EGG
  • 101 |
  • EGGs are located in common juicy spots of Kubernetes pentests.
  • 102 |
  • Please do not DoS the CTF, ty :)
  • 103 |
  • Enjoy the hunt!
  • 104 |
105 |
106 |
107 |
108 |
109 |

Scoreboard

110 |
111 |
112 |
113 | Following people got all 9 EGGS: 114 | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | 130 | 131 | 132 | 133 | 134 | 135 | 136 | 137 | 138 | 139 | 140 | 141 | 142 | 143 | 144 | 145 | 146 | 147 | 148 | 149 | 150 | 151 | 152 | 153 | 154 |
#NameHandle
1Artem Rootman@ArtemRootman
2Daniel@22dh22
3Yuval Kohavi@KohaviYuval
40x07027@0x07027
5Michael Shen@faiyafrower
6KiuBy@Kiu_By
155 |
156 |
157 |
158 |
159 | Kudos for supporting me with the CTF to 160 | 161 | Jonas & 162 | Matthias 163 |
164 |
165 |
166 | 167 | 168 | 169 | -------------------------------------------------------------------------------- /docker-images/entry/web/three-easter-eggs-svgrepo-com.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /docker-images/ssh/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine:3.11 2 | 3 | RUN apk add openssh nmap curl 4 | 5 | RUN adduser --disabled-password user 6 | 7 | RUN echo "HISTFILE=/dev/null" > /etc/profile.d/history_off 8 | 9 | # Prepare keys and cfg 10 | COPY sshd_config /home/user/.ssh/sshd_config 11 | RUN chmod 655 /home/user/.ssh/sshd_config && \ 12 | chown -R user:user /home/user 13 | 14 | USER user 15 | WORKDIR /home/user 16 | 17 | CMD /usr/bin/ssh-keygen -b 2048 -t rsa -P "" -q -f /home/user/.ssh/key/host_key ; \ 18 | /usr/sbin/sshd -D -f /home/user/.ssh/sshd_config 19 | 20 | EXPOSE 6667 21 | -------------------------------------------------------------------------------- /docker-images/ssh/README.md: -------------------------------------------------------------------------------- 1 | # Ssh 2 | 3 | This is the build context of the container image [nodyd/e20-ssh](https://hub.docker.com/r/nodyd/e20-ssh). 
4 | -------------------------------------------------------------------------------- /docker-images/ssh/sshd_config: -------------------------------------------------------------------------------- 1 | Port 6667 2 | PermitRootLogin no 3 | AuthorizedKeysFile .ssh/authorized/authorized_keys 4 | AllowTcpForwarding yes 5 | PasswordAuthentication yes 6 | StrictModes no 7 | GatewayPorts yes 8 | UseDNS no 9 | HostKey /home/user/.ssh/key/host_key 10 | Subsystem sftp /usr/lib/ssh/sftp-server 11 | -------------------------------------------------------------------------------- /install-with-helm.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Get configuration 4 | . config 5 | 6 | echo [i] Create Namespaces if necessary 7 | [ $(kubectl get ns $NS1 2> /dev/null | wc -l) -eq 0 ] && kubectl create ns $NS1 8 | [ $(kubectl get ns $NS2 2> /dev/null | wc -l) -eq 0 ] && kubectl create ns $NS2 9 | 10 | echo [i] Create ssh keys ... if necessary 11 | [ ! -d ./key ] && mkdir -p ./key 12 | [ ! 
-f ./key/ssh ] && ssh-keygen -f ./key/ssh -t ed25519 -P "" -C "user@ssh" 13 | 14 | echo [i] Deploy entry challenge 15 | helm install entry ./charts/entry \ 16 | --set namespace=$NS1 \ 17 | --set egg.secret=$EGG1 \ 18 | --set egg.config=$EGG4 \ 19 | --set-file ssh.private_key=./key/ssh \ 20 | --set-file ssh.public_key=./key/ssh.pub \ 21 | --set network.pod=$POD_SUBNET \ 22 | --set network.svc=$SVC_SUBNET 23 | 24 | echo [i] Deploy SSH challenge 25 | helm install ssh ./charts/ssh \ 26 | --set namespace=$NS1 \ 27 | --set egg.env=$EGG5 \ 28 | --set-file ssh.public_key=./key/ssh.pub \ 29 | --set network.pod=$POD_SUBNET \ 30 | --set network.svc=$SVC_SUBNET 31 | 32 | echo [i] Deploy egg challenges 33 | helm install egg ./charts/egg \ 34 | --set namespace=$NS1 \ 35 | --set egg.env=$EGG6 36 | 37 | echo [i] Deploy joker challenges 38 | helm install joker ./charts/joker \ 39 | --set namespace=$NS1 \ 40 | --set dnsZone=$DNS_ZONE \ 41 | --set egg.env=$EGG7 42 | 43 | echo [i] Deploy namespace challenge 44 | helm install honk ./charts/honk \ 45 | --set namespace=$NS2 \ 46 | --set egg.env=$EGG8 47 | 48 | echo [i] Label namespaces default and kube-system 49 | kubectl label namespace default ns=default 50 | kubectl label namespace kube-system ns=kube-system 51 | 52 | 53 | # Only for AWS 54 | 55 | if [ $IAAS = "AWS" ] 56 | then 57 | 58 | echo [i] Deploy load balancer traefik 59 | [ $(kubectl get ns traefik 2> /dev/null | wc -l) -eq 0 ] && kubectl create ns traefik 60 | helm install traefik stable/traefik \ 61 | --namespace traefik \ 62 | --set dashboard.enabled=false \ 63 | --set accessLogs.enabled=true \ 64 | --set rbac.enabled=true \ 65 | --set serviceType=LoadBalancer 66 | 67 | echo [i] Install Calico for NetworkPolicies 68 | kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.5/config/v1.5/calico.yaml 69 | 70 | echo [i] Install datadog 71 | helm install datadog stable/datadog \ 72 | --set datadog.site=datadoghq.eu \ 73 | --set 
token=whateverwhateverwhateverwhateverwhateverwhateverwhateverwhateverwhatever\ 74 | --set datadog.apiKey=$DATADOG_API_KEY 75 | kubectl apply -f datadog-metric-np.yaml 76 | 77 | fi # end AWS 78 | 79 | 80 | echo [i] Done! 81 | 82 | exit 0 83 | -------------------------------------------------------------------------------- /k8s-easter-ctf.drawio: -------------------------------------------------------------------------------- 1 | 7Vxbc5s4FP41fowHxMX40XHitrPbTmfTnbZPHQIy1gaQB+TE2V+/EnckZa0kXGzXfjE6gBDnfDr6zpHExFhG+w+Ju918xj4MJ0Dz9xPjZgIA0GyD/jHJcy7RdUPLJUGC/EJWC+7Qv7AQlpftkA/T1oUE45CgbVvo4TiGHmnJ3CTBT+3L1jhsP3XrBlAQ3HluKEq/I59scqljabX8I0TBpnyyrhVnIre8uBCkG9fHTw2RcTsxlgnGJD+K9ksYMu2VesnvW71wtmpYAmOickOikb+j70+fn6NPWrAgePvtr/lVqeZHN9wVb1y0ljyXKkjwLvYhq0WbGNdPG0Tg3db12NknanUq25AopCWdHhbVwYTA/YsN1avXp8CBOIIkeaaXFDeYZtGoAjPALtr0VBvAKNW8aSp/5hSGL4weVHXXeqEHhWpeoybwGjXpw6jJ4tQ00wQ16XOZmmytJzVZxwcm4LS1ZOhzUUtAoiVb70lJ5vEpac7pSDOnwBLVZEnUNNemVk+KMg4rivrULTtEUebGm2phCkHUjy9CFMRURittSP9072H4FaeIIMzO3mNCcEQvCNmJa9d7CDIjLHGIk+xZxjr70Uuyhy3SbT7cMAu5ZWGN9sxs10V7bjaEsHFqwRQBVp4fa1NER6o1ouZNph59Ilj5LnHpH5On9B97pDzUNWAyzbnJw1WAyGZ3Tx2RM93GQfmIuqFcGzsAxeywe5EBwurLuczOEA7GQTik2ENueBVBH7lX1HVT77XKUJGf+JWd+NUqhDjAv3zsPcCkwksHjtTm8OCIjtRyJHjoa0x2DuMBxv6CcUBa8kI3TZHXBgXcI/KjcfyTGZD6s7x0sy/smRWey0JMG/+jWWjcxYr1bVmpvC9vHPQFuskZgL4A3iUe/J83L0YQ4iYBJIeGY9Ggih04gaFL0GO7uTIrFk/4ihF9kZqeaG282BrnGPLXLO5q8la+Io7nWLyHyfUgVJSBqnrtt+NMgfoJHb6BsS1rV9ZS63pi3TD3ULggjxoeJrV7uF7jmBQRECWctdtAEe3AqxDds/anngvp/x+7e5jEkMB0mj4GHTFKwHVxY6bk8s3eaLf+2/ZxS7GP65rcpAN1cp1jjuacp4PK3dwet5vrCnSzASvfTTdthp62GfoahWGDLgBnZt2yTp2SBD9AGZFoe4prbappVJdLbcqSKEuNOQ+wzE7omRhw0rlUmlXBXzl/oeJZdjc9L6lE52TA4a7NnFtJwaJ9wDJC04faTzFKw14z+ZQd3Gyx34HTsninZYk81ZT0AKMvnqorBHgXIB0jkKy2L9PF0W9YIL0qnTJM0slWSDo5EiX1l3PS7Ut3O8XuxtN6wxzZb5fjyAVIpwUkS+f8ti0mJoYFkkLUcgHSEQKJmyqo0uKjAUmBSR4Mf6tQto5ef7aCV3kom5W+wgTRV2FJilfGt4fjVkNui2HiVoubYqzmyU4uOaXAfgaFSJluqVMsPyd18uWFdAstvB1qB1MpZWR/GJPmmJg8n4SpwkyNatbuvTBqYFtvYXsK+nWA3aHypVFrIE/JB3yO/TZU8qNrxfeHQmU
X80VHOpg21hONMphyBNycnajjmh+GyIW4Hx9xt7mlCoYlLl4ZlLiXFXfka5itWmMXNdq7qFl7LgyMSc7KrMthcmZfyFkHPs7oYkq1hibzT51Cs8B6Bc68vtFCB9WlFvnQcUHne9EJBHR+caNyXF1M2AJuLOB16EkQi5v1rmbBm4slZQuUQW8Djmzy2g4JYynokR4G7PDLXa7Dezcpz7IFZ/UFo2vWLqnj8WhWzMFtcPwgaIq+M2mro83yYhxDjhIWImE5EL9UMUK+zx4j1X97qXAfZMoQTSAjU/1ZQJwGTdPN+RqAz00aztgGEPM5MAjO1wDCzKRkjntYA4ipi3+oYpPzNYGQLJLs2hjWBGJqICOyZ2wCjR8H1ILq3kxQPqxhApK4cI3OeDC2rHZqDZhgZCPIwkeBaN5kmw+oegvSDoF2VXQXJdr5fvuFcE1Gtx6/rQqIC8qq7bDDWE8Mr1Stxwb839l25ui2UwrxPnyklyw8to0ppUffqOLjlwwn3IziUfrqkfha4HDTGJYhmlwf1OTdrv9w2pk6Z26fTRZZPU8nh8AwaTqDCyzenqbjuNnQaTpTYdXJye3FBAf3YkI3pei9AhpLCq3YrqcVuLoNgkO7cm17tbLtjrwUv+NGQipk29n7c1Ky1R5KnCLL4vzOnEKyUHVgTqGwJuLSj4fpx5JtS4P2Y0sMry9YGAYLBp/xHdunW7I48YKFIbBgalzGZ3S/cI6fYTkRLNi8XxCzf8NiQSEgvWChFyzwaw4MR0zHD4sFhd2pFyz0ggWbw4LsUzzDYuEcv810GliY8dN0MzFrOSwWLjHlaGMEv3BBGzmOAEpzFn+nbCmDUiKoBM4uChcewUlH0JCDroFKvCMhiuGy+shsV/O73DcnqrVtrU+dihZ7w9cUaLH+1myeCK4/2Wvc/gc= -------------------------------------------------------------------------------- /k8s-easter-ctf.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/NodyHub/k8s-ctf-rocks/463816d902f4f3cfd386af4a1f38da6ffc9cc330/k8s-easter-ctf.png -------------------------------------------------------------------------------- /key/ssh: -------------------------------------------------------------------------------- 1 | -----BEGIN OPENSSH PRIVATE KEY----- 2 | b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW 3 | QyNTUxOQAAACA2Uf4tsbwcfo1qohqHUr+ukkjNch3XiMF9xb/T6G3XpQAAAJAU87NbFPOz 4 | WwAAAAtzc2gtZWQyNTUxOQAAACA2Uf4tsbwcfo1qohqHUr+ukkjNch3XiMF9xb/T6G3XpQ 5 | AAAEB9YiQJviAkbpycQ/SjuGCGcmbaV4evpTu59IeoLoNsLTZR/i2xvBx+jWqiGodSv66S 6 | SM1yHdeIwX3Fv9PobdelAAAACHVzZXJAc3NoAQIDBAU= 7 | -----END OPENSSH PRIVATE KEY----- 8 | -------------------------------------------------------------------------------- /key/ssh.pub: 
-------------------------------------------------------------------------------- 1 | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDZR/i2xvBx+jWqiGodSv66SSM1yHdeIwX3Fv9Pobdel user@ssh 2 | -------------------------------------------------------------------------------- /uninstall-with-helm.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Get configuration 4 | . config 5 | 6 | echo [i] Remove entry challenge 7 | helm uninstall entry --namespace $NS1 8 | 9 | echo [i] Remove SSH challenge 10 | helm uninstall ssh --namespace $NS1 11 | 12 | echo [i] Remove egg challenges 13 | helm uninstall egg --namespace $NS1 14 | 15 | echo [i] Remove joker challenges 16 | helm uninstall joker --namespace $NS1 17 | 18 | echo [i] Remove namespace challenge 19 | helm uninstall honk --namespace $NS2 20 | 21 | echo [i] Remove Datadog 22 | helm uninstall datadog 23 | 24 | echo [i] Remove Calico 25 | kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.5/config/v1.5/calico.yaml 26 | 27 | echo [i] Done! 28 | 29 | exit 0 30 | --------------------------------------------------------------------------------
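The EGG1 hint in `config` pipes the Secret value through `base64decode` because the Kubernetes API returns Secret data base64-encoded. A minimal local round-trip of just that decoding step (the value is copied from `config`; no cluster required):

```shell
# Encode the way the API stores it, then decode the way the go-template does.
secret="HUHU-WHAT-AN-AMAZING-K8S-SECRETS-EGG"
encoded=$(printf '%s' "$secret" | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # -> HUHU-WHAT-AN-AMAZING-K8S-SECRETS-EGG
```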