├── Dockerfile.iputil ├── Dockerfile.sshd ├── Dockerfile.sudo ├── README.md ├── config-idm ├── group └── passwd ├── config-sshd ├── sshd └── sshd_config ├── init.sh ├── job-create-user-home.yaml ├── nfs-provisioner ├── deploy.yaml ├── psp.yaml ├── pvc.yaml ├── role.yaml ├── rolebinding.yaml ├── service.yaml ├── serviceaccount.yaml └── storageclass.yaml ├── pvc-home.yaml ├── script-session-gateway ├── create-users.sh ├── gateway-entrypoint.sh └── session-spawner.sh ├── script-session-host └── add-session-user.sh ├── secret-config-idm-shadow └── shadow ├── secret-config-sshd-host-keys ├── ssh_host_dsa_key ├── ssh_host_dsa_key.pub ├── ssh_host_ecdsa_key ├── ssh_host_ecdsa_key.pub ├── ssh_host_ed25519_key ├── ssh_host_ed25519_key.pub ├── ssh_host_rsa_key └── ssh_host_rsa_key.pub ├── session-gateway ├── deploy.yaml ├── role.yaml ├── rolebinding.yaml ├── service.yaml └── serviceaccount.yaml ├── template-session-host ├── restricted.json └── unrestricted.json └── users ├── demo.yaml └── test.yaml /Dockerfile.iputil: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update \ 4 | && apt-get -y install \ 5 | curl \ 6 | iproute2 \ 7 | iputils-ping \ 8 | netcat 9 | -------------------------------------------------------------------------------- /Dockerfile.sshd: -------------------------------------------------------------------------------- 1 | FROM alpine:3.7 2 | 3 | ARG KUBECTL_VERSION=v1.9.0 4 | 5 | EXPOSE 22 6 | 7 | RUN apk add --no-cache \ 8 | bash \ 9 | ca-certificates \ 10 | curl \ 11 | linux-pam \ 12 | openssl \ 13 | openssh-server-pam \ 14 | && curl -o /usr/local/bin/kubectl \ 15 | https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl \ 16 | && chmod +x /usr/local/bin/kubectl \ 17 | && mkdir -p /etc/ssh/keys 18 | 19 | CMD ["/usr/sbin/sshd", "-D"] 20 | -------------------------------------------------------------------------------- /Dockerfile.sudo:
-------------------------------------------------------------------------------- 1 | FROM centos:latest 2 | 3 | RUN yum -y update \ 4 | && yum -y install \ 5 | sudo 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # PoC - SSHD K8s Session Gateway 2 | 3 | 4 | ## Overview 5 | This project is a small proof-of-concept demo that provisions an SSH server and, for each connecting user, spawns a pod running as their associated `uid` with their home directory mounted with the correct permissions. 6 | 7 | Behind the scenes, when a user successfully connects and authenticates, sshd matches the user and immediately executes a command via the `ForceCommand` config option. This command calls the [`session-spawner.sh`](script-session-gateway/session-spawner.sh) script with some optional parameters. 8 | 9 | The session-spawner script queries Kubernetes to check whether the user's pod is already running, and if so attaches to it. If the pod is not present, the script fetches the user's profile information stored in a ConfigMap (`user-$USER`). This ConfigMap contains the user's uid, gid, home directory, and default shell. 10 | 11 | It then takes one of several [pre-configured deployment JSON templates](template-session-host), stored as another series of ConfigMaps, and modifies it with the user's information to generate a `kubectl run` command, flagged with `-i` and `-t` to give the session a tty and make it interactive. 12 | 13 | These deployment templates strip the user session container down to a much smaller subset of Linux capabilities, configure the `securityContext` to run as the user's specific `uid`, and add the `gid` to `supplementalGroups` to ensure the home directory is mounted with the correct `uid` and `gid`. This measure is needed until the [`runAsGroup` feature](https://github.com/kubernetes/kubernetes/pull/52077) is implemented.
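As a rough illustration of the mechanism just described, the generated override amounts to something like the sketch below. The values here are placeholders standing in for the `__UID__`/`__GID__` substitutions made by `session-spawner.sh`; the real `restricted.json`/`unrestricted.json` templates and their exact capability sets are not reproduced in this README.

```yaml
# Illustrative sketch only: uid/gid 1110 stand in for the values
# substituted from the user's ConfigMap by session-spawner.sh.
spec:
  securityContext:
    runAsUser: 1110             # run the session process as the user's uid
    supplementalGroups: [1110]  # the user's gid, so the mounted home dir is accessible
  containers:
  - name: session-host
    securityContext:
      capabilities:
        drop: ["ALL"]           # placeholder; the real templates retain a small subset
```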
14 | 15 | Before the user's container spins up, an init container is executed using the same image. This init container executes the [`add-session-user.sh`](script-session-host/add-session-user.sh) script, which copies the container's `/etc/passwd` and `/etc/group` files to a shared `emptyDir` volume and appends an entry for the user's `uid` and `gid` to the copies. 16 | 17 | When the session-host container starts, the copied `passwd` and `group` files are mounted read-only into the new container, allowing it to start as that particular user without errors or a prompt like `I have no name!@d969c8e14f66:/$`. 18 | 19 | In addition to the `passwd` and `group` files, the home directory PVC is attached as a volume, with the user's home directory (their username) mounted via `subPath`. This gives the connecting user a near-*native* ssh experience while running within Kubernetes. 20 | 21 | 22 | ## Usage 23 | 24 | From a Linux or macOS host with minikube installed, launch the `./init.sh` script in the root of the project directory. It walks through spinning up minikube and provisioning everything needed to test. 25 | 26 | Once the script is done, you can ssh in as one of two users, `demo` or `test`; their passwords are the same as their usernames. The demo user launches an Ubuntu-based container with some additional network tools. The test user launches a CentOS-based container with elevated privileges and sudo capabilities. **NOTE:** Enabling sudo in a container should not be done in any production capacity; it is here purely as a PoC. 27 | 28 | The SSH service endpoint is made available from the host at `192.168.99.100:32222`, e.g. 29 | ``` 30 | # Note: The 'Could not chdir to home directory' error is expected.
31 | muninn:~$ ssh demo@192.168.99.100 -p 32222 32 | Password: 33 | Could not chdir to home directory : No such file or directory 34 | [Wed Jan 17 19:29:58 UTC 2018][INFO] Starting session instance with image ubuntu-ip:latest rm: false 35 | If you don't see a command prompt, try pressing enter. 36 | demo@session-demo-7988fb5967-mz8lq:/$ 37 | ``` 38 | 39 | By default, user pods are not cleaned up on exit and will keep running. Note, however, that if the user ends the session by typing `exit`, the pod will restart; disconnecting the SSH session or simply closing the window avoids this. 40 | 41 | #### Terminating a Running Session 42 | 43 | To delete the session container, or to create an ephemeral one, `rm=true` must be passed as an environment variable through ssh. 44 | ``` 45 | # Note: The 'Could not chdir to home directory' error is expected. 46 | muninn:~ $ rm=true ssh demo@192.168.99.100 -p 32222 -o SendEnv=rm 47 | Password: 48 | Could not chdir to home directory : No such file or directory 49 | [Wed Jan 17 19:34:04 UTC 2018][INFO] Deleting previous instance.. 50 | deployment "session-demo" deleted 51 | [Wed Jan 17 19:34:08 UTC 2018][INFO] Waiting for Pod to terminate... 52 | [Wed Jan 17 19:34:13 UTC 2018][INFO] Waiting for Pod to terminate... 53 | [Wed Jan 17 19:34:18 UTC 2018][INFO] Waiting for Pod to terminate... 54 | [Wed Jan 17 19:34:23 UTC 2018][INFO] Waiting for Pod to terminate... 55 | [Wed Jan 17 19:34:29 UTC 2018][INFO] Waiting for Pod to terminate... 56 | [Wed Jan 17 19:34:34 UTC 2018][INFO] Waiting for Pod to terminate... 57 | [Wed Jan 17 19:34:39 UTC 2018][INFO] Waiting for Pod to terminate... 58 | [Wed Jan 17 19:34:45 UTC 2018][INFO] Starting session instance session-demo-7988fb5967-mz8lq with image ubuntu-ip:latest rm: true 59 | If you don't see a command prompt, try pressing enter.
60 | demo@session-demo-7988fb5967-txmbc:/$ 61 | ``` 62 | 63 | ### Verifying home directory permissions 64 | 65 | To verify the home directory file permissions, simply ssh in and touch a file in the user home directory: 66 | ``` 67 | demo@session-demo-7988fb5967-txmbc:~$ cd ~/ 68 | demo@session-demo-7988fb5967-txmbc:~$ pwd 69 | /home/demo 70 | demo@session-demo-7988fb5967-txmbc:~$ touch test 71 | demo@session-demo-7988fb5967-txmbc:~$ ls -lah 72 | total 12K 73 | drwxr-sr-x 2 demo demo 4.0K Jan 17 19:37 . 74 | drwxr-xr-x 1 root root 4.0K Jan 17 19:34 .. 75 | -rw------- 1 demo demo 74 Jan 17 19:33 .bash_history 76 | -rw-r--r-- 1 demo demo 0 Jan 17 19:37 test 77 | demo@session-demo-7988fb5967-txmbc:~$ 78 | ``` 79 | -------------------------------------------------------------------------------- /config-idm/group: -------------------------------------------------------------------------------- 1 | root:x:0:root 2 | bin:x:1:root,bin,daemon 3 | daemon:x:2:root,bin,daemon 4 | sys:x:3:root,bin,adm 5 | adm:x:4:root,adm,daemon 6 | tty:x:5: 7 | disk:x:6:root,adm 8 | lp:x:7:lp 9 | mem:x:8: 10 | kmem:x:9: 11 | wheel:x:10:root 12 | floppy:x:11:root 13 | mail:x:12:mail 14 | news:x:13:news 15 | uucp:x:14:uucp 16 | man:x:15:man 17 | cron:x:16:cron 18 | console:x:17: 19 | audio:x:18: 20 | cdrom:x:19: 21 | dialout:x:20:root 22 | ftp:x:21: 23 | sshd:x:22: 24 | input:x:23: 25 | at:x:25:at 26 | tape:x:26:root 27 | video:x:27:root 28 | netdev:x:28: 29 | readproc:x:30: 30 | squid:x:31:squid 31 | xfs:x:33:xfs 32 | kvm:x:34:kvm 33 | games:x:35: 34 | shadow:x:42: 35 | postgres:x:70: 36 | cdrw:x:80: 37 | usb:x:85: 38 | vpopmail:x:89: 39 | users:x:100:games 40 | ntp:x:123: 41 | nofiles:x:200: 42 | smmsp:x:209:smmsp 43 | locate:x:245: 44 | abuild:x:300: 45 | utmp:x:406: 46 | ping:x:999: 47 | nogroup:x:65533: 48 | nobody:x:65534: 49 | demo:x:1110: 50 | test:x:1111: 51 | -------------------------------------------------------------------------------- /config-idm/passwd: 
-------------------------------------------------------------------------------- 1 | root:x:0:0:root:/root:/bin/ash 2 | bin:x:1:1:bin:/bin:/sbin/nologin 3 | daemon:x:2:2:daemon:/sbin:/sbin/nologin 4 | adm:x:3:4:adm:/var/adm:/sbin/nologin 5 | lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin 6 | sync:x:5:0:sync:/sbin:/bin/sync 7 | shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown 8 | halt:x:7:0:halt:/sbin:/sbin/halt 9 | mail:x:8:12:mail:/var/spool/mail:/sbin/nologin 10 | news:x:9:13:news:/usr/lib/news:/sbin/nologin 11 | uucp:x:10:14:uucp:/var/spool/uucppublic:/sbin/nologin 12 | operator:x:11:0:operator:/root:/bin/sh 13 | man:x:13:15:man:/usr/man:/sbin/nologin 14 | postmaster:x:14:12:postmaster:/var/spool/mail:/sbin/nologin 15 | cron:x:16:16:cron:/var/spool/cron:/sbin/nologin 16 | ftp:x:21:21::/var/lib/ftp:/sbin/nologin 17 | sshd:x:22:22:sshd:/dev/null:/sbin/nologin 18 | at:x:25:25:at:/var/spool/cron/atjobs:/sbin/nologin 19 | squid:x:31:31:Squid:/var/cache/squid:/sbin/nologin 20 | xfs:x:33:33:X Font Server:/etc/X11/fs:/sbin/nologin 21 | games:x:35:35:games:/usr/games:/sbin/nologin 22 | postgres:x:70:70::/var/lib/postgresql:/bin/sh 23 | cyrus:x:85:12::/usr/cyrus:/sbin/nologin 24 | vpopmail:x:89:89::/var/vpopmail:/sbin/nologin 25 | ntp:x:123:123:NTP:/var/empty:/sbin/nologin 26 | smmsp:x:209:209:smmsp:/var/spool/mqueue:/sbin/nologin 27 | guest:x:405:100:guest:/dev/null:/sbin/nologin 28 | nobody:x:65534:65534:nobody:/:/sbin/nologin 29 | demo:x:1110:1110:Demo User,,,:: 30 | test:x:1111:1111:Test User,,,:: 31 | -------------------------------------------------------------------------------- /config-sshd/sshd: -------------------------------------------------------------------------------- 1 | account include base-account 2 | auth include base-auth 3 | auth required pam_env.so 4 | auth required pam_nologin.so successok 5 | -------------------------------------------------------------------------------- /config-sshd/sshd_config: 
-------------------------------------------------------------------------------- 1 | Port 22 2 | Protocol 2 3 | LoginGraceTime 120 4 | LogLevel INFO 5 | PermitRootLogin no 6 | StrictModes yes 7 | PasswordAuthentication no 8 | ChallengeResponseAuthentication yes 9 | IgnoreRhosts yes 10 | HostbasedAuthentication no 11 | PermitEmptyPasswords no 12 | AcceptEnv LANG LC_* 13 | AcceptEnv rm 14 | HostKey /etc/ssh/keys/ssh_host_rsa_key 15 | HostKey /etc/ssh/keys/ssh_host_dsa_key 16 | HostKey /etc/ssh/keys/ssh_host_ecdsa_key 17 | HostKey /etc/ssh/keys/ssh_host_ed25519_key 18 | UsePAM yes 19 | ForceCommand image=ubuntu-ip:latest /scripts/session-spawner.sh 20 | 21 | Match User test 22 | ForceCommand image=centos-sudo:latest template=unrestricted.json /scripts/session-spawner.sh 23 | -------------------------------------------------------------------------------- /init.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if ! minikube status | grep "cluster: Running"; then 4 | echo "[$(date)][INFO] Starting minikube" 5 | minikube start 6 | else 7 | echo "[$(date)][INFO] Minikube running." 8 | fi 9 | 10 | echo "[$(date)][INFO] Checking for container image presence." 11 | eval "$(minikube docker-env)" 12 | if [ "$(docker images sshd-gateway -q)" == "" ]; then 13 | echo "[$(date)][INFO] Building sshd-gateway container." 14 | docker build -t sshd-gateway -f ./Dockerfile.sshd . 15 | else 16 | echo "[$(date)][INFO] sshd-gateway image found." 17 | fi 18 | if [ "$(docker images centos-sudo -q)" == "" ]; then 19 | echo "[$(date)][INFO] Building centos-sudo container." 20 | docker build -t centos-sudo -f ./Dockerfile.sudo . 21 | else 22 | echo "[$(date)][INFO] Centos-sudo image found." 23 | fi 24 | if [ "$(docker images ubuntu-ip -q)" == "" ]; then 25 | echo "[$(date)][INFO] Building ubuntu-ip container." 26 | docker build -t ubuntu-ip -f ./Dockerfile.iputil . 27 | else 28 | echo "[$(date)][INFO] ubuntu-ip image found." 
29 | fi 30 | eval "$(minikube docker-env -u)" 31 | 32 | echo "[$(date)][INFO] Labeling namespace" 33 | kubectl label ns default --overwrite=true session=true 34 | echo "[$(date)][INFO] Creating/Updating configs and secrets..." 35 | 36 | config_items=( 37 | config-idm 38 | config-sshd 39 | script-session-gateway 40 | script-session-host 41 | template-session-host 42 | ) 43 | config_types=( 44 | config 45 | config 46 | script 47 | script 48 | template 49 | ) 50 | config_length=${#config_items[@]} 51 | for (( i=0; i<${config_length}; i++ )); do 52 | if ! kubectl get cm "${config_items[$i]}" > /dev/null 2>&1; then 53 | kubectl create cm "${config_items[$i]}" --from-file="${config_items[$i]}/" 54 | else 55 | kubectl create --dry-run cm "${config_items[$i]}" -o yaml \ 56 | --from-file="${config_items[$i]}/" | kubectl replace -f - 57 | fi 58 | kubectl label cm "${config_items[$i]}" type="${config_types[$i]}" 59 | done 60 | 61 | secret_items=( 62 | config-idm-shadow 63 | config-sshd-host-keys 64 | ) 65 | secret_types=( 66 | config 67 | config 68 | ) 69 | 70 | secret_length=${#secret_items[@]} 71 | for (( i=0; i<${secret_length}; i++ )); do 72 | if ! kubectl get secret "${secret_items[$i]}" > /dev/null 2>&1; then 73 | kubectl create secret generic "${secret_items[$i]}" --from-file="secret-${secret_items[$i]}/" 74 | else 75 | kubectl create secret generic "${secret_items[$i]}" --dry-run -o yaml \ 76 | --from-file="secret-${secret_items[$i]}/" | kubectl replace -f - 77 | fi 78 | kubectl label secret "${secret_items[$i]}" type="${secret_types[$i]}" 79 | done 80 | 81 | echo "[$(date)][INFO] Creating User Home NFS Server" 82 | kubectl apply -f nfs-provisioner/ 83 | while [ "$(kubectl get deploy nfs-provisioner --no-headers=true | awk '{print $5}')" != "1" ]; do 84 | echo "[$(date)][INFO] Waiting for NFS Server to become ready." 85 | sleep 5 86 | done 87 | 88 | echo "[$(date)][INFO] Creating/Updating Users.." 
89 | kubectl apply -f users/ 90 | 91 | echo "[$(date)][INFO] Creating Home PVC" 92 | kubectl apply -f pvc-home.yaml 93 | 94 | echo "[$(date)][INFO] Preparing User Home" 95 | kubectl apply -f job-create-user-home.yaml 96 | while [ "$(kubectl get job create-user-home --no-headers=true | awk '{print $3}')" != "1" ]; do 97 | echo "[$(date)][INFO] Waiting for user home directories to be created." 98 | sleep 5 99 | done 100 | 101 | echo "[$(date)][INFO] Deploying session-gateway" 102 | kubectl apply -f session-gateway/ 103 | -------------------------------------------------------------------------------- /job-create-user-home.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: batch/v1 2 | kind: Job 3 | metadata: 4 | name: create-user-home 5 | spec: 6 | template: 7 | metadata: 8 | name: create-user-home 9 | spec: 10 | containers: 11 | - name: create-user-home 12 | image: sshd-gateway 13 | imagePullPolicy: Never 14 | command: [ "/scripts/create-users.sh"] 15 | volumeMounts: 16 | - name: user-home 17 | mountPath: "/user_home" 18 | - name: scripts 19 | mountPath: "/scripts" 20 | restartPolicy: OnFailure 21 | volumes: 22 | - name: user-home 23 | persistentVolumeClaim: 24 | claimName: home 25 | - name: scripts 26 | configMap: 27 | name: script-session-gateway 28 | defaultMode: 0755 29 | -------------------------------------------------------------------------------- /nfs-provisioner/deploy.yaml: -------------------------------------------------------------------------------- 1 | kind: Deployment 2 | apiVersion: apps/v1beta2 3 | metadata: 4 | name: nfs-provisioner 5 | labels: 6 | app: nfs-provisioner 7 | spec: 8 | replicas: 1 9 | selector: 10 | matchLabels: 11 | app: nfs-provisioner 12 | strategy: 13 | type: Recreate 14 | template: 15 | metadata: 16 | labels: 17 | app: nfs-provisioner 18 | spec: 19 | serviceAccount: nfs-provisioner 20 | containers: 21 | - name: nfs-provisioner 22 | image: 
quay.io/kubernetes_incubator/nfs-provisioner:v1.0.8 23 | ports: 24 | - name: nfs 25 | containerPort: 2049 26 | - name: mountd 27 | containerPort: 20048 28 | - name: rpcbind 29 | containerPort: 111 30 | - name: rpcbind-udp 31 | containerPort: 111 32 | protocol: UDP 33 | securityContext: 34 | capabilities: 35 | add: 36 | - DAC_READ_SEARCH 37 | - SYS_RESOURCE 38 | args: 39 | - "-provisioner=local.nfs/nfs" 40 | env: 41 | - name: POD_IP 42 | valueFrom: 43 | fieldRef: 44 | fieldPath: status.podIP 45 | - name: SERVICE_NAME 46 | value: nfs-provisioner 47 | - name: POD_NAMESPACE 48 | valueFrom: 49 | fieldRef: 50 | fieldPath: metadata.namespace 51 | imagePullPolicy: "IfNotPresent" 52 | volumeMounts: 53 | - name: export-volume 54 | mountPath: /export 55 | volumes: 56 | - name: export-volume 57 | persistentVolumeClaim: 58 | claimName: exports 59 | -------------------------------------------------------------------------------- /nfs-provisioner/psp.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: PodSecurityPolicy 3 | metadata: 4 | name: nfs-provisioner 5 | spec: 6 | fsGroup: 7 | rule: RunAsAny 8 | allowedCapabilities: 9 | - DAC_READ_SEARCH 10 | - SYS_RESOURCE 11 | runAsUser: 12 | rule: RunAsAny 13 | seLinux: 14 | rule: RunAsAny 15 | supplementalGroups: 16 | rule: RunAsAny 17 | volumes: 18 | - configMap 19 | - downwardAPI 20 | - emptyDir 21 | - persistentVolumeClaim 22 | - secret 23 | - hostPath 24 | -------------------------------------------------------------------------------- /nfs-provisioner/pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: exports 5 | spec: 6 | accessModes: 7 | - ReadWriteMany 8 | resources: 9 | requests: 10 | storage: 10Gi 11 | -------------------------------------------------------------------------------- /nfs-provisioner/role.yaml: 
-------------------------------------------------------------------------------- 1 | kind: Role 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: nfs-provisioner 5 | rules: 6 | - apiGroups: [""] 7 | resources: ["persistentvolumes"] 8 | verbs: ["get", "list", "watch", "create", "delete"] 9 | - apiGroups: [""] 10 | resources: ["persistentvolumeclaims"] 11 | verbs: ["get", "list", "watch", "update"] 12 | - apiGroups: ["storage.k8s.io"] 13 | resources: ["storageclasses"] 14 | verbs: ["get", "list", "watch"] 15 | - apiGroups: [""] 16 | resources: ["events"] 17 | verbs: ["list", "watch", "create", "update", "patch"] 18 | - apiGroups: [""] 19 | resources: ["services", "endpoints"] 20 | verbs: ["get"] 21 | - apiGroups: ["extensions"] 22 | resources: ["podsecuritypolicies"] 23 | resourceNames: ["nfs-provisioner"] 24 | verbs: ["use"] 25 | -------------------------------------------------------------------------------- /nfs-provisioner/rolebinding.yaml: -------------------------------------------------------------------------------- 1 | kind: RoleBinding 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: nfs-provisioner 5 | subjects: 6 | - kind: ServiceAccount 7 | name: nfs-provisioner 8 | roleRef: 9 | kind: Role 10 | name: nfs-provisioner 11 | apiGroup: rbac.authorization.k8s.io 12 | -------------------------------------------------------------------------------- /nfs-provisioner/service.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: nfs-provisioner 5 | labels: 6 | app: nfs-provisioner 7 | spec: 8 | ports: 9 | - name: nfs 10 | port: 2049 11 | - name: mountd 12 | port: 20048 13 | - name: rpcbind 14 | port: 111 15 | - name: rpcbind-udp 16 | port: 111 17 | protocol: UDP 18 | selector: 19 | app: nfs-provisioner 20 | -------------------------------------------------------------------------------- /nfs-provisioner/serviceaccount.yaml:
-------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: nfs-provisioner 5 | -------------------------------------------------------------------------------- /nfs-provisioner/storageclass.yaml: -------------------------------------------------------------------------------- 1 | kind: StorageClass 2 | apiVersion: storage.k8s.io/v1 3 | metadata: 4 | name: nfs 5 | provisioner: local.nfs/nfs 6 | -------------------------------------------------------------------------------- /pvc-home.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: home 5 | spec: 6 | storageClassName: nfs 7 | accessModes: 8 | - ReadWriteMany 9 | resources: 10 | requests: 11 | storage: 10Gi 12 | -------------------------------------------------------------------------------- /script-session-gateway/create-users.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | user_home=${user_home:-/user_home} 4 | 5 | for session_user in $(kubectl get cm -l type=user --output=jsonpath='{.items..metadata.name}'); do 6 | home_name="$(basename "$(kubectl get cm "$session_user" --output=jsonpath='{.data.home}')")" 7 | uid="$(kubectl get cm "$session_user" --output=jsonpath='{.data.uid}')" 8 | gid="$(kubectl get cm "$session_user" --output=jsonpath='{.data.gid}')" 9 | echo "[$(date)][INFO] Creating $user_home/$home_name" 10 | mkdir -p "$user_home/$home_name" 11 | echo "[$(date)][INFO] Chowning $uid:$gid $user_home/$home_name" 12 | chown -R "$uid":"$gid" "$user_home/$home_name" 13 | done 14 | -------------------------------------------------------------------------------- /script-session-gateway/gateway-entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | echo "[$(date)][INFO] Adding Kubernetes Cluster 
environment variables to pam config." 4 | printenv | grep KUBERNETES > /etc/security/pam_env.conf 5 | echo "PATH=$PATH" >> /etc/security/pam_env.conf 6 | 7 | while [ ! -S /syslog/log ] 8 | do 9 | echo "[$(date)][INFO] Waiting for log socket to become available." 10 | sleep 5 11 | done 12 | ln -s /syslog/log /dev/log 13 | echo "[$(date)][INFO] Syslog Socket ready." 14 | echo "[$(date)][INFO] Starting sshd." 15 | exec /usr/sbin/sshd -D 16 | -------------------------------------------------------------------------------- /script-session-gateway/session-spawner.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | export rm=${rm:-false} 4 | export template=${template:-'restricted.json'} 5 | export image=${image:-'ubuntu:latest'} 6 | 7 | exec_session() { 8 | uid=$(kubectl get cm "user-$USER" -o jsonpath='{.data.uid}') 9 | gid=$(kubectl get cm "user-$USER" -o jsonpath='{.data.gid}') 10 | shell=$(kubectl get cm "user-$USER" -o jsonpath='{.data.shell}') 11 | overrides=$( 12 | sed -e "s/__UID__/$uid/g" -e "s/__USER__/$USER/g" \ 13 | -e "s/__GID__/$gid/g" -e "s/__IMAGE__/$image/g" "/templates/$template" 14 | ) 15 | echo "[$(date)][INFO] Starting session instance $pod_name with image $image rm: $rm" 16 | exec kubectl run "session-$USER" --rm="$rm" -i -t \ 17 | --labels=app=session-host,user="$USER" --image="$image" \ 18 | --overrides="$overrides" -- "$shell" 19 | } 20 | 21 | 22 | main() { 23 | pod_name=$( 24 | kubectl get pods -l app=session-host,user="$USER" \ 25 | --output=jsonpath='{.items..metadata.name}' 26 | ) 27 | 28 | if [ "$pod_name" != "" ]; then 29 | if [ "$rm" = "true" ]; then 30 | echo "[$(date)][INFO] Deleting previous instance.." 31 | kubectl delete deploy "session-$USER" 32 | while ! kubectl get pod -l "app=session-host,user=$USER" --no-headers=true 2>&1 \ 33 | | grep -q "No resources found"; do 34 | echo "[$(date)][INFO] Waiting for Pod to terminate..." 
35 | sleep 5 36 | done 37 | exec_session 38 | else 39 | echo "[$(date)][INFO] Attaching to session instance $pod_name" 40 | exec kubectl attach "$pod_name" -c session-host -i -t 41 | fi 42 | elif ! kubectl get cm "user-$USER" > /dev/null 2>&1; then 43 | echo "[$(date)][ERROR] Could not locate user configMap." 44 | exit 1 45 | else 46 | exec_session 47 | fi 48 | } 49 | 50 | main "$@" 51 | -------------------------------------------------------------------------------- /script-session-host/add-session-user.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | export USER_CONFIG_PATH=${USER_CONFIG_PATH:-'/userconfig'} 4 | export USER_DATA_PATH=${USER_DATA_PATH:-'/userdata'} 5 | export USER_SUDO=${USER_SUDO:-false} 6 | 7 | terminate=false 8 | 9 | if [ "x$SESSION_USER" = "x" ]; then 10 | >&2 echo "[$(date)][ERROR] No SESSION_USER specified." 11 | terminate=true 12 | fi 13 | 14 | if [ ! -d "$USER_CONFIG_PATH" ]; then 15 | >&2 echo "[$(date)][ERROR] No User Config directory found at $USER_CONFIG_PATH." 16 | terminate=true 17 | fi 18 | 19 | if [ ! -d "$USER_DATA_PATH" ]; then 20 | >&2 echo "[$(date)][ERROR] No User Data directory found at $USER_DATA_PATH." 21 | terminate=true 22 | fi 23 | 24 | if [ $terminate = true ]; then 25 | exit 1 26 | fi 27 | 28 | if [ ! -f "$USER_CONFIG_PATH/uid" ]; then 29 | >&2 echo "[$(date)][ERROR] uid not found at $USER_CONFIG_PATH/uid." 30 | terminate=true 31 | else 32 | uid=$(cat "$USER_CONFIG_PATH/uid") 33 | fi 34 | 35 | if [ ! -f "$USER_CONFIG_PATH/gid" ]; then 36 | >&2 echo "[$(date)][ERROR] gid not found at $USER_CONFIG_PATH/gid." 37 | terminate=true 38 | else 39 | gid=$(cat "$USER_CONFIG_PATH/gid") 40 | fi 41 | 42 | if [ $terminate = true ]; then 43 | exit 1 44 | fi 45 | 46 | if [ ! -f "$USER_CONFIG_PATH/home" ]; then 47 | >&2 echo "[$(date)][WARNING] User home not specified. Defaulting to /home." 
48 | home="/home" 49 | else 50 | home=$(cat "$USER_CONFIG_PATH/home") 51 | fi 52 | 53 | if [ ! -f "$USER_CONFIG_PATH/shell" ]; then 54 | >&2 echo "[$(date)][WARNING] User shell not specified. Defaulting to /bin/sh." 55 | shell="/bin/sh" 56 | else 57 | shell=$(cat "$USER_CONFIG_PATH/shell") 58 | fi 59 | 60 | cp /etc/passwd "$USER_DATA_PATH/passwd" 61 | cp /etc/group "$USER_DATA_PATH/group" 62 | 63 | if grep -q -E "^$SESSION_USER:x:" "$USER_DATA_PATH/passwd"; then 64 | >&2 echo "[$(date)][WARNING] $SESSION_USER already found in local passwd file. User will not be added." 65 | else 66 | echo "[$(date)][INFO] Adding \"$SESSION_USER:x:$uid:$gid:$SESSION_USER:$home:$shell\" to passwd file." 67 | echo "$SESSION_USER:x:$uid:$gid:$SESSION_USER:$home:$shell" >> "$USER_DATA_PATH/passwd" 68 | fi 69 | 70 | if grep -q -E "^$SESSION_USER:x:" "$USER_DATA_PATH/group"; then 71 | >&2 echo "[$(date)][WARNING] $SESSION_USER group already found in local group file. Group will not be added." 72 | else 73 | echo "[$(date)][INFO] Adding \"$SESSION_USER:x:$gid:\" to group file."
74 | echo "$SESSION_USER:x:$gid:" >> "$USER_DATA_PATH/group" 75 | fi 76 | -------------------------------------------------------------------------------- /secret-config-idm-shadow/shadow: -------------------------------------------------------------------------------- 1 | root:::0::::: 2 | bin:!::0::::: 3 | daemon:!::0::::: 4 | adm:!::0::::: 5 | lp:!::0::::: 6 | sync:!::0::::: 7 | shutdown:!::0::::: 8 | halt:!::0::::: 9 | mail:!::0::::: 10 | news:!::0::::: 11 | uucp:!::0::::: 12 | operator:!::0::::: 13 | man:!::0::::: 14 | postmaster:!::0::::: 15 | cron:!::0::::: 16 | ftp:!::0::::: 17 | sshd:!::0::::: 18 | at:!::0::::: 19 | squid:!::0::::: 20 | xfs:!::0::::: 21 | games:!::0::::: 22 | postgres:!::0::::: 23 | cyrus:!::0::::: 24 | vpopmail:!::0::::: 25 | ntp:!::0::::: 26 | smmsp:!::0::::: 27 | guest:!::0::::: 28 | nobody:!::0::::: 29 | demo:$6$/z.SrWdhomNvLgBZ$1ngJ8hk/sadMPjUDC2yAB2TXrdP04/vbO576nqJamoz.WU9q8ReinaMQXU04u6vs4oo3AHI07AAQuDEA5LtzE.:17527:0:99999:7::: 30 | test:$6$aZwtl/zkB9MAKmi8$c.U/cOtWPdsgbZU5uqgCjoz4FlaOLGdzF.21RiUqFH9iEOnQC6l5hbsdl3MWH/WP0gQHh8KAnHH8ApKkCwwiC.:17533:0:99999:7::: 31 | -------------------------------------------------------------------------------- /secret-config-sshd-host-keys/ssh_host_dsa_key: -------------------------------------------------------------------------------- 1 | -----BEGIN DSA PRIVATE KEY----- 2 | MIIBuwIBAAKBgQDmlfcd7Q3fJI1X1NnZatlxpUbOB0WweumowvHv5O8uZH+Gd1ZC 3 | 0N0H/ZqPoU775/pvhIraogKSRblOfCpSURzxhrHW7hIBVbIePhJNnEmRlWDetdgY 4 | mzmnEUPmS8j0OSW3yIXISOggDq4eeN0t1x8VfxjQPRnEjISenmLyQWx2xwIVALtA 5 | V7fVRMO+y2doQUpVeFwyPKY1AoGAMrnQF0AdomJmex6V5F1Y7La27MAK7IPERpjW 6 | N470kiNvTuUIiEyzWR6VPZJatKenDR2I/MF9UaNvcHOLPv86AoZzSFsMqgZcNZn4 7 | x89yC7MpPjx94/53HltVKORb14IdneUkfdXRzYuOJO2nF4Su4JRXEtvF8xNrocIG 8 | KzLYXWcCgYEAw4bo99OzDWK6K0pq3S23tVIwQHTk9Z0MOjKUu8GI+l79QZ93tNwT 9 | ahl5lS67LZtGlNHU7l9gv6Z3A+mBJ7cKCfPQ4+uClhdGj3hONcRE729OeZ02VMsG 10 | cfNBPSoUnoPJWdocorTXUWpNCFEd47xA5HfnmKvt2P3DfEvkzuG7S10CFClEZRAu 11 | 
Ogem85BGHhb3H22Evak+ 12 | -----END DSA PRIVATE KEY----- 13 | -------------------------------------------------------------------------------- /secret-config-sshd-host-keys/ssh_host_dsa_key.pub: -------------------------------------------------------------------------------- 1 | ssh-dss AAAAB3NzaC1kc3MAAACBAOaV9x3tDd8kjVfU2dlq2XGlRs4HRbB66ajC8e/k7y5kf4Z3VkLQ3Qf9mo+hTvvn+m+EitqiApJFuU58KlJRHPGGsdbuEgFVsh4+Ek2cSZGVYN612BibOacRQ+ZLyPQ5JbfIhchI6CAOrh543S3XHxV/GNA9GcSMhJ6eYvJBbHbHAAAAFQC7QFe31UTDvstnaEFKVXhcMjymNQAAAIAyudAXQB2iYmZ7HpXkXVjstrbswArsg8RGmNY3jvSSI29O5QiITLNZHpU9klq0p6cNHYj8wX1Ro29wc4s+/zoChnNIWwyqBlw1mfjHz3ILsyk+PH3j/nceW1Uo5FvXgh2d5SR91dHNi44k7acXhK7glFcS28XzE2uhwgYrMthdZwAAAIEAw4bo99OzDWK6K0pq3S23tVIwQHTk9Z0MOjKUu8GI+l79QZ93tNwTahl5lS67LZtGlNHU7l9gv6Z3A+mBJ7cKCfPQ4+uClhdGj3hONcRE729OeZ02VMsGcfNBPSoUnoPJWdocorTXUWpNCFEd47xA5HfnmKvt2P3DfEvkzuG7S10= root@cd131a471c1e 2 | -------------------------------------------------------------------------------- /secret-config-sshd-host-keys/ssh_host_ecdsa_key: -------------------------------------------------------------------------------- 1 | -----BEGIN EC PRIVATE KEY----- 2 | MHcCAQEEIMMfkzdjDpHrEWtzd8gb7uiQbWFiQNMUP2sW3OKioPv/oAoGCCqGSM49 3 | AwEHoUQDQgAEQfPHJCX94Y0Fg3p8l/8/MYnPFeXeKc1BX6g113w16SJGLf+134UK 4 | FztCTT4wdG3CBLsEZlFp8HZVDp1E5k4xNg== 5 | -----END EC PRIVATE KEY----- 6 | -------------------------------------------------------------------------------- /secret-config-sshd-host-keys/ssh_host_ecdsa_key.pub: -------------------------------------------------------------------------------- 1 | ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEHzxyQl/eGNBYN6fJf/PzGJzxXl3inNQV+oNdd8NekiRi3/td+FChc7Qk0+MHRtwgS7BGZRafB2VQ6dROZOMTY= root@cd131a471c1e 2 | -------------------------------------------------------------------------------- /secret-config-sshd-host-keys/ssh_host_ed25519_key: -------------------------------------------------------------------------------- 1 | -----BEGIN OPENSSH 
PRIVATE KEY----- 2 | b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW 3 | QyNTUxOQAAACBBD2lPnyW7VDOWHA1u8Pohqppz9pAQwUaHr6O60RcmLAAAAJikPDc8pDw3 4 | PAAAAAtzc2gtZWQyNTUxOQAAACBBD2lPnyW7VDOWHA1u8Pohqppz9pAQwUaHr6O60RcmLA 5 | AAAEB8dl1ZVt1k+t0SCLwQoaabA8ej13DQthEmGOSfGEEpMEEPaU+fJbtUM5YcDW7w+iGq 6 | mnP2kBDBRoevo7rRFyYsAAAAEXJvb3RAY2QxMzFhNDcxYzFlAQIDBA== 7 | -----END OPENSSH PRIVATE KEY----- 8 | -------------------------------------------------------------------------------- /secret-config-sshd-host-keys/ssh_host_ed25519_key.pub: -------------------------------------------------------------------------------- 1 | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEPaU+fJbtUM5YcDW7w+iGqmnP2kBDBRoevo7rRFyYs root@cd131a471c1e 2 | -------------------------------------------------------------------------------- /secret-config-sshd-host-keys/ssh_host_rsa_key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEpAIBAAKCAQEAs8Vt96gSNktm2OMXzb4bmjXHtMnwLHPqy/4ZXip8mqg+XhUD 3 | 6oaknIwJELuPcKIlg1ghuonIVVCwV5H6z5p5IJKVaGMmSuFOi3XJbXT6KKRrHsK7 4 | vF41vdP57ypp1cf+2uICwpwoO5rEqSL4CCIVi8TMtaJjvNVXzcEz69q6Yk7udtfD 5 | 6ScI4CCE5JvlROeUDVqu8Hkxakn5g/ZGg2JUMUFFIAhWIlFhLpLB4B0YQRdm7/H9 6 | yUOcgrSi31B1o2+4RKcKfMGRerQuOZmbD87vNYUzYbfc4R6gA/jnS6EtaKQ9J3iz 7 | gDAeXc3ZcpIsR/eEIO8cIY6M9L81Thbu2YXZzQIDAQABAoIBADmOsbnEZyhZFfHF 8 | K97kykOiinFY4nvpFTkA/zBGHCUMTwOiaOGTAGta7qAb3T4rvCUEd7AY4zplnkA7 9 | bflANR33sLx+WklJP/Oo37ga5ulSUzXDFYanBz/i+bfYdZBL+04rZMTYaI1E7UhV 10 | +OHpv8pDVWOmPZa9G+K1xCD0pA5LOzjy9TVgmSPJXVQYOuu2d0Js0YqxixR4RzKK 11 | lSKIK2CWDR9UN8qcqu0P/v2ysmicujIJ+XUqhjUNnyGr6qzezGd7Z0DSPwyw7EEG 12 | 2p2N85rHOpdEyXKxVdD6N/zK565WQFxgSMzXBk044htOEcNxJJbG4rKJ00HJ+MGv 13 | o+staQUCgYEA7aSPYsjY93xk9dceKhmxq2QOVvvjscY3N19UZPp2EmuUoT/moyE4 14 | z//tAPL7FYSmKilRTqij1p1KC3lVai7//Kb5lkl1r8MYp0eqqwJgXjL69UPnC0ny 15 | Ofib0nEnhzeNhLgIo9SsaiHJyyuTiV/mHzbZ+m/Kbq3/oqAbPsuEB9MCgYEAwahy 16 | 
Dcsvea+KG/9t9LTTamz0TmVyypZnjgy/KnFZHA05Zi+WoD4BuLPT1W03SWvSHE0o 17 | rOSffOT3kHDKIRnRk4vqi1d6AChpjh6V18fuiLRsVcdx3egy2KzSmH699yNdOt+y 18 | kCy+JFpIXI1v9u+xvjjRXXVw8FKE1XTR1Ue8M98CgYEA2d27QtJl74bQvH3Kfsht 19 | lXa2mtJ1bj8N0isIoUbpxntwmOCPntDPWAoGi4832ANzn0Wf8CA1jIVJI/nJ7/5E 20 | 26ltOnYAefHAAWR3uC4GkXXlk8P75uKVOsaMfMMWfSXWDW33JdPNecOeUDCUIyaT 21 | P9y2vJ2OlifZLIviTpCga9cCgYBiQrrkPtVu89+qxxcek+W12WS4pobxPhF7JQKW 22 | YX7qWddm/vx5gBzVgAEbCNiFm3y3uXrLBxHZiEAI/QHCe9w39kViwFb534d3ghNb 23 | smlY2dsiRxmCk//AqygMEjsHO91hMwHiX6F2xoxy0Z0e+Y4BS8kHl8BfYC9gM28I 24 | veZDlwKBgQCzPwIyYv9hNvaEGvnOfHKsl4oezzfCdH9pVg2fUUE5MWtDlNgRSCva 25 | WVDp/nkPc1sBKBg9mB5CV410B9sMhG0kV83/4fi2aLK8sGaZLJnwuoGCbH+701RP 26 | pDZfQsHPrEYQISAQef8O8ZIfzJuMYSNw8/3Ns0JuNV2Abx/Zus3qdQ== 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /secret-config-sshd-host-keys/ssh_host_rsa_key.pub: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCzxW33qBI2S2bY4xfNvhuaNce0yfAsc+rL/hleKnyaqD5eFQPqhqScjAkQu49woiWDWCG6ichVULBXkfrPmnkgkpVoYyZK4U6LdcltdPoopGsewru8XjW90/nvKmnVx/7a4gLCnCg7msSpIvgIIhWLxMy1omO81VfNwTPr2rpiTu5218PpJwjgIITkm+VE55QNWq7weTFqSfmD9kaDYlQxQUUgCFYiUWEuksHgHRhBF2bv8f3JQ5yCtKLfUHWjb7hEpwp8wZF6tC45mZsPzu81hTNht9zhHqAD+OdLoS1opD0neLOAMB5dzdlykixH94Qg7xwhjoz0vzVOFu7ZhdnN root@cd131a471c1e 2 | -------------------------------------------------------------------------------- /session-gateway/deploy.yaml: -------------------------------------------------------------------------------- 1 | # Capabilities are dropped assuming Docker is in use (minikube) 2 | apiVersion: apps/v1beta2 3 | kind: Deployment 4 | metadata: 5 | name: session-gateway 6 | labels: 7 | app: session-gateway 8 | network: unrestricted 9 | spec: 10 | replicas: 1 11 | selector: 12 | matchLabels: 13 | app: session-gateway 14 | network: unrestricted 15 | template: 16 | metadata: 17 | labels: 18 | app: 
session-gateway 19 | network: unrestricted 20 | spec: 21 | serviceAccount: session-gateway 22 | containers: 23 | - name: logs 24 | image: arcts/rsyslog:1.1.0 25 | imagePullPolicy: IfNotPresent 26 | securityContext: 27 | capabilities: 28 | drop: 29 | - AUDIT_WRITE 30 | - CHOWN 31 | - DAC_OVERRIDE 32 | - FOWNER 33 | - FSETID 34 | - MKNOD 35 | - NET_RAW 36 | - SETFCAP 37 | - SETGID 38 | - SETUID 39 | - SYS_CHROOT 40 | volumeMounts: 41 | - name: syslog 42 | mountPath: /syslog 43 | - name: gateway 44 | image: sshd-gateway 45 | command: ["./scripts/gateway-entrypoint.sh"] 46 | imagePullPolicy: Never 47 | ports: 48 | - containerPort: 22 49 | securityContext: 50 | capabilities: 51 | drop: 52 | - AUDIT_WRITE 53 | - DAC_OVERRIDE 54 | - FOWNER 55 | - FSETID 56 | - MKNOD 57 | - SETFCAP 58 | volumeMounts: 59 | - name: passwd 60 | mountPath: /etc/passwd 61 | readOnly: true 62 | subPath: passwd 63 | - name: group 64 | mountPath: /etc/group 65 | readOnly: true 66 | subPath: group 67 | - name: shadow 68 | mountPath: /etc/shadow 69 | readOnly: true 70 | subPath: shadow 71 | - name: sshd-config 72 | mountPath: /etc/ssh/sshd_config 73 | subPath: sshd_config 74 | readOnly: true 75 | - name: sshd-pam 76 | mountPath: /etc/pam.d/sshd 77 | subPath: sshd 78 | readOnly: true 79 | - name: host-keys 80 | mountPath: /etc/ssh/keys 81 | - name: scripts 82 | mountPath: /scripts 83 | readOnly: true 84 | - name: templates 85 | mountPath: /templates 86 | readOnly: true 87 | - name: syslog 88 | mountPath: /syslog 89 | volumes: 90 | - name: passwd 91 | configMap: 92 | name: config-idm 93 | items: 94 | - key: passwd 95 | path: passwd 96 | - name: group 97 | configMap: 98 | name: config-idm 99 | items: 100 | - key: group 101 | path: group 102 | - name: shadow 103 | secret: 104 | secretName: config-idm-shadow 105 | defaultMode: 0640 106 | - name: sshd-config 107 | configMap: 108 | name: config-sshd 109 | items: 110 | - key: sshd_config 111 | path: sshd_config 112 | - name: sshd-pam 113 | configMap: 114 | 
name: config-sshd 115 | items: 116 | - key: sshd 117 | path: sshd 118 | - name: host-keys 119 | secret: 120 | secretName: config-sshd-host-keys 121 | defaultMode: 0600 122 | - name: scripts 123 | configMap: 124 | name: script-session-gateway 125 | defaultMode: 0755 126 | - name: templates 127 | configMap: 128 | name: template-session-host 129 | defaultMode: 0644 130 | - name: syslog 131 | emptyDir: {} 132 | -------------------------------------------------------------------------------- /session-gateway/role.yaml: -------------------------------------------------------------------------------- 1 | kind: Role 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: session-gateway 5 | rules: 6 | - apiGroups: [""] 7 | resources: ["configmaps", "secrets"] 8 | verbs: ["get", "list", "watch"] 9 | - apiGroups: [""] 10 | resources: ["pods"] 11 | verbs: ["get", "list", "watch", "create", "delete"] 12 | - apiGroups: ["apps"] 13 | resources: ["deployments"] 14 | verbs: ["get", "list", "watch", "create", "delete"] 15 | -------------------------------------------------------------------------------- /session-gateway/rolebinding.yaml: -------------------------------------------------------------------------------- 1 | kind: RoleBinding 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: session-gateway 5 | roleRef: 6 | apiGroup: rbac.authorization.k8s.io 7 | kind: Role 8 | name: session-gateway 9 | subjects: 10 | - kind: ServiceAccount 11 | name: session-gateway 12 | -------------------------------------------------------------------------------- /session-gateway/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | labels: 5 | app: session-gateway 6 | name: session-gateway 7 | spec: 8 | ports: 9 | - nodePort: 32222 10 | port: 22 11 | protocol: TCP 12 | targetPort: 22 13 | selector: 14 | app: session-gateway 15 | type: NodePort 16 |
-------------------------------------------------------------------------------- /session-gateway/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: session-gateway 5 | -------------------------------------------------------------------------------- /template-session-host/restricted.json: -------------------------------------------------------------------------------- 1 | { 2 | "spec": { 3 | "strategy": { 4 | "type": "Recreate" 5 | }, 6 | "template": { 7 | "metadata": { 8 | "labels": { 9 | "mode": "restricted" 10 | } 11 | }, 12 | "spec": { 13 | "automountServiceAccountToken": false, 14 | "containers": [{ 15 | "image": "__IMAGE__", 16 | "stdin": true, 17 | "tty": true, 18 | "imagePullPolicy": "IfNotPresent", 19 | "name": "session-host", 20 | "securityContext": { 21 | "allowPrivilegeEscalation": false, 22 | "capabilities": { 23 | "drop": [ 24 | "AUDIT_WRITE", 25 | "DAC_OVERRIDE", 26 | "MKNOD", 27 | "NET_BIND_SERVICE", 28 | "NET_RAW", 29 | "SETGID", 30 | "SETUID", 31 | "SYS_CHROOT" 32 | ] 33 | }, 34 | "runAsUser": __UID__ 35 | }, 36 | "volumeMounts": [{ 37 | "mountPath": "/etc/passwd", 38 | "name": "userdata", 39 | "readOnly": true, 40 | "subPath": "passwd" 41 | }, 42 | { 43 | "mountPath": "/etc/group", 44 | "name": "userdata", 45 | "readOnly": true, 46 | "subPath": "group" 47 | }, 48 | { 49 | "mountPath": "/home/__USER__", 50 | "name": "home", 51 | "subPath": "__USER__" 52 | } 53 | ] 54 | }], 55 | "initContainers": [{ 56 | "command": [ 57 | "./scripts/add-session-user.sh" 58 | ], 59 | "env": [{ 60 | "name": "SESSION_USER", 61 | "value": "__USER__" 62 | }], 63 | "image": "__IMAGE__", 64 | "imagePullPolicy": "IfNotPresent", 65 | "name": "add-session-user", 66 | "volumeMounts": [{ 67 | "mountPath": "/scripts", 68 | "name": "scripts" 69 | }, 70 | { 71 | "mountPath": "/userconfig", 72 | "name": "userconfig" 73 | }, 74 | { 75 | "mountPath": "/userdata", 76 
| "name": "userdata" 77 | } 78 | ] 79 | }], 80 | "securityContext": { 81 | "fsGroup": __GID__, 82 | "supplementalGroups": [__GID__] 83 | }, 84 | "volumes": [{ 85 | "configMap": { 86 | "defaultMode": 420, 87 | "name": "user-__USER__" 88 | }, 89 | "name": "userconfig" 90 | }, 91 | { 92 | "configMap": { 93 | "defaultMode": 448, 94 | "name": "script-session-host" 95 | }, 96 | "name": "scripts" 97 | }, 98 | { 99 | "emptyDir": {}, 100 | "name": "userdata" 101 | }, 102 | { 103 | "name": "home", 104 | "persistentVolumeClaim": { 105 | "claimName": "home" 106 | } 107 | }] 108 | } 109 | } 110 | } 111 | } 112 | -------------------------------------------------------------------------------- /template-session-host/unrestricted.json: -------------------------------------------------------------------------------- 1 | { 2 | "spec": { 3 | "strategy": { 4 | "type": "Recreate" 5 | }, 6 | "template": { 7 | "metadata": { 8 | "labels": { 9 | "mode": "unrestricted" 10 | } 11 | }, 12 | "spec": { 13 | "automountServiceAccountToken": false, 14 | "containers": [{ 15 | "image": "__IMAGE__", 16 | "stdin": true, 17 | "tty": true, 18 | "imagePullPolicy": "IfNotPresent", 19 | "name": "session-host", 20 | "securityContext": { 21 | "runAsUser": __UID__ 22 | }, 23 | "volumeMounts": [{ 24 | "mountPath": "/etc/passwd", 25 | "name": "userdata", 26 | "readOnly": true, 27 | "subPath": "passwd" 28 | }, 29 | { 30 | "mountPath": "/etc/group", 31 | "name": "userdata", 32 | "readOnly": true, 33 | "subPath": "group" 34 | }, 35 | { 36 | "mountPath": "/etc/sudoers.d/sudo_user", 37 | "name": "userdata", 38 | "readOnly": true, 39 | "subPath": "sudo_user" 40 | }, 41 | { 42 | "mountPath": "/home/__USER__", 43 | "name": "home", 44 | "subPath": "__USER__" 45 | } 46 | ] 47 | }], 48 | "initContainers": [{ 49 | "command": [ 50 | "./scripts/add-session-user.sh" 51 | ], 52 | "env": [{ 53 | "name": "SESSION_USER", 54 | "value": "__USER__" 55 | }], 56 | "image": "__IMAGE__", 57 | "imagePullPolicy": "IfNotPresent", 58 | 
"name": "add-session-user", 59 | "volumeMounts": [{ 60 | "mountPath": "/scripts", 61 | "name": "scripts" 62 | }, 63 | { 64 | "mountPath": "/userconfig", 65 | "name": "userconfig" 66 | }, 67 | { 68 | "mountPath": "/userdata", 69 | "name": "userdata" 70 | } 71 | ] 72 | }, 73 | { 74 | "command": [ "/bin/sh" ], 75 | "args": [ "-c", "echo \"$(SESSION_USER) ALL=(ALL) NOPASSWD:ALL\" > /userdata/sudo_user" 76 | ], 77 | "env": [{ 78 | "name": "SESSION_USER", 79 | "value": "__USER__" 80 | }], 81 | "image": "__IMAGE__", 82 | "imagePullPolicy": "IfNotPresent", 83 | "name": "add-sudo-user", 84 | "volumeMounts": [{ 85 | "mountPath": "/userdata", 86 | "name": "userdata" 87 | } 88 | ] 89 | }], 90 | "securityContext": { 91 | "fsGroup": __GID__, 92 | "supplementalGroups": [__GID__] 93 | }, 94 | "volumes": [{ 95 | "configMap": { 96 | "defaultMode": 420, 97 | "name": "user-__USER__" 98 | }, 99 | "name": "userconfig" 100 | }, 101 | { 102 | "configMap": { 103 | "defaultMode": 448, 104 | "name": "script-session-host" 105 | }, 106 | "name": "scripts" 107 | }, 108 | { 109 | "emptyDir": {}, 110 | "name": "userdata" 111 | }, 112 | { 113 | "name": "home", 114 | "persistentVolumeClaim": { 115 | "claimName": "home" 116 | } 117 | }] 118 | } 119 | } 120 | } 121 | } 122 | -------------------------------------------------------------------------------- /users/demo.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: user-demo 5 | labels: 6 | type: user 7 | user: demo 8 | data: 9 | uid: "1110" 10 | gid: "1110" 11 | home: /home/demo 12 | shell: /bin/bash 13 | -------------------------------------------------------------------------------- /users/test.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: user-test 5 | labels: 6 | type: user 7 | user: test 8 | data: 9 | uid: "1111" 10 | gid: "1111" 11 | home: 
/home/test 12 | shell: /bin/bash 13 | --------------------------------------------------------------------------------
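
The session-host templates above are parameterized on `__USER__`, `__UID__`, `__GID__`, and `__IMAGE__`, with the concrete values coming from per-user ConfigMaps such as `users/demo.yaml`. As a minimal sketch of how that substitution could work (assuming a simple `sed` pass; the actual rendering logic lives in `script-session-gateway/session-spawner.sh`):

```shell
#!/bin/sh
# Hypothetical sketch only: fill the __USER__/__UID__/__GID__/__IMAGE__
# placeholders in a template fragment. Values below mirror users/demo.yaml;
# the image name is an assumption for illustration.
SESSION_USER=demo
SESSION_UID=1110
SESSION_GID=1110
SESSION_IMAGE=sshd-session-host

# render: substitute every placeholder marker on stdin.
render() {
  sed -e "s|__USER__|${SESSION_USER}|g" \
      -e "s|__UID__|${SESSION_UID}|g" \
      -e "s|__GID__|${SESSION_GID}|g" \
      -e "s|__IMAGE__|${SESSION_IMAGE}|g"
}

# Render a small excerpt of the restricted template.
render <<'EOF'
{"image": "__IMAGE__",
 "securityContext": {"runAsUser": __UID__},
 "volumeMounts": [{"mountPath": "/home/__USER__", "subPath": "__USER__"}]}
EOF
```

The rendered JSON would then be handed to `kubectl run` with `-i` and `-t` (and the template as an override), as described in the README.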