├── .gitignore
├── README.md
├── dex
│   └── dex.yaml
├── example-app
│   ├── build.sh
│   ├── go.mod
│   ├── go.sum
│   ├── main.go
│   └── templates.go
├── kind
│   └── kind.yaml
├── ldap
│   ├── ldap.yaml
│   └── ldif
│       ├── 0-ous.ldif
│       ├── 1-users.ldif
│       └── 2-groups.ldif
├── manifests
│   ├── authorization.yaml
│   └── kube-apiserver.yaml
├── misc
│   ├── architecture.png
│   ├── screenshots-with-kubelogin.png
│   └── screenshots.png
├── setup.sh
└── tls-setup
    ├── Makefile
    ├── ca-config.json
    ├── ca-csr.json
    ├── req-csr-dex.json
    └── req-csr-k8s.json

/.gitignore:
--------------------------------------------------------------------------------
1 | _*
2 | **/.DS_Store

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Kubernetes + Dex + LDAP Integration
2 | 
3 | A simple walk-through guide on integrating `Kubernetes` with `Dex` + `LDAP`.
4 | 
5 | In this experiment, we're going to use these major components:
6 | 
7 | - Kubernetes v1.21.x, powered by [`kind` v0.11.1](https://kind.sigs.k8s.io/);
8 | - [Dex](https://github.com/dexidp/dex) v2.30.x;
9 | - [OpenLDAP](https://www.openldap.org/) with [osixia/openldap:1.5.x](https://github.com/osixia/docker-openldap)
10 | 
11 | A companion Medium article is available here: https://brightzheng100.medium.com/kubernetes-dex-ldap-integration-f305292a16b9
12 | 
13 | The overall idea can be illustrated as below:
14 | 
15 | ![architecture](misc/architecture.png)
16 | 
17 | ## Get Started
18 | 
19 | ```sh
20 | git clone https://github.com/brightzheng100/kubernetes-dex-ldap-integration.git
21 | cd kubernetes-dex-ldap-integration
22 | ```
23 | 
24 | ## The TL;DR Guide
25 | 
26 | ### Setup
27 | 
28 | The TL;DR guide uses the [setup.sh](setup.sh) script, which will:
29 | 1. check for the required tools: `docker`, `git`, `cfssl`, `cfssljson`, `kind`, `kubectl`;
30 | 2. generate the necessary TLS certs/keys for both Kubernetes and Dex;
31 | 3.
create a `kind`-powered Kubernetes cluster with OIDC configured against Dex;
32 | 4. deploy OpenLDAP in namespace `ldap` as the LDAP server, with some dummy entities;
33 | 5. deploy Dex in namespace `dex`;
34 | 6. create a proxy so that we can access Dex from our laptop (e.g. my MacBook)
35 | 
36 | ```sh
37 | ./setup.sh
38 | ```
39 | 
40 | > Note: the populated dummy LDAP entities, all with password `secret`, include:
41 | > - `admin1@example.org`
42 | > - `admin2@example.org`
43 | > - `developer1@example.org`
44 | > - `developer2@example.org`
45 | 
46 | ### Use
47 | 
48 | It's common to set up the kube config, e.g. `~/.kube/config`, for daily use.
49 | 
50 | For that, we may simply follow these steps:
51 | 
52 | 1. Bind a user, say **"admin1@example.org"**, to the **"cluster-admin"** cluster role
53 | 
54 | ```sh
55 | $ kubectl create clusterrolebinding oidc-cluster-admin \
56 |     --clusterrole=cluster-admin \
57 |     --user="admin1@example.org"
58 | ```
59 | 
60 | 2. Use the [`kubelogin`](https://github.com/int128/kubelogin) plugin to simplify the integration
61 | 
62 | ```sh
63 | $ echo "127.0.0.1 dex.dex.svc" | sudo tee -a /etc/hosts
64 | 
65 | $ SVC_PORT="$(kubectl get -n dex svc/dex -o json | jq '.spec.ports[0].nodePort')"
66 | $ kubectl config set-credentials oidc \
67 |     --exec-api-version=client.authentication.k8s.io/v1beta1 \
68 |     --exec-command=kubectl \
69 |     --exec-arg=oidc-login \
70 |     --exec-arg=get-token \
71 |     --exec-arg=--oidc-issuer-url=https://dex.dex.svc:$SVC_PORT \
72 |     --exec-arg=--oidc-redirect-url-hostname=dex.dex.svc \
73 |     --exec-arg=--oidc-client-id=example-app \
74 |     --exec-arg=--oidc-client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0 \
75 |     --exec-arg=--oidc-extra-scope=email \
76 |     --exec-arg=--certificate-authority=`pwd`/tls-setup/_certs/ca.pem
77 | ```
78 | 
79 | 3.
Use the user to access Kubernetes
80 | 
81 | ```sh
82 | $ kubectl --user=oidc get nodes
83 | ```
84 | 
85 | This will open an authentication UI in our default browser; key in the credentials of the LDAP user mentioned above:
86 | - Email Address: `admin1@example.org`
87 | - Password: `secret`
88 | 
89 | ![screenshot-with-kubelogin](misc/screenshots-with-kubelogin.png)
90 | 
91 | The login will be authenticated by Dex + LDAP, and once the authentication is done we can see output like:
92 | 
93 | ```
94 | $ kubectl --user=oidc get nodes
95 | NAME                             STATUS   ROLES                  AGE     VERSION
96 | dex-ldap-cluster-control-plane   Ready    control-plane,master   8m55s   v1.21.1
97 | dex-ldap-cluster-worker          Ready    <none>                 8m30s   v1.21.1
98 | ```
99 | 
100 | > Notes:
101 | > - the login is cached, so subsequent access will be transparent;
102 | > - if you want to switch users, remove the cache first (`rm -rf ~/.kube/cache/oidc-login/*`) and you will be prompted again for the user to log in as;
103 | 
104 | 4. (optional) Keep multiple users logged in?
105 | 
106 | So far I haven't found a good way to keep multiple users logged in concurrently with `kubelogin`. But we can generate tokens for different users and set the credentials accordingly:
107 | 
108 | ```sh
109 | cd example-app
110 | go run .
--issuer https://dex.dex.svc:32000 --issuer-root-ca `pwd`/../tls-setup/_certs/ca.pem
111 | ```
112 | 
113 | Then open your browser and navigate to http://127.0.0.1:5555. Log in with the users you want, one at a time, to generate different tokens. Then add each user, with its `ID Token` and `Refresh Token`, into your kubeconfig file, e.g. `~/.kube/config`, like this:
114 | 
115 | ```sh
116 | # Add user admin1@example.org as oidc-admin1
117 | kubectl config set-credentials oidc-admin1 \
118 |   --auth-provider=oidc \
119 |   --auth-provider-arg=idp-issuer-url=https://dex.dex.svc:32000 \
120 |   --auth-provider-arg=client-id=example-app \
121 |   --auth-provider-arg=client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0 \
122 |   --auth-provider-arg=refresh-token=<REFRESH_TOKEN> \
123 |   --auth-provider-arg=idp-certificate-authority=`pwd`/tls-setup/_certs/ca.pem \
124 |   --auth-provider-arg=id-token=<ID_TOKEN>
125 | 
126 | # Add user developer1@example.org as oidc-developer1
127 | kubectl config set-credentials oidc-developer1 \
128 |   --auth-provider=oidc \
129 |   --auth-provider-arg=idp-issuer-url=https://dex.dex.svc:32000 \
130 |   --auth-provider-arg=client-id=example-app \
131 |   --auth-provider-arg=client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0 \
132 |   --auth-provider-arg=refresh-token=<REFRESH_TOKEN> \
133 |   --auth-provider-arg=idp-certificate-authority=`pwd`/tls-setup/_certs/ca.pem \
134 |   --auth-provider-arg=id-token=<ID_TOKEN>
135 | ```
136 | 
137 | 
138 | ## The Step-by-step Guide
139 | 
140 | ### Generating TLS PKI files for both Dex and K8s
141 | 
142 | > Note:
143 | > 1. [`cfssl` and `cfssljson`](https://github.com/cloudflare/cfssl/releases) are required to generate certs/keys
144 | > 2. You may try using [cert-manager](https://github.com/jetstack/cert-manager), if you want
145 | 
146 | ```sh
147 | cd tls-setup
148 | 
149 | make ca req-dex req-k8s
150 | ```
151 | 
152 | > OUTPUT: a folder `_certs` will be created and several pairs of certs/keys will be generated.
153 | 
154 | ```sh
155 | $ tree _certs
156 | _certs
157 | ├── ca-key.pem
158 | ├── ca.csr
159 | ├── ca.pem
160 | ├── dex-key.pem
161 | ├── dex.csr
162 | ├── dex.pem
163 | ├── k8s-key.pem
164 | ├── k8s.csr
165 | └── k8s.pem
166 | 
167 | 0 directories, 9 files
168 | ```
169 | 
170 | ### Creating Kubernetes cluster with API Server configured
171 | 
172 | > Note: I'm going to use `kind` here; you may try other options too, like `minikube` or `k3s/k3d`, but the process may need a little tuning.
173 | 
174 | ```sh
175 | # Make sure we're working from the Git repo's root folder
176 | cd "$( git rev-parse --show-toplevel )"
177 | 
178 | PROJECT_ROOT="$(pwd)" envsubst < kind/kind.yaml | kind create cluster --name dex-ldap-cluster --config -
179 | ```
180 | 
181 | ### Deploying OpenLDAP as the LDAP Server
182 | 
183 | ```sh
184 | # Make sure we're working from the Git repo's root folder
185 | cd "$( git rev-parse --show-toplevel )"
186 | 
187 | kubectl create ns ldap
188 | 
189 | kubectl create secret generic openldap \
190 |   --namespace ldap \
191 |   --from-literal=adminpassword=adminpassword
192 | 
193 | kubectl create configmap ldap \
194 |   --namespace ldap \
195 |   --from-file=ldap/ldif
196 | 
197 | kubectl apply --namespace ldap -f ldap/ldap.yaml
198 | 
199 | # Load ldif data after the OpenLDAP is ready
200 | # Note: in theory these should be loaded automatically, but that doesn't work here, so we load them manually
201 | LDAP_POD=$(kubectl -n ldap get pod -l "app.kubernetes.io/name=openldap" -o jsonpath="{.items[0].metadata.name}")
202 | kubectl -n ldap exec $LDAP_POD -- ldapadd -x -D "cn=admin,dc=example,dc=org" -w adminpassword -H ldap://localhost:389 -f /ldifs/0-ous.ldif
203 | kubectl -n ldap exec $LDAP_POD -- ldapadd -x -D "cn=admin,dc=example,dc=org" -w adminpassword -H ldap://localhost:389 -f /ldifs/1-users.ldif
204 | kubectl -n ldap exec $LDAP_POD -- ldapadd -x -D "cn=admin,dc=example,dc=org" -w adminpassword -H ldap://localhost:389 -f /ldifs/2-groups.ldif
205 | 
206 | # Check the
users loaded
207 | kubectl -n ldap exec $LDAP_POD -- \
208 |   ldapsearch -LLL -x -H ldap://localhost:389 -D "cn=admin,dc=example,dc=org" -w adminpassword -b "ou=people,dc=example,dc=org" dn
209 | ```
210 | 
211 | You should see the users that have been created:
212 | 
213 | ```
214 | dn: ou=people,dc=example,dc=org
215 | dn: cn=admin1,ou=people,dc=example,dc=org
216 | dn: cn=admin2,ou=people,dc=example,dc=org
217 | dn: cn=developer1,ou=people,dc=example,dc=org
218 | dn: cn=developer2,ou=people,dc=example,dc=org
219 | ```
220 | 
221 | ### Deploying Dex on Kubernetes with LDAP integrated
222 | 
223 | ```sh
224 | # Make sure we're working from the Git repo's root folder
225 | cd "$( git rev-parse --show-toplevel )"
226 | 
227 | kubectl create ns dex
228 | 
229 | kubectl create secret tls dex-tls \
230 |   --namespace dex \
231 |   --cert=tls-setup/_certs/dex.pem \
232 |   --key=tls-setup/_certs/dex-key.pem
233 | 
234 | kubectl apply --namespace dex -f dex/dex.yaml
235 | ```
236 | 
237 | ### Enabling proxy to Dex
238 | 
239 | As the cluster is powered by `kind`, we need to do something extra to access Dex.
240 | 
241 | There are other ways too (see [kind#702](https://github.com/kubernetes-sigs/kind/issues/702)), but let's try this:
242 | 
243 | ```sh
244 | $ SVC_PORT="$(kubectl get -n dex svc/dex -o json | jq '.spec.ports[0].nodePort')"
245 | 
246 | # Create this proxy container
247 | $ docker run -d --restart always \
248 |     --name dex-kind-proxy-$SVC_PORT \
249 |     --publish 127.0.0.1:$SVC_PORT:$SVC_PORT \
250 |     --link dex-ldap-cluster-control-plane:target \
251 |     --network kind \
252 |     alpine/socat -dd \
253 |     tcp-listen:$SVC_PORT,fork,reuseaddr tcp-connect:target:$SVC_PORT
254 | ```
255 | 
256 | Now we can access Dex at `https://127.0.0.1:$SVC_PORT/`.
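By the way, the `ZXhhbXBsZS1hcHAtc2VjcmV0` string used as the OIDC client secret throughout this guide is not random: it is the Base64 encoding of a human-readable phrase. Assuming the static client registered in `dex/dex.yaml` follows Dex's stock example-app registration, the literal Base64 string itself is the effective secret (Dex compares it verbatim); decoding it is just a mnemonic:

```shell
# The OIDC client secret passed to kubelogin and example-app in this guide.
SECRET="ZXhhbXBsZS1hcHAtc2VjcmV0"

# Decode it to see the human-readable phrase behind it.
printf '%s' "$SECRET" | base64 -d    # prints: example-app-secret
echo
```

If you swap in your own client secret on the Dex side, remember to update it everywhere it appears here: the `kubectl config set-credentials` invocations and the `example-app` configuration.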
257 | 
258 | For example, issuing an HTTPS request to the discovery endpoint verifies the Dex installation:
259 | 
260 | ```sh
261 | $ curl -k https://127.0.0.1:$SVC_PORT/.well-known/openid-configuration
262 | {
263 |   "issuer": "https://dex.dex.svc:32000",
264 |   "authorization_endpoint": "https://dex.dex.svc:32000/auth",
265 |   "token_endpoint": "https://dex.dex.svc:32000/token",
266 |   "jwks_uri": "https://dex.dex.svc:32000/keys",
267 |   "response_types_supported": [
268 |     "code"
269 |   ],
270 |   "subject_types_supported": [
271 |     "public"
272 |   ],
273 |   "id_token_signing_alg_values_supported": [
274 |     "RS256"
275 |   ],
276 |   "scopes_supported": [
277 |     "openid",
278 |     "email",
279 |     "groups",
280 |     "profile",
281 |     "offline_access"
282 |   ],
283 |   "token_endpoint_auth_methods_supported": [
284 |     "client_secret_basic"
285 |   ],
286 |   "claims_supported": [
287 |     "aud",
288 |     "email",
289 |     "email_verified",
290 |     "exp",
291 |     "iat",
292 |     "iss",
293 |     "locale",
294 |     "name",
295 |     "sub"
296 |   ]
297 | }
298 | ```
299 | 
300 | But the issuer exposes its URL through the domain `dex.dex.svc`, so the easy fix is to add an entry to `/etc/hosts`:
301 | 
302 | ```sh
303 | $ echo "127.0.0.1 dex.dex.svc" | sudo tee -a /etc/hosts
304 | ```
305 | 
306 | ### Logging into the cluster
307 | 
308 | > Note:
309 | > 1. this `example-app` was copied from Dex's repo, [here](https://github.com/dexidp/dex/tree/master/examples/example-app).
310 | > 2. I've enabled `go mod` support so it's much easier to play with.
311 | 
312 | ```sh
313 | # Make sure we're working from the Git repo's root folder
314 | cd "$( git rev-parse --show-toplevel )"
315 | 
316 | cd example-app
317 | 
318 | go run .
\
319 |   --issuer https://dex.dex.svc:$SVC_PORT \
320 |   --issuer-root-ca ../tls-setup/_certs/ca.pem \
321 |   --debug
322 | ```
323 | 
324 | Now open a browser and access `http://127.0.0.1:5555/`:
325 | - Leave the form as is and click "Login";
326 | - In the "Log in to Your Account" page:
327 |   - Email Address: `admin1@example.org`
328 |   - Password: `secret`
329 | - A series of tokens will be generated; copy down the **`ID Token`**, something like this:
330 | ```
331 | eyJhbGciOiJSUzI1NiIsImtpZCI6IjkzMzBkOTRhNGIzZTYwNjNiZTFmMmFhN2JhMWExMzY1ODZlY2MzMWMifQ.eyJpc3MiOiJodHRwczovL2RleC5kZXguc3ZjOjMyMDAwIiwic3ViIjoiQ2lWamJqMWhaRzFwYmpFc2IzVTljR1Z2Y0d4bExHUmpQV1Y0WVcxd2JHVXNaR005YjNKbkVnUnNaR0Z3IiwiYXVkIjoiZXhhbXBsZS1hcHAiLCJleHAiOjE2MDc3NTgxODgsImlhdCI6MTYwNzY3MTc4OCwiYXRfaGFzaCI6IlB3NWJxNF9TYkcwYUZtUkYyZDQwV3ciLCJlbWFpbCI6ImFkbWluMUBleGFtcGxlLm9yZyIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJuYW1lIjoiYWRtaW4xIn0.D_7kzzwlT5u9eq0KYrL64K_az2sO7iQ_5-Oz7nYHcHWQ8bBmxkH5NldsaZjzHKi0myo7EBJtb_6fqT4817h8Tf-FmGw_Ig0Fx-iA8c651L563qsy86s1usrrKyxQo-B6nZi-gvbY_K27KemNhgyGfLjl0PlvNWSUhoA94E3mpnEkdHs0H7Ni8iOgyOoNQV6TisrQgcr6blaVFJoMVhx4_XP1WnC3YZBX3vbGMCamu67BUP1KgnRbUwGqsuWntT-MuNuu8nOaBeIDGSrXFmqkUVGqIwGsG5bBsHqsfXtgePkhxXChhMUwQbUs3B4FkWITSJsjyrvCGEeGBjRtEH1w7A
332 | ```
333 | 
334 | The screenshots are captured like this:
335 | 
336 | ![screenshots](misc/screenshots.png)
337 | 
338 | ### Accessing Kubernetes with the retrieved token
339 | 
340 | Now that we have the token, let's access the cluster through the raw API first:
341 | 
342 | ```sh
343 | # Retrieve the API Endpoint
344 | $ kubectl cluster-info
345 | Kubernetes master is running at https://127.0.0.1:55662
346 | KubeDNS is running at https://127.0.0.1:55662/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
347 | 
348 | $ APISERVER=https://127.0.0.1:55662 && \
349 | 
BEARER_TOKEN="eyJhbGciOiJSUzI1NiIsImtpZCI6IjkzMzBkOTRhNGIzZTYwNjNiZTFmMmFhN2JhMWExMzY1ODZlY2MzMWMifQ.eyJpc3MiOiJodHRwczovL2RleC5kZXguc3ZjOjMyMDAwIiwic3ViIjoiQ2lWamJqMWhaRzFwYmpFc2IzVTljR1Z2Y0d4bExHUmpQV1Y0WVcxd2JHVXNaR005YjNKbkVnUnNaR0Z3IiwiYXVkIjoiZXhhbXBsZS1hcHAiLCJleHAiOjE2MDc3NTgxODgsImlhdCI6MTYwNzY3MTc4OCwiYXRfaGFzaCI6IlB3NWJxNF9TYkcwYUZtUkYyZDQwV3ciLCJlbWFpbCI6ImFkbWluMUBleGFtcGxlLm9yZyIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJuYW1lIjoiYWRtaW4xIn0.D_7kzzwlT5u9eq0KYrL64K_az2sO7iQ_5-Oz7nYHcHWQ8bBmxkH5NldsaZjzHKi0myo7EBJtb_6fqT4817h8Tf-FmGw_Ig0Fx-iA8c651L563qsy86s1usrrKyxQo-B6nZi-gvbY_K27KemNhgyGfLjl0PlvNWSUhoA94E3mpnEkdHs0H7Ni8iOgyOoNQV6TisrQgcr6blaVFJoMVhx4_XP1WnC3YZBX3vbGMCamu67BUP1KgnRbUwGqsuWntT-MuNuu8nOaBeIDGSrXFmqkUVGqIwGsG5bBsHqsfXtgePkhxXChhMUwQbUs3B4FkWITSJsjyrvCGEeGBjRtEH1w7A"
350 | 
351 | $ curl -k $APISERVER/api/v1/namespaces/default/pods/ --header "Authorization: Bearer $BEARER_TOKEN"
352 | {
353 |   "kind": "Status",
354 |   "apiVersion": "v1",
355 |   "metadata": {
356 | 
357 |   },
358 |   "status": "Failure",
359 |   "message": "pods is forbidden: User \"admin1@example.org\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
360 |   "reason": "Forbidden",
361 |   "details": {
362 |     "kind": "pods"
363 |   },
364 |   "code": 403
365 | }
366 | 
367 | $ kubectl auth can-i --as admin1@example.org -n dex list pods
368 | no
369 | ```
370 | 
371 | The good news is that Kubernetes has recognized the logged-in user as `admin1@example.org`, but it still declined the access with `403 Forbidden`.
372 | 
373 | Why? Because by default no permissions are granted to a new user like `admin1@example.org`.
374 | 
375 | ### Kubernetes Authorization
376 | 
377 | As you may have seen, authentication is delegated to `Dex`, but authorization is handled by Kubernetes itself.
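Everything the API server knows about the caller comes from the claims carried inside that bearer token (here the `email` claim, per the `--oidc-username-claim=email` API server flag). As a quick debugging aid, the payload of the sample ID Token above can be decoded with nothing but `cut`, `tr`, and `base64` -- note this is a sketch for inspection only and performs no signature verification:

```shell
# The sample ID Token issued by Dex earlier in this guide.
TOKEN="eyJhbGciOiJSUzI1NiIsImtpZCI6IjkzMzBkOTRhNGIzZTYwNjNiZTFmMmFhN2JhMWExMzY1ODZlY2MzMWMifQ.eyJpc3MiOiJodHRwczovL2RleC5kZXguc3ZjOjMyMDAwIiwic3ViIjoiQ2lWamJqMWhaRzFwYmpFc2IzVTljR1Z2Y0d4bExHUmpQV1Y0WVcxd2JHVXNaR005YjNKbkVnUnNaR0Z3IiwiYXVkIjoiZXhhbXBsZS1hcHAiLCJleHAiOjE2MDc3NTgxODgsImlhdCI6MTYwNzY3MTc4OCwiYXRfaGFzaCI6IlB3NWJxNF9TYkcwYUZtUkYyZDQwV3ciLCJlbWFpbCI6ImFkbWluMUBleGFtcGxlLm9yZyIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJuYW1lIjoiYWRtaW4xIn0.D_7kzzwlT5u9eq0KYrL64K_az2sO7iQ_5-Oz7nYHcHWQ8bBmxkH5NldsaZjzHKi0myo7EBJtb_6fqT4817h8Tf-FmGw_Ig0Fx-iA8c651L563qsy86s1usrrKyxQo-B6nZi-gvbY_K27KemNhgyGfLjl0PlvNWSUhoA94E3mpnEkdHs0H7Ni8iOgyOoNQV6TisrQgcr6blaVFJoMVhx4_XP1WnC3YZBX3vbGMCamu67BUP1KgnRbUwGqsuWntT-MuNuu8nOaBeIDGSrXFmqkUVGqIwGsG5bBsHqsfXtgePkhxXChhMUwQbUs3B4FkWITSJsjyrvCGEeGBjRtEH1w7A"

# Take the middle (payload) segment and convert base64url to standard base64.
PAYLOAD="$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')"

# JWTs drop the base64 padding; restore it before decoding.
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done

printf '%s' "$PAYLOAD" | base64 -d
echo
```

The printed JSON contains `iss`, `aud`, `exp`, `email`, and so on; Kubernetes takes the `email` claim as the user name, which is exactly why the `403` response above names `admin1@example.org`.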
378 | 
379 | ```sh
380 | # Make sure we're working from the Git repo's root folder
381 | cd "$( git rev-parse --show-toplevel )"
382 | 
383 | kubectl apply -f manifests/authorization.yaml
384 | 
385 | kubectl auth can-i --as admin1@example.org -n dex list pods
386 | yes
387 | 
388 | curl -k -s $APISERVER/api/v1/namespaces/dex/pods/ -H "Authorization: Bearer $BEARER_TOKEN" | jq '.items[].metadata.name'
389 | "dex-5f97556766-kcfvl"
390 | ```
391 | 
392 | Yes! We can now access pods within the `dex` namespace, as per the permissions granted.
393 | 
394 | ### Generate `kubeconfig`
395 | 
396 | It's common to generate and distribute such a token by constructing a `kubeconfig` file.
397 | 
398 | ```sh
399 | # Generate a kubeconfig
400 | $ cat > ~/.kube/config-kind <

--------------------------------------------------------------------------------
/example-app/templates.go:
--------------------------------------------------------------------------------
1 | package main
2 | 
3 | import (
4 | 	"html/template"
5 | 	"log"
6 | 	"net/http"
7 | )
8 | 
9 | var indexTmpl = template.Must(template.New("index.html").Parse(`<html>
10 |   <body>
11 |     <form action="/login" method="post">
12 |        <p>
13 |          Authenticate for:<input type="text" name="cross_client" placeholder="list of client-ids">
14 |        </p>
15 |        <p>
16 |          Extra scopes:<input type="text" name="extra_scopes" placeholder="list of scopes">
17 |        </p>
18 |        <p>
19 |          Connector ID:<input type="text" name="connector_id" placeholder="connector id">
20 |        </p>
21 |        <p>
22 |          Request offline access:<input type="checkbox" name="offline_access" value="yes" checked>
23 |        </p>
24 |        <input type="submit" value="Login">
25 |     </form>
26 |   </body>
27 | </html>`))
28 | 
29 | func renderIndex(w http.ResponseWriter) {
30 | 	renderTemplate(w, indexTmpl, nil)
31 | }
32 | 
33 | type tokenTmplData struct {
34 | 	IDToken      string
35 | 	AccessToken  string
36 | 	RefreshToken string
37 | 	RedirectURL  string
38 | 	Claims       string
39 | }
40 | 
41 | var tokenTmpl = template.Must(template.New("token.html").Parse(`<html>
42 |   <head>
43 |     <style>
44 | /* make pre wrap */
45 | pre {
46 |  white-space: pre-wrap;       /* css-3 */
47 |  white-space: -moz-pre-wrap;  /* Mozilla, since 1999 */
48 |  white-space: -pre-wrap;      /* Opera 4-6 */
49 |  white-space: -o-pre-wrap;    /* Opera 7 */
50 |  word-wrap: break-word;       /* Internet Explorer 5.5+ */
51 | }
52 |     </style>
53 |   </head>
54 |   <body>
55 |     <p> ID Token: <pre><code>{{ .IDToken }}</code></pre></p>
56 |     <p> Access Token: <pre><code>{{ .AccessToken }}</code></pre></p>
57 |     <p> Claims: <pre><code>{{ .Claims }}</code></pre></p>
58 |     {{ if .RefreshToken }}
59 |     <p> Refresh Token: <pre><code>{{ .RefreshToken }}</code></pre></p>
60 |     <form action="{{ .RedirectURL }}" method="post">
61 |       <input type="hidden" name="refresh_token" value="{{ .RefreshToken }}">
62 |       <input type="submit" value="Redeem refresh token">
63 |     </form>
64 |     {{ end }}
65 |   </body>
66 | </html>
67 | `))
68 | 
69 | func renderToken(w http.ResponseWriter, redirectURL, idToken, accessToken, refreshToken, claims string) {
70 | 	renderTemplate(w, tokenTmpl, tokenTmplData{
71 | 		IDToken:      idToken,
72 | 		AccessToken:  accessToken,
73 | 		RefreshToken: refreshToken,
74 | 		RedirectURL:  redirectURL,
75 | 		Claims:       claims,
76 | 	})
77 | }
78 | 
79 | func renderTemplate(w http.ResponseWriter, tmpl *template.Template, data interface{}) {
80 | 	err := tmpl.Execute(w, data)
81 | 	if err == nil {
82 | 		return
83 | 	}
84 | 
85 | 	switch err := err.(type) {
86 | 	case *template.Error:
87 | 		// An ExecError guarantees that Execute has not written to the underlying reader.
88 | 		log.Printf("Error rendering template %s: %s", tmpl.Name(), err)
89 | 
90 | 		// TODO(ericchiang): replace with better internal server error.
91 | 		http.Error(w, "Internal server error", http.StatusInternalServerError)
92 | 	default:
93 | 		// An error with the underlying write, such as the connection being
94 | 		// dropped. Ignore for now.
95 | } 96 | } 97 | -------------------------------------------------------------------------------- /kind/kind.yaml: -------------------------------------------------------------------------------- 1 | kind: Cluster 2 | apiVersion: kind.x-k8s.io/v1alpha4 3 | # patch the generated kubeadm config with some extra settings 4 | kubeadmConfigPatches: 5 | - | 6 | apiVersion: kubeadm.k8s.io/v1beta2 7 | kind: ClusterConfiguration 8 | metadata: 9 | name: config 10 | apiServer: 11 | extraArgs: 12 | # dex will be deployed in `dex` namespace, exposed by `dex` svc 13 | oidc-issuer-url: https://dex.dex.svc:32000 14 | # the client-id that is inbuilt in the example-app 15 | oidc-client-id: example-app 16 | # the CA that we generated for Dex 17 | oidc-ca-file: /etc/ssl/certs/dex/ca.pem 18 | # email will be used as the claim 19 | oidc-username-claim: email 20 | oidc-groups-claim: groups 21 | nodes: 22 | - role: control-plane 23 | extraMounts: 24 | - hostPath: "${PROJECT_ROOT}/tls-setup/_certs" 25 | containerPath: /etc/ssl/certs/dex 26 | - role: worker 27 | -------------------------------------------------------------------------------- /ldap/ldap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: openldap 5 | labels: 6 | app.kubernetes.io/name: openldap 7 | spec: 8 | type: ClusterIP 9 | ports: 10 | - name: tcp-ldap 11 | port: 389 12 | targetPort: tcp-ldap 13 | selector: 14 | app.kubernetes.io/name: openldap 15 | --- 16 | apiVersion: apps/v1 17 | kind: Deployment 18 | metadata: 19 | name: openldap 20 | labels: 21 | app.kubernetes.io/name: openldap 22 | spec: 23 | selector: 24 | matchLabels: 25 | app.kubernetes.io/name: openldap 26 | replicas: 1 27 | template: 28 | metadata: 29 | labels: 30 | app.kubernetes.io/name: openldap 31 | spec: 32 | containers: 33 | - name: openldap 34 | image: osixia/openldap:1.5.0 35 | imagePullPolicy: "Always" 36 | env: 37 | - name: LDAP_ROOT 38 | value: 
"dc=example,dc=org" 39 | - name: LDAP_ADMIN_USERNAME 40 | value: "admin" 41 | - name: LDAP_ADMIN_PASSWORD 42 | valueFrom: 43 | secretKeyRef: 44 | key: adminpassword 45 | name: openldap 46 | # - name: LDAP_USERS 47 | # valueFrom: 48 | # secretKeyRef: 49 | # key: users 50 | # name: openldap 51 | # - name: LDAP_PASSWORDS 52 | # valueFrom: 53 | # secretKeyRef: 54 | # key: passwords 55 | # name: openldap 56 | - name: LDAP_CUSTOM_LDIF_DIR 57 | value: "/ldifs" 58 | ports: 59 | - name: tcp-ldap 60 | containerPort: 389 61 | volumeMounts: 62 | - name: ldap 63 | mountPath: /ldifs 64 | volumes: 65 | - name: ldap 66 | configMap: 67 | name: ldap -------------------------------------------------------------------------------- /ldap/ldif/0-ous.ldif: -------------------------------------------------------------------------------- 1 | dn: ou=people,dc=example,dc=org 2 | ou: people 3 | description: All people in organisation 4 | objectclass: organizationalunit 5 | 6 | dn: ou=groups,dc=example,dc=org 7 | objectClass: organizationalUnit 8 | ou: groups -------------------------------------------------------------------------------- /ldap/ldif/1-users.ldif: -------------------------------------------------------------------------------- 1 | # admin1 2 | dn: cn=admin1,ou=people,dc=example,dc=org 3 | objectClass: inetOrgPerson 4 | sn: admin1 5 | cn: admin1 6 | uid: admin1 7 | mail: admin1@example.org 8 | # secret, by: slappasswd -h {SSHA} -s secret 9 | userPassword: {SSHA}RRN6AM9u0tpTEOn6oBcIt9X3BbFPKVk5 10 | 11 | # admin2 12 | dn: cn=admin2,ou=people,dc=example,dc=org 13 | objectClass: inetOrgPerson 14 | sn: admin2 15 | cn: admin2 16 | uid: admin2 17 | mail: admin2@example.org 18 | # secret 19 | userPassword: {SSHA}RRN6AM9u0tpTEOn6oBcIt9X3BbFPKVk5 20 | 21 | # developer1 22 | dn: cn=developer1,ou=people,dc=example,dc=org 23 | objectClass: inetOrgPerson 24 | sn: developer1 25 | cn: developer1 26 | uid: developer1 27 | mail: developer1@example.org 28 | userPassword: 
{SSHA}RRN6AM9u0tpTEOn6oBcIt9X3BbFPKVk5 29 | 30 | # developer2 31 | dn: cn=developer2,ou=people,dc=example,dc=org 32 | objectClass: inetOrgPerson 33 | sn: developer2 34 | cn: developer2 35 | uid: developer2 36 | mail: developer2@example.org 37 | userPassword: {SSHA}RRN6AM9u0tpTEOn6oBcIt9X3BbFPKVk5 -------------------------------------------------------------------------------- /ldap/ldif/2-groups.ldif: -------------------------------------------------------------------------------- 1 | dn: cn=admins,ou=groups,dc=example,dc=org 2 | objectClass: groupOfNames 3 | cn: admins 4 | member: cn=admin1,ou=people,dc=example,dc=org 5 | member: cn=admin2,ou=people,dc=example,dc=org 6 | 7 | dn: cn=developers,ou=groups,dc=example,dc=org 8 | objectClass: groupOfNames 9 | cn: developers 10 | member: cn=developer1,ou=people,dc=example,dc=org 11 | member: cn=developer2,ou=people,dc=example,dc=org -------------------------------------------------------------------------------- /manifests/authorization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: Role 3 | metadata: 4 | namespace: dex 5 | name: read-pods 6 | rules: 7 | - apiGroups: [""] 8 | resources: ["pods"] 9 | verbs: ["get", "watch", "list"] 10 | --- 11 | apiVersion: rbac.authorization.k8s.io/v1 12 | kind: RoleBinding 13 | metadata: 14 | name: read-pods 15 | namespace: dex 16 | subjects: 17 | - kind: User 18 | name: admin1@example.org 19 | apiGroup: rbac.authorization.k8s.io 20 | roleRef: 21 | kind: Role 22 | name: read-pods 23 | apiGroup: rbac.authorization.k8s.io -------------------------------------------------------------------------------- /manifests/kube-apiserver.yaml: -------------------------------------------------------------------------------- 1 | # a sample kube-apiserver.yaml manifest when external oidc is configured 2 | apiVersion: v1 3 | kind: Pod 4 | metadata: 5 | annotations: 6 | 
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.18.0.3:6443 7 | creationTimestamp: null 8 | labels: 9 | component: kube-apiserver 10 | tier: control-plane 11 | name: kube-apiserver 12 | namespace: kube-system 13 | spec: 14 | containers: 15 | - command: 16 | - kube-apiserver 17 | - --advertise-address=172.18.0.3 18 | - --allow-privileged=true 19 | - --authorization-mode=Node,RBAC 20 | - --client-ca-file=/etc/kubernetes/pki/ca.crt 21 | - --enable-admission-plugins=NodeRestriction 22 | - --enable-bootstrap-token-auth=true 23 | - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt 24 | - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt 25 | - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key 26 | - --etcd-servers=https://127.0.0.1:2379 27 | - --insecure-port=0 28 | - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt 29 | - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key 30 | - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname 31 | - --oidc-ca-file=/etc/ssl/certs/dex/ca.pem 32 | - --oidc-client-id=example-app 33 | - --oidc-groups-claim=groups 34 | - --oidc-issuer-url=https://dex.dex.svc:32000 35 | - --oidc-username-claim=email 36 | - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt 37 | - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key 38 | - --requestheader-allowed-names=front-proxy-client 39 | - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt 40 | - --requestheader-extra-headers-prefix=X-Remote-Extra- 41 | - --requestheader-group-headers=X-Remote-Group 42 | - --requestheader-username-headers=X-Remote-User 43 | - --runtime-config= 44 | - --secure-port=6443 45 | - --service-account-key-file=/etc/kubernetes/pki/sa.pub 46 | - --service-cluster-ip-range=10.96.0.0/16 47 | - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt 48 | - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key 49 | image: 
k8s.gcr.io/kube-apiserver:v1.19.1 50 | imagePullPolicy: IfNotPresent 51 | livenessProbe: 52 | failureThreshold: 8 53 | httpGet: 54 | host: 172.18.0.3 55 | path: /livez 56 | port: 6443 57 | scheme: HTTPS 58 | initialDelaySeconds: 10 59 | periodSeconds: 10 60 | timeoutSeconds: 15 61 | name: kube-apiserver 62 | readinessProbe: 63 | failureThreshold: 3 64 | httpGet: 65 | host: 172.18.0.3 66 | path: /readyz 67 | port: 6443 68 | scheme: HTTPS 69 | periodSeconds: 1 70 | timeoutSeconds: 15 71 | resources: 72 | requests: 73 | cpu: 250m 74 | startupProbe: 75 | failureThreshold: 24 76 | httpGet: 77 | host: 172.18.0.3 78 | path: /livez 79 | port: 6443 80 | scheme: HTTPS 81 | initialDelaySeconds: 10 82 | periodSeconds: 10 83 | timeoutSeconds: 15 84 | volumeMounts: 85 | - mountPath: /etc/ssl/certs 86 | name: ca-certs 87 | readOnly: true 88 | - mountPath: /etc/ca-certificates 89 | name: etc-ca-certificates 90 | readOnly: true 91 | - mountPath: /etc/kubernetes/pki 92 | name: k8s-certs 93 | readOnly: true 94 | - mountPath: /usr/local/share/ca-certificates 95 | name: usr-local-share-ca-certificates 96 | readOnly: true 97 | - mountPath: /usr/share/ca-certificates 98 | name: usr-share-ca-certificates 99 | readOnly: true 100 | hostNetwork: true 101 | priorityClassName: system-node-critical 102 | volumes: 103 | - hostPath: 104 | path: /etc/ssl/certs 105 | type: DirectoryOrCreate 106 | name: ca-certs 107 | - hostPath: 108 | path: /etc/ca-certificates 109 | type: DirectoryOrCreate 110 | name: etc-ca-certificates 111 | - hostPath: 112 | path: /etc/kubernetes/pki 113 | type: DirectoryOrCreate 114 | name: k8s-certs 115 | - hostPath: 116 | path: /usr/local/share/ca-certificates 117 | type: DirectoryOrCreate 118 | name: usr-local-share-ca-certificates 119 | - hostPath: 120 | path: /usr/share/ca-certificates 121 | type: DirectoryOrCreate 122 | name: usr-share-ca-certificates 123 | status: {} -------------------------------------------------------------------------------- /misc/architecture.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/brightzheng100/kubernetes-dex-ldap-integration/e0dbbec574cd03e42a70d1ca7a5421a942f095de/misc/architecture.png -------------------------------------------------------------------------------- /misc/screenshots-with-kubelogin.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/brightzheng100/kubernetes-dex-ldap-integration/e0dbbec574cd03e42a70d1ca7a5421a942f095de/misc/screenshots-with-kubelogin.png -------------------------------------------------------------------------------- /misc/screenshots.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/brightzheng100/kubernetes-dex-ldap-integration/e0dbbec574cd03e42a70d1ca7a5421a942f095de/misc/screenshots.png -------------------------------------------------------------------------------- /setup.sh: -------------------------------------------------------------------------------- 1 | 2 | function log { 3 | echo "$(date +"%Y-%m-%d %H:%M:%S %Z"): $@" 4 | } 5 | 6 | function logn { 7 | echo -n "$(date +"%Y-%m-%d %H:%M:%S %Z"): $@" 8 | } 9 | 10 | function is_required_tool_missed { 11 | logn "--> Checking required tool: $1 ... " 12 | if [ -x "$(command -v $1)" ]; then 13 | echo "installed" 14 | false 15 | else 16 | echo "NOT installed" 17 | true 18 | fi 19 | } 20 | 21 | 22 | # Firstly, let's do a quick check for required tools 23 | missed_tools=0 24 | log "Firstly, let's do a quick check for required tools ..." 
25 | # check docker 26 | if is_required_tool_missed "docker"; then missed_tools=$((missed_tools+1)); fi 27 | # check git 28 | if is_required_tool_missed "git"; then missed_tools=$((missed_tools+1)); fi 29 | # check cfssl 30 | if is_required_tool_missed "cfssl"; then missed_tools=$((missed_tools+1)); fi 31 | # check cfssljson 32 | if is_required_tool_missed "cfssljson"; then missed_tools=$((missed_tools+1)); fi 33 | # check kind 34 | if is_required_tool_missed "kind"; then missed_tools=$((missed_tools+1)); fi 35 | # check kubectl 36 | if is_required_tool_missed "kubectl"; then missed_tools=$((missed_tools+1)); fi 37 | # final check 38 | if [[ $missed_tools > 0 ]]; then 39 | log "Abort! There are some required tools missing, please have a check." 40 | exit 98 41 | fi 42 | 43 | 44 | # Generating TLS for both Kubernetes and Dex 45 | log "Generating TLS for both Kubernetes and Dex ..." 46 | pushd tls-setup 47 | make ca req-dex req-k8s 48 | popd 49 | 50 | 51 | # Creating Kubernetes cluster with API Server configured 52 | log "Creating Kubernetes cluster with API Server configured ..." 53 | PROJECT_ROOT="$(pwd)" envsubst < kind/kind.yaml | kind create cluster --name dex-ldap-cluster --config - 54 | 55 | 56 | # Deploying OpenLDAP in namespace 'ldap' as the LDAP Server 57 | log "Deploying OpenLDAP in namespace 'ldap' as the LDAP Server ..." 58 | kubectl create ns ldap 59 | kubectl create secret generic openldap \ 60 | --namespace ldap \ 61 | --from-literal=adminpassword=adminpassword 62 | kubectl create configmap ldap \ 63 | --namespace ldap \ 64 | --from-file=ldap/ldif 65 | kubectl apply --namespace ldap -f ldap/ldap.yaml 66 | kubectl wait --namespace ldap --for=condition=ready pod -l app.kubernetes.io/name=openldap 67 | 68 | 69 | # Initializing some dummy LDAP entities 70 | log "Initializing some dummy LDAP entities ..." 
71 | sleep 5 72 | LDAP_POD=$(kubectl -n ldap get pod -l "app.kubernetes.io/name=openldap" -o jsonpath="{.items[0].metadata.name}") 73 | kubectl -n ldap exec $LDAP_POD -- ldapadd -x -D "cn=admin,dc=example,dc=org" -w adminpassword -H ldap://localhost:389 -f /ldifs/0-ous.ldif 74 | kubectl -n ldap exec $LDAP_POD -- ldapadd -x -D "cn=admin,dc=example,dc=org" -w adminpassword -H ldap://localhost:389 -f /ldifs/1-users.ldif 75 | kubectl -n ldap exec $LDAP_POD -- ldapadd -x -D "cn=admin,dc=example,dc=org" -w adminpassword -H ldap://localhost:389 -f /ldifs/2-groups.ldif 76 | # List down the entities loaded 77 | kubectl -n ldap exec $LDAP_POD -- \ 78 | ldapsearch -LLL -x -H ldap://localhost:389 -D "cn=admin,dc=example,dc=org" -w adminpassword -b "ou=people,dc=example,dc=org" dn 79 | 80 | 81 | # Deploying Dex in namespace 'dex' 82 | log "Deploying Dex in namespace 'dex' ..." 83 | kubectl create ns dex 84 | kubectl create secret tls dex-tls \ 85 | --namespace dex \ 86 | --cert=tls-setup/_certs/dex.pem \ 87 | --key=tls-setup/_certs/dex-key.pem 88 | kubectl apply --namespace dex -f dex/dex.yaml 89 | kubectl wait --namespace dex --for=condition=ready pod -l app=dex 90 | 91 | 92 | # Creating a proxy to access Dex directly from laptop 93 | log "Creating a proxy to access Dex directly from laptop ..." 
94 | SVC_PORT="$(kubectl get -n dex svc/dex -o json | jq '.spec.ports[0].nodePort')"
95 | docker run -d --restart always \
96 |   --name dex-kind-proxy-$SVC_PORT \
97 |   --publish 127.0.0.1:$SVC_PORT:$SVC_PORT \
98 |   --link dex-ldap-cluster-control-plane:target \
99 |   --network kind \
100 |   alpine/socat -dd \
101 |   tcp-listen:$SVC_PORT,fork,reuseaddr tcp-connect:target:$SVC_PORT
102 |
--------------------------------------------------------------------------------
/tls-setup/Makefile:
--------------------------------------------------------------------------------
1 | .PHONY: cfssl cfssljson ca req-dex req-k8s clean
2 |
3 | all: cfssl cfssljson ca req-dex req-k8s
4 |
5 | cfssl:
6 | 	go install github.com/cloudflare/cfssl/cmd/cfssl@v1.6.4
7 |
8 | cfssljson:
9 | 	go install github.com/cloudflare/cfssl/cmd/cfssljson@v1.6.4
10 |
11 | ca: cfssl cfssljson
12 | 	mkdir -p _certs
13 | 	cfssl gencert -initca ca-csr.json | cfssljson -bare _certs/ca
14 |
15 | req-dex: cfssl cfssljson
16 | 	cfssl gencert \
17 | 	  -ca _certs/ca.pem \
18 | 	  -ca-key _certs/ca-key.pem \
19 | 	  -config ca-config.json \
20 | 	  req-csr-dex.json | cfssljson -bare _certs/dex
21 |
22 | req-k8s: cfssl cfssljson
23 | 	cfssl gencert \
24 | 	  -ca _certs/ca.pem \
25 | 	  -ca-key _certs/ca-key.pem \
26 | 	  -config ca-config.json \
27 | 	  req-csr-k8s.json | cfssljson -bare _certs/k8s
28 |
29 | clean:
30 | 	rm -rf _certs
--------------------------------------------------------------------------------
/tls-setup/ca-config.json:
--------------------------------------------------------------------------------
1 | {
2 |   "signing": {
3 |     "default": {
4 |       "usages": [
5 |         "signing",
6 |         "key encipherment",
7 |         "server auth",
8 |         "client auth"
9 |       ],
10 |       "expiry": "8760h"
11 |     }
12 |   }
13 | }
--------------------------------------------------------------------------------
/tls-setup/ca-csr.json:
--------------------------------------------------------------------------------
1 | {
2 |   "CN": "Autogenerated CA",
3 |   "key": {
4 |     "algo": "ecdsa",
5 |     "size": 384
6 |   },
7 |   "names": [
8 |     {
9 |       "O": "Honest Achmed's Used Certificates",
10 |       "OU": "Hastily-Generated Values Division",
11 |       "L": "San Francisco",
12 |       "ST": "California",
13 |       "C": "US"
14 |     }
15 |   ]
16 | }
--------------------------------------------------------------------------------
/tls-setup/req-csr-dex.json:
--------------------------------------------------------------------------------
1 | {
2 |   "CN": "dex",
3 |   "hosts": [
4 |     "localhost",
5 |     "127.0.0.1",
6 |     "dex.dex.svc"
7 |   ],
8 |   "key": {
9 |     "algo": "ecdsa",
10 |     "size": 384
11 |   },
12 |   "names": [
13 |     {
14 |       "O": "autogenerated",
15 |       "OU": "dex server",
16 |       "L": "the internet"
17 |     }
18 |   ]
19 | }
--------------------------------------------------------------------------------
/tls-setup/req-csr-k8s.json:
--------------------------------------------------------------------------------
1 | {
2 |   "CN": "kube-apiserver",
3 |   "hosts": [
4 |     "localhost",
5 |     "127.0.0.1"
6 |   ],
7 |   "key": {
8 |     "algo": "ecdsa",
9 |     "size": 384
10 |   },
11 |   "names": [
12 |     {
13 |       "O": "autogenerated",
14 |       "OU": "kube-apiserver",
15 |       "L": "the internet"
16 |     }
17 |   ]
18 | }
--------------------------------------------------------------------------------