├── README.md ├── authorization └── ocp-policy-controller.kubeconfig ├── charts └── open-policy-agent │ ├── Chart.yaml │ ├── README.md │ ├── templates │ ├── _helpers.tpl │ ├── opa.yaml │ ├── rbac.yaml │ ├── rules.yaml │ └── webhook.yaml │ └── values.yaml └── examples ├── authorization-webhooks ├── unreadable_secret_test.rego └── unreadable_secrets.rego ├── kubernetes ├── matches.rego └── policymatches.rego ├── mutating-admission-webhooks ├── no_serviceaccount_secret.rego ├── no_serviceaccount_secret_test.rego └── no_serviceaccount_secret_test.yaml └── validating-admission-webhook ├── cmdb_integration.rego ├── cmdb_integration_test.rego ├── cmdb_integration_test.yaml ├── latest_and_IfNotPresent.rego ├── latest_and_IfNotPresent_test.rego ├── latest_and_IfNotPresent_test.yaml ├── loadbalancer_quota.rego ├── loadbalancer_quota_test.rego ├── loadbalancer_quota_test1.yaml ├── loadbalancer_quota_test2.yaml ├── software_license.rego ├── software_license_test.rego ├── software_license_test1.yaml └── software_license_test2.yaml /README.md: -------------------------------------------------------------------------------- 1 | # Open Policy Agent 2 | 3 | ## Install 4 | 5 | Instructions to deploy on OpenShift: 6 | 7 | Add the following fragment to your `master-config.yaml` file under the `admissionConfig` -> `pluginConfig` section: 8 | 9 | ```yaml 10 | MutatingAdmissionWebhook: 11 | configuration: 12 | apiVersion: apiserver.config.k8s.io/v1alpha1 13 | kind: WebhookAdmission 14 | kubeConfigFile: /dev/null 15 | ValidatingAdmissionWebhook: 16 | configuration: 17 | apiVersion: apiserver.config.k8s.io/v1alpha1 18 | kind: WebhookAdmission 19 | kubeConfigFile: /dev/null 20 | ``` 21 | 22 | Identify the value of the OpenShift service caBundle. 
23 | One way to do this is to run: 24 | 25 | ```shell 26 | SECRET=$(oc describe sa default -n default | grep 'Tokens:' | awk '{print $2}') 27 | CA_BUNDLE=$(oc get secret $SECRET -n default -o "jsonpath={.data['service-ca\.crt']}") 28 | ``` 29 | 30 | Deploy the helm chart: 31 | 32 | ```shell 33 | oc new-project opa 34 | helm template ./charts/open-policy-agent --namespace opa --set kubernetes_policy_controller.image_tag=2.0 --set kubernetes_policy_controller.image=quay.io/raffaelespazzoli/kubernetes-policy-controller --set caBundle=$CA_BUNDLE --set log_level=debug | oc apply -f - -n opa 35 | ``` 36 | 37 | This configuration will enforce rules only on namespaces carrying the label `opa-controlled=true`. This is done to have a "safe" deployment. You can easily customize the helm template to change this rule. 38 | 39 | ### Enable authorization 40 | 41 | If you want to enable authorization, you need to do the following: 42 | 43 | 1. Copy the ocp-policy-controller.kubeconfig file to the `/etc/origin/master` directory on each of your masters. 44 | 2. Edit the master-config.yaml file, adding the following: 45 | 46 | ```yaml 47 | kubernetesMasterConfig: 48 | ... 49 | apiServerArguments: 50 | ... 51 | authorization-mode: 53 | - Node 54 | - Webhook 55 | - RBAC 56 | authorization-webhook-config-file: 57 | - /etc/origin/master/ocp-policy-controller.kubeconfig 58 | ``` 59 | 60 | These steps are intentionally left manual because they differ significantly between the 3.x and 4.x versions of OCP. 
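Once the webhook authorizer is wired in, the API server sends `SubjectAccessReview` objects to OPA for evaluation. The bundled example policy (`examples/authorization-webhooks/unreadable_secrets.rego`) denies members of `cluster-admin` `get` access to secrets outside administrative (`openshift-*`/`kube-*`) namespaces. A minimal Python sketch of that decision logic (the function name is ours, purely illustrative):

```python
import re

def deny_secret_read(review: dict) -> bool:
    """Sketch of unreadable_secrets.rego: deny members of cluster-admin
    'get' access to secrets outside administrative namespaces."""
    attrs = review["spec"]["resourceAttributes"]
    if attrs.get("resource") != "secrets" or attrs.get("verb") != "get":
        return False
    if "cluster-admin" not in review["spec"].get("group", []):
        return False
    # openshift-* and kube-* namespaces remain readable.
    return not re.match(r"^(openshift-|kube-)", attrs["namespace"])

# A cluster-admin reading a secret in an ordinary project namespace:
admin_read = {"spec": {"resourceAttributes": {
                  "namespace": "myproject", "verb": "get",
                  "resource": "secrets", "name": "ciao"},
              "user": "admin", "group": ["cluster-admin"]}}
print(deny_secret_read(admin_read))  # → True (request denied)
```

The same request against `kube-system` would return `False`, matching the regex carve-out in the Rego rule.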
61 | 62 | ## Examples 63 | 64 | ### No IfNotPresent image pull policy and latest images 65 | 66 | This rule will prevent users from deploying images that combine the IfNotPresent image pull policy with the latest image tag. 67 | 68 | Run the following command to deploy the rule: 69 | 70 | ```shell 71 | oc create configmap no-ifnotpresent-latest-rule --from-file=./examples/validating-admission-webhook/latest_and_IfNotPresent.rego -n opa 72 | ``` 73 | 74 | Once the rule is deployed, run the following: 75 | 76 | ```shell 77 | oc new-project ifnotpresent-latest-opa-test 78 | oc label ns ifnotpresent-latest-opa-test opa-controlled=true 79 | oc apply -f ./examples/validating-admission-webhook/latest_and_IfNotPresent_test.yaml -n ifnotpresent-latest-opa-test 80 | ``` 81 | 82 | You should get an error. 83 | 84 | To clean up, run the following: 85 | 86 | ```shell 87 | oc delete project ifnotpresent-latest-opa-test 88 | oc delete configmap no-ifnotpresent-latest-rule -n opa 89 | ``` 90 | 91 | ### Quota on LoadBalancer service types 92 | 93 | LoadBalancer-type services are billable resources in cloud deployments, so it might be a good idea to put a quota on them. 94 | In this example the quota is 2 per namespace. 95 | 96 | Run the following command to deploy the rule: 
97 | 98 | ```shell 99 | oc create configmap loadbalancer-quota-rule --from-file=./examples/validating-admission-webhook/loadbalancer_quota.rego -n opa 100 | ``` 101 | 102 | Once the rule is deployed, run the following: 103 | 104 | ```shell 105 | oc new-project loadbalancer-quota-opa-test 106 | oc label ns loadbalancer-quota-opa-test opa-controlled=true 107 | oc apply -f ./examples/validating-admission-webhook/loadbalancer_quota_test1.yaml -n loadbalancer-quota-opa-test 108 | ``` 109 | 110 | Wait a few seconds for OPA to catch up with the cluster status, then type: 111 | 112 | ```shell 113 | oc apply -f ./examples/validating-admission-webhook/loadbalancer_quota_test2.yaml -n loadbalancer-quota-opa-test 114 | ``` 115 | 116 | You should get an error. 117 | 118 | To clean up, run the following: 119 | 120 | ```shell 121 | oc delete project loadbalancer-quota-opa-test 122 | oc delete configmap loadbalancer-quota-rule -n opa 123 | ``` 124 | 125 | ### CMDB Integration 126 | 127 | Sometimes apps deployed in OpenShift need to be traceable back to a CMDB. You can do that with labels. This rule enforces that the following labels are defined: 128 | 129 | - cmdb_id 130 | - emergency_contact 131 | - tier 132 | 133 | Run the following command to deploy the rule: 134 | 135 | ```shell 136 | oc create configmap cmdb-integration-rule --from-file=./examples/validating-admission-webhook/cmdb_integration.rego -n opa 137 | ``` 138 | 139 | Once the rule is deployed, run the following: 140 | 141 | ```shell 142 | oc new-project cmdb-integration-test 143 | oc label ns cmdb-integration-test opa-controlled=true 144 | oc apply -f ./examples/validating-admission-webhook/cmdb_integration_test.yaml -n cmdb-integration-test 145 | ``` 146 | 147 | You should get an error. 
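The error comes from the label check: the rule walks the list of required labels and denies the deployment if any one of them is missing from its metadata. A minimal Python sketch of the same check (the helper name is ours, illustrative only):

```python
REQUIRED_LABELS = ("cmdb_id", "emergency_contact", "tier")

def missing_cmdb_labels(deployment: dict) -> list:
    """Return the required CMDB labels absent from a deployment's metadata."""
    labels = deployment.get("metadata", {}).get("labels", {})
    return [label for label in REQUIRED_LABELS if label not in labels]

# A deployment carrying only two of the three required labels:
deploy = {"metadata": {"labels": {"cmdb_id": "123", "tier": "gold"}}}
print(missing_cmdb_labels(deploy))  # → ['emergency_contact']
```

A non-empty result corresponds to the Rego rule firing and the admission request being rejected.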
148 | 149 | To clean up, run the following: 150 | 151 | ```shell 152 | oc delete project cmdb-integration-test 153 | oc delete configmap cmdb-integration-rule -n opa 154 | ``` 155 | 156 | ### Enforcing software licenses 157 | 158 | Sometimes software licenses can be tied to a measurable dimension. In this case we can write policies that ensure we don't go over a specific limit within the cluster (in a way, this is a cluster-wide quota). 159 | In this example we use the CPU request, and we assume the software is licensed for 500 CPUs. 160 | 161 | Run the following command to deploy the rule: 162 | 163 | ```shell 164 | oc create configmap software-license-rule --from-file=./examples/validating-admission-webhook/software_license.rego -n opa 165 | ``` 166 | 167 | Once the rule is deployed, run the following: 168 | 169 | ```shell 170 | oc new-project software-license-test 171 | oc label ns software-license-test opa-controlled=true 172 | oc apply -f ./examples/validating-admission-webhook/software_license_test1.yaml -n software-license-test 173 | ``` 174 | 175 | Wait a few seconds for OPA to sync, then type: 176 | 177 | ```shell 178 | oc apply -f ./examples/validating-admission-webhook/software_license_test2.yaml -n software-license-test 179 | ``` 180 | 181 | You should get an error. 182 | 183 | To clean up, run the following: 184 | 185 | ```shell 186 | oc delete project software-license-test 187 | oc delete configmap software-license-rule -n opa 188 | ``` 189 | 190 | ### Preventing mounting of the service account secret 191 | 192 | Arguably, the service account secret should not be mounted by default. To flip the default behavior, we can add an annotation to request that the service account secret be mounted (`requires-service-account-secret`). Then we can create a mutating admission rule that removes the service account secret if that annotation is not set. 193 | 194 | Run the following command to deploy the rule: 
195 | 196 | ```shell 197 | oc create configmap no-serviceaccount-secret-rule --from-file=./examples/mutating-admission-webhooks/no_serviceaccount_secret.rego -n opa 198 | ``` 199 | 200 | Once the rule is deployed, run the following: 201 | 202 | ```shell 203 | oc new-project no-serviceaccount-secret-test 204 | oc label ns no-serviceaccount-secret-test opa-controlled=true 205 | oc apply -f ./examples/mutating-admission-webhooks/no_serviceaccount_secret_test.yaml -n no-serviceaccount-secret-test 206 | ``` 207 | 208 | Check that the pod did not mount a volume: 209 | 210 | ```shell 211 | oc get pod busybox -n no-serviceaccount-secret-test -o yaml | grep -A 4 volumeMount 212 | ``` 213 | 214 | The output should be empty. 215 | 216 | To clean up, run the following: 217 | 218 | ```shell 219 | oc delete project no-serviceaccount-secret-test 220 | oc delete configmap no-serviceaccount-secret-rule -n opa 221 | ``` -------------------------------------------------------------------------------- /authorization/ocp-policy-controller.kubeconfig: -------------------------------------------------------------------------------- 1 | # Kubernetes API version 2 | apiVersion: v1 3 | # kind of the API object 4 | kind: Config 5 | # clusters refers to the remote service. 6 | clusters: 7 | - name: opa-server 8 | cluster: 9 | # CA for verifying the remote service. 10 | certificate-authority: /etc/origin/master/ca-bundle.crt 11 | # URL of remote service to query. Must use 'https'. May not include parameters. 12 | server: https://opa.opa.svc 13 | 14 | # users refers to the API Server's webhook configuration. 15 | users: 16 | - name: opa-user 17 | user: 18 | client-certificate: /etc/origin/master/master.kubelet-client.crt # cert for the webhook plugin to use 19 | client-key: /etc/origin/master/master.kubelet-client.key # key matching the cert 20 | 21 | # kubeconfig files require a context. Provide one for the API Server. 
22 | current-context: opa-webhook 23 | contexts: 24 | - context: 25 | cluster: opa-server 26 | user: opa-user 27 | name: opa-webhook -------------------------------------------------------------------------------- /charts/open-policy-agent/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | name: open-policy-agent 3 | version: 0.0.1 4 | description: Helm chart to deploy Open Policy Agent 5 | sources: 6 | - https://www.openpolicyagent.org 7 | engine: gotpl 8 | 9 | -------------------------------------------------------------------------------- /charts/open-policy-agent/README.md: -------------------------------------------------------------------------------- 1 | # OPA 2 | 3 | This chart is for deploying OPA. 4 | -------------------------------------------------------------------------------- /charts/open-policy-agent/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "istio.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | */}} 13 | {{- define "istio.fullname" -}} 14 | {{- $name := default .Chart.Name .Values.nameOverride -}} 15 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 16 | {{- end -}} 17 | 18 | {{/* 19 | Create a fully qualified configmap name. 20 | */}} 21 | {{- define "istio.configmap.fullname" -}} 22 | {{- printf "%s-%s" .Release.Name "istio-mesh-config" | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | 25 | {{/* 26 | Configmap checksum. 
27 | */}} 28 | {{- define "istio.configmap.checksum" -}} 29 | {{- print $.Template.BasePath "/configmap.yaml" | sha256sum -}} 30 | {{- end -}} 31 | -------------------------------------------------------------------------------- /charts/open-policy-agent/templates/opa.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: Service 3 | apiVersion: v1 4 | metadata: 5 | name: opa 6 | namespace: {{ .Release.Namespace }} 7 | annotations: 8 | service.alpha.openshift.io/serving-cert-secret-name: opa-server 9 | spec: 10 | selector: 11 | app: opa 12 | ports: 13 | - name: https 14 | protocol: TCP 15 | port: 443 16 | targetPort: 8443 17 | --- 18 | apiVersion: extensions/v1beta1 19 | kind: Deployment 20 | metadata: 21 | labels: 22 | app: opa 23 | namespace: {{ .Release.Namespace }} 24 | name: opa 25 | spec: 26 | replicas: 1 27 | selector: 28 | matchLabels: 29 | app: opa 30 | template: 31 | metadata: 32 | labels: 33 | app: opa 34 | name: opa 35 | spec: 36 | containers: 37 | # WARNING: OPA is NOT running with an authorization policy configured. This 38 | # means that clients can read and write policies in OPA. If you are 39 | # deploying OPA in an insecure environment, be sure to configure 40 | # authentication and authorization on the daemon. See the Security page for 41 | # details: https://www.openpolicyagent.org/docs/security.html. 
42 | - name: opa 43 | image: {{ .Values.opa.image }}:{{ .Values.opa.image_tag }} 44 | imagePullPolicy: {{ .Values.opa.image_pull_policy }} 45 | args: 46 | - "run" 47 | - "--server" 48 | - --addr={{ .Values.opa_local_address }} 49 | - --log-level={{ .Values.log_level }} 50 | - name: kube-mgmt 51 | image: {{ .Values.kube_mgmt.image }}:{{ .Values.kube_mgmt.image_tag }} 52 | imagePullPolicy: {{ .Values.kube_mgmt.image_pull_policy }} 53 | args: 54 | - "--replicate-cluster=v1/namespaces" 55 | - "--replicate=extensions/v1beta1/ingresses" 56 | - "--replicate=apps/v1/deployments" 57 | - "--replicate=v1/services" 58 | - "--replicate=v1/pods" 59 | - --register-admission-controller=false 60 | - --opa-url={{ .Values.opa_local_address }}/v1 61 | - --register-admission-controller=false 62 | - name: controller 63 | image: {{ .Values.kubernetes_policy_controller.image }}:{{ .Values.kubernetes_policy_controller.image_tag }} 64 | imagePullPolicy: {{ .Values.kubernetes_policy_controller.image_pull_policy }} 65 | args: 66 | - "--addr=https://0.0.0.0:8443" 67 | - "--addr=http://127.0.0.1:7925" 68 | - --opa-url={{ .Values.opa_local_address }}/v1 69 | - --log-level={{ .Values.log_level }} 70 | - "--tls-cert-file=/certs/tls.crt" 71 | - "--tls-private-key-file=/certs/tls.key" 72 | volumeMounts: 73 | - readOnly: true 74 | mountPath: /certs 75 | name: opa-server 76 | volumes: 77 | - name: opa-server 78 | secret: 79 | secretName: opa-server 80 | # --- 81 | # kind: ConfigMap 82 | # apiVersion: v1 83 | # metadata: 84 | # name: opa-default-system-main 85 | # namespace: {{ .Release.Namespace }} 86 | # data: 87 | # main: | 88 | # package system 89 | 90 | # import data.kubernetes.admission 91 | 92 | # main = { 93 | # "apiVersion": "admission.k8s.io/v1beta1", 94 | # "kind": "AdmissionReview", 95 | # "response": response, 96 | # } 97 | 98 | # default response = {"allowed": true} 99 | 100 | # response = { 101 | # "allowed": false, 102 | # "status": { 103 | # "reason": reason, 104 | # }, 105 | # } { 106 
| # reason = concat(", ", admission.deny) 107 | # reason != "" 108 | # } -------------------------------------------------------------------------------- /charts/open-policy-agent/templates/rbac.yaml: -------------------------------------------------------------------------------- 1 | # Grant OPA/kube-mgmt read-only access to resources. This let's kube-mgmt 2 | # replicate resources into OPA so they can be used in policies. 3 | kind: ClusterRoleBinding 4 | apiVersion: rbac.authorization.k8s.io/v1 5 | metadata: 6 | name: opa-viewer 7 | roleRef: 8 | kind: ClusterRole 9 | name: view 10 | apiGroup: rbac.authorization.k8s.io 11 | subjects: 12 | - kind: Group 13 | name: system:serviceaccounts:{{ .Release.Namespace }} 14 | apiGroup: rbac.authorization.k8s.io 15 | --- 16 | # Define role for OPA/kube-mgmt to update configmaps with policy status. 17 | kind: Role 18 | apiVersion: rbac.authorization.k8s.io/v1 19 | metadata: 20 | namespace: {{ .Release.Namespace }} 21 | name: configmap-modifier 22 | rules: 23 | - apiGroups: [""] 24 | resources: ["configmaps"] 25 | verbs: ["update", "patch"] 26 | --- 27 | # Grant OPA/kube-mgmt role defined above. 
28 | kind: RoleBinding 29 | apiVersion: rbac.authorization.k8s.io/v1 30 | metadata: 31 | namespace: {{ .Release.Namespace }} 32 | name: opa-configmap-modifier 33 | roleRef: 34 | kind: Role 35 | name: configmap-modifier 36 | apiGroup: rbac.authorization.k8s.io 37 | subjects: 38 | - kind: Group 39 | name: system:serviceaccounts:{{ .Release.Namespace }} 40 | apiGroup: rbac.authorization.k8s.io 41 | 42 | -------------------------------------------------------------------------------- /charts/open-policy-agent/templates/rules.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: ConfigMap 3 | apiVersion: v1 4 | metadata: 5 | name: kubernetes-matches 6 | namespace: {{ .Release.Namespace }} 7 | data: 8 | matches.rego: | 9 | package k8s 10 | import data.kubernetes 11 | 12 | matches[[kind, namespace, name, resource]] { 13 | resource := kubernetes[kind][namespace][name].object 14 | } 15 | 16 | matches[[kind, namespace, name, resource]] { 17 | resource := kubernetes[kind][namespace][name] 18 | } 19 | policy-matches.rego: | 20 | package k8s 21 | import data.kubernetes.policies 22 | 23 | # Matches provides an abstraction to find policies that match the (name). 
24 | policymatches[[name, policy]] { 25 | policy := policies[name] 26 | } -------------------------------------------------------------------------------- /charts/open-policy-agent/templates/webhook.yaml: -------------------------------------------------------------------------------- 1 | kind: ValidatingWebhookConfiguration 2 | apiVersion: admissionregistration.k8s.io/v1beta1 3 | metadata: 4 | name: opa-validating-webhook 5 | webhooks: 6 | - name: validating-webhook.openpolicyagent.org 7 | rules: 8 | - operations: ["CREATE", "UPDATE"] 9 | apiGroups: ["*"] 10 | apiVersions: ["*"] 11 | resources: ["*"] 12 | clientConfig: 13 | caBundle: {{ .Values.caBundle }} 14 | service: 15 | namespace: {{ .Release.Namespace }} 16 | name: opa 17 | path: "/v1/admit" 18 | failurePolicy: Ignore 19 | namespaceSelector: 20 | matchLabels: 21 | opa-controlled: 'true' 22 | --- 23 | kind: MutatingWebhookConfiguration 24 | apiVersion: admissionregistration.k8s.io/v1beta1 25 | metadata: 26 | name: opa-mutating-webhook 27 | webhooks: 28 | - name: mutating-webhook.openpolicyagent.org 29 | rules: 30 | - operations: ["CREATE", "UPDATE"] 31 | apiGroups: ["*"] 32 | apiVersions: ["*"] 33 | resources: ["*"] 34 | clientConfig: 35 | caBundle: {{ .Values.caBundle }} 36 | service: 37 | namespace: {{ .Release.Namespace }} 38 | name: opa 39 | path: "/v1/admit" 40 | failurePolicy: Ignore 41 | namespaceSelector: 42 | matchLabels: 43 | opa-controlled: 'true' -------------------------------------------------------------------------------- /charts/open-policy-agent/values.yaml: -------------------------------------------------------------------------------- 1 | opa_local_address: http://localhost:8181 2 | log_level: info 3 | 4 | opa: 5 | image: openpolicyagent/opa 6 | image_tag: 0.10.2 7 | image_pull_policy: IfNotPresent 8 | 9 | kube_mgmt: 10 | image: openpolicyagent/kube-mgmt 11 | image_tag: 0.6 12 | image_pull_policy: IfNotPresent 13 | 14 | kubernetes_policy_controller: 15 | image: 
nikhilbh/kubernetes-policy-controller 16 | image_tag: 1.2 17 | image_pull_policy: IfNotPresent -------------------------------------------------------------------------------- /examples/authorization-webhooks/unreadable_secret_test.rego: -------------------------------------------------------------------------------- 1 | package authorization 2 | 3 | test_deny_admin { 4 | result := deny[{"id": id, "resource": {"kind": "secrets", "namespace": "secret_namespace", "name": "ciao"}, "resolution": resolution}] with data.kubernetes.secrets.secret_namespace.ciao as { 5 | "kind": "SubjectAccessReview", 6 | "apiVersion": "authorization.k8s.io/v1beta1", 7 | "spec": { 8 | "resourceAttributes": { 9 | "namespace": "secret_namespace", 10 | "verb": "get", 11 | "resource": "secrets", 12 | "name": "ciao" 13 | }, 14 | "user": "admin", 15 | "group": ["cluster-admin"], 16 | }, 17 | } 18 | result[_].message = "cluster administrator are not allowed to read secrets in non-administrative namespaces" 19 | } 20 | 21 | test_allow_user { 22 | not deny[{"id": "unreadable-secret", "resource": {"kind": "secrets", "namespace": "secret_namespace", "name": "ciao"}, "resolution": {"message": "cluster administrator are not allowed to read secrets in non-administrative namespaces"}}] with data.kubernetes.secrets.secret_namespace.ciao as { 23 | "kind": "SubjectAccessReview", 24 | "apiVersion": "authorization.k8s.io/v1beta1", 25 | "spec": { 26 | "resourceAttributes": { 27 | "namespace": "secret_namespace", 28 | "verb": "get", 29 | "resource": "secrets", 30 | "name": "ciao" 31 | }, 32 | "user": "alice", 33 | "group": ["user"], 34 | }, 35 | } 36 | } 37 | 38 | test_allow_admin_kube_namespace { 39 | not deny[{"id": "unreadable-secret", "resource": {"kind": "secrets", "namespace": "kube-system", "name": "policies"}, "resolution": {"message": "cluster administrator are not allowed to read secrets in non-administrative namespaces"}}] with data.kubernetes.secrets["kube-system"].policies as { 40 | "kind": 
"SubjectAccessReview", 41 | "apiVersion": "authorization.k8s.io/v1beta1", 42 | "spec": { 43 | "resourceAttributes": { 44 | "namespace": "kube-system", 45 | "verb": "get", 46 | "resource": "secrets", 47 | "name": "policies" 48 | }, 49 | "user": "admin", 50 | "group": ["cluster-admin"], 51 | }, 52 | } 53 | } -------------------------------------------------------------------------------- /examples/authorization-webhooks/unreadable_secrets.rego: -------------------------------------------------------------------------------- 1 | package authorization 2 | import data.k8s.matches 3 | 4 | ############################################################################## 5 | # 6 | # Policy : denies cluster-admin users access to read secrets in administrative projects 7 | # 8 | # 9 | # 10 | ############################################################################## 11 | 12 | deny[{ 13 | "id": "unreadable-secret", 14 | "resource": {"kind": "secrets", "namespace": namespace, "name": name}, 15 | "resolution": {"message": "cluster administrator are not allowed to read secrets in non-administrative namespaces"}, 16 | }] { 17 | matches[["secrets", namespace, name, resource]] 18 | resource.spec.resourceAttributes.verb = "get" 19 | resource.spec.group[_] = "cluster-admin" 20 | not re_match("^(openshift-*|kube-*)", resource.spec.resourceAttributes.namespace) 21 | } -------------------------------------------------------------------------------- /examples/kubernetes/matches.rego: -------------------------------------------------------------------------------- 1 | package k8s 2 | import data.kubernetes 3 | 4 | matches[[kind, namespace, name, resource]] { 5 | resource := kubernetes[kind][namespace][name].object 6 | } 7 | 8 | matches[[kind, namespace, name, resource]] { 9 | resource := kubernetes[kind][namespace][name] 10 | } 11 | -------------------------------------------------------------------------------- /examples/kubernetes/policymatches.rego: 
-------------------------------------------------------------------------------- 1 | package k8s 2 | import data.kubernetes.policies 3 | 4 | # Matches provides an abstraction to find policies that match the (name). 5 | policymatches[[name, policy]] { 6 | policy := policies[name] 7 | } -------------------------------------------------------------------------------- /examples/mutating-admission-webhooks/no_serviceaccount_secret.rego: -------------------------------------------------------------------------------- 1 | package admission 2 | 3 | import data.k8s.matches 4 | 5 | ############################################################################## 6 | # 7 | # Policy : Construct JSON Patch for annotating boject with foo=bar if it is 8 | # annotated with "test-mutation" 9 | # 10 | ############################################################################## 11 | 12 | default no_sa_annotation = "requires-service-account-secret" 13 | 14 | deny[{ 15 | "id": "no-serviceaccount-secret", 16 | "resource": {"kind": "pods", "namespace": namespace, "name": name}, 17 | "resolution": {"patches": p, "message" : "service account secret not mounted"}, 18 | }] { 19 | matches[["pods", namespace, name, matched_pod]] 20 | isCreateOrUpdate(matched_pod) 21 | isMissingOrFalseAnnotation(matched_pod, no_sa_annotation) 22 | p = [{"op": "add", "path": "/spec/automountServiceAccountToken", "value": false}] 23 | } 24 | 25 | isCreateOrUpdate(obj) { 26 | obj.operation == "CREATE" 27 | } 28 | 29 | isCreateOrUpdate(obj) { 30 | obj.operation == "UPDATE" 31 | } 32 | 33 | isMissingOrFalseAnnotation(obj, annotation) { 34 | not obj.object.metadata["annotations"][annotation] 35 | } 36 | 37 | isMissingOrFalseAnnotation(obj, annotation) { 38 | obj.object.metadata["annotations"][annotation] != true 39 | } -------------------------------------------------------------------------------- /examples/mutating-admission-webhooks/no_serviceaccount_secret_test.rego: 
-------------------------------------------------------------------------------- 1 | package admission 2 | 3 | no_service_account_pod1 = { 4 | "uid":"0df28fbd-5f5f-11e8-bc74-36e6bb280816", 5 | "kind":{ 6 | "group":"", 7 | "version":"v1", 8 | "kind":"Pod" 9 | }, 10 | "resource":{ 11 | "group":"", 12 | "version":"v1", 13 | "resource":"pods" 14 | }, 15 | "namespace":"myproject", 16 | "operation":"CREATE", 17 | "userInfo":{ 18 | "username":"system:serviceaccount:kube-system:replicaset-controller", 19 | "uid":"a7e0ab33-5f29-11e8-8a3c-36e6bb280816", 20 | "groups":[ 21 | "system:serviceaccounts", 22 | "system:serviceaccounts:kube-system", 23 | "system:authenticated" 24 | ] 25 | }, 26 | "object":{ 27 | "metadata":{ 28 | "name":"couchbase", 29 | "namespace":"myproject", 30 | "annotations": { 31 | "requires-service-account-secret": true 32 | } 33 | }, 34 | "spec":{ 35 | "containers":[ 36 | { 37 | "image":"couchbase:6.0.0", 38 | "imagePullPolicy":"IfNotPresent", 39 | "name":"couchbase" 40 | } 41 | ], 42 | "restartPolicy":"Always", 43 | "terminationGracePeriodSeconds":30 44 | } 45 | }, 46 | "oldObject":null 47 | } 48 | 49 | no_service_account_pod2 = { 50 | "uid":"0df28fbd-5f5f-11e8-bc74-36e6bb280816", 51 | "kind":{ 52 | "group":"", 53 | "version":"v1", 54 | "kind":"Pod" 55 | }, 56 | "resource":{ 57 | "group":"", 58 | "version":"v1", 59 | "resource":"pods" 60 | }, 61 | "namespace":"myproject", 62 | "operation":"CREATE", 63 | "userInfo":{ 64 | "username":"system:serviceaccount:kube-system:replicaset-controller", 65 | "uid":"a7e0ab33-5f29-11e8-8a3c-36e6bb280816", 66 | "groups":[ 67 | "system:serviceaccounts", 68 | "system:serviceaccounts:kube-system", 69 | "system:authenticated" 70 | ] 71 | }, 72 | "object":{ 73 | "metadata":{ 74 | "name":"myimage", 75 | "namespace":"myproject" 76 | }, 77 | "spec":{ 78 | "containers":[ 79 | { 80 | "image":"myrepo/myimage:v3.2", 81 | "imagePullPolicy":"IfNotPresent", 82 | "name":"myimage" 83 | }, 84 | ], 85 | "restartPolicy":"Always", 86 | 
"terminationGracePeriodSeconds":30 87 | } 88 | }, 89 | "oldObject":null 90 | } 91 | 92 | test_non_mutation { 93 | count(deny) = 0 94 | with data.kubernetes.pods.myproject.couchbase as no_service_account_pod1 95 | } 96 | 97 | test_mutation { 98 | result := deny[{"id": id, "resource": {"kind": "pods", "namespace": namespace, "name": name}, "resolution": resolution}] with data.kubernetes.pods.myproject.mysql as no_service_account_pod2 99 | result[_].message = "service account secret not mounted" 100 | result[_].patches[_].op = "add" 101 | result[_].patches[_].path = "/spec/automountServiceAccountToken" 102 | result[_].patches[_].value = false 103 | } -------------------------------------------------------------------------------- /examples/mutating-admission-webhooks/no_serviceaccount_secret_test.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: busybox 5 | spec: 6 | containers: 7 | - name: busybox 8 | image: busybox:latest 9 | command: ['sh', '-c', 'echo Hello Kubernetes! 
&& sleep 3600'] -------------------------------------------------------------------------------- /examples/validating-admission-webhook/cmdb_integration.rego: -------------------------------------------------------------------------------- 1 | package admission 2 | 3 | import data.k8s.matches 4 | 5 | #all deployments must have the cmdb_id, emergency_contant an tier labels 6 | 7 | required_labels = ["cmdb_id", "emergency_contact", "tier"] 8 | 9 | deny[{ 10 | "id": "cmdb-labels", 11 | "resource": {"kind": "deployments", "namespace": namespace, "name": name}, 12 | "resolution": {"message": "all deployments must have the cmdb_id, emergency_contant an tier labels"}, 13 | }] { 14 | matches[["deployments", namespace, name, matched_deployment]] 15 | 16 | l := required_labels[_] 17 | not check_labels(matched_deployment.object.metadata.labels, l) 18 | 19 | } 20 | 21 | check_labels(obj, key) { 22 | obj[key] 23 | } -------------------------------------------------------------------------------- /examples/validating-admission-webhook/cmdb_integration_test.rego: -------------------------------------------------------------------------------- 1 | package admission 2 | 3 | 4 | cmdb_deployment1 = { 5 | "kind":{ 6 | "group":"apps", 7 | "kind":"Deployment", 8 | "version":"v1" 9 | }, 10 | "namespace":"test", 11 | "object":{ 12 | "metadata":{ 13 | "creationTimestamp":"None", 14 | "name":"hello", 15 | "namespace":"test", 16 | "labels": { 17 | "cmdb_id":"foo", 18 | "emergency_contact":"foo", 19 | "tier":"foo" 20 | } 21 | }, 22 | "spec":{ 23 | "progressDeadlineSeconds":600, 24 | "replicas":3, 25 | "revisionHistoryLimit":10, 26 | "selector":{ 27 | "matchLabels":{ 28 | "app":"hello" 29 | } 30 | }, 31 | "strategy":{ 32 | "rollingUpdate":{ 33 | "maxSurge":"25%", 34 | "maxUnavailable":"25%" 35 | }, 36 | "type":"RollingUpdate" 37 | }, 38 | "template":{ 39 | "metadata":{ 40 | "creationTimestamp":"None", 41 | "labels":{ 42 | "app":"hello" 43 | } 44 | }, 45 | "spec":{ 46 | "containers":[ 47 | { 48 
| "image":"jmsearcy/hello:1.5", 49 | "imagePullPolicy":"IfNotPresent", 50 | "name":"hello", 51 | "ports":[ 52 | { 53 | "containerPort":8080, 54 | "protocol":"TCP" 55 | } 56 | ], 57 | "resources":{ 58 | 59 | }, 60 | "terminationMessagePath":"/dev/termination-log", 61 | "terminationMessagePolicy":"File" 62 | } 63 | ], 64 | "dnsPolicy":"ClusterFirst", 65 | "restartPolicy":"Always", 66 | "schedulerName":"default-scheduler", 67 | "securityContext":{}, 68 | "terminationGracePeriodSeconds":30 69 | } 70 | } 71 | }, 72 | "status":{ 73 | 74 | } 75 | }, 76 | "oldObject":"None", 77 | "operation":"CREATE", 78 | "resource":{ 79 | "group":"apps", 80 | "resource":"deployments", 81 | "version":"v1" 82 | }, 83 | "uid":"96ab6176-dc7e-11e8-84d0-da6ee68491b2", 84 | "userInfo":{ 85 | "groups":[ 86 | "system:authenticated" 87 | ], 88 | "username":"cc9ddda6c7ba0887fc4e0d483a907363d20df2b4" 89 | } 90 | } 91 | 92 | cmdb_deployment2 = { 93 | "kind":{ 94 | "group":"apps", 95 | "kind":"Deployment", 96 | "version":"v1" 97 | }, 98 | "namespace":"test", 99 | "object":{ 100 | "metadata":{ 101 | "creationTimestamp":"None", 102 | "name":"hello2", 103 | "namespace":"test", 104 | "labels": { 105 | "cmdb_id":"foo", 106 | "tier":"foo" 107 | } 108 | }, 109 | "spec":{ 110 | "progressDeadlineSeconds":600, 111 | "replicas":3, 112 | "revisionHistoryLimit":10, 113 | "selector":{ 114 | "matchLabels":{ 115 | "app":"hello2" 116 | } 117 | }, 118 | "strategy":{ 119 | "rollingUpdate":{ 120 | "maxSurge":"25%", 121 | "maxUnavailable":"25%" 122 | }, 123 | "type":"RollingUpdate" 124 | }, 125 | "template":{ 126 | "metadata":{ 127 | "creationTimestamp":"None", 128 | "labels":{ 129 | "app":"hello2" 130 | } 131 | }, 132 | "spec":{ 133 | "containers":[ 134 | { 135 | "image":"jmsearcy/hello:1.5", 136 | "imagePullPolicy":"IfNotPresent", 137 | "name":"hello2", 138 | "ports":[ 139 | { 140 | "containerPort":8080, 141 | "protocol":"TCP" 142 | } 143 | ], 144 | "resources":{ 145 | 146 | }, 147 | 
"terminationMessagePath":"/dev/termination-log", 148 | "terminationMessagePolicy":"File" 149 | } 150 | ], 151 | "dnsPolicy":"ClusterFirst", 152 | "restartPolicy":"Always", 153 | "schedulerName":"default-scheduler", 154 | "securityContext":{}, 155 | "terminationGracePeriodSeconds":30 156 | } 157 | } 158 | }, 159 | "status":{ 160 | 161 | } 162 | }, 163 | "oldObject":"None", 164 | "operation":"CREATE", 165 | "resource":{ 166 | "group":"apps", 167 | "resource":"deployments", 168 | "version":"v1" 169 | }, 170 | "uid":"96ab6176-dc7e-11e8-84d0-da6ee68491b2", 171 | "userInfo":{ 172 | "groups":[ 173 | "system:authenticated" 174 | ], 175 | "username":"cc9ddda6c7ba0887fc4e0d483a907363d20df2b4" 176 | } 177 | } 178 | 179 | test_valid_cmdb_integration { 180 | count(deny) = 0 with data.kubernetes.deployments.test.hello as cmdb_deployment1 181 | } 182 | 183 | test_invalid_cmdb_integration { 184 | result := deny[{"id": id, "resource": {"kind": "deployments", "namespace": namespace, "name": name}, "resolution": resolution}] with data.kubernetes.deployments.test.hello2 as cmdb_deployment2 185 | result[_].message = "all deployments must have the cmdb_id, emergency_contant an tier labels" 186 | } -------------------------------------------------------------------------------- /examples/validating-admission-webhook/cmdb_integration_test.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx-deployment 5 | labels: 6 | app: nginx 7 | cmbd_id: '123' 8 | tier: gold 9 | spec: 10 | replicas: 3 11 | selector: 12 | matchLabels: 13 | app: nginx 14 | template: 15 | metadata: 16 | labels: 17 | app: nginx 18 | spec: 19 | containers: 20 | - name: nginx 21 | image: nginx:1.7.9 22 | ports: 23 | - containerPort: 80 -------------------------------------------------------------------------------- /examples/validating-admission-webhook/latest_and_IfNotPresent.rego: 
-------------------------------------------------------------------------------- 1 | package admission 2 | 3 | import data.k8s.matches 4 | 5 | # Common function to validate containers 6 | validate_containers(containers) { 7 | containers.imagePullPolicy 8 | containers.imagePullPolicy = "IfNotPresent" 9 | endswith(containers.image,":latest") 10 | } 11 | 12 | deny[{ 13 | "id": "pods-imagepullpolicy-latest", 14 | "resource": {"kind": "pods", "namespace": namespace, "name": name}, 15 | "resolution": {"message": "image pull policy and image tag cannot be respectively IfNotPresent and latest at the same time"}, 16 | }] { 17 | matches[["pods", namespace, name, matched_workload]] 18 | containers := matched_workload.object.spec.containers[_] 19 | validate_containers(containers) 20 | } -------------------------------------------------------------------------------- /examples/validating-admission-webhook/latest_and_IfNotPresent_test.rego: -------------------------------------------------------------------------------- 1 | package admission 2 | 3 | test_deny_pod { 4 | result := deny[{"id": id, "resource": {"kind": "pods", "namespace": "dummy", "name": "testpod"}, "resolution": resolution}] with data.kubernetes.pods.dummy.testpod as { 5 | "uid":"0df28fbd-5f5f-11e8-bc74-36e6bb280816", 6 | "kind":{ 7 | "group":"", 8 | "version":"v1", 9 | "kind":"Pod" 10 | }, 11 | "resource":{ 12 | "group":"", 13 | "version":"v1", 14 | "resource":"pods" 15 | }, 16 | "namespace":"dummy", 17 | "operation":"CREATE", 18 | "userInfo":{ 19 | "username":"system:serviceaccount:kube-system:replicaset-controller", 20 | "uid":"a7e0ab33-5f29-11e8-8a3c-36e6bb280816", 21 | "groups":[ 22 | "system:serviceaccounts", 23 | "system:serviceaccounts:kube-system", 24 | "system:authenticated" 25 | ] 26 | }, 27 | "object":{ 28 | "metadata":{ 29 | "generateName":"nginx-deployment-6c54bd5869-", 30 | "creationTimestamp":null, 31 | "labels":{ 32 | "app":"nginx", 33 | "pod-template-hash":"2710681425" 34 | }, 35 | 
"annotations":{ 36 | "openshift.io/scc":"restricted" 37 | }, 38 | "ownerReferences":[ 39 | { 40 | "apiVersion":"extensions/v1beta1", 41 | "kind":"ReplicaSet", 42 | "name":"nginx-deployment-6c54bd5869", 43 | "uid":"16c2b355-5f5d-11e8-ac91-36e6bb280816", 44 | "controller":true, 45 | "blockOwnerDeletion":true 46 | } 47 | ] 48 | }, 49 | "spec":{ 50 | "volumes":[ 51 | { 52 | "name":"default-token-tq5lq", 53 | "secret":{ 54 | "secretName":"default-token-tq5lq" 55 | } 56 | } 57 | ], 58 | "containers":[ 59 | { 60 | "name":"nginx", 61 | "image":"nginx:latest", 62 | "ports":[ 63 | { 64 | "containerPort":80, 65 | "protocol":"TCP" 66 | } 67 | ], 68 | "resources":{ 69 | 70 | }, 71 | "volumeMounts":[ 72 | { 73 | "name":"default-token-tq5lq", 74 | "readOnly":true, 75 | "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount" 76 | } 77 | ], 78 | "terminationMessagePath":"/dev/termination-log", 79 | "terminationMessagePolicy":"File", 80 | "imagePullPolicy":"IfNotPresent", 81 | "securityContext":{ 82 | "capabilities":{ 83 | "drop":[ 84 | "KILL", 85 | "MKNOD", 86 | "SETGID", 87 | "SETUID" 88 | ] 89 | }, 90 | "runAsUser":1000080000 91 | } 92 | } 93 | ], 94 | "restartPolicy":"Always", 95 | "terminationGracePeriodSeconds":30, 96 | "dnsPolicy":"ClusterFirst", 97 | "serviceAccountName":"default", 98 | "serviceAccount":"default", 99 | "securityContext":{ 100 | "seLinuxOptions":{ 101 | "level":"s0:c9,c4" 102 | }, 103 | "fsGroup":1000080000 104 | }, 105 | "imagePullSecrets":[ 106 | { 107 | "name":"default-dockercfg-kksdv" 108 | } 109 | ], 110 | "schedulerName":"default-scheduler" 111 | }, 112 | "status":{ 113 | 114 | } 115 | }, 116 | "oldObject":null 117 | } 118 | result[_].message = "image pull policy and image tag cannot be respectively IfNotPresent and latest at the same time" 119 | } -------------------------------------------------------------------------------- /examples/validating-admission-webhook/latest_and_IfNotPresent_test.yaml: 
-------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: busybox 5 | spec: 6 | containers: 7 | - name: busybox 8 | image: busybox:latest 9 | imagePullPolicy: IfNotPresent 10 | command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600'] -------------------------------------------------------------------------------- /examples/validating-admission-webhook/loadbalancer_quota.rego: -------------------------------------------------------------------------------- 1 | # normal namespaces are allowed to create only 2 loadbalancer type services 2 | package admission 3 | 4 | import data.k8s.matches 5 | 6 | deny[{ 7 | "id": "loadbalancer-service-quota", 8 | "resource": {"kind": "services", "namespace": namespace, "name": name}, 9 | "resolution": {"message": "you cannot have more than 2 loadbalancer services in each namespace"}, 10 | }] { 11 | service := data.kubernetes.services[namespace][name] 12 | 13 | # Verify current object is a LoadBalancer Service 14 | service["object"]["spec"]["type"] == "LoadBalancer" 15 | 16 | loadbalancers := [s | s := data.kubernetes.services[namespace][_]; s.spec.type == "LoadBalancer"] 17 | count(loadbalancers) >= 2 18 | 19 | } -------------------------------------------------------------------------------- /examples/validating-admission-webhook/loadbalancer_quota_test.rego: -------------------------------------------------------------------------------- 1 | package admission 2 | 3 | 4 | lb_svc = { 5 | "kind": { 6 | "group": "", 7 | "kind": "Service", 8 | "version": "v1", 9 | }, 10 | "namespace": "myproject", 11 | "object": { 12 | "metadata": { 13 | "creationTimestamp": "2018-10-30T23:30:00Z", 14 | "name": "myservice", 15 | "namespace": "myproject", 16 | "uid": "b82becfa-dc9b-11e8-9aa6-080027ca3112", 17 | }, 18 | "spec": { 19 | "clusterIP": "10.97.228.185", 20 | "externalTrafficPolicy": "Cluster", 21 | "ports": [{ 22 | "nodePort": 30431, 23 | "port": 80, 24
"protocol": "TCP", 25 | "targetPort": 9376, 26 | }], 27 | "selector": {"app": "MyApp"}, 28 | "sessionAffinity": "None", 29 | "type": "LoadBalancer", 30 | }, 31 | "status": {"loadBalancer": {}}, 32 | }, 33 | "oldObject": null, 34 | "operation": "CREATE", 35 | "resource": { 36 | "group": "", 37 | "resource": "services", 38 | "version": "v1", 39 | }, 40 | "uid": "b82bef5f-dc9b-11e8-9aa6-080027ca3112", 41 | "userInfo": { 42 | "groups": [ 43 | "system:masters", 44 | "system:authenticated", 45 | ], 46 | "username": "minikube-user", 47 | }, 48 | } 49 | 50 | existing_svc1 = { 51 | "apiVersion": "v1", 52 | "kind": "Service", 53 | "metadata": { 54 | "name": "existingsvc1", 55 | "namespace": "myproject", 56 | }, 57 | "spec": { 58 | "clusterIP": "1.2.3.4", 59 | "ports": [ 60 | { 61 | "name": "http", 62 | "port": 8080, 63 | "protocol": "TCP", 64 | "targetPort": 8080 65 | } 66 | ], 67 | "selector": { 68 | "app": "myapp" 69 | }, 70 | "type": "LoadBalancer" 71 | } 72 | } 73 | 74 | existing_svc2 = { 75 | "apiVersion": "v1", 76 | "kind": "Service", 77 | "metadata": { 78 | "name": "existingsvc2", 79 | "namespace": "myproject", 80 | }, 81 | "spec": { 82 | "clusterIP": "1.2.3.4", 83 | "ports": [ 84 | { 85 | "name": "http", 86 | "port": 8080, 87 | "protocol": "TCP", 88 | "targetPort": 8080 89 | } 90 | ], 91 | "selector": { 92 | "app": "myapp" 93 | }, 94 | "type": "LoadBalancer" 95 | } 96 | } 97 | 98 | test_deny_loadbalancer_service2 { 99 | count(deny) = 0 100 | with data.kubernetes.services.myproject.myservice as lb_svc 101 | with data.kubernetes.services.myproject.myservice2 as existing_svc1 102 | } 103 | test_deny_loadbalancer_service3 { 104 | result := deny[{"id": id, "resource": {"kind": "services", "namespace": namespace, "name": name}, "resolution": resolution}] with data.kubernetes.services.myproject.exitingsvc1 as existing_svc1 with data.kubernetes.services.myproject.existingsvc2 as existing_svc2 with data.kubernetes.services.myproject.myservice as lb_svc 105 | result[_].message 
= "you cannot have more than 2 loadbalancer services in each namespace" 106 | } -------------------------------------------------------------------------------- /examples/validating-admission-webhook/loadbalancer_quota_test1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: mysvc1 5 | spec: 6 | ports: 7 | - name: http 8 | port: 8080 9 | protocol: TCP 10 | selector: 11 | app: myapp 12 | type: LoadBalancer 13 | --- 14 | apiVersion: v1 15 | kind: Service 16 | metadata: 17 | name: mysvc2 18 | spec: 19 | ports: 20 | - name: http 21 | port: 8080 22 | protocol: TCP 23 | selector: 24 | app: myapp 25 | type: LoadBalancer 26 | 27 | 28 | -------------------------------------------------------------------------------- /examples/validating-admission-webhook/loadbalancer_quota_test2.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: mysvc3 5 | spec: 6 | ports: 7 | - name: http 8 | port: 8080 9 | protocol: TCP 10 | selector: 11 | app: myapp 12 | type: LoadBalancer 13 | --- 14 | apiVersion: v1 15 | kind: Service 16 | metadata: 17 | name: mysvc4 18 | spec: 19 | ports: 20 | - name: http 21 | port: 8080 22 | protocol: TCP 23 | selector: 24 | app: myapp 25 | type: LoadBalancer 26 | 27 | 28 | -------------------------------------------------------------------------------- /examples/validating-admission-webhook/software_license.rego: -------------------------------------------------------------------------------- 1 | package admission 2 | 3 | import data.k8s.matches 4 | 5 | # we cannot have more than 500 total cpu core for the myrepo/myimage workload 6 | 7 | default max_cpu_requests = 500 8 | default licensed_image = "myrepo/myimage:v3.2" 9 | 10 | deny[{ 11 | "id": "software-license", 12 | "resource": {"kind": "pods", "namespace": namespace, "name": name}, 13 | "resolution": {"message": 
sprintf("we cannot have more than %v total cpu core for the %v workload", [max_cpu_requests, licensed_image])}, 14 | }] { 15 | pod := data.kubernetes.pods[namespace][name] 16 | existing_containers := [c | c := data.kubernetes.pods[_][_].spec.containers[_]; c.image == licensed_image] 17 | containers := [c | c := data.kubernetes.pods[_][_].object.spec.containers[_]; c.image == licensed_image] 18 | array.concat(existing_containers, containers, total_containers) 19 | container_millicore_requests := [s | num := total_containers[_]; s = process_millicore_cpu(num.resources.requests.cpu) ] 20 | container_core_requests := [s | num := total_containers[_]; s = process_core_cpu(num.resources.requests.cpu) ] 21 | total_requests := sum(container_millicore_requests) + sum(container_core_requests) 22 | total_requests > max_cpu_requests 23 | } 24 | 25 | process_millicore_cpu(obj) = millicore_cpu_result { 26 | re_match("m$",obj) 27 | regex.split("m$", obj, parsed_obj) 28 | to_number(parsed_obj[0],int_obj) 29 | millicore_cpu_result = int_obj / 1000 30 | } 31 | 32 | process_core_cpu(obj) = core_cpu_result { 33 | not re_match("m$",obj) 34 | to_number(obj,int_obj) 35 | core_cpu_result = int_obj 36 | } -------------------------------------------------------------------------------- /examples/validating-admission-webhook/software_license_test.rego: -------------------------------------------------------------------------------- 1 | package admission 2 | 3 | 4 | software_license_pod1 = { 5 | "uid":"0df28fbd-5f5f-11e8-bc74-36e6bb280816", 6 | "kind":{ 7 | "group":"", 8 | "version":"v1", 9 | "kind":"Pod" 10 | }, 11 | "resource":{ 12 | "group":"", 13 | "version":"v1", 14 | "resource":"pods" 15 | }, 16 | "namespace":"myproject1", 17 | "operation":"CREATE", 18 | "userInfo":{ 19 | "username":"system:serviceaccount:kube-system:replicaset-controller", 20 | "uid":"a7e0ab33-5f29-11e8-8a3c-36e6bb280816", 21 | "groups":[ 22 | "system:serviceaccounts", 23 | "system:serviceaccounts:kube-system", 24 | 
"system:authenticated" 25 | ] 26 | }, 27 | "object":{ 28 | "metadata":{ 29 | "name":"myimage", 30 | "namespace":"myproject1" 31 | }, 32 | "spec":{ 33 | "containers":[ 34 | { 35 | "image":"myrepo/myimage:v3.2", 36 | "imagePullPolicy":"IfNotPresent", 37 | "name":"mysql", 38 | "resources":{ 39 | "requests":{ 40 | "cpu":"300m", 41 | "memory":"512Mi" 42 | }, 43 | "limits":{ 44 | "cpu":"1", 45 | "memory":"1Gi" 46 | } 47 | } 48 | }, 49 | { 50 | "image":"httpd:latest", 51 | "imagePullPolicy": "Always", 52 | "name":"httpd", 53 | "resources":{ 54 | "requests":{ 55 | "cpu":"1", 56 | "memory":"2048Mi" 57 | }, 58 | "limits":{ 59 | "cpu":"1", 60 | "memory":"4096Mi" 61 | } 62 | } 63 | } 64 | ], 65 | "restartPolicy":"Always", 66 | "terminationGracePeriodSeconds":30 67 | } 68 | }, 69 | "oldObject":null 70 | } 71 | 72 | software_license_pod2 = { 73 | "apiVersion": "v1", 74 | "kind": "Pod", 75 | "metadata":{ 76 | "name":"couchbase", 77 | "namespace":"myproject2" 78 | }, 79 | "spec":{ 80 | "containers":[ 81 | { 82 | "image":"couchbase:6.0.0", 83 | "imagePullPolicy":"IfNotPresent", 84 | "name":"couchbase", 85 | "resources":{ 86 | "requests":{ 87 | "cpu":"700m", 88 | "memory":"768m" 89 | }, 90 | "limits":{ 91 | "cpu":"700m", 92 | "memory":"768m" 93 | } 94 | } 95 | } 96 | ], 97 | "restartPolicy":"Always", 98 | "terminationGracePeriodSeconds":30 99 | } 100 | } 101 | 102 | software_license_pod3 = { 103 | "apiVersion": "v1", 104 | "kind": "Pod", 105 | "metadata":{ 106 | "name":"myimage", 107 | "namespace":"myproject2" 108 | }, 109 | "spec":{ 110 | "containers":[ 111 | { 112 | "image":"myrepo/myimage:v3.2", 113 | "imagePullPolicy":"IfNotPresent", 114 | "name":"mysql", 115 | "resources":{ 116 | "requests":{ 117 | "cpu":"500", 118 | "memory":"768Mi" 119 | }, 120 | "limits":{ 121 | "cpu":"700m", 122 | "memory":"768m" 123 | } 124 | } 125 | }, 126 | ], 127 | "restartPolicy":"Always", 128 | "terminationGracePeriodSeconds":30 129 | } 130 | } 131 | 132 | test_deny_software_license { 133 | 
count(deny) = 0 134 | with data.kubernetes.pods.myproject1.mysql as software_license_pod1 135 | with data.kubernetes.pods.myproject2.couchbase as software_license_pod2 136 | } 137 | 138 | test_invalid_software_license { 139 | result := deny[{"id": id, "resource": {"kind": "pods", "namespace": namespace, "name": name}, "resolution": resolution}] with data.kubernetes.pods.myproject1.mysql as software_license_pod1 with data.kubernetes.pods.myproject2.couchbase as software_license_pod2 with data.kubernetes.pods.myproject2.mysql as software_license_pod3 140 | result[_].message = "we cannot have more than 500 total cpu core for the myrepo/myimage:v3.2 workload" 141 | } -------------------------------------------------------------------------------- /examples/validating-admission-webhook/software_license_test1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: busybox 5 | spec: 6 | containers: 7 | - name: busybox 8 | image: myrepo/myimage:v3.2 9 | resources: 10 | requests: 11 | cpu: "250" 12 | 13 | 14 | -------------------------------------------------------------------------------- /examples/validating-admission-webhook/software_license_test2.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: busybox1 5 | spec: 6 | containers: 7 | - name: busybox 8 | image: myrepo/myimage:v3.2 9 | resources: 10 | requests: 11 | cpu: "251" 12 | --------------------------------------------------------------------------------