├── .gitignore ├── README.md ├── demo ├── 10-security-controls │ ├── allow-kube-dns.yaml │ ├── base-policies.yaml │ ├── default-deny.yaml │ ├── feodo-block-policy.yaml │ ├── feodotracker.threatfeed.yaml │ └── staged.default-deny.yaml ├── 20-egress-access-controls │ ├── centos-to-frontend.yaml │ ├── dns-policy.netset.yaml │ ├── dns-policy.yaml │ ├── metadata-policy.yaml │ ├── netset.external-apis.yaml │ └── netset.metadata-api.yaml ├── 30-secure-hep │ ├── felixconfiguration.yaml │ ├── frontend-nodeport-access.yaml │ ├── kubelet-access.yaml │ └── ssh-access.yaml ├── 40-compliance-reports │ ├── cluster-reports.yaml │ ├── compliance-reporter-pod.yaml │ └── daily-cis-results.yaml ├── 50-alerts │ ├── globalnetworkset.changed.yaml │ ├── unsanctioned.dns.access.yaml │ └── unsanctioned.lateral.access.yaml ├── 60-deep-packet-inspection │ ├── resource-dpi.yaml │ └── sample-dpi-frontend.yaml ├── 80-packet-capture │ └── packet-capture.yaml ├── 90-anomaly-detection │ └── ad-alerts.yaml ├── boutiqueshop │ ├── app.manifests.yaml │ ├── policies.yaml │ ├── staged.default-deny.yaml │ └── staged.policies.yaml ├── dev │ ├── app.manifests.yaml │ └── policies.yaml └── tiers │ └── tiers.yaml ├── img ├── add-DNS-in-networkset.png ├── alerts-view-all.png ├── alerts-view.png ├── anomaly-detection-alert.png ├── anomaly-detection-config.png ├── calico-cloud-login.png ├── calico-on-aks.png ├── choose-aks.png ├── cluster-selection.png ├── compliance-report.png ├── connect-cluster.png ├── connectivity-diagram.png ├── create-dns-policy.png ├── dashboard-default-deny.png ├── dashboard-overall-view.png ├── delete-policy.png ├── dns-alert.png ├── dns-network-set.png ├── download-packet-capture.png ├── drop-down-menu.png ├── edit-policy.png ├── ee-event-log.png ├── endpoints-view.png ├── expand-menu.png ├── external-saas-traffic.png ├── flow-viz.png ├── frontend-packet-capture.png ├── get-start.png ├── hep-policy.png ├── hep-service-graph.png ├── honeypod-threat-alert.png ├── initiate-pc.png ├── 
kibana-dashboard.png ├── kibana-flow-logs.png ├── managed-cluster.png ├── network-set-grid.png ├── packet-capture-ui.png ├── policies-board-stats.png ├── policies-board.png ├── redis-pcap.png ├── schedule-packet-capture-job.png ├── schedule-packet-capture.png ├── script.png ├── select-ep.png ├── service-graph-default.png ├── service-graph-l7.png ├── service-graph-node.png ├── service-graph-top-level.png ├── signature-alert.png ├── staged-default-deny.png ├── test-packet-capture.png ├── timeline-view.png └── workshop-environment.png └── modules ├── anomaly-detection.md ├── configuring-demo-apps.md ├── creating-aks-cluster.md ├── deep-packet-inspection.md ├── dns-egress-access-controls.md ├── honeypod-threat-detection.md ├── joining-aks-to-calico-cloud.md ├── layer7-logging.md ├── packet-capture.md ├── pod-access-controls.md ├── using-alerts.md ├── using-compliance-reports.md └── using-observability-tools.md /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | *.kubeconfig -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Calico cloud workshop on AKS 2 | 3 | ![Calicocloud on AKS](img/calico-on-aks.png) 4 | 5 | ## AKS Calico Cloud Workshop 6 | 7 | The intent of this workshop is to introduce [Calico Cloud](https://www.calicocloud.io/?utm_campaign=calicocloud&utm_medium=digital&utm_source=microsoft) to manage AKS clusters and leverage Calico features to implement the various use cases. While there are many capabilities that the Calico product provides, this workshop focuses on a subset of those that are used most often by enterprises to derive value from the Calico Product. 8 | 9 | ## Learning Objectives 10 | 11 | In this workshop we are going to focus on these main use cases (with links to Calico docs for further info). 
Note that the policy and visibility features covered in this workshop are identical between Calico Cloud and Calico Enterprise. Consult the [Calico Enterprise docs](https://docs.tigera.io/about/about-calico-enterprise/) for further reading: 12 | 13 | - **Integration:** [Integrating Calico Cloud into AKS clusters.](https://docs.calicocloud.io/install/system-requirements) 14 | - **East-West security:** [leveraging a zero-trust security approach.](https://docs.tigera.io/security/adopt-zero-trust) 15 | - **Workload access controls:** [using DNS policy to access external resources by their fully qualified domain names (FQDN).](https://docs.calicocloud.io/workload-access/) 16 | - **Observability:** [exploring various logs and application-level metrics collected by Calico.](https://docs.calicocloud.io/visibility/) 17 | - **Compliance:** [providing proof of security compliance.](https://docs.calicocloud.io/compliance/overview) 18 | 19 | ## Join the Slack Channel 20 | 21 | [Calico User Group Slack](https://slack.projectcalico.org/) is a great resource to ask any questions about Calico. If you are not a part of this Slack group yet, we highly recommend [joining it](https://slack.projectcalico.org/) to participate in discussions or ask questions. For example, you can ask questions specific to AKS and other managed Kubernetes services in the `#eks-aks-gke-iks` channel. 22 | 23 | ## Who should take this workshop? 24 | 25 | - Developers 26 | - DevOps Engineers 27 | - Solutions Architects 28 | - Anyone who is interested in security, observability, and network policy for Kubernetes. 29 | 30 | ## Workshop prerequisites 31 | 32 | >It is recommended to follow the AKS creation steps outlined in [Module 0](modules/creating-aks-cluster.md) and to keep the resources isolated from any existing deployments.
If you are using a corporate Azure account for the workshop, make sure to check with your account administrator that you have sufficient permissions to create and manage AKS clusters and Load Balancer resources. 33 | 34 | - [Azure Kubernetes Service](https://github.com/Azure/kubernetes-hackfest/blob/master/labs/networking/network-policy/) 35 | - [Calico Cloud trial account](https://www.calicocloud.io/?utm_campaign=calicocloud&utm_medium=digital&utm_source=microsoft) 36 | - A terminal or command-line console to work with Azure resources and the AKS cluster 37 | - `Git` 38 | - `netcat` 39 | 40 | ## Modules 41 | 42 | - [Module 0: Creating an AKS compatible cluster for Calico Cloud](modules/creating-aks-cluster.md) 43 | - [Module 1: Joining AKS cluster to Calico Cloud](modules/joining-aks-to-calico-cloud.md) 44 | - [Module 2: Configuring demo applications](modules/configuring-demo-apps.md) 45 | - [Module 3: Pod access controls](modules/pod-access-controls.md) 46 | - [Module 4: DNS egress access controls](modules/dns-egress-access-controls.md) 47 | - [Module 5: Layer 7 Logging](modules/layer7-logging.md) 48 | - [Module 6: Using observability tools](modules/using-observability-tools.md) 49 | - [Module 7: Packet Capture](modules/packet-capture.md) 50 | - [Module 8: Using compliance reports](modules/using-compliance-reports.md) 51 | - [Module 9: Using alerts](modules/using-alerts.md) 52 | - [Module 10: Honeypod Threat Detection](modules/honeypod-threat-detection.md) 53 | - [Module 11: Deep Packet Inspection](modules/deep-packet-inspection.md) 54 | 55 | ## Cleanup 56 | 57 | 1. Disconnect your cluster from Calico Cloud by following the instructions [here](https://docs.tigera.io/calico-cloud/operations/disconnect). 58 | 59 | >Whether you’ve finished with your Calico Cloud Trial or decided to disconnect your cluster from Calico Cloud, we know you want your cluster to remain functional. We highly recommend running a simple script to migrate your cluster to open-source Project Calico.
60 | 61 | ```bash 62 | curl -O https://installer.calicocloud.io/manifests/v3.19.0-1.0-7/downgrade.sh 63 | ``` 64 | 65 | ```bash 66 | chmod +x downgrade.sh 67 | ``` 68 | 69 | ```bash 70 | ./downgrade.sh --remove-all-calico-policy --remove-prometheus 71 | ``` 72 | 73 | 2. Delete the application stacks to clean up any `LoadBalancer` services. 74 | 75 | ```bash 76 | kubectl delete -f demo/dev/app.manifests.yaml 77 | kubectl delete -f demo/boutiqueshop/app.manifests.yaml 78 | ``` 79 | 80 | 3. Delete the AKS cluster. 81 | 82 | ```bash 83 | az aks delete --name $CLUSTERNAME --resource-group $RGNAME 84 | ``` 85 | 86 | 4. Delete the Azure resource group. 87 | 88 | ```bash 89 | az group delete --resource-group $RGNAME 90 | ``` 91 | 92 | 5. Clean up workshop variables from `~/.bashrc`. 93 | 94 | ```bash 95 | sed -i "/UNIQUE_SUFFIX/d; /RGNAME/d; /LOCATION/d; /CLUSTERNAME/d; /K8SVERSION/d" ~/.bashrc 96 | ``` 97 | -------------------------------------------------------------------------------- /demo/10-security-controls/allow-kube-dns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalNetworkPolicy 3 | metadata: 4 | name: platform.allow-kube-dns 5 | spec: 6 | # requires platform tier to exist 7 | tier: platform 8 | order: 2000 9 | selector: all() 10 | types: 11 | - Egress 12 | egress: 13 | - action: Allow 14 | protocol: UDP 15 | source: {} 16 | destination: 17 | selector: "k8s-app == 'kube-dns'" 18 | ports: 19 | - '53' 20 | - action: Allow 21 | protocol: TCP 22 | source: {} 23 | destination: 24 | selector: "k8s-app == 'kube-dns'" 25 | ports: 26 | - '53' 27 | - action: Pass 28 | source: {} 29 | destination: {} 30 | -------------------------------------------------------------------------------- /demo/10-security-controls/base-policies.yaml: -------------------------------------------------------------------------------- 1 | # security tier pass policy 2 | --- 3 | apiVersion: projectcalico.org/v3 4 |
kind: GlobalNetworkPolicy 5 | metadata: 6 | name: security.pass-to-next-tier 7 | spec: 8 | tier: security 9 | order: 2000 10 | selector: all() 11 | ingress: 12 | - action: Pass 13 | egress: 14 | - action: Pass 15 | types: 16 | - Ingress 17 | - Egress 18 | 19 | # platform tier pass policy 20 | --- 21 | apiVersion: projectcalico.org/v3 22 | kind: GlobalNetworkPolicy 23 | metadata: 24 | name: platform.pass-to-next-tier 25 | spec: 26 | tier: platform 27 | order: 2000 28 | selector: all() 29 | ingress: 30 | - action: Pass 31 | egress: 32 | - action: Pass 33 | types: 34 | - Ingress 35 | - Egress 36 | 37 | # application tier pass policy 38 | --- 39 | apiVersion: projectcalico.org/v3 40 | kind: GlobalNetworkPolicy 41 | metadata: 42 | name: application.pass-to-next-tier 43 | spec: 44 | tier: application 45 | order: 2000 46 | selector: all() 47 | ingress: 48 | - action: Pass 49 | egress: 50 | - action: Pass 51 | types: 52 | - Ingress 53 | - Egress -------------------------------------------------------------------------------- /demo/10-security-controls/default-deny.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalNetworkPolicy 3 | metadata: 4 | name: default-deny 5 | spec: 6 | order: 2000 7 | selector: "projectcalico.org/namespace in {'dev','default'}" 8 | types: 9 | - Ingress 10 | - Egress 11 | -------------------------------------------------------------------------------- /demo/10-security-controls/feodo-block-policy.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: GlobalNetworkPolicy 4 | metadata: 5 | name: security.block-feodo 6 | spec: 7 | tier: security 8 | order: 210 9 | selector: all() 10 | types: 11 | - Egress 12 | egress: 13 | - action: Deny 14 | destination: 15 | selector: threatfeed == 'feodo' 16 | - action: Pass 17 | 
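The `security.block-feodo` policy above denies egress to any destination carrying the `threatfeed == 'feodo'` label, and the `GlobalThreatFeed` in the next file keeps that label populated by periodically pulling the abuse.ch Feodo Tracker IP blocklist. That feed is plain text: `#`-prefixed header/comment lines followed by one IP address per line. As a rough illustration (not part of the workshop manifests; the sample entries below are made up for demonstration), parsing that format looks like this:

```python
# Sketch of the blocklist format consumed by the GlobalThreatFeed below:
# Calico parses entries like these into a GlobalNetworkSet labeled
# threatfeed=feodo, which the policy above then denies egress to.
import ipaddress

def parse_blocklist(text: str) -> list[str]:
    """Return the valid IP entries from an abuse.ch-style blocklist."""
    ips = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comment/header lines
        try:
            ipaddress.ip_address(line)
        except ValueError:
            continue  # ignore malformed entries
        ips.append(line)
    return ips

sample = """\
################################
# Feodo Tracker Botnet C2 IP Blocklist
################################
51.89.115.112
103.75.201.2
"""
print(parse_blocklist(sample))  # → ['51.89.115.112', '103.75.201.2']
```
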
-------------------------------------------------------------------------------- /demo/10-security-controls/feodotracker.threatfeed.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalThreatFeed 3 | metadata: 4 | name: feodo-tracker 5 | spec: 6 | pull: 7 | http: 8 | url: https://feodotracker.abuse.ch/downloads/ipblocklist.txt 9 | globalNetworkSet: 10 | labels: 11 | threatfeed: feodo -------------------------------------------------------------------------------- /demo/10-security-controls/staged.default-deny.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: StagedGlobalNetworkPolicy 3 | metadata: 4 | name: default-deny 5 | spec: 6 | order: 2000 7 | selector: "projectcalico.org/namespace in {'dev','default'}" 8 | types: 9 | - Ingress 10 | - Egress 11 | -------------------------------------------------------------------------------- /demo/20-egress-access-controls/centos-to-frontend.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: NetworkPolicy 4 | metadata: 5 | name: platform.centos-to-frontend 6 | namespace: dev 7 | spec: 8 | tier: platform 9 | order: 100 10 | selector: app == "centos" 11 | types: 12 | - Egress 13 | egress: 14 | - action: Allow 15 | protocol: UDP 16 | destination: 17 | selector: k8s-app == "kube-dns" 18 | namespaceSelector: projectcalico.org/name == "kube-system" 19 | ports: 20 | - 53 21 | - action: Allow 22 | protocol: TCP 23 | source: {} 24 | destination: 25 | selector: app == "frontend" 26 | namespaceSelector: projectcalico.org/name == "default" 27 | -------------------------------------------------------------------------------- /demo/20-egress-access-controls/dns-policy.netset.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: 
projectcalico.org/v3 2 | kind: GlobalNetworkPolicy 3 | metadata: 4 | name: security.allow-twilio-access 5 | spec: 6 | # requires security tier 7 | tier: security 8 | selector: (app == "centos" && projectcalico.org/namespace == "dev") 9 | order: 200 10 | types: 11 | - Egress 12 | egress: 13 | - action: Allow 14 | destination: 15 | selector: type == "external-apis" 16 | -------------------------------------------------------------------------------- /demo/20-egress-access-controls/dns-policy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalNetworkPolicy 3 | metadata: 4 | name: security.allow-twilio-access 5 | spec: 6 | # requires security tier 7 | tier: security 8 | selector: (app == "centos" && projectcalico.org/namespace == "dev") 9 | order: 200 10 | types: 11 | - Egress 12 | egress: 13 | - action: Allow 14 | source: {} 15 | destination: 16 | domains: 17 | - '*.twilio.com' 18 | -------------------------------------------------------------------------------- /demo/20-egress-access-controls/metadata-policy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalNetworkPolicy 3 | metadata: 4 | name: security.metadata-api-access 5 | spec: 6 | # requires security tier 7 | tier: security 8 | selector: all() 9 | order: 100 10 | types: 11 | - Egress 12 | egress: 13 | - action: Allow 14 | protocol: TCP 15 | source: {} 16 | destination: 17 | selector: type == "metadata-api" 18 | ports: 19 | - '80' 20 | -------------------------------------------------------------------------------- /demo/20-egress-access-controls/netset.external-apis.yaml: -------------------------------------------------------------------------------- 1 | kind: GlobalNetworkSet 2 | apiVersion: projectcalico.org/v3 3 | metadata: 4 | name: external-apis 5 | labels: 6 | type: external-apis 7 | spec: 8 | allowedEgressDomains: 9 | - 
'*.twilio.com' 10 | -------------------------------------------------------------------------------- /demo/20-egress-access-controls/netset.metadata-api.yaml: -------------------------------------------------------------------------------- 1 | kind: GlobalNetworkSet 2 | apiVersion: projectcalico.org/v3 3 | metadata: 4 | name: metadata-api 5 | labels: 6 | type: metadata-api 7 | spec: 8 | nets: 9 | # metadata service endpoint 10 | - 169.254.169.254 11 | -------------------------------------------------------------------------------- /demo/30-secure-hep/felixconfiguration.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: FelixConfiguration 4 | metadata: 5 | name: default 6 | spec: 7 | flowLogsFlushInterval: 10s 8 | flowLogsFileAggregationKindForAllowed: 1 9 | dnsLogsFlushInterval: 10s 10 | logSeverityScreen: Info 11 | failsafeInboundHostPorts: 12 | #- protocol: tcp 13 | # port: 22 14 | - protocol: tcp 15 | port: 68 16 | net: 0.0.0.0/0 17 | - protocol: tcp 18 | port: 179 19 | net: 0.0.0.0/0 20 | - protocol: tcp 21 | port: 2379 22 | net: 0.0.0.0/0 23 | - protocol: tcp 24 | port: 6443 25 | net: 0.0.0.0/0 26 | failsafeOutboundHostPorts: 27 | - protocol: udp 28 | port: 53 29 | net: 0.0.0.0/0 30 | - protocol: tcp 31 | port: 67 32 | net: 0.0.0.0/0 33 | - protocol: tcp 34 | port: 179 35 | net: 0.0.0.0/0 36 | - protocol: tcp 37 | port: 2379 38 | net: 0.0.0.0/0 39 | - protocol: tcp 40 | port: 2380 41 | net: 0.0.0.0/0 42 | - protocol: tcp 43 | port: 6443 44 | net: 0.0.0.0/0 -------------------------------------------------------------------------------- /demo/30-secure-hep/frontend-nodeport-access.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalNetworkPolicy 3 | metadata: 4 | name: security.frontend-nodeport-access 5 | spec: 6 | tier: security 7 | order: 100 8 | selector: 
has(eks.amazonaws.com/nodegroup) 9 | # Allow all traffic to localhost. 10 | ingress: 11 | - action: Allow 12 | destination: 13 | nets: 14 | - 127.0.0.1/32 15 | # Allow node port access only from a specific CIDR. 16 | - action: Deny 17 | protocol: TCP 18 | source: 19 | notNets: 20 | - ${CLOUD9_IP} 21 | destination: 22 | ports: 23 | - 30080 24 | doNotTrack: false 25 | applyOnForward: true 26 | preDNAT: true 27 | types: 28 | - Ingress 29 | -------------------------------------------------------------------------------- /demo/30-secure-hep/kubelet-access.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalNetworkPolicy 3 | metadata: 4 | name: security.kubelet-access 5 | spec: 6 | tier: security 7 | order: 120 8 | selector: has(eks.amazonaws.com/nodegroup) 9 | ingress: 10 | # This rule allows all traffic to localhost. 11 | - action: Allow 12 | destination: 13 | nets: 14 | - 127.0.0.1/32 15 | # This rule allows access to the kubelet API. 16 | - action: Allow 17 | protocol: TCP 18 | destination: 19 | ports: 20 | - '10250' 21 | types: 22 | - Ingress 23 | -------------------------------------------------------------------------------- /demo/30-secure-hep/ssh-access.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalNetworkPolicy 3 | metadata: 4 | name: security.ssh-access 5 | spec: 6 | tier: security 7 | order: 110 8 | selector: has(eks.amazonaws.com/nodegroup) 9 | # Allow all traffic to localhost. 10 | ingress: 11 | - action: Allow 12 | destination: 13 | nets: 14 | - 127.0.0.1/32 15 | # Allow only SSH port access.
16 | - action: Allow 17 | protocol: TCP 18 | source: 19 | nets: 20 | - ${CLOUD9_IP} 21 | destination: 22 | ports: 23 | - "22" -------------------------------------------------------------------------------- /demo/40-compliance-reports/cluster-reports.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: GlobalReport 4 | metadata: 5 | name: cluster-inventory 6 | spec: 7 | reportType: inventory 8 | schedule: '*/10 * * * *' 9 | 10 | --- 11 | apiVersion: projectcalico.org/v3 12 | kind: GlobalReport 13 | metadata: 14 | name: cluster-policy-audit 15 | spec: 16 | reportType: policy-audit 17 | schedule: '*/10 * * * *' 18 | 19 | --- 20 | apiVersion: projectcalico.org/v3 21 | kind: GlobalReport 22 | metadata: 23 | name: cluster-network-access 24 | spec: 25 | reportType: network-access 26 | schedule: '*/10 * * * *' 27 | -------------------------------------------------------------------------------- /demo/40-compliance-reports/compliance-reporter-pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: run-reporter 5 | namespace: tigera-compliance 6 | labels: 7 | k8s-app: compliance-reporter 8 | spec: 9 | nodeSelector: 10 | kubernetes.io/os: linux 11 | restartPolicy: Never 12 | serviceAccount: tigera-compliance-reporter 13 | serviceAccountName: tigera-compliance-reporter 14 | tolerations: 15 | - key: node-role.kubernetes.io/master 16 | effect: NoSchedule 17 | imagePullSecrets: 18 | - name: tigera-pull-secret 19 | containers: 20 | - name: reporter 21 | # Modify this image name, if you have re-tagged the image and are using a local 22 | # docker image repository. 
23 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 24 | image: quay.io/tigera/compliance-reporter:$CALICOVERSION 25 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 26 | env: 27 | # Modify this value with name of an existing globalreport resource. 28 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 29 | - name: TIGERA_COMPLIANCE_REPORT_NAME 30 | value: $REPORT_NAME 31 | # Modify these values with the start and end time frame that should be reported on. 32 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 33 | - name: TIGERA_COMPLIANCE_REPORT_START_TIME 34 | # value: 35 | value: "$START_TIME" 36 | - name: TIGERA_COMPLIANCE_REPORT_END_TIME 37 | # value: 38 | value: "$END_TIME" 39 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 40 | - name: LOG_LEVEL 41 | value: "warning" 42 | - name: ELASTIC_INDEX_SUFFIX 43 | value: "$CALICOCLUSTERNAME" 44 | - name: ELASTIC_SCHEME 45 | value: https 46 | - name: ELASTIC_HOST 47 | value: tigera-secure-es-gateway-http.tigera-elasticsearch.svc 48 | - name: ELASTIC_PORT 49 | value: "9200" 50 | - name: ELASTIC_USER 51 | valueFrom: 52 | secretKeyRef: 53 | name: tigera-ee-compliance-reporter-elasticsearch-access 54 | key: username 55 | optional: true 56 | - name: ELASTIC_PASSWORD 57 | valueFrom: 58 | secretKeyRef: 59 | name: tigera-ee-compliance-reporter-elasticsearch-access 60 | key: password 61 | optional: true 62 | - name: ELASTIC_SSL_VERIFY 63 | value: "true" 64 | - name: ELASTIC_CA 65 | value: /etc/pki/tls/certs/tigera-ca-bundle.crt 66 | volumeMounts: 67 | - mountPath: /var/log/calico 68 | name: var-log-calico 69 | - mountPath: /etc/pki/tls/certs/ 70 | name: tigera-ca-bundle 71 | readOnly: true 72 | livenessProbe: 73 | httpGet: 74 | path: /liveness 75 | port: 9099 76 | host: localhost 77 | volumes: 78 | - name: var-log-calico 79 | hostPath: 80 | path: 
/var/log/calico 81 | type: DirectoryOrCreate 82 | - configMap: 83 | defaultMode: 420 84 | name: tigera-ca-bundle 85 | name: tigera-ca-bundle 86 | -------------------------------------------------------------------------------- /demo/40-compliance-reports/daily-cis-results.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalReport 3 | metadata: 4 | name: daily-cis-results 5 | labels: 6 | deployment: production 7 | spec: 8 | reportType: cis-benchmark 9 | schedule: '*/10 * * * *' 10 | cis: 11 | highThreshold: 100 12 | medThreshold: 50 13 | includeUnscoredTests: true 14 | numFailedTests: 5 -------------------------------------------------------------------------------- /demo/50-alerts/globalnetworkset.changed.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: GlobalAlertTemplate 4 | metadata: 5 | name: policy.globalnetworkset 6 | spec: 7 | description: "Alerts on any changes to global network sets" 8 | summary: "[audit] [privileged access] change detected for ${objectRef.resource} ${objectRef.name}" 9 | severity: 100 10 | period: 5m 11 | lookback: 5m 12 | dataSet: audit 13 | # alert is triggered if CRUD operation executed against any globalnetworkset 14 | query: (verb=create OR verb=update OR verb=delete OR verb=patch) AND "objectRef.resource"=globalnetworksets 15 | aggregateBy: [objectRef.resource, objectRef.name] 16 | metric: count 17 | condition: gt 18 | threshold: 0 19 | 20 | --- 21 | apiVersion: projectcalico.org/v3 22 | kind: GlobalAlert 23 | metadata: 24 | name: policy.globalnetworkset 25 | spec: 26 | description: "Alerts on any changes to global network sets" 27 | summary: "[audit] [privileged access] change detected for ${objectRef.resource} ${objectRef.name}" 28 | severity: 100 29 | period: 1m 30 | lookback: 1m 31 | dataSet: audit 32 | # alert is triggered if CRUD operation executed 
against any globalnetworkset 33 | query: (verb=create OR verb=update OR verb=delete OR verb=patch) AND "objectRef.resource"=globalnetworksets 34 | aggregateBy: [objectRef.resource, objectRef.name] 35 | metric: count 36 | condition: gt 37 | threshold: 0 38 | -------------------------------------------------------------------------------- /demo/50-alerts/unsanctioned.dns.access.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: GlobalAlertTemplate 4 | metadata: 5 | name: dns.unsanctioned.access 6 | spec: 7 | description: "Pod attempted to access restricted.com domain" 8 | summary: "[dns] pod ${client_namespace}/${client_name_aggr} attempted to access '${qname}'" 9 | severity: 100 10 | dataSet: dns 11 | period: 5m 12 | lookback: 5m 13 | query: '(qname = "www.restricted.com" OR qname = "restricted.com")' 14 | aggregateBy: [client_namespace, client_name_aggr, qname] 15 | metric: count 16 | condition: gt 17 | threshold: 0 18 | 19 | --- 20 | apiVersion: projectcalico.org/v3 21 | kind: GlobalAlert 22 | metadata: 23 | name: dns.unsanctioned.access 24 | spec: 25 | description: "Pod attempted to access google.com domain" 26 | summary: "[dns] pod ${client_namespace}/${client_name_aggr} attempted to access '${qname}'" 27 | severity: 100 28 | dataSet: dns 29 | period: 1m 30 | lookback: 1m 31 | query: '(qname = "www.google.com" OR qname = "google.com")' 32 | aggregateBy: [client_namespace, client_name_aggr, qname] 33 | metric: count 34 | condition: gt 35 | threshold: 0 36 | -------------------------------------------------------------------------------- /demo/50-alerts/unsanctioned.lateral.access.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: GlobalAlertTemplate 4 | metadata: 5 | name: network.lateral.access 6 | spec: 7 | description: "Alerts when pods with a specific label (security=strict) 
are accessed by other workloads from other namespaces" 8 | summary: "[flows] [lateral movement] ${source_namespace}/${source_name_aggr} has accessed ${dest_namespace}/${dest_name_aggr} with label security=strict" 9 | severity: 100 10 | period: 5m 11 | lookback: 5m 12 | dataSet: flows 13 | query: '"dest_labels.labels"="security=strict" AND "dest_namespace"="secured_pod_namespace" AND "source_namespace"!="secured_pod_namespace" AND proto=tcp AND (("action"="allow" AND ("reporter"="dst" OR "reporter"="src")) OR ("action"="deny" AND "reporter"="src"))' 14 | aggregateBy: [source_namespace, source_name_aggr, dest_namespace, dest_name_aggr] 15 | field: num_flows 16 | metric: sum 17 | condition: gt 18 | threshold: 0 19 | 20 | --- 21 | apiVersion: projectcalico.org/v3 22 | kind: GlobalAlert 23 | metadata: 24 | name: network.lateral.access 25 | spec: 26 | description: "Alerts when pods with a specific label (security=strict) are accessed by other workloads from other namespaces" 27 | summary: "[flows] [lateral movement] ${source_namespace}/${source_name_aggr} has accessed ${dest_namespace}/${dest_name_aggr} with label security=strict" 28 | severity: 100 29 | period: 1m 30 | lookback: 1m 31 | dataSet: flows 32 | query: '("dest_labels.labels"="security=strict" AND "dest_namespace"="dev") AND "source_namespace"!="dev" AND "proto"="tcp" AND (("action"="allow" AND ("reporter"="dst" OR "reporter"="src")) OR ("action"="deny" AND "reporter"="src"))' 33 | aggregateBy: [source_namespace, source_name_aggr, dest_namespace, dest_name_aggr] 34 | field: num_flows 35 | metric: sum 36 | condition: gt 37 | threshold: 0 38 | -------------------------------------------------------------------------------- /demo/60-deep-packet-inspection/resource-dpi.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: operator.tigera.io/v1 2 | kind: IntrusionDetection 3 | metadata: 4 | name: tigera-secure 5 | spec: 6 | componentResources: 7 | - componentName: 
DeepPacketInspection 8 | resourceRequirements: 9 | limits: 10 | cpu: "1" 11 | memory: 1Gi 12 | requests: 13 | cpu: 100m 14 | memory: 100Mi 15 | -------------------------------------------------------------------------------- /demo/60-deep-packet-inspection/sample-dpi-frontend.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: DeepPacketInspection 3 | metadata: 4 | name: sample-dpi-frontend 5 | spec: 6 | selector: app == "frontend" -------------------------------------------------------------------------------- /demo/80-packet-capture/packet-capture.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: PacketCapture 3 | metadata: 4 | name: packet-capture-frontend 5 | namespace: default 6 | spec: 7 | selector: app == "frontend" 8 | -------------------------------------------------------------------------------- /demo/90-anomaly-detection/ad-alerts.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: GlobalAlert 3 | metadata: 4 | name: tigera.io.detector.port-scan 5 | spec: 6 | description: Port Scan detection 7 | summary: "Looks for pods in your cluster that are sending packets to one destination on multiple ports." 8 | detector: 9 | name: port_scan 10 | period: 5m0s 11 | lookback: 5m0s 12 | severity: 100 13 | type: AnomalyDetection 14 | --- 15 | -------------------------------------------------------------------------------- /demo/boutiqueshop/app.manifests.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2018 Google LLC 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 
5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # ---------------------------------------------------------- 16 | # WARNING: This file is autogenerated. Do not manually edit. 17 | # ---------------------------------------------------------- 18 | 19 | --- 20 | apiVersion: apps/v1 21 | kind: Deployment 22 | metadata: 23 | name: emailservice 24 | spec: 25 | selector: 26 | matchLabels: 27 | app: emailservice 28 | template: 29 | metadata: 30 | labels: 31 | app: emailservice 32 | spec: 33 | serviceAccountName: default 34 | terminationGracePeriodSeconds: 5 35 | securityContext: 36 | fsGroup: 1000 37 | runAsGroup: 1000 38 | runAsNonRoot: true 39 | runAsUser: 1000 40 | containers: 41 | - name: server 42 | securityContext: 43 | allowPrivilegeEscalation: false 44 | capabilities: 45 | drop: 46 | - ALL 47 | privileged: false 48 | readOnlyRootFilesystem: true 49 | image: gcr.io/google-samples/microservices-demo/emailservice:v0.9.0 50 | ports: 51 | - containerPort: 8080 52 | env: 53 | - name: PORT 54 | value: "8080" 55 | - name: DISABLE_PROFILER 56 | value: "1" 57 | readinessProbe: 58 | periodSeconds: 5 59 | grpc: 60 | port: 8080 61 | livenessProbe: 62 | periodSeconds: 5 63 | grpc: 64 | port: 8080 65 | resources: 66 | requests: 67 | cpu: 100m 68 | memory: 64Mi 69 | limits: 70 | cpu: 200m 71 | memory: 128Mi 72 | --- 73 | apiVersion: v1 74 | kind: Service 75 | metadata: 76 | name: emailservice 77 | spec: 78 | type: ClusterIP 79 | selector: 80 | app: emailservice 81 | ports: 82 | - name: grpc 83 | port: 5000 84 | targetPort: 8080 85 | --- 86 | apiVersion: apps/v1 87 | kind: 
Deployment 88 | metadata: 89 | name: checkoutservice 90 | spec: 91 | selector: 92 | matchLabels: 93 | app: checkoutservice 94 | template: 95 | metadata: 96 | labels: 97 | app: checkoutservice 98 | spec: 99 | serviceAccountName: default 100 | securityContext: 101 | fsGroup: 1000 102 | runAsGroup: 1000 103 | runAsNonRoot: true 104 | runAsUser: 1000 105 | containers: 106 | - name: server 107 | securityContext: 108 | allowPrivilegeEscalation: false 109 | capabilities: 110 | drop: 111 | - ALL 112 | privileged: false 113 | readOnlyRootFilesystem: true 114 | image: gcr.io/google-samples/microservices-demo/checkoutservice:v0.9.0 115 | ports: 116 | - containerPort: 5050 117 | readinessProbe: 118 | grpc: 119 | port: 5050 120 | livenessProbe: 121 | grpc: 122 | port: 5050 123 | env: 124 | - name: PORT 125 | value: "5050" 126 | - name: PRODUCT_CATALOG_SERVICE_ADDR 127 | value: "productcatalogservice:3550" 128 | - name: SHIPPING_SERVICE_ADDR 129 | value: "shippingservice:50051" 130 | - name: PAYMENT_SERVICE_ADDR 131 | value: "paymentservice:50051" 132 | - name: EMAIL_SERVICE_ADDR 133 | value: "emailservice:5000" 134 | - name: CURRENCY_SERVICE_ADDR 135 | value: "currencyservice:7000" 136 | - name: CART_SERVICE_ADDR 137 | value: "cartservice:7070" 138 | resources: 139 | requests: 140 | cpu: 100m 141 | memory: 64Mi 142 | limits: 143 | cpu: 200m 144 | memory: 128Mi 145 | --- 146 | apiVersion: v1 147 | kind: Service 148 | metadata: 149 | name: checkoutservice 150 | spec: 151 | type: ClusterIP 152 | selector: 153 | app: checkoutservice 154 | ports: 155 | - name: grpc 156 | port: 5050 157 | targetPort: 5050 158 | --- 159 | apiVersion: apps/v1 160 | kind: Deployment 161 | metadata: 162 | name: recommendationservice 163 | spec: 164 | selector: 165 | matchLabels: 166 | app: recommendationservice 167 | template: 168 | metadata: 169 | labels: 170 | app: recommendationservice 171 | spec: 172 | serviceAccountName: default 173 | terminationGracePeriodSeconds: 5 174 | securityContext: 175 | 
fsGroup: 1000 176 | runAsGroup: 1000 177 | runAsNonRoot: true 178 | runAsUser: 1000 179 | containers: 180 | - name: server 181 | securityContext: 182 | allowPrivilegeEscalation: false 183 | capabilities: 184 | drop: 185 | - ALL 186 | privileged: false 187 | readOnlyRootFilesystem: true 188 | image: gcr.io/google-samples/microservices-demo/recommendationservice:v0.9.0 189 | ports: 190 | - containerPort: 8080 191 | readinessProbe: 192 | periodSeconds: 5 193 | grpc: 194 | port: 8080 195 | livenessProbe: 196 | periodSeconds: 5 197 | grpc: 198 | port: 8080 199 | env: 200 | - name: PORT 201 | value: "8080" 202 | - name: PRODUCT_CATALOG_SERVICE_ADDR 203 | value: "productcatalogservice:3550" 204 | - name: DISABLE_PROFILER 205 | value: "1" 206 | resources: 207 | requests: 208 | cpu: 100m 209 | memory: 220Mi 210 | limits: 211 | cpu: 200m 212 | memory: 450Mi 213 | --- 214 | apiVersion: v1 215 | kind: Service 216 | metadata: 217 | name: recommendationservice 218 | spec: 219 | type: ClusterIP 220 | selector: 221 | app: recommendationservice 222 | ports: 223 | - name: grpc 224 | port: 8080 225 | targetPort: 8080 226 | --- 227 | apiVersion: apps/v1 228 | kind: Deployment 229 | metadata: 230 | name: frontend 231 | spec: 232 | selector: 233 | matchLabels: 234 | app: frontend 235 | template: 236 | metadata: 237 | labels: 238 | app: frontend 239 | annotations: 240 | sidecar.istio.io/rewriteAppHTTPProbers: "true" 241 | spec: 242 | serviceAccountName: default 243 | securityContext: 244 | fsGroup: 1000 245 | runAsGroup: 1000 246 | runAsNonRoot: true 247 | runAsUser: 1000 248 | containers: 249 | - name: server 250 | securityContext: 251 | allowPrivilegeEscalation: false 252 | capabilities: 253 | drop: 254 | - ALL 255 | privileged: false 256 | readOnlyRootFilesystem: true 257 | image: gcr.io/google-samples/microservices-demo/frontend:v0.9.0 258 | ports: 259 | - containerPort: 8080 260 | readinessProbe: 261 | initialDelaySeconds: 10 262 | httpGet: 263 | path: "/_healthz" 264 | port: 8080 
265 | httpHeaders: 266 | - name: "Cookie" 267 | value: "shop_session-id=x-readiness-probe" 268 | livenessProbe: 269 | initialDelaySeconds: 10 270 | httpGet: 271 | path: "/_healthz" 272 | port: 8080 273 | httpHeaders: 274 | - name: "Cookie" 275 | value: "shop_session-id=x-liveness-probe" 276 | env: 277 | - name: PORT 278 | value: "8080" 279 | - name: PRODUCT_CATALOG_SERVICE_ADDR 280 | value: "productcatalogservice:3550" 281 | - name: CURRENCY_SERVICE_ADDR 282 | value: "currencyservice:7000" 283 | - name: CART_SERVICE_ADDR 284 | value: "cartservice:7070" 285 | - name: RECOMMENDATION_SERVICE_ADDR 286 | value: "recommendationservice:8080" 287 | - name: SHIPPING_SERVICE_ADDR 288 | value: "shippingservice:50051" 289 | - name: CHECKOUT_SERVICE_ADDR 290 | value: "checkoutservice:5050" 291 | - name: AD_SERVICE_ADDR 292 | value: "adservice:9555" 293 | # # ENV_PLATFORM: One of: local, gcp, aws, azure, onprem, alibaba 294 | - name: ENV_PLATFORM 295 | value: azure 296 | # # When not set, defaults to "local" unless running in GKE, otherwise auto-sets to gcp 297 | # - name: ENV_PLATFORM 298 | # value: "aws" 299 | - name: ENABLE_PROFILER 300 | value: "0" 301 | # - name: CYMBAL_BRANDING 302 | # value: "true" 303 | # - name: FRONTEND_MESSAGE 304 | # value: "Replace this with a message you want to display on all pages." 305 | # As part of an optional Google Cloud demo, you can run a microservice called the "packaging service".
306 | # - name: PACKAGING_SERVICE_URL 307 | # value: "" # This value would look like "http://123.123.123" 308 | resources: 309 | requests: 310 | cpu: 100m 311 | memory: 64Mi 312 | limits: 313 | cpu: 200m 314 | memory: 128Mi 315 | --- 316 | apiVersion: v1 317 | kind: Service 318 | metadata: 319 | name: frontend 320 | spec: 321 | type: ClusterIP 322 | selector: 323 | app: frontend 324 | ports: 325 | - name: http 326 | port: 80 327 | targetPort: 8080 328 | --- 329 | apiVersion: v1 330 | kind: Service 331 | metadata: 332 | name: frontend-external 333 | spec: 334 | type: LoadBalancer 335 | selector: 336 | app: frontend 337 | ports: 338 | - name: http 339 | port: 80 340 | targetPort: 8080 341 | --- 342 | apiVersion: apps/v1 343 | kind: Deployment 344 | metadata: 345 | name: paymentservice 346 | spec: 347 | selector: 348 | matchLabels: 349 | app: paymentservice 350 | template: 351 | metadata: 352 | labels: 353 | app: paymentservice 354 | spec: 355 | serviceAccountName: default 356 | terminationGracePeriodSeconds: 5 357 | securityContext: 358 | fsGroup: 1000 359 | runAsGroup: 1000 360 | runAsNonRoot: true 361 | runAsUser: 1000 362 | containers: 363 | - name: server 364 | securityContext: 365 | allowPrivilegeEscalation: false 366 | capabilities: 367 | drop: 368 | - ALL 369 | privileged: false 370 | readOnlyRootFilesystem: true 371 | image: gcr.io/google-samples/microservices-demo/paymentservice:v0.9.0 372 | ports: 373 | - containerPort: 50051 374 | env: 375 | - name: PORT 376 | value: "50051" 377 | - name: DISABLE_PROFILER 378 | value: "1" 379 | readinessProbe: 380 | grpc: 381 | port: 50051 382 | livenessProbe: 383 | grpc: 384 | port: 50051 385 | resources: 386 | requests: 387 | cpu: 100m 388 | memory: 64Mi 389 | limits: 390 | cpu: 200m 391 | memory: 128Mi 392 | --- 393 | apiVersion: v1 394 | kind: Service 395 | metadata: 396 | name: paymentservice 397 | spec: 398 | type: ClusterIP 399 | selector: 400 | app: paymentservice 401 | ports: 402 | - name: grpc 403 | port: 50051 
404 | targetPort: 50051 405 | --- 406 | apiVersion: apps/v1 407 | kind: Deployment 408 | metadata: 409 | name: productcatalogservice 410 | spec: 411 | selector: 412 | matchLabels: 413 | app: productcatalogservice 414 | template: 415 | metadata: 416 | labels: 417 | app: productcatalogservice 418 | spec: 419 | serviceAccountName: default 420 | terminationGracePeriodSeconds: 5 421 | securityContext: 422 | fsGroup: 1000 423 | runAsGroup: 1000 424 | runAsNonRoot: true 425 | runAsUser: 1000 426 | containers: 427 | - name: server 428 | securityContext: 429 | allowPrivilegeEscalation: false 430 | capabilities: 431 | drop: 432 | - ALL 433 | privileged: false 434 | readOnlyRootFilesystem: true 435 | image: gcr.io/google-samples/microservices-demo/productcatalogservice:v0.9.0 436 | ports: 437 | - containerPort: 3550 438 | env: 439 | - name: PORT 440 | value: "3550" 441 | - name: DISABLE_PROFILER 442 | value: "1" 443 | readinessProbe: 444 | grpc: 445 | port: 3550 446 | livenessProbe: 447 | grpc: 448 | port: 3550 449 | resources: 450 | requests: 451 | cpu: 100m 452 | memory: 64Mi 453 | limits: 454 | cpu: 200m 455 | memory: 128Mi 456 | --- 457 | apiVersion: v1 458 | kind: Service 459 | metadata: 460 | name: productcatalogservice 461 | spec: 462 | type: ClusterIP 463 | selector: 464 | app: productcatalogservice 465 | ports: 466 | - name: grpc 467 | port: 3550 468 | targetPort: 3550 469 | --- 470 | apiVersion: apps/v1 471 | kind: Deployment 472 | metadata: 473 | name: cartservice 474 | spec: 475 | selector: 476 | matchLabels: 477 | app: cartservice 478 | template: 479 | metadata: 480 | labels: 481 | app: cartservice 482 | spec: 483 | serviceAccountName: default 484 | terminationGracePeriodSeconds: 5 485 | securityContext: 486 | fsGroup: 1000 487 | runAsGroup: 1000 488 | runAsNonRoot: true 489 | runAsUser: 1000 490 | containers: 491 | - name: server 492 | securityContext: 493 | allowPrivilegeEscalation: false 494 | capabilities: 495 | drop: 496 | - ALL 497 | privileged: false 498 | 
readOnlyRootFilesystem: true 499 | image: gcr.io/google-samples/microservices-demo/cartservice:v0.9.0 500 | ports: 501 | - containerPort: 7070 502 | env: 503 | - name: REDIS_ADDR 504 | value: "redis-cart:6379" 505 | resources: 506 | requests: 507 | cpu: 200m 508 | memory: 64Mi 509 | limits: 510 | cpu: 300m 511 | memory: 128Mi 512 | readinessProbe: 513 | initialDelaySeconds: 15 514 | grpc: 515 | port: 7070 516 | livenessProbe: 517 | initialDelaySeconds: 15 518 | periodSeconds: 10 519 | grpc: 520 | port: 7070 521 | --- 522 | apiVersion: v1 523 | kind: Service 524 | metadata: 525 | name: cartservice 526 | spec: 527 | type: ClusterIP 528 | selector: 529 | app: cartservice 530 | ports: 531 | - name: grpc 532 | port: 7070 533 | targetPort: 7070 534 | --- 535 | apiVersion: apps/v1 536 | kind: Deployment 537 | metadata: 538 | name: loadgenerator 539 | spec: 540 | selector: 541 | matchLabels: 542 | app: loadgenerator 543 | replicas: 1 544 | template: 545 | metadata: 546 | labels: 547 | app: loadgenerator 548 | annotations: 549 | sidecar.istio.io/rewriteAppHTTPProbers: "true" 550 | spec: 551 | serviceAccountName: default 552 | terminationGracePeriodSeconds: 5 553 | restartPolicy: Always 554 | # securityContext: 555 | # fsGroup: 1000 556 | # runAsGroup: 1000 557 | # runAsNonRoot: true 558 | # runAsUser: 1000 559 | initContainers: 560 | - command: 561 | - /bin/sh 562 | - -exc 563 | - | 564 | echo "Init container pinging frontend: ${FRONTEND_ADDR}..." 
565 | STATUSCODE=$(wget --server-response http://${FRONTEND_ADDR} 2>&1 | awk '/^ HTTP/{print $2}') 566 | if test $STATUSCODE -ne 200; then 567 | echo "Error: Could not reach frontend - Status code: ${STATUSCODE}" 568 | exit 1 569 | fi 570 | name: frontend-check 571 | securityContext: 572 | allowPrivilegeEscalation: false 573 | capabilities: 574 | drop: 575 | - ALL 576 | privileged: false 577 | readOnlyRootFilesystem: true 578 | image: busybox:latest 579 | env: 580 | - name: FRONTEND_ADDR 581 | value: "frontend:80" 582 | containers: 583 | - name: main 584 | # securityContext: 585 | # allowPrivilegeEscalation: false 586 | # capabilities: 587 | # drop: 588 | # - ALL 589 | # privileged: false 590 | # readOnlyRootFilesystem: true 591 | image: gcr.io/google-samples/microservices-demo/loadgenerator:v0.9.0 592 | env: 593 | - name: FRONTEND_ADDR 594 | value: "frontend:80" 595 | - name: USERS 596 | value: "10" 597 | resources: 598 | requests: 599 | cpu: 300m 600 | memory: 256Mi 601 | limits: 602 | cpu: 500m 603 | memory: 512Mi 604 | --- 605 | apiVersion: apps/v1 606 | kind: Deployment 607 | metadata: 608 | name: currencyservice 609 | spec: 610 | selector: 611 | matchLabels: 612 | app: currencyservice 613 | template: 614 | metadata: 615 | labels: 616 | app: currencyservice 617 | spec: 618 | serviceAccountName: default 619 | terminationGracePeriodSeconds: 5 620 | securityContext: 621 | fsGroup: 1000 622 | runAsGroup: 1000 623 | runAsNonRoot: true 624 | runAsUser: 1000 625 | containers: 626 | - name: server 627 | securityContext: 628 | allowPrivilegeEscalation: false 629 | capabilities: 630 | drop: 631 | - ALL 632 | privileged: false 633 | readOnlyRootFilesystem: true 634 | image: gcr.io/google-samples/microservices-demo/currencyservice:v0.9.0 635 | ports: 636 | - name: grpc 637 | containerPort: 7000 638 | env: 639 | - name: PORT 640 | value: "7000" 641 | - name: DISABLE_PROFILER 642 | value: "1" 643 | readinessProbe: 644 | grpc: 645 | port: 7000 646 | livenessProbe: 647 | 
grpc: 648 | port: 7000 649 | resources: 650 | requests: 651 | cpu: 100m 652 | memory: 64Mi 653 | limits: 654 | cpu: 200m 655 | memory: 128Mi 656 | --- 657 | apiVersion: v1 658 | kind: Service 659 | metadata: 660 | name: currencyservice 661 | spec: 662 | type: ClusterIP 663 | selector: 664 | app: currencyservice 665 | ports: 666 | - name: grpc 667 | port: 7000 668 | targetPort: 7000 669 | --- 670 | apiVersion: apps/v1 671 | kind: Deployment 672 | metadata: 673 | name: shippingservice 674 | spec: 675 | selector: 676 | matchLabels: 677 | app: shippingservice 678 | template: 679 | metadata: 680 | labels: 681 | app: shippingservice 682 | spec: 683 | serviceAccountName: default 684 | securityContext: 685 | fsGroup: 1000 686 | runAsGroup: 1000 687 | runAsNonRoot: true 688 | runAsUser: 1000 689 | containers: 690 | - name: server 691 | securityContext: 692 | allowPrivilegeEscalation: false 693 | capabilities: 694 | drop: 695 | - ALL 696 | privileged: false 697 | readOnlyRootFilesystem: true 698 | image: gcr.io/google-samples/microservices-demo/shippingservice:v0.9.0 699 | ports: 700 | - containerPort: 50051 701 | env: 702 | - name: PORT 703 | value: "50051" 704 | - name: DISABLE_PROFILER 705 | value: "1" 706 | readinessProbe: 707 | periodSeconds: 5 708 | grpc: 709 | port: 50051 710 | livenessProbe: 711 | grpc: 712 | port: 50051 713 | resources: 714 | requests: 715 | cpu: 100m 716 | memory: 64Mi 717 | limits: 718 | cpu: 200m 719 | memory: 128Mi 720 | --- 721 | apiVersion: v1 722 | kind: Service 723 | metadata: 724 | name: shippingservice 725 | spec: 726 | type: ClusterIP 727 | selector: 728 | app: shippingservice 729 | ports: 730 | - name: grpc 731 | port: 50051 732 | targetPort: 50051 733 | --- 734 | apiVersion: apps/v1 735 | kind: Deployment 736 | metadata: 737 | name: redis-cart 738 | spec: 739 | selector: 740 | matchLabels: 741 | app: redis-cart 742 | template: 743 | metadata: 744 | labels: 745 | app: redis-cart 746 | spec: 747 | securityContext: 748 | fsGroup: 1000 749 
| runAsGroup: 1000 750 | runAsNonRoot: true 751 | runAsUser: 1000 752 | containers: 753 | - name: redis 754 | securityContext: 755 | allowPrivilegeEscalation: false 756 | capabilities: 757 | drop: 758 | - ALL 759 | privileged: false 760 | readOnlyRootFilesystem: true 761 | image: redis:alpine 762 | ports: 763 | - containerPort: 6379 764 | readinessProbe: 765 | periodSeconds: 5 766 | tcpSocket: 767 | port: 6379 768 | livenessProbe: 769 | periodSeconds: 5 770 | tcpSocket: 771 | port: 6379 772 | volumeMounts: 773 | - mountPath: /data 774 | name: redis-data 775 | resources: 776 | limits: 777 | memory: 256Mi 778 | cpu: 125m 779 | requests: 780 | cpu: 70m 781 | memory: 200Mi 782 | volumes: 783 | - name: redis-data 784 | emptyDir: {} 785 | --- 786 | apiVersion: v1 787 | kind: Service 788 | metadata: 789 | name: redis-cart 790 | spec: 791 | type: ClusterIP 792 | selector: 793 | app: redis-cart 794 | ports: 795 | - name: tcp-redis 796 | port: 6379 797 | targetPort: 6379 798 | --- 799 | apiVersion: apps/v1 800 | kind: Deployment 801 | metadata: 802 | name: adservice 803 | spec: 804 | selector: 805 | matchLabels: 806 | app: adservice 807 | template: 808 | metadata: 809 | labels: 810 | app: adservice 811 | spec: 812 | serviceAccountName: default 813 | terminationGracePeriodSeconds: 5 814 | securityContext: 815 | fsGroup: 1000 816 | runAsGroup: 1000 817 | runAsNonRoot: true 818 | runAsUser: 1000 819 | containers: 820 | - name: server 821 | securityContext: 822 | allowPrivilegeEscalation: false 823 | capabilities: 824 | drop: 825 | - ALL 826 | privileged: false 827 | readOnlyRootFilesystem: true 828 | image: gcr.io/google-samples/microservices-demo/adservice:v0.9.0 829 | ports: 830 | - containerPort: 9555 831 | env: 832 | - name: PORT 833 | value: "9555" 834 | resources: 835 | requests: 836 | cpu: 200m 837 | memory: 180Mi 838 | limits: 839 | cpu: 300m 840 | memory: 300Mi 841 | readinessProbe: 842 | initialDelaySeconds: 20 843 | periodSeconds: 15 844 | grpc: 845 | port: 9555 846 
| livenessProbe: 847 | initialDelaySeconds: 20 848 | periodSeconds: 15 849 | grpc: 850 | port: 9555 851 | --- 852 | apiVersion: v1 853 | kind: Service 854 | metadata: 855 | name: adservice 856 | spec: 857 | type: ClusterIP 858 | selector: 859 | app: adservice 860 | ports: 861 | - name: grpc 862 | port: 9555 863 | targetPort: 9555 864 | -------------------------------------------------------------------------------- /demo/boutiqueshop/policies.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: NetworkPolicy 4 | metadata: 5 | name: application.adservice 6 | namespace: default 7 | spec: 8 | tier: application 9 | order: 100 10 | selector: app == "adservice" 11 | ingress: 12 | - action: Allow 13 | protocol: TCP 14 | source: 15 | selector: app == "frontend" 16 | destination: 17 | ports: 18 | - '9555' 19 | egress: 20 | - action: Allow 21 | protocol: TCP 22 | source: {} 23 | destination: 24 | ports: 25 | - '80' 26 | types: 27 | - Ingress 28 | - Egress 29 | 30 | --- 31 | apiVersion: projectcalico.org/v3 32 | kind: NetworkPolicy 33 | metadata: 34 | name: application.cartservice 35 | namespace: default 36 | spec: 37 | tier: application 38 | order: 110 39 | selector: app == "cartservice" 40 | ingress: 41 | - action: Allow 42 | protocol: TCP 43 | source: 44 | selector: app == "checkoutservice" 45 | destination: 46 | ports: 47 | - '7070' 48 | - action: Allow 49 | protocol: TCP 50 | source: 51 | selector: app == "frontend" 52 | destination: 53 | ports: 54 | - '7070' 55 | egress: 56 | - action: Allow 57 | protocol: TCP 58 | source: {} 59 | destination: 60 | ports: 61 | - '6379' 62 | - action: Allow 63 | protocol: TCP 64 | source: {} 65 | destination: 66 | selector: app == "redis-cart" 67 | ports: 68 | - '6379' 69 | types: 70 | - Ingress 71 | - Egress 72 | 73 | --- 74 | apiVersion: projectcalico.org/v3 75 | kind: NetworkPolicy 76 | metadata: 77 | name: application.checkoutservice 78 | 
namespace: default 79 | spec: 80 | tier: application 81 | order: 120 82 | selector: app == "checkoutservice" 83 | ingress: 84 | - action: Allow 85 | protocol: TCP 86 | source: 87 | selector: app == "frontend" 88 | destination: 89 | ports: 90 | - '5050' 91 | egress: 92 | - action: Allow 93 | protocol: TCP 94 | source: {} 95 | destination: 96 | selector: app == "productcatalogservice" 97 | ports: 98 | - '3550' 99 | - action: Allow 100 | protocol: TCP 101 | source: {} 102 | destination: 103 | selector: app == "shippingservice" 104 | ports: 105 | - '50051' 106 | - action: Allow 107 | protocol: TCP 108 | source: {} 109 | destination: 110 | ports: 111 | - '80' 112 | - action: Allow 113 | protocol: TCP 114 | source: {} 115 | destination: 116 | selector: app == "cartservice" 117 | ports: 118 | - '7070' 119 | - action: Allow 120 | protocol: TCP 121 | source: {} 122 | destination: 123 | selector: app == "currencyservice" 124 | ports: 125 | - '7000' 126 | - action: Allow 127 | protocol: TCP 128 | source: {} 129 | destination: 130 | selector: app == "emailservice" 131 | ports: 132 | - '8080' 133 | - action: Allow 134 | protocol: TCP 135 | source: {} 136 | destination: 137 | selector: app == "paymentservice" 138 | ports: 139 | - '50051' 140 | types: 141 | - Ingress 142 | - Egress 143 | 144 | --- 145 | apiVersion: projectcalico.org/v3 146 | kind: NetworkPolicy 147 | metadata: 148 | name: application.currencyservice 149 | namespace: default 150 | spec: 151 | tier: application 152 | order: 130 153 | selector: app == "currencyservice" 154 | ingress: 155 | - action: Allow 156 | protocol: TCP 157 | source: 158 | selector: app == "checkoutservice" 159 | destination: 160 | ports: 161 | - '7000' 162 | - action: Allow 163 | protocol: TCP 164 | source: 165 | selector: app == "frontend" 166 | destination: 167 | ports: 168 | - '7000' 169 | types: 170 | - Ingress 171 | 172 | --- 173 | apiVersion: projectcalico.org/v3 174 | kind: NetworkPolicy 175 | metadata: 176 | name: 
application.emailservice 177 | namespace: default 178 | spec: 179 | tier: application 180 | order: 140 181 | selector: app == "emailservice" 182 | ingress: 183 | - action: Allow 184 | protocol: TCP 185 | source: 186 | selector: app == "checkoutservice" 187 | destination: 188 | ports: 189 | - '8080' 190 | types: 191 | - Ingress 192 | 193 | --- 194 | apiVersion: projectcalico.org/v3 195 | kind: NetworkPolicy 196 | metadata: 197 | name: application.frontend 198 | namespace: default 199 | spec: 200 | tier: application 201 | order: 150 202 | selector: app == "frontend" 203 | ingress: 204 | - action: Allow 205 | protocol: TCP 206 | source: 207 | selector: app == "loadgenerator" 208 | destination: 209 | ports: 210 | - '8080' 211 | - action: Allow 212 | protocol: TCP 213 | source: {} 214 | destination: 215 | ports: 216 | - '8080' 217 | - action: Allow 218 | protocol: TCP 219 | source: 220 | selector: >- 221 | (component == "apiserver"&&endpoints.projectcalico.org/serviceName == 222 | "kubernetes") 223 | destination: 224 | ports: 225 | - '56590' 226 | egress: 227 | - action: Allow 228 | protocol: TCP 229 | source: {} 230 | destination: 231 | selector: app == "checkoutservice" 232 | ports: 233 | - '5050' 234 | - action: Allow 235 | protocol: TCP 236 | source: {} 237 | destination: 238 | selector: app == "currencyservice" 239 | ports: 240 | - '7000' 241 | - action: Allow 242 | protocol: TCP 243 | source: {} 244 | destination: 245 | selector: app == "productcatalogservice" 246 | ports: 247 | - '3550' 248 | - action: Allow 249 | protocol: TCP 250 | source: {} 251 | destination: 252 | selector: app == "recommendationservice" 253 | ports: 254 | - '8080' 255 | - action: Allow 256 | protocol: TCP 257 | source: {} 258 | destination: 259 | selector: app == "shippingservice" 260 | ports: 261 | - '50051' 262 | - action: Allow 263 | protocol: TCP 264 | source: {} 265 | destination: 266 | ports: 267 | - '8080' 268 | - '5050' 269 | - '9555' 270 | - '7070' 271 | - '7000' 272 | - action: 
Allow 273 | protocol: TCP 274 | source: {} 275 | destination: 276 | selector: app == "adservice" 277 | ports: 278 | - '9555' 279 | - action: Allow 280 | protocol: TCP 281 | source: {} 282 | destination: 283 | selector: app == "cartservice" 284 | ports: 285 | - '7070' 286 | types: 287 | - Ingress 288 | - Egress 289 | 290 | --- 291 | apiVersion: projectcalico.org/v3 292 | kind: NetworkPolicy 293 | metadata: 294 | name: application.loadgenerator 295 | namespace: default 296 | spec: 297 | tier: application 298 | order: 160 299 | selector: app == "loadgenerator" 300 | egress: 301 | - action: Allow 302 | protocol: TCP 303 | source: {} 304 | destination: 305 | selector: projectcalico.org/namespace == "default" 306 | ports: 307 | - '80' 308 | - action: Allow 309 | protocol: TCP 310 | source: {} 311 | destination: 312 | selector: app == "frontend" 313 | ports: 314 | - '8080' 315 | types: 316 | - Egress 317 | 318 | --- 319 | apiVersion: projectcalico.org/v3 320 | kind: NetworkPolicy 321 | metadata: 322 | name: application.paymentservice 323 | namespace: default 324 | spec: 325 | tier: application 326 | order: 170 327 | selector: app == "paymentservice" 328 | ingress: 329 | - action: Allow 330 | protocol: TCP 331 | source: 332 | selector: app == "checkoutservice" 333 | destination: 334 | ports: 335 | - '50051' 336 | types: 337 | - Ingress 338 | 339 | --- 340 | apiVersion: projectcalico.org/v3 341 | kind: NetworkPolicy 342 | metadata: 343 | name: application.productcatalogservice 344 | namespace: default 345 | spec: 346 | tier: application 347 | order: 180 348 | selector: app == "productcatalogservice" 349 | ingress: 350 | - action: Allow 351 | protocol: TCP 352 | source: 353 | selector: app == "checkoutservice" 354 | destination: 355 | ports: 356 | - '3550' 357 | - action: Allow 358 | protocol: TCP 359 | source: 360 | selector: app == "frontend" 361 | destination: 362 | ports: 363 | - '3550' 364 | - action: Allow 365 | protocol: TCP 366 | source: 367 | selector: app == 
"recommendationservice" 368 | destination: 369 | ports: 370 | - '3550' 371 | - action: Allow 372 | protocol: TCP 373 | source: 374 | selector: >- 375 | (component == "apiserver"&&endpoints.projectcalico.org/serviceName == 376 | "kubernetes") 377 | destination: 378 | ports: 379 | - '35302' 380 | egress: 381 | - action: Allow 382 | protocol: TCP 383 | source: {} 384 | destination: 385 | ports: 386 | - '80' 387 | types: 388 | - Ingress 389 | - Egress 390 | 391 | --- 392 | apiVersion: projectcalico.org/v3 393 | kind: NetworkPolicy 394 | metadata: 395 | name: application.recommendationservice 396 | namespace: default 397 | spec: 398 | tier: application 399 | order: 190 400 | selector: app == "recommendationservice" 401 | ingress: 402 | - action: Allow 403 | protocol: TCP 404 | source: 405 | selector: app == "frontend" 406 | destination: 407 | ports: 408 | - '8080' 409 | egress: 410 | - action: Allow 411 | protocol: TCP 412 | source: {} 413 | destination: 414 | ports: 415 | - '80' 416 | - action: Allow 417 | protocol: TCP 418 | source: {} 419 | destination: 420 | selector: app == "productcatalogservice" 421 | ports: 422 | - '3550' 423 | types: 424 | - Ingress 425 | - Egress 426 | 427 | --- 428 | apiVersion: projectcalico.org/v3 429 | kind: NetworkPolicy 430 | metadata: 431 | name: application.redis-cart 432 | namespace: default 433 | spec: 434 | tier: application 435 | order: 200 436 | selector: app == "redis-cart" 437 | ingress: 438 | - action: Allow 439 | protocol: TCP 440 | source: 441 | selector: app == "cartservice" 442 | destination: 443 | ports: 444 | - '6379' 445 | types: 446 | - Ingress 447 | 448 | --- 449 | apiVersion: projectcalico.org/v3 450 | kind: NetworkPolicy 451 | metadata: 452 | name: application.shippingservice 453 | namespace: default 454 | spec: 455 | tier: application 456 | order: 210 457 | selector: app == "shippingservice" 458 | ingress: 459 | - action: Allow 460 | protocol: TCP 461 | source: 462 | selector: app == "checkoutservice" 463 | 
destination: 464 | ports: 465 | - '50051' 466 | - action: Allow 467 | protocol: TCP 468 | source: 469 | selector: app == "frontend" 470 | destination: 471 | ports: 472 | - '50051' 473 | egress: 474 | - action: Allow 475 | protocol: TCP 476 | source: {} 477 | destination: 478 | ports: 479 | - '80' 480 | types: 481 | - Ingress 482 | - Egress 483 | 484 | # default deny for boutiqueshop 485 | --- 486 | apiVersion: projectcalico.org/v3 487 | kind: NetworkPolicy 488 | metadata: 489 | name: application.default-deny 490 | namespace: default 491 | spec: 492 | tier: application 493 | order: 1500 494 | selector: all() 495 | ingress: 496 | - action: Deny 497 | egress: 498 | - action: Deny 499 | types: 500 | - Ingress 501 | - Egress 502 | -------------------------------------------------------------------------------- /demo/boutiqueshop/staged.default-deny.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: projectcalico.org/v3 2 | kind: StagedNetworkPolicy 3 | metadata: 4 | name: default-deny 5 | spec: 6 | order: 2000 7 | selector: "projectcalico.org/namespace == 'default'" 8 | types: 9 | - Ingress 10 | - Egress 11 | -------------------------------------------------------------------------------- /demo/boutiqueshop/staged.policies.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: StagedNetworkPolicy 4 | metadata: 5 | name: application.adservice 6 | namespace: default 7 | spec: 8 | tier: application 9 | order: 100 10 | selector: app == "adservice" 11 | ingress: 12 | - action: Allow 13 | protocol: TCP 14 | source: 15 | selector: app == "frontend" 16 | destination: 17 | ports: 18 | - '9555' 19 | egress: 20 | - action: Allow 21 | protocol: TCP 22 | source: {} 23 | destination: 24 | ports: 25 | - '80' 26 | types: 27 | - Ingress 28 | - Egress 29 | 30 | --- 31 | apiVersion: projectcalico.org/v3 32 | kind: StagedNetworkPolicy 33 | metadata: 34 
| name: application.cartservice 35 | namespace: default 36 | spec: 37 | tier: application 38 | order: 110 39 | selector: app == "cartservice" 40 | ingress: 41 | - action: Allow 42 | protocol: TCP 43 | source: 44 | selector: app == "checkoutservice" 45 | destination: 46 | ports: 47 | - '7070' 48 | - action: Allow 49 | protocol: TCP 50 | source: 51 | selector: app == "frontend" 52 | destination: 53 | ports: 54 | - '7070' 55 | egress: 56 | - action: Allow 57 | protocol: TCP 58 | source: {} 59 | destination: 60 | ports: 61 | - '6379' 62 | - action: Allow 63 | protocol: TCP 64 | source: {} 65 | destination: 66 | selector: app == "redis-cart" 67 | ports: 68 | - '6379' 69 | types: 70 | - Ingress 71 | - Egress 72 | 73 | --- 74 | apiVersion: projectcalico.org/v3 75 | kind: StagedNetworkPolicy 76 | metadata: 77 | name: application.checkoutservice 78 | namespace: default 79 | spec: 80 | tier: application 81 | order: 120 82 | selector: app == "checkoutservice" 83 | ingress: 84 | - action: Allow 85 | protocol: TCP 86 | source: 87 | selector: app == "frontend" 88 | destination: 89 | ports: 90 | - '5050' 91 | egress: 92 | - action: Allow 93 | protocol: TCP 94 | source: {} 95 | destination: 96 | selector: app == "productcatalogservice" 97 | ports: 98 | - '3550' 99 | - action: Allow 100 | protocol: TCP 101 | source: {} 102 | destination: 103 | selector: app == "shippingservice" 104 | ports: 105 | - '50051' 106 | - action: Allow 107 | protocol: TCP 108 | source: {} 109 | destination: 110 | ports: 111 | - '80' 112 | - action: Allow 113 | protocol: TCP 114 | source: {} 115 | destination: 116 | selector: app == "cartservice" 117 | ports: 118 | - '7070' 119 | - action: Allow 120 | protocol: TCP 121 | source: {} 122 | destination: 123 | selector: app == "currencyservice" 124 | ports: 125 | - '7000' 126 | - action: Allow 127 | protocol: TCP 128 | source: {} 129 | destination: 130 | selector: app == "emailservice" 131 | ports: 132 | - '8080' 133 | - action: Allow 134 | protocol: TCP 135 | 
source: {} 136 | destination: 137 | selector: app == "paymentservice" 138 | ports: 139 | - '50051' 140 | types: 141 | - Ingress 142 | - Egress 143 | 144 | --- 145 | apiVersion: projectcalico.org/v3 146 | kind: StagedNetworkPolicy 147 | metadata: 148 | name: application.currencyservice 149 | namespace: default 150 | spec: 151 | tier: application 152 | order: 130 153 | selector: app == "currencyservice" 154 | ingress: 155 | - action: Allow 156 | protocol: TCP 157 | source: 158 | selector: app == "checkoutservice" 159 | destination: 160 | ports: 161 | - '7000' 162 | - action: Allow 163 | protocol: TCP 164 | source: 165 | selector: app == "frontend" 166 | destination: 167 | ports: 168 | - '7000' 169 | types: 170 | - Ingress 171 | 172 | --- 173 | apiVersion: projectcalico.org/v3 174 | kind: StagedNetworkPolicy 175 | metadata: 176 | name: application.emailservice 177 | namespace: default 178 | spec: 179 | tier: application 180 | order: 140 181 | selector: app == "emailservice" 182 | ingress: 183 | - action: Allow 184 | protocol: TCP 185 | source: 186 | selector: app == "checkoutservice" 187 | destination: 188 | ports: 189 | - '8080' 190 | types: 191 | - Ingress 192 | 193 | --- 194 | apiVersion: projectcalico.org/v3 195 | kind: StagedNetworkPolicy 196 | metadata: 197 | name: application.frontend 198 | namespace: default 199 | spec: 200 | tier: application 201 | order: 150 202 | selector: app == "frontend" 203 | ingress: 204 | - action: Allow 205 | protocol: TCP 206 | source: 207 | selector: app == "loadgenerator" 208 | destination: 209 | ports: 210 | - '8080' 211 | - action: Allow 212 | protocol: TCP 213 | source: {} 214 | destination: 215 | ports: 216 | - '8080' 217 | - action: Allow 218 | protocol: TCP 219 | source: 220 | selector: >- 221 | (component == "apiserver"&&endpoints.projectcalico.org/serviceName == 222 | "kubernetes") 223 | destination: 224 | ports: 225 | - '56590' 226 | egress: 227 | - action: Allow 228 | protocol: TCP 229 | source: {} 230 | destination: 231 
| selector: app == "checkoutservice" 232 | ports: 233 | - '5050' 234 | - action: Allow 235 | protocol: TCP 236 | source: {} 237 | destination: 238 | selector: app == "currencyservice" 239 | ports: 240 | - '7000' 241 | - action: Allow 242 | protocol: TCP 243 | source: {} 244 | destination: 245 | selector: app == "productcatalogservice" 246 | ports: 247 | - '3550' 248 | - action: Allow 249 | protocol: TCP 250 | source: {} 251 | destination: 252 | selector: app == "recommendationservice" 253 | ports: 254 | - '8080' 255 | - action: Allow 256 | protocol: TCP 257 | source: {} 258 | destination: 259 | selector: app == "shippingservice" 260 | ports: 261 | - '50051' 262 | - action: Allow 263 | protocol: TCP 264 | source: {} 265 | destination: 266 | ports: 267 | - '8080' 268 | - '5050' 269 | - '9555' 270 | - '7070' 271 | - '7000' 272 | - action: Allow 273 | protocol: TCP 274 | source: {} 275 | destination: 276 | selector: app == "adservice" 277 | ports: 278 | - '9555' 279 | - action: Allow 280 | protocol: TCP 281 | source: {} 282 | destination: 283 | selector: app == "cartservice" 284 | ports: 285 | - '7070' 286 | types: 287 | - Ingress 288 | - Egress 289 | 290 | --- 291 | apiVersion: projectcalico.org/v3 292 | kind: StagedNetworkPolicy 293 | metadata: 294 | name: application.loadgenerator 295 | namespace: default 296 | spec: 297 | tier: application 298 | order: 160 299 | selector: app == "loadgenerator" 300 | egress: 301 | - action: Allow 302 | protocol: TCP 303 | source: {} 304 | destination: 305 | selector: projectcalico.org/namespace == "default" 306 | ports: 307 | - '80' 308 | - action: Allow 309 | protocol: TCP 310 | source: {} 311 | destination: 312 | selector: app == "frontend" 313 | ports: 314 | - '8080' 315 | types: 316 | - Egress 317 | 318 | --- 319 | apiVersion: projectcalico.org/v3 320 | kind: StagedNetworkPolicy 321 | metadata: 322 | name: application.paymentservice 323 | namespace: default 324 | spec: 325 | tier: application 326 | order: 170 327 | selector: 
app == "paymentservice" 328 | ingress: 329 | - action: Allow 330 | protocol: TCP 331 | source: 332 | selector: app == "checkoutservice" 333 | destination: 334 | ports: 335 | - '50051' 336 | types: 337 | - Ingress 338 | 339 | --- 340 | apiVersion: projectcalico.org/v3 341 | kind: StagedNetworkPolicy 342 | metadata: 343 | name: application.productcatalogservice 344 | namespace: default 345 | spec: 346 | tier: application 347 | order: 180 348 | selector: app == "productcatalogservice" 349 | ingress: 350 | - action: Allow 351 | protocol: TCP 352 | source: 353 | selector: app == "checkoutservice" 354 | destination: 355 | ports: 356 | - '3550' 357 | - action: Allow 358 | protocol: TCP 359 | source: 360 | selector: app == "frontend" 361 | destination: 362 | ports: 363 | - '3550' 364 | - action: Allow 365 | protocol: TCP 366 | source: 367 | selector: app == "recommendationservice" 368 | destination: 369 | ports: 370 | - '3550' 371 | - action: Allow 372 | protocol: TCP 373 | source: 374 | selector: >- 375 | (component == "apiserver"&&endpoints.projectcalico.org/serviceName == 376 | "kubernetes") 377 | destination: 378 | ports: 379 | - '35302' 380 | egress: 381 | - action: Allow 382 | protocol: TCP 383 | source: {} 384 | destination: 385 | ports: 386 | - '80' 387 | types: 388 | - Ingress 389 | - Egress 390 | 391 | --- 392 | apiVersion: projectcalico.org/v3 393 | kind: StagedNetworkPolicy 394 | metadata: 395 | name: application.recommendationservice 396 | namespace: default 397 | spec: 398 | tier: application 399 | order: 190 400 | selector: app == "recommendationservice" 401 | ingress: 402 | - action: Allow 403 | protocol: TCP 404 | source: 405 | selector: app == "frontend" 406 | destination: 407 | ports: 408 | - '8080' 409 | egress: 410 | - action: Allow 411 | protocol: TCP 412 | source: {} 413 | destination: 414 | ports: 415 | - '80' 416 | - action: Allow 417 | protocol: TCP 418 | source: {} 419 | destination: 420 | selector: app == "productcatalogservice" 421 | ports: 422 
| - '3550' 423 | types: 424 | - Ingress 425 | - Egress 426 | 427 | --- 428 | apiVersion: projectcalico.org/v3 429 | kind: StagedNetworkPolicy 430 | metadata: 431 | name: application.redis-cart 432 | namespace: default 433 | spec: 434 | tier: application 435 | order: 200 436 | selector: app == "redis-cart" 437 | ingress: 438 | - action: Allow 439 | protocol: TCP 440 | source: 441 | selector: app == "cartservice" 442 | destination: 443 | ports: 444 | - '6379' 445 | types: 446 | - Ingress 447 | 448 | --- 449 | apiVersion: projectcalico.org/v3 450 | kind: StagedNetworkPolicy 451 | metadata: 452 | name: application.shippingservice 453 | namespace: default 454 | spec: 455 | tier: application 456 | order: 210 457 | selector: app == "shippingservice" 458 | ingress: 459 | - action: Allow 460 | protocol: TCP 461 | source: 462 | selector: app == "checkoutservice" 463 | destination: 464 | ports: 465 | - '50051' 466 | - action: Allow 467 | protocol: TCP 468 | source: 469 | selector: app == "frontend" 470 | destination: 471 | ports: 472 | - '50051' 473 | egress: 474 | - action: Allow 475 | protocol: TCP 476 | source: {} 477 | destination: 478 | ports: 479 | - '80' 480 | types: 481 | - Ingress 482 | - Egress 483 | 484 | # default deny for boutiqueshop 485 | --- 486 | apiVersion: projectcalico.org/v3 487 | kind: StagedNetworkPolicy 488 | metadata: 489 | name: application.default-deny 490 | namespace: default 491 | spec: 492 | tier: application 493 | order: 1500 494 | selector: all() 495 | ingress: 496 | - action: Deny 497 | egress: 498 | - action: Deny 499 | types: 500 | - Ingress 501 | - Egress 502 | -------------------------------------------------------------------------------- /demo/dev/app.manifests.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: Namespace 3 | apiVersion: v1 4 | metadata: 5 | name: dev 6 | labels: 7 | compliance: open 8 | environment: development 9 | 10 | --- 11 | apiVersion: v1 12 | kind: Pod 13 | 
metadata: 14 | name: centos 15 | namespace: dev 16 | labels: 17 | app: centos 18 | 19 | spec: 20 | containers: 21 | - name: centos 22 | image: centos:latest 23 | command: [ "/bin/bash", "-c", "--" ] 24 | args: [ "while true; do curl -m3 http://nginx-svc; sleep 3; done;" ] 25 | resources: {} 26 | 27 | --- 28 | apiVersion: apps/v1 29 | kind: Deployment 30 | metadata: 31 | name: dev-nginx 32 | namespace: dev 33 | spec: 34 | selector: 35 | matchLabels: 36 | app: nginx 37 | security: strict 38 | replicas: 2 39 | template: 40 | metadata: 41 | labels: 42 | app: nginx 43 | security: strict 44 | spec: 45 | containers: 46 | - name: nginx 47 | image: nginx 48 | ports: 49 | - containerPort: 80 50 | resources: {} 51 | 52 | --- 53 | apiVersion: v1 54 | kind: Service 55 | metadata: 56 | name: nginx-svc 57 | namespace: dev 58 | labels: 59 | service: nginx 60 | spec: 61 | ports: 62 | - port: 80 63 | targetPort: 80 64 | protocol: TCP 65 | selector: 66 | app: nginx 67 | 68 | --- 69 | apiVersion: v1 70 | kind: Pod 71 | metadata: 72 | name: netshoot 73 | namespace: dev 74 | labels: 75 | app: netshoot 76 | spec: 77 | containers: 78 | - name: netshoot 79 | image: nicolaka/netshoot:latest 80 | # Just spin & wait forever 81 | command: [ "/bin/bash", "-c", "--" ] 82 | args: [ "while true; do curl -m3 http://nginx-svc; sleep 3; done;" ] 83 | resources: {} -------------------------------------------------------------------------------- /demo/dev/policies.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: networking.k8s.io/v1 3 | kind: NetworkPolicy 4 | metadata: 5 | name: nginx 6 | namespace: dev 7 | spec: 8 | podSelector: 9 | matchLabels: 10 | app: nginx 11 | ingress: 12 | - from: 13 | - namespaceSelector: 14 | matchLabels: 15 | compliance: open 16 | policyTypes: 17 | - Ingress 18 | 19 | --- 20 | apiVersion: networking.k8s.io/v1 21 | kind: NetworkPolicy 22 | metadata: 23 | name: centos 24 | namespace: dev 25 | spec: 26 | 
podSelector: 27 | matchLabels: 28 | app: centos 29 | egress: 30 | - to: 31 | - podSelector: 32 | matchLabels: 33 | app: nginx 34 | policyTypes: 35 | - Egress 36 | 37 | --- 38 | 39 | apiVersion: networking.k8s.io/v1 40 | kind: NetworkPolicy 41 | metadata: 42 | name: netshoot 43 | namespace: dev 44 | spec: 45 | podSelector: 46 | matchLabels: 47 | app: netshoot 48 | egress: 49 | - {} 50 | policyTypes: 51 | - Egress -------------------------------------------------------------------------------- /demo/tiers/tiers.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: projectcalico.org/v3 3 | kind: Tier 4 | metadata: 5 | name: security 6 | spec: 7 | order: 400 8 | 9 | --- 10 | apiVersion: projectcalico.org/v3 11 | kind: Tier 12 | metadata: 13 | name: platform 14 | spec: 15 | order: 500 16 | 17 | --- 18 | apiVersion: projectcalico.org/v3 19 | kind: Tier 20 | metadata: 21 | name: application 22 | spec: 23 | order: 600 24 | -------------------------------------------------------------------------------- /img/add-DNS-in-networkset.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/add-DNS-in-networkset.png -------------------------------------------------------------------------------- /img/alerts-view-all.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/alerts-view-all.png -------------------------------------------------------------------------------- /img/alerts-view.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/alerts-view.png 
-------------------------------------------------------------------------------- /img/anomaly-detection-alert.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/anomaly-detection-alert.png -------------------------------------------------------------------------------- /img/anomaly-detection-config.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/anomaly-detection-config.png -------------------------------------------------------------------------------- /img/calico-cloud-login.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/calico-cloud-login.png -------------------------------------------------------------------------------- /img/calico-on-aks.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/calico-on-aks.png -------------------------------------------------------------------------------- /img/choose-aks.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/choose-aks.png -------------------------------------------------------------------------------- /img/cluster-selection.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/cluster-selection.png -------------------------------------------------------------------------------- /img/compliance-report.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/compliance-report.png -------------------------------------------------------------------------------- /img/connect-cluster.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/connect-cluster.png -------------------------------------------------------------------------------- /img/connectivity-diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/connectivity-diagram.png -------------------------------------------------------------------------------- /img/create-dns-policy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/create-dns-policy.png -------------------------------------------------------------------------------- /img/dashboard-default-deny.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/dashboard-default-deny.png -------------------------------------------------------------------------------- /img/dashboard-overall-view.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/dashboard-overall-view.png -------------------------------------------------------------------------------- /img/delete-policy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/delete-policy.png -------------------------------------------------------------------------------- /img/dns-alert.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/dns-alert.png -------------------------------------------------------------------------------- /img/dns-network-set.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/dns-network-set.png -------------------------------------------------------------------------------- /img/download-packet-capture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/download-packet-capture.png -------------------------------------------------------------------------------- /img/drop-down-menu.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/drop-down-menu.png -------------------------------------------------------------------------------- /img/edit-policy.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/edit-policy.png -------------------------------------------------------------------------------- /img/ee-event-log.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/ee-event-log.png -------------------------------------------------------------------------------- /img/endpoints-view.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/endpoints-view.png -------------------------------------------------------------------------------- /img/expand-menu.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/expand-menu.png -------------------------------------------------------------------------------- /img/external-saas-traffic.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/external-saas-traffic.png -------------------------------------------------------------------------------- /img/flow-viz.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/flow-viz.png -------------------------------------------------------------------------------- /img/frontend-packet-capture.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/frontend-packet-capture.png -------------------------------------------------------------------------------- /img/get-start.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/get-start.png -------------------------------------------------------------------------------- /img/hep-policy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/hep-policy.png -------------------------------------------------------------------------------- /img/hep-service-graph.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/hep-service-graph.png -------------------------------------------------------------------------------- /img/honeypod-threat-alert.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/honeypod-threat-alert.png -------------------------------------------------------------------------------- /img/initiate-pc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/initiate-pc.png -------------------------------------------------------------------------------- /img/kibana-dashboard.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/kibana-dashboard.png -------------------------------------------------------------------------------- /img/kibana-flow-logs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/kibana-flow-logs.png -------------------------------------------------------------------------------- /img/managed-cluster.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/managed-cluster.png -------------------------------------------------------------------------------- /img/network-set-grid.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/network-set-grid.png -------------------------------------------------------------------------------- /img/packet-capture-ui.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/packet-capture-ui.png -------------------------------------------------------------------------------- /img/policies-board-stats.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/policies-board-stats.png -------------------------------------------------------------------------------- /img/policies-board.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/policies-board.png -------------------------------------------------------------------------------- /img/redis-pcap.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/redis-pcap.png -------------------------------------------------------------------------------- /img/schedule-packet-capture-job.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/schedule-packet-capture-job.png -------------------------------------------------------------------------------- /img/schedule-packet-capture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/schedule-packet-capture.png -------------------------------------------------------------------------------- /img/script.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/script.png -------------------------------------------------------------------------------- /img/select-ep.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/select-ep.png -------------------------------------------------------------------------------- /img/service-graph-default.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/service-graph-default.png -------------------------------------------------------------------------------- /img/service-graph-l7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/service-graph-l7.png -------------------------------------------------------------------------------- /img/service-graph-node.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/service-graph-node.png -------------------------------------------------------------------------------- /img/service-graph-top-level.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/service-graph-top-level.png -------------------------------------------------------------------------------- /img/signature-alert.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/signature-alert.png -------------------------------------------------------------------------------- /img/staged-default-deny.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/staged-default-deny.png -------------------------------------------------------------------------------- 
/img/test-packet-capture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/test-packet-capture.png -------------------------------------------------------------------------------- /img/timeline-view.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/timeline-view.png -------------------------------------------------------------------------------- /img/workshop-environment.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tigera-solutions/calicocloud-aks-workshop/30321fd0a427162e85c68728b5aa08462455a79c/img/workshop-environment.png -------------------------------------------------------------------------------- /modules/anomaly-detection.md: -------------------------------------------------------------------------------- 1 | # Module 10: Anomaly Detection 2 | 3 | **Goal:** Configure Anomaly Detection to alert on abnormal or suspicious traffic. 4 | 5 | --- 6 | 7 | Calico offers [Anomaly Detection](https://docs.tigera.io/calico-cloud/threat/security-anomalies) (AD) as part of its [threat defense](https://docs.tigera.io/calico-cloud/threat/) capabilities. Calico's machine-learning algorithms can baseline "normal" traffic patterns and subsequently detect abnormal or suspicious behavior. Such behavior can be an indicator of compromise, and it generates a security alert so that further action can be taken on the incident. 8 | 9 | ## Steps 10 | 11 | 1. Configure the Anomaly Detection alerts. 12 | 13 | The instructions below are for a Managed cluster running version 3.14+. Follow the [Anomaly Detection doc](https://docs.calicocloud.io/threat/security-anomalies) to configure AD jobs in management and standalone clusters.
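The `demo/90-anomaly-detection/ad-alerts.yaml` manifest applied in this step is not reproduced in this chunk. As a rough, illustrative sketch only — the field values below are assumptions based on the Calico Cloud `GlobalAlert` format for anomaly detection, not the repo's actual file — a port-scan detection alert looks something like:

```yaml
# Illustrative sketch -- not the repo's actual ad-alerts.yaml; values are assumptions
apiVersion: projectcalico.org/v3
kind: GlobalAlert
metadata:
  # name matches the output of `kubectl get globalalerts` shown later in this module
  name: tigera.io.detector.port-scan
spec:
  type: AnomalyDetection
  description: Port scan detection
  summary: Port scan detected
  severity: 100
  detector:
    name: port-scan
```

Applying a manifest like this has the same effect as enabling the alert in the UI; deleting the `GlobalAlert` disables the detector again.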
14 | 15 | Navigate to the `Activity -> Anomaly Detection` view in the Calico Cloud UI and enable the `Port Scan detection` alert. 16 | 17 | ![Anomaly Detection configuration](../img/anomaly-detection-config.png) 18 | 19 | Alternatively, use the CLI command below to enable this alert. 20 | 21 | ```bash 22 | # example to enable AD alert via CLI command 23 | kubectl apply -f demo/90-anomaly-detection/ad-alerts.yaml 24 | ``` 25 | 26 | 2. Confirm the AD jobs are running before simulating anomaly behavior. 27 | 28 | ```bash 29 | kubectl get globalalerts | grep -i tigera.io.detector 30 | ``` 31 | 32 | The output should look similar to this: 33 | 34 | ```bash 35 | tigera.io.detector.port-scan 2022-07-27T16:59:11Z 36 | ``` 37 | 38 | 3. Simulate a port scan anomaly using the nmap utility. 39 | 40 | ```bash 41 | # simulate port scan 42 | POD_IP=$(kubectl -n dev get po --selector app=nginx -o jsonpath='{.items[0].status.podIP}') 43 | kubectl -n dev exec netshoot -- nmap -Pn -r -p 1-900 $POD_IP 44 | ``` 45 | 46 | ```text 47 | # expected output 48 | Starting Nmap 7.92 ( https://nmap.org ) at 2022-07-27 17:36 UTC 49 | Nmap scan report for 10-224-0-67.nginx-svc.dev.svc.cluster.local (10.224.0.67) 50 | Host is up (0.0010s latency). 51 | Not shown: 899 closed tcp ports (reset) 52 | PORT STATE SERVICE 53 | 80/tcp open http 54 | 55 | Nmap done: 1 IP address (1 host up) scanned in 0.24 seconds 56 | ``` 57 | 58 | 4. After a few minutes, the alerts can be seen in the Alerts view of the Web UI.
59 | 60 | ![Anomaly Detection alert](../img/anomaly-detection-alert.png) 61 | 62 | [Module 9 :arrow_left:](../modules/using-alerts.md)     [:arrow_right: Module 11](../modules/honeypod-threat-detection.md) 63 | 64 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 65 | -------------------------------------------------------------------------------- /modules/configuring-demo-apps.md: -------------------------------------------------------------------------------- 1 | # Module 2: Configuring demo applications 2 | 3 | **Goal:** Deploy and configure demo applications. 4 | 5 | This workshop deploys the Namespace `dev`, where several pods are created to demonstrate connectivity within the Namespace and across Namespaces to the `default` Namespace, where the Boutiqueshop (microservices) demo apps are deployed. We will use a subset of these pods to demonstrate how Calico network policy can provide pod security and microsegmentation techniques. 6 | In [Module 10](../modules/anomaly-detection.md) we introduce the Namespace `tigera-intrusion-detection`, which contains specific Calico pods to run the Anomaly Detection engine. 7 |
8 | 9 | ![workshop-environment](../img/workshop-environment.png) 10 | 11 | ## Steps 12 | 13 | 1. Confirm your current folder is in this repo. 14 | 15 | ```text 16 | /home/azure/calicocloud-aks-workshop 17 | ``` 18 | 19 | 2. Deploy policy tiers. 20 | 21 | We are going to deploy policies into policy tiers to take advantage of hierarchical policy management. 22 | 23 | ```bash 24 | kubectl apply -f demo/tiers/tiers.yaml 25 | ``` 26 | 27 | This will add the tiers `security`, `platform` and `application` to the AKS cluster. 28 | 29 | 3. Deploy base policy. 30 | 31 | In order to explicitly allow workloads to connect to the Kubernetes DNS component, we are going to implement a policy that controls such traffic. 32 | 33 | ```bash 34 | kubectl apply -f demo/10-security-controls/base-policies.yaml 35 | kubectl apply -f demo/10-security-controls/allow-kube-dns.yaml 36 | ``` 37 | 38 | This will add the `allow-kube-dns` policy to your `platform` tier. 39 | 40 | 4. Deploy demo applications. 41 | 42 | ```bash 43 | # deploy dev app stack 44 | kubectl apply -f demo/dev/app.manifests.yaml 45 | 46 | # deploy boutiqueshop app stack 47 | kubectl apply -f demo/boutiqueshop/app.manifests.yaml 48 | ``` 49 | 50 | ```bash 51 | # confirm the pods/deployments are running. Note the loadgenerator pod waits for the frontend pod to respond to http calls before coming up and can take a few minutes.
Eventually, the status of the pods in the default namespace will look as follows: 52 | 53 | kubectl get pods 54 | NAME READY STATUS RESTARTS AGE 55 | adservice-85598d856b-7zhlp 1/1 Running 0 113s 56 | cartservice-c77f6b866-hmbbj 1/1 Running 0 114s 57 | checkoutservice-654c47f4b6-l6wlk 1/1 Running 0 115s 58 | currencyservice-59bc889674-2xh2q 1/1 Running 0 114s 59 | emailservice-5b9fff7cb8-f49gk 1/1 Running 0 115s 60 | frontend-77b88cc7cb-btssz 1/1 Running 0 115s 61 | loadgenerator-6958f5bc8b-6lfrz 1/1 Running 0 114s 62 | paymentservice-68dd9755bb-ddqd2 1/1 Running 0 114s 63 | productcatalogservice-557ff44b96-88f7w 1/1 Running 0 114s 64 | recommendationservice-64dc9dfbc8-rwzdq 1/1 Running 0 115s 65 | redis-cart-5b569cd47-6mstt 1/1 Running 0 114s 66 | shippingservice-5488d5b6cb-l79nw 1/1 Running 0 114s 67 | 68 | kubectl get pods -n dev 69 | NAME READY STATUS RESTARTS AGE 70 | centos 1/1 Running 0 48s 71 | dev-nginx-754f647b8b-99fsn 1/1 Running 0 48s 72 | dev-nginx-754f647b8b-hlrw8 1/1 Running 0 48s 73 | netshoot 1/1 Running 0 48s 74 | ``` 75 | 76 | The pods will be visible in Service Graph, for example in the `default` namespace. This may take 1-2 minutes to update in Service Graph. To view resources in the `default` namespace, click the `Service Graph` icon in the left menu, which displays a top-level view of the cluster resources: 77 |
78 | 79 | ![service-graph-top level](../img/service-graph-top-level.png) 80 | 81 | Double-click on the `default` Namespace as highlighted to bring only resources in the `default` namespace into view, along with other resources communicating into or out of the `default` Namespace. 82 |
83 | 84 | ![service-graph-default](../img/service-graph-default.png) 85 | 86 | Note that pod/resource limits on your nodes may prevent pods from deploying. Ensure the nodes in the cluster are scaled appropriately.
87 | 88 | 5. Deploy compliance reports.
89 | 90 | >The reports run as cronjobs and will be needed for a later lab.
91 | 92 | ```bash 93 | kubectl apply -f demo/40-compliance-reports/daily-cis-results.yaml 94 | kubectl apply -f demo/40-compliance-reports/cluster-reports.yaml 95 | ``` 96 | 97 | 6. Deploy global alerts.
98 | 99 | >The alerts will be explored in a later lab. Ignore any warning messages - these do not affect the deployment of resources.
100 | 101 | ```bash 102 | kubectl apply -f demo/50-alerts/globalnetworkset.changed.yaml 103 | kubectl apply -f demo/50-alerts/unsanctioned.dns.access.yaml 104 | kubectl apply -f demo/50-alerts/unsanctioned.lateral.access.yaml 105 | ``` 106 | 107 | 7. Install curl on the loadgenerator pod.
108 | 109 | > Before we implement network security rules, we need to install curl on the loadgenerator pod for testing purposes later in the workshop.
Note that the installation will not survive a pod restart, so repeat it as necessary.
110 | 111 | ```bash 112 | # update the package index and install curl and ping 113 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'apt-get update && apt install curl iputils-ping -y' 114 | ``` 115 | 116 | [Module 1 :arrow_left:](../modules/joining-aks-to-calico-cloud.md)     [:arrow_right: Module 3](../modules/pod-access-controls.md) 117 | 118 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 119 |
-------------------------------------------------------------------------------- /modules/creating-aks-cluster.md: -------------------------------------------------------------------------------- 1 | # Module 0: Creating AKS cluster 2 | 3 | The following guide is based upon the repos from [lastcoolnameleft](https://github.com/lastcoolnameleft/kubernetes-workshop/blob/master/create-aks-cluster.md) and [Azure Kubernetes Hackfest](https://github.com/Azure/kubernetes-hackfest/tree/master/labs/create-aks-cluster#readme). 4 | 5 | * * * 6 | 7 | **Goal:** Create an AKS cluster.
8 | 9 | > This workshop uses an AKS cluster with Linux containers. To create a Windows Server container on an AKS cluster, consider exploring the [AKS documentation](https://docs.microsoft.com/en-us/azure/aks/windows-container-cli). This cluster deployment utilizes Azure CLI v2.x from your local terminal or via Azure Cloud Shell. Instructions for installing the Azure CLI can be found [here](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli).
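If you script your setup, the v2.x requirement in the note above can be asserted before running any `az` commands. The sketch below parses a hypothetical captured `az version` JSON output; the version value is an example, not taken from this workshop:

```bash
# hypothetical JSON captured from `az version`; the version value is an example
AZ_JSON='{"azure-cli": "2.58.0", "azure-cli-core": "2.58.0"}'

# extract the azure-cli version string from the JSON
AZ_VER=$(printf '%s' "$AZ_JSON" | sed -n 's/.*"azure-cli": *"\([^"]*\)".*/\1/p')

# compare the major version against the v2.x requirement
MAJOR=${AZ_VER%%.*}
if [ "$MAJOR" -ge 2 ]; then
  echo "Azure CLI $AZ_VER satisfies the v2.x requirement"
fi
```

On a live machine you would replace the hard-coded JSON with `AZ_JSON=$(az version)`.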
10 | 11 | If you already have an AKS cluster, make sure the network plugin is "azure"; you can then skip this module and go to [module 2](../modules/joining-aks-to-calico-cloud.md). 12 | 13 | ## Prerequisite Tasks 14 | 15 | You can use either your shell of choice or the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) that has all necessary tools installed. When using the Azure Cloud Shell, you'll have to sign into your Azure account in the browser and then open the shell.
16 | 17 | Follow the prerequisite steps if you need to verify your Azure subscription and Service Principal; otherwise proceed to step 1.
18 | 19 | 1. Ensure you are using the correct Azure subscription you want to deploy AKS to.
20 | 21 | ```bash 22 | # View subscriptions 23 | az account list 24 | 25 | # Verify selected subscription 26 | az account show 27 | 28 | # Set correct subscription (if needed) 29 | az account set --subscription 30 | 31 | # Verify correct subscription is now set 32 | az account show 33 | ``` 34 | 35 | 2. *[Optional]* Create an Azure Service Principal for the AKS resource
36 | 37 | >Use this step **only** if you cannot create an AKS cluster with a managed identity, otherwise skip this step. If you do use your custom service principal to create the AKS cluster, you'll have to add the `--service-principal` and `--client-secret` parameters to the `az aks create ...` command below.
38 | 39 | - Create service principal account.
40 | 41 | ```bash 42 | az ad sp create-for-rbac --skip-assignment 43 | ``` 44 | 45 | - This will return the following. **IMPORTANT** - Note this information as you'll need it to provision the AKS cluster. You'll need to use `appId` as the `--service-principal` parameter value and `password` as the `--client-secret` parameter value.
46 | 47 | ```bash 48 | # this is an example output
49 | "appId": "ce493696-xxxx-xxxx-xxxx-99d6cfd09436",
50 | "displayName": "jessieazurexxxx",
51 | "name": "ce493696-xxxx-xxxx-xxxx-99d6cfd09436",
52 | "password": "YhutxxxxYwNxxxxQntNxxxx_VyVnxxxx",
53 | "tenant": "b0497432-xxxx-xxxx-xxxx-7d4072ca6006"
54 | ``` 55 | 56 | - Set the values from above as variables **(replace `appId` and `password` with your values)**.
57 | 58 | >**Warning:** Several of the following steps save some variables to the `.bashrc` file. This is done so that you can get those values back if your session disconnects. You will want to clean these up once you're done with this workshop.
59 | 60 | **Make sure to replace the `<appId>` and `<password>` tokens below with the values from your newly created service principal object.**
61 | 62 | ```bash 63 | # Persist for Later Sessions in Case of Timeout
64 | APPID=<appId>
65 | echo export APPID=$APPID >> ~/.bashrc
66 | CLIENTSECRET=<password>
67 | echo export CLIENTSECRET=$CLIENTSECRET >> ~/.bashrc
68 | ``` 69 | 70 | ## Steps 71 | 72 | 1. Create a unique identifier suffix for resources to be created in this lab.
73 | 74 | ```bash 75 | UNIQUE_SUFFIX=$USER$RANDOM
76 | # Remove Underscores and Dashes (Not Allowed in AKS and ACR Names)
77 | UNIQUE_SUFFIX="${UNIQUE_SUFFIX//_}"
78 | UNIQUE_SUFFIX="${UNIQUE_SUFFIX//-}"
79 | # Check Unique Suffix Value (Should be No Underscores or Dashes)
80 | echo $UNIQUE_SUFFIX
81 | # Persist for Later Sessions in Case of Timeout
82 | echo export UNIQUE_SUFFIX=$UNIQUE_SUFFIX >> ~/.bashrc
83 | ``` 84 | 85 | **Note this value as it will be used in the next couple labs.**
86 | 87 | 2. Create an Azure Resource Group in your desired region.
88 | 89 | ```bash 90 | # Set Resource Group Name using the unique suffix
91 | RGNAME=calicocloud-workshop-$UNIQUE_SUFFIX
92 | # Persist for Later Sessions in Case of Timeout
93 | echo export RGNAME=$RGNAME >> ~/.bashrc
94 | # Set Region (Location)
95 | LOCATION=eastus
96 | # Persist for Later Sessions in Case of Timeout
97 | echo export LOCATION=$LOCATION >> ~/.bashrc
98 | # Create Resource Group
99 | az group create -n $RGNAME -l $LOCATION
100 | ``` 101 | 102 | 3. Create your AKS cluster in the resource group created in step 2 with 3 nodes. We will check for a recent version of Kubernetes before proceeding. If you created a custom Service Principal in the prerequisite tasks, you will use its information here.
103 | 104 | Use a unique `CLUSTERNAME`.
105 | 106 | ```bash 107 | # Set AKS Cluster Name
108 | CLUSTERNAME=aks${UNIQUE_SUFFIX}
109 | # Look at AKS Cluster Name for Future Reference
110 | echo $CLUSTERNAME
111 | # Persist for Later Sessions in Case of Timeout
112 | echo export CLUSTERNAME=${CLUSTERNAME} >> ~/.bashrc
113 | ``` 114 | 115 | Get the available Kubernetes versions for the region. You will likely see more recent versions in your lab.
116 | 117 | ```bash 118 | az aks get-versions -l $LOCATION --output table 119 | ``` 120 | 121 | ```text
122 | KubernetesVersion Upgrades SupportPlan
123 | ------------------- ----------------------- --------------------------------------
124 | 1.29.2 None available KubernetesOfficial
125 | 1.29.0 1.29.2 KubernetesOfficial
126 | 1.28.5 1.29.0, 1.29.2 KubernetesOfficial
127 | 1.28.3 1.28.5, 1.29.0, 1.29.2 KubernetesOfficial
128 | 1.27.9 1.28.3, 1.28.5 KubernetesOfficial, AKSLongTermSupport
129 | 1.27.7 1.27.9, 1.28.3, 1.28.5 KubernetesOfficial, AKSLongTermSupport
130 | 1.26.12 1.27.7, 1.27.9 KubernetesOfficial
131 | 1.26.10 1.26.12, 1.27.7, 1.27.9 KubernetesOfficial
132 | ``` 133 | 134 | For this lab we'll use 1.28.
135 | 136 | ```bash 137 | K8SVERSION=1.28
138 | echo export K8SVERSION=$K8SVERSION >> ~/.bashrc
139 | ``` 140 | 141 | >The command below can take several minutes to run as it is creating the AKS cluster.
142 | 143 | ```bash 144 | # Create AKS Cluster - it is important to set the network-plugin to azure in order to connect to Calico Cloud
145 | az aks create -n $CLUSTERNAME -g $RGNAME \
146 | --kubernetes-version $K8SVERSION \
147 | --enable-managed-identity \
148 | --node-count 3 \
149 | --network-plugin azure \
150 | --no-wait \
151 | --generate-ssh-keys
152 | ``` 153 | 154 | 4. Verify your cluster status. The `ProvisioningState` should be `Succeeded`.
155 | 156 | ```bash 157 | az aks list -o table -g $RGNAME 158 | ``` 159 | 160 | ```text
161 | Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn
162 | ------------ ---------- ------------------------------ ------------------- -------------------------- ------------------- ----------------------------------------------------------------
163 | aksivan25988 westus calicocloud-workshop-ivan25988 1.28 1.28.5 Succeeded aksivan259-calicocloud-work-03cfb8-zmf4e587.hcp.westus.azmk8s.io
164 | ``` 165 | 166 | 5.
Get the Kubernetes config files for your new AKS cluster.
167 | 168 | ```bash 169 | az aks get-credentials -n $CLUSTERNAME -g $RGNAME 170 | ``` 171 | 172 | 6. Verify you have API access to your new AKS cluster.
173 | 174 | > Note: It can take 5 minutes for your nodes to appear and be in the READY state. You can run `watch kubectl get nodes` to monitor status. Otherwise, the cluster is ready when the output is similar to the following:
175 | 176 | ```bash 177 | kubectl get nodes 178 | ``` 179 | 180 | ```text
181 | NAME STATUS ROLES AGE VERSION
182 | aks-nodepool1-36555681-vmss000000 Ready agent 47m v1.28.5
183 | aks-nodepool1-36555681-vmss000001 Ready agent 47m v1.28.5
184 | aks-nodepool1-36555681-vmss000002 Ready agent 47m v1.28.5
185 | 186 | ``` 187 | 188 | To see more details about your cluster:
189 | 190 | ```bash 191 | kubectl cluster-info 192 | ``` 193 | 194 | 7. Clone this repo into your environment:
195 | 196 | ```bash 197 | git clone https://github.com/tigera-solutions/calicocloud-aks-workshop.git 198 | ``` 199 | 200 | ```bash 201 | cd ./calicocloud-aks-workshop 202 | ``` 203 | 204 | 8. *[Optional]* Install the `calicoctl` CLI to use in later labs.
205 | 206 | a. CloudShell
207 | 208 | ```bash 209 | # adjust version as needed
210 | CALICOVERSION=v3.19.0-1.0
211 | # download and configure calicoctl
212 | curl -o calicoctl -O -L https://downloads.tigera.io/ee/binaries/$CALICOVERSION/calicoctl
213 | 214 | chmod +x calicoctl
215 | 216 | # verify calicoctl is running
217 | ./calicoctl version
218 | ``` 219 | 220 | b. Linux
221 | >Tip: Consider navigating to a location that’s in your PATH. For example, /usr/local/bin/
222 | 223 | ```bash 224 | # adjust version as needed
225 | CALICOVERSION=v3.19.0-1.0
226 | # download and configure calicoctl
227 | curl -o calicoctl -O -L https://downloads.tigera.io/ee/binaries/$CALICOVERSION/calicoctl
228 | chmod +x calicoctl
229 | 230 | # verify calicoctl is running
231 | calicoctl version
232 | ``` 233 | 234 | c.
MacOS 235 | >Tip: Consider navigating to a location that’s in your PATH. For example, /usr/local/bin/
236 | 237 | ```bash 238 | # adjust version as needed
239 | CALICOVERSION=v3.19.0-1.0
240 | # download and configure calicoctl
241 | curl -o calicoctl -O -L https://downloads.tigera.io/ee/binaries/$CALICOVERSION/calicoctl-darwin-amd64
242 | 243 | chmod +x calicoctl
244 | 245 | # verify calicoctl is running
246 | calicoctl version
247 | ``` 248 | 249 | Note: If you see a `cannot be opened because the developer cannot be verified` error when using `calicoctl` for the first time, go to `Applications` \> `System Preferences` \> `Security & Privacy` and, in the `General` tab at the bottom of the window, click `Allow anyway`.
250 | Note: If the location of calicoctl is not already in your PATH, move the file to one that is or add its location to your PATH. This will allow you to invoke it without having to prepend its location.
251 | 252 | d. Windows - using a PowerShell command to download the calicoctl binary
253 | >Tip: Consider running PowerShell as administrator and navigating to a location that’s in your PATH. For example, C:\Windows.
254 | 255 | ```pwsh 256 | $CALICOVERSION=$(kubectl get clusterinformations default -ojsonpath='{.spec.cnxVersion}')
257 | Invoke-WebRequest -Uri "https://downloads.tigera.io/ee/binaries/$CALICOVERSION/calicoctl-windows-amd64.exe" -OutFile "kubectl-calico.exe"
258 | ``` 259 | 260 | --- 261 | 262 | ## Next steps 263 | 264 | You should now have a Kubernetes cluster running with 3 nodes. You do not see the control plane servers for the cluster because these are managed by Microsoft. The control plane services which manage the Kubernetes cluster, such as scheduling, API access, the configuration data store and object controllers, are all provided as services to the nodes.
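The point above can be checked against the `kubectl get nodes` output from step 6: every node reports the role `agent`, and no control-plane role appears. A small sketch, run here against the sample output from step 6 rather than a live cluster:

```bash
# sample `kubectl get nodes` output copied from step 6 above
NODES="aks-nodepool1-36555681-vmss000000 Ready agent 47m v1.28.5
aks-nodepool1-36555681-vmss000001 Ready agent 47m v1.28.5
aks-nodepool1-36555681-vmss000002 Ready agent 47m v1.28.5"

# count nodes whose ROLES column is anything other than "agent"
NON_AGENT=$(printf '%s\n' "$NODES" | awk '$3 != "agent" {n++} END {print n+0}')
echo "$NON_AGENT non-agent nodes visible"   # 0 - the control plane is managed by Microsoft
```

On a live cluster the same check would pipe `kubectl get nodes --no-headers` into the awk command instead of the sample text.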
265 | 266 | [:arrow_right: Module 1](../modules/joining-aks-to-calico-cloud.md) 267 | 268 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 269 |
-------------------------------------------------------------------------------- /modules/deep-packet-inspection.md: -------------------------------------------------------------------------------- 1 | # Module 11: Deep packet inspection 2 | 3 | **Goal:** Use DPI on select workloads to efficiently make use of cluster resources and minimize the impact of false positives.
4 | 5 | --- 6 | 7 | >For each deep packet inspection resource (DeepPacketInspection), Calico Cloud creates a live network monitor that inspects the header and payload information of packets that match the [Snort community rules](https://www.snort.org/downloads/#rule-downloads). Whenever malicious activity is suspected, an alert is automatically added to the Alerts page in the Calico Manager UI.
8 | 9 | ## Steps 10 | 11 | 1. Configure deep packet inspection on your target workload; we will use `boutiqueshop/frontend` as an example.
12 | 13 | ```bash 14 | kubectl apply -f demo/60-deep-packet-inspection/sample-dpi-frontend.yaml 15 | ``` 16 | 17 | 2. Configure resource requirements in IntrusionDetection.
18 | 19 | > For a data transfer rate of 1GB/sec on the workload endpoints being monitored, we recommend a minimum of 1 CPU and 1GB RAM.
20 | 21 | ```bash 22 | kubectl apply -f demo/60-deep-packet-inspection/resource-dpi.yaml 23 | ``` 24 | 25 | 3. Verify deep packet inspection is running and that the `tigera-dpi` daemonset is also running.
26 | 27 | ```bash 28 | kubectl get deeppacketinspections.crd.projectcalico.org 29 | ``` 30 | 31 | ```text
32 | NAME AGE
33 | sample-dpi-frontend 40s
34 | ``` 35 | 36 | ```bash 37 | kubectl get pods -n tigera-dpi 38 | ``` 39 | 40 | ```text
41 | NAME READY STATUS RESTARTS AGE
42 | tigera-dpi-6mhz6 1/1 Running 0 2m2s
43 | tigera-dpi-kvqmp 1/1 Running 0 2m2s
44 | tigera-dpi-lj8sg 1/1 Running 0 2m2s
45 | ``` 46 | 47 | 4.
Trigger a Snort alert based on existing alert rules; we will use rule [57461](https://www.snort.org/rule_docs/1-57461).
48 | 49 | ```bash 50 | SVC_IP=$(kubectl get svc frontend-external -ojsonpath='{.status.loadBalancer.ingress[0].ip}') 51 | ``` 52 | 53 | ```bash 54 | # curl your load balancer from outside of the cluster
55 | curl http://$SVC_IP:80/secid_canceltoken.cgi -H 'X-CMD: Test' -H 'X-KEY: Test' -XPOST
56 | ``` 57 | 58 | 5. Confirm the `Signature Triggered Alert` in the Manager UI and also in Kibana - Discover the `tigera_secure_ee_events*` index.
59 | 60 | ![Signature Alert](../img/signature-alert.png) 61 | 62 | ![ee event log](../img/ee-event-log.png) 63 | 64 | **Congratulations! You have finished all the labs in the workshop.**
65 | 66 | >Follow the cleanup instructions on the [main page](../README.md) if needed.
67 | 68 | [Module 10 :arrow_left:](../modules/honeypod-threat-detection.md) 69 | 70 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 71 |
-------------------------------------------------------------------------------- /modules/dns-egress-access-controls.md: -------------------------------------------------------------------------------- 1 | # Module 4: DNS egress access controls 2 | 3 | **Goal:** Configure egress access for specific workloads.
4 | 5 | ## Steps 6 | 7 | 1. Implement a DNS policy to allow external endpoint access from a specific workload, e.g. `dev/centos`.
8 | 9 | a. Apply a policy to allow access to the `api.twilio.com` endpoint using a DNS rule.
10 | 11 | ```bash 12 | # deploy dns policy
13 | kubectl apply -f demo/20-egress-access-controls/dns-policy.yaml
14 | 15 | # test egress access to api.twilio.com
16 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://api.twilio.com 2>/dev/null | grep -i http'
17 | # test egress access to www.bing.com
18 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://www.bing.com 2>/dev/null | grep -i http'
19 | ``` 20 | 21 | Access to the `api.twilio.com` endpoint should be allowed by the DNS policy, but not to any other external endpoints like `www.bing.com`, unless we modify the policy to include that domain name. The connectivity is represented in the diagram below: 22 |
23 | 24 | ![Connectivity diagram](../img/connectivity-diagram.png) 25 | 26 | b. Edit the policy to use a `NetworkSet` instead of an inline DNS rule.
27 | 28 | ```bash 29 | # deploy network set
30 | kubectl apply -f demo/20-egress-access-controls/netset.external-apis.yaml
31 | # deploy DNS policy using the network set
32 | kubectl apply -f demo/20-egress-access-controls/dns-policy.netset.yaml
33 | ``` 34 | 35 | >The updated version of the `allow-twilio-access` policy uses `[destination: type == "external-apis"]` instead of `[source: app == 'centos']`, which will simplify your DNS egress management. Now we re-test access to twilio and bing to demonstrate that the policy using Network Sets is enforcing as expected:
36 | 37 | ```bash 38 | # test egress access to api.twilio.com
39 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://api.twilio.com 2>/dev/null | grep -i http'
40 | # test egress access to www.bing.com
41 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://www.bing.com 2>/dev/null | grep -i http'
42 | ``` 43 | 44 | Since access to Twilio is permitted and access to Bing is denied, we are able to whitelist domains as described next.
45 | 46 | c. As a bonus example, you can modify the `external-apis` network set in the Calico Cloud Manager UI to include the `*.azure.com` or `*.microsoft.com` domain name, which would allow access to azure/microsoft subdomains.
47 | 48 | >One common use case is to provide access to Azure storage resources. For example, Blob storage: `*.blob.core.windows.net`.
49 | 50 | ```bash 51 | # test egress access to www.azure.com and www.microsoft.com after you whitelist them from the UI
52 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://www.microsoft.com 2>/dev/null | grep -i http'
53 | # test egress access to www.azure.com
54 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://www.azure.com 2>/dev/null | grep -i http'
55 | ``` 56 | 57 | ![DNS in networkset](../img/add-DNS-in-networkset.png) 58 | 59 | [Module 3 :arrow_left:](../modules/pod-access-controls.md)     [:arrow_right: Module 5](../modules/layer7-logging.md) 60 | 61 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 62 |
-------------------------------------------------------------------------------- /modules/honeypod-threat-detection.md: -------------------------------------------------------------------------------- 1 | # Module 10: Honeypod Threat Detection 2 | 3 | **Goal:** Deploy honeypod resources and generate alerts when suspicious traffic is detected.
4 | 5 | --- 6 | 7 | Calico offers a [Honeypod](https://docs.calicocloud.io/threat/honeypod/) capability which is based upon the same principles as traditional honeypots. Calico is able to detect traffic that probes the Honeypod resources, which can be an indicator of compromise. Refer to the [official honeypod configuration documentation](https://docs.calicocloud.io/threat/honeypod/honeypods) for more details.
8 | 9 | ## Steps 10 | 11 | 1. Configure the honeypod namespace and alerts for SSH detection.
12 | 13 | ```bash 14 | CALICOVERSION=$(kubectl get clusterinformations default -ojsonpath='{.spec.cnxVersion}')
15 | # create dedicated namespace and RBAC for honeypods
16 | kubectl apply -f https://downloads.tigera.io/ee/${CALICOVERSION}/manifests/threatdef/honeypod/common.yaml
17 | 18 | # add the tigera pull secret to the namespace. We clone the existing secret from the calico-system namespace
19 | kubectl get secret tigera-pull-secret --namespace=calico-system -o yaml | \
20 | grep -v '^[[:space:]]*namespace:[[:space:]]*calico-system' | \
21 | kubectl apply --namespace=tigera-internal -f -
22 | ``` 23 | 24 | 2.
Deploy sample honeypods 25 | 26 | ```bash 27 | # expose pod IP to test IP enumeration use case 28 | kubectl apply -f https://downloads.tigera.io/ee/${CALICOVERSION}/manifests/threatdef/honeypod/ip-enum.yaml 29 | 30 | # expose nginx service that can be reached via ClusterIP or DNS 31 | kubectl apply -f https://downloads.tigera.io/ee/${CALICOVERSION}/manifests/threatdef/honeypod/expose-svc.yaml 32 | 33 | # expose MySQL service 34 | kubectl apply -f https://downloads.tigera.io/ee/${CALICOVERSION}/manifests/threatdef/honeypod/vuln-svc.yaml 35 | ``` 36 | 37 | 3. Verify newly deployed pods are running 38 | 39 | ```bash 40 | kubectl get pods -n tigera-internal 41 | ``` 42 | 43 | Output should resemble: 44 | 45 | ```text 46 | NAME READY STATUS RESTARTS AGE 47 | tigera-internal-app-7jlg8 1/1 Running 0 60s 48 | tigera-internal-app-lptd6 1/1 Running 0 60s 49 | tigera-internal-app-rfllv 1/1 Running 0 60s 50 | tigera-internal-dashboard-859fb4f577-6tgqj 1/1 Running 0 51s 51 | tigera-internal-db-58547d8655-hgjrc 1/1 Running 0 43s 52 | ``` 53 | 54 | 4. Verify honeypod alerts are deployed 55 | 56 | ```bash 57 | kubectl get globalalerts | grep -i honeypod 58 | ``` 59 | 60 | Output should resemble: 61 | 62 | ```text 63 | honeypod.fake.svc 2021-10-01T18:41:55Z 64 | honeypod.ip.enum 2021-10-01T18:41:53Z 65 | honeypod.network.ssh 2021-10-01T18:40:05Z 66 | honeypod.port.scan 2021-10-01T18:41:53Z 67 | honeypod.vuln.svc 2021-10-01T18:41:56Z 68 | ``` 69 | 70 | 5. Test honeypod use cases 71 | 72 | Ping exposed Honeypod IP 73 | 74 | ```bash 75 | POD_IP=$(kubectl -n tigera-internal get po --selector app=tigera-internal-app -o jsonpath='{.items[0].status.podIP}') 76 | kubectl -n dev exec netshoot -- ping -c5 $POD_IP 77 | ``` 78 | 79 | Output should resemble: 80 | 81 | ```bash 82 | kubectl -n dev exec netshoot -- ping -c5 $POD_IP 83 | 84 | PING 10.240.0.86 (10.240.0.86) 56(84) bytes of data. 
85 | 64 bytes from 10.240.0.86: icmp_seq=1 ttl=62 time=1.37 ms 86 | 64 bytes from 10.240.0.86: icmp_seq=2 ttl=62 time=1.25 ms 87 | 64 bytes from 10.240.0.86: icmp_seq=3 ttl=62 time=1.05 ms 88 | 64 bytes from 10.240.0.86: icmp_seq=4 ttl=62 time=1.16 ms 89 | 64 bytes from 10.240.0.86: icmp_seq=5 ttl=62 time=1.13 ms 90 | 91 | --- 10.240.0.86 ping statistics --- 92 | 5 packets transmitted, 5 received, 0% packet loss, time 4004ms 93 | rtt min/avg/max/mdev = 1.053/1.191/1.366/0.107 ms 94 | ``` 95 | 96 | `curl` Honeypod nginx service 97 | 98 | ```bash 99 | SVC_URL=$(kubectl -n tigera-internal get svc -l app=tigera-dashboard-internal-debug -ojsonpath='{.items[0].metadata.name}') 100 | SVC_PORT=$(kubectl -n tigera-internal get svc -l app=tigera-dashboard-internal-debug -ojsonpath='{.items[0].spec.ports[0].port}') 101 | kubectl -n dev exec netshoot -- curl -m3 -skI $SVC_URL.tigera-internal:$SVC_PORT | grep -i http 102 | ``` 103 | 104 | Output should resemble: 105 | 106 | ```text 107 | HTTP/1.1 200 OK 108 | ``` 109 | 110 | Query Honeypod MySQL service 111 | 112 | ```bash 113 | SVC_URL=$(kubectl -n tigera-internal get svc -l app=tigera-internal-backend -ojsonpath='{.items[0].metadata.name}') 114 | SVC_PORT=$(kubectl -n tigera-internal get svc -l app=tigera-internal-backend -ojsonpath='{.items[0].spec.ports[0].port}') 115 | kubectl -n dev exec netshoot -- nc -zv $SVC_URL.tigera-internal $SVC_PORT 116 | ``` 117 | 118 | Output should resemble: 119 | 120 | ```text 121 | Connection to tigera-internal-backend.tigera-internal 3306 port [tcp/mysql] succeeded! 122 | ``` 123 | 124 | 6. Head to `Alerts` view in the Enterprise Manager UI to view the related alerts. Note the alerts can take a few minutes to generate. 
124 | 125 | 126 | ![Honeypod threat alert](../img/honeypod-threat-alert.png) 127 | 128 | [Module 9 :arrow_left:](../modules/using-alerts.md)     [:arrow_right: Module 11](../modules/deep-packet-inspection.md) 129 | 130 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 131 |
-------------------------------------------------------------------------------- /modules/joining-aks-to-calico-cloud.md: -------------------------------------------------------------------------------- 1 | # Module 1: Joining AKS cluster to Calico Cloud 2 | 3 | **Goal:** Join the AKS cluster to the Calico Cloud management plane.
4 | 5 | IMPORTANT: In order to complete this module, you must have a [Calico Cloud trial account](https://www.calicocloud.io/?utm_campaign=calicocloud&utm_medium=digital&utm_source=microsoft). Issues with being unable to navigate menus in the UI are often due to browsers blocking scripts - please ensure you disable any script blockers.
6 | 7 | ## Steps 8 | 9 | 1. Navigate to [calicocloud.io](https://www.calicocloud.io/?utm_campaign=calicocloud&utm_medium=digital&utm_source=microsoft) and sign up for a 14-day trial account - no credit card required. Returning users can log in.
10 | 11 | ![calico-cloud-login](../img/calico-cloud-login.png) 12 | 13 | 2. Upon signing into the Calico Cloud UI, the Welcome screen shows four use cases that give a quick tour for learning more. This step can be skipped. Tip: the menu icons on the left can be expanded to display the worded menu as shown:
14 | 15 | ![get-start](../img/get-start.png) 16 | 17 | ![expand-menu](../img/expand-menu.png) 18 | 19 | 3. Join the AKS cluster to the Calico Cloud management plane.
20 | 21 | Click "Managed Clusters" on the left side of the browser.
22 | ![managed-cluster](../img/managed-cluster.png) 23 | 24 | Click on "Connect Cluster".
25 | ![connect-cluster](../img/connect-cluster.png) 26 | 27 | Choose AKS and click Next.
28 | ![choose-aks](../img/choose-aks.png) 29 | 30 | Run the installation script in your AKS cluster; the script should look similar to this:
31 | 32 | ![install-script](../img/script.png) 33 | 34 | Output should look similar to:
35 | 36 | ```text
37 | namespace/calico-cloud created
38 | namespace/calico-system created
39 | namespace/tigera-access created
40 | namespace/tigera-image-assurance created
41 | namespace/tigera-license created
42 | namespace/tigera-operator created
43 | namespace/tigera-operator-cloud created
44 | namespace/tigera-prometheus created
45 | namespace/tigera-risk-system created
46 | customresourcedefinition.apiextensions.k8s.io/installers.operator.calicocloud.io created
47 | serviceaccount/calico-cloud-controller-manager created
48 | role.rbac.authorization.k8s.io/calico-cloud-installer-ns-role created
49 | role.rbac.authorization.k8s.io/calico-cloud-installer-calico-system-role created
50 | role.rbac.authorization.k8s.io/calico-cloud-installer-kube-system-role created
51 | role.rbac.authorization.k8s.io/calico-cloud-installer-tigera-image-assurance-role created
52 | role.rbac.authorization.k8s.io/calico-cloud-installer-tigera-prometheus-role created
53 | role.rbac.authorization.k8s.io/calico-cloud-installer-tigera-risk-system-role created
54 | clusterrole.rbac.authorization.k8s.io/calico-cloud-installer-role created
55 | clusterrole.rbac.authorization.k8s.io/calico-cloud-installer-sa-creator-role created
56 | clusterrole.rbac.authorization.k8s.io/calico-cloud-installer-tigera-operator-role created
57 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-ns-rbac created
58 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-calico-system-rbac created
59 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-kube-system-rbac created
60 |
rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-tigera-access-rbac created
61 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-tigera-image-assurance-rbac created
62 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-tigera-license-rbac created
63 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-tigera-operator-rbac created
64 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-tigera-operator-rbac created
65 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-tigera-prometheus-rbac created
66 | rolebinding.rbac.authorization.k8s.io/calico-cloud-installer-tigera-risk-system-rbac created
67 | clusterrolebinding.rbac.authorization.k8s.io/calico-cloud-installer-crb created
68 | deployment.apps/calico-cloud-controller-manager created
69 | % Total % Received % Xferd Average Speed Time Time Time Current
70 | Dload Upload Total Spent Left Speed
71 | 100 462 100 462 0 0 1263 0 --:--:-- --:--:-- --:--:-- 1265
72 | secret/api-key created
73 | installer.operator.calicocloud.io/default created
74 | ``` 75 | 76 | Joining the cluster to Calico Cloud can take a few minutes. Meanwhile, the Calico resources can be monitored until they are all reporting `Available` as `True`.
78 | 79 | ```bash 80 | kubectl get tigerastatus -w
81 | NAME AVAILABLE PROGRESSING DEGRADED SINCE
82 | apiserver True False False 2m39s
83 | calico True False False 4s
84 | cloud-core True False False 2m6s
85 | compliance True False False 64s
86 | image-assurance True False False 52s
87 | intrusion-detection True False False 54s
88 | log-collector True False False 39s
89 | management-cluster-connection True False False 104s
90 | monitor True False False 3m14s
91 | policy-recommendation True False False 109s
92 | ``` 93 | 94 | 4. Navigating the Calico Cloud UI
95 | 96 | Once the cluster has successfully connected to Calico Cloud, you can review the cluster status in the UI.
Click on `Managed Clusters` from the left side menu and look for the `connected` status of your cluster. You will also see a `tigera-labs` cluster for demo purposes. Ensure you are in the correct cluster context by clicking the `Cluster` dropdown in the top right corner, which lists the connected clusters. Click on your cluster to switch context; the current cluster context is shown in *bold* font.
97 | 98 | ![cluster-selection](../img/cluster-selection.png) 99 | 100 | 5. Configure log aggregation and flush intervals for the Calico cluster; we will use 10s instead of the default value of 300s for lab testing only.
101 | 102 | ```bash 103 | kubectl patch felixconfiguration.p default -p '{"spec":{"flowLogsFlushInterval":"10s"}}'
104 | kubectl patch felixconfiguration.p default -p '{"spec":{"dnsLogsFlushInterval":"10s"}}'
105 | kubectl patch felixconfiguration.p default -p '{"spec":{"flowLogsFileAggregationKindForAllowed":1}}'
106 | kubectl patch felixconfiguration.p default -p '{"spec":{"flowLogsFileAggregationKindForDenied":0}}'
107 | kubectl patch felixconfiguration.p default -p '{"spec":{"dnsLogsFileAggregationKind":0}}'
108 | ``` 109 | 110 | 6. Configure Felix to collect TCP stats - this uses an eBPF TC program and requires a minimum kernel version of v5.3.0. Further [documentation](https://docs.tigera.io/visibility/elastic/flow/tcpstats).
111 | 112 | ```bash 113 | kubectl patch felixconfiguration default -p '{"spec":{"flowLogsCollectTcpStats":true}}' 114 | ``` 115 | 116 | [Module 0 :arrow_left:](../modules/creating-aks-cluster.md)     [:arrow_right: Module 2](../modules/configuring-demo-apps.md) 117 | 118 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 119 |
-------------------------------------------------------------------------------- /modules/layer7-logging.md: -------------------------------------------------------------------------------- 1 | # Module 5: Layer 7 Logging and Visibility 2 | 3 | **Goal:** Enable Layer 7 visibility for Pod traffic.
4 | 5 | --- 6 | 7 | Calico Cloud can be enabled for Layer 7 application visibility, which captures the HTTP calls applications are making. Application visibility does not require a service mesh but does utilize Envoy for capturing logs. Envoy is deployed as part of an L7 Log Collector DaemonSet per Kubernetes node - this requires fewer resources than a sidecar per pod. For more info, please review the [documentation](https://docs.tigera.io/calico-cloud/visibility/elastic/l7/configure). 8 | 9 | ## Steps 10 | 11 | 1. Configure Felix for log data collection and patch Felix with AKS-specific parameters. 12 | 13 | >Enable the Policy Sync API in Felix - we configure this cluster-wide 14 | 15 | ```bash 16 | kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"policySyncPathPrefix":"/var/run/nodeagent"}}' 17 | ``` 18 | 19 | 2. Since Calico Cloud v3.11, L7 visibility is deployed using an `ApplicationLayer` resource. Calico's operator will deploy the Envoy and log collector containers as a DaemonSet. To deploy the ApplicationLayer resource: 20 | 21 | ```bash 22 | kubectl apply -f -<L7 flow logs will require a few minutes to generate, you can also restart the pods, which will enable L7 logs quicker. 64 | 65 | Once the frontend service is annotated, navigate to the `frontend-external` service IP and perform a few actions on the website. After a few moments, you should be able to see those actions in the Service Graph under the HTTP tab. 66 | 67 | 5. Review L7 logs 68 | 69 | The HTTP logs can be reviewed in `Service Graph` by clicking the `HTTP` tab.
Details of each flow can be reviewed by drilling down into the flow record. 70 | 71 | ![Service Graph L7](../img/service-graph-l7.png) 72 | 73 | [Module 4 :arrow_left:](../modules/dns-egress-access-controls.md)     [:arrow_right: Module 6](../modules/using-observability-tools.md) 74 | 75 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 76 | -------------------------------------------------------------------------------- /modules/packet-capture.md: -------------------------------------------------------------------------------- 1 | # Module 7: Packet Capture 2 | 3 | **Goal:** Configure packet capture for specific pods and review captured payload. 4 | 5 | Packet captures are Kubernetes Custom Resources, and thus native Kubernetes RBAC can be used to control which users/groups can run and access Packet Captures; this may be useful if Compliance or Governance policies mandate strict controls on running Packet Captures for specific workloads. This demo is simplified without RBAC, but further details can be found [here](https://docs.tigera.io/calico-cloud/visibility/packetcapture#enforce-rbac-for-capture-tasks-for-cli-users). 6 | 7 | ## Steps 8 | 9 | 1. Choose an endpoint you want to capture from in the Manager UI; we will use `Redis` as an example. 10 | 11 | >Note: You can see the endpoint details in the UI; we choose the service port `6379` to capture the traffic. 12 | 13 | ![select endpoint](../img/select-ep.png) 14 | 15 | ![initial packet capture](../img/initiate-pc.png) 16 | 17 | 2. Schedule the packet capture job with a specific port and time. 18 | 19 | ![schedule the job](../img/schedule-packet-capture-job.png) 20 | 21 | 3. You will see the job scheduled in the Service Graph. 22 | 23 | ![schedule packet capture](../img/schedule-packet-capture.png) 24 | 25 | 4. Download the pcap file once the job is `Capturing` or `Finished`. 26 | 27 | ![download packet capture](../img/download-packet-capture.png) 28 | 29 | 5.
Open the pcap file with Wireshark or tcpdump; you will see the ingress and egress traffic associated with the `redis` pod, i.e. `10.240.0.71` 30 | 31 | ![redis packet capture](../img/redis-pcap.png) 32 | 33 | [Module 6 :arrow_left:](../modules/using-observability-tools.md)     [:arrow_right: Module 8](../modules/using-compliance-reports.md) 34 | 35 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 36 | -------------------------------------------------------------------------------- /modules/pod-access-controls.md: -------------------------------------------------------------------------------- 1 | # Module 3: Pod access controls 2 | 3 | **Goal:** Leverage network policies to segment connections within the AKS cluster and prevent known bad actors from accessing the workloads. 4 | 5 | ## Steps 6 | 7 | 1. Test connectivity between application components and across application stacks. Since we don't have any network policies in place, the pods are reachable from any endpoint. 8 | 9 | a. Test connectivity between workloads within each namespace. 10 | 11 | ```bash 12 | # test connectivity within dev namespace 13 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://nginx-svc 2>/dev/null | grep -i http' 14 | 15 | # test connectivity within default namespace 16 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI frontend 2>/dev/null | grep -i http' 17 | 18 | kubectl exec -it $(kubectl get po -l app=frontend -ojsonpath='{.items[0].metadata.name}') -c server -- sh -c 'nc -zv productcatalogservice 3550' 19 | ``` 20 | 21 | b. Test connectivity across namespaces.
22 | 23 | ```bash 24 | # test connectivity from dev namespace to default namespace 25 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http' 26 | 27 | # test connectivity from default namespace to dev namespace 28 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI http://nginx-svc.dev 2>/dev/null | grep -i http' 29 | ``` 30 | 31 | c. Test connectivity from each namespace to the Internet. 32 | 33 | ```bash 34 | # test connectivity from dev namespace to the Internet 35 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://www.bing.com 2>/dev/null | grep -i http' 36 | 37 | # test connectivity from default namespace to the Internet 38 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI www.bing.com 2>/dev/null | grep -i http' 39 | ``` 40 | 41 | All of these tests should succeed if there are no policies in place to govern the traffic for the `dev` and `default` namespaces. 42 | 43 | 2. Apply a staged `default-deny` policy. 44 | 45 | >A staged `default-deny` policy is a good way of catching any traffic that is not explicitly allowed by a policy, without actually blocking it. 46 | 47 | ```bash 48 | kubectl apply -f demo/10-security-controls/staged.default-deny.yaml 49 | ``` 50 | 51 | Review the network policy created by clicking `Policies` on the left menu. A staged default deny policy has been created in the `default` tier. You can view or edit the policy by double-clicking it. 52 | 53 | ![Staged default-deny](../img/staged-default-deny.png) 54 | 55 | You can view the potential effect of the staged `default-deny` policy if you navigate to the `Dashboard` view in your Calico Cloud Manager UI and look at the `Packets by Policy` histogram.
56 | 57 | ![Dashboard default-deny](../img/dashboard-default-deny.png) 58 | 59 | To view more traffic in the `Packets by Policy` histogram, we can generate traffic from the `centos` pod to the `frontend` service. 60 | 61 | ```bash 62 | # make a request across namespaces and view Packets by Policy histogram 63 | for i in {1..5}; do kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http'; sleep 2; done 64 | ``` 65 | 66 | >The staged policy does not affect the traffic directly but allows you to view the policy impact if it were to be enforced. 67 | 68 | 3. Apply network policies to control East-West traffic. 69 | 70 | ```bash 71 | # deploy dev policies 72 | kubectl apply -f demo/dev/policies.yaml 73 | 74 | # deploy boutiqueshop policies 75 | kubectl apply -f demo/boutiqueshop/policies.yaml 76 | 77 | # deploy policies for pods to access metadata API 78 | kubectl apply -f demo/20-egress-access-controls/netset.metadata-api.yaml 79 | kubectl apply -f demo/20-egress-access-controls/metadata-policy.yaml 80 | ``` 81 | 82 | Now that we have proper policies in place, we can enforce the `default-deny` policy, moving closer to a zero-trust security approach. You can either enforce the already deployed staged `default-deny` policy using the `Policies Board` view in your Calico Cloud Manager UI, or you can apply an enforcing `default-deny` policy manifest. 83 | 84 | ```bash 85 | # apply enforcing default-deny policy manifest 86 | kubectl apply -f demo/10-security-controls/default-deny.yaml 87 | ``` 88 | 89 | If the above YAML definition is deployed, the `Staged default-deny` policy can be deleted through the web UI. Within the policy board, click the edit icon on the `Staged default deny` policy in the `default` tier, then click `Delete`. 90 | 91 | ![Edit policy](../img/edit-policy.png) 92 | 93 | ![Delete policy](../img/delete-policy.png) 94 | 95 | 4. Test connectivity with policies in place. 96 | 97 | a.
Only the connections between the components within each namespace that are configured by the policies should be allowed. 98 | 99 | ```bash 100 | # test connectivity within dev namespace 101 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://nginx-svc 2>/dev/null | grep -i http' 102 | 103 | # test connectivity within default namespace 104 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI frontend 2>/dev/null | grep -i http' 105 | ``` 106 | 107 | b. The connections across the `dev` and `default` namespaces should be blocked by the global `default-deny` policy. 108 | 109 | ```bash 110 | # test connectivity from dev namespace to default namespace 111 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http' 112 | 113 | # test connectivity from default namespace to dev namespace 114 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI http://nginx-svc.dev 2>/dev/null | grep -i http' 115 | ``` 116 | 117 | c. The connections to the Internet should be blocked by the configured `default-deny` policies. 118 | 119 | ```bash 120 | # test connectivity from dev namespace to the Internet 121 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://www.bing.com 2>/dev/null | grep -i http' 122 | 123 | # test connectivity from default namespace to the Internet 124 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI www.bing.com 2>/dev/null | grep -i http' 125 | ``` 126 | 127 | 5. Implement an egress policy to allow egress access from a workload in one namespace, e.g. `dev/centos`, to a service in another namespace, e.g. `default/frontend`. After the deployment, you can view the policy details under the `platform` tier in the `Policies Board`. 128 | 129 | a. Deploy the egress policy.
130 | 131 | ```bash 132 | kubectl apply -f demo/20-egress-access-controls/centos-to-frontend.yaml 133 | ``` 134 | 135 | b. Test connectivity between the `dev/centos` pod and the `default/frontend` service. 136 | 137 | ```bash 138 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http' 139 | ``` 140 | 141 | The access should be allowed once the egress policy is in place. 142 | 143 | [Module 2 :arrow_left:](../modules/configuring-demo-apps.md)     [:arrow_right: Module 4](../modules/dns-egress-access-controls.md) 144 | 145 | [:leftwards_arrow_with_hook: Back to Main](/README.md) -------------------------------------------------------------------------------- /modules/using-alerts.md: -------------------------------------------------------------------------------- 1 | # Module 9: Using alerts 2 | 3 | **Goal:** Use global alerts to notify security and operations teams about unsanctioned or suspicious activity. 4 | 5 | ## Steps 6 | 7 | 1. Review alert manifests. 8 | 9 | Navigate to `demo/50-alerts` and review the YAML manifests that represent alert definitions. Each file contains an alert template and an alert definition. Alert templates can be used to quickly create an alert definition in the UI. 10 | 11 | 2. View triggered alerts. 12 | 13 | >We implemented alerts in one of the first labs in order to see how our activity can trigger them. 14 | 15 | ```bash 16 | kubectl get globalalert 17 | ``` 18 | 19 | ```text 20 | NAME CREATED AT 21 | dns.unsanctioned.access 2021-06-10T03:24:41Z 22 | network.lateral.access 2021-06-10T03:24:43Z 23 | policy.globalnetworkset 2021-06-10T03:24:41Z 24 | ``` 25 | 26 | Open the `Alerts` view to see all triggered alerts in the cluster. Review the generated alerts. 27 | 28 | ![alerts view](../img/alerts-view.png) 29 | 30 | You can also review the alerts configuration and templates by navigating to the alerts configuration in the top right corner. 31 |
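As a reference for the shape of these alert definitions, a `GlobalAlert` looks roughly like the following. This is an illustrative sketch with made-up values, not one of the workshop's `demo/50-alerts` manifests; see the [GlobalAlert reference](https://docs.tigera.io/calico-cloud/reference/resources/globalalert) for the full schema.

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalAlert
metadata:
  name: example.dns.alert            # illustrative name
spec:
  description: "Example: DNS lookups of a watched domain"
  summary: "DNS lookup of a watched domain detected"
  severity: 100
  dataSet: dns                       # alerts can query the audit, dns, or flows data sets
  query: qname = "www.example.com"   # illustrative query expression
  aggregateBy: [client_namespace, qname]
  metric: count                      # fire when the count of matching events...
  condition: gt                      # ...is greater than...
  threshold: 0                       # ...zero in the evaluation period
```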
32 | 33 | 3. Trigger DNS alerts with a curl demo. 34 | 35 | ```bash 36 | # curl www.google.com a couple of times to trigger the dns alert 37 | kubectl -n dev exec -it netshoot -- sh -c 'curl -m3 -sI www.google.com 2>/dev/null | grep -i http' 38 | ``` 39 | 40 | 4. Trigger GlobalThreatFeed alerts from known bad actors. 41 | 42 | Calico Cloud offers the [Global threat feed](https://docs.tigera.io/calico-cloud/reference/resources/globalthreatfeed) resource to prevent known bad actors from accessing Kubernetes pods. 43 | 44 | ```bash 45 | kubectl get globalthreatfeeds 46 | ``` 47 | 48 | Output is: 49 | 50 | ```bash 51 | NAME CREATED AT 52 | alienvault.domainthreatfeeds 2021-09-28T15:01:33Z 53 | alienvault.ipthreatfeeds 2021-09-28T15:01:33Z 54 | ``` 55 | 56 | You can get the domain/IP lists from the YAML definitions; the URLs are: 57 | 58 | ```bash 59 | kubectl get globalthreatfeeds alienvault.domainthreatfeeds -ojson | jq -r '.spec.pull.http.url' 60 | 61 | kubectl get globalthreatfeeds alienvault.ipthreatfeeds -ojson | jq -r '.spec.pull.http.url' 62 | ``` 63 | 64 | Output is: 65 | 66 | ```bash 67 | https://installer.calicocloud.io/feeds/v1/domains 68 | 69 | https://installer.calicocloud.io/feeds/v1/ips 70 | ``` 71 | 72 | Generate `Suspicious IPs/Domains` alerts by reaching out to the lists above, using the first entry in each threat feed as an example: 73 | 74 | ```bash 75 | # generate suspicious DNS alerts 76 | DOMAIN=$(curl https://installer.calicocloud.io/feeds/v1/domains | awk 'NR==1') 77 | kubectl -n dev exec -t netshoot -- sh -c "ping -W2 -c1 $DOMAIN" 78 | 79 | # generate suspicious IP alerts 80 | IP=$(kubectl get globalnetworksets.crd.projectcalico.org threatfeed.alienvault.ipthreatfeeds -o jsonpath='{.spec.nets[0]}' | sed 's/...$//') 81 | kubectl -n dev exec -t netshoot -- sh -c "ping -W2 -c3 $IP" 82 | ``` 83 | 84 | Open the `Alerts` view to see all triggered alerts in the cluster. Review the generated alerts.
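The `awk`/`sed` extraction used above can be sanity-checked offline with stand-in data (the feed file and addresses below are placeholders, not real threat-feed entries):

```bash
# Stand-in for the downloaded domain feed; awk 'NR==1' picks the first line,
# just as the DOMAIN=$(...) command above does.
printf 'bad.example\nworse.example\n' > /tmp/feed-domains.txt
DOMAIN=$(awk 'NR==1' /tmp/feed-domains.txt)
echo "$DOMAIN"    # bad.example

# IP feed entries are CIDRs such as 192.0.2.1/32; sed 's/...$//' simply drops
# the last three characters ("/32") to leave a pingable address. Note this
# only works when the mask suffix is exactly three characters long.
IP=$(echo '192.0.2.1/32' | sed 's/...$//')
echo "$IP"        # 192.0.2.1
```

Because the trim is positional rather than structural, a suffix like `/8` or `/128` would need a structural alternative such as `cut -d/ -f1`.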
85 | 86 | ![alerts view all](../img/alerts-view-all.png) 87 | 88 | [Module 8 :arrow_left:](../modules/using-compliance-reports.md)     [:arrow_right: Module 10](../modules/honeypod-threat-detection.md) 89 | 90 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 91 | -------------------------------------------------------------------------------- /modules/using-compliance-reports.md: -------------------------------------------------------------------------------- 1 | # Module 8: Using compliance reports 2 | 3 | **Goal:** Use global reports to satisfy compliance requirements. 4 | 5 | ## Steps 6 | 7 | 1. Use the `Compliance Reports` view to see all generated reports. 8 | 9 | >We have deployed a few compliance reports in one of the first labs, and by this time a few reports should already have been generated. 10 | 11 | ```bash 12 | kubectl get globalreport 13 | ``` 14 | 15 | ```text 16 | NAME CREATED AT 17 | cluster-inventory 2022-04-07T00:29:45Z 18 | cluster-network-access 2022-04-07T00:29:45Z 19 | cluster-policy-audit 2022-04-07T00:29:45Z 20 | daily-cis-results 2022-04-07T00:29:44Z 21 | ``` 22 | 23 | >If you don't see any reports, you can manually kick off a report generation task. Follow the steps below if you need to do so. 24 | 25 | Calico provides the `GlobalReport` resource to offer a [Compliance reports](https://docs.tigera.io/calico-cloud/compliance/overview) capability. There are several types of reports that you can configure: 26 | 27 | - CIS benchmarks 28 | - Inventory 29 | - Network access 30 | - Policy audit 31 | 32 |
33 | 34 | A compliance report can be configured to include only specific endpoints by leveraging endpoint labels and selectors. Each report has a `schedule` field that determines how often the report is generated and sets the timeframe for the data included in the report. 35 | 36 | Compliance reports organize data in a CSV format, which can be downloaded and moved to long-term data storage to meet compliance requirements. 37 | 38 |
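The selector and `schedule` behavior described above map onto the `GlobalReport` spec roughly as follows (an illustrative sketch with made-up values, not one of the reports deployed by this workshop):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalReport
metadata:
  name: example-hourly-inventory        # illustrative name
spec:
  reportType: inventory                 # e.g. inventory | network-access | policy-audit | cis-benchmark
  schedule: 0 * * * *                   # cron format; here, hourly, covering the previous hour
  endpoints:
    selector: environment == "production"   # include only endpoints matching this label selector
```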
39 | 40 | ![compliance report](../img/compliance-report.png) 41 | 42 | 2. Generate a report at any time with a custom start/end time. 43 | 44 | a. Review and apply the YAML file for the managed cluster. 45 | 46 | The instructions below are for a managed cluster only. Follow the [configuration documentation](https://docs.tigera.io/calico-cloud/compliance/overview#run-reports) to configure compliance jobs for management and standalone clusters. We will need to change the START/END times accordingly. 47 | 48 | ```bash 49 | vi demo/40-compliance-reports/compliance-reporter-pod.yaml 50 | ``` 51 | 52 | b. We need to substitute the Cluster Name in the YAML file with the correct cluster name index. We can obtain this value and set it as the variable `CALICOCLUSTERNAME`. This enables compliance jobs to target the correct index in Elasticsearch. 53 | 54 | ```bash 55 | # obtain ElasticSearch index and set as variable 56 | CALICOCLUSTERNAME=$(kubectl get deployment -n tigera-intrusion-detection intrusion-detection-controller -ojson | \ 57 | jq -r '.spec.template.spec.containers[0].env[] | select(.name == "CLUSTER_NAME").value') 58 | 59 | # get calico version 60 | CALICOVERSION=$(kubectl get clusterinformations default -ojsonpath='{.spec.cnxVersion}') 61 | # set start and end time for the report (Linux shell) 62 | START_TIME=$(date -d '-2 hours' -u +'%Y-%m-%dT%H:%M:%SZ') 63 | END_TIME=$(date -u +'%Y-%m-%dT%H:%M:%SZ') 64 | ``` 65 | 66 | c.
Now apply the compliance job YAML. 67 | 68 | ```bash 69 | # set report name 70 | REPORT_NAME=daily-cis-results 71 | # replace tokens and apply the manifest 72 | sed -e "s/\$CALICOCLUSTERNAME/${CALICOCLUSTERNAME}/g" \ 73 | -e "s/\$CALICOVERSION/${CALICOVERSION}/g" \ 74 | -e "s/\$START_TIME/${START_TIME}/g" \ 75 | -e "s/\$END_TIME/${END_TIME}/g" \ 76 | -e "s/\$REPORT_NAME/${REPORT_NAME}/g" \ 77 | ./demo/40-compliance-reports/compliance-reporter-pod.yaml | kubectl apply -f - 78 | ``` 79 | 80 | Once the `run-reporter` job finishes, you should be able to see this report in the Manager UI and download the CSV file. 81 | 82 | 3. Reports are generated 30 minutes after the end of the reporting period, as [documented](https://docs.tigera.io/calico-cloud/compliance/overview#change-the-default-report-generation-time). As the compliance reports deployed in the [manifests](https://github.com/tigera-solutions/calicocloud-aks-workshop/tree/main/demo/40-compliance-reports) are scheduled to run every 10 minutes, the generation of reports will take between 30-60 minutes depending on when the manifests were deployed. 83 | 84 |
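The timestamp and token substitution used in step 2 can be exercised locally without a cluster. This is a sketch assuming GNU `date` (the "Linux shell" noted in step 2); on macOS/BSD, `date -v-2H` replaces `date -d '-2 hours'`. The stub manifest is a placeholder, not the real `compliance-reporter-pod.yaml`.

```bash
# Compute the report window the same way step 2 does (GNU date syntax).
START_TIME=$(date -d '-2 hours' -u +'%Y-%m-%dT%H:%M:%SZ')
END_TIME=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo "report window: $START_TIME -> $END_TIME"

# Dry-run the sed token substitution against a tiny stand-in manifest.
# Quoting the heredoc delimiter ('EOF') keeps the $TOKENS literal in the file.
cat > /tmp/report-stub.yaml <<'EOF'
reportName: $REPORT_NAME
start: $START_TIME
end: $END_TIME
EOF

REPORT_NAME=daily-cis-results
sed -e "s/\$REPORT_NAME/${REPORT_NAME}/g" \
    -e "s/\$START_TIME/${START_TIME}/g" \
    -e "s/\$END_TIME/${END_TIME}/g" \
    /tmp/report-stub.yaml
```

Piping the substituted output through `kubectl apply -f -`, as step 2 does with the real manifest, then submits the rendered YAML to the cluster.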
85 | 86 | [Module 7 :arrow_left:](../modules/packet-capture.md)     [:arrow_right: Module 9](../modules/using-alerts.md) 87 | 88 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 89 | -------------------------------------------------------------------------------- /modules/using-observability-tools.md: -------------------------------------------------------------------------------- 1 | # Module 6: Using observability tools 2 | 3 | **Goal:** Explore Calico observability tools. 4 | 5 | ## Calico observability tools 6 | 7 | 1. Dashboard 8 | 9 | The `Dashboard` view in the Calico Cloud Manager UI presents a high-level overview of what's going on in your cluster. The view shows the following information: 10 | 11 | - Connections, Allowed Bytes and Packets 12 | - Denied Bytes and Packets 13 | - Total number of Policies, Endpoints and Nodes 14 | - Summary of CIS benchmarks 15 | - Count of triggered alerts 16 | - Packets by Policy histogram that shows allowed and denied traffic as it is being evaluated by network policies 17 | 18 | ![dashboard overall view](../img/dashboard-overall-view.png) 19 | 20 | 2. Policies Board 21 | 22 | The `Policies Board` shows all policies deployed in the cluster, organized into `policy tiers`. You can control what a user can see and do by configuring Kubernetes RBAC roles, which determine what the user can see in this view. You can also use controls to hide away tiers you're not interested in at any given time. 23 | 24 | ![policies board](../img/policies-board.png) 25 | 26 | By leveraging the stats controls, you can toggle additional metrics to be listed for each shown policy. 27 | 28 | ![policies board stats](../img/policies-board-stats.png) 29 | 30 | 3. Audit timeline 31 | 32 | The `Timeline` view shows the audit trail of created, deleted, or modified resources. 33 | 34 | ![timeline view](../img/timeline-view.png) 35 | 36 | 4. Endpoints 37 | 38 | The `Endpoints` view lists all endpoints known to Calico.
It includes all Kubernetes endpoints, such as Pods, as well as Host endpoints that can represent a Kubernetes host or an external VM or bare-metal machine. 39 | 40 | ![endpoints view](../img/endpoints-view.png) 41 | 42 | 5. Service Graph 43 | 44 | The dynamic `Service Graph` presents network flows from a service-level perspective. The top-level view shows how traffic flows between namespaces as well as external and internal endpoints. 45 | 46 | ![service graph node view](../img/service-graph-node.png) 47 | 48 | - When you select any node representing a namespace, you will get additional details about the namespace, such as incoming and outgoing traffic, policies evaluating each flow, and DNS metrics. 49 | - When you select any edge, you will get details about the flows representing that edge. 50 | - If you expand a namespace by double-clicking on it, you will get the view of all components of the namespace. 51 | 52 | 6. Flow Visualizations 53 | 54 | The `Flow Visualizations` view shows all point-to-point flows in the cluster. It allows you to see the cluster traffic from the network point of view. 55 | 56 | ![flow viz view](../img/flow-viz.png) 57 | 58 | 7. Kibana dashboards 59 | 60 | The `Kibana` component comes with the Calico Cloud offering and provides access to raw flow, audit, and DNS logs, as well as the ability to visualize the collected data in various dashboards. 61 | 62 | When you log in to Kibana, you can choose a predefined dashboard or create your own; below is the "Tigera Flow Logs" dashboard. 63 | 64 | ![kibana flows](../img/kibana-flow-logs.png) 65 | 66 | Some of the default dashboards you get access to are DNS Logs, Flow Logs, Audit Logs, Kubernetes API calls, L7 HTTP metrics, and others. 67 | 68 | [Module 5 :arrow_left:](../modules/layer7-logging.md)     [:arrow_right: Module 7](../modules/packet-capture.md) 69 | 70 | [:leftwards_arrow_with_hook: Back to Main](/README.md) 71 | --------------------------------------------------------------------------------