├── .gitignore
├── README.md
├── configs
│   └── trust-policy.json
├── demo
│   ├── 00-tiers
│   │   └── tiers.yaml
│   ├── 01-base
│   │   ├── allow-kube-dns.yaml
│   │   ├── quarantine-policy.yaml
│   │   └── tiers-pass-policy.yaml
│   ├── 10-security-controls
│   │   ├── default-deny.yaml
│   │   ├── feodo-block-policy.yaml
│   │   ├── feodotracker.threatfeed.yaml
│   │   └── staged.default-deny.yaml
│   ├── 20-egress-access-controls
│   │   ├── centos-to-frontend.yaml
│   │   ├── dns-policy.netset.yaml
│   │   ├── dns-policy.yaml
│   │   └── netset.external-apis.yaml
│   ├── 30-secure-hep
│   │   ├── felixconfiguration.yaml
│   │   ├── frontend-nodeport-access.yaml
│   │   ├── kubelet-access.yaml
│   │   └── ssh-access.yaml
│   ├── 40-compliance-reports
│   │   ├── boutiqueshop-reports.yaml
│   │   ├── cluster-reporter-pods.yaml
│   │   ├── cluster-reports.yaml
│   │   └── daily-cis-results.yaml
│   ├── 50-alerts
│   │   ├── boutiqueshop.unsanctioned.access.yaml
│   │   ├── globalnetworkset.changed.yaml
│   │   ├── unsanctioned.dns.access.yaml
│   │   └── unsanctioned.lateral.access.yaml
│   ├── 60-packet-capture
│   │   └── nginx-pcap.yaml
│   ├── 70-deep-packet-inspection
│   │   └── nginx-dpi.yaml
│   ├── 80-image-assurance
│   │   ├── tigera-image-assurance-admission-controller-deploy.yaml
│   │   └── tigera-image-assurance-admission-controller-policy.yaml
│   ├── boutiqueshop
│   │   ├── policies.yaml
│   │   ├── staged.default-deny.yaml
│   │   └── staged.policies.yaml
│   └── dev
│       ├── app.manifests.yaml
│       └── policies.yaml
├── img
│   ├── alerts-view.png
│   ├── calico-on-eks.png
│   ├── cloud9-aws-settings.png
│   ├── cloud9-manage-ec2.png
│   ├── compliance-report.png
│   ├── connect-cluster.png
│   ├── dashboard-view.png
│   ├── enable-polrec.png
│   ├── endpoints-view.png
│   ├── expand-menu.png
│   ├── flow-viz.png
│   ├── kibana-flow-logs.png
│   ├── modify-iam-role.png
│   ├── policies-board-stats.png
│   ├── policies-board.png
│   ├── polrec-settings.png
│   ├── service-graph-node.png
│   └── timeline-view.png
└── modules
    ├── configuring-demo-apps.md
    ├── creating-eks-cluster.md
    ├── deep-packet-inspection.md
    ├── dynamic-packet-capture.md
    ├── enable-l7-logs.md
    ├── joining-eks-to-calico-cloud.md
    ├── namespace-isolation.md
    ├── securing-heps.md
    ├── setting-up-work-environment.md
    ├── using-alerts.md
    ├── using-compliance-reports.md
    ├── using-egress-access-controls.md
    ├── using-observability-tools.md
    ├── using-security-controls.md
    └── vulnerability-management.md
/.gitignore:
--------------------------------------------------------------------------------
1 | .*
2 | !/.gitignore
3 | configs/*.yaml
4 | **/admission_controller*.pem
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Calico workshop on EKS
2 |
3 |
4 |
5 | ## Workshop objectives
6 |
7 | The intent of this workshop is to educate anyone who works with EKS clusters about Calico features and how to use them. While Calico provides many capabilities, this workshop focuses on the subset that different types of technical users need most often.
8 |
9 | ## Use cases
10 |
11 | In this workshop we are going to focus on these main use cases:
12 |
13 | - **East-West security**, leveraging a zero-trust security approach.
14 | - **Namespace isolation**, leveraging the Policy Recommendation engine to auto-generate policies that protect applications at the namespace level.
15 | - **Egress access controls**, using DNS policy to access external resources by their fully qualified domain names (FQDNs).
16 | - **Host micro-segmentation**, leveraging Calico policies to protect host ports and host-based services.
17 | - **Observability**, exploring various logs and application-level metrics collected by Calico.
18 | - **Compliance**, providing proof of security compliance.
19 | - **Security alerts**, configuring alerts to notify security and operations teams of any security incidents or anomalous behaviors.
20 | - **Dynamic packet capture**, capturing full packet payload on demand for further forensic analysis.
21 |
22 | ## Join the Slack Channel
23 |
24 | [Calico User Group Slack](https://slack.projectcalico.org/) is a great resource to ask any questions about Calico. If you are not a part of this Slack group yet, we highly recommend [joining it](https://slack.projectcalico.org/) to participate in discussions or ask questions. For example, you can ask questions specific to EKS and other managed Kubernetes services in the `#eks-aks-gke-iks` channel.
25 |
26 | ## Workshop prerequisites
27 |
28 | >It is recommended to use a personal AWS account with full access to AWS resources. If you are using a corporate AWS account for the workshop, check with your account administrator to make sure you have sufficient permissions to create and manage EKS clusters and Load Balancer resources.
29 |
30 | - [Calico Cloud trial account](https://www.tigera.io/tigera-products/calico-cloud/)
31 |   - for an instructor-led workshop, use the instructions in the email you receive to request a Calico Cloud trial account
32 |   - for a self-paced workshop, follow the [link to register](https://www.tigera.io/tigera-products/calico-cloud/) for a Calico Cloud trial account
33 | - AWS account and credentials to manage AWS resources
34 | - Terminal or Command Line console to work with AWS resources and EKS cluster
35 |   - the most common environments are Cloud9, macOS, Linux, and Windows WSL2
36 | - `Git`
37 | - `netcat`
38 |
39 | >This workshop has been designed to use an [AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/tutorial.html) instance as the workspace environment. If you're familiar with the tools listed in the prerequisites section, feel free to use whichever workspace environment you are most comfortable with.
40 |
41 | ## Modules
42 |
43 | - [Module 1: Setting up workspace environment](modules/setting-up-work-environment.md)
44 | - [Module 2: Creating EKS cluster](modules/creating-eks-cluster.md)
45 | - [Module 3: Joining EKS cluster to Calico Cloud](modules/joining-eks-to-calico-cloud.md)
46 | - [Module 4: Configuring demo applications](modules/configuring-demo-apps.md)
47 | - [Module 5: Enable application layer monitoring (L7 logs)](modules/enable-l7-logs.md)
48 | - [Module 6: Namespace isolation](modules/namespace-isolation.md)
49 | - [Module 7: Using security controls](modules/using-security-controls.md)
50 | - [Module 8: Using egress access controls](modules/using-egress-access-controls.md)
51 | - [Module 9: Securing EKS hosts](modules/securing-heps.md)
52 | - [Module 10: Using observability tools](modules/using-observability-tools.md)
53 | - [Module 11: Using compliance reports](modules/using-compliance-reports.md)
54 | - [Module 12: Using alerts](modules/using-alerts.md)
55 | - [Module 13: Dynamic packet capture](modules/dynamic-packet-capture.md)
56 | - [Module 14: Deep packet inspection](modules/deep-packet-inspection.md)
57 | - [Module 15: Vulnerability management](modules/vulnerability-management.md)
58 |
59 | ## Cleanup
60 |
61 | 1. Delete the application stack to clean up any `LoadBalancer` services.
62 |
63 | ```bash
64 | kubectl delete -f demo/dev/app.manifests.yaml
65 | kubectl delete -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml
66 | ```
67 |
68 | 2. Delete the EKS cluster.
69 |
70 | ```bash
71 | eksctl delete cluster --name tigera-workshop
72 | ```
73 |
74 | 3. Delete the EC2 key pair.
75 | 
76 | >If you created an EC2 key pair for the EKS cluster, you can remove it when it is no longer needed.
77 |
78 | ```bash
79 | export KEYPAIR_NAME=''
80 | aws ec2 delete-key-pair --key-name $KEYPAIR_NAME
81 | ```
82 |
83 | 4. Delete the Cloud9 instance.
84 |
85 | Navigate to `AWS Console` > `Services` > `Cloud9` and remove your workspace environment, e.g. `tigera-workspace`.
86 |
87 | 5. Delete the IAM role created for this workshop.
88 |
89 | ```bash
90 | # use your local shell to set AWS credentials if needed
91 | # otherwise skip these two lines and execute commands below
92 | export AWS_ACCESS_KEY_ID=""
93 | export AWS_SECRET_ACCESS_KEY=""
94 |
95 | # delete IAM role
96 | IAM_ROLE='tigera-workshop-admin'
97 | ADMIN_POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AdministratorAccess`].Arn' --output text)
98 | aws iam detach-role-policy --role-name $IAM_ROLE --policy-arn $ADMIN_POLICY_ARN
99 | aws iam remove-role-from-instance-profile --instance-profile-name $IAM_ROLE --role-name $IAM_ROLE
100 | # if this command fails, you can remove the role via AWS Console once you delete the Cloud9 instance
101 | aws iam delete-instance-profile --instance-profile-name $IAM_ROLE
102 | aws iam delete-role --role-name $IAM_ROLE
103 | ```
104 |
--------------------------------------------------------------------------------
/configs/trust-policy.json:
--------------------------------------------------------------------------------
1 | {
2 | "Version": "2012-10-17",
3 | "Statement": [
4 | {
5 | "Effect": "Allow",
6 | "Action": "sts:AssumeRole",
7 | "Principal": {
8 | "Service": [
9 | "ec2.amazonaws.com"
10 | ]
11 | }
12 | }
13 | ]
14 | }
--------------------------------------------------------------------------------
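
This trust policy allows EC2 instances (such as the Cloud9 workspace) to assume the workshop IAM role. A minimal sketch of how it could be used to create the `tigera-workshop-admin` role that the README cleanup steps later remove; the role and instance profile names come from the README, and your account setup may differ:

```bash
# create the IAM role with this trust policy and give it admin permissions
aws iam create-role --role-name tigera-workshop-admin \
  --assume-role-policy-document file://configs/trust-policy.json
aws iam attach-role-policy --role-name tigera-workshop-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# expose the role to EC2 instances through an instance profile
aws iam create-instance-profile --instance-profile-name tigera-workshop-admin
aws iam add-role-to-instance-profile --instance-profile-name tigera-workshop-admin \
  --role-name tigera-workshop-admin
```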
/demo/00-tiers/tiers.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: Tier
4 | metadata:
5 | name: security
6 | spec:
7 | order: 400
8 |
9 | ---
10 | apiVersion: projectcalico.org/v3
11 | kind: Tier
12 | metadata:
13 | name: platform
14 | spec:
15 | order: 500
16 |
17 | ---
18 | apiVersion: projectcalico.org/v3
19 | kind: Tier
20 | metadata:
21 | name: application
22 | spec:
23 | order: 600
24 |
--------------------------------------------------------------------------------
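
Tiers are evaluated in ascending `order`, so traffic is checked by `security` (400), then `platform` (500), then `application` (600) before reaching the default tier. A quick sketch of applying and listing them, assuming the cluster is already connected to Calico Cloud:

```bash
kubectl apply -f demo/00-tiers/tiers.yaml
# tiers are evaluated lowest order first: security -> platform -> application
kubectl get tiers.projectcalico.org
```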
/demo/01-base/allow-kube-dns.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalNetworkPolicy
3 | metadata:
4 | name: platform.allow-kube-dns
5 | spec:
6 | # requires platform tier to exist
7 | tier: platform
8 | order: 2000
9 | selector: all()
10 | types:
11 | - Egress
12 | egress:
13 | - action: Allow
14 | protocol: UDP
15 | source: {}
16 | destination:
17 | selector: "k8s-app == 'kube-dns'"
18 | ports:
19 | - '53'
20 | # - action: Pass
21 | # source: {}
22 | # destination: {}
23 |
--------------------------------------------------------------------------------
/demo/01-base/quarantine-policy.yaml:
--------------------------------------------------------------------------------
1 | # quarantine policy
2 | ---
3 | apiVersion: projectcalico.org/v3
4 | kind: GlobalNetworkPolicy
5 | metadata:
6 | name: security.quarantine
7 | spec:
8 | tier: security
9 | order: 1
10 | selector: "quarantine == 'true'"
11 | ingress:
12 | - action: Deny
13 | source: {}
14 | destination: {}
15 | egress:
16 | - action: Deny
17 | source: {}
18 | destination: {}
19 | types:
20 | - Ingress
21 | - Egress
22 |
--------------------------------------------------------------------------------
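
Because the policy selects on `quarantine == 'true'`, isolating a compromised workload is just a matter of labeling it; the pod name below is purely illustrative:

```bash
# cut off all ingress and egress traffic for a suspect pod
kubectl label pod <suspect-pod> -n dev quarantine=true
# remove the label to release the pod from quarantine
kubectl label pod <suspect-pod> -n dev quarantine-
```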
/demo/01-base/tiers-pass-policy.yaml:
--------------------------------------------------------------------------------
1 | # security tier pass policy
2 | ---
3 | apiVersion: projectcalico.org/v3
4 | kind: GlobalNetworkPolicy
5 | metadata:
6 | name: security.pass-to-next-tier
7 | spec:
8 | tier: security
9 | order: 2000
10 | selector: all()
11 | namespaceSelector: ''
12 | serviceAccountSelector: ''
13 | ingress:
14 | - action: Pass
15 | source: {}
16 | destination: {}
17 | egress:
18 | - action: Pass
19 | source: {}
20 | destination: {}
21 | doNotTrack: false
22 | applyOnForward: false
23 | preDNAT: false
24 | types:
25 | - Ingress
26 | - Egress
27 |
28 | # platform tier pass policy
29 | ---
30 | apiVersion: projectcalico.org/v3
31 | kind: GlobalNetworkPolicy
32 | metadata:
33 | name: platform.pass-to-next-tier
34 | spec:
35 | tier: platform
36 | order: 2000
37 | selector: all()
38 | namespaceSelector: ''
39 | serviceAccountSelector: ''
40 | ingress:
41 | - action: Pass
42 | source: {}
43 | destination: {}
44 | egress:
45 | - action: Pass
46 | source: {}
47 | destination: {}
48 | doNotTrack: false
49 | applyOnForward: false
50 | preDNAT: false
51 | types:
52 | - Ingress
53 | - Egress
54 |
55 | # application tier pass policy
56 | ---
57 | apiVersion: projectcalico.org/v3
58 | kind: GlobalNetworkPolicy
59 | metadata:
60 | name: application.pass-to-next-tier
61 | spec:
62 | tier: application
63 | order: 2000
64 | selector: all()
65 | namespaceSelector: ''
66 | serviceAccountSelector: ''
67 | ingress:
68 | - action: Pass
69 | source: {}
70 | destination: {}
71 | egress:
72 | - action: Pass
73 | source: {}
74 | destination: {}
75 | doNotTrack: false
76 | applyOnForward: false
77 | preDNAT: false
78 | types:
79 | - Ingress
80 | - Egress
81 |
--------------------------------------------------------------------------------
/demo/10-security-controls/default-deny.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalNetworkPolicy
3 | metadata:
4 | name: default-deny
5 | spec:
6 | order: 2000
7 | selector: "projectcalico.org/namespace in {'dev','default'}"
8 | types:
9 | - Ingress
10 | - Egress
11 |
--------------------------------------------------------------------------------
/demo/10-security-controls/feodo-block-policy.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: GlobalNetworkPolicy
4 | metadata:
5 | name: security.block-feodo
6 | spec:
7 | tier: security
8 | order: 210
9 | selector: all()
10 | types:
11 | - Egress
12 | egress:
13 | - action: Deny
14 | destination:
15 | selector: threatfeed == 'feodo'
16 | - action: Pass
17 |
--------------------------------------------------------------------------------
/demo/10-security-controls/feodotracker.threatfeed.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalThreatFeed
3 | metadata:
4 | name: feodo-tracker
5 | spec:
6 | pull:
7 | http:
8 | url: https://feodotracker.abuse.ch/downloads/ipblocklist.txt
9 | globalNetworkSet:
10 | labels:
11 | threatfeed: feodo
--------------------------------------------------------------------------------
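
Once the feed is pulled, Calico maintains a GlobalNetworkSet labeled `threatfeed: feodo` containing the blocklisted IPs, which is exactly what the `security.block-feodo` policy above selects on. A sketch of verifying that the feed synced; the `threatfeed.<feed-name>` naming of the generated set is an assumption based on default Calico behavior:

```bash
kubectl get globalthreatfeeds feodo-tracker
# inspect the IP list pulled from the feed
kubectl get globalnetworksets threatfeed.feodo-tracker -o yaml
```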
/demo/10-security-controls/staged.default-deny.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: StagedGlobalNetworkPolicy
3 | metadata:
4 | name: default-deny
5 | spec:
6 | order: 2000
7 | selector: "projectcalico.org/namespace in {'dev','default'}"
8 | types:
9 | - Ingress
10 | - Egress
11 |
--------------------------------------------------------------------------------
/demo/20-egress-access-controls/centos-to-frontend.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: NetworkPolicy
4 | metadata:
5 | name: platform.centos-to-frontend
6 | namespace: dev
7 | spec:
8 | tier: platform
9 | order: 100
10 | selector: app == "centos"
11 | types:
12 | - Egress
13 | egress:
14 | - action: Allow
15 | protocol: UDP
16 | destination:
17 | selector: k8s-app == "kube-dns"
18 | namespaceSelector: projectcalico.org/name == "kube-system"
19 | ports:
20 | - 53
21 | - action: Allow
22 | protocol: TCP
23 | source: {}
24 | destination:
25 | selector: app == "frontend"
26 | namespaceSelector: projectcalico.org/name == "default"
27 |
--------------------------------------------------------------------------------
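
With a default-deny in place, this policy is what re-enables the `centos` pod in the `dev` namespace to resolve DNS and reach the `frontend` service. A hedged connectivity check, assuming the pod carries the `app=centos` label from the demo app manifests:

```bash
CENTOS_POD=$(kubectl -n dev get pod -l app=centos -o jsonpath='{.items[0].metadata.name}')
# should print an HTTP status line once the policy is applied
kubectl -n dev exec -t $CENTOS_POD -- sh -c 'curl -m3 -sI frontend.default 2>/dev/null | grep -i http'
```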
/demo/20-egress-access-controls/dns-policy.netset.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalNetworkPolicy
3 | metadata:
4 | name: security.external-domains-access
5 | spec:
6 | # requires security tier
7 | tier: security
8 | selector: (app == "centos" && projectcalico.org/namespace == "dev")
9 | order: 200
10 | types:
11 | - Egress
12 | egress:
13 | - action: Allow
14 | destination:
15 | selector: type == "external-apis"
16 |
--------------------------------------------------------------------------------
/demo/20-egress-access-controls/dns-policy.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalNetworkPolicy
3 | metadata:
4 | name: security.external-domains-access
5 | spec:
6 | # requires security tier
7 | tier: security
8 | selector: (app == "centos" && projectcalico.org/namespace == "dev")
9 | order: 200
10 | types:
11 | - Egress
12 | egress:
13 | - action: Allow
14 | source:
15 | selector: app == 'centos'
16 | destination:
17 | domains:
18 | - '*.twilio.com'
19 |
--------------------------------------------------------------------------------
/demo/20-egress-access-controls/netset.external-apis.yaml:
--------------------------------------------------------------------------------
1 | kind: GlobalNetworkSet
2 | apiVersion: projectcalico.org/v3
3 | metadata:
4 | name: external-apis
5 | labels:
6 | type: external-apis
7 | spec:
8 | allowedEgressDomains:
9 | - '*.twilio.com'
10 |
--------------------------------------------------------------------------------
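
The two variants above implement the same egress control: `dns-policy.yaml` embeds the allowed domains directly in the policy rule, while `dns-policy.netset.yaml` plus this GlobalNetworkSet factor the domain list out so it can be updated without editing the policy. A sketch of testing either variant, reusing the `$CENTOS_POD` lookup from the previous example:

```bash
# allowed: matches the *.twilio.com domain list
kubectl -n dev exec -t $CENTOS_POD -- sh -c 'curl -m3 -sI https://api.twilio.com 2>/dev/null | grep -i http'
# not in the allowed list: this request should time out
kubectl -n dev exec -t $CENTOS_POD -- sh -c 'curl -m3 -sI https://www.google.com 2>/dev/null | grep -i http'
```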
/demo/30-secure-hep/felixconfiguration.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: FelixConfiguration
4 | metadata:
5 | name: default
6 | spec:
7 | flowLogsFlushInterval: 10s
8 | flowLogsFileAggregationKindForAllowed: 1
9 | dnsLogsFlushInterval: 10s
10 | logSeverityScreen: Info
11 | failsafeInboundHostPorts:
12 | #- protocol: tcp
13 | # port: 22
14 |     - protocol: udp
15 | port: 68
16 | net: 0.0.0.0/0
17 | - protocol: tcp
18 | port: 179
19 | net: 0.0.0.0/0
20 | - protocol: tcp
21 | port: 2379
22 | net: 0.0.0.0/0
23 | - protocol: tcp
24 | port: 6443
25 | net: 0.0.0.0/0
26 | failsafeOutboundHostPorts:
27 | - protocol: udp
28 | port: 53
29 | net: 0.0.0.0/0
30 |     - protocol: udp
31 | port: 67
32 | net: 0.0.0.0/0
33 | - protocol: tcp
34 | port: 179
35 | net: 0.0.0.0/0
36 | - protocol: tcp
37 | port: 2379
38 | net: 0.0.0.0/0
39 | - protocol: tcp
40 | port: 2380
41 | net: 0.0.0.0/0
42 | - protocol: tcp
43 | port: 6443
44 | net: 0.0.0.0/0
--------------------------------------------------------------------------------
/demo/30-secure-hep/frontend-nodeport-access.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalNetworkPolicy
3 | metadata:
4 | name: security.frontend-nodeport-access
5 | spec:
6 | tier: security
7 | order: 100
8 | selector: has(eks.amazonaws.com/nodegroup)
9 | # Allow all traffic to localhost.
10 | ingress:
11 | - action: Allow
12 | destination:
13 | nets:
14 | - 127.0.0.1/32
15 | # Allow node port access only from specific CIDR.
16 | - action: Deny
17 | protocol: TCP
18 | source:
19 | notNets:
20 | - ${CLOUD9_IP}
21 | destination:
22 | ports:
23 | - 30080
24 | doNotTrack: false
25 | applyOnForward: true
26 | preDNAT: true
27 | types:
28 | - Ingress
29 |
--------------------------------------------------------------------------------
/demo/30-secure-hep/kubelet-access.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalNetworkPolicy
3 | metadata:
4 | name: security.kubelet-access
5 | spec:
6 | tier: security
7 | order: 120
8 | selector: has(eks.amazonaws.com/nodegroup)
9 | ingress:
10 | # This rule allows all traffic to localhost.
11 | - action: Allow
12 | destination:
13 | nets:
14 | - 127.0.0.1/32
15 |   # This rule allows access to the kubelet API.
16 | - action: Allow
17 | protocol: TCP
18 | destination:
19 | ports:
20 | - '10250'
21 | types:
22 | - Ingress
23 |
--------------------------------------------------------------------------------
/demo/30-secure-hep/ssh-access.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalNetworkPolicy
3 | metadata:
4 | name: security.ssh-access
5 | spec:
6 | tier: security
7 | order: 110
8 | selector: has(eks.amazonaws.com/nodegroup)
9 | # Allow all traffic to localhost.
10 | ingress:
11 | - action: Allow
12 | destination:
13 | nets:
14 | - 127.0.0.1/32
15 | # Allow only SSH port access.
16 | - action: Allow
17 | protocol: TCP
18 | source:
19 | nets:
20 | - ${CLOUD9_IP}
21 | destination:
22 | ports:
23 | - "22"
--------------------------------------------------------------------------------
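
Both `frontend-nodeport-access.yaml` and this policy contain a `${CLOUD9_IP}` placeholder that must be replaced with your workspace's public IP (as a /32 CIDR) before applying. A sketch, assuming a Cloud9 instance that can reach the EC2 metadata service:

```bash
# resolve the Cloud9 instance public IP and substitute the placeholder
export CLOUD9_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)/32
sed -e "s|\${CLOUD9_IP}|${CLOUD9_IP}|g" demo/30-secure-hep/ssh-access.yaml | kubectl apply -f -
sed -e "s|\${CLOUD9_IP}|${CLOUD9_IP}|g" demo/30-secure-hep/frontend-nodeport-access.yaml | kubectl apply -f -
```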
/demo/40-compliance-reports/boutiqueshop-reports.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: GlobalReport
4 | metadata:
5 | name: boutiqueshop-inventory
6 | labels:
7 | deployment: production
8 | spec:
9 | reportType: inventory
10 | endpoints:
11 | namespaces:
12 | names: ["default"]
13 | schedule: '*/30 * * * *'
14 |
15 | ---
16 | apiVersion: projectcalico.org/v3
17 | kind: GlobalReport
18 | metadata:
19 | name: boutiqueshop-network-access
20 | labels:
21 | deployment: production
22 | spec:
23 | reportType: network-access
24 | endpoints:
25 | namespaces:
26 | names: ["default"]
27 | schedule: '*/30 * * * *'
28 |
29 | # uncomment the policy-audit report if you have configured audit logs for the EKS cluster: https://docs.tigera.io/compliance/compliance-reports/compliance-managed-cloud#enable-audit-logs-in-eks
30 | # ---
31 | # apiVersion: projectcalico.org/v3
32 | # kind: GlobalReport
33 | # metadata:
34 | # name: boutiqueshop-policy-audit
35 | # labels:
36 | # deployment: production
37 | # spec:
38 | # reportType: policy-audit
39 | # endpoints:
40 | # namespaces:
41 | # names: ["default"]
42 | # schedule: '*/30 * * * *'
43 |
--------------------------------------------------------------------------------
/demo/40-compliance-reports/cluster-reporter-pods.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Pod
3 | metadata:
4 | name: run-reporter-cis
5 | namespace: tigera-compliance
6 | labels:
7 | k8s-app: compliance-reporter
8 | spec:
9 | nodeSelector:
10 | kubernetes.io/os: linux
11 | restartPolicy: Never
12 | serviceAccount: tigera-compliance-reporter
13 | serviceAccountName: tigera-compliance-reporter
14 | tolerations:
15 | - key: node-role.kubernetes.io/master
16 | effect: NoSchedule
17 | imagePullSecrets:
18 | - name: tigera-pull-secret
19 | containers:
20 | - name: reporter
21 | # Modify this image name, if you have re-tagged the image and are using a local
22 | # docker image repository.
23 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
24 | image: quay.io/tigera/compliance-reporter:
25 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
26 | env:
27 | # Modify this value with name of an existing globalreport resource.
28 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
29 | - name: TIGERA_COMPLIANCE_REPORT_NAME
30 | value:
31 | # Modify these values with the start and end time frame that should be reported on.
32 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
33 | #- name: TIGERA_COMPLIANCE_REPORT_START_TIME
34 | # value:
35 | #- name: TIGERA_COMPLIANCE_REPORT_END_TIME
36 | # value:
37 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
38 | - name: TIGERA_COMPLIANCE_REPORT_START_TIME
39 | value: ""
40 | - name: TIGERA_COMPLIANCE_REPORT_END_TIME
41 | value: ""
42 | # By default reports get generated 30 min after the end of report time.
43 | # Modify this variable to change the default delay.
44 | - name: TIGERA_COMPLIANCE_JOB_START_DELAY
45 | value: "0m"
46 | - name: LOG_LEVEL
47 | value: "warning"
48 | # - name: ELASTIC_INDEX_SUFFIX
49 | # value: cluster
50 | - name: ELASTIC_INDEX_SUFFIX
51 | value:
52 | - name: ELASTIC_SCHEME
53 | value: https
54 | - name: ELASTIC_HOST
55 | value: tigera-secure-es-gateway-http.tigera-elasticsearch.svc
56 | - name: ELASTIC_PORT
57 | value: "9200"
58 | - name: ELASTIC_USER
59 | valueFrom:
60 | secretKeyRef:
61 | name: tigera-ee-compliance-reporter-elasticsearch-access
62 | key: username
63 | optional: true
64 | - name: ELASTIC_PASSWORD
65 | valueFrom:
66 | secretKeyRef:
67 | name: tigera-ee-compliance-reporter-elasticsearch-access
68 | key: password
69 | optional: true
70 | - name: ELASTIC_SSL_VERIFY
71 | value: "true"
72 | - name: ELASTIC_CA
73 | value: /etc/ssl/elastic/ca.pem
74 | volumeMounts:
75 | - mountPath: /var/log/calico
76 | name: var-log-calico
77 | - name: elastic-ca-cert-volume
78 | mountPath: /etc/ssl/elastic/
79 | livenessProbe:
80 | httpGet:
81 | path: /liveness
82 | port: 9099
83 | host: localhost
84 | resources: {}
85 | volumes:
86 | - name: var-log-calico
87 | hostPath:
88 | path: /var/log/calico
89 | type: DirectoryOrCreate
90 | - name: elastic-ca-cert-volume
91 | secret:
92 | optional: true
93 | items:
94 | - key: tls.crt
95 | path: ca.pem
96 | secretName: tigera-secure-es-gateway-http-certs-public
97 | ---
98 | apiVersion: v1
99 | kind: Pod
100 | metadata:
101 | name: run-reporter-network-access
102 | namespace: tigera-compliance
103 | labels:
104 | k8s-app: compliance-reporter
105 | spec:
106 | nodeSelector:
107 | kubernetes.io/os: linux
108 | restartPolicy: Never
109 | serviceAccount: tigera-compliance-reporter
110 | serviceAccountName: tigera-compliance-reporter
111 | tolerations:
112 | - key: node-role.kubernetes.io/master
113 | effect: NoSchedule
114 | imagePullSecrets:
115 | - name: tigera-pull-secret
116 | containers:
117 | - name: reporter
118 | # Modify this image name, if you have re-tagged the image and are using a local
119 | # docker image repository.
120 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
121 | image: quay.io/tigera/compliance-reporter:
122 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
123 | env:
124 | # Modify this value with name of an existing globalreport resource.
125 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
126 | - name: TIGERA_COMPLIANCE_REPORT_NAME
127 | value:
128 | # Modify these values with the start and end time frame that should be reported on.
129 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
130 | #- name: TIGERA_COMPLIANCE_REPORT_START_TIME
131 | # value:
132 | #- name: TIGERA_COMPLIANCE_REPORT_END_TIME
133 | # value:
134 | - name: TIGERA_COMPLIANCE_REPORT_START_TIME
135 | value: ""
136 | - name: TIGERA_COMPLIANCE_REPORT_END_TIME
137 | value: ""
138 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
139 | # By default reports get generated 30 min after the end of report time.
140 | # Modify this variable to change the default delay.
141 | - name: TIGERA_COMPLIANCE_JOB_START_DELAY
142 | value: "0m"
143 | - name: LOG_LEVEL
144 | value: "warning"
145 | # - name: ELASTIC_INDEX_SUFFIX
146 | # value: cluster
147 | - name: ELASTIC_INDEX_SUFFIX
148 | value:
149 | - name: ELASTIC_SCHEME
150 | value: https
151 | - name: ELASTIC_HOST
152 | value: tigera-secure-es-gateway-http.tigera-elasticsearch.svc
153 | - name: ELASTIC_PORT
154 | value: "9200"
155 | - name: ELASTIC_USER
156 | valueFrom:
157 | secretKeyRef:
158 | name: tigera-ee-compliance-reporter-elasticsearch-access
159 | key: username
160 | optional: true
161 | - name: ELASTIC_PASSWORD
162 | valueFrom:
163 | secretKeyRef:
164 | name: tigera-ee-compliance-reporter-elasticsearch-access
165 | key: password
166 | optional: true
167 | - name: ELASTIC_SSL_VERIFY
168 | value: "true"
169 | - name: ELASTIC_CA
170 | value: /etc/ssl/elastic/ca.pem
171 | volumeMounts:
172 | - mountPath: /var/log/calico
173 | name: var-log-calico
174 | - name: elastic-ca-cert-volume
175 | mountPath: /etc/ssl/elastic/
176 | livenessProbe:
177 | httpGet:
178 | path: /liveness
179 | port: 9099
180 | host: localhost
181 | resources: {}
182 | volumes:
183 | - name: var-log-calico
184 | hostPath:
185 | path: /var/log/calico
186 | type: DirectoryOrCreate
187 | - name: elastic-ca-cert-volume
188 | secret:
189 | optional: true
190 | items:
191 | - key: tls.crt
192 | path: ca.pem
193 | secretName: tigera-secure-es-gateway-http-certs-public
194 | ---
195 | apiVersion: v1
196 | kind: Pod
197 | metadata:
198 | name: run-reporter-inventory
199 | namespace: tigera-compliance
200 | labels:
201 | k8s-app: compliance-reporter
202 | spec:
203 | nodeSelector:
204 | kubernetes.io/os: linux
205 | restartPolicy: Never
206 | serviceAccount: tigera-compliance-reporter
207 | serviceAccountName: tigera-compliance-reporter
208 | tolerations:
209 | - key: node-role.kubernetes.io/master
210 | effect: NoSchedule
211 | imagePullSecrets:
212 | - name: tigera-pull-secret
213 | containers:
214 | - name: reporter
215 | # Modify this image name, if you have re-tagged the image and are using a local
216 | # docker image repository.
217 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
218 | image: quay.io/tigera/compliance-reporter:
219 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
220 | env:
221 | # Modify this value with name of an existing globalreport resource.
222 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
223 | - name: TIGERA_COMPLIANCE_REPORT_NAME
224 | value:
225 | # Modify these values with the start and end time frame that should be reported on.
226 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
227 | #- name: TIGERA_COMPLIANCE_REPORT_START_TIME
228 | # value:
229 | #- name: TIGERA_COMPLIANCE_REPORT_END_TIME
230 | # value:
231 | # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
232 | - name: TIGERA_COMPLIANCE_REPORT_START_TIME
233 | value: ""
234 | - name: TIGERA_COMPLIANCE_REPORT_END_TIME
235 | value: ""
236 | # By default reports get generated 30 min after the end of report time.
237 | # Modify this variable to change the default delay.
238 | - name: TIGERA_COMPLIANCE_JOB_START_DELAY
239 | value: "0m"
240 | - name: LOG_LEVEL
241 | value: "warning"
242 | # - name: ELASTIC_INDEX_SUFFIX
243 | # value: cluster
244 | - name: ELASTIC_INDEX_SUFFIX
245 | value:
246 | - name: ELASTIC_SCHEME
247 | value: https
248 | - name: ELASTIC_HOST
249 | value: tigera-secure-es-gateway-http.tigera-elasticsearch.svc
250 | - name: ELASTIC_PORT
251 | value: "9200"
252 | - name: ELASTIC_USER
253 | valueFrom:
254 | secretKeyRef:
255 | name: tigera-ee-compliance-reporter-elasticsearch-access
256 | key: username
257 | optional: true
258 | - name: ELASTIC_PASSWORD
259 | valueFrom:
260 | secretKeyRef:
261 | name: tigera-ee-compliance-reporter-elasticsearch-access
262 | key: password
263 | optional: true
264 | - name: ELASTIC_SSL_VERIFY
265 | value: "true"
266 | - name: ELASTIC_CA
267 | value: /etc/ssl/elastic/ca.pem
268 | volumeMounts:
269 | - mountPath: /var/log/calico
270 | name: var-log-calico
271 | - name: elastic-ca-cert-volume
272 | mountPath: /etc/ssl/elastic/
273 | livenessProbe:
274 | httpGet:
275 | path: /liveness
276 | port: 9099
277 | host: localhost
278 | resources: {}
279 | volumes:
280 | - name: var-log-calico
281 | hostPath:
282 | path: /var/log/calico
283 | type: DirectoryOrCreate
284 | - name: elastic-ca-cert-volume
285 | secret:
286 | optional: true
287 | items:
288 | - key: tls.crt
289 | path: ca.pem
290 | secretName: tigera-secure-es-gateway-http-certs-public
291 | # ---
292 | # apiVersion: v1
293 | # kind: Pod
294 | # metadata:
295 | # name: run-reporter-policy-audit
296 | # namespace: tigera-compliance
297 | # labels:
298 | # k8s-app: compliance-reporter
299 | # spec:
300 | # nodeSelector:
301 | # kubernetes.io/os: linux
302 | # restartPolicy: Never
303 | # serviceAccount: tigera-compliance-reporter
304 | # serviceAccountName: tigera-compliance-reporter
305 | # tolerations:
306 | # - key: node-role.kubernetes.io/master
307 | # effect: NoSchedule
308 | # imagePullSecrets:
309 | # - name: tigera-pull-secret
310 | # containers:
311 | # - name: reporter
312 | # # Modify this image name, if you have re-tagged the image and are using a local
313 | # # docker image repository.
314 | # # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
315 | # image: quay.io/tigera/compliance-reporter:
316 | # # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
317 | # env:
318 | # # Modify this value with name of an existing globalreport resource.
319 | # # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
320 | # - name: TIGERA_COMPLIANCE_REPORT_NAME
321 | # value:
322 | # # Modify these values with the start and end time frame that should be reported on.
323 | # # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
324 | # #- name: TIGERA_COMPLIANCE_REPORT_START_TIME
325 | # # value:
326 | # #- name: TIGERA_COMPLIANCE_REPORT_END_TIME
327 | # # value:
328 | # - name: TIGERA_COMPLIANCE_REPORT_START_TIME
329 | # value: ""
330 | # - name: TIGERA_COMPLIANCE_REPORT_END_TIME
331 | # value: ""
332 | # # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
333 | # # By default reports get generated 30 min after the end of report time.
334 | # # Modify this variable to change the default delay.
335 | # - name: TIGERA_COMPLIANCE_JOB_START_DELAY
336 | # value: "0m"
337 | # - name: LOG_LEVEL
338 | # value: "warning"
339 | # #- name: ELASTIC_INDEX_SUFFIX
340 | # # value: cluster
341 | # - name: ELASTIC_INDEX_SUFFIX
342 | # value:
343 | # - name: ELASTIC_SCHEME
344 | # value: https
345 | # - name: ELASTIC_HOST
346 | # value: tigera-secure-es-gateway-http.tigera-elasticsearch.svc
347 | # - name: ELASTIC_PORT
348 | # value: "9200"
349 | # - name: ELASTIC_USER
350 | # valueFrom:
351 | # secretKeyRef:
352 | # name: tigera-ee-compliance-reporter-elasticsearch-access
353 | # key: username
354 | # optional: true
355 | # - name: ELASTIC_PASSWORD
356 | # valueFrom:
357 | # secretKeyRef:
358 | # name: tigera-ee-compliance-reporter-elasticsearch-access
359 | # key: password
360 | # optional: true
361 | # - name: ELASTIC_SSL_VERIFY
362 | # value: "true"
363 | # - name: ELASTIC_CA
364 | # value: /etc/ssl/elastic/ca.pem
365 | # volumeMounts:
366 | # - mountPath: /var/log/calico
367 | # name: var-log-calico
368 | # - name: elastic-ca-cert-volume
369 | # mountPath: /etc/ssl/elastic/
370 | # livenessProbe:
371 | # httpGet:
372 | # path: /liveness
373 | # port: 9099
374 | # host: localhost
375 | # resources: {}
376 | # volumes:
377 | # - name: var-log-calico
378 | # hostPath:
379 | # path: /var/log/calico
380 | # type: DirectoryOrCreate
381 | # - name: elastic-ca-cert-volume
382 | # secret:
383 | # optional: true
384 | # items:
385 | # - key: tls.crt
386 | # path: ca.pem
387 | # secretName: tigera-secure-es-gateway-http-certs-public
388 |
--------------------------------------------------------------------------------
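
Each reporter pod needs three things filled in before it runs: a `TIGERA_COMPLIANCE_REPORT_NAME` matching an existing GlobalReport, a `compliance-reporter` image tag, and a reporting window. A sketch of computing a one-hour window (GNU `date` syntax; on macOS use the `-v` form instead):

```bash
# compute a one-hour reporting window ending now
TIGERA_COMPLIANCE_REPORT_START_TIME=$(date -u -d '-60 min' '+%Y-%m-%dT%H:%M:%SZ')
TIGERA_COMPLIANCE_REPORT_END_TIME=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
echo "report window: $TIGERA_COMPLIANCE_REPORT_START_TIME .. $TIGERA_COMPLIANCE_REPORT_END_TIME"
# paste these values, the report name, and the image tag into the pod spec, then:
kubectl apply -f demo/40-compliance-reports/cluster-reporter-pods.yaml
```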
/demo/40-compliance-reports/cluster-reports.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: GlobalReport
4 | metadata:
5 | name: cluster-inventory
6 | spec:
7 | reportType: inventory
8 | schedule: '*/30 * * * *'
9 |
10 | ---
11 | apiVersion: projectcalico.org/v3
12 | kind: GlobalReport
13 | metadata:
14 | name: cluster-network-access
15 | spec:
16 | reportType: network-access
17 | schedule: '*/30 * * * *'
18 |
19 | # uncomment the policy-audit report if you have configured audit logs for the EKS cluster: https://docs.tigera.io/compliance/compliance-reports/compliance-managed-cloud#enable-audit-logs-in-eks
20 | # ---
21 | # apiVersion: projectcalico.org/v3
22 | # kind: GlobalReport
23 | # metadata:
24 | # name: cluster-policy-audit
25 | # spec:
26 | # reportType: policy-audit
27 | # schedule: '*/30 * * * *'
28 |
--------------------------------------------------------------------------------
/demo/40-compliance-reports/daily-cis-results.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: GlobalReport
3 | metadata:
4 | name: daily-cis-results
5 | labels:
6 | deployment: production
7 | spec:
8 | reportType: cis-benchmark
9 | schedule: 0 0 * * *
10 | cis:
11 | highThreshold: 100
12 | medThreshold: 50
13 | includeUnscoredTests: true
14 | numFailedTests: 5
--------------------------------------------------------------------------------
/demo/50-alerts/boutiqueshop.unsanctioned.access.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | # for demo purposes, the "period" and "lookback" fields are set to 1m
3 | apiVersion: projectcalico.org/v3
4 | kind: GlobalAlert
5 | metadata:
6 | name: paymentservice.unsanctioned.lateral.access
7 | spec:
8 |   description: "Alerts when the paymentservice pod is accessed by unsanctioned workloads outside of the boutiqueshop application namespace"
9 | summary: "[flows] [lateral movement] ${source_namespace}/${source_name_aggr} has accessed boutiqueshop's pod ${dest_namespace}/${dest_name_aggr}"
10 | severity: 100
11 | period: 1m
12 | lookback: 1m
13 | dataSet: flows
14 | query: '"dest_namespace"="default" AND "dest_labels.labels"="app=paymentservice" AND "source_namespace"!="default" AND proto=tcp AND (action=allow OR action=deny) AND reporter=dst'
15 | aggregateBy: [source_namespace, source_name_aggr, dest_namespace, dest_name_aggr]
16 | field: num_flows
17 | metric: sum
18 | condition: gt
19 | threshold: 0
20 |
--------------------------------------------------------------------------------
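
This alert fires on any TCP flow that reaches the `paymentservice` pod from outside the `default` namespace, so it can be triggered deliberately for the demo; port 50051 is the paymentservice port from the boutiqueshop policies, and `$CENTOS_POD` is the lookup used in the earlier examples:

```bash
# generate an unsanctioned cross-namespace flow to paymentservice
kubectl -n dev exec -t $CENTOS_POD -- sh -c 'curl -m3 paymentservice.default:50051 2>/dev/null || true'
```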
/demo/50-alerts/globalnetworkset.changed.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: GlobalAlertTemplate
4 | metadata:
5 | name: policy.globalnetworkset
6 | spec:
7 | description: "Alerts on any changes to global network sets"
8 | summary: "[audit] [privileged access] change detected for ${objectRef.resource} ${objectRef.name}"
9 | severity: 100
10 | period: 5m
11 | lookback: 5m
12 | dataSet: audit
13 |   # the alert is triggered if a CRUD operation is executed against any globalnetworkset
14 | query: (verb=create OR verb=update OR verb=delete OR verb=patch) AND "objectRef.resource"=globalnetworksets
15 | aggregateBy: [objectRef.resource, objectRef.name]
16 | metric: count
17 | condition: gt
18 | threshold: 0
19 |
20 | ---
21 | apiVersion: projectcalico.org/v3
22 | kind: GlobalAlert
23 | metadata:
24 | name: policy.globalnetworkset
25 | spec:
26 | description: "Alerts on any changes to global network sets"
27 | summary: "[audit] [privileged access] change detected for ${objectRef.resource} ${objectRef.name}"
28 | severity: 100
29 | period: 1m
30 | lookback: 1m
31 | dataSet: audit
32 |   # the alert is triggered if a CRUD operation is executed against any globalnetworkset
33 | query: (verb=create OR verb=update OR verb=delete OR verb=patch) AND "objectRef.resource"=globalnetworksets
34 | aggregateBy: [objectRef.resource, objectRef.name]
35 | metric: count
36 | condition: gt
37 | threshold: 0
38 |
--------------------------------------------------------------------------------
/demo/50-alerts/unsanctioned.dns.access.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: GlobalAlertTemplate
4 | metadata:
5 | name: dns.unsanctioned.access
6 | spec:
7 | description: "Pod attempted to access restricted.com domain"
8 | summary: "[dns] pod ${client_namespace}/${client_name_aggr} attempted to access '${qname}'"
9 | severity: 100
10 | dataSet: dns
11 | period: 5m
12 | lookback: 5m
13 | query: '(qname = "www.restricted.com" OR qname = "restricted.com")'
14 | aggregateBy: [client_namespace, client_name_aggr, qname]
15 | metric: count
16 | condition: gt
17 | threshold: 0
18 |
19 | ---
20 | apiVersion: projectcalico.org/v3
21 | kind: GlobalAlert
22 | metadata:
23 | name: dns.unsanctioned.access
24 | spec:
25 | description: "Pod attempted to access google.com domain"
26 | summary: "[dns] pod ${client_namespace}/${client_name_aggr} attempted to access '${qname}'"
27 | severity: 100
28 | dataSet: dns
29 | period: 1m
30 | lookback: 1m
31 | query: '(qname = "www.google.com" OR qname = "google.com")'
32 | aggregateBy: [client_namespace, client_name_aggr, qname]
33 | metric: count
34 | condition: gt
35 | threshold: 0
36 |
--------------------------------------------------------------------------------
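
The enforced `GlobalAlert` above watches DNS logs for lookups of `google.com`, so any pod that resolves the domain should raise the alert within about a minute; a simple `curl` generates the DNS query:

```bash
# the DNS lookup behind this request should trigger the alert
kubectl -n dev exec -t $CENTOS_POD -- sh -c 'curl -m3 -sI www.google.com 2>/dev/null || true'
```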
/demo/50-alerts/unsanctioned.lateral.access.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: GlobalAlertTemplate
4 | metadata:
5 | name: network.lateral.access
6 | spec:
7 |   description: "Alerts when pods with a specific label (security=strict) are accessed by workloads from other namespaces"
8 | summary: "[flows] [lateral movement] ${source_namespace}/${source_name_aggr} has accessed ${dest_namespace}/${dest_name_aggr} with label security=strict"
9 | severity: 100
10 | period: 5m
11 | lookback: 5m
12 | dataSet: flows
13 | query: '"dest_labels.labels"="security=strict" AND "dest_namespace"="secured_pod_namespace" AND "source_namespace"!="secured_pod_namespace" AND proto=tcp AND (("action"="allow" AND ("reporter"="dst" OR "reporter"="src")) OR ("action"="deny" AND "reporter"="src"))'
14 | aggregateBy: [source_namespace, source_name_aggr, dest_namespace, dest_name_aggr]
15 | field: num_flows
16 | metric: sum
17 | condition: gt
18 | threshold: 0
19 |
20 | ---
21 | apiVersion: projectcalico.org/v3
22 | kind: GlobalAlert
23 | metadata:
24 | name: network.lateral.access
25 | spec:
26 |   description: "Alerts when pods with a specific label (security=strict) are accessed by workloads from other namespaces"
27 | summary: "[flows] [lateral movement] ${source_namespace}/${source_name_aggr} has accessed ${dest_namespace}/${dest_name_aggr} with label security=strict"
28 | severity: 100
29 | period: 1m
30 | lookback: 1m
31 | dataSet: flows
32 | query: '("dest_labels.labels"="security=strict" AND "dest_namespace"="dev") AND "source_namespace"!="dev" AND "proto"="tcp" AND (("action"="allow" AND ("reporter"="dst" OR "reporter"="src")) OR ("action"="deny" AND "reporter"="src"))'
33 | aggregateBy: [source_namespace, source_name_aggr, dest_namespace, dest_name_aggr]
34 | field: num_flows
35 | metric: sum
36 | condition: gt
37 | threshold: 0
38 |
--------------------------------------------------------------------------------
/demo/60-packet-capture/nginx-pcap.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: PacketCapture
3 | metadata:
4 | name: nginx-pcap
5 | namespace: dev
6 | spec:
7 | selector: app == "nginx"
8 | filters:
9 | - protocol: TCP
10 | ports:
11 | - 80
12 | # if this field is skipped, the capture starts once the CR is deployed
13 | # startTime: "2021-08-26T12:00:00Z"
14 |
15 | # Mac terminal: date -v +30S -u +'%Y-%m-%dT%H:%M:%SZ'
16 | # Linux shell: date -u -d '30 sec' '+%Y-%m-%dT%H:%M:%SZ'
17 | # endTime: "2021-08-26T12:30:00Z"
18 | # endTime: "$END_TIME"
--------------------------------------------------------------------------------
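
The commented `endTime` can be computed with the `date` one-liners in the file. A sketch of scheduling a capture that stops 30 seconds from now and then collecting the pcap files; it assumes you uncomment the `endTime: "$END_TIME"` line first, and the `calicoctl captured-packets` subcommand comes from Calico Enterprise tooling:

```bash
# compute the stop time (Linux date syntax; see the file comments for macOS)
export END_TIME=$(date -u -d '30 sec' '+%Y-%m-%dT%H:%M:%SZ')
sed -e "s|\$END_TIME|${END_TIME}|g" demo/60-packet-capture/nginx-pcap.yaml | kubectl apply -f -
# once the capture window closes, copy the pcap files locally
calicoctl captured-packets copy nginx-pcap --namespace dev
```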
/demo/70-deep-packet-inspection/nginx-dpi.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: DeepPacketInspection
3 | metadata:
4 | name: nginx-dpi
5 | namespace: dev
6 | spec:
7 | selector: app == "nginx"
8 | # selector: all()
9 |
--------------------------------------------------------------------------------
/demo/80-image-assurance/tigera-image-assurance-admission-controller-deploy.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Secret
3 | metadata:
4 | name: tigera-image-assurance-admission-controller-certs
5 | namespace: tigera-image-assurance
6 | type: Opaque
7 | data:
8 | tls.crt: BASE64_CERTIFICATE
9 | tls.key: BASE64_KEY
10 | ---
11 | apiVersion: v1
12 | kind: ServiceAccount
13 | metadata:
14 | name: tigera-secure-container-admission-controller
15 | namespace: tigera-image-assurance
16 | ---
17 | apiVersion: rbac.authorization.k8s.io/v1
18 | kind: ClusterRole
19 | metadata:
20 | name: tigera-secure-container-admission-controller
21 | namespace: tigera-image-assurance
22 | rules:
23 | - apiGroups:
24 | - "containersecurity.tigera.io"
25 | resources:
26 | - "containeradmissionpolicies"
27 | verbs:
28 | - get
29 | - list
30 | - watch
31 | - apiGroups:
32 | - ""
33 | resources:
34 | - namespaces
35 | verbs:
36 | - get
37 | - list
38 | - watch
39 | ---
40 | apiVersion: rbac.authorization.k8s.io/v1
41 | kind: ClusterRoleBinding
42 | metadata:
43 | name: tigera-secure-container-admission-controller
44 | namespace: tigera-image-assurance
45 | roleRef:
46 | apiGroup: rbac.authorization.k8s.io
47 | kind: ClusterRole
48 | name: tigera-secure-container-admission-controller
49 | subjects:
50 | - kind: ServiceAccount
51 | name: tigera-secure-container-admission-controller
52 | namespace: tigera-image-assurance
53 | ---
54 | apiVersion: apps/v1
55 | kind: Deployment
56 | metadata:
57 | labels:
58 | k8s-app: tigera-image-assurance-admission-controller
59 | name: tigera-image-assurance-admission-controller
60 | namespace: tigera-image-assurance
61 | spec:
62 | progressDeadlineSeconds: 600
63 | replicas: 1
64 | revisionHistoryLimit: 10
65 | selector:
66 | matchLabels:
67 | name: tigera-image-assurance-admission-controller
68 | template:
69 | metadata:
70 | labels:
71 | k8s-app: tigera-image-assurance-admission-controller
72 | name: tigera-image-assurance-admission-controller
73 | spec:
74 | serviceAccountName: tigera-secure-container-admission-controller
75 | hostNetwork: true
76 | dnsPolicy: ClusterFirstWithHostNet
77 | containers:
78 | - image: quay.io/tigera/image-assurance-admission-controller:IA_AC_VERSION
79 | imagePullPolicy: IfNotPresent
80 | name: tigera-image-assurance-admission-controller
81 | env:
82 | - name: IMAGE_ASSURANCE_ORGANIZATION_ID
83 | valueFrom:
84 | configMapKeyRef:
85 | key: organizationID
86 | name: tigera-image-assurance-config
87 | - name: IMAGE_ASSURANCE_LOG_LEVEL
88 | value: INFO
89 | - name: IMAGE_ASSURANCE_BAST_APIURL
90 | value: "https://tigera-image-assurance-api.tigera-image-assurance.svc:9443"
91 | - name: IMAGE_ASSURANCE_BAST_API_TOKEN
92 | valueFrom:
93 | secretKeyRef:
94 | key: token
95 | name: tigera-image-assurance-admission-controller-api-access
96 | ports:
97 | - containerPort: 8080
98 | volumeMounts:
99 | - mountPath: /certs/https/
100 | name: certs
101 | readOnly: true
102 | - mountPath: /certs/bast
103 | name: tigera-secure-bast-server-tls
104 | readOnly: true
105 | imagePullSecrets:
106 | - name: tigera-pull-secret
107 | volumes:
108 | - name: certs
109 | secret:
110 | defaultMode: 420
111 | items:
112 | - key: tls.crt
113 | path: cert.pem
114 | - key: tls.key
115 | path: key.pem
116 | secretName: tigera-image-assurance-admission-controller-certs
117 | - name: tigera-secure-bast-server-tls
118 | secret:
119 | defaultMode: 420
120 | items:
121 | - key: tls.crt
122 | path: tls.crt
123 | secretName: tigera-image-assurance-api-cert
124 | ---
125 | apiVersion: v1
126 | kind: Service
127 | metadata:
128 | name: tigera-image-assurance-admission-controller-service
129 | namespace: tigera-image-assurance
130 | labels:
131 | k8s-app: tigera-image-assurance-admission-controller-service
132 | spec:
133 | selector:
134 | k8s-app: tigera-image-assurance-admission-controller
135 | ports:
136 | - port: 8089
137 | targetPort: 8080
138 | protocol: TCP
139 | ---
140 | apiVersion: v1
141 | kind: Service
142 | metadata:
143 | name: tigera-bast-to-guardian-proxy
144 | namespace: tigera-guardian
145 | spec:
146 | selector:
147 | k8s-app: tigera-guardian
148 | ports:
149 | - name: bast
150 | port: 9443
151 | protocol: TCP
152 | targetPort: 8080
153 | ---
154 | apiVersion: v1
155 | kind: Service
156 | metadata:
157 | name: tigera-image-assurance-api
158 | namespace: tigera-image-assurance
159 | spec:
160 | externalName: tigera-bast-to-guardian-proxy.tigera-guardian.svc.cluster.local
161 | sessionAffinity: None
162 | type: ExternalName
163 | ---
164 | apiVersion: admissionregistration.k8s.io/v1
165 | kind: ValidatingWebhookConfiguration
166 | metadata:
167 | name: "image-assurance.tigera.io"
168 | webhooks:
169 | - name: "image-assurance.tigera.io"
170 | namespaceSelector:
171 | matchExpressions:
172 | - key: "tigera-admission-controller"
173 | operator: "In"
174 | values:
175 | - "enforcing"
176 | rules:
177 | - apiGroups: ["*"]
178 | apiVersions: ["*"]
179 | operations: ["CREATE", "UPDATE"]
180 | resources: ["pods", "replicasets", "deployments", "statefulsets", "daemonsets", "jobs", "cronjobs"]
181 | scope: "Namespaced"
182 | clientConfig:
183 | service:
184 | namespace: "tigera-image-assurance"
185 | name: "tigera-image-assurance-admission-controller-service"
186 | path: "/admission-review"
187 | port: 8089
188 | caBundle: "BASE64_CERTIFICATE"
189 | admissionReviewVersions: ["v1", "v1beta1"]
190 | sideEffects: None
191 | timeoutSeconds: 5
192 |
--------------------------------------------------------------------------------
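
The `BASE64_CERTIFICATE` and `BASE64_KEY` placeholders must be replaced with a TLS pair before the manifest can be applied; the API server uses the same certificate (via `caBundle`) to trust the webhook. A sketch of generating a self-signed pair for the webhook service DNS name; the `.pem` file names deliberately match the `**/admission_controller*.pem` pattern in `.gitignore`:

```bash
# self-signed certificate for the admission controller service (illustrative only)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout admission_controller_key.pem -out admission_controller_cert.pem \
  -subj "/CN=tigera-image-assurance-admission-controller-service.tigera-image-assurance.svc" \
  -addext "subjectAltName=DNS:tigera-image-assurance-admission-controller-service.tigera-image-assurance.svc"
export BASE64_CERTIFICATE=$(base64 -w0 < admission_controller_cert.pem)
export BASE64_KEY=$(base64 -w0 < admission_controller_key.pem)
sed -e "s|BASE64_CERTIFICATE|${BASE64_CERTIFICATE}|g" -e "s|BASE64_KEY|${BASE64_KEY}|g" \
  demo/80-image-assurance/tigera-image-assurance-admission-controller-deploy.yaml | kubectl apply -f -
```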
/demo/80-image-assurance/tigera-image-assurance-admission-controller-policy.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: containersecurity.tigera.io/v1beta1
2 | kind: ContainerAdmissionPolicy
3 | metadata:
4 | name: reject-failed
5 | spec:
6 | selector: all()
7 | namespaceSelector: all()
8 | order: 10
9 | rules:
10 | - action: Allow
11 | imageScanStatus:
12 | operator: IsOneOf
13 | values:
14 | - Pass
15 | - Warn
16 | - action: Reject
17 |
--------------------------------------------------------------------------------
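
This policy admits workloads whose images scanned as `Pass` or `Warn` and rejects everything else, but the webhook only intervenes in namespaces that opt in with the `tigera-admission-controller=enforcing` label (per the `namespaceSelector` in the deployment manifest above):

```bash
kubectl apply -f demo/80-image-assurance/tigera-image-assurance-admission-controller-policy.yaml
# opt the dev namespace into admission enforcement
kubectl label namespace dev tigera-admission-controller=enforcing
```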
/demo/boutiqueshop/policies.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: NetworkPolicy
4 | metadata:
5 | name: application.adservice
6 | namespace: default
7 | spec:
8 | tier: application
9 | order: 100
10 | selector: app == "adservice"
11 | ingress:
12 | - action: Allow
13 | protocol: TCP
14 | source:
15 | selector: app == "frontend"
16 | destination:
17 | ports:
18 | - '9555'
19 | egress:
20 | - action: Allow
21 | protocol: TCP
22 | source: {}
23 | destination:
24 | ports:
25 | - '80'
26 | types:
27 | - Ingress
28 | - Egress
29 |
30 | ---
31 | apiVersion: projectcalico.org/v3
32 | kind: NetworkPolicy
33 | metadata:
34 | name: application.cartservice
35 | namespace: default
36 | spec:
37 | tier: application
38 | order: 110
39 | selector: app == "cartservice"
40 | ingress:
41 | - action: Allow
42 | protocol: TCP
43 | source:
44 | selector: app == "checkoutservice"
45 | destination:
46 | ports:
47 | - '7070'
48 | - action: Allow
49 | protocol: TCP
50 | source:
51 | selector: app == "frontend"
52 | destination:
53 | ports:
54 | - '7070'
55 | egress:
56 | - action: Allow
57 | protocol: TCP
58 | source: {}
59 | destination:
60 | ports:
61 | - '6379'
62 | - action: Allow
63 | protocol: TCP
64 | source: {}
65 | destination:
66 | selector: app == "redis-cart"
67 | ports:
68 | - '6379'
69 | types:
70 | - Ingress
71 | - Egress
72 |
73 | ---
74 | apiVersion: projectcalico.org/v3
75 | kind: NetworkPolicy
76 | metadata:
77 | name: application.checkoutservice
78 | namespace: default
79 | spec:
80 | tier: application
81 | order: 120
82 | selector: app == "checkoutservice"
83 | ingress:
84 | - action: Allow
85 | protocol: TCP
86 | source:
87 | selector: app == "frontend"
88 | destination:
89 | ports:
90 | - '5050'
91 | egress:
92 | - action: Allow
93 | protocol: TCP
94 | source: {}
95 | destination:
96 | selector: app == "productcatalogservice"
97 | ports:
98 | - '3550'
99 | - action: Allow
100 | protocol: TCP
101 | source: {}
102 | destination:
103 | selector: app == "shippingservice"
104 | ports:
105 | - '50051'
106 | - action: Allow
107 | protocol: TCP
108 | source: {}
109 | destination:
110 | ports:
111 | - '80'
112 | - action: Allow
113 | protocol: TCP
114 | source: {}
115 | destination:
116 | selector: app == "cartservice"
117 | ports:
118 | - '7070'
119 | - action: Allow
120 | protocol: TCP
121 | source: {}
122 | destination:
123 | selector: app == "currencyservice"
124 | ports:
125 | - '7000'
126 | - action: Allow
127 | protocol: TCP
128 | source: {}
129 | destination:
130 | selector: app == "emailservice"
131 | ports:
132 | - '8080'
133 | - action: Allow
134 | protocol: TCP
135 | source: {}
136 | destination:
137 | selector: app == "paymentservice"
138 | ports:
139 | - '50051'
140 | types:
141 | - Ingress
142 | - Egress
143 |
144 | ---
145 | apiVersion: projectcalico.org/v3
146 | kind: NetworkPolicy
147 | metadata:
148 | name: application.currencyservice
149 | namespace: default
150 | spec:
151 | tier: application
152 | order: 130
153 | selector: app == "currencyservice"
154 | ingress:
155 | - action: Allow
156 | protocol: TCP
157 | source:
158 | selector: app == "checkoutservice"
159 | destination:
160 | ports:
161 | - '7000'
162 | - action: Allow
163 | protocol: TCP
164 | source:
165 | selector: app == "frontend"
166 | destination:
167 | ports:
168 | - '7000'
169 | types:
170 | - Ingress
171 |
172 | ---
173 | apiVersion: projectcalico.org/v3
174 | kind: NetworkPolicy
175 | metadata:
176 | name: application.emailservice
177 | namespace: default
178 | spec:
179 | tier: application
180 | order: 140
181 | selector: app == "emailservice"
182 | ingress:
183 | - action: Allow
184 | protocol: TCP
185 | source:
186 | selector: app == "checkoutservice"
187 | destination:
188 | ports:
189 | - '8080'
190 | types:
191 | - Ingress
192 |
193 | ---
194 | apiVersion: projectcalico.org/v3
195 | kind: NetworkPolicy
196 | metadata:
197 | name: application.frontend
198 | namespace: default
199 | spec:
200 | tier: application
201 | order: 150
202 | selector: app == "frontend"
203 | ingress:
204 | - action: Allow
205 | protocol: TCP
206 | source:
207 | selector: app == "loadgenerator"
208 | destination:
209 | ports:
210 | - '8080'
211 | - action: Allow
212 | protocol: TCP
213 | source: {}
214 | destination:
215 | ports:
216 | - '8080'
217 | - action: Allow
218 | protocol: TCP
219 | source:
220 | selector: >-
221 |           (component == "apiserver" && endpoints.projectcalico.org/serviceName ==
222 | "kubernetes")
223 | destination:
224 | ports:
225 | - '56590'
226 | egress:
227 | - action: Allow
228 | protocol: TCP
229 | source: {}
230 | destination:
231 | selector: app == "checkoutservice"
232 | ports:
233 | - '5050'
234 | - action: Allow
235 | protocol: TCP
236 | source: {}
237 | destination:
238 | selector: app == "currencyservice"
239 | ports:
240 | - '7000'
241 | - action: Allow
242 | protocol: TCP
243 | source: {}
244 | destination:
245 | selector: app == "productcatalogservice"
246 | ports:
247 | - '3550'
248 | - action: Allow
249 | protocol: TCP
250 | source: {}
251 | destination:
252 | selector: app == "recommendationservice"
253 | ports:
254 | - '8080'
255 | - action: Allow
256 | protocol: TCP
257 | source: {}
258 | destination:
259 | selector: app == "shippingservice"
260 | ports:
261 | - '50051'
262 | - action: Allow
263 | protocol: TCP
264 | source: {}
265 | destination:
266 | ports:
267 | - '8080'
268 | - '5050'
269 | - '9555'
270 | - '7070'
271 | - '7000'
272 | - action: Allow
273 | protocol: TCP
274 | source: {}
275 | destination:
276 | selector: app == "adservice"
277 | ports:
278 | - '9555'
279 | - action: Allow
280 | protocol: TCP
281 | source: {}
282 | destination:
283 | selector: app == "cartservice"
284 | ports:
285 | - '7070'
286 | - action: Allow
287 | protocol: TCP
288 | source: {}
289 | destination:
290 | nets:
291 | - 169.254.169.254/32
292 | ports:
293 | - '80'
294 | types:
295 | - Ingress
296 | - Egress
297 |
298 | ---
299 | apiVersion: projectcalico.org/v3
300 | kind: NetworkPolicy
301 | metadata:
302 | name: application.loadgenerator
303 | namespace: default
304 | spec:
305 | tier: application
306 | order: 160
307 | selector: app == "loadgenerator"
308 | egress:
309 | - action: Allow
310 | protocol: TCP
311 | source: {}
312 | destination:
313 | selector: projectcalico.org/namespace == "default"
314 | ports:
315 | - '80'
316 | - action: Allow
317 | protocol: TCP
318 | source: {}
319 | destination:
320 | selector: app == "frontend"
321 | ports:
322 | - '8080'
323 | types:
324 | - Egress
325 |
326 | ---
327 | apiVersion: projectcalico.org/v3
328 | kind: NetworkPolicy
329 | metadata:
330 | name: application.paymentservice
331 | namespace: default
332 | spec:
333 | tier: application
334 | order: 170
335 | selector: app == "paymentservice"
336 | ingress:
337 | - action: Allow
338 | protocol: TCP
339 | source:
340 | selector: app == "checkoutservice"
341 | destination:
342 | ports:
343 | - '50051'
344 | types:
345 | - Ingress
346 |
347 | ---
348 | apiVersion: projectcalico.org/v3
349 | kind: NetworkPolicy
350 | metadata:
351 | name: application.productcatalogservice
352 | namespace: default
353 | spec:
354 | tier: application
355 | order: 180
356 | selector: app == "productcatalogservice"
357 | ingress:
358 | - action: Allow
359 | protocol: TCP
360 | source:
361 | selector: app == "checkoutservice"
362 | destination:
363 | ports:
364 | - '3550'
365 | - action: Allow
366 | protocol: TCP
367 | source:
368 | selector: app == "frontend"
369 | destination:
370 | ports:
371 | - '3550'
372 | - action: Allow
373 | protocol: TCP
374 | source:
375 | selector: app == "recommendationservice"
376 | destination:
377 | ports:
378 | - '3550'
379 | - action: Allow
380 | protocol: TCP
381 | source:
382 | selector: >-
383 |           (component == "apiserver" && endpoints.projectcalico.org/serviceName ==
384 | "kubernetes")
385 | destination:
386 | ports:
387 | - '35302'
388 | egress:
389 | - action: Allow
390 | protocol: TCP
391 | source: {}
392 | destination:
393 | ports:
394 | - '80'
395 | types:
396 | - Ingress
397 | - Egress
398 |
399 | ---
400 | apiVersion: projectcalico.org/v3
401 | kind: NetworkPolicy
402 | metadata:
403 | name: application.recommendationservice
404 | namespace: default
405 | spec:
406 | tier: application
407 | order: 190
408 | selector: app == "recommendationservice"
409 | ingress:
410 | - action: Allow
411 | protocol: TCP
412 | source:
413 | selector: app == "frontend"
414 | destination:
415 | ports:
416 | - '8080'
417 | egress:
418 | - action: Allow
419 | protocol: TCP
420 | source: {}
421 | destination:
422 | ports:
423 | - '80'
424 | - action: Allow
425 | protocol: TCP
426 | source: {}
427 | destination:
428 | selector: app == "productcatalogservice"
429 | ports:
430 | - '3550'
431 | types:
432 | - Ingress
433 | - Egress
434 |
435 | ---
436 | apiVersion: projectcalico.org/v3
437 | kind: NetworkPolicy
438 | metadata:
439 | name: application.redis-cart
440 | namespace: default
441 | spec:
442 | tier: application
443 | order: 200
444 | selector: app == "redis-cart"
445 | ingress:
446 | - action: Allow
447 | protocol: TCP
448 | source:
449 | selector: app == "cartservice"
450 | destination:
451 | ports:
452 | - '6379'
453 | types:
454 | - Ingress
455 |
456 | ---
457 | apiVersion: projectcalico.org/v3
458 | kind: NetworkPolicy
459 | metadata:
460 | name: application.shippingservice
461 | namespace: default
462 | spec:
463 | tier: application
464 | order: 210
465 | selector: app == "shippingservice"
466 | ingress:
467 | - action: Allow
468 | protocol: TCP
469 | source:
470 | selector: app == "checkoutservice"
471 | destination:
472 | ports:
473 | - '50051'
474 | - action: Allow
475 | protocol: TCP
476 | source:
477 | selector: app == "frontend"
478 | destination:
479 | ports:
480 | - '50051'
481 | egress:
482 | - action: Allow
483 | protocol: TCP
484 | source: {}
485 | destination:
486 | ports:
487 | - '80'
488 | types:
489 | - Ingress
490 | - Egress
491 |
--------------------------------------------------------------------------------
/demo/boutiqueshop/staged.default-deny.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: projectcalico.org/v3
2 | kind: StagedNetworkPolicy
3 | metadata:
4 | name: default-deny
5 | spec:
6 | order: 2000
7 | selector: "projectcalico.org/namespace == 'default'"
8 | types:
9 | - Ingress
10 | - Egress
11 |
--------------------------------------------------------------------------------
/demo/boutiqueshop/staged.policies.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: projectcalico.org/v3
3 | kind: StagedNetworkPolicy
4 | metadata:
5 | name: application.adservice
6 | namespace: default
7 | spec:
8 | tier: application
9 | order: 100
10 | selector: app == "adservice"
11 | ingress:
12 | - action: Allow
13 | protocol: TCP
14 | source:
15 | selector: app == "frontend"
16 | destination:
17 | ports:
18 | - '9555'
19 | egress:
20 | - action: Allow
21 | protocol: TCP
22 | source: {}
23 | destination:
24 | ports:
25 | - '80'
26 | types:
27 | - Ingress
28 | - Egress
29 |
30 | ---
31 | apiVersion: projectcalico.org/v3
32 | kind: StagedNetworkPolicy
33 | metadata:
34 | name: application.cartservice
35 | namespace: default
36 | spec:
37 | tier: application
38 | order: 110
39 | selector: app == "cartservice"
40 | ingress:
41 | - action: Allow
42 | protocol: TCP
43 | source:
44 | selector: app == "checkoutservice"
45 | destination:
46 | ports:
47 | - '7070'
48 | - action: Allow
49 | protocol: TCP
50 | source:
51 | selector: app == "frontend"
52 | destination:
53 | ports:
54 | - '7070'
55 | egress:
56 | - action: Allow
57 | protocol: TCP
58 | source: {}
59 | destination:
60 | ports:
61 | - '6379'
62 | - action: Allow
63 | protocol: TCP
64 | source: {}
65 | destination:
66 | selector: app == "redis-cart"
67 | ports:
68 | - '6379'
69 | types:
70 | - Ingress
71 | - Egress
72 |
73 | ---
74 | apiVersion: projectcalico.org/v3
75 | kind: StagedNetworkPolicy
76 | metadata:
77 | name: application.checkoutservice
78 | namespace: default
79 | spec:
80 | tier: application
81 | order: 120
82 | selector: app == "checkoutservice"
83 | ingress:
84 | - action: Allow
85 | protocol: TCP
86 | source:
87 | selector: app == "frontend"
88 | destination:
89 | ports:
90 | - '5050'
91 | egress:
92 | - action: Allow
93 | protocol: TCP
94 | source: {}
95 | destination:
96 | selector: app == "productcatalogservice"
97 | ports:
98 | - '3550'
99 | - action: Allow
100 | protocol: TCP
101 | source: {}
102 | destination:
103 | selector: app == "shippingservice"
104 | ports:
105 | - '50051'
106 | - action: Allow
107 | protocol: TCP
108 | source: {}
109 | destination:
110 | ports:
111 | - '80'
112 | - action: Allow
113 | protocol: TCP
114 | source: {}
115 | destination:
116 | selector: app == "cartservice"
117 | ports:
118 | - '7070'
119 | - action: Allow
120 | protocol: TCP
121 | source: {}
122 | destination:
123 | selector: app == "currencyservice"
124 | ports:
125 | - '7000'
126 | - action: Allow
127 | protocol: TCP
128 | source: {}
129 | destination:
130 | selector: app == "emailservice"
131 | ports:
132 | - '8080'
133 | - action: Allow
134 | protocol: TCP
135 | source: {}
136 | destination:
137 | selector: app == "paymentservice"
138 | ports:
139 | - '50051'
140 | types:
141 | - Ingress
142 | - Egress
143 |
144 | ---
145 | apiVersion: projectcalico.org/v3
146 | kind: StagedNetworkPolicy
147 | metadata:
148 | name: application.currencyservice
149 | namespace: default
150 | spec:
151 | tier: application
152 | order: 130
153 | selector: app == "currencyservice"
154 | ingress:
155 | - action: Allow
156 | protocol: TCP
157 | source:
158 | selector: app == "checkoutservice"
159 | destination:
160 | ports:
161 | - '7000'
162 | - action: Allow
163 | protocol: TCP
164 | source:
165 | selector: app == "frontend"
166 | destination:
167 | ports:
168 | - '7000'
169 | types:
170 | - Ingress
171 |
172 | ---
173 | apiVersion: projectcalico.org/v3
174 | kind: StagedNetworkPolicy
175 | metadata:
176 | name: application.emailservice
177 | namespace: default
178 | spec:
179 | tier: application
180 | order: 140
181 | selector: app == "emailservice"
182 | ingress:
183 | - action: Allow
184 | protocol: TCP
185 | source:
186 | selector: app == "checkoutservice"
187 | destination:
188 | ports:
189 | - '8080'
190 | types:
191 | - Ingress
192 |
193 | ---
194 | apiVersion: projectcalico.org/v3
195 | kind: StagedNetworkPolicy
196 | metadata:
197 | name: application.frontend
198 | namespace: default
199 | spec:
200 | tier: application
201 | order: 150
202 | selector: app == "frontend"
203 | ingress:
204 | - action: Allow
205 | protocol: TCP
206 | source:
207 | selector: app == "loadgenerator"
208 | destination:
209 | ports:
210 | - '8080'
211 | - action: Allow
212 | protocol: TCP
213 | source: {}
214 | destination:
215 | ports:
216 | - '8080'
217 | - action: Allow
218 | protocol: TCP
219 | source:
220 | selector: >-
221 |           (component == "apiserver" && endpoints.projectcalico.org/serviceName ==
222 | "kubernetes")
223 | destination:
224 | ports:
225 | - '56590'
226 | egress:
227 | - action: Allow
228 | protocol: TCP
229 | source: {}
230 | destination:
231 | selector: app == "checkoutservice"
232 | ports:
233 | - '5050'
234 | - action: Allow
235 | protocol: TCP
236 | source: {}
237 | destination:
238 | selector: app == "currencyservice"
239 | ports:
240 | - '7000'
241 | - action: Allow
242 | protocol: TCP
243 | source: {}
244 | destination:
245 | selector: app == "productcatalogservice"
246 | ports:
247 | - '3550'
248 | - action: Allow
249 | protocol: TCP
250 | source: {}
251 | destination:
252 | selector: app == "recommendationservice"
253 | ports:
254 | - '8080'
255 | - action: Allow
256 | protocol: TCP
257 | source: {}
258 | destination:
259 | selector: app == "shippingservice"
260 | ports:
261 | - '50051'
262 | - action: Allow
263 | protocol: TCP
264 | source: {}
265 | destination:
266 | ports:
267 | - '8080'
268 | - '5050'
269 | - '9555'
270 | - '7070'
271 | - '7000'
272 | - action: Allow
273 | protocol: TCP
274 | source: {}
275 | destination:
276 | selector: app == "adservice"
277 | ports:
278 | - '9555'
279 | - action: Allow
280 | protocol: TCP
281 | source: {}
282 | destination:
283 | selector: app == "cartservice"
284 | ports:
285 | - '7070'
286 | - action: Allow
287 | protocol: TCP
288 | source: {}
289 | destination:
290 | nets:
291 | - 169.254.169.254/32
292 | ports:
293 | - '80'
294 | types:
295 | - Ingress
296 | - Egress
297 |
298 | ---
299 | apiVersion: projectcalico.org/v3
300 | kind: StagedNetworkPolicy
301 | metadata:
302 | name: application.loadgenerator
303 | namespace: default
304 | spec:
305 | tier: application
306 | order: 160
307 | selector: app == "loadgenerator"
308 | egress:
309 | - action: Allow
310 | protocol: TCP
311 | source: {}
312 | destination:
313 | selector: projectcalico.org/namespace == "default"
314 | ports:
315 | - '80'
316 | - action: Allow
317 | protocol: TCP
318 | source: {}
319 | destination:
320 | selector: app == "frontend"
321 | ports:
322 | - '8080'
323 | types:
324 | - Egress
325 |
326 | ---
327 | apiVersion: projectcalico.org/v3
328 | kind: StagedNetworkPolicy
329 | metadata:
330 | name: application.paymentservice
331 | namespace: default
332 | spec:
333 | tier: application
334 | order: 170
335 | selector: app == "paymentservice"
336 | ingress:
337 | - action: Allow
338 | protocol: TCP
339 | source:
340 | selector: app == "checkoutservice"
341 | destination:
342 | ports:
343 | - '50051'
344 | types:
345 | - Ingress
346 |
347 | ---
348 | apiVersion: projectcalico.org/v3
349 | kind: StagedNetworkPolicy
350 | metadata:
351 | name: application.productcatalogservice
352 | namespace: default
353 | spec:
354 | tier: application
355 | order: 180
356 | selector: app == "productcatalogservice"
357 | ingress:
358 | - action: Allow
359 | protocol: TCP
360 | source:
361 | selector: app == "checkoutservice"
362 | destination:
363 | ports:
364 | - '3550'
365 | - action: Allow
366 | protocol: TCP
367 | source:
368 | selector: app == "frontend"
369 | destination:
370 | ports:
371 | - '3550'
372 | - action: Allow
373 | protocol: TCP
374 | source:
375 | selector: app == "recommendationservice"
376 | destination:
377 | ports:
378 | - '3550'
379 | - action: Allow
380 | protocol: TCP
381 | source:
382 | selector: >-
383 |           (component == "apiserver" && endpoints.projectcalico.org/serviceName ==
384 | "kubernetes")
385 | destination:
386 | ports:
387 | - '35302'
388 | egress:
389 | - action: Allow
390 | protocol: TCP
391 | source: {}
392 | destination:
393 | ports:
394 | - '80'
395 | types:
396 | - Ingress
397 | - Egress
398 |
399 | ---
400 | apiVersion: projectcalico.org/v3
401 | kind: StagedNetworkPolicy
402 | metadata:
403 | name: application.recommendationservice
404 | namespace: default
405 | spec:
406 | tier: application
407 | order: 190
408 | selector: app == "recommendationservice"
409 | ingress:
410 | - action: Allow
411 | protocol: TCP
412 | source:
413 | selector: app == "frontend"
414 | destination:
415 | ports:
416 | - '8080'
417 | egress:
418 | - action: Allow
419 | protocol: TCP
420 | source: {}
421 | destination:
422 | ports:
423 | - '80'
424 | - action: Allow
425 | protocol: TCP
426 | source: {}
427 | destination:
428 | selector: app == "productcatalogservice"
429 | ports:
430 | - '3550'
431 | types:
432 | - Ingress
433 | - Egress
434 |
435 | ---
436 | apiVersion: projectcalico.org/v3
437 | kind: StagedNetworkPolicy
438 | metadata:
439 | name: application.redis-cart
440 | namespace: default
441 | spec:
442 | tier: application
443 | order: 200
444 | selector: app == "redis-cart"
445 | ingress:
446 | - action: Allow
447 | protocol: TCP
448 | source:
449 | selector: app == "cartservice"
450 | destination:
451 | ports:
452 | - '6379'
453 | types:
454 | - Ingress
455 |
456 | ---
457 | apiVersion: projectcalico.org/v3
458 | kind: StagedNetworkPolicy
459 | metadata:
460 | name: application.shippingservice
461 | namespace: default
462 | spec:
463 | tier: application
464 | order: 210
465 | selector: app == "shippingservice"
466 | ingress:
467 | - action: Allow
468 | protocol: TCP
469 | source:
470 | selector: app == "checkoutservice"
471 | destination:
472 | ports:
473 | - '50051'
474 | - action: Allow
475 | protocol: TCP
476 | source:
477 | selector: app == "frontend"
478 | destination:
479 | ports:
480 | - '50051'
481 | egress:
482 | - action: Allow
483 | protocol: TCP
484 | source: {}
485 | destination:
486 | ports:
487 | - '80'
488 | types:
489 | - Ingress
490 | - Egress
491 |
--------------------------------------------------------------------------------
/demo/dev/app.manifests.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | kind: Namespace
3 | apiVersion: v1
4 | metadata:
5 | name: dev
6 | labels:
7 | compliance: open
8 | environment: development
9 |
10 | ---
11 | apiVersion: v1
12 | kind: Pod
13 | metadata:
14 | name: centos
15 | namespace: dev
16 | labels:
17 | app: centos
18 |
19 | spec:
20 | containers:
21 | - name: centos
22 | image: centos:latest
23 | command: [ "/bin/bash", "-c", "--" ]
24 | args: [ "while true; do curl -m3 http://nginx-svc; sleep 3; done;" ]
25 | resources: {}
26 |
27 | ---
28 | apiVersion: apps/v1
29 | kind: Deployment
30 | metadata:
31 | name: dev-nginx
32 | namespace: dev
33 | spec:
34 | selector:
35 | matchLabels:
36 | app: nginx
37 | security: strict
38 | replicas: 2
39 | template:
40 | metadata:
41 | labels:
42 | app: nginx
43 | security: strict
44 | spec:
45 | containers:
46 | - name: nginx
47 | image: nginx
48 | ports:
49 | - containerPort: 80
50 | resources: {}
51 |
52 | ---
53 | apiVersion: v1
54 | kind: Service
55 | metadata:
56 | name: nginx-svc
57 | namespace: dev
58 | labels:
59 | service: nginx
60 | spec:
61 | ports:
62 | - port: 80
63 | targetPort: 80
64 | protocol: TCP
65 | selector:
66 | app: nginx
67 |
--------------------------------------------------------------------------------
/demo/dev/policies.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: networking.k8s.io/v1
3 | kind: NetworkPolicy
4 | metadata:
5 | name: nginx
6 | namespace: dev
7 | spec:
8 | podSelector:
9 | matchLabels:
10 | app: nginx
11 | ingress:
12 | - from:
13 | - namespaceSelector:
14 | matchLabels:
15 | compliance: open
16 | policyTypes:
17 | - Ingress
18 |
19 | ---
20 | apiVersion: networking.k8s.io/v1
21 | kind: NetworkPolicy
22 | metadata:
23 | name: centos
24 | namespace: dev
25 | spec:
26 | podSelector:
27 | matchLabels:
28 | app: centos
29 | egress:
30 | - to:
31 | - podSelector:
32 | matchLabels:
33 | app: nginx
34 | policyTypes:
35 | - Egress
36 |
--------------------------------------------------------------------------------
/img/alerts-view.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/alerts-view.png
--------------------------------------------------------------------------------
/img/calico-on-eks.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/calico-on-eks.png
--------------------------------------------------------------------------------
/img/cloud9-aws-settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/cloud9-aws-settings.png
--------------------------------------------------------------------------------
/img/cloud9-manage-ec2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/cloud9-manage-ec2.png
--------------------------------------------------------------------------------
/img/compliance-report.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/compliance-report.png
--------------------------------------------------------------------------------
/img/connect-cluster.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/connect-cluster.png
--------------------------------------------------------------------------------
/img/dashboard-view.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/dashboard-view.png
--------------------------------------------------------------------------------
/img/enable-polrec.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/enable-polrec.png
--------------------------------------------------------------------------------
/img/endpoints-view.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/endpoints-view.png
--------------------------------------------------------------------------------
/img/expand-menu.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/expand-menu.png
--------------------------------------------------------------------------------
/img/flow-viz.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/flow-viz.png
--------------------------------------------------------------------------------
/img/kibana-flow-logs.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/kibana-flow-logs.png
--------------------------------------------------------------------------------
/img/modify-iam-role.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/modify-iam-role.png
--------------------------------------------------------------------------------
/img/policies-board-stats.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/policies-board-stats.png
--------------------------------------------------------------------------------
/img/policies-board.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/policies-board.png
--------------------------------------------------------------------------------
/img/polrec-settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/polrec-settings.png
--------------------------------------------------------------------------------
/img/service-graph-node.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/service-graph-node.png
--------------------------------------------------------------------------------
/img/timeline-view.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tigera-solutions/tigera-eks-workshop/d431efcbaaba341ba9e8a0780fba97e874858d88/img/timeline-view.png
--------------------------------------------------------------------------------
/modules/configuring-demo-apps.md:
--------------------------------------------------------------------------------
1 | # Module 4: Configuring demo applications
2 |
3 | **Goal:** Deploy and configure demo applications.
4 |
5 | ## Steps
6 |
7 | 1. Deploy policy tiers.
8 |
9 |     We are going to deploy some policies into policy tiers to take advantage of hierarchical policy management.
10 |
11 | ```bash
12 | kubectl apply -f demo/00-tiers/tiers.yaml
13 | ```
14 |
15 | This will add tiers `security` and `platform` to the Calico cluster.
16 |
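    For reference, a tier is itself a small Calico resource. A minimal sketch of what one of these tiers looks like (the `order` value here is illustrative; see `demo/00-tiers/tiers.yaml` for the actual definitions):

    ```yaml
    apiVersion: projectcalico.org/v3
    kind: Tier
    metadata:
      name: security
    spec:
      # lower order values are evaluated earlier in the policy chain
      order: 300
    ```
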
17 | 2. Deploy base policy.
18 |
19 | In order to explicitly allow workloads to connect to the Kubernetes DNS component, we are going to implement a policy that controls such traffic.
20 |
21 | ```bash
22 | kubectl apply -f demo/01-base/allow-kube-dns.yaml
23 | kubectl apply -f demo/01-base/tiers-pass-policy.yaml
24 | kubectl apply -f demo/01-base/quarantine-policy.yaml
25 | ```
26 |
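    For orientation, the `allow-kube-dns` policy has roughly the following shape (a simplified sketch; the tier, order, and ports are illustrative, see `demo/01-base/allow-kube-dns.yaml` for the enforced definition):

    ```yaml
    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: security.allow-kube-dns
    spec:
      tier: security
      order: 200
      selector: all()
      egress:
        # allow DNS queries to the kube-dns/CoreDNS endpoints
        - action: Allow
          protocol: UDP
          destination:
            selector: k8s-app == "kube-dns"
            ports: [53]
        # hand all other traffic to the next tier for evaluation
        - action: Pass
      types:
        - Egress
    ```
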
27 | 3. Deploy demo applications.
28 |
29 | ```bash
30 | # deploy dev app stack
31 | kubectl apply -f demo/dev/app.manifests.yaml
32 |
33 | # deploy boutiqueshop app stack
34 | kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/release/v0.3.8/release/kubernetes-manifests.yaml
35 | ```
36 |
37 | 4. Deploy compliance reports.
38 |
39 |     >The reports will be needed for one of the later labs.
40 |
41 | ```bash
42 | kubectl apply -f demo/40-compliance-reports/daily-cis-results.yaml
43 | kubectl apply -f demo/40-compliance-reports/cluster-reports.yaml
44 | ```
45 |
46 | 5. Deploy global alerts.
47 |
48 | >The alerts will be explored in a later lab.
49 |
50 | ```bash
51 | kubectl apply -f demo/50-alerts/globalnetworkset.changed.yaml
52 | kubectl apply -f demo/50-alerts/unsanctioned.dns.access.yaml
53 | kubectl apply -f demo/50-alerts/unsanctioned.lateral.access.yaml
54 | ```
55 |
56 | [Next -> Module 5](../modules/enable-l7-logs.md)
57 |
--------------------------------------------------------------------------------
/modules/creating-eks-cluster.md:
--------------------------------------------------------------------------------
1 | # Module 2: Creating EKS cluster
2 |
3 | **Goal:** Create EKS cluster.
4 |
5 | >This workshop uses an EKS cluster with most of the default configuration settings. To create an EKS cluster and tune the default settings, consider exploring the [EKS Workshop](https://www.eksworkshop.com) materials.
6 |
7 | ## Steps
8 |
9 | 1. Configure variables.
10 |
11 | ```bash
12 | export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
13 | export AZS=($(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text --region $AWS_REGION))
14 | EKS_VERSION="1.29"
15 | IAM_ROLE='tigera-workshop-admin'
16 |
17 | # check if AWS_REGION is configured
18 | test -n "$AWS_REGION" && echo AWS_REGION is "$AWS_REGION" || echo AWS_REGION is not set
19 |
20 | # add vars to .bash_profile
21 | echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile
22 | echo "export AZS=(${AZS[@]})" | tee -a ~/.bash_profile
23 | aws configure set default.region ${AWS_REGION}
24 | aws configure get default.region
25 |
26 |     # verify that the IAM role is configured correctly. IAM_ROLE was set in the previous module to tigera-workshop-admin.
27 | aws sts get-caller-identity --query Arn | grep $IAM_ROLE -q && echo "IAM role valid" || echo "IAM role NOT valid"
28 | ```
29 |
30 |     >Do not proceed if the role is `NOT` valid; go back and review the configuration steps in the previous module. The proper role configuration is required for the Cloud9 instance in order to use the `kubectl` CLI with the EKS cluster.
31 |
32 | 2. *[Optional]* Create AWS key pair.
33 |
34 |     >This step is only necessary if you want to SSH into an EKS node later to test the SSH-related use case in one of the later modules. Otherwise, you can skip this step.
35 | >If you decide to create the EC2 key pair, uncomment `publicKeyName` parameter in the cluster configuration example in the next step.
36 |
37 |     In order to test host port protection with a Calico network policy, we will create EKS nodes with SSH access. For that we need to create an EC2 key pair.
38 |
39 | ```bash
40 | export KEYPAIR_NAME=''
41 | # create EC2 key pair
42 | aws ec2 create-key-pair --key-name $KEYPAIR_NAME --query "KeyMaterial" --output text > $KEYPAIR_NAME.pem
43 | # set file permission
44 | chmod 400 $KEYPAIR_NAME.pem
45 | ```
46 |
47 | 3. Create EKS manifest.
48 |
49 | >If you created the EC2 key pair in the previous step, then uncomment `publicKeyName` parameter in the cluster configuration example below.
50 |
51 | ```bash
52 | # create EKS manifest file
53 | cat > configs/tigera-workshop.yaml << EOF
54 | apiVersion: eksctl.io/v1alpha5
55 | kind: ClusterConfig
56 |
57 | metadata:
58 | name: "tigera-workshop"
59 | region: "${AWS_REGION}"
60 | version: "${EKS_VERSION}"
61 |
62 | iam:
63 | withOIDC: true
64 |
65 | availabilityZones: ["${AZS[0]}", "${AZS[1]}", "${AZS[2]}"]
66 |
67 | addonsConfig:
68 | autoApplyPodIdentityAssociations: true
69 | addons:
70 | - name: aws-ebs-csi-driver
71 |
72 | managedNodeGroups:
73 | - name: "nix-t3-large"
74 | desiredCapacity: 3
75 |         # choose a proper size for the worker node instance as the node size determines the number of pods that a node can run
76 |         # it's limited by the max number of interfaces and private IPs per interface
77 | # t3.large has max 3 interfaces and allows up to 12 IPs per interface, therefore can run up to 36 pods per node
78 | # see: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI
79 | instanceType: "t3.large"
80 | # uncomment lines below to allow SSH access to the nodes using existing EC2 key pair
81 | #ssh:
82 | #publicKeyName: ${KEYPAIR_NAME}
83 | #allow: true
84 |
85 | # enable all of the control plane logs:
86 | cloudWatch:
87 | clusterLogging:
88 | enableTypes: ["*"]
89 | EOF
90 | ```
91 |
92 | 4. Use `eksctl` to create EKS cluster.
93 |
94 | ```bash
95 | eksctl create cluster -f configs/tigera-workshop.yaml
96 | ```
97 |
98 | 5. View EKS cluster.
99 |
100 |     Once the cluster is created, you can list it using `eksctl`.
101 |
102 | ```bash
103 | eksctl get cluster tigera-workshop
104 | ```
105 |
106 | 6. Test access to EKS cluster with `kubectl`
107 |
108 |     Once the EKS cluster is provisioned with the `eksctl` tool, the `kubeconfig` file is placed at the `~/.kube/config` path. The `kubectl` CLI looks for `kubeconfig` at the `~/.kube/config` path or at the path set in the `KUBECONFIG` env var.
109 |
110 | ```bash
111 | # verify kubeconfig file path
112 | ls ~/.kube/config
113 | # test cluster connection
114 | kubectl get nodes
115 | ```
116 |
117 | [Next -> Module 3](../modules/joining-eks-to-calico-cloud.md)
118 |
--------------------------------------------------------------------------------
/modules/deep-packet-inspection.md:
--------------------------------------------------------------------------------
1 | # Module 14: Deep packet inspection
2 |
3 | **Goal:** Configure deep packet inspection for sensitive workloads to allow Calico to inspect packets and alert on suspicious traffic.
4 |
5 | ## Steps
6 |
7 | 1. Configure deep packet inspection (DPI) resource.
8 |
9 |     Navigate to `demo/70-deep-packet-inspection` and review the YAML manifests that represent the DPI resource definition. A DPI resource is usually deployed to watch traffic for an entire namespace or for specific pods within the namespace using label selectors.
10 |
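    A minimal sketch of such a resource (the name and selector here mirror the demo manifest but are illustrative):

    ```yaml
    apiVersion: projectcalico.org/v3
    kind: DeepPacketInspection
    metadata:
      name: dpi-nginx
      namespace: dev
    spec:
      # inspect traffic for all nginx pods in the dev namespace
      selector: app == "nginx"
    ```
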
11 |     Deploy the DPI resource definition to allow Calico to inspect packets bound for `dev/nginx` pods.
12 |
13 | ```bash
14 | kubectl apply -f demo/70-deep-packet-inspection/nginx-dpi.yaml
15 | ```
16 |
17 | >Once the `DeepPacketInspection` resource is deployed, Calico starts capturing packets for all endpoints configured in the `selector` field.
18 |
19 |     Wait until all DPI pods become `Ready`.
20 |
21 | ```bash
22 | watch kubectl get po -n tigera-dpi
23 | ```
24 |
25 | 2. Simulate malicious request.
26 |
27 | Query `dev/nginx` application with payload that triggers a Snort rule alert.
28 |
29 | ```bash
30 | kubectl -n dev exec -t centos -- sh -c "curl http://nginx-svc/secid_canceltoken.cgi -H 'X-CMD: Test' -H 'X-KEY: Test' -XPOST"
31 | ```
32 |
33 | 3. Review alerts.
34 |
35 | Navigate to the Alerts view in Tigera UI and review alerts triggered by DPI controller. Calico DPI controller uses [Snort](https://www.snort.org/) signatures to perform DPI checks.
36 |
37 | [Next -> Module 15](../modules/vulnerability-management.md)
38 |
--------------------------------------------------------------------------------
/modules/dynamic-packet-capture.md:
--------------------------------------------------------------------------------
1 | # Module 13: Dynamic packet capture
2 |
3 | **Goal:** Configure packet capture for specific pods and review captured payload.
4 |
5 | ## Steps
6 |
7 | 1. Configure packet capture.
8 |
9 | >One can initiate a packet capture from the Service Graph view by right-clicking on a node in the graph and choosing the `Initiate packet capture` option.
10 |
11 |     Navigate to `demo/60-packet-capture` and review the YAML manifests that represent the packet capture definition. Each packet capture is configured by deploying a `PacketCapture` resource that targets endpoints using a `selector` and `labels`.
12 |
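    A minimal sketch of such a resource (name and selector are illustrative; the actual definition lives in the `nginx-pcap.yaml` manifest):

    ```yaml
    apiVersion: projectcalico.org/v3
    kind: PacketCapture
    metadata:
      name: nginx-pcap
      namespace: dev
    spec:
      # capture packets for all endpoints matching this selector
      selector: app == "nginx"
    ```
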
13 | Deploy packet capture definition to capture packets for `dev/nginx` pods.
14 |
15 | ```bash
16 | kubectl apply -f demo/60-packet-capture/nginx-pcap.yaml
17 | ```
18 |
19 | >Once the `PacketCapture` resource is deployed, Calico starts capturing packets for all endpoints configured in the `selector` field.
20 |
21 | 2. Install `calicoctl` CLI
22 |
23 |     The easiest way to retrieve the captured `*.pcap` files is to use the [calicoctl](https://docs.tigera.io/maintenance/clis/calicoctl/) CLI.
24 |
25 | ```bash
26 | CALICO_VERSION=$(kubectl get clusterinformation default -ojsonpath='{.spec.cnxVersion}')
27 | # download and configure calicoctl
28 | curl -o calicoctl -O -L https://docs.tigera.io/download/binaries/${CALICO_VERSION}/calicoctl
29 | chmod +x calicoctl
30 | sudo mv calicoctl /usr/local/bin/
31 | calicoctl version
32 | ```
33 |
34 | 3. Fetch and review captured payload.
35 |
36 | >The captured `*.pcap` files are stored on the hosts where pods are running at the time the `PacketCapture` resource is active.
37 |
38 | Retrieve captured `*.pcap` files and review the content.
39 |
40 | ```bash
41 | # get pcap files
42 | calicoctl captured-packets copy nginx-pcap --namespace dev
43 |
44 | ls dev-nginx*
45 | # view *.pcap content
46 | tcpdump -Ar dev-nginx-XXXXXX.pcap
47 | tcpdump -Xr dev-nginx-XXXXXX.pcap
48 | ```
49 |
50 | 4. Stop packet capture
51 |
52 | Stop packet capture by removing the `PacketCapture` resource.
53 |
54 | ```bash
55 | kubectl delete -f demo/60-packet-capture/nginx-pcap.yaml
56 | ```
57 |
58 |     >Note: Packet captures can also be created and scheduled directly from the Calico UI. Follow the Service Graph method for this alternative [procedure](https://docs.tigera.io/visibility/packetcapture#access-packet-capture-files-via-service-graph).
59 |
60 | [Next -> Module 14](../modules/deep-packet-inspection.md)
61 |
--------------------------------------------------------------------------------
/modules/enable-l7-logs.md:
--------------------------------------------------------------------------------
1 | # Module 5: Enable application layer monitoring
2 |
3 | **Goal:** Leverage L7 logs to get insight into application layer communications.
4 |
5 | For more details on L7 logs, refer to the [official documentation](https://docs.tigera.io/visibility/elastic/l7/configure).
6 |
7 | >This module is applicable to Calico Cloud or Calico Enterprise version v3.10+. If your Calico version is lower than v3.10.0, then skip this task. You can verify the Calico version by running:
8 | `kubectl get clusterinformation default -ojsonpath='{.spec.cnxVersion}'`
9 |
10 | >The L7 collector is based on the Envoy proxy, which gets automatically deployed via the `ApplicationLayer` resource configuration. For more details, see the [Configure L7 logs](https://docs.tigera.io/visibility/elastic/l7/configure) documentation page.
11 |
12 | ## Steps
13 |
14 | 1. Enable Policy Sync API setting.
15 |
16 | ```bash
17 | kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"policySyncPathPrefix":"/var/run/nodeagent"}}'
18 | ```
19 |
20 | 2. Deploy `ApplicationLayer` resource.
21 |
22 | a. Deploy `ApplicationLayer` resource.
23 |
24 | ```bash
25 |     kubectl apply -f - <<EOF
26 |     apiVersion: operator.tigera.io/v1
27 |     kind: ApplicationLayer
28 |     metadata:
29 |       name: tigera-secure
30 |     spec:
31 |       logCollection:
32 |         collectLogs: Enabled
33 |         logIntervalSeconds: 5
34 |         logRequestsPerInterval: -1
35 |     EOF
36 |     ```
37 | 
38 |     >This creates the `l7-log-collector` daemonset in the `calico-system` namespace, which contains the `envoy-proxy` pod for application log collection and security.
39 |
40 | b. Enable L7 logs for the application service.
41 |
42 |     To opt a service into L7 log collection, you need to annotate the service with the `projectcalico.org/l7-logging=true` annotation.
43 |
44 | ```bash
45 | # enable L7 logs for a few services of boutiqueshop app
46 | kubectl annotate svc frontend projectcalico.org/l7-logging=true
47 | kubectl annotate svc checkoutservice projectcalico.org/l7-logging=true
48 | ```
49 |
50 |     In module 9 you will review Calico's observability tools and see application layer information for the `frontend` and `checkoutservice` services in the Service Graph tool.
51 |
52 | [Next -> Module 6](../modules/namespace-isolation.md)
53 |
--------------------------------------------------------------------------------
/modules/joining-eks-to-calico-cloud.md:
--------------------------------------------------------------------------------
1 | # Module 3: Joining EKS cluster to Calico Cloud
2 |
3 | **Goal:** Join EKS cluster to Calico Cloud management plane.
4 |
5 | >In order to complete this module, you must have [Calico Cloud trial account](https://www.tigera.io/tigera-products/calico-cloud/).
6 |
7 | ## Steps
8 |
9 | 1. Calico Cloud Registration
10 |
11 |     After you activate your account from the verification email, a browser tab opens with the Calico Cloud UI, which asks for a few personal details. After this step, the Welcome screen shows four use cases that offer a quick tour for learning more. Pick a use case to continue. Tip: the menu icons on the left can be expanded to display the worded menu as shown:
12 |
13 | 
14 |
15 | 2. Join EKS cluster to Calico Cloud management plane.
16 |
17 |     Click "Managed Cluster" on the left side of the browser, enter the name of your cluster, select Amazon EKS, and click "Next".
18 |
19 | 
20 |
21 | A custom token is generated along with the Calico Cloud Installer Operator manifest. The command will look similar to:
22 |
23 | ```bash
24 | kubectl apply -f https://installer.calicocloud.io/manifests/cc-operator/latest/deploy.yaml && curl -H "Authorization: Bearer xxxxxxxxxxxx" "https://www.calicocloud.io/api/managed-cluster/deploy.yaml" | kubectl apply -f -
25 | ```
26 |
27 | Copy the output to clipboard and paste into your terminal to run. Output should look similar to:
28 |
29 | ```text
30 | namespace/calico-cloud created
31 | customresourcedefinition.apiextensions.k8s.io/installers.operator.calicocloud.io created
32 | serviceaccount/calico-cloud-controller-manager created
33 | role.rbac.authorization.k8s.io/calico-cloud-leader-election-role created
34 | clusterrole.rbac.authorization.k8s.io/calico-cloud-metrics-reader created
35 | clusterrole.rbac.authorization.k8s.io/calico-cloud-proxy-role created
36 | rolebinding.rbac.authorization.k8s.io/calico-cloud-leader-election-rolebinding created
37 | clusterrolebinding.rbac.authorization.k8s.io/calico-cloud-installer-rbac created
38 | clusterrolebinding.rbac.authorization.k8s.io/calico-cloud-proxy-rolebinding created
39 | configmap/calico-cloud-manager-config created
40 | service/calico-cloud-controller-manager-metrics-service created
41 | deployment.apps/calico-cloud-controller-manager created
42 | % Total % Received % Xferd Average Speed Time Time Time Current
43 | Dload Upload Total Spent Left Speed
44 | 100 355 100 355 0 0 541 0 --:--:-- --:--:-- --:--:-- 541
45 | secret/api-key created
46 | installer.operator.calicocloud.io/aks-westus created
47 | ```
48 |
49 |     Joining the cluster to Calico Cloud can take a few minutes. Meanwhile, the Calico resources can be monitored until they all report `Available` as `True`.
50 |
51 | ```text
52 | Every 2.0s: kubectl get tigerastatus
53 |
54 | NAME AVAILABLE PROGRESSING DEGRADED SINCE
55 | apiserver True False False 96s
56 | calico True False False 16s
57 | compliance True False False 21s
58 | intrusion-detection True False False 41s
59 | log-collector True False False 21s
60 | management-cluster-connection True False False 51s
61 | monitor True False False 2m1s
62 | ```
63 |
64 | 3. Configure log aggregation and flush intervals.
65 |
66 | ```bash
67 | kubectl patch felixconfiguration.p default -p '{"spec":{"flowLogsFlushInterval":"10s"}}'
68 | kubectl patch felixconfiguration.p default -p '{"spec":{"dnsLogsFlushInterval":"10s"}}'
69 | kubectl patch felixconfiguration.p default -p '{"spec":{"flowLogsFileAggregationKindForAllowed":1}}'
70 | ```
71 |
72 | 4. Enable TCP stats collection.
73 |
74 | >This feature allows collection of TCP socket stats leveraging eBPF TC programs. See the docs for [more details](https://docs.tigera.io/visibility/elastic/flow/tcpstats).
75 |
76 | ```bash
77 | kubectl patch felixconfiguration default -p '{"spec":{"flowLogsCollectTcpStats":true}}'
78 | ```
79 |
80 | In module 9 you can view these stats in the `Socket stats` tab on the right hand side when selecting a traffic flow edge.
81 |
82 | [Next -> Module 4](../modules/configuring-demo-apps.md)
83 |
--------------------------------------------------------------------------------
/modules/namespace-isolation.md:
--------------------------------------------------------------------------------
1 | # Module 6: Namespace isolation
2 |
3 | **Goal:** Leverage Policy Recommendation engine to auto-generate policies to protect applications at namespace level.
4 |
5 | ## Steps
6 |
7 | 1. Enable Policy Recommendation engine.
8 |
9 |     The Policy Recommendation engine aims to help security and application teams quickly generate policies to secure applications and services running in the EKS cluster at the namespace level.
10 |
11 | a. Navigate to `Policies` -> `Recommendations` menu item in the left navigation panel and click `Enable Policy Recommendations` button to enable this feature.
12 |
13 | 
14 |
15 |     For demo purposes, it's useful to lower the `Stabilization Period` and `Processing Interval` settings for the engine. You can access these settings by clicking the `Global Settings` button in the top right corner of the `Policy Recommendations` view.
16 |
17 | 
18 |
19 | b. Let the Policy Recommendation engine run for several minutes to propose policies.
20 |
21 |     >It usually takes about 5 minutes for the engine to start generating recommended policies.
22 |
23 | 2. Review the proposed policies and move them to the `Policy Board`.
24 |
25 |     The proposed policies land in the `namespace-isolation` tier as staged policies. You can let them run in staged mode for some time if needed or enforce them immediately.
26 |
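    If you prefer the CLI, you can also list the generated staged policies there (assuming your cluster serves the `projectcalico.org/v3` staged policy resources):

    ```bash
    # list staged policies across all namespaces
    kubectl get stagednetworkpolicies.projectcalico.org -A
    ```
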
27 | 3. Enforce policies and test connectivity between the pods.
28 |
29 | Install `curl` utility into the `loadgenerator` pod to use it for testing.
30 |
31 | ```bash
32 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'apt-get update && apt-get install -y curl iputils-ping netcat && curl --help'
33 | ```
34 |
35 | Create a testing pod in `dev` namespace.
36 |
37 | ```bash
38 | kubectl -n dev run --restart=OnFailure --image nicolaka/netshoot netshoot -- sh -c 'while true; do sleep 30; done'
39 | ```
40 |
41 | Enforce staged policies that were added to the `namespace-isolation` tier and test the connectivity between the pods.
42 |
43 | ```bash
44 | # test connectivity from dev/netshoot to paymentservice.default service over port 50051
45 | kubectl -n dev exec -it netshoot -- sh -c 'nc -zv -w2 paymentservice.default 50051'
46 |
47 | # test connectivity from dev/centos to frontend.default service
48 | kubectl -n dev exec -it centos -- sh -c 'curl -m2 -sI frontend.default 2>/dev/null | grep -i http'
49 |
50 | # test connectivity from loadgenerator to frontent service within the default namespace
51 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI frontend 2>/dev/null | grep -i http'
52 |
53 | # test connectivity from default namespace to the Internet
54 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI www.google.com 2>/dev/null | grep -i http'
55 | ```
56 |
57 | Only the connections allowed by the recommended policies should succeed.
58 |
59 | [Next -> Module 7](../modules/using-security-controls.md)
60 |
--------------------------------------------------------------------------------
/modules/securing-heps.md:
--------------------------------------------------------------------------------
1 | # Module 9: Securing EKS hosts
2 |
3 | **Goal:** Secure EKS hosts ports with network policies.
4 |
5 | Calico network policies can secure not only pod-to-pod communications but also EKS hosts, protecting host-based services and ports. For more details, refer to the [Protect Kubernetes nodes](https://docs.tigera.io/security/kubernetes-nodes) documentation.
6 |
7 | ## Steps
8 |
9 | 1. Open a port of a NodePort service for public access on an EKS node.
10 |
11 |     For demo purposes, we are going to expose the `default/frontend` service via the `NodePort` service type to open it up for public access.
12 |
13 | ```bash
14 | # expose the frontend service via the NodePort service type
15 | kubectl expose deployment frontend --type=NodePort --name=frontend-nodeport --overrides='{"apiVersion":"v1","spec":{"ports":[{"nodePort":30080,"port":80,"targetPort":8080}]}}'
16 |
17 | # open access to the port in AWS security group
18 | CLUSTER_NAME='tigera-workshop' # adjust the name if you used a different name for your EKS cluster
19 | AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
20 |     # pick one EKS node and use its ID to get the security group
21 |     SG_ID=$(aws ec2 describe-instances --region $AWS_REGION --filters "Name=tag:Name,Values=$CLUSTER_NAME*" "Name=instance-state-name,Values=running" --query 'Reservations[0].Instances[*].NetworkInterfaces[0].Groups[0].GroupId' --output text)
22 |     # open the node port in the security group for public access
23 | aws ec2 authorize-security-group-ingress --region $AWS_REGION --group-id $SG_ID --protocol tcp --port 30080 --cidr 0.0.0.0/0
24 |
25 | # get public IP of an EKS node
26 |     PUB_IP=$(aws ec2 describe-instances --region $AWS_REGION --filters "Name=tag:Name,Values=$CLUSTER_NAME*" "Name=instance-state-name,Values=running" --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
27 |     # test connection to the node port
28 | nc -zv $PUB_IP 30080
29 |
30 | # get var configuration for local shell which will be used in a later step
31 | echo "EKS_NODE_PUB_IP=$PUB_IP"
32 | ```
33 |
34 | >It can take a moment for the node port to become accessible.
35 |
36 |     If the node port was configured correctly, the `nc` command should show you that the port is open.
37 |
38 | 2. Enable `HostEndpoint` auto-creation for EKS cluster.
39 |
40 |     When working with managed Kubernetes services, such as EKS, we recommend using the `HostEndpoint` (HEP) auto-creation feature, which automates the management of `HostEndpoint` resources for managed Kubernetes clusters whenever the cluster is scaled.
41 |
42 |     >Before you enable the HEP auto-creation feature, make sure there are no `HostEndpoint` resources manually defined for your cluster: `kubectl get hostendpoints`.
43 |
44 | ```bash
45 | # check whether auto-creation for HEPs is enabled. Default: Disabled
46 | kubectl get kubecontrollersconfiguration.p default -ojsonpath='{.status.runningConfig.controllers.node.hostEndpoint.autoCreate}'
47 |
48 | # enable HEP auto-creation
49 | kubectl patch kubecontrollersconfiguration.p default -p '{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'
50 | # verify that each node got a HostEndpoint resource created
51 | kubectl get hostendpoints
52 | ```
53 |
54 | 3. Implement a Calico policy to control access to the service of NodePort type.
55 |
56 | Deploy a policy that only allows access to the node port from the Cloud9 instance.
57 |
58 | ```bash
59 | # from your local shell test connection to the node port, i.e. 30080, using netcat or telnet or other connectivity testing tool
60 | EKS_NODE_PUB_IP=XX.XX.XX.XX
61 | nc -zv $EKS_NODE_PUB_IP 30080
62 |
63 | # get public IP of Cloud9 instance in the Cloud9 shell
64 | CLOUD9_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
65 | # deploy HEP policy
66 | sed -e "s/\${CLOUD9_IP}/${CLOUD9_IP}\/32/g" demo/30-secure-hep/frontend-nodeport-access.yaml | kubectl apply -f -
67 | # test access from Cloud9 shell
68 | nc -zv $EKS_NODE_PUB_IP 30080
69 | ```
70 |
71 | Once the policy is implemented, you should not be able to access the node port `30080` from your local shell, but you should be able to access it from the Cloud9 shell.
72 |
73 | >Note that in order to control access to the NodePort service, you need to enable `preDNAT` and `applyOnForward` policy settings.
74 |
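    For orientation, such a host endpoint policy has roughly the following shape (the tier, order, and selector here are illustrative; the enforced definition lives in `demo/30-secure-hep/frontend-nodeport-access.yaml`):

    ```yaml
    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: security.frontend-nodeport-access
    spec:
      tier: security
      order: 100
      # match traffic before DNAT so the original node port can be referenced
      preDNAT: true
      applyOnForward: true
      # target the auto-created host endpoints
      selector: has(kubernetes-host)
      ingress:
        - action: Allow
          protocol: TCP
          source:
            nets:
              - ${CLOUD9_IP}/32
          destination:
            ports:
              - 30080
      types:
        - Ingress
    ```
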
75 | 4. *[Bonus task]* Implement a Calico policy to control access to the SSH port on EKS hosts.
76 |
77 |     When dealing with SSH and platform-required ports, Calico provides a failsafe mechanism to manage such ports so that you don't lock yourself out of the node by accident. Once you configure and test the host-targeting policy, you can selectively disable the failsafe ports.
78 |
79 | ```bash
80 | # deploy FelixConfiguration to disable fail safe for SSH port
81 | kubectl apply -f demo/30-secure-hep/felixconfiguration.yaml
82 |
83 | # get public IP of Cloud9 instance
84 | CLOUD9_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
85 | # allow SSH access to EKS nodes only from the Cloud9 instance
86 | sed -e "s/\${CLOUD9_IP}/${CLOUD9_IP}\/32/g" demo/30-secure-hep/ssh-access.yaml | kubectl apply -f -
87 | ```
88 |
89 | [Next -> Module 10](../modules/using-observability-tools.md)
90 |
--------------------------------------------------------------------------------
/modules/setting-up-work-environment.md:
--------------------------------------------------------------------------------
1 | # Module 1: Setting up work environment
2 |
3 | **Goal:** Set up and configure your environment to work with AWS resources.
4 |
5 | ## Choose between local environment and Cloud9 instance
6 |
7 | The simplest way to configure your working environment is to either use your local environment, i.e. laptop, desktop computer, etc., or create an [AWS Cloud9 environment](https://docs.aws.amazon.com/cloud9/latest/user-guide/tutorial.html) from which you can run all necessary commands in this workshop. If you're familiar with tools like `SSH client`, `git`, `jq`, `netcat` and feel comfortable using your local shell, then go to `step 2` in the next section.
8 |
9 | ## Steps
10 |
11 | 1. Create Cloud9 workspace environment.
12 |
13 |     To configure a Cloud9 instance, open the AWS Console and navigate to `Services` > `Cloud9`. Create an environment in the desired region. You can use all the default settings when creating the environment, but consider using a `t3.small` instance as the `t2.micro` instance could be a bit slow. You can name it `tigera-workspace` to quickly find it in case you have many `Cloud9` instances. It usually takes only a few minutes to get the Cloud9 instance running.
14 |
15 | 2. Ensure your environment has these tools:
16 |
17 | - [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html)
18 | - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
19 | - [eksctl](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html)
20 | - [EKS kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html)
21 | - `jq` and `netcat` utilities
22 |
23 |     Check whether these tools are already present in your environment. If not, install the missing ones.
24 |
25 | ```bash
26 | # run these commands to check whether the tools are installed in your environment
27 | aws --version
28 | git --version
29 | eksctl version
30 | kubectl version --short --client
31 |
32 | # install jq and netcat
33 | sudo yum install jq nc -y
34 | jq --version
35 | nc --version
36 | ```
37 |
38 | >If `aws` version is `1.x`, upgrade it to version `2.x`.
39 |
40 | ```bash
41 | curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
42 | unzip awscliv2.zip
43 | sudo ./aws/install
44 | # reload bash shell
45 | . ~/.bashrc
46 | aws --version
47 | ```
48 |
49 | >For convenience consider configuring [autocompletion for kubectl](https://kubernetes.io/docs/tasks/tools/included/optional-kubectl-configs-bash-linux/#enable-kubectl-autocompletion).
50 |
51 | 3. Download this repo into your environment:
52 |
53 | ```bash
54 | git clone https://github.com/tigera-solutions/tigera-eks-workshop
55 | ```
56 |
57 | 4. Configure IAM role for Cloud9 workspace.
58 |
59 |     >This is necessary when using a Cloud9 environment, which has an IAM role automatically associated with it. You need to replace this role with a custom IAM role that provides the permissions necessary to build an EKS cluster so that you can work with the cluster using the `kubectl` CLI.
60 |
61 |     a. When using a Cloud9 instance, by default the instance has AWS managed temporary credentials that provide limited permissions to AWS resources. In order to manage IAM resources from the Cloud9 workspace, export your user's [AWS Access Key/ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) via environment variables. If you already have them under your `~/.aws/credentials`, you can skip this step.
62 |
63 | >It is recommended to use your personal AWS account which would have full access to AWS resources. If using a corporate AWS account, make sure to check with account administrators to provide you with sufficient permissions to create and manage EKS clusters and Load Balancer resources.
64 |
65 | ```bash
66 | export AWS_ACCESS_KEY_ID=''
67 | export AWS_SECRET_ACCESS_KEY=''
68 | ```
69 |
70 | b. Create IAM role.
71 |
72 | ```bash
73 | # go to cloned repo
74 | cd ./tigera-eks-workshop
75 |
76 | IAM_ROLE='tigera-workshop-admin'
77 | # assign AdministratorAccess default policy. You can use a custom policy if required.
78 | ADMIN_POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AdministratorAccess`].Arn' --output text)
79 | # create IAM role
80 | aws iam create-role --role-name $IAM_ROLE --assume-role-policy-document file://configs/trust-policy.json
81 | aws iam attach-role-policy --role-name $IAM_ROLE --policy-arn $ADMIN_POLICY_ARN
82 | # tag role
83 |     aws iam tag-role --role-name $IAM_ROLE --tags '[{"Key": "purpose", "Value": "tigera-eks-workshop"}]'
84 | # create instance profile
85 | aws iam create-instance-profile --instance-profile-name $IAM_ROLE
86 | # add IAM role to instance profile
87 | aws iam add-role-to-instance-profile --role-name $IAM_ROLE --instance-profile-name $IAM_ROLE
88 | ```
89 |
90 | c. Assign the IAM role to Cloud9 workspace.
91 |
92 | - Click the grey circle button (in top right corner) and select `Manage EC2 Instance`.
93 |
94 | 
95 |
96 | - Select the instance, then choose `Actions` > `Security` > `Modify IAM Role` and assign the IAM role you created in previous step, i.e. `tigera-workshop-admin`.
97 |
98 | 
99 |
100 | d. Update IAM settings for your workspace.
101 |
102 | - Return to your Cloud9 workspace and click the gear icon (in top right corner)
103 | - Select AWS SETTINGS
104 | - Turn off AWS managed temporary credentials
105 | - Close the Preferences tab
106 |
107 | 
108 |
109 | - Remove locally stored `~/.aws/credentials`
110 |
111 | ```bash
112 | rm -vf ~/.aws/credentials
113 | ```
114 |
115 | e. Unset `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` to allow Cloud9 instance to use the configured IAM role.
116 |
117 | ```bash
118 | unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
119 | ```
120 |
121 | [Next -> Module 2](../modules/creating-eks-cluster.md)
122 |
--------------------------------------------------------------------------------
/modules/using-alerts.md:
--------------------------------------------------------------------------------
1 | # Module 12: Using alerts
2 |
3 | **Goal:** Use global alerts to notify security and operations teams about unsanctioned or suspicious activity.
4 |
5 | ## Steps
6 |
7 | 1. Review alerts manifests.
8 |
9 |     Navigate to `demo/50-alerts` and review the YAML manifests that represent alert definitions. Each file contains an alert template and an alert definition. Alert templates can be used to quickly create an alert definition in the UI.
10 |
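    For orientation, a `GlobalAlert` definition has roughly the following shape (the field values here are illustrative rather than a copy of the demo manifests):

    ```yaml
    apiVersion: projectcalico.org/v3
    kind: GlobalAlert
    metadata:
      name: example.unsanctioned.dns.access
    spec:
      description: "Pod queried an unsanctioned domain"
      summary: "DNS lookup of an unsanctioned domain"
      severity: 100
      dataSet: dns
      # fire when any DNS query matches this domain
      query: qname = "www.badlink.com"
      metric: count
      condition: gt
      threshold: 0
    ```
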
11 | 2. View triggered alerts.
12 |
13 | >We implemented alerts in one of the first labs in order to see how our activity can trigger them.
14 |
15 | Open `Alerts` view to see all triggered alerts in the cluster. Review the generated alerts.
16 |
17 | 
18 |
19 | You can also review the alerts configuration and templates by navigating to alerts configuration in the top right corner.
20 |
21 | [Next -> Module 13](../modules/dynamic-packet-capture.md)
22 |
--------------------------------------------------------------------------------
/modules/using-compliance-reports.md:
--------------------------------------------------------------------------------
1 | # Module 11: Using compliance reports
2 |
3 | **Goal:** Use global reports to satisfy compliance requirements.
4 |
5 | ## Steps
6 |
7 | 1. Use `Compliance Reports` view to see all generated reports.
8 |
9 |     >We have deployed a few compliance reports in one of the first labs, and by this time a few reports should have already been generated. If you don't see any reports, you can manually kick off a report generation task. Follow the steps below if you need to do so.
10 |
11 | Calico provides `GlobalReport` resource to offer [Compliance reports](https://docs.tigera.io/compliance/compliance-reports/) capability. There are several types of reports that you can configure:
12 |
13 | - CIS benchmarks
14 | - Inventory
15 | - Network access
16 | - Policy audit
17 |
18 |     >When using an EKS cluster, you need to [enable and configure audit log collection](https://docs.tigera.io/compliance/compliance-reports/compliance-managed-cloud#enable-audit-logs-in-eks) on the AWS side in order to get the data captured for the `policy-audit` reports.
19 |
20 | A compliance report could be configured to include only specific endpoints leveraging endpoint labels and selectors. Each report has the `schedule` field that determines how often the report is going to be generated and sets the timeframe for the data to be included into the report.
21 |
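    For example, a `GlobalReport` that produces an hourly inventory report has roughly this shape (the name and schedule here are illustrative):

    ```yaml
    apiVersion: projectcalico.org/v3
    kind: GlobalReport
    metadata:
      name: cluster-inventory
    spec:
      reportType: inventory
      # cron format: generate the report at the top of every hour
      schedule: 0 * * * *
    ```
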
22 | Compliance reports organize data in a CSV format which can be downloaded and moved to a long term data storage to meet compliance requirements.
23 |
24 | 
25 |
26 | 2. *[Optional]* Manually kick off report generation task.
27 |
28 | >In order to generate a compliance report, Calico needs at least 1 hour's worth of data for the `inventory` and `network-access` reports, and at least 24 hours' worth of data for the `cis` reports. If the commands below don't result in any reports being generated, give it some time and then retry the report generation.
29 |
30 | It is possible to kick off report generation via a one-off job.
31 |
32 | ```bash
33 | # get Calico version
34 | CALICO_VERSION=$(kubectl get clusterinformation default -ojsonpath='{.spec.cnxVersion}')
35 | # set report names
36 | CIS_REPORT_NAME='daily-cis-results'
37 | INVENTORY_REPORT_NAME='cluster-inventory'
38 | NETWORK_ACCESS_REPORT_NAME='cluster-network-access'
39 | # for managed clusters you must set ELASTIC_INDEX_SUFFIX var to cluster name in the reporter pod template YAML
40 | ELASTIC_INDEX_SUFFIX=$(kubectl get deployment -n tigera-intrusion-detection intrusion-detection-controller -ojson | jq -r '.spec.template.spec.containers[0].env[] | select(.name == "CLUSTER_NAME").value')
41 |
42 | # enable if you configured audit logs for your EKS cluster and uncommented the policy audit reporter job;
43 | # if so, also add -e "s?<POLICY_AUDIT_REPORT_NAME>?$POLICY_AUDIT_REPORT_NAME?g" to the sed command below
44 | # POLICY_AUDIT_REPORT_NAME='cluster-policy-audit'
45 |
46 | START_TIME=$(date -d '-2 hours' -u +'%Y-%m-%dT%H:%M:%SZ')
47 | END_TIME=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
48 |
49 | # replace variables in YAML and deploy reporter jobs
50 | # note: the <...> tokens must match the placeholders used in the reporter pod template YAML
51 | sed -e "s?<CALICO_VERSION>?$CALICO_VERSION?g" \
52 |   -e "s?<CIS_REPORT_NAME>?$CIS_REPORT_NAME?g" \
53 |   -e "s?<INVENTORY_REPORT_NAME>?$INVENTORY_REPORT_NAME?g" \
54 |   -e "s?<NETWORK_ACCESS_REPORT_NAME>?$NETWORK_ACCESS_REPORT_NAME?g" \
55 |   -e "s?<ELASTIC_INDEX_SUFFIX>?$ELASTIC_INDEX_SUFFIX?g" \
56 |   -e "s?<START_TIME>?$START_TIME?g" \
57 |   -e "s?<END_TIME>?$END_TIME?g" \
58 |   demo/40-compliance-reports/cluster-reporter-pods.yaml | kubectl apply -f -
59 | ```
60 |
61 | [Next -> Module 12](../modules/using-alerts.md)
62 |
--------------------------------------------------------------------------------
/modules/using-egress-access-controls.md:
--------------------------------------------------------------------------------
1 | # Module 8: Using egress access controls
2 |
3 | **Goal:** Configure egress access for specific workloads.
4 |
5 | ## Steps
6 |
7 | 1. Test connectivity within the cluster and to external endpoints.
8 |
9 | a. Test connectivity between `dev/centos` pod and `default/frontend` pod.
10 |
11 | ```bash
12 | # test connectivity from dev namespace to default namespace
13 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http'
14 | ```
15 |
16 | b. Test connectivity from `dev/centos` to the external endpoint.
17 |
18 | ```bash
19 | # test connectivity from dev namespace to the Internet
20 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://api.twilio.com 2>/dev/null | grep -i http'
21 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://www.google.com 2>/dev/null | grep -i http'
22 | ```
23 |
24 | The access should be denied, as the policies configured in the previous module do not allow it.
25 |
26 | 2. Implement an egress policy to allow egress access from a workload in one namespace, e.g. `dev/centos`, to a service in another namespace, e.g. `default/frontend`.
27 |
28 | a. Deploy egress policy.
29 |
30 | ```bash
31 | kubectl apply -f demo/20-egress-access-controls/centos-to-frontend.yaml
32 | ```
33 |
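34 | The applied manifest is roughly of this shape (a hedged sketch rather than a verbatim copy of `centos-to-frontend.yaml`):
35 |
36 | ```yaml
37 | apiVersion: projectcalico.org/v3
38 | kind: NetworkPolicy
39 | metadata:
40 |   name: centos-to-frontend
41 |   namespace: dev
42 | spec:
43 |   # select the centos pod and allow its egress to the frontend workload in the default namespace
44 |   selector: app == 'centos'
45 |   types:
46 |     - Egress
47 |   egress:
48 |     - action: Allow
49 |       protocol: TCP
50 |       destination:
51 |         namespaceSelector: projectcalico.org/name == 'default'
52 |         selector: app == 'frontend'
53 | ```
54 |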
34 | b. Test connectivity between `dev/centos` pod and `default/frontend` service.
35 |
36 | ```bash
37 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http'
38 | ```
39 |
40 | The access should be allowed once the egress policy is in place.
41 |
42 | 3. Implement a DNS policy to allow access to an external endpoint from a specific workload, e.g. `dev/centos`.
43 |
44 | a. Apply a policy to allow access to the `api.twilio.com` endpoint using a DNS rule.
45 |
46 | ```bash
47 | # deploy dns policy
48 | kubectl apply -f demo/20-egress-access-controls/dns-policy.yaml
49 |
50 | # test egress access to api.twilio.com
51 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://api.twilio.com 2>/dev/null | grep -i http'
52 | # test egress access to www.google.com
53 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -skI https://www.google.com 2>/dev/null | grep -i http'
54 | ```
55 |
56 | Access to the `api.twilio.com` endpoint should be allowed by the DNS policy, while access to any other external endpoint, such as `www.google.com`, remains blocked unless we modify the policy to include that domain name.
57 |
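58 | The DNS rule in the applied policy is roughly of this shape (a hedged sketch; see `demo/20-egress-access-controls/dns-policy.yaml` for the actual manifest):
59 |
60 | ```yaml
61 | apiVersion: projectcalico.org/v3
62 | kind: NetworkPolicy
63 | metadata:
64 |   name: allow-twilio-api
65 |   namespace: dev
66 | spec:
67 |   selector: app == 'centos'
68 |   types:
69 |     - Egress
70 |   egress:
71 |     - action: Allow
72 |       protocol: TCP
73 |       destination:
74 |         # FQDN-based egress rule
75 |         domains: ['api.twilio.com']
76 | ```
77 |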
58 | b. Edit the policy to use a `NetworkSet` instead of an inline DNS rule.
59 |
60 | ```bash
61 | # deploy network set
62 | kubectl apply -f demo/20-egress-access-controls/netset.external-apis.yaml
63 | # deploy DNS policy using the network set
64 | kubectl apply -f demo/20-egress-access-controls/dns-policy.netset.yaml
65 | ```
66 |
67 | >As a bonus example, you can modify the `external-apis` network set to include the `*.google.com` domain name, which would allow access to Google subdomains like `www.google.com`, `docs.google.com`, etc. A sketch of such a network set is shown below.
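68 |
69 | A `NetworkSet` carrying allowed egress domains is roughly of this shape (a hedged sketch; the DNS policy then selects the set by its `type` label instead of listing domains inline):
70 |
71 | ```yaml
72 | apiVersion: projectcalico.org/v3
73 | kind: NetworkSet
74 | metadata:
75 |   name: external-apis
76 |   namespace: dev
77 |   labels:
78 |     type: external-apis
79 | spec:
80 |   allowedEgressDomains:
81 |     - api.twilio.com
82 |     # bonus: a wildcard would also allow www.google.com, docs.google.com, etc.
83 |     - '*.google.com'
84 | ```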
68 |
69 | [Next -> Module 9](../modules/securing-heps.md)
70 |
--------------------------------------------------------------------------------
/modules/using-observability-tools.md:
--------------------------------------------------------------------------------
1 | # Module 10: Using observability tools
2 |
3 | **Goal:** Explore Calico observability tools.
4 |
5 | ## Calico observability tools
6 |
7 | >If you are interested in collecting application layer metrics for your workloads, refer to the [Configure L7 logs](https://docs.tigera.io/visibility/elastic/l7/configure) documentation.
8 |
9 | 1. Dashboard
10 |
11 | The `Dashboard` view in the Enterprise Manager UI presents a high-level overview of what's going on in your cluster. The view shows the following information:
12 |
13 | - Connections, Allowed Bytes and Packets
14 | - Denied Bytes and Packets
15 | - Total number of Policies, Endpoints and Nodes
16 | - Summary of CIS benchmarks
17 | - Count of triggered alerts
18 | - Packets by Policy histogram that shows allowed and denied traffic as it is being evaluated by network policies
19 |
20 | 
21 |
22 | 2. Policies Board
23 |
24 | The `Policies Board` shows all policies deployed in the cluster, organized into `policy tiers`. You can control what a user can see and do by configuring Kubernetes RBAC roles, which determine what is visible in this view. You can also use the controls to hide tiers you're not interested in at any given time.
25 |
26 | 
27 |
28 | By leveraging the stats controls, you can toggle additional metrics to be listed for each policy shown.
29 |
30 | 
31 |
32 | 3. Audit timeline
33 |
34 | The `Timeline` view shows the audit trail of created, deleted, or modified resources.
35 |
36 | 
37 |
38 | 4. Endpoints
39 |
40 | The `Endpoints` view lists all endpoints known to Calico. It includes all Kubernetes endpoints, such as Pods, as well as host endpoints that can represent a Kubernetes host, an external VM, or a bare-metal machine.
41 |
42 | 
43 |
44 | 5. Service Graph
45 |
46 | The dynamic `Service Graph` presents network flows from a service-level perspective. The top-level view shows how traffic flows between namespaces as well as between external and internal endpoints.
47 |
48 | 
49 |
50 | - When you select any node representing a namespace, you will get additional details about the namespace, such as incoming and outgoing traffic, policies evaluating each flow, and DNS metrics.
51 | - When you select any edge, you will get details about the flows representing that edge.
52 | - If you expand a namespace by double-clicking on it, you will get the view of all components of the namespace.
53 |
54 | 6. Flow Visualizations
55 |
56 | The `Flow Visualizations` view shows all point-to-point flows in the cluster. It allows you to see the cluster traffic from the network point of view.
57 |
58 | 
59 |
60 | 7. Kibana dashboards
61 |
62 | The `Kibana` component comes with the Calico commercial offerings and provides access to raw flow, audit, and DNS logs, as well as the ability to visualize the collected data in various dashboards.
63 |
64 | 
65 |
66 | Some of the default dashboards you get access to are **DNS Logs**, **Flow Logs**, **Audit Logs**, **Kubernetes API calls**, **L7 HTTP metrics**, **Tor-VPN Logs**, and others.
67 |
68 | [Next -> Module 11](../modules/using-compliance-reports.md)
69 |
--------------------------------------------------------------------------------
/modules/using-security-controls.md:
--------------------------------------------------------------------------------
1 | # Module 7: Using security controls
2 |
3 | **Goal:** Leverage network policies to segment connections within the Kubernetes cluster and prevent known bad actors from accessing the workloads.
4 |
5 | ## Steps
6 |
7 | 1. Test connectivity between application components and across application stacks.
8 |
9 | a. Add `curl` to `loadgenerator` component to run test commands.
10 |
11 | >This step is only needed if you're using a `boutiqueshop` version in which the `loadgenerator` component doesn't include the `curl` or `wget` utility.
12 | >Note that packages added to a running pod are removed as soon as the pod restarts.
13 |
14 | ```bash
15 | # install curl utility
16 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'apt-get update && apt-get install -y curl && curl --help'
17 | ```
18 |
19 | b. Test connectivity between workloads within each namespace.
20 |
21 | ```bash
22 | # test connectivity within dev namespace
23 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://nginx-svc 2>/dev/null | grep -i http'
24 |
25 | # test connectivity within default namespace
26 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI frontend 2>/dev/null | grep -i http'
27 | ```
28 |
29 | c. Test connectivity across namespaces.
30 |
31 | ```bash
32 | # test connectivity from dev namespace to default namespace
33 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http'
34 |
35 | # test connectivity from default namespace to dev namespace
36 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI http://nginx-svc.dev 2>/dev/null | grep -i http'
37 | ```
38 |
39 | d. Test connectivity from each namespace to the Internet.
40 |
41 | ```bash
42 | # test connectivity from dev namespace to the Internet
43 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://www.google.com 2>/dev/null | grep -i http'
44 |
45 | # test connectivity from default namespace to the Internet
46 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI www.google.com 2>/dev/null | grep -i http'
47 | ```
48 |
49 | All of these tests should succeed if there are no policies in place to govern the traffic in the `dev` and `default` namespaces.
50 |
51 | 2. Apply staged `default-deny` policy.
52 |
53 | >A staged `default-deny` policy is a good way to catch any traffic that is not explicitly allowed by a policy, without actually blocking it.
54 |
55 | ```bash
56 | kubectl apply -f demo/10-security-controls/staged.default-deny.yaml
57 | ```
58 |
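59 | The staged policy is roughly of this shape (a hedged sketch; a `StagedGlobalNetworkPolicy` mirrors a `GlobalNetworkPolicy` without enforcing it):
60 |
61 | ```yaml
62 | apiVersion: projectcalico.org/v3
63 | kind: StagedGlobalNetworkPolicy
64 | metadata:
65 |   name: default-deny
66 | spec:
67 |   order: 2000
68 |   # scope the default-deny to the application namespaces
69 |   selector: 'projectcalico.org/namespace in {"dev", "default"}'
70 |   types:
71 |     - Ingress
72 |     - Egress
73 | ```
74 |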
59 | You should be able to view the potential effect of the staged `default-deny` policy if you navigate to the `Dashboard` view in the Enterprise Manager UI and look at the `Packets by Policy` histogram.
60 |
61 | ```bash
62 | # make a request across namespaces and view Packets by Policy histogram
63 | for i in {1..10}; do kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http'; sleep 2; done
64 | ```
65 |
66 | >The staged policy does not affect the traffic directly but allows you to view the policy impact if it were to be enforced.
67 |
68 | 3. Apply network policies to control East-West traffic.
69 |
70 | ```bash
71 | # deploy dev policies
72 | kubectl apply -f demo/dev/policies.yaml
73 |
74 | # deploy boutiqueshop policies
75 | kubectl apply -f demo/boutiqueshop/policies.yaml
76 | ```
77 |
78 | Now that we have proper policies in place, we can enforce a `default-deny` policy, moving closer to a zero-trust security approach. You can either enforce the already deployed staged `default-deny` policy using the `Policies Board` view in the Enterprise Manager UI, or apply an enforcing `default-deny` policy manifest.
79 |
80 | ```bash
81 | # apply enforcing default-deny policy manifest
82 | kubectl apply -f demo/10-security-controls/default-deny.yaml
83 | # you can delete staged default-deny policy
84 | kubectl delete -f demo/10-security-controls/staged.default-deny.yaml
85 | ```
86 |
87 | 4. Test connectivity with policies in place.
88 |
89 | a. Only the connections between the components within each namespace should be allowed, as configured by the policies.
90 |
91 | ```bash
92 | # test connectivity within dev namespace
93 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://nginx-svc 2>/dev/null | grep -i http'
94 |
95 | # test connectivity within default namespace
96 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI frontend 2>/dev/null | grep -i http'
97 | ```
98 |
99 | b. The connections across `dev` and `default` namespaces should be blocked by the global `default-deny` policy.
100 |
101 | ```bash
102 | # test connectivity from dev namespace to default namespace
103 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://frontend.default 2>/dev/null | grep -i http'
104 |
105 | # test connectivity from default namespace to dev namespace
106 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI http://nginx-svc.dev 2>/dev/null | grep -i http'
107 | ```
108 |
109 | c. The connections to the Internet should be blocked by the configured policies.
110 |
111 | ```bash
112 | # test connectivity from dev namespace to the Internet
113 | kubectl -n dev exec -t centos -- sh -c 'curl -m2 -sI http://www.google.com 2>/dev/null | grep -i http'
114 |
115 | # test connectivity from default namespace to the Internet
116 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c 'curl -m2 -sI www.google.com 2>/dev/null | grep -i http'
117 | ```
118 |
119 | 5. Protect workloads from known bad actors.
120 |
121 | Calico offers the `GlobalThreatFeed` resource to prevent known bad actors from accessing Kubernetes pods. A sketch of such a resource is shown below.
122 |
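123 | A threat feed resource is roughly of this shape (a hedged sketch; the `pull` URL points at the public Feodo tracker block list):
124 |
125 | ```yaml
126 | apiVersion: projectcalico.org/v3
127 | kind: GlobalThreatFeed
128 | metadata:
129 |   name: feodo-tracker
130 | spec:
131 |   pull:
132 |     period: 1h
133 |     http:
134 |       url: https://feodotracker.abuse.ch/downloads/ipblocklist.txt
135 |   globalNetworkSet:
136 |     labels:
137 |       feed: feodo
138 | ```
139 |
140 | The pulled IPs are materialized as a `GlobalNetworkSet` (here named `threatfeed.feodo-tracker`) that the block policy can match on by label.
141 |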
123 | a. Deploy a threat feed and a policy for it.
124 |
125 | ```bash
126 | # deploy feodo tracker threatfeed
127 | kubectl apply -f demo/10-security-controls/feodotracker.threatfeed.yaml
128 | # deploy network policy that uses the threat feed
129 | kubectl apply -f demo/10-security-controls/feodo-block-policy.yaml
130 | ```
131 |
132 | b. Simulate access to threat feed endpoints.
133 |
134 | ```bash
135 | # curl one of the IPs from the feodo tracker block list
136 | IP=$(kubectl get globalnetworkset threatfeed.feodo-tracker -ojson | jq '.spec.nets[4]' | sed -e 's/^"//' -e 's/"$//' -e 's/\/32//')
137 | kubectl -n dev exec -t centos -- sh -c "curl -m2 $IP"
138 | kubectl exec -it $(kubectl get po -l app=loadgenerator -ojsonpath='{.items[0].metadata.name}') -c main -- sh -c "curl -m2 $IP"
139 | ```
140 |
141 | c. Review the results.
142 |
143 | Navigate to Calico Enterprise `Manager UI` -> `Alerts` to view the results.
144 |
145 | 6. *[Bonus task]* Monitor the use of Tor exits by pods.
146 |
147 | Calico leverages the `GlobalThreatFeed` resource to monitor the use of Tor (The Onion Router) exits, which provide the means to establish anonymous connections. You can configure [Tor-VPN feed types](https://docs.tigera.io/threat/tor-vpn-feed-and-dashboard) to capture any attempts from within your cluster to use those types of exits.
148 |
149 | a. Configure the Tor bulk exit feed.
150 |
151 | ```bash
152 | kubectl apply -f https://docs.tigera.io/manifests/threatdef/tor-exit-feed.yaml
153 | ```
154 |
155 | b. Simulate an attempt to use a Tor exit.
156 |
157 | ```bash
158 | IP=$(kubectl get globalnetworkset threatfeed.tor-bulk-exit-list -ojson | jq .spec.nets[0] | sed -e 's/^"//' -e 's/"$//' -e 's/\/32//')
159 | kubectl -n dev exec -t centos -- sh -c "ping -c1 $IP"
160 | ```
161 |
162 | c. View the attempts to use Tor exits in the Kibana dashboard.
163 |
164 | Navigate to `Kibana` -> `Dashboard` -> `Tigera Secure EE Tor-VPN Logs` dashboard to view the results.
165 |
166 | [Next -> Module 8](../modules/using-egress-access-controls.md)
167 |
--------------------------------------------------------------------------------
/modules/vulnerability-management.md:
--------------------------------------------------------------------------------
1 | # Module 15: Vulnerability management
2 |
3 | **Goal:** Use vulnerability management to understand your applications' exposure to various vulnerabilities and manage container admission leveraging vulnerability scanning results.
4 |
5 | ## Steps
6 |
7 | 1. Download `tigera-scanner` binary.
8 |
9 | >Refer to the [Image Assurance](https://docs.tigera.io/calico-cloud/image-assurance/) docs for the most recent information.
10 |
11 | [Follow the docs](https://docs.tigera.io/calico-cloud/image-assurance/scan-image-registries#start-the-cli-scanner) to download the `tigera-scanner` binary and scan application images.
12 |
13 | >Note that the scanner version in the command below may be outdated, as the scanner binary is often updated with each release of Calico Cloud. Follow the docs to get the most recent `tigera-scanner` binary.
14 |
15 | ```bash
16 | curl -Lo tigera-scanner https://installer.calicocloud.io/tigera-scanner/v3.16.1-11/image-assurance-scanner-cli-linux-amd64
17 | chmod +x ./tigera-scanner
18 | ./tigera-scanner version
19 | ```
20 |
21 | 2. Scan application images.
22 |
23 | a. Retrieve the `API URL` and `Token` values by navigating to **Image Assurance** > **Access Settings** in the Calico Cloud UI.
24 |
25 | b. Scan application images.
26 |
27 | >In order to scan an image, you need to pull it down first and then scan it.
28 |
29 | ```bash
30 | # set vars using the API URL and token values retrieved in step 2a
31 | API_URL='https://.calicocloud.io'
32 | TOKEN=''
33 |
34 | # pull image locally
35 | docker pull gcr.io/google-samples/microservices-demo/frontend:v0.3.8
36 |
37 | # scan images
38 | ./tigera-scanner scan gcr.io/google-samples/microservices-demo/frontend:v0.3.8 --fail_threshold 7.0 --warn_threshold 3.9 --apiurl $API_URL --token $TOKEN
39 | ```
40 |
41 | Navigate to **Image Assurance** > **Scan Results** in the Calico Cloud UI and review scan results.
42 |
43 | 3. Configure image assurance admission controller.
44 |
45 | >The image assurance admission controller is used to enforce the policies that determine which images are allowed to be deployed into the cluster. The `tigera-image-assurance-admission-controller-deploy.yaml` manifest is configured to look for namespaces carrying the `tigera-admission-controller: enforcing` label to enforce container admission.
46 |
47 | a. Add `tigera-admission-controller: enforcing` label to the `default` namespace.
48 |
49 | ```bash
50 | kubectl label namespace default tigera-admission-controller=enforcing
51 | ```
52 |
53 | b. Deploy the admission controller.
54 |
55 | >See [image assurance](https://docs.tigera.io/calico-cloud/image-assurance/install-the-admission-controller#install-the-admission-controller) docs to get the most recent version.
56 |
57 | >NOTE: if your workstation has OpenSSL version 1.0.2 or any other version that doesn't support the `-addext` flag, update OpenSSL to version 1.1.x and make sure the `openssl` binary is updated. On Amazon Linux 2 you can do it with: `sudo yum install -y openssl11 && sudo ln -s /usr/bin/openssl11 /usr/bin/openssl`
58 |
59 | ```bash
60 | # get most recent versions and adjust these vars
61 | IA_VERSION='v3.19.0-1.0-4'
62 | IA_AC_VERSION='v1.14.1'
63 |
64 | # generate certificates
65 | curl https://installer.calicocloud.io/manifests/${IA_VERSION}/manifests/generate-open-ssl-key-cert-pair.sh | bash
66 |
67 | # deploy admission controller
68 | sed -e "s/BASE64_CERTIFICATE/$(printf '%q' `base64 < admission_controller_cert.pem`)/g" -e "s/BASE64_KEY/$(printf '%q' `base64 < admission_controller_key.pem`)/g" -e "s/IA_AC_VERSION/$IA_AC_VERSION/g" demo/80-image-assurance/tigera-image-assurance-admission-controller-deploy.yaml | kubectl apply -f-
69 | ```
70 |
71 | 4. Configure container admission policy.
72 |
73 | Deploy a container admission policy that only allows deployment of images with a `Pass` or `Warn` scan status.
74 |
75 | ```bash
76 | kubectl apply -f demo/80-image-assurance/tigera-image-assurance-admission-controller-policy.yaml
77 | ```
78 |
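79 | The `reject-failed` policy is roughly of this shape (a hedged sketch; see the manifest under `demo/80-image-assurance` for the exact definition):
80 |
81 | ```yaml
82 | apiVersion: containersecurity.tigera.io/v1beta1
83 | kind: ContainerAdmissionPolicy
84 | metadata:
85 |   name: reject-failed
86 | spec:
87 |   selector: all()
88 |   namespaceSelector: tigera-admission-controller == 'enforcing'
89 |   order: 10
90 |   rules:
91 |     # admit only images whose scan status is Pass or Warn, reject everything else
92 |     - action: Allow
93 |       imageScanStatus:
94 |         operator: IsOneOf
95 |         values: [Pass, Warn]
96 |     - action: Reject
97 | ```
98 |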
79 | To test the policy enforcement, first delete and then redeploy the boutiqueshop application stack, since the admission controller only evaluates containers when they are created in the cluster.
80 |
81 | >Note that the `reject-failed` container admission policy is configured to only allow images with a scan status of `Pass` or `Warn`. If an image for any application component hasn't been scanned yet, its status will be `Unknown`. If you don't want to scan all images for the boutiqueshop stack, you can edit the admission policy to also allow images with the `Unknown` status.
82 |
83 | ```bash
84 | # delete app stack
85 | kubectl delete -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/release/v0.3.8/release/kubernetes-manifests.yaml
86 |
87 | # deploy app stack
88 | kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/release/v0.3.8/release/kubernetes-manifests.yaml
89 | ```
90 |
91 | Congratulations! You have finished all the labs in the workshop.
92 |
--------------------------------------------------------------------------------