16 |
17 | > TIP: There are several ways to authenticate with the GitHub CLI. To help you choose the best option for your environment, see the [GitHub CLI authentication documentation](https://cli.github.com/manual/gh_auth_login).
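For example, the interactive browser flow looks like this (one option among several; the flags are from the `gh auth` documentation):

```bash
gh auth login --web   # authenticate through your browser
gh auth status        # confirm the CLI is signed in
```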
18 |
19 |
20 |
21 | ## Refresh Azure resources variables
22 |
23 | Next, refresh the environment variables for the Azure resources created earlier in this workshop by running the following commands:
24 |
25 |
26 | ```bash
27 | export SUBSCRIPTION_ID=$(az account show --query id --output tsv);
28 | cd terraform/;
29 | export GROUP_NAME="$(terraform output -raw rg_name)"
30 | export AKV_NAME="$(terraform output -raw akv_name)"
31 | export ACR_NAME="$(terraform output -raw acr_name)"
32 | export CERT_NAME="$(terraform output -raw cert_name)"
33 | cd ..
34 | ```
35 |
36 | ## Create an Azure Service Principal
37 |
38 | First, create a Service Principal that the GitHub Actions workflow will use to authenticate to Azure.
39 |
40 | Run the following command to create a Service Principal:
41 |
42 |
43 | ```bash
44 | spDisplayName='github-workflow-sp';
45 | credJSON=$(az ad sp create-for-rbac --name $spDisplayName --role contributor \
46 | --scopes /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$GROUP_NAME \
47 | --sdk-auth)
48 | ```
49 |
50 |
51 |
52 | Example Output
53 |
54 | ```output
55 | {
56 | "clientId": "00000000-0000-0000-0000-000000000000",
57 | "clientSecret": "00000000-0000-0000-0000-000000000000",
58 | "subscriptionId": "00000000-0000-0000-0000-000000000000",
59 | "tenantId": "00000000-0000-0000-0000-000000000000",
60 | "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
61 | "resourceManagerEndpointUrl": "https://management.azure.com/",
62 | "activeDirectoryGraphResourceId": "https://graph.windows.net/",
63 | "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
64 | "galleryEndpointUrl": "https://gallery.azure.com/",
65 | "managementEndpointUrl": "https://management.core.windows.net/"
66 | }
67 | ```
68 |
69 |
70 |
71 |
72 | Next, you'll need to create an access policy for the Service Principal that grants certificate get, key sign, and secret get permissions on the Azure Key Vault instance.
73 |
74 | Run the following command to create an access policy for the Service Principal:
75 |
76 | ```bash
77 | objectId=$(az ad sp list --display-name $spDisplayName --query '[].id' --output tsv);
78 | az keyvault set-policy --name $AKV_NAME --object-id $objectId --certificate-permissions get --key-permissions sign --secret-permissions get
79 | ```
80 |
81 | > NOTE: These permissions allow the workflow to sign with the Key Vault key and retrieve the signing certificate and its secret.
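To confirm the policy was applied, you can query the vault's access policies for the Service Principal's object ID (a hedged sketch; the JMESPath filter is illustrative):

```bash
az keyvault show --name $AKV_NAME \
  --query "properties.accessPolicies[?objectId=='$objectId'].permissions" \
  --output json
```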
82 |
83 | ### Create the AZURE_CREDENTIALS secret
84 |
85 | Store the Service Principal credentials as a GitHub secret so the workflow can authenticate with Azure. Run the following command to create the secret:
86 |
87 | ```bash
88 | gh secret set AZURE_CREDENTIALS --body "$credJSON"
89 | ```
90 |
91 | ### Create Azure resource variables
92 |
93 | Create the following GitHub Actions variables to store the Azure resource values:
94 |
95 | ```bash
96 | export KEY_ID=$(az keyvault certificate show --name $CERT_NAME --vault-name $AKV_NAME --query kid --output tsv);
97 | gh variable set ACR_NAME --body "$ACR_NAME";
98 | gh variable set CERT_NAME --body "$CERT_NAME";
99 | gh variable set KEY_ID --body "$KEY_ID";
100 | ```
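You can verify that the secret and variables were created with the GitHub CLI:

```bash
gh secret list     # should show AZURE_CREDENTIALS
gh variable list   # should show ACR_NAME, CERT_NAME, and KEY_ID
```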
101 |
102 | ### Trigger the GitHub Actions workflow
103 |
104 | Now that you've modified the GitHub Actions workflow, you can trigger it by pushing a change to the repository.
105 |
106 | Run the following command to push a change to the repository:
107 |
108 | ```bash
109 | #Push a change to start the build process
110 | echo >> README.md
111 | git add .; git commit -m 'Demo: Lets do it live!'; git push
112 | ```
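If you'd rather follow the run from the terminal, the GitHub CLI can stream its progress (a sketch; `gh run watch` prompts you to pick a run when no ID is given):

```bash
gh run list --limit 1   # show the run just triggered by the push
gh run watch            # follow the selected run until it completes
```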
113 |
114 | Browse to the `Actions` tab in your repository and you should see the workflow running. Wait for the workflow to complete, then check the logs for the message 'Successfully signed...' to confirm the image was signed.
115 |
116 | ---
117 |
--------------------------------------------------------------------------------
/demo.sh:
--------------------------------------------------------------------------------
1 | . ./util.sh
2 |
3 | run 'clear'
4 |
5 | desc 'Loading demo environment variables...'
6 | export IMAGE='azure-voting-app-rust:v0.1-alpha'
7 | cd terraform/
8 | export GROUP_NAME="$(terraform output -raw rg_name)"
9 | export AKS_NAME="$(terraform output -raw aks_name)"
10 | export VAULT_URI="$(terraform output -raw akv_uri)"
11 | export KEYVAULT_NAME="$(terraform output -raw akv_name)"
12 | export ACR_NAME="$(terraform output -raw acr_name)"
13 | export CERT_NAME="$(terraform output -raw cert_name)"
14 | export TENANT_ID="$(terraform output -raw tenant_id)"
15 | export CLIENT_ID="$(terraform output -raw wl_client_id)"
16 | cd ..
17 | az acr login --name $ACR_NAME >> /dev/null 2>&1
18 |
19 | desc 'Build & pull container images'
20 | run 'docker build -t azure-voting-app-rust:v0.1-alpha .'
21 | run 'docker pull postgres:15.0-alpine'
22 |
23 | desc 'List Docker images'
24 | run 'docker images'
25 |
26 | desc 'Scan the azure-voting-app-rust images'
27 | run 'sudo trivy image azure-voting-app-rust:v0.1-alpha'
28 |
29 | desc 'Adjust severity levels'
30 | run "sudo trivy image --severity CRITICAL ${IMAGE}"
31 |
32 | desc 'Filter vulnerabilities by type'
33 | run "sudo trivy image --vuln-type os --severity CRITICAL ${IMAGE}"
34 |
35 | desc 'Export a vulnerability report'
36 | run "sudo trivy image --exit-code 0 --format json --output ./patch.json --scanners vuln --vuln-type os --ignore-unfixed ${IMAGE}"
37 |
38 | desc 'Review vulnerable packages found in the image'
39 | run "cat patch.json | jq '.Results[0].Vulnerabilities[] | .PkgID' | sort | uniq"
40 |
41 | desc 'Tag and push azure-voting-app-rust to ACR'
42 | ACR_IMAGE=${ACR_NAME}.azurecr.io/azure-voting-app-rust:v0.1-alpha
43 | run "docker tag ${IMAGE} ${ACR_IMAGE}"
44 | run "docker push ${ACR_IMAGE}"
45 |
46 | sudo ./bin/buildkitd &> /dev/null & # why does this still output?
47 | desc "Patch container image with Copacetic"
48 | run "sudo copa patch -i ${ACR_IMAGE} -r ./patch.json -t v0.1-alpha-1"
49 | sudo pkill buildkitd >> /dev/null 2>&1
50 | ACR_IMAGE_PATCHED=${ACR_NAME}.azurecr.io/azure-voting-app-rust:v0.1-alpha-1
51 |
52 | desc "Re-scan the patched image"
53 | run "sudo trivy image --severity CRITICAL --scanners vuln ${ACR_IMAGE_PATCHED}"
54 |
55 | desc "Push the patched image to ACR"
56 | run "docker push ${ACR_IMAGE_PATCHED}"
57 |
58 | desc 'Tag and push Postgres image to ACR'
59 | run "docker tag postgres:15.0-alpine ${ACR_NAME}.azurecr.io/postgres:15.0-alpine"
60 | run "docker push ${ACR_NAME}.azurecr.io/postgres:15.0-alpine"
61 |
62 | KEY_ID=$(az keyvault certificate show --name $CERT_NAME --vault-name $KEYVAULT_NAME --query kid -o tsv)
63 | desc 'Add a signing key to Notation CLI'
64 | run "notation key add --plugin azure-kv ${CERT_NAME} --id ${KEY_ID} --default"
65 |
66 | desc 'List the Notation key'
67 | run 'notation key list'
68 |
69 | desc 'Get the Docker image digest'
70 | run "docker image inspect --format='{{index .RepoDigests 0}}' ${ACR_IMAGE_PATCHED}"
71 | run "docker image inspect --format='{{index .RepoDigests 1}}' ${ACR_NAME}.azurecr.io/postgres:15.0-alpine"
72 | export APP_DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' $ACR_IMAGE_PATCHED)
73 | export DB_DIGEST=$(docker image inspect --format='{{range $digest := .RepoDigests}}{{println $digest}}{{end}}' ${ACR_NAME}.azurecr.io/postgres:15.0-alpine | sort | tail -1)
74 |
75 | desc 'Sign the azure-voting-app-rust image'
76 | run "notation sign ${APP_DIGEST}"
77 |
78 | desc 'Sign the Postgres image'
79 | run "notation sign ${DB_DIGEST}"
80 |
81 | az aks get-credentials --resource-group ${GROUP_NAME} --name ${AKS_NAME}
82 | desc 'Deploy unsigned app images'
83 | run "kubectl apply -f manifests/"
84 | kubectl delete -f manifests/ >> /dev/null 2>&1
85 |
86 | # desc 'Check Ratify logs for blocked pod deployment'
87 | # run "kubectl logs deployment/ratify --namespace gatekeeper-system | grep voting"
88 |
89 | desc "Modify the app deployment manifests to use the signed image"
90 | run "sed -i \"s|azure-voting-app-rust:v0.1-alpha|${APP_DIGEST}|\" ./manifests/deployment-app.yaml"
91 | run "code ./manifests/deployment-app.yaml"
92 |
93 | desc "Modify the db deployment manifests to use the signed image"
94 | run "sed -i \"s|postgres:15.0-alpine|${DB_DIGEST}|\" ./manifests/deployment-db.yaml"
95 | run "code ./manifests/deployment-db.yaml"
96 |
97 | desc 'Deploy signed app images'
98 | run "kubectl apply -f manifests/"
99 |
100 | desc 'Sleep 10 seconds, wait for pods'
101 | sleep 10
102 | desc 'Check deployment status'
103 | run "kubectl get pods"
104 |
105 | desc 'Review Ratify constraints'
106 | run "code ./manifests/constraint.yaml"
107 |
108 | desc 'Review Ratify template'
109 | run "code ./manifests/template.yaml"
110 |
111 | desc 'Check ingress status'
112 | run "kubectl get ingress"
113 |
114 | desc 'Test the app'
115 |
116 | desc 'Game Over: End of demo'
--------------------------------------------------------------------------------
/terraform/main.tf:
--------------------------------------------------------------------------------
1 | terraform {
2 | required_providers {
3 | azurerm = {
4 | source = "hashicorp/azurerm"
5 | version = "3.52.0"
6 | }
7 | azuread = {
8 | source = "hashicorp/azuread"
9 | version = "2.28.0"
10 | }
11 | }
12 | }
13 |
14 | provider "azurerm" {
15 | features {
16 | resource_group {
17 | prevent_deletion_if_contains_resources = false
18 | }
19 | }
20 | }
21 |
22 | data "azurerm_client_config" "current" {}
23 |
24 | data "azuread_client_config" "current" {}
25 |
26 | resource "azurerm_resource_group" "rg" {
27 | name = var.resource_group_name
28 | location = var.location
29 | }
30 |
31 | resource "azurerm_container_registry" "registry" {
32 | name = var.registry_name
33 | resource_group_name = azurerm_resource_group.rg.name
34 | location = azurerm_resource_group.rg.location
35 | sku = "Standard"
36 | anonymous_pull_enabled = true # Copacetic limitation
37 | }
38 |
39 | resource "azurerm_user_assigned_identity" "identity" {
40 | name = var.identity_name
41 | resource_group_name = azurerm_resource_group.rg.name
42 | location = azurerm_resource_group.rg.location
43 | tags = var.tags
44 | }
45 |
46 | resource "azurerm_role_assignment" "acr" {
47 | scope = azurerm_container_registry.registry.id
48 | role_definition_name = "AcrPull"
49 | principal_id = azurerm_user_assigned_identity.identity.principal_id
50 | }
51 |
52 | resource "azurerm_role_assignment" "acr-admin" {
53 | scope = azurerm_container_registry.registry.id
54 | role_definition_name = "Owner"
55 | principal_id = data.azuread_client_config.current.object_id
56 | }
57 |
58 | resource "azurerm_key_vault" "kv" {
59 | name = var.key_vault_name
60 | location = azurerm_resource_group.rg.location
61 | resource_group_name = azurerm_resource_group.rg.name
62 | tenant_id = data.azurerm_client_config.current.tenant_id
63 | sku_name = "standard"
64 | tags = var.tags
65 |
66 | access_policy {
67 | tenant_id = data.azurerm_client_config.current.tenant_id
68 | object_id = data.azuread_client_config.current.object_id
69 |
70 | key_permissions = [
71 | "Get", "Sign"
72 | ]
73 |
74 | secret_permissions = [
75 | "Get", "Set", "List", "Delete", "Purge"
76 | ]
77 |
78 | certificate_permissions = [
79 | "Get", "Create", "Delete", "Purge"
80 | ]
81 | }
82 | }
83 |
84 | resource "azurerm_key_vault_access_policy" "workload-identity" {
85 | key_vault_id = azurerm_key_vault.kv.id
86 | tenant_id = data.azurerm_client_config.current.tenant_id
87 | object_id = azurerm_user_assigned_identity.identity.principal_id
88 |
89 | certificate_permissions = [
90 | "Get"
91 |
92 | ]
93 | secret_permissions = [
94 | "Get"
95 | ]
96 | }
97 |
98 |
99 | resource "azurerm_kubernetes_cluster" "aks" {
100 | name = var.cluster_name
101 | location = azurerm_resource_group.rg.location
102 | resource_group_name = azurerm_resource_group.rg.name
103 | dns_prefix = "${var.cluster_name}-dns"
104 | kubernetes_version = "1.28.3"
105 | workload_identity_enabled = true
106 | oidc_issuer_enabled = true
107 | automatic_channel_upgrade = "node-image"
108 |
109 |
110 | default_node_pool {
111 | name = "default"
112 | node_count = 3
113 | vm_size = "Standard_D2_v2"
114 |
115 | }
116 |
117 | web_app_routing {
118 | dns_zone_id = ""
119 | }
120 |
121 |
122 | identity {
123 | type = "SystemAssigned"
124 | }
125 |
126 | tags = var.tags
127 | }
128 |
129 | resource "azurerm_role_assignment" "linkAcrAks" {
130 | principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
131 | role_definition_name = "AcrPull"
132 | scope = azurerm_container_registry.registry.id
133 | skip_service_principal_aad_check = true
134 | }
135 |
136 |
137 | resource "azurerm_federated_identity_credential" "workload-identity-credential" {
138 | name = "ratify-federated-credential"
139 | resource_group_name = azurerm_resource_group.rg.name
140 | audience = ["api://AzureADTokenExchange"]
141 | issuer = azurerm_kubernetes_cluster.aks.oidc_issuer_url
142 | parent_id = azurerm_user_assigned_identity.identity.id
143 | subject = "system:serviceaccount:${var.ratify_namespace}:ratify-admin"
144 | }
145 |
146 | resource "azurerm_key_vault_certificate" "sign-cert" {
147 | name = var.sign_cert_name
148 | key_vault_id = azurerm_key_vault.kv.id
149 |
150 | certificate_policy {
151 | issuer_parameters {
152 | name = "Self"
153 | }
154 |
155 | key_properties {
156 | exportable = false
157 | key_size = 2048
158 | key_type = "RSA"
159 | reuse_key = true
160 | }
161 |
162 | lifetime_action {
163 | action {
164 | action_type = "AutoRenew"
165 | }
166 |
167 | trigger {
168 | days_before_expiry = 30
169 | }
170 | }
171 |
172 | secret_properties {
173 | content_type = "application/x-pem-file"
174 | }
175 |
176 | x509_certificate_properties {
177 | extended_key_usage = ["1.3.6.1.5.5.7.3.3"]
178 |
179 | key_usage = [
180 | "digitalSignature",
181 | ]
182 |
183 | subject = "CN=example.com,O=Notation,L=Seattle,ST=WA,C=US"
184 | validity_in_months = 12
185 | }
186 | }
187 | }
188 |
--------------------------------------------------------------------------------
/docs/setup.md:
--------------------------------------------------------------------------------
1 | ## Infrastructure Overview
2 |
3 | As part of this workshop, you'll need to deploy the following Azure resources:
4 | | Azure Resources | Notes |
5 | |------------------------------------|-------------------------------------------------------------------------------------------------------|
6 | | Azure Resource Group | |
7 | | Azure Key Vault                    | Stores the certificate used for digital signatures                                                      |
8 | | Azure Container Registry | |
9 | | Azure Kubernetes Service | |
10 | | Azure User Assigned Managed Identity | Authenticates to Azure Key Vault as a Workload Identity and to Azure Container Registry by using the AcrPull role. |
11 |
12 | ## Deploy the Azure resources with Terraform
13 |
14 | First, log into Azure with the Azure CLI.
15 |
16 | ```bash
17 | az login
18 | ```
19 |
20 | Next, deploy the Terraform configuration. This will create the Azure resources needed for this workshop.
21 |
22 | ```bash
23 | cd terraform;
24 | terraform init;
25 | terraform apply
26 | ```
27 |
28 |
29 | Example Output
30 |
31 | ```output
32 | azurerm_resource_group.rg: Creating...
33 | azurerm_resource_group.rg: Creation complete after 1s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg]
34 | azurerm_key_vault.kv: Creating...
35 | azurerm_key_vault.kv: Creation complete after 4s [id=https://kv.vault.azure.net]
36 | azurerm_user_assigned_identity.ua: Creating...
37 | azurerm_user_assigned_identity.ua: Creation complete after 1s [id=/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ua]
38 | azurerm_container_registry.acr: Creating...
39 | ```
40 |
41 |
42 |
43 |
44 |
45 |
46 | > NOTE: Certain Azure resources need to be globally unique. If you receive an error that a resource already exists, you may need to change the name of the resource in the `terraform.tfvars` file.
47 |
48 |
49 |
50 | Run the following command to export the Terraform output as environment variables:
51 |
52 | ```bash
53 | export GROUP_NAME="$(terraform output -raw rg_name)"
54 | export AKS_NAME="$(terraform output -raw aks_name)"
55 | export VAULT_URI="$(terraform output -raw akv_uri)"
56 | export AKV_NAME="$(terraform output -raw akv_name)"
57 | export ACR_NAME="$(terraform output -raw acr_name)"
58 | export CERT_NAME="$(terraform output -raw cert_name)"
59 | export TENANT_ID="$(terraform output -raw tenant_id)"
60 | export CLIENT_ID="$(terraform output -raw wl_client_id)"
61 | ```
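As a quick sanity check, you can confirm each variable was populated before moving on. This is a minimal POSIX shell sketch; `check_vars` is a hypothetical helper, not part of the workshop tooling:

```shell
# Report any variables that came back empty from `terraform output -raw`
check_vars() {
  for v in "$@"; do
    eval "val=\${$v:-}"                  # indirect lookup of the variable named by $v
    [ -n "$val" ] || echo "missing: $v"  # print only the ones that are empty
  done
}

check_vars GROUP_NAME AKS_NAME VAULT_URI AKV_NAME ACR_NAME CERT_NAME TENANT_ID CLIENT_ID
```

If any name is printed, re-run the corresponding `terraform output` export before continuing.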
62 |
63 | Before continuing, change back to the root of the repository.
64 |
65 | ```bash
66 | cd ..
67 | ```
68 |
69 | ### Deploy Ratify to the AKS cluster
70 |
71 | Gatekeeper is an open-source CNCF project that lets you enforce policies on your Kubernetes cluster. Ratify works with Gatekeeper to deploy policies and constraints that prevent unsigned container images from being deployed to Kubernetes.
72 |
73 | Run the following command to get the Kubernetes credentials for your cluster:
74 |
75 | ```bash
76 | az aks get-credentials --resource-group ${GROUP_NAME} --name ${AKS_NAME}
77 | ```
78 |
79 | Next, run the following command to deploy Gatekeeper to your cluster:
80 |
81 | ```bash
82 | helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
83 |
84 | helm install gatekeeper/gatekeeper \
85 | --name-template=gatekeeper \
86 | --namespace gatekeeper-system --create-namespace \
87 | --set enableExternalData=true \
88 | --set validatingWebhookTimeoutSeconds=5 \
89 | --set mutatingWebhookTimeoutSeconds=2
90 | ```
91 |
92 | Run the following command to deploy Ratify to your cluster:
93 |
94 | ```bash
95 | helm repo add ratify https://deislabs.github.io/ratify
96 |
97 | helm install ratify \
98 | ratify/ratify --atomic \
99 | --namespace gatekeeper-system \
100 | --set akvCertConfig.enabled=true \
101 | --set featureFlags.RATIFY_CERT_ROTATION=true \
102 | --set akvCertConfig.vaultURI=${VAULT_URI} \
103 | --set akvCertConfig.cert1Name=${CERT_NAME} \
104 | --set akvCertConfig.tenantId=${TENANT_ID} \
105 | --set oras.authProviders.azureWorkloadIdentityEnabled=true \
106 | --set azureWorkloadIdentity.clientId=${CLIENT_ID}
107 | ```
108 |
109 | Once Ratify is deployed, you'll need to deploy the policies and constraints that prevent unsigned container images from being deployed to Kubernetes.
110 |
111 | Run the following command to deploy the Ratify policies to your cluster:
112 |
113 |
114 | ```bash
115 | kubectl apply -f manifests/template.yaml
116 | kubectl apply -f manifests/constraint.yaml
117 | ```
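To confirm the policy objects were created, you can list the Gatekeeper constraint templates and constraints (a quick check; the exact resource names depend on the manifests):

```bash
kubectl get constrainttemplates
kubectl get constraints
```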
118 |
119 | Verify Ratify is running with the following command:
120 |
121 | ```bash
122 | kubectl get pods --namespace gatekeeper-system
123 | ```
124 |
125 |
126 | Example Output
127 |
128 | ```output
129 | NAME READY STATUS RESTARTS AGE
130 | gatekeeper-audit-769879bb55-bdsr5 1/1 Running 1 (34m ago) 34m
131 | gatekeeper-controller-manager-d8c9c5cd5-bstmb 1/1 Running 0 34m
132 | gatekeeper-controller-manager-d8c9c5cd5-dzk2f 1/1 Running 0 34m
133 | gatekeeper-controller-manager-d8c9c5cd5-qftxt 1/1 Running 0 34m
134 | ratify-88b59894d-w5nxl 1/1 Running 0 31m
135 | ```
136 |
137 |
--------------------------------------------------------------------------------
/src/main.rs:
--------------------------------------------------------------------------------
1 | #[macro_use]
2 | extern crate lazy_static;
3 |
4 | mod database;
5 | mod model;
6 | mod schema;
7 | use crate::schema::votes::vote_value;
8 | use actix_files::Files;
9 | use actix_web::{middleware::Logger, post, web, App, HttpResponse, HttpServer};
10 | use database::setup;
11 | use diesel::{dsl::*, pg::PgConnection, prelude::*, r2d2::ConnectionManager};
12 | use env_logger::Env;
13 | use handlebars::Handlebars;
14 | use log::info;
15 | use model::NewVote;
16 | use r2d2::Pool;
17 | use schema::votes::dsl::votes;
18 | use serde::Deserialize;
19 | use serde_json::json;
20 | use std::env::var;
21 | use std::fmt;
22 | use std::sync::Mutex;
23 |
24 |
25 | lazy_static! {
26 | static ref FIRST_VALUE: String = var("FIRST_VALUE").unwrap_or("Dogs".to_string());
27 | static ref SECOND_VALUE: String = var("SECOND_VALUE").unwrap_or("Cats".to_string());
28 | }
29 |
30 | #[derive(Debug, Deserialize)]
31 | enum VoteValue {
32 | FirstValue,
33 | SecondValue,
34 | Reset,
35 | }
36 |
37 | impl fmt::Display for VoteValue {
38 | fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
39 | match self {
40 | VoteValue::SecondValue => write!(f, "{}", *SECOND_VALUE),
41 | VoteValue::FirstValue => write!(f, "{}", *FIRST_VALUE),
42 | VoteValue::Reset => write!(f, "Reset"),
43 | }
44 | }
45 | }
46 |
47 | impl VoteValue {
48 | fn source_value(input: &str) -> VoteValue {
49 | if input == *FIRST_VALUE {
50 | return VoteValue::FirstValue
51 | }
52 | else if input == *SECOND_VALUE {
53 | return VoteValue::SecondValue
54 | }
55 | else if input == "Reset" {
56 | return VoteValue::Reset
57 | }
58 | else {
59 | panic!("Failed to match the vote type from {}", input);
60 | };
61 |
62 | }
63 | }
64 |
65 | #[derive(Deserialize)]
66 | struct FormData {
67 | vote: String,
68 | }
69 |
70 | struct AppStateVoteCounter {
71 |     first_value_counter: Mutex<i64>, // <- Mutex is necessary to mutate safely across threads
72 |     second_value_counter: Mutex<i64>,
73 | }
74 |
75 | /// extract form data using serde
76 | /// this handler gets called only if the content type is *x-www-form-urlencoded*
77 | /// and the content of the request could be deserialized to a `FormData` struct
78 | #[post("/")]
79 | async fn submit(
80 |     form: web::Form<FormData>,
81 |     data: web::Data<AppStateVoteCounter>,
82 |     pool: web::Data<Pool<ConnectionManager<PgConnection>>>,
83 |     hb: web::Data<Handlebars<'_>>,
84 | ) -> HttpResponse {
85 | let mut first_value_counter = data.first_value_counter.lock().unwrap(); // <- get counter's MutexGuard
86 | let mut second_value_counter = data.second_value_counter.lock().unwrap();
87 |
88 | info!("Vote is: {}", &form.vote);
89 | info!("Debug Vote is: {:?}", &form.vote);
90 |
91 | let vote = VoteValue::source_value(&form.vote);
92 |
93 | match vote {
94 | VoteValue::FirstValue => *first_value_counter += 1, // <- access counter inside MutexGuard
95 | VoteValue::SecondValue => *second_value_counter += 1,
96 | VoteValue::Reset => {
97 | *first_value_counter = 0;
98 | *second_value_counter = 0;
99 | }
100 | }
101 |
102 | let data = json!({
103 | "title": "Azure Voting App",
104 | "button1": VoteValue::FirstValue.to_string(),
105 | "button2": VoteValue::SecondValue.to_string(),
106 | "value1": first_value_counter.to_string(),
107 | "value2": second_value_counter.to_string()
108 | });
109 |
110 | let body = hb.render("index", &data).unwrap();
111 |
112 |     // if the vote value is not Reset, save the vote to the database
113 | if !matches!(vote, VoteValue::Reset) {
114 | let vote_data = NewVote {
115 | vote_value: form.vote.to_string(),
116 | };
117 |
118 | let mut connection = pool.get().unwrap();
119 | let _vote_data = web::block(move || {
120 | diesel::insert_into(votes)
121 | .values(vote_data)
122 | .execute(&mut connection)
123 | })
124 | .await;
125 | } else {
126 | let mut connection = pool.get().unwrap();
127 | let _vote_data = web::block(move || {
128 | let _ = diesel::delete(votes).execute(&mut connection);
129 | })
130 | .await;
131 | }
132 |
133 | HttpResponse::Ok().body(body)
134 | }
135 |
136 | async fn index(
137 |     data: web::Data<AppStateVoteCounter>,
138 |     hb: web::Data<Handlebars<'_>>,
139 | ) -> HttpResponse {
140 | let first_value_counter = data.first_value_counter.lock().unwrap(); // <- get counter's MutexGuard
141 | let second_value_counter = data.second_value_counter.lock().unwrap();
142 |
143 | info!("Value 1: {}", VoteValue::FirstValue);
144 | info!("Value 2: {}", VoteValue::SecondValue);
145 |
146 | let data = json!({
147 | "title": "Azure Voting App",
148 | "button1": VoteValue::FirstValue.to_string(),
149 | "button2": VoteValue::SecondValue.to_string(),
150 | "value1": first_value_counter.to_string(),
151 | "value2": second_value_counter.to_string()
152 | });
153 | let body = hb.render("index", &data).unwrap();
154 | HttpResponse::Ok().body(body)
155 | }
156 |
157 | #[actix_web::main]
158 | async fn main() -> std::io::Result<()> {
159 | // Default logging format is:
160 | // %a %t "%r" %s %b "%{Referer}i" "%{User-Agent}i" %T
161 | env_logger::init_from_env(Env::default().default_filter_or("info"));
162 |
163 | let pool = setup();
164 | let mut connection = pool.get().unwrap();
165 |
166 | // Load up the dog votes
167 | let first_value_query = votes.filter(vote_value.eq(FIRST_VALUE.clone()));
168 | let first_value_result = first_value_query.select(count(vote_value)).first(&mut connection);
169 | let first_value_count = first_value_result.unwrap_or(0);
170 |
171 | // Load up the cat votes
172 | let second_value_query = votes.filter(vote_value.eq(SECOND_VALUE.clone()));
173 | let second_value_result = second_value_query.select(count(vote_value)).first(&mut connection);
174 | let second_value_count = second_value_result.unwrap_or(0);
175 |
176 | // Note: web::Data created _outside_ HttpServer::new closure
177 | let vote_counter = web::Data::new(AppStateVoteCounter {
178 | first_value_counter: Mutex::new(first_value_count),
179 | second_value_counter: Mutex::new(second_value_count),
180 | });
181 |
182 | let mut handlebars = Handlebars::new();
183 | handlebars
184 | .register_templates_directory(".html", "./static/")
185 | .unwrap();
186 | let handlebars_ref = web::Data::new(handlebars);
187 |
188 | info!("Listening on port 8080");
189 | HttpServer::new(move || {
190 | App::new()
191 | .wrap(Logger::default())
192 | // .wrap(Logger::new("%a %{User-Agent}i")) // <- optionally create your own format
193 | .app_data(vote_counter.clone()) // <- register the created data
194 | .app_data(handlebars_ref.clone())
195 | .data(pool.clone())
196 | .service(Files::new("/static", "static").show_files_listing())
197 | .route("/", web::get().to(index))
198 | .service(submit)
199 | })
200 | .bind(("0.0.0.0", 8080))?
201 | .run()
202 | .await
203 | }
204 |
--------------------------------------------------------------------------------
/docs/workshop.md:
--------------------------------------------------------------------------------
1 | ---
2 | short_title: Securing container deployments on Azure Kubernetes Service with open-source tools
3 | description: Learn how to use open-source tools to secure your container deployments on Azure Kubernetes Service.
4 | type: workshop
5 | authors: Josh Duffney
6 | contacts: '@joshduffney'
7 | # banner_url: assets/copilot-banner.jpg
8 | duration_minutes: 30
9 | audience: devops engineers, devs, site reliability engineers, security engineers
10 | level: intermediate
11 | tags: azure, github actions, notary, ratify, secure supply chain, kubernetes, helm, terraform, gatekeeper, azure kubernetes service, azure key vault, azure container registry
12 | published: false
13 | wt_id:
14 | sections_title:
15 | - Introduction
16 | ---
17 |
18 | # Securing container deployments on Azure Kubernetes Service by using open-source tools
19 |
20 | In this workshop, you'll learn how to use the open-source tools Trivy, Copacetic, Notation, and Ratify to secure your container deployments on Azure Kubernetes Service.
21 |
22 | 
23 |
24 | ## Objectives
25 |
26 | You'll learn how to:
27 | - Use Trivy to scan container images for vulnerabilities
28 | - Automate container image patching with Copacetic
29 | - Sign container images with Notation
30 | - Prevent unsigned container images from being deployed with Ratify
31 |
32 | ## Prerequisites
33 |
34 | | | |
35 | |----------------------|------------------------------------------------------|
36 | | GitHub account | [Get a free GitHub account](https://github.com/join) |
37 | | Azure account | [Get a free Azure account](https://azure.microsoft.com/free) |
38 | | Visual Studio Code | [Install VS Code](https://code.visualstudio.com/download) |
39 |
40 |
41 | To begin this workshop, you'll need to set up the Azure environment and deploy Gatekeeper and Ratify to your AKS cluster.
42 |
43 | Follow the [setup](setup.md) instructions to deploy and configure the infrastructure.
44 |
45 | ---
46 |
47 | ## Start the dev container
48 |
49 | A local development environment is provided for this workshop using a dev container. It includes all the tools you need to successfully participate in the workshop.
50 |
51 | Follow the steps below to fork the repository and open it in VS Code:
52 |
53 | 1. Fork the repository by navigating to the original repository URL: https://github.com/duffney/secure-supply-chain-on-aks.git. Click on the "Fork" button in the top right corner of the GitHub page. This will create a copy of the repository under your GitHub account.
54 |
55 | 2. Once the repository is forked, navigate to your forked repository on GitHub. The URL should be https://github.com/your-username/secure-supply-chain-on-aks.git, where "your-username" is your GitHub username.
56 |
57 | 3. Click on the "Code" button, and copy the URL provided (which will be the URL of your forked repository).
58 |
59 | 4. Open your terminal or command prompt and run the following command to clone your forked repository:
60 |
61 | ```bash
62 | git clone https://github.com/your-username/secure-supply-chain-on-aks.git
63 | ```
64 |
65 | 5. Change the working directory to the cloned repository:
66 |
67 | ```bash
68 | cd secure-supply-chain-on-aks
69 | ```
70 |
71 | 6. Open the repository in VS Code:
72 |
73 | ```bash
74 | code .
75 | ```
76 |
77 | 7. VS Code will prompt you to reopen the repository in a dev container. Click **Reopen in Container**. Building the dev container takes a few minutes.
78 |
79 |
80 |
81 | > If you don't see the prompt, you can open the command palette by hitting `Ctrl+Shift+P` on Windows or `Cmd+Shift+P` on Mac and search for **Dev Containers: Reopen in Container**.
82 |
83 |
84 |
85 | ---
86 |
87 | ## Scanning container images for vulnerabilities
88 |
89 |
90 |
91 | Containerization has become an integral part of modern software development and deployment. However, with the increased adoption of containers, there comes a need for ensuring their security.
92 |
93 | In this section, you'll learn how to use Trivy to scan a container image for vulnerabilities.
94 |
95 | ### Build and pull the Azure Voting App container images
96 |
97 | Before you begin, there is a `Dockerfile` that will build a container image that hosts the Azure Voting App.
98 |
99 | The Azure Voting App is a simple Rust application that allows users to vote between the two options presented and stores the results in a database.
100 |
101 | Run the following command to build the Azure Voting web app container image:
102 |
103 | ```bash
104 | docker build -t azure-voting-app-rust:v0.1-alpha .
105 | ```
106 |
107 | Next, pull the `PostgreSQL` container image from Docker Hub. It will be used to store the votes.
108 |
109 | ```bash
110 | docker pull postgres:15.0-alpine
111 | ```
112 |
113 | ### Running Trivy for Vulnerability Scans
114 |
115 | Trivy is an open-source vulnerability scanner specifically designed for container images. It provides a simple and efficient way to detect vulnerabilities in containerized applications.
116 |
117 | Trivy leverages a comprehensive vulnerability database and checks container images against known security issues, including vulnerabilities in the operating system packages, application dependencies, and other components.
118 |
119 | Run the following command to scan the `azure-voting-app-rust` container image for vulnerabilities:
120 |
121 | ```bash
122 | IMAGE=azure-voting-app-rust:v0.1-alpha;
123 | trivy image $IMAGE;
124 | ```
125 |
126 |
127 | Example Output
128 |
129 | ```output
130 | 2023-07-14T17:08:57.400Z INFO Vulnerability scanning is enabled
131 | 2023-07-14T17:08:57.401Z INFO Secret scanning is enabled
132 | 2023-07-14T17:08:57.401Z INFO If your scanning is slow, please try '--scanners vuln' to disable secret scanning
133 | 2023-07-14T17:08:57.401Z INFO Please see also https://aquasecurity.github.io/trivy/v0.41/docs/secret/scanning/#recommendation for faster secret detection
134 | 2023-07-14T17:08:57.418Z INFO Detected OS: debian
135 | 2023-07-14T17:08:57.418Z INFO Detecting Debian vulnerabilities...
136 | 2023-07-14T17:08:57.448Z INFO Number of language-specific files: 0
137 |
138 | azure-voting-app-rust:v0.1-alpha (debian 11.2)
139 | ==============================================
140 | Total: 154 (UNKNOWN: 0, LOW: 76, MEDIUM: 27, HIGH: 37, CRITICAL: 14)
141 | ...........................................................
142 | ...........................................................
143 | ```
144 |
145 |
146 |
147 | Trivy will start scanning the specified container image for vulnerabilities. It will analyze the operating system, application packages, and libraries within the container to identify any known security issues.
148 |
149 | ### Adjusting Severity Levels
150 |
151 | Trivy allows you to customize the severity level of the reported vulnerabilities. By default, it provides information about vulnerabilities of all severity levels. However, you can narrow down the results based on your requirements.
152 |
153 | For example, if you only want to see vulnerabilities classified as CRITICAL, you can modify the command as follows:
154 |
155 | ```bash
156 | trivy image --severity CRITICAL $IMAGE
157 | ```
158 |
159 |
160 |
161 | ### Filtering vulnerabilities by type
162 |
163 | By default, Trivy scans for vulnerabilities in all components of the container image, including the operating system packages, application dependencies, and libraries. However, you can narrow down the results by specifying the type of vulnerabilities you want to see.
164 |
165 | For example, if you only want to see vulnerabilities in the operating system packages, you can modify the command as follows:
166 |
167 | ```bash
168 | trivy image --vuln-type os --severity CRITICAL $IMAGE
169 | ```
170 |
171 | ### Choosing a scanner
172 |
173 | Trivy supports four scanner options: `vuln`, `config`, `secret`, and `license`. The default, `vuln`, scans for vulnerabilities in the container image; `config` scans for misconfigurations in infrastructure-as-code files, such as Terraform; `secret` scans for sensitive information and secrets in project files; and `license` scans for software license issues.
174 |
175 | You can specify the scanner you want to use by using the `--scanners` option. For example, if you only want to use the vuln scanner, you can modify the command as follows:
176 |
177 | ```bash
178 | trivy image --scanners vuln $IMAGE
179 | ```
180 |
181 | ### Exporting a vulnerability report
182 |
183 | Seeing vulnerability reports in the terminal is useful, but it's not the most convenient way to review results. Trivy can export the report in a variety of formats, including JSON and SARIF, as well as formats such as HTML via custom templates.
184 |
185 | Run the following command to export the vulnerability report in JSON format:
186 |
187 | ```bash
188 | trivy image --exit-code 0 --format json --output ./patch.json --scanners vuln --vuln-type os --ignore-unfixed $IMAGE
189 | ```
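With the report saved as JSON, you can post-process it with standard tools. As a small sketch (assuming `jq` is installed, and the report path matches the command above), this counts how many vulnerability entries the report contains:

```bash
# Count the vulnerability entries in the exported Trivy report.
# (Assumes jq is installed; skips quietly if the report isn't present.)
REPORT=./patch.json
if [ -f "$REPORT" ]; then
  jq '[.Results[]?.Vulnerabilities[]?] | length' "$REPORT"
fi
```

The `?` operators make the query tolerant of result entries that have no `Vulnerabilities` array at all.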
190 |
191 | Having a point-in-time report of the vulnerabilities discovered in your container is certainly nice to have, but wouldn't it be even better if that report was used to automatically add patched layers to the container image?
192 |
193 | ---
194 |
195 | ## Using Copacetic to patch container images
196 |
197 | Copacetic is an open-source tool that helps you patch container images for vulnerabilities. It uses Trivy vulnerability reports to identify the vulnerabilities in the container image and adds patched layers to the image to fix the vulnerabilities.
198 |
199 | In this section, you'll use Copacetic to patch the `azure-voting-app-rust` container image for vulnerabilities.
200 |
201 | ### Build and push the container images
202 |
203 | A current limitation of Copacetic is that it can only patch container images stored in a remote container registry, so you'll need to push the `azure-voting-app-rust` container image to the Azure Container Registry.
204 |
205 | Run the following commands to build the `azure-voting-app-rust` container image and push it to the Azure Container Registry:
206 |
207 | ```bash
208 | ACR_IMAGE=$ACR_NAME.azurecr.io/azure-voting-app-rust:v0.1-alpha;
209 | docker tag $IMAGE $ACR_IMAGE;
210 | docker push $ACR_IMAGE
211 | ```
212 |
213 |
214 |
215 | > If you encounter an authentication error, run `az acr login --name $ACR_NAME` to authenticate to the Azure Container Registry, then try the `docker push` command again.
216 |
217 |
218 |
219 | ### Patching the container image
220 |
221 | Now that you have your container images built and pushed to the Azure Container Registry, let's proceed with patching them.
222 |
223 | First, start the BuildKit daemon, which Copacetic uses to rebuild image layers, by running the following command:
224 |
225 | ```bash
226 | sudo ./bin/buildkitd &> /dev/null &
227 | ```
228 |
229 | Next, run `copa` to patch the `azure-voting-app-rust` container image:
230 |
231 | ```bash
232 | sudo copa patch -i ${ACR_IMAGE} -r ./patch.json -t v0.1-alpha-1
233 | ```
234 |
235 |
236 |
237 | > Appending a hyphen followed by a number is a semantic version compliant way to indicate how many times a container image has been patched.
238 |
239 |
240 |
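If you patch images repeatedly, bumping that suffix can be automated. Below is a small, hypothetical shell helper (not part of Copacetic) that derives the next patched tag from the current one:

```bash
# Hypothetical helper: bump the numeric patch suffix on an image tag.
#   v0.1-alpha    -> v0.1-alpha-1
#   v0.1-alpha-1  -> v0.1-alpha-2
next_patch_tag() {
  local tag="$1"
  if [[ "$tag" =~ ^(.*)-([0-9]+)$ ]]; then
    # Tag already ends in -N: increment N.
    echo "${BASH_REMATCH[1]}-$(( BASH_REMATCH[2] + 1 ))"
  else
    # First patch: append -1.
    echo "${tag}-1"
  fi
}
```

For example, `next_patch_tag v0.1-alpha-1` prints `v0.1-alpha-2`.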
241 |
242 | Once the patching process is complete, there will be a newly patched container image on your local machine. You can view the patched image by running the following command:
243 |
244 | ```bash
245 | docker images
246 | ```
247 |
248 | To confirm that Copacetic patched the container image, rerun the `trivy image` command on the patched image:
249 |
250 | ```bash
251 | ACR_IMAGE_PATCHED=${ACR_NAME}.azurecr.io/azure-voting-app-rust:v0.1-alpha-1;
252 | trivy image --severity CRITICAL --scanners vuln ${ACR_IMAGE_PATCHED}
253 | ```
254 |
255 |
256 | Example Output
257 |
258 | ```output
259 | 2023-07-14T17:45:27.463Z INFO Vulnerability scanning is enabled
260 | 2023-07-14T17:45:30.157Z INFO Detected OS: debian
261 | 2023-07-14T17:45:30.157Z INFO Detecting Debian vulnerabilities...
262 | 2023-07-14T17:45:30.165Z INFO Number of language-specific files: 0
263 |
264 | s3cexampleacr.azurecr.io/azure-voting-app-rust:v0.1-alpha-1
265 |
266 | Total: 1 (CRITICAL: 1)
267 | ```
268 |
269 |
270 |
271 | Lastly, push the patched image tag to the Azure Container Registry:
272 |
273 | ```bash
274 | docker push ${ACR_IMAGE_PATCHED}
275 | ```
276 |
277 | ### Tag and push the Postgres image to ACR
278 |
279 | You just pushed a patched version of the `azure-voting-app-rust` image to Azure Container Registry, but the app won't run without a valid database container. So, next you'll tag and push the PostgreSQL container image to your ACR instance.
280 |
281 | ```bash
282 | docker tag postgres:15.0-alpine $ACR_NAME.azurecr.io/postgres:15.0-alpine
283 | docker push $ACR_NAME.azurecr.io/postgres:15.0-alpine
284 | ```
285 |
286 | ---
287 |
288 | ## Signing container images with Notation
289 |
290 | In this section, you'll learn how to use Notation, a command-line tool from the CNCF Notary project, to sign and verify container images, which adds an extra layer of security to your deployment pipeline.
291 |
292 | ### Why Sign Container Images?
293 |
294 | On the surface, signing container images may seem like extra work, but it's crucial for several reasons:
295 |
296 | * **Image Integrity**: Signing verifies that the image hasn't been tampered with or altered since the signature was applied. It ensures that the image you deploy is the exact image you intended to use.
297 | * **Authentication**: Signature validation confirms the authenticity of the image, providing a level of trust in the source.
298 | * **Security and Compliance**: By ensuring image integrity and authenticity, signing helps meet security and compliance requirements, especially in regulated industries.
299 |
300 | ### Adding a signing key to Notation
301 |
302 | To sign container images with Notation, you need to add a key to the Notation configuration. As part of your infrastructure setup, you created a certificate and stored it in Azure Key Vault. You'll use the `KEY_ID` of that certificate to identify the certificate used for the digital signatures of your container images.
303 |
304 | To get the `KEY_ID` from Azure Key Vault, run the following command:
305 |
306 | ```bash
307 | KEY_ID=$(az keyvault certificate show --name $CERT_NAME --vault-name $AKV_NAME --query kid -o tsv)
308 | ```
309 |
310 | Next, add the key to Notation using the `notation key add` command:
311 |
312 | ```bash
313 | notation key add --plugin azure-kv default --id $KEY_ID --default
314 | ```
315 |
316 | You can verify that the key was added successfully by running:
317 |
318 | ```bash
319 | notation key list
320 | ```
321 |
322 | ### Getting the Docker Image Digest
323 |
324 | Signing a container image solely by its tag poses a significant security risk due to the mutability of tags, meaning they can be changed over time. When you sign an image by its tag, there is no way to guarantee that the tag you signed hasn't been modified or updated to point to a different version of the image.
325 |
326 | To mitigate this risk, it is recommended to sign container images using their Docker image digest, which is a unique identifier generated by the SHA256 cryptographic hash function. Docker image digests are immutable and remain constant as long as the image content remains unchanged. This immutability ensures that the signed content cannot be tampered with, providing a reliable way to reference a specific version of the image.
327 |
328 | Signing by digest (SHA256) thus adds a further layer of security, ensuring the integrity and authenticity of container images and significantly reducing the risk of tampering, which makes them safer for use in production environments.
329 |
330 | To get the digests of the container images, run the following commands:
331 |
332 | ```bash
333 | APP_DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' ${ACR_IMAGE_PATCHED})
334 |
335 | DB_DIGEST=$(docker inspect --format='{{index .RepoDigests 1}}' $ACR_NAME.azurecr.io/postgres:15.0-alpine)
336 | ```
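The variables above now hold full digest references of the form `<registry>/<repository>@sha256:<hash>`. If you ever need just the hash portion (for logging or comparison), plain shell parameter expansion is enough; the registry name and hash below are made up for illustration:

```bash
# A RepoDigest pins an image by content hash rather than by a mutable tag.
# Example reference (registry name and hash are illustrative):
digest_ref='example.azurecr.io/azure-voting-app-rust@sha256:0123456789abcdef'
# Strip everything up to and including the '@' to keep only the digest:
echo "${digest_ref##*@}"
```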
337 |
338 |
339 |
340 | > **Tip**: If you want to always get the most recent RepoDigest, you can use the following command-line trickery:
341 |
342 | ```bash
343 | docker image inspect --format='{{range $digest := .RepoDigests}}{{println $digest}}{{end}}' $ACR_IMAGE_PATCHED | sort | tail -1
344 | ```
345 |
346 |