├── .gitignore ├── README.md ├── DOKS-Terraform-and-Flux ├── version.tf ├── assets │ ├── manifests │ │ ├── busybox-ns.yaml │ │ ├── kustomization.yaml │ │ └── busybox.yaml │ └── img │ │ ├── doks_created.png │ │ ├── flux_git_res.png │ │ ├── tf_state_s3.png │ │ └── tf_doks_fluxcd_flow.png ├── backend.tf.sample ├── .gitignore ├── data.tf ├── terraform.tfvars.sample ├── provider.tf ├── variables.tf ├── main.tf └── README.md ├── DOKS-CI-CD └── assets │ ├── manifests │ ├── tekton │ │ ├── configs │ │ │ ├── argocd │ │ │ │ ├── auth.env │ │ │ │ └── server.env │ │ │ ├── github │ │ │ │ ├── pat.env │ │ │ │ └── githubsource.yaml │ │ │ └── docker │ │ │ │ ├── config.json │ │ │ │ └── registry.yaml │ │ ├── eventing │ │ │ ├── tekton-ci-cd-channel.yaml │ │ │ ├── tekton-ci-cd-channel-subscribers.yaml │ │ │ └── tekton-ci-cd-github-source.yaml │ │ ├── triggers │ │ │ ├── tekton-argocd-build-deploy-trigger-binding.yaml │ │ │ ├── rbac.yaml │ │ │ ├── tekton-argocd-build-deploy-event-listener.yaml │ │ │ └── tekton-argocd-build-deploy-trigger-template.yaml │ │ ├── tasks │ │ │ └── argocd-task-create-app.yaml │ │ ├── kustomization.yaml │ │ └── pipelines │ │ │ └── tekton-argocd-build-deploy.yaml │ ├── knative-serving │ │ ├── patches │ │ │ ├── network-config.yaml │ │ │ ├── domain-config.yaml │ │ │ ├── net-certmanager-install.yaml │ │ │ └── certmanager-config.yaml │ │ ├── resources │ │ │ └── kn-cluster-issuer.yaml │ │ └── kustomization.yaml │ └── knative-eventing │ │ └── testing │ │ ├── kn-gh-dumper-svc.yaml │ │ └── kn-gh-source-test.yaml │ └── images │ ├── 2048-game-run.png │ ├── tekton_gh_webhook.png │ ├── docr-k8s-integration.png │ ├── doks_ci_cd_overview.png │ ├── mp_1-click_apps_listing.png │ ├── tekton_tasks_overview.png │ ├── tekton_pipelines_overview.png │ ├── tkn_pipeline_run_details.png │ ├── tkn_pipeline_run_listing.png │ ├── argocd_1-click_app_install.png │ ├── knative_1-click_app_install.png │ ├── mp_1-click_apps_listing_argo.png │ ├── mp_1-click_apps_listing_cm.png │ ├── tekton_ci_cd_pipeline_overview.png │ ├── tekton_dashboard_welcome_page.png │ ├── tekton_event_listener_overview.png │ ├── certmanager_1-click_app_install.png │ ├── tekton_knative_githubsource_overview.png │ └── tekton_pipelines_1-click_app_install.png ├── DOKS-Wordpress └── assets │ ├── manifests │ ├── openEBS-nfs-provisioner-values.yaml │ ├── redis-values.yaml │ ├── sc-rwx-values.yaml │ ├── letsencrypt-issuer-values.yaml │ └── wordpress-values.yaml │ └── images │ └── arch_wordpress.png ├── DOKS-Egress-Gateway └── assets │ ├── images │ ├── sr_operator.png │ ├── crossplane_overview.png │ └── doks_egress_setup.png │ └── manifests │ ├── crossplane │ ├── do-provider-install.yaml │ ├── do-provider-config.yaml │ └── egress-gw-droplet.yaml │ ├── static-routes │ ├── ipinfo-io-example.yaml │ └── ifconfig-me-example.yaml │ └── curl-test.yaml ├── DOKS-Supply-Chain-Security ├── assets │ ├── images │ │ ├── snyk │ │ │ ├── logo.png │ │ │ ├── pipeline_flow.png │ │ │ ├── game-2048-wf-nav.png │ │ │ ├── game-2048-repo-scan.png │ │ │ ├── game-2048_wf_start.png │ │ │ ├── issue-card-details.png │ │ │ ├── slack_notifications.png │ │ │ ├── game-2048-wf-progress.png │ │ │ ├── game-2048-wf-success.png │ │ │ ├── gh_code_scanning_results.png │ │ │ ├── gh_workflow_diagram_code.png │ │ │ ├── snyk_cloud_portal_option.png │ │ │ ├── snyk_gh_security_option.png │ │ │ ├── gh_scan_report_sample_issue.png │ │ │ ├── game-2048-medium-level-issues.png │ │ │ ├── game-2048-wf-slack-notification.png │ │ │ ├── snyk_game-2048_container_monitor.png │ │ │ ├── 
gh_workflow_container_scan_success.png │ │ │ ├── security_compliance_scanning_process.png │ │ │ └── gh_workflow_container_scan_job_recommendation.png │ │ ├── DOKS_Overview.png │ │ ├── kubescape │ │ │ ├── logo.png │ │ │ ├── image_scanning.png │ │ │ ├── pipeline_flow.png │ │ │ ├── game-2048-wf-nav.png │ │ │ ├── portal_dashboard.png │ │ │ ├── rbac_visualizer.png │ │ │ ├── trigger_UI_scan.png │ │ │ ├── UI_trigger_options.png │ │ │ ├── game-2048_wf_start.png │ │ │ ├── slack_notifications.png │ │ │ ├── configuration_scanning.png │ │ │ ├── cp_remediation_assist.png │ │ │ ├── cp_remediation_hints.png │ │ │ ├── game-2048-ks-repo-scan.png │ │ │ ├── game-2048-scan-report.png │ │ │ ├── game-2048-wf-progress.png │ │ │ ├── game-2048-wf-success.png │ │ │ ├── game-2048-controls-list.png │ │ │ ├── gh_workflow_diagram_code.png │ │ │ ├── game-2048-wf-slack-notification.png │ │ │ ├── cluster_scan-slack_periodic_alerts.png │ │ │ └── security_compliance_scanning_process.png │ │ └── DOKS_E2E_Security.png │ └── manifests │ │ └── armo-values-v1.7.15.yaml ├── Tech-talk_ Kubernetes Supply Chain Security.pdf ├── README.md ├── kubescape.md └── snyk.md ├── DOKS-K-Bench-Load-Testing ├── assets │ └── images │ │ ├── kbench-overview.png │ │ ├── pod-count-sample.png │ │ ├── node-dashboard-sample.png │ │ ├── benchmark-results-sample.png │ │ └── grafana-api-server-sample.png └── README.md ├── DOKS-Automatic-Node-Repair ├── content │ └── img │ │ ├── digital-mobius-flow.png │ │ ├── simulate-node-failure.png │ │ └── digital-mobius-install.png └── README.md ├── .whitesource └── DOKS-Internal-LB ├── README.md ├── nodeport.yaml └── external-dns-rbac.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | .vscode 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # container-blueprints 2 | DOKS Solution Blueprints 3 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/version.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">=1.0.2" 3 | } 4 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/configs/argocd/auth.env: -------------------------------------------------------------------------------- 1 | ARGOCD_USERNAME=admin 2 | ARGOCD_PASSWORD= 3 | -------------------------------------------------------------------------------- /DOKS-Wordpress/assets/manifests/openEBS-nfs-provisioner-values.yaml: -------------------------------------------------------------------------------- 1 | nfsStorageClass: 2 | backendStorageClass: "do-block-storage" 3 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/2048-game-run.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/2048-game-run.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tekton_gh_webhook.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tekton_gh_webhook.png 
-------------------------------------------------------------------------------- /DOKS-Wordpress/assets/images/arch_wordpress.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Wordpress/assets/images/arch_wordpress.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/docr-k8s-integration.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/docr-k8s-integration.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/doks_ci_cd_overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/doks_ci_cd_overview.png -------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/images/sr_operator.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Egress-Gateway/assets/images/sr_operator.png -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/assets/manifests/busybox-ns.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | name: busybox 6 | spec: {} 7 | status: {} 8 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/mp_1-click_apps_listing.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/mp_1-click_apps_listing.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tekton_tasks_overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tekton_tasks_overview.png -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/assets/img/doks_created.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Terraform-and-Flux/assets/img/doks_created.png -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/assets/img/flux_git_res.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Terraform-and-Flux/assets/img/flux_git_res.png -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/assets/img/tf_state_s3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Terraform-and-Flux/assets/img/tf_state_s3.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tekton_pipelines_overview.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tekton_pipelines_overview.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tkn_pipeline_run_details.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tkn_pipeline_run_details.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tkn_pipeline_run_listing.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tkn_pipeline_run_listing.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/logo.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/argocd_1-click_app_install.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/argocd_1-click_app_install.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/knative_1-click_app_install.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/knative_1-click_app_install.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/mp_1-click_apps_listing_argo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/mp_1-click_apps_listing_argo.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/mp_1-click_apps_listing_cm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/mp_1-click_apps_listing_cm.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/configs/github/pat.env: -------------------------------------------------------------------------------- 1 | secretToken= 2 | accessToken= 3 | -------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/images/crossplane_overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Egress-Gateway/assets/images/crossplane_overview.png -------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/images/doks_egress_setup.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Egress-Gateway/assets/images/doks_egress_setup.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tekton_ci_cd_pipeline_overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tekton_ci_cd_pipeline_overview.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tekton_dashboard_welcome_page.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tekton_dashboard_welcome_page.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tekton_event_listener_overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tekton_event_listener_overview.png -------------------------------------------------------------------------------- /DOKS-K-Bench-Load-Testing/assets/images/kbench-overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-K-Bench-Load-Testing/assets/images/kbench-overview.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/DOKS_Overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/DOKS_Overview.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/logo.png -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/assets/img/tf_doks_fluxcd_flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Terraform-and-Flux/assets/img/tf_doks_fluxcd_flow.png -------------------------------------------------------------------------------- /DOKS-Automatic-Node-Repair/content/img/digital-mobius-flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Automatic-Node-Repair/content/img/digital-mobius-flow.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/certmanager_1-click_app_install.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/certmanager_1-click_app_install.png -------------------------------------------------------------------------------- /DOKS-K-Bench-Load-Testing/assets/images/pod-count-sample.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-K-Bench-Load-Testing/assets/images/pod-count-sample.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/DOKS_E2E_Security.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/DOKS_E2E_Security.png -------------------------------------------------------------------------------- /DOKS-Automatic-Node-Repair/content/img/simulate-node-failure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Automatic-Node-Repair/content/img/simulate-node-failure.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/pipeline_flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/pipeline_flow.png -------------------------------------------------------------------------------- /DOKS-Automatic-Node-Repair/content/img/digital-mobius-install.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Automatic-Node-Repair/content/img/digital-mobius-install.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tekton_knative_githubsource_overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tekton_knative_githubsource_overview.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/images/tekton_pipelines_1-click_app_install.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-CI-CD/assets/images/tekton_pipelines_1-click_app_install.png -------------------------------------------------------------------------------- /DOKS-K-Bench-Load-Testing/assets/images/node-dashboard-sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-K-Bench-Load-Testing/assets/images/node-dashboard-sample.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-wf-nav.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-wf-nav.png -------------------------------------------------------------------------------- /DOKS-K-Bench-Load-Testing/assets/images/benchmark-results-sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-K-Bench-Load-Testing/assets/images/benchmark-results-sample.png 
-------------------------------------------------------------------------------- /DOKS-K-Bench-Load-Testing/assets/images/grafana-api-server-sample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-K-Bench-Load-Testing/assets/images/grafana-api-server-sample.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/image_scanning.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/image_scanning.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/pipeline_flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/pipeline_flow.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-repo-scan.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-repo-scan.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/game-2048_wf_start.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/game-2048_wf_start.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/issue-card-details.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/issue-card-details.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/slack_notifications.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/slack_notifications.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-wf-nav.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-wf-nav.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/portal_dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/portal_dashboard.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/rbac_visualizer.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/rbac_visualizer.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/trigger_UI_scan.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/trigger_UI_scan.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-wf-progress.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-wf-progress.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-wf-success.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-wf-success.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/Tech-talk_ Kubernetes Supply Chain Security.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/Tech-talk_ Kubernetes Supply Chain Security.pdf -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/UI_trigger_options.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/UI_trigger_options.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048_wf_start.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048_wf_start.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/slack_notifications.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/slack_notifications.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/gh_code_scanning_results.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/gh_code_scanning_results.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/gh_workflow_diagram_code.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/gh_workflow_diagram_code.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/snyk_cloud_portal_option.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/snyk_cloud_portal_option.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/snyk_gh_security_option.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/snyk_gh_security_option.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/configuration_scanning.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/configuration_scanning.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/cp_remediation_assist.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/cp_remediation_assist.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/cp_remediation_hints.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/cp_remediation_hints.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-ks-repo-scan.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-ks-repo-scan.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-scan-report.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-scan-report.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-wf-progress.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-wf-progress.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-wf-success.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-wf-success.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/gh_scan_report_sample_issue.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/gh_scan_report_sample_issue.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-controls-list.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-controls-list.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/gh_workflow_diagram_code.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/gh_workflow_diagram_code.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-medium-level-issues.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-medium-level-issues.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/configs/docker/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "auths": { 3 | "registry.digitalocean.com": { 4 | "auth": "" 5 | } 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-wf-slack-notification.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/game-2048-wf-slack-notification.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/snyk_game-2048_container_monitor.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/snyk_game-2048_container_monitor.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/gh_workflow_container_scan_success.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/gh_workflow_container_scan_success.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-wf-slack-notification.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/game-2048-wf-slack-notification.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/security_compliance_scanning_process.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/security_compliance_scanning_process.png -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/cluster_scan-slack_periodic_alerts.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/cluster_scan-slack_periodic_alerts.png -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/eventing/tekton-ci-cd-channel.yaml: -------------------------------------------------------------------------------- 1 | # `InMemoryChannel` CRD - defines a channel of In-Memory type 2 | 3 | apiVersion: messaging.knative.dev/v1 4 | kind: InMemoryChannel 5 | metadata: 6 | name: tekton-ci-channel 7 | -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/kubescape/security_compliance_scanning_process.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/kubescape/security_compliance_scanning_process.png -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/assets/manifests/kustomization.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: kustomize.config.k8s.io/v1beta1 3 | kind: Kustomization 4 | commonLabels: 5 | app: busybox 6 | namespace: busybox 7 | resources: 8 | - busybox.yaml 9 | - busybox-ns.yaml -------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/manifests/crossplane/do-provider-install.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: pkg.crossplane.io/v1 3 | kind: Provider 4 | metadata: 5 | name: provider-do 6 | spec: 7 | package: "xpkg.upbound.io/digitalocean/provider-digitalocean:v0.2.0" 8 | -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/images/snyk/gh_workflow_container_scan_job_recommendation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/digitalocean/container-blueprints/HEAD/DOKS-Supply-Chain-Security/assets/images/snyk/gh_workflow_container_scan_job_recommendation.png -------------------------------------------------------------------------------- /.whitesource: -------------------------------------------------------------------------------- 1 | { 2 | "scanSettings": { 3 | "baseBranches": [] 4 | }, 5 | "checkRunSettings": { 6 | "vulnerableCheckRunConclusionLevel": "failure", 7 | "displayMode": "diff" 8 | }, 9 | "issueSettings": { 10 | "minSeverityLevel": "LOW" 11 | } 12 | } 
-------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/manifests/static-routes/ipinfo-io-example.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.digitalocean.com/v1 2 | kind: StaticRoute 3 | metadata: 4 | name: static-route-ipinfo.io 5 | spec: 6 | destinations: 7 | - "34.117.59.81" 8 | gateway: "" 9 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/configs/argocd/server.env: -------------------------------------------------------------------------------- 1 | # Argo CD server address (below value targets internal service - no need to change if following this tutorial) 2 | # Argo CD server internal address format: . 3 | ARGOCD_SERVER=argocd-server.argocd.svc 4 | -------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/manifests/static-routes/ifconfig-me-example.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.digitalocean.com/v1 2 | kind: StaticRoute 3 | metadata: 4 | name: static-route-ifconfig.me 5 | spec: 6 | destinations: 7 | - "34.160.111.145" 8 | gateway: "" 9 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/configs/github/githubsource.yaml: -------------------------------------------------------------------------------- 1 | # Used to replace the GitHub owner and repository in the GitHubSource CRD 2 | # Path: DOKS-CI-CD/assets/manifests/tekton/eventing/tekton-ci-cd-github-source.yaml 3 | 4 | - op: replace 5 | path: /spec/ownerAndRepository 6 | value: /tekton-sample-app 7 | -------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/manifests/curl-test.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: curl-test 5 | spec: 6 | containers: 7 | - name: curl 8 | image: curlimages/curl:7.80.0 9 | command: 10 | - sleep 11 | - "3600" 12 | imagePullPolicy: IfNotPresent 13 | restartPolicy: Always 14 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/configs/docker/registry.yaml: -------------------------------------------------------------------------------- 1 | # Used to replace the Docker registry name from the `TektonTriggerTemplate` CRD input parameters 2 | # Path: DOKS-CI-CD/assets/manifests/tekton/triggers/tekton-argocd-build-deploy-trigger-template.yaml 3 | 4 | - op: replace 5 | path: /spec/params/3/default 6 | value: 7 | -------------------------------------------------------------------------------- /DOKS-Wordpress/assets/manifests/redis-values.yaml: -------------------------------------------------------------------------------- 1 | master: 2 | persistence: 3 | enabled: true 4 | storageClass: rwx-storage 5 | accessModes: ["ReadWriteMany"] 6 | size: 5Gi 7 | 8 | volumePermissions: 9 | enabled: true 10 | 11 | auth: 12 | enabled: true 13 | password: 14 | 15 | architecture: standalone 16 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/knative-serving/patches/network-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: operator.knative.dev/v1alpha1 2 | kind: KnativeServing 3 | metadata: 4 | name: 
knative-serving 5 | namespace: knative-serving 6 | spec: 7 | config: 8 | network: 9 | # Instruct Knative Serving to enable auto TLS 10 | auto-tls: Enabled 11 | http-protocol: Enabled 12 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/knative-eventing/testing/kn-gh-dumper-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: serving.knative.dev/v1 2 | kind: Service 3 | metadata: 4 | name: github-message-dumper 5 | labels: 6 | networking.knative.dev/visibility: cluster-local 7 | spec: 8 | template: 9 | spec: 10 | containers: 11 | - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display 12 | -------------------------------------------------------------------------------- /DOKS-Internal-LB/README.md: -------------------------------------------------------------------------------- 1 | ## Internal Load Balancer for DigitalOcean Kubernetes Clusters 2 | 3 | Manifests to create a NodePort service and ExternalDNS deployment. See [Create Internal Load Balancer to Access DigitalOcean Kubernetes Services](https://docs.digitalocean.com/tutorials/internal-lb) to learn how to use these manifests to make services on a DOKS cluster accessible for applications on Droplets. -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/knative-serving/patches/domain-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: operator.knative.dev/v1alpha1 2 | kind: KnativeServing 3 | metadata: 4 | name: knative-serving 5 | namespace: knative-serving 6 | spec: 7 | config: 8 | # Instruct Knative Serving to use a custom domain you own 9 | # Make sure to replace `starter-kit.online` with your own domain name 10 | domain: 11 | starter-kit.online: "" 12 | -------------------------------------------------------------------------------- /DOKS-Wordpress/assets/manifests/sc-rwx-values.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: storage.k8s.io/v1 3 | kind: StorageClass 4 | metadata: 5 | name: rwx-storage 6 | annotations: 7 | openebs.io/cas-type: nfsrwx 8 | cas.openebs.io/config: | 9 | - name: NFSServerType 10 | value: "kernel" 11 | - name: BackendStorageClass 12 | value: "do-block-storage" 13 | provisioner: openebs.io/nfsrwx 14 | reclaimPolicy: Delete 15 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/knative-serving/patches/net-certmanager-install.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: operator.knative.dev/v1alpha1 2 | kind: KnativeServing 3 | metadata: 4 | name: knative-serving 5 | namespace: knative-serving 6 | spec: 7 | # Instruct Knative Serving Operator to install the additional `net-certmanager` component 8 | additionalManifests: 9 | - URL: https://github.com/knative/net-certmanager/releases/download/knative-v1.4.0/release.yaml 10 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/knative-serving/patches/certmanager-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: operator.knative.dev/v1alpha1 2 | kind: KnativeServing 3 | metadata: 4 | name: knative-serving 5 | namespace: knative-serving 6 | spec: 7 | config: 8 | # Instruct Knative Serving to use `kn-letsencrypt-http01-issuer`
as a cluster issuer for TLS certificates 9 | certmanager: 10 | issuerRef: | 11 | kind: ClusterIssuer 12 | name: kn-letsencrypt-http01-issuer 13 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/assets/manifests/busybox.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Pod 4 | metadata: 5 | name: busybox1 6 | namespace: busybox 7 | labels: 8 | app: busybox1 9 | spec: 10 | containers: 11 | - image: busybox 12 | command: 13 | - sleep 14 | - "3600" 15 | imagePullPolicy: IfNotPresent 16 | name: busybox 17 | resources: 18 | requests: 19 | cpu: 100m 20 | memory: 50Mi 21 | limits: 22 | cpu: 200m 23 | memory: 100Mi 24 | restartPolicy: Always 25 | -------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/manifests/crossplane/do-provider-config.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Secret 4 | metadata: 5 | namespace: crossplane-system 6 | name: do-api-token 7 | type: Opaque 8 | data: 9 | token: 10 | 11 | --- 12 | apiVersion: do.crossplane.io/v1alpha1 13 | kind: ProviderConfig 14 | metadata: 15 | name: do-provider-config 16 | spec: 17 | credentials: 18 | source: Secret 19 | secretRef: 20 | namespace: crossplane-system 21 | name: do-api-token 22 | key: token 23 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/triggers/tekton-argocd-build-deploy-trigger-binding.yaml: -------------------------------------------------------------------------------- 1 | # A `TriggerBinding` specifies what fields you're interested in from the incoming GitHub event 2 | 3 | apiVersion: triggers.tekton.dev/v1beta1 4 | kind: TriggerBinding 5 | metadata: 6 | name: tekton-argocd-build-deploy-trigger-binding 7 | spec: 8 | # Defines what fields to extract from the GitHub event payload and parameter association 9 | params: 10 | - name: git-url 11 | value: $(body.repository.url) 12 | - name: git-revision 13 | value: $(body.head_commit.id) 14 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/knative-eventing/testing/kn-gh-source-test.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: sources.knative.dev/v1alpha1 2 | kind: GitHubSource 3 | metadata: 4 | name: tekton-ci-gh-source 5 | spec: 6 | eventTypes: 7 | - push 8 | ownerAndRepository: mtiutiu-heits/do-gitops-testing 9 | accessToken: 10 | secretKeyRef: 11 | name: github-pat 12 | key: accessToken 13 | secretToken: 14 | secretKeyRef: 15 | name: github-pat 16 | key: secretToken 17 | sink: 18 | ref: 19 | apiVersion: serving.knative.dev/v1 20 | kind: Service 21 | name: github-message-dumper 22 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/eventing/tekton-ci-cd-channel-subscribers.yaml: -------------------------------------------------------------------------------- 1 | # `Subscription` CRD - defines a subscription for a specific channel 2 | 3 | apiVersion: messaging.knative.dev/v1 4 | kind: Subscription 5 | metadata: 6 | name: tekton-argocd-build-deploy-subscription 7 | spec: 8 | # Defines the channel used for subscriptions 9 | channel: 10 | apiVersion: messaging.knative.dev/v1 11 | kind: InMemoryChannel 12 | name: tekton-ci-channel 13 | subscriber: 14 | # subscribe a 
service by URI 15 | # Kubernetes classic services are also accepted, but must have an endpoint listening on port 80 16 | uri: http://el-tekton-argocd-build-deploy-event-listener.default.svc.cluster.local:8080 17 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/backend.tf.sample: -------------------------------------------------------------------------------- 1 | # Store the state file using a DO Spaces bucket 2 | 3 | terraform { 4 | backend "s3" { 5 | skip_credentials_validation = true 6 | skip_metadata_api_check = true 7 | endpoint = ".digitaloceanspaces.com" # replace , leave the rest as is (e.g.: fra1.digitaloceanspaces.com) 8 | region = "us-east-1" # leave this as is (Terraform expects the AWS format - N/A for DO Spaces) 9 | bucket = "" # replace this with your bucket name 10 | key = "" # replace this with your state file name (e.g. terraform.tfstate) 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /DOKS-Wordpress/assets/manifests/letsencrypt-issuer-values.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cert-manager.io/v1 2 | kind: ClusterIssuer 3 | metadata: 4 | name: letsencrypt-prod 5 | namespace: wordpress 6 | spec: 7 | acme: 8 | # You must replace this email address with your own. 9 | # Let's Encrypt will use this to contact you about expiring 10 | # certificates, and issues related to your account. 11 | email: 12 | server: https://acme-v02.api.letsencrypt.org/directory 13 | privateKeySecretRef: 14 | # Secret resource used to store the account's private key. 15 | name: prod-issuer-account-key 16 | # Add a single challenge solver, HTTP01 using nginx 17 | solvers: 18 | - http01: 19 | ingress: 20 | class: nginx 21 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/knative-serving/resources/kn-cluster-issuer.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cert-manager.io/v1 2 | kind: ClusterIssuer 3 | metadata: 4 | name: kn-letsencrypt-http01-issuer 5 | spec: 6 | acme: 7 | privateKeySecretRef: 8 | name: kn-letsencrypt 9 | # GitHub webhooks require a production-ready TLS certificate 10 | # Make sure to switch to the Let's Encrypt production server when setting Knative Eventing to react on GitHub events 11 | server: https://acme-v02.api.letsencrypt.org/directory 12 | # By default it's recommended to use the Let's Encrypt staging environment for testing 13 | # The Let's Encrypt production server has a quota limit set for the number of requests per day 14 | # server: https://acme-staging-v02.api.letsencrypt.org/directory 15 | solvers: 16 | - http01: 17 | ingress: 18 | class: kourier 19 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/triggers/rbac.yaml: -------------------------------------------------------------------------------- 1 | # `doks-ci-cd-tekton-triggers-sa` service account - allows Tekton EventListeners to instantiate Pipeline resources 2 | 3 | 4 | apiVersion: v1 5 | kind: ServiceAccount 6 | metadata: 7 | name: doks-ci-cd-tekton-triggers-sa 8 | 9 | --- 10 | apiVersion: rbac.authorization.k8s.io/v1 11 | kind: RoleBinding 12 | metadata: 13 | name: doks-ci-cd-eventlistener-binding 14 | subjects: 15 | - kind: ServiceAccount 16 | name: doks-ci-cd-tekton-triggers-sa 17 | roleRef: 18 | apiGroup: rbac.authorization.k8s.io 19 | kind:
ClusterRole 20 | name: tekton-triggers-eventlistener-roles 21 | 22 | --- 23 | apiVersion: rbac.authorization.k8s.io/v1 24 | kind: ClusterRoleBinding 25 | metadata: 26 | name: doks-ci-cd-eventlistener-clusterbinding 27 | subjects: 28 | - kind: ServiceAccount 29 | name: doks-ci-cd-tekton-triggers-sa 30 | namespace: default 31 | roleRef: 32 | apiGroup: rbac.authorization.k8s.io 33 | kind: ClusterRole 34 | name: tekton-triggers-eventlistener-clusterroles 35 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/triggers/tekton-argocd-build-deploy-event-listener.yaml: -------------------------------------------------------------------------------- 1 | # An `EventListener` is basically a service that listens for events at a specified port on your Kubernetes cluster. 2 | # It exposes an addressable sink that receives incoming events, and specifies one or more `Triggers`. 3 | 4 | apiVersion: triggers.tekton.dev/v1beta1 5 | kind: EventListener 6 | metadata: 7 | name: tekton-argocd-build-deploy-event-listener 8 | spec: 9 | # Service account name to use - allows this EventListener to instantiate Pipeline resources 10 | serviceAccountName: doks-ci-cd-tekton-triggers-sa 11 | # List of triggers used by this EventListener 12 | # Each trigger is composed of a TriggerBinding and a TriggerTemplate 13 | triggers: 14 | - name: tekton-argocd-build-deploy-trigger 15 | bindings: 16 | - ref: tekton-argocd-build-deploy-trigger-binding # Reference to a TriggerBinding object 17 | template: 18 | ref: tekton-argocd-build-deploy-trigger-template # Reference to a TriggerTemplate object 19 | -------------------------------------------------------------------------------- /DOKS-Internal-LB/nodeport.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx 5 | spec: 6 | strategy: 7 | type: Recreate 8 | selector: 9 | matchLabels: 10 | app: nginx 11 | replicas: 3 12 | template: 13 | metadata: 14 | labels: 15 | app: nginx 16 | spec: 17 | containers: 18 | - name: nginx 19 | image: nginx 20 | ports: 21 | - containerPort: 80 22 | --- 23 | apiVersion: v1 24 | kind: Service 25 | metadata: 26 | name: nginx 27 | namespace: default 28 | labels: 29 | app: nginx 30 | annotations: 31 | kubernetes.digitalocean.com/firewall-managed: "false" 32 | external-dns.alpha.kubernetes.io/hostname: "nginx.kubenugget.dev" 33 | external-dns.alpha.kubernetes.io/ttl: "30" 34 | external-dns.alpha.kubernetes.io/access: "private" 35 | spec: 36 | externalTrafficPolicy: Cluster 37 | ports: 38 | - name: http 39 | port: 80 40 | protocol: TCP 41 | targetPort: 80 42 | nodePort: 31000 43 | selector: 44 | app: nginx 45 | type: NodePort 46 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/.gitignore: -------------------------------------------------------------------------------- 1 | # Local .terraform directories 2 | **/.terraform/* 3 | 4 | # .tfstate files 5 | *.tfstate 6 | *.tfstate.* 7 | 8 | # Crash log files 9 | crash.log 10 | 11 | # Exclude all .tfvars files, which are likely to contain sentitive data, such as 12 | # password, private keys, and other secrets. These should not be part of version 13 | # control as they are data points which are potentially sensitive and subject 14 | # to change depending on the environment. 
15 | # 16 | *.tfvars 17 | 18 | # Ignore override files as they are usually used to override resources locally and so 19 | # are not checked in 20 | override.tf 21 | override.tf.json 22 | *_override.tf 23 | *_override.tf.json 24 | 25 | # Include override files you do wish to add to version control using negated pattern 26 | # 27 | # !example_override.tf 28 | 29 | # Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan 30 | # example: *tfplan* 31 | 32 | # Ignore CLI configuration files 33 | .terraformrc 34 | terraform.rc 35 | 36 | .terraform.lock* 37 | .DS_Store 38 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/knative-serving/kustomization.yaml: -------------------------------------------------------------------------------- 1 | ## Knative Serving Kustomization 2 | # 3 | # This kustomization configures Knative Serving from the DigitalOcean Marketplace repo to: 4 | # 1. Install the net-certmanager component (`patches/net-certmanager-install.yaml`) 5 | # 2. Configures Knative Serving to use the `kn-letsencrypt-http01-issuer` cluster issuer (`patches/certmanager-config.yaml`) 6 | # 3. Configures Knative Serving to use a custom domain (`patches/domain-config.yaml`) 7 | # 4. Configures Knative Serving for auto TLS (`patches/network-config.yaml`) 8 | ## 9 | 10 | apiVersion: kustomize.config.k8s.io/v1beta1 11 | kind: Kustomization 12 | resources: 13 | - https://raw.githubusercontent.com/digitalocean/marketplace-kubernetes/master/stacks/knative/assets/manifests/knative-serving.yaml 14 | - resources/kn-cluster-issuer.yaml # creates the `kn-letsencrypt-http01-issuer` cluster issuer 15 | patches: 16 | - patches/net-certmanager-install.yaml 17 | - patches/certmanager-config.yaml 18 | - patches/domain-config.yaml 19 | - patches/network-config.yaml 20 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/data.tf: -------------------------------------------------------------------------------- 1 | # =========================== GIT ============================== 2 | data "github_repository" "main" { 3 | name = var.git_repository_name 4 | } 5 | # ============================================================== 6 | 7 | # =========================== FLUX CD =========================== 8 | data "flux_install" "main" { 9 | target_path = var.git_repository_sync_path 10 | } 11 | 12 | data "flux_sync" "main" { 13 | target_path = var.git_repository_sync_path 14 | url = "ssh://git@github.com/${var.github_user}/${var.git_repository_name}.git" 15 | branch = var.git_repository_branch 16 | } 17 | 18 | data "kubectl_file_documents" "install" { 19 | content = data.flux_install.main.content 20 | } 21 | 22 | data "kubectl_file_documents" "sync" { 23 | content = data.flux_sync.main.content 24 | } 25 | 26 | locals { 27 | install = [for v in data.kubectl_file_documents.install.documents : { 28 | data : yamldecode(v) 29 | content : v 30 | } 31 | ] 32 | sync = [for v in data.kubectl_file_documents.sync.documents : { 33 | data : yamldecode(v) 34 | content : v 35 | } 36 | ] 37 | } 38 | # ================================================================= 39 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/eventing/tekton-ci-cd-github-source.yaml: -------------------------------------------------------------------------------- 1 | # `GitHubSource` CRD - defines GitHub as a source of events for other consumers (such as 
Tekton EventListeners) 2 | 3 | apiVersion: sources.knative.dev/v1alpha1 4 | kind: GitHubSource 5 | metadata: 6 | name: tekton-ci-cd-github-source 7 | spec: 8 | # Defines what type of GitHub events you're interested 9 | eventTypes: 10 | - push 11 | ownerAndRepository: / 12 | accessToken: 13 | secretKeyRef: 14 | name: tekton-ci-github-pat 15 | key: accessToken 16 | secretToken: 17 | secretKeyRef: 18 | name: tekton-ci-github-pat 19 | key: secretToken 20 | # Defines a sink where GitHub events should be sent 21 | # You can send events to a Knative or Kubernetes Service 22 | # You can also send events to multiple subscribers via Knative Eventing Channels (or Brokers) 23 | sink: 24 | # Change the URI value to match your service name, namespace and port value (below value works for this tutorial only) 25 | # URI field value uses the following format: http://.: 26 | uri: http://el-tekton-argocd-build-deploy-event-listener.doks-ci-cd.svc.cluster.local:8080 27 | # ref: 28 | # apiVersion: messaging.knative.dev/v1 29 | # kind: InMemoryChannel 30 | # name: tekton-ci-channel 31 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/terraform.tfvars.sample: -------------------------------------------------------------------------------- 1 | # DOKS 2 | do_api_token = "" # DO API TOKEN 3 | doks_cluster_name = "" # Name of this `DOKS` cluster ? 4 | doks_cluster_region = "" # What region should this `DOKS` cluster be provisioned in? 5 | doks_cluster_version = "" # What Kubernetes version should this `DOKS` cluster use ? 6 | doks_cluster_pool_size = "" # What machine type to use for this `DOKS` cluster ? 7 | doks_cluster_pool_node_count = # How many worker nodes this `DOKS` cluster should have ? 8 | 9 | # GitHub 10 | github_user = "" # Your `GitHub` username 11 | github_token = "" # Your `GitHub` personal access token 12 | git_repository_name = "" # Git repository where `Flux CD` manifests should be stored 13 | git_repository_branch = "" # Branch name to use for this `Git` repository (e.g.: `main`) 14 | git_repository_sync_path = "" # Git repository path where the manifests to sync are committed (e.g.: `clusters/dev`) 15 | -------------------------------------------------------------------------------- /DOKS-Wordpress/assets/manifests/wordpress-values.yaml: -------------------------------------------------------------------------------- 1 | # WordPress service type 2 | service: 3 | type: ClusterIP 4 | 5 | # Enable persistence using Persistent Volume Claims 6 | persistence: 7 | enabled: true 8 | storageClass: rwx-storage 9 | accessModes: ["ReadWriteMany"] 10 | size: 5Gi 11 | 12 | volumePermissions: 13 | enabled: true 14 | 15 | # Prometheus Exporter / Metrics configuration 16 | metrics: 17 | enabled: false 18 | 19 | # Level of auto-updates to allow. Allowed values: major, minor or none. 
20 | wordpressAutoUpdateLevel: minor 21 | 22 | # Scheme to use to generate WordPress URLs 23 | wordpressScheme: https 24 | 25 | # WordPress credentials 26 | wordpressUsername: 27 | wordpressPassword: 28 | 29 | # External Database details 30 | externalDatabase: 31 | host: 32 | port: 25060 33 | user: 34 | password: 35 | database: 36 | 37 | # Disabling MariaDB 38 | mariadb: 39 | enabled: false 40 | 41 | wordpressSkipInstall: true 42 | 43 | wordpressExtraConfigContent: | 44 | define( 'WP_REDIS_SCHEME', '' ); 45 | define( 'WP_REDIS_HOST', '' ); 46 | define( 'WP_REDIS_PORT', ); 47 | define( 'WP_REDIS_PASSWORD', ''); 48 | define( 'WP_REDIS_DATABASE', 0 ); 49 | -------------------------------------------------------------------------------- /DOKS-Egress-Gateway/assets/manifests/crossplane/egress-gw-droplet.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: compute.do.crossplane.io/v1alpha1 3 | kind: Droplet 4 | metadata: 5 | name: egress-gw-nyc1 6 | spec: 7 | forProvider: 8 | region: nyc1 9 | size: s-1vcpu-1gb 10 | image: ubuntu-20-04-x64 11 | # vpcUuid: "" 12 | # sshKeys: 13 | # - "" 14 | userData: | 15 | #!/usr/bin/env bash 16 | # Install dependencies 17 | echo iptables-persistent iptables-persistent/autosave_v4 boolean true | debconf-set-selections 18 | echo iptables-persistent iptables-persistent/autosave_v6 boolean true | debconf-set-selections 19 | apt-get update 20 | apt-get -y install iptables iptables-persistent curl 21 | 22 | # Enable IP forwarding 23 | echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf 24 | sysctl -p /etc/sysctl.conf 25 | 26 | # Configure iptables for NAT 27 | PRIVATE_NETWORK_INTERFACE_IP="$(curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address)" 28 | PRIVATE_NETWORK_CIDR="$(ip route show src $PRIVATE_NETWORK_INTERFACE_IP | awk '{print $1}')" 29 | PUBLIC_INTERFACE_NAME="$(ip route show default | awk '{print $5}')" 30 | iptables -t nat -A POSTROUTING -s "$PRIVATE_NETWORK_CIDR" -o "$PUBLIC_INTERFACE_NAME" -j MASQUERADE 31 | iptables-save > /etc/iptables/rules.v4 32 | providerConfigRef: 33 | name: do-provider-config 34 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | github = { 4 | source = "integrations/github" 5 | version = "~> 4.12.2" 6 | } 7 | digitalocean = { 8 | source = "digitalocean/digitalocean" 9 | version = "~> 2.10.1" 10 | } 11 | kubernetes = { 12 | source = "hashicorp/kubernetes" 13 | version = "~> 2.3.2" 14 | } 15 | kubectl = { 16 | source = "gavinbunney/kubectl" 17 | version = "~> 1.11.2" 18 | } 19 | flux = { 20 | source = "fluxcd/flux" 21 | version = "~> 0.2.0" 22 | } 23 | tls = { 24 | source = "hashicorp/tls" 25 | version = "~> 3.1.0" 26 | } 27 | } 28 | } 29 | 30 | provider "digitalocean" { 31 | token = var.do_api_token 32 | } 33 | 34 | provider "kubectl" { 35 | host = digitalocean_kubernetes_cluster.primary.endpoint 36 | token = digitalocean_kubernetes_cluster.primary.kube_config[0].token 37 | cluster_ca_certificate = base64decode( 38 | digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate 39 | ) 40 | load_config_file = false 41 | } 42 | 43 | provider "kubernetes" { 44 | host = digitalocean_kubernetes_cluster.primary.endpoint 45 | token = digitalocean_kubernetes_cluster.primary.kube_config[0].token 46 | cluster_ca_certificate = base64decode( 47 | 
digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate 48 | ) 49 | } 50 | 51 | provider "github" { 52 | owner = var.github_user 53 | token = var.github_token 54 | } 55 | -------------------------------------------------------------------------------- /DOKS-Internal-LB/external-dns-rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: external-dns 5 | --- 6 | apiVersion: rbac.authorization.k8s.io/v1beta1 7 | kind: ClusterRole 8 | metadata: 9 | name: external-dns 10 | rules: 11 | - apiGroups: [""] 12 | resources: ["services","endpoints","pods"] 13 | verbs: ["get","watch","list"] 14 | - apiGroups: ["networking","networking.k8s.io"] 15 | resources: ["ingresses"] 16 | verbs: ["get","watch","list"] 17 | - apiGroups: [""] 18 | resources: ["nodes"] 19 | verbs: ["get","watch","list"] 20 | --- 21 | apiVersion: rbac.authorization.k8s.io/v1beta1 22 | kind: ClusterRoleBinding 23 | metadata: 24 | name: external-dns-viewer 25 | roleRef: 26 | apiGroup: rbac.authorization.k8s.io 27 | kind: ClusterRole 28 | name: external-dns 29 | subjects: 30 | - kind: ServiceAccount 31 | name: external-dns 32 | namespace: default 33 | --- 34 | --- 35 | apiVersion: apps/v1 36 | kind: Deployment 37 | metadata: 38 | name: external-dns 39 | spec: 40 | replicas: 1 41 | selector: 42 | matchLabels: 43 | app: external-dns 44 | strategy: 45 | type: Recreate 46 | template: 47 | metadata: 48 | labels: 49 | app: external-dns 50 | spec: 51 | serviceAccountName: external-dns 52 | containers: 53 | - name: external-dns 54 | image: k8s.gcr.io/external-dns/external-dns:v0.13.1 55 | args: 56 | - --source=service 57 | - --domain-filter=kubenugget.dev 58 | - --provider=digitalocean 59 | env: 60 | - name: DO_TOKEN 61 | value: "DIGITALOCEAN_API_TOKEN" 62 | --- 63 | -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/assets/manifests/armo-values-v1.7.15.yaml: -------------------------------------------------------------------------------- 1 | # -- enable/disable trigger image scan for new images 2 | triggerNewImageScan: "enable" 3 | 4 | # image vulnerability scheduled scan using a CronJob 5 | armoScanScheduler: 6 | 7 | # -- enable/disable image vulnerability a schedule scan using a CronJob 8 | enabled: true 9 | 10 | # scan scheduler container name 11 | name: armo-scan-scheduler 12 | 13 | # Frequency of running the scan 14 | # ┌───────────── minute (0 - 59) 15 | # │ ┌───────────── hour (0 - 23) 16 | # │ │ ┌───────────── day of the month (1 - 31) 17 | # │ │ │ ┌───────────── month (1 - 12) 18 | # │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday; 19 | # │ │ │ │ │ 7 is also Sunday on some systems) 20 | # │ │ │ │ │ 21 | # │ │ │ │ │ 22 | # * * * * * 23 | # -- scan schedule frequency 24 | scanSchedule: "0 0 * * *" 25 | 26 | # kubescape scheduled scan using a CronJob 27 | armoKubescapeScanScheduler: 28 | 29 | # -- enable/disable a kubescape scheduled scan using a CronJob 30 | enabled: true 31 | 32 | # scan scheduler container name 33 | name: armo-kubescape-scheduler 34 | 35 | # -- Frequency of running the scan 36 | # ┌───────────── minute (0 - 59) 37 | # │ ┌───────────── hour (0 - 23) 38 | # │ │ ┌───────────── day of the month (1 - 31) 39 | # │ │ │ ┌───────────── month (1 - 12) 40 | # │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday; 41 | # │ │ │ │ │ 7 is also Sunday on some systems) 42 | # │ │ │ │ │ 43 | # │ │ │ │ │ 44 | # * * * * * 45 
| # -- scan schedule frequency 46 | scanSchedule: "0 0 * * *" 47 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/variables.tf: -------------------------------------------------------------------------------- 1 | # ======================== DOKS ========================= 2 | variable "do_api_token" { 3 | description = "DigitalOcean API token" 4 | type = string 5 | sensitive = true 6 | } 7 | 8 | variable "doks_cluster_name" { 9 | description = "DOKS cluster name" 10 | type = string 11 | } 12 | 13 | variable "doks_cluster_region" { 14 | description = "DOKS cluster region" 15 | type = string 16 | } 17 | 18 | variable "doks_cluster_version" { 19 | description = "Kubernetes version provided by DOKS" 20 | type = string 21 | default = "1.21.3-do.0" # Grab the latest version slug from "doctl kubernetes options versions" 22 | } 23 | 24 | variable "doks_cluster_pool_size" { 25 | description = "DOKS cluster node pool size" 26 | type = string 27 | } 28 | 29 | variable "doks_cluster_pool_node_count" { 30 | description = "DOKS cluster worker nodes count" 31 | type = number 32 | } 33 | # =========================================================== 34 | 35 | # ========================== GIT ============================ 36 | variable "github_user" { 37 | description = "GitHub owner" 38 | type = string 39 | } 40 | 41 | variable "github_token" { 42 | description = "GitHub token" 43 | type = string 44 | sensitive = true 45 | } 46 | 47 | variable "github_ssh_pub_key" { 48 | description = "GitHub SSH public key" 49 | type = string 50 | default = "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=" 51 | } 52 | 53 | variable "git_repository_name" { 54 | description = "Git main repository to use for installation" 55 | type = string 56 | } 57 | 58 | variable "git_repository_branch" { 59 | description = "Branch name to use on the Git repository" 60 | type = string 61 | default = "main" 62 | } 63 | 64 | variable "git_repository_sync_path" { 65 | description = "Git repository directory path to use for Flux CD sync" 66 | type = string 67 | } 68 | # ======================================================================== -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/tasks/argocd-task-create-app.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: tekton.dev/v1beta1 2 | kind: Task 3 | metadata: 4 | name: argocd-task-create-app 5 | labels: 6 | app.kubernetes.io/version: "0.1" 7 | annotations: 8 | tekton.dev/pipelines.minVersion: "0.12.1" 9 | tekton.dev/categories: Deployment 10 | tekton.dev/tags: deploy 11 | tekton.dev/displayName: "argocd" 12 | tekton.dev/platforms: "linux/amd64" 13 | spec: 14 | description: >- 15 | This task creates an Argo CD application. 16 | To do so, it requires the address of the Argo CD server and some form of 17 | authentication either a username/password or an authentication token. 
18 | params: 19 | - name: application-name 20 | description: Name of the application to sync 21 | type: string 22 | - name: repo-url 23 | description: Application repository URL 24 | type: string 25 | - name: resources-path 26 | description: Application Kubernetes resources path in the repository 27 | type: string 28 | - name: dest-server 29 | description: Destination Kubernetes server URL 30 | default: https://kubernetes.default.svc 31 | type: string 32 | - name: dest-namespace 33 | description: Kubernetes namespace for the application 34 | type: string 35 | - name: flags 36 | default: -- 37 | type: string 38 | - name: argocd-version 39 | default: v2.2.2 40 | type: string 41 | stepTemplate: 42 | envFrom: 43 | - configMapRef: 44 | name: argocd-env-configmap # used for server address 45 | - secretRef: 46 | name: argocd-env-secret # used for authentication (username/password or auth token) 47 | steps: 48 | - name: login 49 | image: quay.io/argoproj/argocd:$(params.argocd-version) 50 | script: | 51 | if [ -z "$ARGOCD_AUTH_TOKEN" ]; then 52 | yes | argocd login "$ARGOCD_SERVER" --username="$ARGOCD_USERNAME" --password="$ARGOCD_PASSWORD"; 53 | fi 54 | 55 | argocd app create "$(params.application-name)" \ 56 | --repo "$(params.repo-url)" \ 57 | --path "$(params.resources-path)" \ 58 | --dest-server "$(params.dest-server)" \ 59 | --dest-namespace "$(params.dest-namespace)" \ 60 | "$(params.flags)" 61 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/main.tf: -------------------------------------------------------------------------------- 1 | # ======================= GITHUB ========================= 2 | # 3 | # SSH Deploy Key to use by Flux CD 4 | resource "tls_private_key" "main" { 5 | algorithm = "ECDSA" 6 | ecdsa_curve = "P256" 7 | } 8 | 9 | resource "github_repository_deploy_key" "main" { 10 | title = var.doks_cluster_name 11 | repository = data.github_repository.main.name 12 | key = tls_private_key.main.public_key_openssh 13 | read_only = true 14 | } 15 | 16 | resource "github_repository_file" "install" { 17 | repository = data.github_repository.main.name 18 | file = data.flux_install.main.path 19 | content = data.flux_install.main.content 20 | branch = var.git_repository_branch 21 | } 22 | 23 | resource "github_repository_file" "sync" { 24 | repository = data.github_repository.main.name 25 | file = data.flux_sync.main.path 26 | content = data.flux_sync.main.content 27 | branch = var.git_repository_branch 28 | } 29 | 30 | resource "github_repository_file" "kustomize" { 31 | repository = data.github_repository.main.name 32 | file = data.flux_sync.main.kustomize_path 33 | content = data.flux_sync.main.kustomize_content 34 | branch = var.git_repository_branch 35 | } 36 | # ========================================================= 37 | 38 | # ======================== DOKS =========================== 39 | resource "digitalocean_kubernetes_cluster" "primary" { 40 | name = var.doks_cluster_name 41 | region = var.doks_cluster_region 42 | version = var.doks_cluster_version 43 | 44 | node_pool { 45 | name = "${var.doks_cluster_name}-pool" 46 | size = var.doks_cluster_pool_size 47 | node_count = var.doks_cluster_pool_node_count 48 | } 49 | } 50 | 51 | # =========================== FLUX CD =========================== 52 | resource "kubernetes_namespace" "flux_system" { 53 | metadata { 54 | name = "flux-system" 55 | } 56 | 57 | lifecycle { 58 | ignore_changes = [ 59 | metadata[0].labels, 60 | # metadata[0].annotations, # TODO: need to check if 
this one can be safely ignored 61 | ] 62 | } 63 | } 64 | 65 | resource "kubectl_manifest" "install" { 66 | for_each = { for v in local.install : lower(join("/", compact([v.data.apiVersion, v.data.kind, lookup(v.data.metadata, "namespace", ""), v.data.metadata.name]))) => v.content } 67 | depends_on = [kubernetes_namespace.flux_system] 68 | yaml_body = each.value 69 | } 70 | 71 | resource "kubectl_manifest" "sync" { 72 | for_each = { for v in local.sync : lower(join("/", compact([v.data.apiVersion, v.data.kind, lookup(v.data.metadata, "namespace", ""), v.data.metadata.name]))) => v.content } 73 | depends_on = [kubernetes_namespace.flux_system] 74 | yaml_body = each.value 75 | } 76 | 77 | resource "kubernetes_secret" "main" { 78 | depends_on = [kubectl_manifest.install] 79 | 80 | metadata { 81 | name = data.flux_sync.main.secret 82 | namespace = data.flux_sync.main.namespace 83 | } 84 | 85 | data = { 86 | identity = tls_private_key.main.private_key_pem 87 | "identity.pub" = tls_private_key.main.public_key_pem 88 | known_hosts = "github.com ${var.github_ssh_pub_key}" 89 | } 90 | } 91 | # ================================================================== 92 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/triggers/tekton-argocd-build-deploy-trigger-template.yaml: -------------------------------------------------------------------------------- 1 | # A `TriggerTemplate` specifies what PipelineResource to instantiate, via `spec.resourcetemplates` 2 | 3 | apiVersion: triggers.tekton.dev/v1beta1 4 | kind: TriggerTemplate 5 | metadata: 6 | name: tekton-argocd-build-deploy-trigger-template 7 | spec: 8 | # List of input parameters for this TriggerTemplate (passed in by the EventListener and TriggerBinding) 9 | # You can also define default values for parameters 10 | params: 11 | - name: git-url 12 | description: The git repository url 13 | - name: git-revision 14 | description: The git revision 15 | default: main 16 | - name: application-name 17 | description: The application name 18 | default: 2048-game 19 | - name: docker-registry-name 20 | description: Docker registry name 21 | default: tekton-ci 22 | - name: path-to-image-context 23 | description: Project build path context for docker 24 | default: ./ 25 | - name: path-to-dockerfile 26 | description: Path to project Dockerfile 27 | default: ./Dockerfile 28 | - name: k8s-resources-path 29 | description: Path to project Kubernetes resources 30 | default: resources 31 | - name: k8s-dest-server 32 | description: Targeted Kubernetes server 33 | default: https://kubernetes.default.svc 34 | - name: k8s-dest-namespace 35 | description: Target Kubernetes namespace for the application 36 | default: doks-ci-cd 37 | # Resource templates define what Pipeline to instantiate and run via `PipelineRun` 38 | resourcetemplates: 39 | - apiVersion: tekton.dev/v1beta1 40 | kind: PipelineRun 41 | metadata: 42 | generateName: tekton-argocd-build-deploy-pipeline-run- 43 | spec: 44 | # The Pipeline reference to use and instantiate 45 | pipelineRef: 46 | name: tekton-argocd-build-deploy-pipeline 47 | # List of required input parameters for the Pipeline 48 | params: 49 | - name: git-url 50 | value: $(tt.params.git-url) 51 | - name: git-revision 52 | value: $(tt.params.git-revision) 53 | - name: application-name 54 | value: $(tt.params.application-name) 55 | - name: image-name 56 | value: registry.digitalocean.com/$(tt.params.docker-registry-name)/$(tt.params.application-name) 57 | - name: 
path-to-image-context 58 | value: $(tt.params.path-to-image-context) 59 | - name: path-to-dockerfile 60 | value: $(tt.params.path-to-dockerfile) 61 | - name: k8s-resources-path 62 | value: $(tt.params.k8s-resources-path) 63 | - name: k8s-dest-server 64 | value: $(tt.params.k8s-dest-server) 65 | - name: k8s-dest-namespace 66 | value: $(tt.params.k8s-dest-namespace) 67 | # List of workspace definitions used by the Pipeline (as well as associated PVCs) 68 | workspaces: 69 | - name: git-source 70 | volumeClaimTemplate: 71 | spec: 72 | accessModes: 73 | - ReadWriteOnce 74 | resources: 75 | requests: 76 | storage: 1Gi 77 | - name: docker-config 78 | secret: 79 | secretName: registry-tekton-ci 80 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/kustomization.yaml: -------------------------------------------------------------------------------- 1 | ## Tekton Kustomization 2 | # 3 | # This kustomization is responsible with: 4 | # - Configuring and installing required Tekton resources (such as Tasks, Pipelines, etc) for the CI/CD flow 5 | # - Tekton integration with Knative Eventing to trigger Pipelines via GitHub webhooks 6 | # - Generating required configmaps and secrets for the CI/CD Tekton pipeline to work 7 | ## 8 | 9 | apiVersion: kustomize.config.k8s.io/v1beta1 10 | kind: Kustomization 11 | 12 | # Making sure all resources used in this tutorial are created in a dedicated namespace 13 | # Also specific labels and annotations are added for later identification 14 | namespace: doks-ci-cd 15 | commonAnnotations: 16 | provider: container-blueprints 17 | commonLabels: 18 | pipeline: tekton 19 | deploy: argocd 20 | 21 | resources: 22 | # Tekton catalog (Hub) tasks used in this tutorial 23 | - https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.6/git-clone.yaml 24 | - https://raw.githubusercontent.com/tektoncd/catalog/main/task/kaniko/0.6/kaniko.yaml 25 | - https://raw.githubusercontent.com/tektoncd/catalog/main/task/argocd-task-sync-and-wait/0.2/argocd-task-sync-and-wait.yaml 26 | # Custom Tekton tasks used in this tutorial 27 | - tasks/argocd-task-create-app.yaml 28 | # Custom pipelines used in this tutorial 29 | - pipelines/tekton-argocd-build-deploy.yaml 30 | # Tekton triggers/events resources 31 | - triggers/rbac.yaml 32 | - triggers/tekton-argocd-build-deploy-trigger-template.yaml 33 | - triggers/tekton-argocd-build-deploy-trigger-binding.yaml 34 | - triggers/tekton-argocd-build-deploy-event-listener.yaml 35 | # Knative eventing resources 36 | - eventing/tekton-ci-cd-github-source.yaml 37 | # - eventing/tekton-ci-channel.yaml 38 | # - eventing/tekton-ci-channel-subscribers.yaml 39 | 40 | # Patching the required resources used in this tutorial based on user settings, such as: 41 | # - GitHub owner and repository 42 | # - Docker registry name 43 | patches: 44 | # Patches GitHubSource CRD to point to user GitHub repo 45 | - path: configs/github/githubsource.yaml 46 | target: 47 | group: sources.knative.dev 48 | version: v1alpha1 49 | kind: GitHubSource 50 | name: tekton-ci-cd-github-source 51 | # Patches TriggerTemplate CRD to point to user DO Docker registry 52 | - path: configs/docker/registry.yaml 53 | target: 54 | group: triggers.tekton.dev 55 | version: v1beta1 56 | kind: TriggerTemplate 57 | name: tekton-argocd-build-deploy-trigger-template 58 | 59 | # Disabling name suffix for Kubernetes ConfigMaps and Secrets generated via Kustomize 60 | generatorOptions: 61 | disableNameSuffixHash: true 62 | 63 | 
configMapGenerator: 64 | # Contains the Argo CD server endpoint 65 | # Used by the `sync-application` Task to sync the Argo CD application 66 | - name: argocd-env-configmap 67 | env: configs/argocd/server.env 68 | 69 | secretGenerator: 70 | # Creates a secret containing Docker registry credentials 71 | # Used by the `build-docker-image` Task to push application image to the registry 72 | - name: registry-tekton-ci 73 | files: 74 | - configs/docker/config.json 75 | # Contains authentication credentials for the Argo CD server 76 | # Used by the `sync-application` Task to sync the Argo CD application 77 | - name: argocd-env-secret 78 | env: configs/argocd/auth.env 79 | # Contains the GitHub personal access token (or PAT) 80 | # Used by the `GitHubSource` CRD to access the GitHub API and manage webhooks 81 | - name: tekton-ci-github-pat 82 | env: configs/github/pat.env 83 | -------------------------------------------------------------------------------- /DOKS-CI-CD/assets/manifests/tekton/pipelines/tekton-argocd-build-deploy.yaml: -------------------------------------------------------------------------------- 1 | # `Pipeline` CRD - defines a Tekton Pipeline and associated Tasks 2 | 3 | apiVersion: tekton.dev/v1beta1 4 | kind: Pipeline 5 | metadata: 6 | name: tekton-argocd-build-deploy-pipeline 7 | spec: 8 | # List of input parameters used by this pipeline 9 | # These parameters will be used by each pipeline task subsequently 10 | params: 11 | - name: git-url 12 | - name: git-revision 13 | - name: application-name 14 | - name: image-name 15 | - name: path-to-image-context 16 | - name: path-to-dockerfile 17 | - name: k8s-resources-path 18 | - name: k8s-dest-server 19 | - name: k8s-dest-namespace 20 | # Shared workspaces used by pipeline tasks 21 | workspaces: 22 | - name: git-source 23 | - name: docker-config 24 | # List of tasks performing actions inside this Pipeline 25 | tasks: 26 | ############################################################################# 27 | # Task: `fetch-from-git` 28 | # Role: Fetches application source code from the specified Git repository 29 | # Parameters: 30 | # - `url`: defines Git repository URL 31 | # - `revision`: defines Git revision to use 32 | # Workspaces: 33 | # - `git-source`: used to store Git repository data 34 | ############################################################################# 35 | - name: fetch-from-git 36 | taskRef: 37 | name: git-clone 38 | params: 39 | - name: url 40 | value: $(params.git-url) 41 | - name: revision 42 | value: $(params.git-revision) 43 | workspaces: 44 | - name: output 45 | workspace: git-source 46 | ############################################################################# 47 | # Task: `build-docker-image` 48 | # Role: Builds the docker image for the application 49 | # Parameters: 50 | # - `IMAGE`: defines the docker image name 51 | # - `CONTEXT`: defines the Docker build context path (relative to workspace) 52 | # - `DOCKERFILE`: path to the Dockerfile inside the workspace 53 | # Workspaces: 54 | # - `git-source`: stores application code used to build the docker image 55 | # - `docker-config`: stores docker registry credentials to push the image 56 | ############################################################################# 57 | - name: build-docker-image 58 | taskRef: 59 | name: kaniko 60 | params: 61 | - name: IMAGE 62 | value: $(params.image-name) 63 | - name: CONTEXT 64 | value: $(params.path-to-image-context) 65 | - name: DOCKERFILE 66 | value: $(params.path-to-dockerfile) 67 | workspaces: 68 | - name: 
source 69 | workspace: git-source 70 | - name: dockerconfig 71 | workspace: docker-config 72 | ############################################################################# 73 | # Task: `create-application` 74 | # Role: Tells Argo CD to create a new application (if not present) 75 | # Parameters: 76 | # - `application-name`: the application name to sync 77 | # - `repo-url`: the repository URL hosting the application 78 | # - `resources-path`: path inside the repository for Kubernetes resources 79 | # - `dest-server`: targeted Kubernetes server 80 | # - `dest-namespace`: application target namespace 81 | # - `flags`: extra flags passed to Argo CD 82 | ############################################################################# 83 | - name: create-application 84 | taskRef: 85 | name: argocd-task-create-app 86 | runAfter: 87 | - build-docker-image 88 | params: 89 | - name: application-name 90 | value: $(params.application-name) 91 | - name: repo-url 92 | value: $(params.git-url) 93 | - name: resources-path 94 | value: $(params.k8s-resources-path) 95 | - name: dest-server 96 | value: $(params.k8s-dest-server) 97 | - name: dest-namespace 98 | value: $(params.k8s-dest-namespace) 99 | - name: flags 100 | value: --insecure 101 | ############################################################################# 102 | # Task: `sync-application` 103 | # Role: Tells Argo CD to sync and wait for the application to be ready 104 | # Parameters: 105 | # - `application-name`: application name to sync 106 | # - `flags`: extra flags passed to Argo CD 107 | ############################################################################# 108 | - name: sync-application 109 | taskRef: 110 | name: argocd-task-sync-and-wait 111 | runAfter: 112 | - create-application 113 | params: 114 | - name: application-name 115 | value: $(params.application-name) 116 | - name: flags 117 | value: --insecure 118 | -------------------------------------------------------------------------------- /DOKS-K-Bench-Load-Testing/README.md: -------------------------------------------------------------------------------- 1 | ## Overview 2 | 3 | Load Testing is a non-functional software testing process in which the performance of a system is tested under a specific expected load. It determines how the system behaves while being put under load. The goal of Load Testing is to improve performance bottlenecks and to ensure stability and smooth functioning of the system. Load testing gives confidence in the system & its reliability and performance. 4 | 5 | [K-bench](https://github.com/vmware-tanzu/k-bench) is a framework to benchmark the control and data plane aspects of a Kubernetes infrastructure. K-Bench provides a configurable way to prescriptively create and manipulate Kubernetes resources at scale and eventually provide the relevant control plane and dataplane performance metrics for the target infrastructure. 6 | K-bench allows users to control the client side concurrency, the operations, and how these different types of operations are executed in sequence or in parallel. In particular, user can define, through a config file, a workflow of operations for supported resources. 7 | After a successful run, the benchmark reports metrics (e.g., number of requests, API invoke latency, throughput, etc.) for the executed operations on various resource types. 8 | 9 | In this tutorial, you will configure K-bench. This tool needs to be installed on a droplet, prefferably with access to the target cluster for testing. 
10 | You will be configuring (if not already present) a Prometheus stack for your cluster to observe the results of a test run. 11 | 12 | ## K-bench Architecture Diagram 13 | 14 | ![K-bench Architecture Diagram](assets/images/kbench-overview.png) 15 | 16 | ## Table of Contents 17 | 18 | - [Overview](#overview) 19 | - [K-bench Architecture Diagram](#k-bench-architecture-diagram) 20 | - [Prerequisites](#prerequisites) 21 | - [Creating a DO droplet for K-bench](#creating-a-do-droplet-for-k-bench) 22 | - [K-bench Benchmark Results Sample](#k-bench-benchmark-results-sample) 23 | - [Grafana Metric Visualization](#grafana-metric-visualization) 24 | - [Grafana API Server Dashboard Sample](#grafana-api-server-dashboard-sample) 25 | - [Grafana Node Dashboard Sample](#grafana-node-dashboard-sample) 26 | - [Grafana Pod Count Sample](#grafana-pod-count-sample) 27 | 28 | ## Prerequisites 29 | 30 | To complete this tutorial, you will need: 31 | 32 | 1. A DOKS cluster; refer to [Kubernetes-Starter-Kit-Developers](https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/tree/main/01-setup-DOKS) if one needs to be created 33 | 2. A Prometheus stack installed on the cluster; refer to [Kubernetes-Starter-Kit-Developers](https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/tree/main/04-setup-prometheus-stack) if it's not installed 34 | 3. A droplet which will serve as the K-bench `master` 35 | 36 | ## Creating a DO droplet for K-bench 37 | 38 | In this section, you will create a droplet which will serve as your K-bench master. On this droplet you will clone the K-bench repo, perform the installation, run tests and/or add any new tests which fit your use case. The reason for using a droplet is to have a dedicated resource, decoupled from the cluster, used solely for running load tests and visualizing the benchmark results. 39 | 40 | Follow the steps below to create a droplet, then install and configure K-bench: 41 | 42 | 1. Navigate to your [DO cloud account](https://cloud.digitalocean.com/). 43 | 2. From the Dashboard, click on the `Create` button and select the `Droplets` option. 44 | 3. Choose the Ubuntu distribution, the basic plan, the Regular with SSD CPU option, a region, and under `Authentication` choose the SSH keys option. If no SSH keys are present, [this article](https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/) explains how to create one and add it to the DO account. 45 | 4. From the droplet dashboard, click on the `Console` button. You will then be presented with a screen asking you to `Update Droplet Console`; follow those steps to gain SSH access to the droplet. 46 | 5. Once the SSH access is available, click on the `Console` button again. You will be logged in as root into the droplet. 47 | 6. Clone the [K-bench](https://github.com/vmware-tanzu/k-bench) repository via HTTPS using this command: 48 | 49 | ```console 50 | git clone https://github.com/vmware-tanzu/k-bench.git 51 | ``` 52 | 53 | 7. Navigate to the cloned repository directory. 54 | 55 | ```console 56 | cd k-bench/ 57 | ``` 58 | 59 | 8. Run the install script to install `Go` and any other dependencies `K-bench` has. 60 | 61 | ```console 62 | ./install.sh 63 | ``` 64 | 65 | 9. From the DOKS cluster dashboard, click on `Download Config File` and copy the contents of the config file. `K-bench` needs that information to connect to the cluster. 66 | 10.
Create a kube folder where the kubeconfig will be added, paste the contents copied from Step 9, and save the file. 67 | 68 | ```console 69 | mkdir ~/.kube 70 | vim ~/.kube/config 71 | ``` 72 | 73 | 11. As a validation step, run the test start command which will create a benchmark for the `default` test. 74 | 75 | ```console 76 | ./run.sh 77 | ``` 78 | 79 | 12. If the test was successful, the tool will output that it started and that it is writing the logs to a folder prefixed with `results_run_`. 80 | 13. Open the benchmark log and observe the results. 81 | 82 | ```console 83 | cat results_run_29-Jun-2022-08-06-42-am/default/kbench.log 84 | ``` 85 | 86 | **Notes:** 87 | 88 | The tests are added under the `config` folder of `k-bench`. To change an existing test, its `config.json` file needs to be updated. 89 | A test is run via the `-t` flag supplied by K-bench. For example, running the `cp_heavy_12client` test is done via: `./run.sh -t cp_heavy_12client` 90 | 91 | ## K-bench Benchmark Results Sample 92 | 93 | ![K-bench Benchmark Results Sample](assets/images/benchmark-results-sample.png) 94 | 95 | ## Grafana Metric Visualization 96 | 97 | `K-bench` tests are very easily observable using Grafana. You can create different dashboards to provide observability and understanding of Prometheus metrics. In this section you will explore some useful metrics for Kubernetes, as well as some dashboards which can offer insight into what is happening with the DOKS cluster under load. 98 | 99 | **Notes:** 100 | 101 | This section can only be completed if the Prometheus stack was created earlier in Step 2 of the [Prerequisites](#prerequisites) section or is already installed on the cluster. 102 | 103 | Please follow the steps below: 104 | 105 | 1. Connect to Grafana (using the default credentials: `admin/prom-operator`) by port forwarding to your local machine. 106 | 107 | ```console 108 | kubectl --namespace monitoring port-forward svc/kube-prom-stack-grafana 3000:80 109 | ``` 110 | 111 | 2. Navigate to `http://localhost:3000/` and log in to Grafana. 112 | 3. Import the `Kubernetes System API Server` dashboard by navigating to `http://localhost:3000/dashboard/import`, add the `15761` ID in the box under `Import via grafana.com`, and click `Load`. 113 | 4. From the above-mentioned dashboard you will be able to see the API latency, HTTP requests by code, HTTPS requests by verb, etc. You can use this dashboard to monitor the API under load. 114 | 5. From the Grafana main page, click on the `Dashboards` menu and click on the `Node Exporter Nodes` dashboard to open a node resource oriented dashboard. You can use this dashboard to monitor the resources available in your nodes during a test. 115 | 6. You can also use various metrics to count the number of pods that have been created during a test. For example, from the `Explore` page, enter the following in the metrics browser: `count(kube_pod_info{namespace="kbench-pod-namespace"})`. This will show a graph with the number of pods at any given time (a few more query examples are shown right after this list).
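If you prefer to query the metrics directly instead of (or in addition to) using the Grafana `Explore` page, the same queries can be sent to the Prometheus HTTP API. The snippet below is a minimal sketch: it assumes the Prometheus stack was installed with the `kube-prom-stack` Helm release name in the `monitoring` namespace (as in the Starter Kit), so the Prometheus service name shown here is an assumption and may differ in your setup.

```bash
# Port forward the Prometheus service to your local machine
# (the service name is an assumption based on the kube-prom-stack release name)
kubectl --namespace monitoring port-forward svc/kube-prom-stack-kube-prome-prometheus 9090:9090 &

# Count the pods created by K-bench in its dedicated namespace
curl -s "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=count(kube_pod_info{namespace="kbench-pod-namespace"})'

# API server request rate broken down by verb - useful while a control plane heavy test is running
curl -s "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=sum(rate(apiserver_request_total[5m])) by (verb)'
```

Both queries return a JSON document; the current value(s) can be found under the `data.result` field of the response.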
116 | 117 | ## Grafana API Server Dashboard Sample 118 | 119 | ![Grafana API Server Dashboard Sample](assets/images/grafana-api-server-sample.png) 120 | 121 | ## Grafana Node Dashboard Sample 122 | 123 | ![Grafana Node Dashboard Sample](assets/images/node-dashboard-sample.png) 124 | 125 | ## Grafana Pod Count Sample 126 | 127 | ![Grafana Pod Count Sample](assets/images/pod-count-sample.png) 128 | -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/README.md: -------------------------------------------------------------------------------- 1 | # DigitalOcean Kubernetes Supply Chain Security 2 | 3 | ## Introduction 4 | 5 | This guide provides a short introduction to Kubernetes security best practices in general (they apply to [DOKS](https://docs.digitalocean.com/products/kubernetes/) as well). Then, a practical example is given on how to integrate popular vulnerability scan tools (e.g. [Kubescape](https://github.com/armosec/kubescape/)) in a traditional CI/CD pipeline implemented using [GitHub Workflows](https://docs.github.com/en/actions/using-workflows). 6 | 7 | [Kubernetes](https://kubernetes.io) gained a lot of popularity over time and for a good reason. It is widely used today in every modern infrastructure based on microservices. Kubernetes takes away the burden of managing high availability (or HA) setups, such as scheduling and replicating workloads on different nodes, thus ensuring resiliency. At the networking layer, it also takes care of load balancing and distributes traffic evenly to workloads. At its core, Kubernetes is a modern container scheduler offering additional features such as application configuration and secrets management, to mention a few. You can also set quotas and control applications' access to various resources (such as CPU and memory) by fine-tuning resource requests and limits. In terms of security, you can restrict who has access to what resources via RBAC (Role-Based Access Control) policies. 8 | 9 | Kubernetes has grown a lot in terms of stability and maturity in the past years. On the other hand, due to its popularity, it has become a potential target for malicious attacks. No matter where you run Kubernetes (cloud or on-premise), each cluster is divided into two major components: 10 | 11 | 1. [Control Plane](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) - takes care of scheduling your workloads (Pods) and responding to cluster events (such as starting up a new pod when a deployment's replicas field is unsatisfied). 12 | 2. [Worker Nodes](https://kubernetes.io/docs/concepts/overview/components/#node-components) - these are the actual machines running your Kubernetes workloads. Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment. 13 | 14 | The picture below shows the typical architecture of a Kubernetes cluster and possible weak points: 15 | 16 | ![Kubernetes Architecture Overview](assets/images/DOKS_Overview.png) 17 | 18 | Cloud providers (including [DigitalOcean](https://www.digitalocean.com)) offer today ready to run [Kubernetes](https://docs.digitalocean.com/products/kubernetes/) services, thus taking away the burden of managing the cluster itself (or the control plane component). This way, you can focus more on application development rather than spending time to deal with infrastructure tasks, such as control plane management, worker nodes maintenance (e.g.
performing regular OS updates and security patching), etc. DigitalOcean offers an easy-to-use Kubernetes platform called [DOKS](https://docs.digitalocean.com/products/kubernetes/), which stands for DigitalOcean Kubernetes. DOKS is a [managed Kubernetes](https://docs.digitalocean.com/products/kubernetes/resources/managed/) service that lets you deploy Kubernetes clusters without dealing with the complexities of installing and managing control plane components and containerized infrastructure. 19 | 20 | Going further, a very important aspect which is often overlooked is **security**. Security is a broad term and covers many areas, such as software supply chain security, infrastructure security, networking security, etc. Because Kubernetes is so popular, it has become a potential target for attacks, so care must be taken. Another aspect to look at is the Kubernetes ecosystem complexity. In general, complex systems can have multiple weak points, thus opening multiple doors to external attacks and exploits. Most of the security flaws are caused by improperly configured Kubernetes clusters. A typical example is cluster administrators forgetting to set RBAC rules, or allowing applications to run as root in the Pod specification. Going further, Kubernetes offers a simple but very powerful isolation mechanism (both at the application level and networking layer) - [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). By using namespaces, administrators can isolate application resources and configure access rules for various users and/or teams in a more controlled fashion. 21 | 22 | Kubernetes hardening is a multi-step process, and usually consists of: 23 | 24 | 1. Control plane hardening: 25 | - Reduce the attack surface by securing the public REST API of Kubernetes (authorization, authentication, TLS encryption). 26 | - Regularly update the operating system kernel via patches to include security fixes. Also, system libraries and binaries must be updated regularly. 27 | - Enforce network policies and configure network firewalls to allow minimum to zero access if possible from the outside. Start by denying everything, and then allow only required services. 28 | - **Do not expose the ETCD database publicly!** This is a very important step, because the Kubernetes ETCD database contains your resources' configuration and state. It can also contain sensitive information such as Kubernetes secrets. 29 | - Restrict access to a very limited group of people (usually system administrators). 30 | - Perform system checks regularly by installing/configuring a security audit tool, and receive alerts in real time in case of a security breach. 31 | 2. Worker nodes hardening. Most of the control plane recommendations apply here as well, with a few notes such as: 32 | - Never expose [kubelets](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) or [kube-proxies](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) publicly. 33 | - Avoid exposing the SSH service to the public. This is recommended to reduce the attack surface. For system administration you can use a VPN setup. 34 | 3. Kubernetes applications environment hardening: 35 | - Kubernetes YAML manifests scanning for misconfigurations. 36 | - Strict permissions for Pods and containers (such as not allowing the root user within the Pod spec, immutable filesystems for containers, etc.) 37 | - Proper configuration of RBAC policies.
38 | - Container images signing, thus allowing only trusted images to run. 39 | - Setting up admission controllers to allow trusted (or signed) container images only. 40 | - Network policies and namespace isolation (restrict applications to dedicated namespaces, as well as controlling ingress/egress traffic between namespaces). 41 | - Periodic Kubernetes cluster scanning. 42 | 4. Hardening the software supply chain: 43 | - Application source code and 3rd party libraries scanning for known vulnerabilities. 44 | - Application container images scanning for known vulnerabilities. 45 | 46 | The picture below illustrates the recommended steps to achieve end-to-end security for Kubernetes: 47 | 48 | ![DOKS End to End Security](assets/images/DOKS_E2E_Security.png) 49 | 50 | In the case of DOKS, you don't have to worry about control plane and worker node security because this is already taken care of by the cloud provider (DigitalOcean). This is one of the main benefits of using a managed Kubernetes service. Still, users have access to the underlying machines (Droplets) and firewall settings, so it all circles back to administrators' diligence to pay attention and not expose services or ports that are not really required. 51 | 52 | What's left is taking measures to harden the Kubernetes applications environment and software supply chain. This guide is mainly focused on Kubernetes supply chain security, and it will teach you to: 53 | 54 | 1. Run vulnerability scans in the early stages, e.g. within your CI/CD pipelines (Tekton, Jenkins, GitHub workflows, etc.). 55 | 2. Run periodic scans within your Kubernetes cluster, as well as for any new deployments. 56 | 3. Evaluate security risks and take the appropriate actions to reduce the risk factor to a minimum. 57 | 4. Get notified in real time (e.g. via Slack) about possible threats in your Kubernetes cluster. 58 | 59 | ## Kubernetes Environment and Software Supply Chain Security 60 | 61 | To build an application and run it on Kubernetes, you need a list of ingredients which are part of the software supply chain. The software supply chain is usually composed of: 62 | 63 | - A Git repository from where your application source code is retrieved. 64 | - Third party libraries that your application may use (fetched via a package management or build tool, such as npm, Maven, Gradle, etc.). 65 | - Docker images hosting your application (including inherited base images). 66 | - YAML manifests that tell Kubernetes to create the required resources for your application to run, such as Pods, Deployments, Secrets, ConfigMaps, etc. 67 | 68 | Hardening the Kubernetes applications environment and software supply chain can be accomplished in the early stages, at the CI/CD pipeline level. Every modern infrastructure uses a CI/CD system nowadays to build and deploy applications, hence the focus on this stage. 69 | 70 | The first step required to harden your Kubernetes environment is to use a dedicated tool that continuously scans for vulnerabilities, both at the CI/CD pipeline level and across the entire Kubernetes cluster. 71 | 72 | There are many vulnerability scanning tools available, but this guide focuses on two implementations - [Snyk](https://snyk.io) and [Armosec Kubescape](https://github.com/armosec/kubescape/). 73 | 74 | Without further ado, please pick one to start with from the list in the next section. If you want a quick feel for what these scanners report before wiring them into a CI/CD pipeline, both also ship a CLI that can be run locally, as shown in the example right below.
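The following commands are only a minimal, illustrative sketch (the dedicated Snyk and Kubescape guides linked in the next section cover the full CI/CD integration). They assume the respective CLIs are already installed and, for Snyk, that you have authenticated via `snyk auth`; the manifest path and container image name are placeholders.

```bash
# Kubescape: scan the cluster from your current kubeconfig context against the NSA framework
kubescape scan framework nsa

# Kubescape: scan local Kubernetes YAML manifests for misconfigurations (path is a placeholder)
kubescape scan ./k8s-manifests/*.yaml

# Snyk: scan a container image for known vulnerabilities (image name is a placeholder)
snyk container test registry.digitalocean.com/my-registry/game-2048:latest

# Snyk: scan application source dependencies in the current project directory
snyk test
```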
75 | 76 | ## Kubernetes Vulnerability Scanning Tools 77 | 78 | | SNYK | KUBESCAPE | 79 | |:-----------------------------------------------:|:--------------------------------------------------------------:| 80 | | [![Snyk](assets/images/snyk/logo.png)](snyk.md) | [![Kubescape](assets/images/kubescape/logo.png)](kubescape.md) | 81 | -------------------------------------------------------------------------------- /DOKS-Automatic-Node-Repair/README.md: -------------------------------------------------------------------------------- 1 | 2 | # Automatic Node Repair on DigitalOcean Kubernetes 3 | 4 | ## Introduction 5 | 6 | When a node in a DigitalOcean Kubernetes cluster is unhealthy or not ready, replacing the node is a manual and cumbersome task. Until the nodes are replaced, the cluster operates at lower capacity because the unhealthy nodes will not run any Pods. 7 | 8 | Cluster nodes can become unhealthy when the `kubelet` service dies or is unresponsive. This can happen for several reasons: 9 | 10 | - The worker node is *overloaded*. 11 | - *Networking issues*: This can happen if the node loses connectivity to the Kubernetes API server or if there are issues with the network overlay used by the cluster. 12 | - *Resource constraints*: If the node does not have enough resources (such as CPU, memory, or disk) to execute the pods scheduled on it, it can become "NotReady." This can happen if a pod's resource requests are greater than the node's available resources. 13 | - *Hardware failures*: If there is a hardware failure on the node (such as a disk or network interface failure), the node can become "NotReady." 14 | 15 | This tutorial provides an automated way to recycle unhealthy nodes in a DigitalOcean Kubernetes (DOKS) cluster using [Digital Mobius](https://github.com/Qovery/digital-mobius). 16 | 17 | ![Digital Mobius Install](content/img/digital-mobius-install.png) 18 | 19 | ### Prerequisites 20 | 21 | - [DigitalOcean access token](https://docs.digitalocean.com/reference/api/create-personal-access-token) for managing the DOKS cluster. Ensure that the access token has a *read-write* scope. 22 | 23 | ```bash 24 | export DIGITAL_OCEAN_TOKEN="" 25 | # Copy the token value and save it in a local environment variable to use later 26 | ``` 27 | 28 | - [doctl CLI](https://docs.digitalocean.com/reference/doctl/how-to/install) 29 | 30 | ```bash 31 | # Initialize doctl 32 | doctl auth init --access-token "$DIGITAL_OCEAN_TOKEN" 33 | ``` 34 | 35 | - [Helm CLI](https://helm.sh/docs/intro/install/) 36 | 37 | ## Digital Mobius Setup 38 | 39 | Digital Mobius is an open-source application written in Go specifically for DOKS cluster node recycling. At specified regular intervals, the application checks for DOKS cluster nodes that are in an unhealthy state. 40 | 41 | Digital Mobius needs a set of environment variables to be configured and available. You can see these variables in the [values.yaml](https://github.com/Qovery/digital-mobius/blob/main/charts/Digital-Mobius/values.yaml): 42 | 43 | ```yaml 44 | LOG_LEVEL: "info" 45 | DELAY_NODE_CREATION: "10m" # Node recycle period 46 | DIGITAL_OCEAN_TOKEN: "" # Personal DO API token value 47 | DIGITAL_OCEAN_CLUSTER_ID: "" # DOKS cluster ID that needs to be monitored 48 | ``` 49 | 50 | **Note:** 51 | 52 | Choose an appropriate value for `DELAY_NODE_CREATION`. A value that is too low will interfere with the time interval needed for a node to become ready and available after it gets recycled. In real-world situations, this can take several minutes or more to complete.
A good starting point is `10m`, the value used in this tutorial. 53 | 54 | ### Configure and Deploy 55 | 56 | Digital Mobius can be easily deployed using the [Helm chart](https://github.com/Qovery/digital-mobius/tree/main/charts/Digital-Mobius) (or [artifacthub.io](https://artifacthub.io/packages/helm/digital-mobius/digital-mobius)). 57 | 58 | 1. Add the required Helm repository: 59 | 60 | ```bash 61 | helm repo add digital-mobius https://qovery.github.io/digital-mobius 62 | ``` 63 | 64 | 2. Fetch the cluster-ID that you want to monitor for node failures: 65 | 66 | ```bash 67 | doctl k8s cluster list 68 | export DIGITAL_OCEAN_CLUSTER_ID="" 69 | ``` 70 | 71 | 3. Set the DigitalOcean access token: 72 | 73 | ```bash 74 | export DIGITAL_OCEAN_TOKEN="" 75 | echo "$DIGITAL_OCEAN_TOKEN" 76 | ``` 77 | 78 | 4. Start the deployment in a dedicated namespace. This example uses `maintenance` as the namespace: 79 | 80 | ```bash 81 | helm install digital-mobius digital-mobius/digital-mobius --version 0.1.4 \ 82 | --set environmentVariables.DIGITAL_OCEAN_TOKEN="$DIGITAL_OCEAN_TOKEN" \ 83 | --set environmentVariables.DIGITAL_OCEAN_CLUSTER_ID="$DIGITAL_OCEAN_CLUSTER_ID" \ 84 | --set enabledFeatures.disableDryRun=true \ 85 | --namespace maintenance --create-namespace 86 | ``` 87 | 88 | **Note:** 89 | 90 | The `enabledFeatures.disableDryRun` option enables or disables the tool’s `DRY RUN` mode. Setting it to `true` means the dry run mode is disabled, and the cluster nodes will be recycled. Enabling the dry run mode is helpful if you want to test it first without performing any changes to the actual cluster nodes. 91 | 92 | 5. Check the deployment once it completes. 93 | 94 | ```bash 95 | # List the deployments 96 | helm ls -n maintenance 97 | ``` 98 | 99 | The output looks similar to the following: 100 | 101 | ```bash 102 | NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION 103 | digital-mobius maintenance 1 2023-03-04 11:24:10.131055 +0300 EEST deployed digital-mobius-0.1.4 0.1.4 104 | ``` 105 | 106 | Verify the running Pod(s): 107 | 108 | ```bash 109 | kubectl get pods -n maintenance 110 | ``` 111 | 112 | The output looks similar to the following: 113 | 114 | ```bash 115 | NAME READY STATUS RESTARTS AGE 116 | digital-mobius-55fbc9fdd-dzxbh 1/1 Running 0 8s 117 | ``` 118 | 119 | Inspect the logs: 120 | 121 | ```bash 122 | kubectl logs -l app.kubernetes.io/name=digital-mobius -n maintenance 123 | ``` 124 | 125 | The output looks similar to the following: 126 | 127 | ```bash 128 | _ _ _ _ _ _ _ 129 | __| (_) __ _(_) |_ __ _| | _ __ ___ ___ | |__ (_)_ _ ___ 130 | / _` | |/ _` | | __/ _` | | | '_ ` _ \ / _ \| '_ \| | | | / __| 131 | | (_|| | (_| | | || (_| | | | | | | | | (_) | |_) | | |_| \__ \ 132 | \__,_|_|\__, |_|\__\__,_|_| |_| |_| |_|\___/|_.__/|_|\__,_|___/ 133 | |___/ 134 | time="2023-03-04T08:29:52Z" level=info msg="Starting Digital Mobius 0.1.4 135 | ``` 136 | 137 | Now that we have successfully deployed `Digital Mobius,`. Let us check out the underlying logic in which it operates. 138 | 139 | ## Automatic Node Repair Logic 140 | 141 | A node is considered unhealthy if the [node condition](https://kubernetes.io/docs/concepts/architecture/nodes/#condition) is `Ready` and the status is `False` or `Unknown.` Then, the application recreates the affected node(s) using the DigitalOcean [Delete Kubernetes Node API](https://docs.digitalocean.com/reference/api/api-reference/#operation/kubernetes_delete_node). 
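To make the repair logic more concrete, the same flow can be sketched by hand with `kubectl` and a raw call to the DigitalOcean API. This is only an illustration of the logic described above, not how Digital Mobius is implemented internally; the `$NODE_POOL_ID` and `$NODE_ID` variables are placeholders you would normally look up via `doctl kubernetes cluster node-pool list <cluster-id>` or the API.

```bash
# List nodes whose Ready condition is not True (False or Unknown means the node is unhealthy)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' \
  | awk '$2 != "True"'

# Recycle an unhealthy node by deleting it through the DigitalOcean API;
# DOKS then brings up a replacement to satisfy the node pool's node count.
curl -X DELETE \
  -H "Authorization: Bearer $DIGITAL_OCEAN_TOKEN" \
  "https://api.digitalocean.com/v2/kubernetes/clusters/$DIGITAL_OCEAN_CLUSTER_ID/node_pools/$NODE_POOL_ID/nodes/$NODE_ID"
```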
142 | 143 | The following diagram shows how Digital Mobius checks the worker node(s) state: 144 | 145 | mobius-flow 146 | 147 | ## Simulate a Worker Node Problem 148 | 149 | We must disconnect one or more nodes from the DOKS cluster to test the Digital Mobius setup. To do this, we will use the [doks-debug](https://github.com/digitalocean/doks-debug) tool to create some debug pods that run containers with elevated privileges. To access the running containers in the debug pods, we will use `kubectl exec.` This command will allow us to execute commands inside the containers and gain access to the worker node(s) system services. 150 | 151 | - Create DOKS debug pods: 152 | 153 | ```bash 154 | # This will spin up the debug pods in the `kube-system` namespace: 155 | kubectl apply -f https://raw.githubusercontent.com/digitalocean/doks-debug/master/k8s/daemonset.yaml 156 | ``` 157 | 158 | Verify the DaemonSet: 159 | 160 | ```bash 161 | kubectl get ds -n kube-system 162 | ``` 163 | 164 | The output looks similar to the following (notice the `doks-debug` entry): 165 | 166 | ```bash 167 | NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 168 | cilium 3 3 3 3 3 kubernetes.io/os=linux 4d1h 169 | cpc-bridge-proxy 3 3 3 3 3 4d1h 170 | csi-do-node 3 3 3 3 3 4d1h 171 | do-node-agent 3 3 3 3 3 kubernetes.io/os=linux 4d1h 172 | doks-debug 3 3 3 3 3 3d22h 173 | konnectivity-agent 3 3 3 3 3 4d1h 174 | kube-proxy 3 3 3 3 3 4d1h 175 | ``` 176 | 177 | Verify the debug pods: 178 | 179 | ```bash 180 | kubectl get pods -l name=doks-debug -n kube-system 181 | ``` 182 | 183 | The output looks similar to the following: 184 | 185 | ```bash 186 | NAME READY STATUS RESTARTS AGE 187 | doks-debug-dckbv 1/1 Running 0 3d22h 188 | doks-debug-rwzgm 1/1 Running 0 3d22h 189 | doks-debug-s9cbp 1/1 Running 0 3d22h 190 | ``` 191 | 192 | - Kill the `kubelet` service 193 | 194 | Use `kubectl exec` in one of the debug pods and get access to worker node system services. Then, stop the kubelet service, which results in the node going away from the `kubectl get nodes` command output. 195 | 196 | Open a new terminal window and watch the worker nodes: 197 | 198 | ```bash 199 | watch "kubectl get nodes" 200 | ``` 201 | 202 | Pick the first debug pod and access the shell: 203 | 204 | ```bash 205 | kubectl exec -it -n kube-system -- bash 206 | ``` 207 | 208 | A prompt that looks similar to the following appears: 209 | 210 | ```bash 211 | root@doks-debug-dckbv:~# 212 | ``` 213 | 214 | Inspect the system service: 215 | 216 | ```bash 217 | chroot /host /bin/bash 218 | systemctl status kubelet 219 | ``` 220 | 221 | The output looks similar to the following: 222 | 223 | ```shell 224 | ● kubelet.service - Kubernetes Kubelet Server 225 | Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled) 226 | Active: active (running) since Fri 2023-03-04 08:48:42 UTC; 2h 18min ago 227 | Docs: https://kubernetes.io/docs/concepts/overview/components/#kubelet 228 | Main PID: 1053 (kubelet) 229 | Tasks: 17 (limit: 4701) 230 | Memory: 69.3M 231 | CGroup: /system.slice/kubelet.service 232 | └─1053 /usr/bin/kubelet --config=/etc/kubernetes/kubelet.conf --logtostderr=true --image-pull-progress-deadline=5m 233 | ... 234 | ``` 235 | 236 | Stop the kubelet: 237 | 238 | ```bash 239 | systemctl stop kubelet 240 | ``` 241 | 242 | simulate-node-failure 243 | 244 | ### Observe the Worker Nodes 245 | 246 | After you stop the kubelet service, you will be kicked out of the shell session. 
This means the node controller lost connection with the affected node where the kubelet service was killed. 247 | 248 | You can see the `NotReady` state of the affected node in the other terminal window where you set the watch: 249 | 250 | ```bash 251 | NAME STATUS ROLES AGE VERSION 252 | game-q44rc Ready 3d22h v1.26.3 253 | game-q4507 Ready 4d1h v1.26.3 254 | game-q450c NotReady 4d1h v1.26.3 255 | ``` 256 | 257 | After the time interval you specified in `DELAY_NODE_CREATION` expires, the node vanishes as expected: 258 | 259 | ```bash 260 | NAME STATUS ROLES AGE VERSION 261 | game-q44rc Ready 3d22h v1.26.3 262 | game-q4507 Ready 4d1h v1.26.3 263 | ``` 264 | 265 | Next, check how Digital Mobius monitors the DOKS cluster. Open a terminal window and inspect the logs first: 266 | 267 | ```bash 268 | kubectl logs -l app.kubernetes.io/name=digital-mobius -n maintenance 269 | ``` 270 | 271 | The output looks like below (watch for the `Recycling node {...}` lines): 272 | 273 | ```bash 274 | _ _ _ _ _ _ _ 275 | __| (_) __ _(_) |_ __ _| | _ __ ___ ___ | |__ (_)_ _ ___ 276 | / _` | |/ _` | | __/ _` | | | '_ ` _ \ / _ \| '_ \| | | | / __| 277 | | (_| | | (_| | | || (_| | | | | | | | | (_) | |_) | | |_| \__ \ 278 | \__,_|_|\__, |_|\__\__,_|_| |_| |_| |_|\___/|_.__/|_|\__,_|___/ 279 | |___/ 280 | time="2023-03-04T08:29:52Z" level=info msg="Starting Digital Mobius 0.1.4 \n" 281 | time="2023-03-04T11:13:09Z" level=info msg="Recyling node {11bdd0f1-8bd0-42dc-a3af-7a83bc319295 f8d76723-2b0e-474d-9465-d9da7817a639 379826e4-8d1b-4ba4-97dd-739bbfa69023}" 282 | ... 283 | ``` 284 | 285 | In the terminal window where you set the watch for `kubectl get nodes,` a new node appears after a minute, replacing the old one. The new node has a different ID and a new `AGE` value: 286 | 287 | ```bash 288 | NAME STATUS ROLES AGE VERSION 289 | game-q44rc Ready 3d22h v1.26.3 290 | game-q4507 Ready 4d1h v1.26.3 291 | game-q450d Ready 22s v1.26.3 292 | ``` 293 | 294 | As you can see, the node was automatically recycled. 295 | 296 | ## Summary 297 | 298 | In conclusion, while automatic recovery of cluster nodes is a valuable feature, it is crucial to prioritize node health monitoring and load management to prevent frequent node failures. In addition, properly setting Pod resource limits, such as setting and using fair values, can also help avoid overloading nodes. By adopting these best practices, you can ensure the stability and reliability of your Kubernetes cluster, avoiding costly downtime and service disruptions. 299 | 300 | ### References 301 | 302 | - [GitHub](https://github.com/digitalocean/container-blueprints/tree/main/DOKS-automatic-node-repair) 303 | - [Digital Mobius](https://github.com/Qovery/digital-mobius) 304 | -------------------------------------------------------------------------------- /DOKS-Terraform-and-Flux/README.md: -------------------------------------------------------------------------------- 1 | # Create a GitOps Stack Using DigitalOcean Kubernetes and Flux CD 2 | 3 | This tutorial will guide you on how to use [Flux](https://fluxcd.io) to manage application deployments on a DigitalOcean Kubernetes (DOKS) cluster in a GitOps fashion. [Terraform](https://www.terraform.io) will be responsible with spinning up the DOKS cluster as well as Flux. In the end, you will also tell Flux to perform a basic deployment of the BusyBox Docker application. 4 | 5 | Terraform is one of the most popular tools to write infrastructure-as-code using declarative configuration files. 
You can write concise descriptions of resources using blocks, arguments, and expressions. 6 | 7 | Flux is used for managing the continuous delivery of applications inside a DOKS cluster and enable GitOps. The built-in [controllers](https://fluxcd.io/docs/components) help you create the required GitOps resources. 8 | 9 | The following diagram illustrates the DOKS cluster, Terraform and Flux setup: 10 | 11 | ![TF-DOKS-FLUX-CD](assets/img/tf_doks_fluxcd_flow.png) 12 | 13 | **Note**: Following the steps below will result in charges for the use of DigitalOcean resources. [Delete the resources](#step-7-deleting-the-resources) at the end of this tutorial to avoid being billed for additional resources. 14 | 15 | ## Prerequisites 16 | 17 | To complete this tutorial, you will need: 18 | 19 | - A [GitHub](https://github.com) repository and branch for Flux CD to store the cluster and your Kubernetes custom application deployment manifests. 20 | 21 | - A GitHub [personal access token](https://github.com/settings/tokens) that has the repo permissions set. The [Terraform module](https://www.terraform.io/docs/language/modules/index.html) provided in this tutorial needs it in order to create the SSH deploy key, and to commit the Flux CD cluster manifests in your Git repository. 22 | 23 | - A [git client](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). 24 | 25 | - A [DigitalOcean access token](https://docs.digitalocean.com/reference/api/create-personal-access-token) for creating and managing the DOKS cluster. Copy the token value and save it somewhere safe. 26 | 27 | - A [DigitalOcean Space](https://docs.digitalocean.com/products/spaces/how-to/create/) for storing the Terraform state file. Make sure that it is set to restrict file listing for security reasons. 28 | 29 | - [Access keys](https://docs.digitalocean.com/products/spaces/how-to/manage-access/) for DigitalOcean Spaces. Copy the `key` and `secret` values and save each in a local environment variable to use later: 30 | 31 | ```shell 32 | export DO_SPACES_ACCESS_KEY="" 33 | export DO_SPACES_SECRET_KEY="" 34 | ``` 35 | 36 | - [Terraform](https://docs.digitalocean.com/reference/terraform/getting-started/#install-terraform). 37 | 38 | - [`doctl`](https://docs.digitalocean.com/reference/doctl/how-to/install) for interacting with DigitalOcean API. 39 | 40 | - [`kubectl`](https://kubernetes.io/docs/tasks/tools) for interacting with Kubernetes. 41 | 42 | - [`flux`](https://fluxcd.io/docs/installation) for interacting with Flux. 43 | 44 | ## STEP 1: Cloning the Sample GitHub Repository 45 | 46 | Clone the repository on your local machine and navigate to the appropriate directory: 47 | 48 | ```shell 49 | git clone https://github.com/digitalocean/container-blueprints.git 50 | 51 | cd container-blueprints/create-doks-with-terraform-flux 52 | ``` 53 | 54 | This repository is a Terraform module. You can inspect the options available inside the [variables.tf](https://github.com/digitalocean/container-blueprints/blob/main/create-doks-with-terraform-flux/variables.tf) file. 55 | 56 | ## Step 2: Bootstrapping DOKS and Flux 57 | 58 | The bootstrap process creates a DOKS cluster and provisions Flux using Terraform. 59 | 60 | First, you are going to initialize the Terraform backend. Then, you will create a Terraform plan to inspect the infrastructure and apply it to create all the required resources. After it finishes, you should have a fully functional DOKS cluster with Flux CD deployed and running. 
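
At a high level, the bootstrap boils down to the standard Terraform workflow. The sketch below is only a preview; the exact flags, backend configuration, and plan file name are covered in the numbered steps that follow:

```shell
# 1. Initialize the DO Spaces (S3-compatible) backend
terraform init

# 2. Preview the DOKS cluster and Flux CD resources that will be created
terraform plan

# 3. Create the resources and commit the Flux CD manifests to your Git repository
terraform apply
```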
Follow these steps to bootstrap DOKS and Flux:

1. Rename the provided [`backend.tf.sample`](https://github.com/digitalocean/container-blueprints/blob/main/create-doks-with-terraform-flux/backend.tf.sample) file to `backend.tf`. Edit the file and replace the placeholder values with the name of your bucket and the name of the Terraform state file you want to create.

We strongly recommend using a [DigitalOcean Spaces](https://cloud.digitalocean.com/spaces) bucket for storing the Terraform state file. As long as the Space is private, your sensitive data is secure. The data is also backed up, and you can collaborate with others using your Space.

```text
# Store the state file using a DigitalOcean Spaces bucket

terraform {
  backend "s3" {
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    endpoint                    = ".digitaloceanspaces.com" # replace , leave the rest as is (e.g: fra1.digitaloceanspaces.com)
    region                      = "us-east-1" # leave this as is (Terraform expects the AWS format - N/A for DO Spaces)
    bucket                      = "" # replace this with your bucket name
    key                         = "" # replace this with your state file name (e.g. terraform.tfstate)
  }
}
```

2. Initialize the Terraform backend.

Use the previously created DO Spaces access and secret keys to initialize the Terraform backend:

```shell
terraform init --backend-config="access_key=$DO_SPACES_ACCESS_KEY" --backend-config="secret_key=$DO_SPACES_SECRET_KEY"
```

The output looks similar to the following:

```text
Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding hashicorp/kubernetes versions matching "2.3.2"...
- Finding gavinbunney/kubectl versions matching "1.11.2"...
...
```

3. Rename the `terraform.tfvars.sample` file to `terraform.tfvars`. Edit the file and replace the placeholder values with your DOKS and GitHub information.

```text
# DOKS
do_api_token                 = "" # DigitalOcean API TOKEN
doks_cluster_name            = "" # Name of this `DOKS` cluster
doks_cluster_region          = "" # What region should this `DOKS` cluster be provisioned in?
doks_cluster_version         = "" # What Kubernetes version should this `DOKS` cluster use ?
doks_cluster_pool_size       = "" # What machine type to use for this `DOKS` cluster ?
doks_cluster_pool_node_count =    # How many worker nodes this `DOKS` cluster should have ?

# GitHub
github_user              = "" # Your `GitHub` username
github_token             = "" # Your `GitHub` personal access token
git_repository_name      = "" # Git repository where `Flux CD` manifests should be stored
git_repository_branch    = "" # Branch name to use for this `Git` repository (e.g.: `main`)
git_repository_sync_path = "" # Git repository path where the manifests to sync are committed (e.g.: `clusters/dev`)
```

4. Create a Terraform plan and inspect the infrastructure changes:

```shell
terraform plan -out doks_fluxcd_cluster.out
```

5. Apply the changes:

```shell
terraform apply "doks_fluxcd_cluster.out"
```

The output looks similar to the following:

```text
tls_private_key.main: Creating...
kubernetes_namespace.flux_system: Creating...
139 | github_repository.main: Creating... 140 | tls_private_key.main: Creation complete after 2s [id=1d5ddec06b0f4daeea57d3a987029c1153ebcb21] 141 | kubernetes_namespace.flux_system: Creation complete after 2s [id=flux-system] 142 | kubectl_manifest.install["v1/serviceaccount/flux-system/source-controller"]: Creating... 143 | kubectl_manifest.sync["kustomize.toolkit.fluxcd.io/v1beta1/kustomization/flux-system/flux-system"]: Creating... 144 | kubectl_manifest.install["v1/serviceaccount/flux-system/helm-controller"]: Creating... 145 | kubectl_manifest.install["networking.k8s.io/v1/networkpolicy/flux-system/allow-egress"]: Creating... 146 | ... 147 | ``` 148 | 149 | The DOKS cluster and Flux are up and running. 150 | 151 | ![DOKS state](https://github.com/digitalocean/container-blueprints/blob/main/create-doks-with-terraform-flux/assets/img/doks_created.png?raw=true) 152 | 153 | Check that the Terraform state file is saved in your Spaces bucket. 154 | 155 | ![DigitalOcean Spaces Terraform state file](https://github.com/digitalocean/container-blueprints/blob/main/create-doks-with-terraform-flux/assets/img/tf_state_s3.png?raw=true) 156 | 157 | Check that the Flux CD manifests for your DOKS cluster are also present in your Git repository. 158 | 159 | ![GIT repo state](https://github.com/digitalocean/container-blueprints/blob/main/create-doks-with-terraform-flux/assets/img/flux_git_res.png?raw=true) 160 | 161 | ## Step 3: Inspecting DOKS Cluster State 162 | 163 | List the available Kubernetes clusters: 164 | 165 | ```shell 166 | doctl kubernetes cluster list 167 | ``` 168 | 169 | [Authenticate](https://docs.digitalocean.com/products/kubernetes/how-to/connect-to-cluster/#doctl) your cluster: 170 | 171 | ```shell 172 | doctl k8s cluster kubeconfig save 173 | ``` 174 | 175 | [Check that the current context](https://docs.digitalocean.com/products/kubernetes/how-to/connect-to-cluster/#contexts) points to your cluster: 176 | 177 | ```shell 178 | kubectl config get-contexts 179 | ``` 180 | 181 | List the cluster nodes and check the `STATUS` column to make sure that they're in a healthy state: 182 | 183 | ```shell 184 | kubectl get nodes 185 | ``` 186 | 187 | The output looks similar to: 188 | 189 | ```text 190 | NAME STATUS ROLES AGE VERSION 191 | dev-fluxcd-cluster-pool-8z9df Ready 3d2h v1.21.3 192 | dev-fluxcd-cluster-pool-8z9dq Ready 3d2h v1.21.3 193 | dev-fluxcd-cluster-pool-8z9dy Ready 3d2h v1.21.3 194 | ``` 195 | 196 | ## Step 4: Inspecting Flux Deployment and Configuration 197 | 198 | Check the status of Flux: 199 | 200 | ```shell 201 | flux check 202 | ``` 203 | 204 | The output looks similar to the following: 205 | 206 | ```text 207 | ► checking prerequisites 208 | ✔ kubectl 1.21.3 >=1.18.0-0 209 | ✔ Kubernetes 1.21.2 >=1.16.0-0 210 | ► checking controllers 211 | ✗ helm-controller: deployment not ready 212 | ► ghcr.io/fluxcd/helm-controller:v0.11.1 213 | ✔ kustomize-controller: deployment ready 214 | ► ghcr.io/fluxcd/kustomize-controller:v0.13.1 215 | ✔ notification-controller: deployment ready 216 | ► ghcr.io/fluxcd/notification-controller:v0.15.0 217 | ✔ source-controller: deployment ready 218 | ► ghcr.io/fluxcd/source-controller:v0.15.3 219 | ✔ all checks passed 220 | ``` 221 | 222 | Flux comes with [CRDs](https://fluxcd.io/docs/components/helm/helmreleases#crds) that let you define the required components for a GitOps-enabled environment. An associated controller must also be present to handle the CRDs and maintain their state, as defined in the manifest files. 
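
If you are curious which CRDs and controllers were installed on your behalf, you can list them directly. This is an optional verification step, and the exact set of CRDs may differ slightly between Flux versions:

```shell
# List the Flux CRDs registered in the cluster
kubectl get crds | grep fluxcd.io

# List the Flux controller deployments running in the flux-system namespace
kubectl get deployments -n flux-system
```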
223 | 224 | The following controllers come with Flux: 225 | 226 | - [Source Controller](https://fluxcd.io/docs/components/source/) - responsible for handling the [Git Repository](https://fluxcd.io/docs/components/source/gitrepositories) CRD. 227 | - [Kustomize Controller](https://fluxcd.io/docs/components/kustomize) - responsible for handling the [Kustomization](https://fluxcd.io/docs/components/kustomize/kustomization) CRD. 228 | 229 | By default, Flux uses a [Git repository](https://fluxcd.io/docs/components/source/gitrepositories) and a [Kustomization](https://fluxcd.io/docs/components/kustomize/kustomization) resource. The Git repository tells Flux where to sync files from, and points to a Git repository and branch. The Kustomization resource tells Flux where to find your application `kustomizations`. 230 | 231 | Inspect all the Flux resources: 232 | 233 | ```shell 234 | flux get all 235 | ``` 236 | 237 | The output looks similar to the following: 238 | 239 | ```text 240 | NAME READY MESSAGE REVISION SUSPENDED 241 | gitrepository/flux-system True Fetched revision: main/1d69... main/1d69... False 242 | 243 | NAME READY MESSAGE REVISION SUSPENDED 244 | kustomization/flux-system True Applied revision: main/1d69... main/1d69c... False 245 | ``` 246 | 247 | Terraform provisions the `gitrepository/flux-system` and `kustomization/flux-system` resources for your DOKS cluster. Inspect the Git repository resource: 248 | 249 | ```shell 250 | flux export source git flux-system 251 | ``` 252 | 253 | The output looks similar to: 254 | 255 | ```text 256 | --- 257 | apiVersion: source.toolkit.fluxcd.io/v1beta1 258 | kind: GitRepository 259 | metadata: 260 | ... 261 | spec: 262 | gitImplementation: go-git 263 | interval: 1m0s 264 | ref: 265 | branch: main 266 | secretRef: 267 | name: flux-system 268 | timeout: 20s 269 | url: ssh://git@github.com/test-github-user/test-git-repo.git 270 | ``` 271 | 272 | In the `spec`, note the following parameter values: 273 | 274 | - `url`: The Git repository URL to sync manifests from, set to `ssh://git@github.com/test-github-user/test-git-repo.git` in this example. 275 | - `branch`: The repository branch to use - set to `main` in this example. 276 | - `interval`: The time interval to use for syncing, set to `1 minute` by default. 277 | 278 | Next, inspect the Kustomization resource: 279 | 280 | ```shell 281 | flux export kustomization flux-system 282 | ``` 283 | 284 | The output looks similar to: 285 | 286 | ```text 287 | --- 288 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta1 289 | kind: Kustomization 290 | metadata: 291 | ... 292 | spec: 293 | interval: 10m0s 294 | path: ./clusters/dev 295 | prune: true 296 | sourceRef: 297 | kind: GitRepository 298 | name: flux-system 299 | validation: client 300 | ``` 301 | 302 | In the `spec`, note the following parameter values: 303 | 304 | - `interval`: The time interval to use for syncing, set to `10 minutes` by default. 305 | - `path`: The path from the Git repository where this Kustomization manifest is kept. 306 | - `sourceRef`: Shows that it is using another resource to fetch the manifests - a `GitRepository` in this case. The `name` field uniquely identifies the referenced resource - `flux-system`. 307 | 308 | In case you need to troubleshoot or see what Flux CD is doing, you can access the logs by running the following command: 309 | 310 | ```shell 311 | flux logs 312 | ``` 313 | 314 | The output looks similar to the following: 315 | 316 | ```text 317 | ... 
318 | 2021-07-20T12:31:36.696Z info GitRepository/flux-system.flux-system - Reconciliation finished in 1.193290329s, next run in 1m0s 319 | 2021-07-20T12:32:37.873Z info GitRepository/flux-system.flux-system - Reconciliation finished in 1.176637507s, next run in 1m0s 320 | ... 321 | ``` 322 | 323 | ## Step 5: Creating a BusyBox Example Application Using Flux 324 | 325 | Configure Flux to create a simple BusyBox application, using the sample manifests provided in the sample Git repository. 326 | 327 | The `kustomization/flux-system` CRD you inspected previously, expects the Kustomization manifests to be present in the Git repository path specified by the `git_repository_sync_path` Terraform variable specified in the `terraform.tfvars` file. 328 | 329 | 1. Clone the Git repository specified in the `terraform.tfvars` file. This is the main repository used for DOKS cluster reconciliation. 330 | 331 | ```shell 332 | git clone git@github.com:/.git 333 | ``` 334 | 335 | 2. Change to the directory where you cloned the repository: 336 | 337 | ```shell 338 | cd 339 | ``` 340 | 341 | 3. Optionally, checkout the branch if you are not using the `main` branch: 342 | 343 | ```shell 344 | git checkout 345 | ``` 346 | 347 | 4. Next, create the `applications` directory, to store the `busybox` example manifests: 348 | 349 | ```shell 350 | APPS_PATH="/apps/busybox" 351 | 352 | mkdir -p "$APPS_PATH" 353 | ``` 354 | 355 | 5. Download the following manifests, using the directory path created in the previous step: 356 | - [busybox-ns](https://github.com/digitalocean/container-blueprints/blob/main/create-doks-with-terraform-flux/assets/manifests/busybox-ns.yaml): Creates the `busybox` app namespace 357 | 358 | - [busybox](https://github.com/digitalocean/container-blueprints/blob/main/create-doks-with-terraform-flux/assets/manifests/busybox.yaml): Creates the `busybox` app 359 | 360 | - [kustomization](https://github.com/digitalocean/container-blueprints/blob/main/create-doks-with-terraform-flux/assets/manifests/kustomization.yaml): `kustomization` for BusyBox 361 | 362 | **Hint:** 363 | 364 | If you have `curl` installed, you can fetch the required files using the following commands: 365 | 366 | ```shell 367 | curl https://raw.githubusercontent.com/digitalocean/container-blueprints/main/create-doks-with-terraform-flux/assets/manifests/busybox-ns.yaml > "${APPS_PATH}/busybox-ns.yaml" 368 | 369 | curl https://raw.githubusercontent.com/digitalocean/container-blueprints/main/create-doks-with-terraform-flux/assets/manifests/busybox.yaml > "${APPS_PATH}/busybox.yaml" 370 | 371 | curl https://raw.githubusercontent.com/digitalocean/container-blueprints/main/create-doks-with-terraform-flux/assets/manifests/kustomization.yaml > "${APPS_PATH}/kustomization.yaml" 372 | ``` 373 | 374 | 6. Commit the files and push the changes: 375 | 376 | ```shell 377 | git add -A 378 | 379 | git commit -am "Busybox Kustomization manifests" 380 | 381 | git push origin 382 | ``` 383 | 384 | ## STEP 6: Inspecting the Results 385 | 386 | If you are using the default settings, a `busybox` namespace and an associated pod is created and running after about one minute. 
If you do not want to wait, you can force reconciliation using the following command: 387 | 388 | ```shell 389 | flux reconcile source git flux-system 390 | 391 | flux reconcile kustomization flux-system 392 | ``` 393 | 394 | The output looks similar to: 395 | 396 | ```text 397 | $ flux reconcile source git flux-system 398 | 399 | ► annotating GitRepository flux-system in flux-system namespace 400 | ✔ GitRepository annotated 401 | ◎ waiting for GitRepository reconciliation 402 | ✔ GitRepository reconciliation completed 403 | ✔ fetched revision main/b908f9b47b3a568ae346a74c277b23a7b7ef9602 404 | 405 | $ flux reconcile kustomization busybox 406 | 407 | ► annotating Kustomization flux-system in flux-system namespace 408 | ✔ Kustomization annotated 409 | ◎ waiting for Kustomization reconciliation 410 | ✔ Kustomization reconciliation completed 411 | ✔ applied revision main/b908f9b47b3a568ae346a74c277b23a7b7ef9602 412 | ``` 413 | 414 | Get Kustomization status: 415 | 416 | ```shell 417 | flux get kustomizations 418 | ``` 419 | 420 | The output looks similar to: 421 | 422 | ```text 423 | NAME READY MESSAGE REVISION SUSPENDED 424 | flux-system True Applied revision: main/fa69f917302bcfd35d2959ebc398b3aa13102480 main/fa69f917302bcfd35d2959ebc398b3aa13102480 False 425 | ``` 426 | 427 | Examine the Kubernetes namespaces: 428 | 429 | ```shell 430 | kubectl get ns 431 | ``` 432 | 433 | The output looks similar to the following: 434 | 435 | ```text 436 | NAME STATUS AGE 437 | busybox Active 30s 438 | default Active 26h 439 | flux-system Active 26h 440 | kube-node-lease Active 26h 441 | kube-public Active 26h 442 | kube-system Active 26h 443 | ``` 444 | 445 | Check the `busybox` pod: 446 | 447 | ```shell 448 | kubectl get pods -n busybox 449 | ``` 450 | 451 | The output looks similar to the following: 452 | 453 | ```text 454 | NAME READY STATUS RESTARTS AGE 455 | busybox1 1/1 Running 0 42s 456 | ``` 457 | 458 | ## Step 7: Deleting the Resources 459 | 460 | If you want to clean up the allocated resources, run the following command from the directory where you cloned this repository on your local machine: 461 | 462 | ```shell 463 | terraform destroy 464 | ``` 465 | 466 | **Note:** 467 | 468 | Due to an issue with the Terraform [kubernetes-provider](https://github.com/hashicorp/terraform-provider-kubernetes/issues/1040), the `terraform destroy` command hangs when it tries to clean up the `Flux CD` namespace. Alternatively, you can clean the resources individually: 469 | 470 | - Uninstall all the resources created by Flux, such as namespaces and pods, using the following command: 471 | 472 | ```shell 473 | flux uninstall 474 | ``` 475 | 476 | - Destroy the DOKS cluster by running the following command: 477 | 478 | ```shell 479 | terraform destroy --target=digitalocean_kubernetes_cluster.primary 480 | ``` 481 | 482 | **Note** that the command destroys the entire DOKS cluster (Flux and all the applications you deployed). 483 | 484 | ## Summary 485 | 486 | In this tutorial, you used Terraform and Flux to manage application deployments on a DigitalOcean Kubernetes(DOKS) cluster in a GitOps fashion. You completed the following prerequisites for the tutorial: 487 | 488 | - Created a GitHub repository and GitHub personal access token, and installed `git` client. 489 | 490 | - Created a DigitalOcean access token. 491 | 492 | - Created a DigitalOcean Space and access keys. 493 | 494 | - Installed Terraform 495 | 496 | - Installed `doctl`, `kubectl`, and `flux`. 
497 | 498 | Terraform allows you to re-use code via `modules`. The [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) principle is strongly encouraged when using Terraform. The sample repository is a Terraform module which you can reference and re-use like this: 499 | 500 | ```text 501 | module "doks_flux_cd" { 502 | source = "github.com/digitalocean/container-blueprints/create-doks-with-terraform-flux" 503 | 504 | # DOKS 505 | do_api_token = "" # DigitalOcean API TOKEN 506 | doks_cluster_name = "" # Name of this `DOKS` cluster? 507 | doks_cluster_region = "" # What region should this `DOKS` cluster be provisioned in? 508 | doks_cluster_version = "" # What Kubernetes version should this `DOKS` cluster use ? 509 | doks_cluster_pool_size = "" # What machine type to use for this `DOKS` cluster ? 510 | doks_cluster_pool_node_count = # How many worker nodes this `DOKS` cluster should have ? 511 | 512 | # GitHub 513 | github_user = "" # Your `GitHub` username 514 | github_token = "" # Your `GitHub` personal access token 515 | git_repository_name = "" # Git repository where `Flux CD` manifests should be stored 516 | git_repository_branch = "" # Branch name to use for this `Git` repository (e.g.: `main`) 517 | git_repository_sync_path = "" # Git repository path where the manifests to sync are committed (e.g.: `clusters/dev`) 518 | 519 | } 520 | ``` 521 | 522 | You can instantiate it as many times as required and target different cluster configurations and environments. For more information, see the official [Terraform Modules](https://www.terraform.io/docs/language/modules/index.html) documentation page. 523 | 524 | ## What's Next 525 | 526 | To help you start quickly, as well as to demonstrate the basic functionality of Flux, this example uses a single cluster, synced from one Git repository and branch. There are many options available depending on your setup and what the final goal is. You can create as many Git repository resources as you want, that point to different repositories and/or branches, for example, a separate branch per environment. You can find more information and examples on the Flux CD [Repository Structure Guide](https://fluxcd.io/docs/guides/repository-structure). 527 | 528 | Flux supports other controllers, such as the following, which you can configure and enable: 529 | 530 | - [Notification Controller](https://fluxcd.io/docs/components/notification) which is specialized in handling inbound and outbound events for Slack and other messaging services. 531 | - [Helm Controller](https://fluxcd.io/docs/components/helm) for managing [Helm](https://helm.sh) chart releases. 532 | - [Image Automation Controller](https://fluxcd.io/docs/components/image) which can update a Git repository when new container images are available. 533 | 534 | See [Flux CD Guides](https://fluxcd.io/docs/guides) for more examples, such as how to structure your Git repositories, as well as application manifests for multi-cluster and multi-environment setups. -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/kubescape.md: -------------------------------------------------------------------------------- 1 | # Using the Kubescape Vulnerability Scan Tool 2 | 3 | ## Introduction 4 | 5 | [Kubescape](https://github.com/armosec/kubescape/) is a Kubernetes open-source tool developed by [Armosec](https://www.armosec.io) used for risk analysis, security compliance, RBAC visualizer, and image vulnerabilities scanning. 
In addition, Kubescape is able to scan Kubernetes manifests to detect potential configuration issues that expose your deployments to the risk of attack. It can also scan Helm charts, detect RBAC (role-based-access-control) violations, performs risk score calculations and shows risk trends over time. 6 | 7 | **Kubescape key features:** 8 | 9 | - Detect Kubernetes misconfigurations and provide remediation assistance via the [Armosec Cloud Portal](https://cloud.armosec.io). 10 | - Risk analysis and trending over time via the [Armosec Cloud Portal](https://cloud.armosec.io). 11 | - Includes multiple security compliance frameworks, such as ArmoBest, NSA, MITRE and Devops Best Practices. 12 | - Exceptions management support, allowing Kubernetes admins to mark acceptable risk levels. 13 | - Integrates with various tools such as Jenkins, Github workflows, Prometheus, etc. 14 | - Image scanning - scan images for vulnerabilities and easily see, sort and filter (which vulnerability to patch first). 15 | - Simplifies RBAC complexity by providing an easy-to-understand visual graph which shows the RBAC configuration in your cluster. 16 | 17 | **Kubescape can be run in different ways:** 18 | 19 | - Via the command line interface (or CLI). This the preferred way to run inside scripts and various automations, including CI/CD pipelines. Results can be uploaded to the [Armosec Cloud Portal](https://cloud.armosec.io) for analysis. 20 | - As a cronjob inside your Kubernetes cluster. In this mode, Kubescape is constantly watching your Kubernetes cluster for changes and uploads scan results to the [Armosec Cloud Portal](https://cloud.armosec.io). This feature works only if you deploy [Armo Cluster Components](https://hub.armosec.io/docs/installation-of-armo-in-cluster) in your DOKS cluster. 21 | - Via the [Armosec Cloud Portal](https://cloud.armosec.io) web interface. You can trigger [configuration scanning](https://cloud.armosec.io/configuration-scanning), [Image Scanning](https://cloud.armosec.io/image-scanning), view and inspect [RBAC rules](https://cloud.armosec.io/rbac-visualizer), customize frameworks, etc. This feature works only if you deploy [Armo Cluster Components](https://hub.armosec.io/docs/installation-of-armo-in-cluster) in your DOKS cluster. 22 | - Inside your [Visual Studio Code IDE](https://hub.armosec.io/docs/visual-studio-code). This way you can spot issues very quickly in the early stages of development. 23 | 24 | **Kubescape is using different frameworks to detect misconfigurations such as:** 25 | 26 | - [ArmoBest](https://www.armosec.io/blog/armobest-kubernetes-framework/) 27 | - [NSA](https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/2716980/nsa-cisa-release-kubernetes-hardening-guidance/) 28 | - [MITRE ATT&CK](https://www.microsoft.com/security/blog/2021/03/23/secure-containerized-environments-with-updated-threat-matrix-for-kubernetes/) 29 | 30 | **Is Kubescape free?** 31 | 32 | Yes, the tooling and community edition is free forever, except the cloud portal backend implementation and maybe some other advanced features. There is also a limitation on the maximum number of worker nodes you can scan per cluster (up to 10). Your scan reports data retention is limited to one month in the Armo cloud portal. 33 | 34 | See [pricing plans](https://www.armosec.io/pricing/) for more information. 35 | 36 | **Is Kubescape open source?** 37 | 38 | Yes, the tooling for sure is. 
You can visit the [Armo GitHub home page](https://github.com/armosec) to find more details about each component implementation. The cloud portal backend implementation is not open source. 39 | 40 | In this guide you will use Kubescape to perform risk analysis for your Kubernetes applications supply chain (container images, Kubernetes YAML manifests). Then, you will learn how to take the appropriate action to remediate the situation. Finally, you will learn how to integrate Kubescape in a CI/CD pipeline to scan for vulnerabilities in the early stages of development. 41 | 42 | ## Table of Contents 43 | 44 | - [Introduction](#introduction) 45 | - [Requirements](#requirements) 46 | - [Step 1 - Getting to Know the Kubescape CLI](#step-1---getting-to-know-the-kubescape-cli) 47 | - [Step 2 - Getting to Know the Armosec Cloud Portal](#step-2---getting-to-know-the-armosec-cloud-portal) 48 | - [Risk Score Analysis and Trending](#risk-score-analysis-and-trending) 49 | - [Understanding Kubescape Risk Score Value](#understanding-kubescape-risk-score-value) 50 | - [Assisted Remediation for Reported Security Issues](#assisted-remediation-for-reported-security-issues) 51 | - [Triggering Cluster Scans from the Web UI](#triggering-cluster-scans-from-the-web-ui) 52 | - [Step 3 - Configuring Kubescape Automatic Scans for DOKS](#step-3---configuring-kubescape-automatic-scans-for-doks) 53 | - [Provisioning Armo Cluster Components to DOKS](#provisioning-armo-cluster-components-to-doks) 54 | - [Tweaking Helm Values for the Armo Cluster Components Chart](#tweaking-helm-values-for-the-armo-cluster-components-chart) 55 | - [Step 4 - Using Kubescape to Scan for Kubernetes Configuration Vulnerabilities in a CI/CD Pipeline](#step-4---using-kubescape-to-scan-for-kubernetes-configuration-vulnerabilities-in-a-cicd-pipeline) 56 | - [GitHub Actions CI/CD Workflow Implementation](#github-actions-cicd-workflow-implementation) 57 | - [Step 5 - Investigating Kubescape Scan Results and Fixing Reported Issues](#step-5---investigating-kubescape-scan-results-and-fixing-reported-issues) 58 | - [Treating Exceptions](#treating-exceptions) 59 | - [Kubescape for IDEs](#kubescape-for-ides) 60 | - [Step 6 - Triggering the Kubescape CI/CD Workflow Automatically](#step-6---triggering-the-kubescape-cicd-workflow-automatically) 61 | - [Step 7 - Enabling Slack Notifications for Continuous Monitoring](#step-7---enabling-slack-notifications-for-continuous-monitoring) 62 | - [Conclusion](#conclusion) 63 | - [Additional Resources](#additional-resources) 64 | 65 | ## Requirements 66 | 67 | To complete all steps from this guide, you will need: 68 | 69 | 1. A working `DOKS` cluster running `Kubernetes version >=1.21` that you have access to. For additional instructions on configuring a DigitalOcean Kubernetes cluster, see: [How to Set Up a DigitalOcean Managed Kubernetes Cluster (DOKS)](https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/tree/main/01-setup-DOKS#how-to-set-up-a-digitalocean-managed-kubernetes-cluster-doks). 70 | 2. A [DigitalOcean Docker Registry](https://docs.digitalocean.com/products/container-registry/). A free plan is enough to complete this tutorial. Also, make sure it is integrated with your DOKS cluster as explained [here](https://docs.digitalocean.com/products/container-registry/how-to/use-registry-docker-kubernetes/#kubernetes-integration). 71 | 3. [Kubectl](https://kubernetes.io/docs/tasks/tools) CLI for `Kubernetes` interaction. 
Follow these [instructions](https://www.digitalocean.com/docs/kubernetes/how-to/connect-to-cluster/) to connect to your cluster with `kubectl` and `doctl`. 72 | 4. [Helm](https://www.helm.sh), to install Kubescape in the Kubernetes cluster. 73 | 5. [Kubescape CLI](https://hub.armosec.io/docs/installing-kubescape/) to interact with [Kubescape](https://github.com/armosec/kubescape/) vulnerabilities scanner. 74 | 6. A free [Armosec Cloud Portal](https://cloud.armosec.io) account used to periodically publish scan results for your Kubernetes cluster to a nice dashboard. Also, the Armosec portal web interface helps you with investigations and risk analysis. 75 | 7. A Slack workspace you own, and a dedicated [Slack app](https://api.slack.com/authentication/basics) to get notified of vulnerability scan issues reported by Kubescape. 76 | 77 | ## Step 1 - Getting to Know the Kubescape CLI 78 | 79 | You can manually scan for vulnerabilities via the `kubescape` command line interface. The kubescape CLI is designed to be used in various scripts and automations. A practical example is in a CI/CD pipeline implemented using various tools such as Tekton, Jenkins, GitHub Workflows, etc. 80 | 81 | Kubescape is designed to scan a whole Kubernetes cluster from ground up (workloads, containers, etc). If desired, you can limit scans to a specific namespace as well. Other features include host scanning (worker nodes), local or remote repositories scanning (e.g. GitHub), detect misconfigurations in Kubernetes YAML manifests or Helm charts. Various frameworks can be selected via the `framework` command, such as ArmoBest, NSA, MITRE, etc. 82 | 83 | When kubescape CLI is invoked, it will download (or update) the known vulnerabilities database on your local machine. Then, it will start the scanning process and report back issues in a specific format. By default it will print a summary table using the standard output or the console. Kubescape can generate reports in other formats as well, such as JSON, HTML, SARIF, etc. 84 | 85 | You can opt to push the results to the [Armosec Cloud Portal](https://cloud.armosec.io) via the `--submit` flag to store and visualize scan results later. 86 | 87 | **Note:** 88 | 89 | It's not mandatory to submit scan results to the Armosec cloud portal. The big advantage of using the portal is visibility because it gives you access to a nice dashboard where you can check all scan reports and the overall risk score. It also helps you on the long term with investigations and remediation hints. 90 | 91 | Some examples to try with Kubescape CLI: 92 | 93 | - Scan a whole Kubernetes cluster and generate a summary report in the console (standard output): 94 | 95 | ```shell 96 | kubescape scan 97 | ``` 98 | 99 | - Use a specific namespace only for scanning: 100 | 101 | ```shell 102 | kubescape scan --include-namespaces microservices 103 | ``` 104 | 105 | - Exclude specific namespaces from scanning: 106 | 107 | ```shell 108 | kubescape scan --exclude-namespaces kube-system,kube-public 109 | ``` 110 | 111 | - Scan a specific namespace and submit results to the Armosec cloud portal: 112 | 113 | ```shell 114 | kubescape scan --include-namespaces default --submit 115 | ``` 116 | 117 | - Perform cluster scan using a specific framework (e.g. NSA): 118 | 119 | ```shell 120 | kubescape scan framework nsa --exclude-namespaces kube-system,kube-public 121 | ``` 122 | 123 | Kubescape is able to scan your Kubernetes cluster hosts (or worker nodes) for OS vulnerabilities as well. 
To enable this feature, you need to pass the `--enable-host-scan` flag to the kubescape CLI. When this flag is enabled, kubescape deploys `sensors` in your cluster. Sensors are created using Kubernetes DaemonSets which deploy Pods on each node of your cluster to scan for known vulnerabilities. After the scan process is completed, the sensors are removed from your cluster (including the associated Kubernetes resources). 124 | 125 | Kubescape CLI provides help pages for all available options. Below command can be used to print the main help page: 126 | 127 | ```shell 128 | kubescape --help 129 | ``` 130 | 131 | The output looks similar to: 132 | 133 | ```text 134 | Kubescape is a tool for testing Kubernetes security posture. Docs: https://hub.armo.cloud/docs 135 | 136 | Usage: 137 | kubescape [command] 138 | 139 | Available Commands: 140 | completion Generate autocompletion script 141 | config Handle cached configurations 142 | delete Delete configurations in Kubescape SaaS version 143 | download Download controls-inputs,exceptions,control,framework,artifacts 144 | help Help about any command 145 | list List frameworks/controls will list the supported frameworks and controls 146 | scan Scan the current running cluster or yaml files 147 | submit Submit an object to the Kubescape SaaS version 148 | version Get current version 149 | ... 150 | ``` 151 | 152 | Each kubescape CLI command (or subcommand) has an associated help page as well which can be accessed via `kubescape [command] --help`. 153 | 154 | Please visit the official [kubescape CLI documentation page](https://hub.armosec.io/docs/examples/) for more examples. 155 | 156 | ## Step 2 - Getting to Know the Armosec Cloud Portal 157 | 158 | Armosec provides a nice [cloud based portal](https://cloud.armosec.io) where you can upload your Kubescape scan results and perform risk analysis. This is pretty useful because you will want to visualize and inspect each scan report, take the appropriate action to remediate the situation, and then run the scan again to check results. By having a good visual representation for each report and the associated risk score helps you on the long term with the investigations and iterations required to fix the reported security issues. 159 | 160 | You can create an account for free limited to **10 worker nodes** and **1 month of data retention** which should be sufficient in most cases (e.g. for testing or development needs). You can read more about how to create the kubescape cloud account on the [official documentation page](https://hub.armosec.io/docs/kubescape-cloud-account). 161 | 162 | Once you have the account created, an unique user ID is generated which you can use to upload scan results for that specific account. For example, you may have a specific automation such as a CI/CD pipeline where you need to upload scan results, hence the associated user ID is required to distinguish between multiple tenants. 163 | 164 | ### Risk Score Analysis and Trending 165 | 166 | For each scan report uploaded to your Armosec cloud account, a new history record is added containing the list of issues found and the associated risk score. This way you can get trends and the associated graphs showing risk score evolution over time. Also, a list with topmost security issues is generated as well in the main dashboard. 
167 | 168 | Below picture illustrates these features: 169 | 170 | ![Kubescape Cloud Portal Dashboard](assets/images/kubescape/portal_dashboard.png) 171 | 172 | ### Understanding Kubescape Risk Score Value 173 | 174 | On each scan, kubescape verifies your resources for potential security risks using internal controls. A [Kubescape Control](https://hub.armosec.io/docs/controls) is a concept used by the kubescape tool to denote the tests used under the hood to check for a particular aspect of your cluster (or resources being scanned). Going further, a framework is a collection of controls or tests used internally to scan your particular resource(s) for issues. So, depending on what framework you use, a different suite of checks is performed (still, some tests share same things in common). Finally, depending on the risk factor associated with each test the final score is computed. 175 | 176 | The final score is a positive number ranging from **0** to **100%**. A lower value indicates the best score, whereas a higher value indicates the worst. So, if you want to be on the safe side you should aim for the lowest value possible. In practice, a risk score equal to or lower than **30%** should be a good starting point. 177 | 178 | ### Assisted Remediation for Reported Security Issues 179 | 180 | Another useful feature provided by the Armosec cloud portal is security issues remediation assistance. It means, you receive a recommendation about how to fix each security issue found by the kubescape scanner. This is very important because it simplifies the process and closes the loop for each iteration that you need to perform to fix each reported security issue. 181 | 182 | Below picture illustrates this process better: 183 | 184 | ![Security Compliance Scanning Process](assets/images/kubescape/security_compliance_scanning_process.png) 185 | 186 | For each reported security issue there is a wrench tool icon displayed which you can click on and get remediation assistance: 187 | 188 | ![Access Kubescape Cloud Portal Remediation Assistance](assets/images/kubescape/cp_remediation_assist.png) 189 | 190 | Next, a new window opens giving you details about each affected Kubernetes object, highlighted in green color: 191 | 192 | ![Kubescape Cloud Portal Remediation Hints](assets/images/kubescape/cp_remediation_hints.png) 193 | 194 | You can click on each control such as `C-0018`, `C-0030`, `C-0086`, etc. and investigate the highlighted issues. You will be presented with suggestions about how to fix each security issue. What's left is to follow the hints and fix each security issue. 195 | 196 | ### Triggering Cluster Scans from the Web UI 197 | 198 | The Armo cloud portal offers the possibility to trigger cluster scans from web interface as well if the Armo cloud components Helm chart is deployed in your DOKS cluster (discussed in the next step). Both configuration and image scanning can be triggered via a one button click in the portal. In order for this feature to work, you need to wait for the Armo cloud components to finish scanning your cluster in the background, and upload the results. 199 | 200 | Triggering a configuration scanning is done by navigating to the [configuration scanning](https://cloud.armosec.io/configuration-scanning) page, and click on the Scan button. 
Below picture shows how to accomplish this task: 201 | 202 | ![Kubescape Trigger Scans from UI](assets/images/kubescape/trigger_UI_scan.png) 203 | 204 | You can also set or modify the current schedule for automatic scanning if desired by clicking on the Schedule button in the pop-up window that appears after clicking the Scan button. Using the same window, you can select which control frameworks to use for scanning. Below picture shows how to accomplish the tasks: 205 | 206 | ![Kubescape UI Scan Options](assets/images/kubescape/UI_trigger_options.png) 207 | 208 | ## Step 3 - Configuring Kubescape Automatic Scans for DOKS 209 | 210 | Kubescape can be configured to automatically scan your entire Kubernetes cluster at a specific interval of time, or each time a new application image is deployed. You need to deploy [Armo Cluster Components](https://hub.armosec.io/docs/installation-of-armo-in-cluster) in your Kubernetes cluster using Helm to achieve this functionality. An [Armosec Cloud Portal](https://cloud.armosec.io) account is needed as well to upload and inspect the results. 211 | 212 | The [Armo Helm chart](https://github.com/armosec/armo-helm) installs cron jobs that trigger a vulnerability scan both for the entire Kubernetes cluster and container images. Each cron job interval is configurable in the [Helm values file](assets/manifests/armo-values-v1.7.15.yaml). 213 | 214 | ### Provisioning Armo Cluster Components to DOKS 215 | 216 | Steps to deploy kubescape in your Kubernetes cluster using Helm: 217 | 218 | 1. Add the `Helm` repo, and list the available `charts`: 219 | 220 | ```shell 221 | helm repo add armo https://armosec.github.io/armo-helm/ 222 | 223 | helm repo update armo 224 | 225 | helm search repo armo 226 | ``` 227 | 228 | The output looks similar to the following: 229 | 230 | ```text 231 | NAME CHART VERSION APP VERSION DESCRIPTION 232 | armo/armo-cluster-components 1.7.15 v1.7.15 ARMO Vulnerability Scanning 233 | ``` 234 | 235 | **Note:** 236 | 237 | The chart of interest is `armo/armo-cluster-components`, which will install Armo components in your Kubernetes cluster. Please visit the [armo-helm](https://github.com/armosec/armo-helm) repository page, for more details about this chart. 238 | 239 | 2. Fetch your Armo account user ID using kubescape CLI (needed in the next step): 240 | 241 | ```shell 242 | kubescape config view 243 | ``` 244 | 245 | The output looks similar to: 246 | 247 | ```json 248 | { 249 | "accountID": "c952b81f-77d5-4afb-80cc-59b59ec2sdfr" 250 | } 251 | ``` 252 | 253 | **Note:** 254 | 255 | If you never used kubescape CLI to submit scan results to the Armosec cloud portal, the above command won't work. In this case, you need to log in to the portal and get the account ID from there as explained [here](https://hub.armosec.io/docs/installation-of-armo-in-cluster#install-a-pre-registered-cluster). 256 | 257 | 3. Install the Armo Kubescape cluster components using `Helm` - a dedicated `armo-system` namespace will be created as well (make sure to replace the `<>` placeholders accordingly): 258 | 259 | ```shell 260 | ARMO_KUBESCAPE_CHART_VERSION="1.7.15" 261 | 262 | helm install armo armo/armo-cluster-components \ 263 | --version "$ARMO_KUBESCAPE_CHART_VERSION" \ 264 | --namespace armo-system \ 265 | --create-namespace \ 266 | --set clusterName="$(kubectl config current-context)" \ 267 | --set accountGuid= 268 | ``` 269 | 270 | **Note:** 271 | 272 | A specific version for the `armo-cluster-components` Helm chart is used. 
In this case `1.7.15` was picked, which maps to the `1.7.15` release of Armo cluster components (see the output from `Step 1.`). It’s good practice in general, to lock on a specific version. This helps to have predictable results, and allows versioning control via `Git`. 273 | 274 | Now check if all the Armo cluster components deployments are up and running: 275 | 276 | ```shell 277 | kubectl get deployments -n armo-system 278 | ``` 279 | 280 | The output looks similar to: 281 | 282 | ```text 283 | NAME READY UP-TO-DATE AVAILABLE AGE 284 | armo-collector 1/1 1 1 5d6h 285 | armo-kubescape 1/1 1 1 5d6h 286 | armo-notification-service 1/1 1 1 5d6h 287 | armo-vuln-scan 1/1 1 1 5d6h 288 | armo-web-socket 1/1 1 1 5d6h 289 | ``` 290 | 291 | All Armo cluster components should be up and running. 292 | 293 | Finally, after a few minutes you should be able to see your cluster scan reports available in the cloud portal, such as: 294 | 295 | - Configuration scanning results: 296 | 297 | ![Armo Portal Configuration Scanning](assets/images/kubescape/configuration_scanning.png) 298 | 299 | - Image scanning results: 300 | 301 | ![Armo Portal Image Scanning](assets/images/kubescape/image_scanning.png) 302 | 303 | - RBAC visualizer results: 304 | 305 | ![Armo Portal RBAC Visualizer](assets/images/kubescape/rbac_visualizer.png) 306 | 307 | For more information please visit the [cluster vulnerability scanning](https://hub.armosec.io/docs/cluster-vulnerability-scanning) page from the official documentation. 308 | 309 | ### Tweaking Helm Values for the Armo Cluster Components Chart 310 | 311 | You can change the behavior of the Armo cluster components chart by editing the [Helm values file](assets/manifests/armo-values-v1.7.15.yaml) provided in this guide. 312 | 313 | The following settings can be changed: 314 | 315 | - Scanning intervals via the `armoScanScheduler` and `armoKubescapeScanScheduler` values. 316 | - New image scan trigger via the `triggerNewImageScan` value. 317 | 318 | The full list of values that can be customized to your needs is available in the [official Helm chart values](https://github.com/armosec/armo-helm/blob/armo-cluster-components-1.7.15/charts/armo-components/values.yaml) file. 319 | 320 | To apply changes, you need to upgrade the current Helm chart version via (make sure to replace the `<>` placeholders accordingly): 321 | 322 | ```shell 323 | ARMO_KUBESCAPE_CHART_VERSION="1.7.15" 324 | 325 | helm upgrade armo armo/armo-cluster-components \ 326 | --version "$ARMO_KUBESCAPE_CHART_VERSION" \ 327 | --namespace armo-system \ 328 | --set clusterName="$(kubectl config current-context)" \ 329 | --set accountGuid= \ 330 | -f 331 | ``` 332 | 333 | ## Step 4 - Using Kubescape to Scan for Kubernetes Configuration Vulnerabilities in a CI/CD Pipeline 334 | 335 | How do you benefit from embedding a security compliance scanning tool in your CI/CD pipeline and avoid unpleasant situations in a production environment? 336 | 337 | It all starts at the foundation level where software development starts. In general, you will want to use a dedicated environment for each stage. So, in the early stages of development when application code changes very often, you should use a dedicated development environment (called the lower environment usually). Then, the application gets more and more refined in the QA environment where QA teams perform manual and/or automated testing. Next, if the application gets the QA team approval it is promoted to the upper environments such as staging, and finally into production. 
In this process, where the application is promoted from one environment to another, a dedicated pipeline runs which continuously scans application artifacts and computes the security risk score. If the score doesn't meet a specific threshold, the pipeline fails immediately and application artifacts promotion to production is stopped in the early stages. 338 | 339 | So, the security scanning tool (e.g. kubescape) acts as a gatekeeper stopping unwanted artifacts in your production environment from the early stages of development. In the same manner, upper environments pipelines use kubescape to allow or forbid application artifacts entering the final production stage. 340 | 341 | ### GitHub Actions CI/CD Workflow Implementation 342 | 343 | In this step you will learn how to create and test a sample CI/CD pipeline with integrated vulnerability scanning via GitHub workflows. To learn the fundamentals of using Github Actions with DigitalOcean Kubernetes, refer to this [tutorial](https://docs.digitalocean.com/tutorials/enable-push-to-deploy/). 344 | 345 | The pipeline provided in the following section builds and deploys the [game-2048-example](https://github.com/digitalocean/kubernetes-sample-apps/tree/master/game-2048-example) application from the DigitalOcean [kubernetes-sample-apps](https://github.com/digitalocean/kubernetes-sample-apps) repository. 346 | 347 | At a high level overview, the [example CI/CD workflow](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/.github/workflows/game-2048-kubescape.yaml) provided in the **kubernetes-sample-apps** repo is comprised of the following stages: 348 | 349 | 1. Application build and test stage - builds main application artifacts and runs automated tests. 350 | 2. Kubescape scan stage - scans for known vulnerabilities in the Kubernetes YAML manifests associated with the application. Acts as a gate and the final pipeline state (pass/fail) is dependent on this step. In case of failure a Slack notification is sent as well. 351 | 3. Application image build and push stage - builds and tags the application image using the latest git commit SHA. Then the image is pushed to DOCR. 352 | 4. Application deployment stage - deploys the application to Kubernetes (DOKS). 353 | 354 | Below diagram illustrates each job from the pipeline and the associated steps with actions (only relevant configuration is shown): 355 | 356 | ![GitHub Workflow Configuration](assets/images/kubescape/gh_workflow_diagram_code.png) 357 | 358 | **Notes:** 359 | 360 | - In case of kustomize based projects it's best to render the final manifest via the `kubectl kustomize ` command in order to capture and scan everything (including remote resources). On the other hand, it can be hard to identify which Kubernetes resource needs to be patched. This is due to the fact that the resulting manifest file is comprised of all resources to be applied. This is how kustomize works - it gathers all configuration fragments from each overlay and applies them over a base to build the final compound. 361 | - You can also tell Kubescape to scan the entire folder where you keep your kustomize configurations (current guide relies on this approach). This way, it's easier to identify what resource needs to be fixed in your repository. Remote resources used by kustomize need to be fixed upstream. Also, Kubernetes secrets and ConfigMaps generated via kustomize are not captured. 362 | 363 | How do you fail the pipeline if a certain security compliance level is not met? 
364 | 365 | Kubescape CLI provides a flag named `--fail-threshold` for this purpose. This flag correlates with the overall risk score computed after each scan. You can fail or pass the pipeline based on the threshold value and stop application deployment if conditions are not met. 366 | 367 | Below picture illustrates the flow for the example CI/CD pipeline used in this guide: 368 | 369 | ![Kubescape Pipeline Flow](assets/images/kubescape/pipeline_flow.png) 370 | 371 | Please follow below steps to create and test the kubescape CI/CD GitHub workflow provided in the [kubernetes-sample-apps](https://github.com/digitalocean/kubernetes-sample-apps) GitHub repository: 372 | 373 | 1. Fork the [kubernetes-sample-apps](https://github.com/digitalocean/kubernetes-sample-apps) GitHub repository. 374 | 2. Create the following [GitHub encrypted secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for your **kubernetes-sample-apps** copy (**Settings Tab** -> **Secrets** -> **Actions**): 375 | - `DIGITALOCEAN_ACCESS_TOKEN` - holds your DigitalOcean account token. 376 | - `DOCKER_REGISTRY` - holds your DigitalOcean docker registry name including the endpoint (e.g. `registry.digitalocean.com/sample-apps`). 377 | - `DOKS_CLUSTER` - holds your DOKS cluster name. You can run the following command to get your DOKS cluster name: `doctl k8s cluster list --no-header --format Name`. 378 | - `ARMOSEC_PORTAL_ACCOUNT_ID` - holds your Armo portal user account ID - run: `kubescape config view` to get the ID. If that doesn't work you can find more info [here](https://hub.armosec.io/docs/installation-of-armo-in-cluster#install-a-pre-registered-cluster). 379 | - `SLACK_WEBHOOK_URL` - holds your [Slack incoming webhook URL](https://api.slack.com/messaging/webhooks) used for kubescape scan notifications. 380 | 3. Navigate to the **Actions** tab of your forked repo and select the **Game 2048 Kubescape CI/CD Example** workflow: 381 | ![Game 2048 Main Workflow](assets/images/kubescape/game-2048-wf-nav.png) 382 | 4. Click on the **Run Workflow** button and leave the default values: 383 | ![Game 2048 Workflow Triggering](assets/images/kubescape/game-2048_wf_start.png) 384 | 385 | A new entry should appear in below list after clicking the **Run Workflow** green button. Select the running workflow to observe pipeline progress: 386 | 387 | ![Game 2048 Workflow Progress](assets/images/kubescape/game-2048-wf-progress.png) 388 | 389 | The pipeline will fail and stop when the **kubescape-nsa-security-check** job runs. This is expected because the default threshold value of `30` for the overall risk score is lower than the desired value. You should also receive a Slack notifications with details about the workflow run: 390 | 391 | ![Game 2048 Workflow Slack Notification](assets/images/kubescape/game-2048-wf-slack-notification.png) 392 | 393 | In the next step you will learn how to investigate the kubescape scan report to fix the issues, lower the risk score, and pass the pipeline. 394 | 395 | ## Step 5 - Investigating Kubescape Scan Results and Fixing Reported Issues 396 | 397 | Whenever the risk score value threshold is not met, the [game-2048 GitHub workflow](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/.github/workflows/game-2048-kubescape.yaml) will fail and a Slack notification is sent with additional details. 
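
Before digging into the portal, it can help to reproduce the failing gate locally, using the same framework and threshold the workflow uses. The commands below are a sketch; they assume you run them from your local clone of the **kubernetes-sample-apps** repository and that `30` matches the workflow's default threshold:

```shell
cd game-2048-example

# Run the same NSA framework scan the pipeline runs, with a 30% risk score threshold
kubescape scan framework nsa kustomize/ -t 30

# A non-zero exit code indicates the threshold was exceeded - this is what fails the CI job
echo $?
```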
398 | 399 | The **game-2048 workflow** runs one security check (local image scanning is not supported) - Kubernetes manifests misconfiguration checks. The **kubescape-nsa-security-check** job is used for this purpose. The equivalent kubescape command is `kubescape scan framework nsa /path/to/project/kubernetes/manifests`. 400 | 401 | The snippet below shows the main logic of the **kubescape-nsa-security-check** job: 402 | 403 | ```yaml 404 | - name: Scan Kubernetes YAML files 405 | run: | 406 | kubescape scan framework nsa kustomize/ \ 407 | -t ${{ github.event.inputs.kubescape_fail_threshold || env.KUBESCAPE_FAIL_THRESHOLD }} \ 408 | --submit --account=${{ secrets.ARMOSEC_PORTAL_ACCOUNT_ID }} 409 | working-directory: ${{ env.PROJECT_DIR }} 410 | ``` 411 | 412 | The above configuration tells the kubescape CLI to start a new scan for all Kubernetes manifests present in the `kustomize/` directory using the **NSA** framework. It also specifies what threshold level to use via the **-t** flag, and submits the final results to the Armo cloud portal (the **--submit** flag in conjunction with **--account**). 413 | 414 | Thus, lowering the risk score value and passing the workflow consists of investigating and fixing issues reported by the **kubescape-nsa-security-check** job. Next, you will learn how to address security issues reported by this job. 415 | 416 | To check the status report, you can click on the kubescape scan results link from the received Slack notification. Then, click on the **REPOSITORIES SCAN** button from the left menu in the Armo cloud portal. Now, click on the **kubernetes-sample-apps** entry from the list: 417 | 418 | ![Game 2048 Repo Scan Entry](assets/images/kubescape/game-2048-ks-repo-scan.png) 419 | 420 | Next, click on the **deployment.yaml** entry, and then click the wrench icon in the upper right corner: 421 | 422 | ![Game 2048 Repo Scan Results](assets/images/kubescape/game-2048-scan-report.png) 423 | 424 | A new browser window opens, showing each control and its description in detail. You will also be presented with the required actions to remediate the issue (highlighted in green): 425 | 426 | ![Game 2048 Reported Scan Controls](assets/images/kubescape/game-2048-controls-list.png) 427 | 428 | After collecting all the information from the scan report, you can go ahead and edit the [deployment.yaml](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/game-2048-example/kustomize/resources/deployment.yaml) file from your repo (located in the `game-2048-example/kustomize/resources` subfolder). The fixes are already in place; you just need to uncomment the last lines of the file.
The final `deployment.yaml` file should look like below: 429 | 430 | ```yaml 431 | --- 432 | apiVersion: apps/v1 433 | kind: Deployment 434 | metadata: 435 | name: game-2048 436 | spec: 437 | replicas: 1 438 | selector: 439 | matchLabels: 440 | app: game-2048 441 | strategy: 442 | type: RollingUpdate 443 | template: 444 | metadata: 445 | labels: 446 | app: game-2048 447 | spec: 448 | containers: 449 | - name: backend 450 | # Replace the `<>` placeholders with your docker registry info 451 | image: registry.digitalocean.com/sample-apps/2048-game:latest 452 | ports: 453 | - name: http 454 | containerPort: 8080 455 | resources: 456 | requests: 457 | cpu: 100m 458 | memory: 50Mi 459 | limits: 460 | cpu: 200m 461 | memory: 100Mi 462 | securityContext: 463 | readOnlyRootFilesystem: true 464 | runAsNonRoot: true 465 | allowPrivilegeEscalation: false 466 | capabilities: 467 | drop: 468 | - all 469 | ``` 470 | 471 | **Note:** 472 | 473 | The [C-0055](https://hub.armosec.io/docs/c-0055) suggestions were omitted in this example for simplicity. You can read more about secure computing mode in Kubernetes [here](https://kubernetes.io/docs/tutorials/security/seccomp/). 474 | 475 | What changed? The following security fixes were applied: 476 | 477 | - `readOnlyRootFilesystem` - runs container image in read only (cannot alter files by `kubectl exec` in the container). 478 | - `runAsNonRoot` - runs as the non root user defined by the [USER](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/game-2048-example/Dockerfile#L18) directive from the game-2048 project [Dockerfile](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/game-2048-example/Dockerfile). 479 | - `allowPrivilegeEscalation` - setting **allowPrivilegeEscalation** to **false** ensures that no child process of a container can gain more privileges than its parent. 480 | - `capabilities.drop` - To make containers more secure, you should provide containers with the least amount of privileges it needs to run. In practice, you drop everything by default, then add required capabilities step by step. You can read more about container security in this [article](https://www.armosec.io/blog/secure-kubernetes-deployment/) written by Armosec. 481 | 482 | Finally, commit the changes for the **deployment.yaml** file and push to main branch. After manually triggering the workflow it should complete successfully this time: 483 | 484 | ![Game 2048 Workflow Success](assets/images/kubescape/game-2048-wf-success.png) 485 | 486 | You should also receive a green Slack notification from the kubescape scan job. Navigate to the Armo portal link, and check if the issues that you fixed recently are gone - there should be none reported. 487 | 488 | A few final checks can be performed as well on the Kubernetes side to verify if the reported issues were fixed: 489 | 490 | 1. Check if the game-2048 deployment has a read-only (immutable) filesystem by writing the application **index.html** file: 491 | 492 | ```shell 493 | kubectl exec -it deployment/game-2048 -n game-2048 -- /bin/bash -c "echo > /public/index.html" 494 | ``` 495 | 496 | The output looks similar to: 497 | 498 | ```text 499 | /bin/bash: /public/index.html: Read-only file system 500 | command terminated with exit code 1 501 | ``` 502 | 503 | 2. Check if the container runs as non-root user (should print a integer number different than zero - e.g. 
`1000`): 504 | 505 | ```shell 506 | kubectl exec -it deployment/game-2048 -n game-2048 -- id -u 507 | ``` 508 | 509 | If all checks pass then you applied the required security recommendations successfully. 510 | 511 | ### Treating Exceptions 512 | 513 | There are situations when you don't want the final risk score to be affected by some reported issues that your team considers safe to ignore. Kubescape offers a built-in feature to manage exceptions and overcome this situation. 514 | 515 | You can read more about this feature [here](https://hub.armosec.io/docs/exceptions). 516 | 517 | ### Kubescape for IDEs 518 | 519 | A more efficient approach is to integrate vulnerability scanning tools directly into your favorite IDE (Integrated Development Environment). This way you can detect and fix security issues earlier in the software development cycle. 520 | 521 | Kubescape offers support for IDE integration via the following extensions: 522 | 523 | 1. [Visual Studio Code extension](https://hub.armosec.io/docs/visual-studio-code). 524 | 2. [Kubernetes Lens extension](https://hub.armosec.io/docs/kubernetes-lens). 525 | 526 | The above plugins help you detect and fix issues in the early stages of development, thus eliminating frustration, costs, and security flaws in production systems. They also help you reduce iterations and human effort in the long run. As an example, for each security issue reported by your CI/CD automation you need to go back and fix the issue in your code, commit the changes, wait for the CI/CD automation to run again, then repeat in case of failure. 527 | 528 | You can read more about these features by navigating to the [Kubescape documentation](https://hub.armosec.io/docs/) page, then searching the **INTEGRATIONS** section. 529 | 530 | ## Step 6 - Triggering the Kubescape CI/CD Workflow Automatically 531 | 532 | You can set the workflow to trigger automatically on each commit or PR against the main branch by uncommenting the following lines at the top of the [game-2048-kubescape.yaml](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/.github/workflows/game-2048-kubescape.yaml) file: 533 | 534 | ```yaml 535 | on: 536 | push: 537 | branches: [ master ] 538 | pull_request: 539 | branches: [ master ] 540 | ``` 541 | 542 | After editing the file, commit the changes to your main branch and you should be ready to go. 543 | 544 | ## Step 7 - Enabling Slack Notifications for Continuous Monitoring 545 | 546 | The vulnerability scan automation you implemented so far is a good starting point, but not perfect. Why? 547 | 548 | One issue with the current approach is that you never know when new issues are reported for the assets you already deployed in your environments. In other words, you assessed the security risks and took the measures to fix the issues at one specific point in time - when your CI/CD automation was executed. 549 | 550 | But what if new issues are reported in the meantime and your application is vulnerable again? 551 | 552 | The monitoring feature of Kubescape helps you address new vulnerabilities, which are constantly disclosed. When combined with the Slack integration, you can take immediate action to fix newly disclosed issues that may affect your application in a production environment. 553 | 554 | The Armo cloud portal supports Slack integration for sending real-time alerts after each cluster scan.
This feature requires the Armo cloud components Helm chart to be installed in your DOKS cluster as explained in [Step 3 - Configuring Kubescape Automatic Scans for DOKS](#step-3---configuring-kubescape-automatic-scans-for-doks). 555 | 556 | By enabling Slack alerts you will get important notifications about vulnerabilities detected in your DOKS cluster, such as: 557 | 558 | 1. Worker nodes vulnerabilities (OS level). 559 | 2. Container images vulnerabilities. 560 | 3. Kubernetes misconfigurations for various resources such as deployments, pods, etc. 561 | 562 | First, you need to create a [Slack App](https://api.slack.com/apps/new). Then, you need to give the following permissions to your Slack Bot in the **OAuth & Permissions** page: 563 | 564 | - `channels:join` - Join public channels in a workspace. 565 | - `channels:read` - View basic information about public channels in a workspace. 566 | - `groups:read` - View basic information about private channels your Slack App has been added to. 567 | - `chat:write` - Send messages as @< Your Slack App Name >. 568 | - `im:read` - View basic information about direct messages your Slack App has been added to. 569 | - `mpim:read` - View basic information about group direct messages your Slack App has been added to. 570 | 571 | Next, navigate to the [settings](https://cloud.armosec.io/settings/) page of your Armo cloud portal account (top right gear icon). From there, select [Integrations](https://cloud.armosec.io/settings/integrations/) page, then [Slack](https://cloud.armosec.io/settings/integrations/slack/). 572 | 573 | Now, paste your Slack Bot OAuth token (can be found in the **OAuth & Permissions** page from your Slack App page) in the **Insert Token** input field. Finally, select how to get notified and the Slack channel where alerts should be sent. Click on **Set Notifications** button and you're set. Below picture illustrates the details: 574 | 575 | ![Kubescape Slack Notifications](assets/images/kubescape/slack_notifications.png) 576 | 577 | After configuring the Slack integration you should receive periodic notifications after each cluster scan on the designated channel: 578 | 579 | ![Cluster Scan Periodic Alerts](assets/images/kubescape/cluster_scan-slack_periodic_alerts.png) 580 | 581 | If you receive notifications similar to above, then you configured the Armosec Kubescape Slack integration successfully. 582 | 583 | ## Conclusion 584 | 585 | In this guide you learned how to use one of the most popular Kubernetes vulnerability scanning tool - [Kubescape](https://github.com/armosec/kubescape). You also learned how to perform cluster and repository scanning (YAML manifests) using the kubescape CLI. Then, you learned how to integrate the vulnerability scanning tool in a traditional CI/CD pipeline implemented using GitHub workflows. 586 | 587 | Finally, you learned how to investigate vulnerability scan reports, apply fixes to remediate the situation, and reduce the risk score to a minimum via a practical example - the [game-2048](https://github.com/digitalocean/kubernetes-sample-apps/tree/master/game-2048-example) application from the [kubernetes-sample-apps](https://github.com/digitalocean/kubernetes-sample-apps) repository. 
588 | 589 | ## Additional Resources 590 | 591 | You can learn more by reading the following additional resources: 592 | 593 | - [Kubernetes Security Best Practices by Armo](https://www.armosec.io/blog/kubernetes-security-best-practices/) 594 | - [Catch Security Issues Ahead of Time by Armosec](https://www.armosec.io/blog/find-kubernetes-security-issues-while-coding/) 595 | - [Secure your Kubernetes Deployments by Armosec](https://www.armosec.io/blog/secure-kubernetes-deployment/) 596 | - [Learn More about Armosec Host Scanners](https://hub.armosec.io/docs/host-sensor) 597 | - [Kubescape Prometheus Exporter](https://hub.armosec.io/docs/prometheus-exporter) 598 | - [Customizing Kubescape Scan Frameworks](https://hub.armosec.io/docs/customization) 599 | -------------------------------------------------------------------------------- /DOKS-Supply-Chain-Security/snyk.md: -------------------------------------------------------------------------------- 1 | # Using the Snyk Vulnerability Scanning Tool 2 | 3 | ## Introduction 4 | 5 | [Snyk](https://snyk.io) was designed to serve as a developer security platform and with flexibility in mind. Its main goal is to help you detect and fix vulnerabilities in your application source code, third party dependencies, container images, and infrastructure configuration files (e.g. Kubernetes, Terraform, etc). 6 | 7 | **Snyk is divided into four components:** 8 | 9 | 1. [Snyk Code](https://docs.snyk.io/products/snyk-code) - helps you find and fix vulnerabilities in your application source code. 10 | 2. [Snyk Open Source](https://docs.snyk.io/products/snyk-open-source) - helps you find and fix vulnerabilities for any 3rd party libraries or dependencies your application relies on. 11 | 3. [Snyk Container](https://docs.snyk.io/products/snyk-container) - helps you find and fix vulnerabilities in container images or Kubernetes workloads used in your cluster. 12 | 4. [Snyk Infrastructure as Code](https://docs.snyk.io/products/snyk-infrastructure-as-code) - helps you find and fix misconfigurations in your Kubernetes manifests (Terraform, CloudFormation and Azure are supported as well). 13 | 14 | **Snyk can be run in different ways:** 15 | 16 | - Via the command line interface using [Snyk CLI](https://docs.snyk.io/snyk-cli). This the preferred way to run inside scripts and various automations, including CI/CD pipelines. 17 | - In the browser as [Snyk Web UI](https://docs.snyk.io/snyk-web-ui). Snyk offers a cloud based platform as well which you can use to investigate scan reports, receive hints and take required actions to fix reported issues, etc. You can also connect GitHub repositories and perform scans/audits from the web interface. 18 | - Via [IDE plugins](https://docs.snyk.io/ide-tools). This way you can spot issues early as you're developing using your favorite IDE (e.g. Visual Studio Code). 19 | - Programmatically, via the [Snyk API](https://support.snyk.io/hc/en-us/categories/360000665657-Snyk-API). Snyk API is available to customers on [paid plans](https://snyk.io/plans) and allows you to programmatically integrate with Snyk. 20 | 21 | **Is Snyk free?** 22 | 23 | Yes, the tooling is free, except [Snyk API](https://support.snyk.io/hc/en-us/categories/360000665657-Snyk-API) and some advanced features from the web UI (such as advanced reporting). There is also a limitation on the number of tests you can perform per month. 24 | 25 | See [pricing plans](https://snyk.io/plans/) for more information. 
26 | 27 | **Is Snyk open source?** 28 | 29 | Yes, the tooling, including the Snyk CLI, is open source. You can visit the [Snyk GitHub home page](https://github.com/snyk) to find more details about each component implementation. The cloud portal and paid features such as the REST API implementation are not open source. 30 | 31 | Another important set of concepts that Snyk uses is [Targets](https://docs.snyk.io/introducing-snyk/introduction-to-snyk-projects#targets) and [Projects](https://docs.snyk.io/introducing-snyk/introduction-to-snyk-projects#projects). 32 | 33 | Targets represent an external resource Snyk has scanned through an integration, the CLI, UI or API. Example targets are an SCM repository, a Kubernetes workload, etc. 34 | 35 | Projects, on the other hand, define the items Snyk scans at a given Target. A project includes: 36 | 37 | - A scannable item external to Snyk. 38 | - Configuration to define how to run that scan. 39 | 40 | You can read more about Snyk core concepts [here](https://docs.snyk.io/introducing-snyk/snyks-core-concepts). 41 | 42 | In this guide you will use [Snyk CLI](https://docs.snyk.io/snyk-cli) to perform risk analysis for your Kubernetes applications supply chain (container images, Kubernetes YAML manifests). Then, you will learn how to take the appropriate action to remediate the situation. Finally, you will learn how to integrate Snyk in a CI/CD pipeline to scan for vulnerabilities in the early stages of development. 43 | 44 | ## Table of Contents 45 | 46 | - [Introduction](#introduction) 47 | - [Requirements](#requirements) 48 | - [Step 1 - Getting to Know the Snyk CLI](#step-1---getting-to-know-the-snyk-cli) 49 | - [Step 2 - Getting to Know the Snyk Web UI](#step-2---getting-to-know-the-snyk-web-ui) 50 | - [Understanding Snyk Severity Levels](#understanding-snyk-severity-levels) 51 | - [Assisted Remediation for Reported Security Issues](#assisted-remediation-for-reported-security-issues) 52 | - [Step 3 - Using Snyk to Scan for Kubernetes Configuration Vulnerabilities in a CI/CD Pipeline](#step-3---using-snyk-to-scan-for-kubernetes-configuration-vulnerabilities-in-a-cicd-pipeline) 53 | - [GitHub Actions CI/CD Workflow Implementation](#github-actions-cicd-workflow-implementation) 54 | - [Step 4 - Investigating Snyk Scan Results and Fixing Reported Issues](#step-4---investigating-snyk-scan-results-and-fixing-reported-issues) 55 | - [Investigating and Fixing Container Images Vulnerabilities](#investigating-and-fixing-container-images-vulnerabilities) 56 | - [Investigating and Fixing Kubernetes Manifests Vulnerabilities](#investigating-and-fixing-kubernetes-manifests-vulnerabilities) 57 | - [Monitor your Projects on a Regular Basis](#monitor-your-projects-on-a-regular-basis) 58 | - [Treating Exceptions](#treating-exceptions) 59 | - [Snyk for IDEs](#snyk-for-ides) 60 | - [Step 5 - Triggering the Snyk CI/CD Workflow Automatically](#step-5---triggering-the-snyk-cicd-workflow-automatically) 61 | - [Step 6 - Enabling Slack Notifications](#step-6---enabling-slack-notifications) 62 | - [Conclusion](#conclusion) 63 | - [Additional Resources](#additional-resources) 64 | 65 | ## Requirements 66 | 67 | To complete all steps from this guide, you will need: 68 | 69 | 1. A working `DOKS` cluster running `Kubernetes version >=1.21` that you have access to.
For additional instructions on configuring a DigitalOcean Kubernetes cluster, see: [How to Set Up a DigitalOcean Managed Kubernetes Cluster (DOKS)](https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/tree/main/01-setup-DOKS#how-to-set-up-a-digitalocean-managed-kubernetes-cluster-doks). 70 | 2. A [DigitalOcean Docker Registry](https://docs.digitalocean.com/products/container-registry/). A free plan is enough to complete this tutorial. Also, make sure it is integrated with your DOKS cluster as explained [here](https://docs.digitalocean.com/products/container-registry/how-to/use-registry-docker-kubernetes/#kubernetes-integration). 71 | 3. [Kubectl](https://kubernetes.io/docs/tasks/tools) CLI for `Kubernetes` interaction. Follow these [instructions](https://www.digitalocean.com/docs/kubernetes/how-to/connect-to-cluster/) to connect to your cluster with `kubectl` and `doctl`. 72 | 4. [Snyk CLI](https://docs.snyk.io/snyk-cli/install-the-snyk-cli) to interact with [Snyk](https://snyk.io) vulnerabilities scanner. 73 | 5. A free [Snyk cloud account](https://app.snyk.io) account used to periodically publish scan results for your Kubernetes cluster to a nice dashboard. Also, the Snyk web interface helps you with investigations and risk analysis. Please follow [How to Create a Snyk Account](https://docs.snyk.io/tutorials/getting-started/snyk-integrations/snyk-account) documentation page. 74 | 6. A Slack workspace you own, and a dedicated [Slack app](https://api.slack.com/authentication/basics) to get notified of vulnerability scan issues reported by Snyk. 75 | 76 | ## Step 1 - Getting to Know the Snyk CLI 77 | 78 | You can manually scan for vulnerabilities via the `snyk` command line interface. The snyk CLI is designed to be used in various scripts and automations. A practical example is in a CI/CD pipeline implemented using various tools such as Tekton, Jenkins, GitHub Workflows, etc. 79 | 80 | When the snyk CLI is invoked it will immediately start the scanning process and report back issues in a specific format. By default it will print a summary table using the standard output or the console. Snyk can generate reports in other formats as well, such as JSON, HTML, SARIF, etc. 81 | 82 | You can opt to push the results to the [Snyk Cloud Portal](https://app.snyk.io) (or web UI) via the `--report` flag to store and visualize scan results later. 83 | 84 | **Note:** 85 | 86 | It's not mandatory to submit scan results to the Snyk cloud portal. The big advantage of using the Snyk portal is visibility because it gives you access to a nice dashboard where you can check all scan reports and see how much the Kubernetes supply chain is impacted. It also helps you on the long term with investigations and remediation hints. 87 | 88 | Snyk CLI is divided into several subcommands. Each subcommand is dedicated to a specific feature, such as: 89 | 90 | - [Open source scanning](https://docs.snyk.io/products/snyk-open-source/use-snyk-open-source-from-the-cli) - identifies current project dependencies and reports found security issues. 91 | - [Code scanning](https://docs.snyk.io/products/snyk-code/cli-for-snyk-code) - reports security issues found in your application source code. 92 | - [Image scanning](https://docs.snyk.io/products/snyk-container/snyk-cli-for-container-security) - reports security issues found in container images (e.g. Docker). 
93 | - [Infrastructure as code files scanning](https://docs.snyk.io/products/snyk-infrastructure-as-code/snyk-cli-for-infrastructure-as-code) - reports security issues found in configuration files used by Kubernetes, Terraform, etc. 94 | 95 | **Note:** 96 | 97 | Before moving on, please make sure to create a [free account](https://docs.snyk.io/tutorials/getting-started/snyk-integrations/snyk-account) using the Snyk web UI. Also, snyk CLI needs to be [authenticated](https://docs.snyk.io/snyk-cli/authenticate-the-cli-with-your-account) with your cloud account as well in order for some commands/subcommands to work (e.g. `snyk code test`). 98 | 99 | A few examples to try with Snyk CLI: 100 | 101 | 1. Open source scanning: 102 | 103 | ```shell 104 | # Scans your project code from current directory 105 | snyk test 106 | 107 | # Scan a specific path from your project directory (make sure to replace the `<>` placeholders accordingly) 108 | snyk test 109 | ``` 110 | 111 | 2. Code scanning: 112 | 113 | ```shell 114 | # Scan your project code from current directory 115 | snyk code test 116 | 117 | # Scan a specific path from your project directory (make sure to replace the `<>` placeholders accordingly) 118 | snyk code test 119 | ``` 120 | 121 | 3. Image scanning: 122 | 123 | ```shell 124 | # Scans the debian docker image by pulling it first 125 | snyk container debian 126 | 127 | # Give more context to the scanner by providing a Dockerfile (make sure to replace the `<>` placeholders accordingly) 128 | snyk container debian --file= 129 | ``` 130 | 131 | 4. Infrastructure as code scanning: 132 | 133 | ```shell 134 | # Scan your project code from current directory 135 | snyk iac test 136 | 137 | # Scan a specific path from your project directory (make sure to replace the `<>` placeholders accordingly) 138 | snyk iac test 139 | 140 | # Scan Kustomize based projects (first you need to render the final template, then pass it to the scanner) 141 | kustomize build > kubernetes.yaml 142 | snyk iac test kubernetes.yaml 143 | ``` 144 | 145 | Snyk CLI provides help pages for all available options. Below command can be used to print the main help page: 146 | 147 | ```shell 148 | snyk --help 149 | ``` 150 | 151 | The output looks similar to: 152 | 153 | ```text 154 | CLI commands help 155 | Snyk CLI scans and monitors your projects for security vulnerabilities and license issues. 156 | 157 | For more information visit the Snyk website https://snyk.io 158 | 159 | For details see the CLI documentation https://docs.snyk.io/features/snyk-cli 160 | 161 | How to get started 162 | 1. Authenticate by running snyk auth 163 | 2. Test your local project with snyk test 164 | 3. Get alerted for new vulnerabilities with snyk monitor 165 | 166 | Available commands 167 | To learn more about each Snyk CLI command, use the --help option, for example, snyk auth --help or 168 | snyk container --help 169 | 170 | snyk auth 171 | Authenticate Snyk CLI with a Snyk account. 172 | 173 | snyk test 174 | Test a project for open source vulnerabilities and license issues. 175 | ... 176 | ``` 177 | 178 | Each snyk CLI command (or subcommand) has an associated help page as well which can be accessed via `snyk [command] --help`. 179 | 180 | Please visit the official [snyk CLI documentation page](https://docs.snyk.io/snyk-cli) for more examples. 
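Since the CLI was designed for scripting, you can also capture machine-readable output and let the exit code drive a shell script - the same mechanism CI/CD systems use to fail a job. The snippet below is a small sketch building on the kustomize example above; it assumes a `kustomize/` folder with Kubernetes manifests and that `snyk auth` has already been run:

```shell
# Render the kustomize project and scan the resulting manifest, saving the report as JSON
kustomize build kustomize/ > kubernetes.yaml
snyk iac test kubernetes.yaml --json > snyk-iac-report.json

# Snyk exits with a non-zero code when issues are found, so the scan result can gate the script
if [ $? -ne 0 ]; then
  echo "Snyk reported issues - inspect snyk-iac-report.json before deploying"
  exit 1
fi
```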
181 | 182 | ## Step 2 - Getting to Know the Snyk Web UI 183 | 184 | After you [sign up for a Snyk account, authenticate and log in to Snyk](https://docs.snyk.io/getting-started), the Web UI opens to the Dashboard, with a wizard to guide you through setup steps: 185 | 186 | - Identifying where the code you want to monitor in Snyk is located. 187 | - Defining which projects within your code you want Snyk to scan. 188 | - Connecting Snyk to the relevant projects to scan them. 189 | - Reviewing the results of your Snyk scan. 190 | 191 | The following features are available via the web UI: 192 | 193 | - [Explore the dashboard](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#dashboard) 194 | - [Investigate reports](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#reports) 195 | - [Manage projects](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#manage-your-projects) 196 | - [Manage integrations](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#manage-your-integrations) 197 | - [Manage group or organization members](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#manage-organization-or-group-members) 198 | - [View Snyk updates](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#view-product-updates) 199 | - [Get help](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#view-helpful-resources) 200 | - [Manage your user account](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#manage-account-preferences-and-settings) 201 | 202 | Please visit the official documentation page to learn more about the [Snyk web UI](https://docs.snyk.io/snyk-web-ui). 203 | 204 | ### Understanding Snyk Severity Levels 205 | 206 | On each scan, Snyk checks your resources for potential security risks and assesses how each one impacts your system. A severity level is applied to a vulnerability to indicate the risk posed by that vulnerability in an application. 207 | 208 | Severity levels can take one of the following values: 209 | 210 | - **Low**: the application may expose some data allowing vulnerability mapping, which can be used with other vulnerabilities to attack the application. 211 | - **Medium**: may allow attackers under some conditions to access sensitive data on your application. 212 | - **High**: may allow attackers to access sensitive data on your application. 213 | - **Critical**: may allow attackers to access sensitive data and run code on your application. 214 | 215 | The Common Vulnerability Scoring System (CVSS) determines the severity level of a vulnerability. Snyk uses [CVSS framework version 3.1](https://www.first.org/cvss/v3-1/) to communicate the characteristics and severity of vulnerabilities. 216 | 217 | The table below shows the mapping for each severity level: 218 | 219 | | **Severity level** | **CVSS score** | 220 | |:------------------:|:--------------:| 221 | | Low | 0.0 - 3.9 | 222 | | Medium | 4.0 - 6.9 | 223 | | High | 7.0 - 8.9 | 224 | | Critical | 9.0 - 10.0 | 225 | 226 | In this guide, the **medium** severity level is used as the default threshold in the example CI/CD pipeline. Usually you will want to assess high and critical issues first, but in some cases medium level issues need attention as well. In terms of security and as a general rule of thumb, you will usually want to be very strict.
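To see how severity levels translate into pass/fail decisions, you can pass the `--severity-threshold` flag to the CLI and compare the results. A short sketch, assuming a local `kustomize/` folder with Kubernetes manifests (any project path works):

```shell
# Report every issue found, regardless of severity
snyk iac test kustomize/

# Report only high and critical issues - low and medium findings are filtered out
snyk iac test kustomize/ --severity-threshold=high
```

With the stricter threshold, the command only fails when at least one issue at or above the chosen level is found, which is how the example CI/CD pipeline used later in this guide decides to pass or fail.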
227 | 228 | Please visit the [official documentation](https://docs.snyk.io/introducing-snyk/snyks-core-concepts/severity-levels) page to learn more about severity levels. 229 | 230 | ### Assisted Remediation for Reported Security Issues 231 | 232 | Another useful feature provided by the Snyk web UI is security issues remediation assistance. It means, you receive a recommendation about how to fix each security issue found by the snyk scanner. This is very important because it simplifies the process and closes the loop for each iteration that you need to perform to fix each reported security issue. 233 | 234 | Below picture illustrates this process better: 235 | 236 | ![Security Compliance Scanning Process](assets/images/snyk/security_compliance_scanning_process.png) 237 | 238 | For each reported issue there is a button which you can click on and get remediation assistance: 239 | 240 | ![Snyk Issue Remediation Assistance](assets/images/snyk/issue-card-details.png) 241 | 242 | The main procedure is the same for each reported issue. It means, you click on the show details button, then take the suggested steps to apply the fix. 243 | 244 | ## Step 3 - Using Snyk to Scan for Kubernetes Configuration Vulnerabilities in a CI/CD Pipeline 245 | 246 | How do you benefit from embedding a security compliance scanning tool in your CI/CD pipeline and avoid unpleasant situations in a production environment? 247 | 248 | It all starts at the foundation level where software development starts. In general, you will want to use a dedicated environment for each stage. So, in the early stages of development when application code changes very often, you should use a dedicated development environment (called the lower environment usually). Then, the application gets more and more refined in the QA environment where QA teams perform manual and/or automated testing. Next, if the application gets the QA team approval it is promoted to the upper environments such as staging, and finally into production. In this process, where the application is promoted from one environment to another, a dedicated pipeline runs which continuously scans application artifacts and checks the severity level. If the severity level doesn't meet a specific threshold, the pipeline fails immediately and application artifacts promotion to production is stopped in the early stages. 249 | 250 | So, the security scanning tool (e.g. snyk) acts as a gatekeeper stopping unwanted artifacts getting in your production environment from the early stages of development. In the same manner, upper environments pipelines use snyk to allow or forbid application artifacts entering the final production stage. 251 | 252 | ### GitHub Actions CI/CD Workflow Implementation 253 | 254 | In this step you will learn how to create and test a sample CI/CD pipeline with integrated vulnerability scanning via GitHub workflows. To learn the fundamentals of using Github Actions with DigitalOcean Kubernetes, refer to this [tutorial](https://docs.digitalocean.com/tutorials/enable-push-to-deploy/). 255 | 256 | The pipeline provided in the following section builds and deploys the [game-2048-example](https://github.com/digitalocean/kubernetes-sample-apps/tree/master/game-2048-example) application from the DigitalOcean [kubernetes-sample-apps](https://github.com/digitalocean/kubernetes-sample-apps) repository. 
257 | 258 | At a high level overview, the [game-2048 CI/CD workflow](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/.github/workflows/game-2048-snyk.yaml) provided in the **kubernetes-sample-apps** repo is comprised of the following stages: 259 | 260 | 1. Application build and test stage - builds main application artifacts and runs automated tests. 261 | 2. Snyk application image scan stage - scans application docker image for known vulnerabilities. Acts as a gate and the final pipeline state (pass/fail) is dependent on this step. In case of failure a Slack notification is sent. 262 | 3. Application image build and push stage - builds and tags the application image using the latest git commit SHA. Then the image is pushed to DOCR. 263 | 4. Snyk infrastructure as code (IAC) scan stage - scans for known vulnerabilities in the Kubernetes YAML manifests associated with the application. Acts as a gate, and the final pipeline state (pass/fail) is dependent on this step. In case of failure a Slack notification is sent as well. 264 | 5. Application deployment stage - deploys the application to Kubernetes (DOKS). 265 | 266 | Below diagram illustrates each job from the pipeline and the associated steps with actions (only relevant configuration is shown): 267 | 268 | ![GitHub Workflow Configuration](assets/images/snyk/gh_workflow_diagram_code.png) 269 | 270 | **Notes:** 271 | 272 | - In case of kustomize based projects it's best to render the final manifest in order to capture and scan everything (including remote resources). On the other hand, it can be hard to identify which Kubernetes resource needs to be patched. This is due to the fact that the resulting manifest file is comprised of all resources to be applied. This is how kustomize works - it gathers all configuration fragments from each overlay and applies them over a base to build the final compound. 273 | - You can also tell Snyk to scan the entire folder where you keep your kustomize configurations. This way, it's easier to identify what resource needs to be fixed in your repository. Remote resources used by kustomize need to be fixed upstream. Also, Kubernetes secrets and ConfigMaps generated via kustomize are not captured. 274 | 275 | How do you fail the pipeline if a certain security compliance level is not met? 276 | 277 | Snyk CLI provides a flag named `--severity-threshold` for this purpose. This flag correlates with the overall severity level computed after each scan. In case of Snyk, the severity level takes one of the following values: **low**, **medium**, **high**, or **critical** You can fail or pass the pipeline based on the severity level value and stop application deployment if conditions are not met. 278 | 279 | Below picture illustrates the flow for the example CI/CD pipeline used in this guide: 280 | 281 | ![Snyk Pipeline Flow](assets/images/snyk/pipeline_flow.png) 282 | 283 | Please follow below steps to create and test the snyk CI/CD GitHub workflow provided in the [kubernetes-sample-apps](https://github.com/digitalocean/kubernetes-sample-apps) GitHub repository: 284 | 285 | 1. Fork the [kubernetes-sample-apps](https://github.com/digitalocean/kubernetes-sample-apps) GitHub repository. 286 | 2. 
Create the following [GitHub encrypted secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository) for your **kubernetes-sample-apps** copy (**Settings Tab** -> **Secrets** -> **Actions**): 287 | - `DIGITALOCEAN_ACCESS_TOKEN` - holds your DigitalOcean account token. 288 | - `DOCKER_REGISTRY` - holds your DigitalOcean docker registry name including the endpoint (e.g. `registry.digitalocean.com/sample-apps`). 289 | - `DOKS_CLUSTER` - holds your DOKS cluster name. You can run the following command to get your DOKS cluster name: `doctl k8s cluster list --no-header --format Name`. 290 | - `SNYK_TOKEN` - holds your Snyk user account ID - run: `snyk config get api` to get the ID. If that doesn't work, you can retrieve the token from your [user account settings](https://docs.snyk.io/snyk-web-ui/getting-started-with-the-snyk-web-ui#manage-account-preferences-and-settings) page. 291 | - `SLACK_WEBHOOK_URL` - holds your [Slack incoming webhook URL](https://api.slack.com/messaging/webhooks) used for snyk scan notifications. 292 | 3. Navigate to the **Actions** tab of your forked repo and select the **Game 2048 Snyk CI/CD Example** workflow: 293 | ![Game 2048 Main Workflow](assets/images/snyk/game-2048-wf-nav.png) 294 | 4. Click on the **Run Workflow** button and leave the default values: 295 | ![Game 2048 Workflow Triggering](assets/images/snyk/game-2048_wf_start.png) 296 | 297 | A new entry should appear in below list after clicking the **Run Workflow** green button. Select the running workflow to observe the pipeline progress: 298 | 299 | ![Game 2048 Workflow Progress](assets/images/snyk/game-2048-wf-progress.png) 300 | 301 | The pipeline will fail and stop when the **snyk-container-security-check** job runs. This is expected because the default severity level value used in the workflow input, which is **medium**, doesn't meet the expectations. You should also receive a Slack notifications with details about the workflow run: 302 | 303 | ![Game 2048 Workflow Slack Notification](assets/images/snyk/game-2048-wf-slack-notification.png) 304 | 305 | In the next steps, you will learn how to investigate the snyk scan report to fix the issues, lower the severity level, and pass the pipeline. 306 | 307 | ## Step 4 - Investigating Snyk Scan Results and Fixing Reported Issues 308 | 309 | Whenever the severity level threshold is not met, the [game-2048 GitHub workflow](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/.github/workflows/game-2048-snyk.yaml) will fail and a Slack notification is sent with additional details. You also get security reports published to GitHub and accessible in the **Security** tab of your project repository. 310 | 311 | The **game-2048 workflow** runs two security checks: 312 | 313 | 1. Container image security checks - the **snyk-container-security-check** job is used for this purpose. Equivalent snyk command being used is - `snyk container test : --file=/path/to/game-2048/Dockerfile`. 314 | 2. Kubernetes manifests misconfiguration checks - the **snyk-iac-security-check** job is used for this purpose. Equivalent snyk command being used is - `snyk iac test /path/to/project/kubernetes/manifests`. 315 | 316 | Thus, lowering the severity level and passing the workflow consists of: 317 | 318 | 1. Investigating and fixing issues reported by the **snyk-container-security-check** job. 319 | 2. Investigating and fixing issues reported by the **snyk-iac-security-check** job. 
320 | 321 | Next, you will learn how to address each in turn. 322 | 323 | ### Investigating and Fixing Container Images Vulnerabilities 324 | 325 | The sample pipeline used in this guide runs security checks for the **game-2048 container image** and the associated [Dockerfile](https://github.com/v-ctiutiu/kubernetes-sample-apps/blob/master/game-2048-example/Dockerfile) via the **snyk-container-security-check** job. 326 | 327 | The **snyk-container-security-check** job runs the following steps: 328 | 329 | 1. Builds the game-2048 application Docker image locally. This step is implemented using the [docker-build-push](https://github.com/docker/build-push-action) GitHub action. 330 | 2. Runs Snyk security checks for the application container image and Dockerfile. This step is implemented using **snyk container test** command. Scan results are exported using the [GitHub SARIF](https://docs.github.com/en/code-security/code-scanning/integrating-with-code-scanning/sarif-support-for-code-scanning) format. Security level threshold is controlled via the **--severity-threshold** argument - it is either set to the `snyk_fail_threshold` input parameter if the workflow is manually triggered, or to `SNYK_FAIL_THRESHOLD` environment variable, if workflow runs automatically. 331 | 3. Scan results (SARIF format) are published in the security tab of your application repository. This step is implemented using [codeql](https://github.com/github/codeql-action) GitHub action. 332 | 333 | Below snippet shows the main logic of the **snyk-container-security-check** job: 334 | 335 | ```yaml 336 | - name: Build App Image for Snyk container scanning 337 | uses: docker/build-push-action@v3 338 | with: 339 | context: ${{ env.PROJECT_DIR }} 340 | push: false 341 | tags: "${{ secrets.DOCKER_REGISTRY }}/${{ env.PROJECT_NAME }}:${{ github.sha }}" 342 | 343 | - name: Check application container vulnerabilities 344 | run: | 345 | snyk container test "${{ secrets.DOCKER_REGISTRY }}/${{ env.PROJECT_NAME }}:${{ github.sha }}" \ 346 | --file=Dockerfile \ 347 | --severity-threshold=${{ github.event.inputs.snyk_fail_threshold || env.SNYK_FAIL_THRESHOLD }} \ 348 | --target-name=${{ env.PROJECT_NAME }} \ 349 | --target-reference=${{ env.ENVIRONMENT }} \ 350 | --sarif --sarif-file-output=snyk-container-scan.sarif 351 | env: 352 | SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }} 353 | working-directory: ${{ env.PROJECT_DIR }} 354 | 355 | - name: Upload Snyk report SARIF file 356 | if: ${{ always() }} 357 | uses: github/codeql-action/upload-sarif@v2 358 | with: 359 | sarif_file: ${{ env.PROJECT_DIR }}/snyk-container-scan.sarif 360 | category: snyk-container-scan 361 | ``` 362 | 363 | In order to fix the reported issues, you need to check first the security tab of your **kubernetes-sample-apps** repository fork: 364 | 365 | ![Snyk SARIF Scan Results](assets/images/snyk/gh_code_scanning_results.png) 366 | 367 | You will see a bunch of vulnerabilities for the base docker image in this case. 
Click on each to expand and see more details: 368 | 369 | ![Snyk SARIF Issue Details](assets/images/snyk/gh_scan_report_sample_issue.png) 370 | 371 | To finish investigations and see recommendations offered by Snyk, you need to inspect the **snyk-container-security-check** job output from the main workflow: 372 | 373 | ![Snyk Container Fix Recommendations](assets/images/snyk/gh_workflow_container_scan_job_recommendation.png) 374 | 375 | **Note:** 376 | 377 | **Snyk container test** offers the possibility to export results in SARIF format, but it doesn't know how to upload reports to the Snyk cloud portal. On the other hand, **snyk container monitor** offers the possibility to upload results to the Snyk cloud portal, but it cannot export SARIF. So this guide is using snyk container test with SARIF exporting feature. Some recommendations are not available in the SARIF output unfortunately. So, you must also look in the job console output for recommendations. 378 | 379 | The **snyk-container-security-check** job output shows that Snyk recommends to update the base image version from **node:16-slim** to **node:18.6.0-slim**. This change eliminates the high risk issue(s), and also lowers the number of other reported vulnerabilities from **70** to **44** - this is a substantial reduction of almost **50%** !!! 380 | 381 | Now, open the game-2048 application **Dockerfile** from your fork, and change the **FROM** directives to point to the new version (**node:18.6.0-slim** at this time of writing): 382 | 383 | ```dockerfile 384 | FROM node:18.6.0-slim AS builder 385 | WORKDIR /usr/src/app 386 | COPY . . 387 | RUN npm install --include=dev 388 | # 389 | # Build mode can be set via NODE_ENV environment variable (development or production) 390 | # See project package.json and webpack.config.js 391 | # 392 | ENV NODE_ENV=development 393 | RUN npm run build 394 | 395 | FROM node:18.6.0-slim 396 | RUN npm install http-server -g 397 | RUN mkdir /public 398 | WORKDIR /public 399 | COPY --from=builder /usr/src/app/dist/ ./ 400 | EXPOSE 8080 401 | USER 1000 402 | CMD ["http-server"] 403 | ``` 404 | 405 | Finally, commit changes to your GitHub repository and trigger the workflow again (leaving the default values on). This time the **snyk-container-security-check** job should pass: 406 | 407 | ![Game 2048 Workflow Snyk Container Scan Success](assets/images/snyk/gh_workflow_container_scan_success.png) 408 | 409 | Going to the security tab of your project, there should be no issues reported. 410 | 411 | How do you make sure to reduce base image vulnerabilities in the future? 412 | 413 | The best approach is to use a base image with a minimal footprint - the lesser the binaries or dependencies in the base image, the better. Another good practice is to continuously monitor your projects, as explained in the [Monitor your Projects on a Regular Basis](#monitor-your-projects-on-a-regular-basis) section of this guide. 414 | 415 | You will notice that the pipeline still fails, but this time at the **snyk-iac-security-check** phase. This is expected because there are security issues with the Kubernetes manifests used to deploy the application. In the next section, you will learn how to investigate this situation and apply Snyk security recommendations to fix the reported issues. 416 | 417 | ### Investigating and Fixing Kubernetes Manifests Vulnerabilities 418 | 419 | The pipeline is still failing and stops at the **snyk-iac-security-check** job. 
This is expected because the default severity level value used in the workflow input, which is **medium**, doesn't meet the security requirements for the project. 420 | 421 | The **snyk-iac-security-check** job checks for Kubernetes manifests vulnerabilities (or misconfigurations), and executes the following steps: 422 | 423 | 1. Snyk security checks for Kubernetes manifests from the **game-2048-example** project directory. This step is implemented using **snyk iac test** command. Scan results are exported using the [GitHub SARIF](https://docs.github.com/en/code-security/code-scanning/integrating-with-code-scanning/sarif-support-for-code-scanning) format. Security level threshold is controlled via the **--severity-threshold** argument - it is either set to the `snyk_fail_threshold` input parameter if the workflow is manually triggered, or to `SNYK_FAIL_THRESHOLD` environment variable, if workflow runs automatically. Finally, the **--report** argument is also used to send scan results to the Snyk cloud portal. 424 | 2. Scan results (SARIF format) are published to the security tab of your application repository. This step is implemented using the [codeql](https://github.com/github/codeql-action) GitHub action. 425 | 426 | Below snippet shows the actual implementation of each step from the **snyk-iac-security-check** job: 427 | 428 | ```yaml 429 | - name: Check for Kubernetes manifests vulnerabilities 430 | run: | 431 | snyk iac test \ 432 | --severity-threshold=${{ github.event.inputs.snyk_fail_threshold || env.SNYK_FAIL_THRESHOLD }} \ 433 | --target-name=${{ env.PROJECT_NAME }} \ 434 | --target-reference=${{ env.ENVIRONMENT }} \ 435 | --sarif --sarif-file-output=snyk-iac-scan.sarif \ 436 | --report 437 | env: 438 | SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }} 439 | working-directory: ${{ env.PROJECT_DIR }} 440 | 441 | - name: Upload Snyk IAC SARIF file 442 | if: ${{ always() }} 443 | uses: github/codeql-action/upload-sarif@v2 444 | with: 445 | sarif_file: ${{ env.PROJECT_DIR }}/snyk-iac-scan.sarif 446 | category: snyk-iac-scan 447 | ``` 448 | 449 | In order to fix the reported issues you have two options: 450 | 451 | 1. Use the Snyk cloud portal and access the **game-2048** project to check for details: 452 | ![Snyk Cloud Portal Option](assets/images/snyk/snyk_cloud_portal_option.png) 453 | 2. Use the security tab of your game-2048 app repository to check for details: 454 | ![Snyk GitHub Security Option](assets/images/snyk/snyk_gh_security_option.png) 455 | 456 | Either way, you will get recommendations about how to fix the reported issues. 457 | 458 | For this guide you will be using the Snyk cloud portal to investigate the reported security issues. First, click on the **game-2048-example** entry from the projects list, then select the **kustomize/resources/deployment.yaml** file: 459 | 460 | ![Game 2048 Repo Scan Entry](assets/images/snyk/game-2048-repo-scan.png) 461 | 462 | Next, tick the **Medium** checkbox in the **Severity** submenu from the left to display only **medium** level issues: 463 | 464 | ![Game 2048 Repo Medium Severity Issues Results](assets/images/snyk/game-2048-medium-level-issues.png) 465 | 466 | Then, you can inspect each reported issue card and check the details. 
Go ahead and click on the **Show more details** button from the **Container is running without root user control** card - you will receive more details about the current issue, and important hints about how to fix it: 467 | 468 | ![Snyk Issue Card Details](assets/images/snyk/issue-card-details.png) 469 | 470 | After collecting all information from each card, you can go ahead and edit the [deployment.yaml](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/game-2048-example/kustomize/resources/deployment.yaml) file from your repo (located in the `game-2048-example/kustomize/resources` subfolder). The fixes are already in place, you just need to uncomment the last lines from the file. The final `deployment.yaml` file should look like below: 471 | 472 | ```yaml 473 | --- 474 | apiVersion: apps/v1 475 | kind: Deployment 476 | metadata: 477 | name: game-2048 478 | spec: 479 | replicas: 1 480 | selector: 481 | matchLabels: 482 | app: game-2048 483 | strategy: 484 | type: RollingUpdate 485 | template: 486 | metadata: 487 | labels: 488 | app: game-2048 489 | spec: 490 | containers: 491 | - name: backend 492 | # Replace the `<>` placeholders with your docker registry info 493 | image: registry.digitalocean.com/sample-apps/2048-game:latest 494 | ports: 495 | - name: http 496 | containerPort: 8080 497 | resources: 498 | requests: 499 | cpu: 100m 500 | memory: 50Mi 501 | limits: 502 | cpu: 200m 503 | memory: 100Mi 504 | securityContext: 505 | readOnlyRootFilesystem: true 506 | runAsNonRoot: true 507 | allowPrivilegeEscalation: false 508 | capabilities: 509 | drop: 510 | - all 511 | ``` 512 | 513 | What changed? The following security fixes were applied: 514 | 515 | - `readOnlyRootFilesystem` - runs container image in read only (cannot alter files by `kubectl exec` in the container). 516 | - `runAsNonRoot` - runs as the non root user defined by the [USER](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/game-2048-example/Dockerfile#L18) directive from the game-2048 project [Dockerfile](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/game-2048-example/Dockerfile). 517 | - `allowPrivilegeEscalation` - setting **allowPrivilegeEscalation** to **false** ensures that no child process of a container can gain more privileges than its parent. 518 | - `capabilities.drop` - To make containers more secure, you should provide containers with the least amount of privileges it needs to run. In practice, you drop everything by default, then add required capabilities step by step. You can learn more about container capabilities [here](https://learn.snyk.io/lessons/container-does-not-drop-all-default-capabilities/kubernetes/). 519 | 520 | Finally, commit the changes for the **deployment.yaml** file and push to main branch. After manually triggering the workflow it should complete successfully this time: 521 | 522 | ![Game 2048 Workflow Success](assets/images/snyk/game-2048-wf-success.png) 523 | 524 | You should also receive a green Slack notification from the snyk scan job. Navigate to the Snyk portal link and check if the issues that you fixed recently are gone - there should be no **medium level** issues reported. 525 | 526 | A few final checks can be performed as well on the Kubernetes side to verify if the reported issues were fixed: 527 | 528 | 1. 
Check if the game-2048 deployment has a read-only (immutable) filesystem by writing to the **index.html** file used by the game-2048 application: 529 | 530 | ```shell 531 | kubectl exec -it deployment/game-2048 -n game-2048 -- /bin/bash -c "echo > /public/index.html" 532 | ``` 533 | 534 | The output looks similar to: 535 | 536 | ```text 537 | /bin/bash: /public/index.html: Read-only file system 538 | command terminated with exit code 1 539 | ``` 540 | 541 | 2. Check if the container runs as non-root user (should print a integer number different than zero - e.g. **1000**): 542 | 543 | ```shell 544 | kubectl exec -it deployment/game-2048 -n game-2048 -- id -u 545 | ``` 546 | 547 | If all checks pass then you applied the required security recommendations successfully. 548 | 549 | ### Monitor your Projects on a Regular Basis 550 | 551 | The vulnerability scan automation you implemented so far is a good starting point, but not perfect. Why? 552 | 553 | One issue with the current approach is that you never know when new issues are reported for the assets you already deployed in your environments. In other words, you assessed the security risks and took the measures to fix the issues at one specific point in time - when your CI/CD automation was executed. 554 | 555 | But, what if new issues are reported meanwhile and your application is vulnerable again? Snyk helps you overcome this situation via the [monitoring](https://docs.snyk.io/snyk-cli/test-for-vulnerabilities/monitor-your-projects-at-regular-intervals) feature. The monitoring feature of Snyk helps you address new vulnerabilities, which are constantly disclosed. When combined with the Snyk Slack integration (explained in [Step 6 - Enabling Slack Notifications](#step-6---enabling-slack-notifications)), you can take immediate actions to fix new disclosed issues that may affect your application in a production environment. 556 | 557 | To benefit from this feature all you have to do is just use the **snyk monitor** command before any deploy steps in your CI/CD pipeline. The syntax is very similar to the **snyk test** commands (one of the cool things about snyk CLI is that it was designed with uniformity in mind). The snyk monitor command will send a snapshot to the Snyk cloud portal, and from there you will get notified about newly disclosed vulnerabilities for your project. 558 | 559 | In terms of the GitHub workflow automation, you can snyk monitor your application container in the **snyk-container-security-check** job, after testing for vulnerabilities. Below snippet shows a practical implementation for the pipeline used in this guide (some steps were omitted for clarity): 560 | 561 | ```yaml 562 | snyk-container-security-check: 563 | runs-on: ubuntu-latest 564 | needs: build-and-test-application 565 | 566 | steps: 567 | - name: Checkout 568 | uses: actions/checkout@v3 569 | 570 | ... 
571 | 572 | - name: Check application container vulnerabilities 573 | run: | 574 | snyk container test "${{ secrets.DOCKER_REGISTRY }}/${{ env.PROJECT_NAME }}:${{ github.sha }}" \ 575 | --file=${{ env.PROJECT_DIR }}/Dockerfile \ 576 | --severity-threshold=${{ github.event.inputs.snyk_fail_threshold || env.SNYK_FAIL_THRESHOLD }} \ 577 | --target-name=${{ env.PROJECT_NAME }} \ 578 | --target-reference=${{ env.ENVIRONMENT }} \ 579 | --sarif-file-output=snyk-container-scan.sarif 580 | env: 581 | SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }} 582 | 583 | - name: Monitor the application container using Snyk 584 | run: | 585 | snyk container monitor "${{ secrets.DOCKER_REGISTRY }}/${{ env.PROJECT_NAME }}:${{ github.sha }}" \ 586 | --file=${{ env.PROJECT_DIR }}/Dockerfile 587 | env: 588 | SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }} 589 | 590 | ... 591 | ``` 592 | 593 | Above snippet shows an additional step called **Monitor the application container using Snyk** where the actual snyk container monitor runs. 594 | 595 | After the snyk monitor command runs, you can log in to the Snyk Web UI to see the latest snapshot and history of your [project](https://app.snyk.io/monitor/): 596 | 597 | ![Game 2048 Image Snyk Monitoring](assets/images/snyk/snyk_game-2048_container_monitor.png) 598 | 599 | You can test and monitor your application source code as well in the **build-and-test-application** job. Below snippet shows an example implementation for the GitHub workflow used in this guide: 600 | 601 | ```yaml 602 | build-and-test-application: 603 | runs-on: ubuntu-latest 604 | 605 | steps: 606 | - name: Checkout 607 | uses: actions/checkout@v3 608 | 609 | - name: npm install, build, and test 610 | run: | 611 | npm install 612 | npm run build --if-present 613 | npm test 614 | working-directory: ${{ env.PROJECT_DIR }} 615 | 616 | - name: Snyk code test and monitoring 617 | run: | 618 | snyk test ${{ env.PROJECT_DIR }} 619 | snyk monitor ${{ env.PROJECT_DIR }} 620 | env: 621 | SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }} 622 | ``` 623 | 624 | Next, you will receive Slack notifications on a regular basis about newly disclosed vulnerabilities for your project. 625 | 626 | ### Treating Exceptions 627 | 628 | There are situations when you don't want the final report to be affected by some issues which your team consider is safe to ignore. Snyk offers a builtin feature to manage exceptions and overcome this situation. 629 | 630 | You can read more about this feature [here](https://docs.snyk.io/features/fixing-and-prioritizing-issues/issue-management/ignore-issues). 631 | 632 | ### Snyk for IDEs 633 | 634 | A more efficient approach is where you integrate vulnerability scan tools directly in your favorite IDE (or Integrated Development Environment). This way you can detect and fix security issues ahead of time in the software development cycle. 635 | 636 | Snyk offers support for a variety of IDEs, such as: 637 | 638 | 1. [Eclipse plugin](https://docs.snyk.io/ide-tools/eclipse-plugin). 639 | 2. [JetBrains plugin](https://docs.snyk.io/ide-tools/jetbrains-plugins). 640 | 3. [Visual Studio extension](https://docs.snyk.io/ide-tools/visual-studio-extension). 641 | 4. [Visual Studio Code extension](https://docs.snyk.io/ide-tools/visual-studio-code-extension-for-snyk-code). 642 | 643 | Above plugins will help you detect and fix issues in the early stages of development, thus eliminating frustration, costs, and security flaws in production systems. Also, it helps you to reduce the iterations end human effort on the long run. 
As an example, for each reported security issue by your CI/CD automation you need to go back and fix the issue in your code, commit changes, wait for the CI/CD automation again, then repeat in case of failure. 644 | 645 | You can read more about these features on the [Snyk for IDEs](https://docs.snyk.io/ide-tools) page from the official documentation. 646 | 647 | ## Step 5 - Triggering the Snyk CI/CD Workflow Automatically 648 | 649 | You can set the workflow to trigger automatically on each commit or PR against the main branch by uncommenting the following lines at the top of the [game-2048-snyk.yaml](https://github.com/digitalocean/kubernetes-sample-apps/blob/master/.github/workflows/game-2048-snyk.yaml) file: 650 | 651 | ```yaml 652 | on: 653 | push: 654 | branches: [ master ] 655 | pull_request: 656 | branches: [ master ] 657 | ``` 658 | 659 | After editing the file, commit the changes to your main branch and you should be ready to go. 660 | 661 | ## Step 6 - Enabling Slack Notifications 662 | 663 | You can set up Snyk to send Slack alerts about new vulnerabilities discovered in your projects, and about new upgrades or patches that have become available. 664 | 665 | To set it up, you will need to generate a Slack webhook. You can either do this via [Incoming WebHooks](https://api.slack.com/messaging/webhooks) or by creating your own [Slack App](https://api.slack.com/start/building). Once you have generated your Slack Webhook URL, go to your 'Manage organization’ settings, enter the URL, and click the **Connect** button: 666 | 667 | ![Snyk Slack Integration](assets/images/snyk/slack_notifications.png) 668 | 669 | ## Conclusion 670 | 671 | In this guide you learned how to use a pretty flexible and powerful Kubernetes vulnerability scanning tool - [Snyk](https://snyk.io). Then, you learned how to integrate the Snyk vulnerability scanning tool in a traditional CI/CD pipeline implemented using GitHub workflows. 672 | 673 | Finally, you learned how to investigate vulnerability scan reports, apply fixes to remediate the situation, and reduce security risks to a minimum via a practical example - the [game-2048](https://github.com/digitalocean/kubernetes-sample-apps/tree/master/game-2048-example) application from the [kubernetes-sample-apps](https://github.com/digitalocean/kubernetes-sample-apps) repository. 674 | 675 | ## Additional Resources 676 | 677 | You can learn more by reading the following additional resources: 678 | 679 | - [Kubernetes Security Best Practices Article from Snyk](https://snyk.io/learn/kubernetes-security/) 680 | - [More about Snyk Security Levels](https://docs.snyk.io/introducing-snyk/snyks-core-concepts/severity-levels) 681 | - [Vulnerability Assessment](https://snyk.io/learn/vulnerability-assessment/) 682 | - [Snyk Targets and Projects](https://docs.snyk.io/introducing-snyk/introduction-to-snyk-projects) 683 | - [Snyk for IDEs](https://docs.snyk.io/ide-tools) 684 | - [Discover more Snyk Integrations](https://docs.snyk.io/integrations) 685 | - [Snyk Web UI Users and Group Management](https://docs.snyk.io/features/user-and-group-management) 686 | - [Fixing and Prioritizing Issues Reported by Snyk](https://docs.snyk.io/features/fixing-and-prioritizing-issues) 687 | - [Snyk Github Action Used in this Guide](https://github.com/snyk/actions) 688 | --------------------------------------------------------------------------------