├── CODEOWNERS ├── _docs ├── kubernetes-service.png └── k8s-service-architecture.png ├── .pre-commit-config.yaml ├── NOTICE ├── test ├── fixtures │ ├── canary_deployment_values.yaml │ ├── ingress_values_with_number_port.yaml │ ├── ingress_values_with_name_port.yaml │ ├── custom_resources_values.yaml │ ├── service_monitor_values.yaml │ ├── multiple_custom_resources_values.yaml │ └── canary_and_main_deployment_values.yaml ├── k8s_service_volume_secret_store_csi_template_test.go ├── k8s_service_custom_resources_template_test.go ├── sample_app_test_helpers.go ├── k8s_service_volume_test.go ├── k8s_service_canary_deployment_test.go ├── k8s_service_canary_deployment_template_test.go ├── k8s_service_service_monitor_template_test.go ├── k8s_service_custom_resources_example_test.go ├── README.md ├── k8s_service_volume_template_test.go ├── go.mod ├── k8s_service_service_account_template_test.go ├── k8s_service_lifecycle_hooks_template_test.go ├── k8s_service_template_render_helpers_for_test.go ├── k8s_service_vertical_pod_autoscaler_template_test.go ├── k8s_service_nginx_example_test.go ├── k8s_service_config_injection_example_test.go ├── k8s_service_example_test_helpers.go └── k8s_service_horizontal_pod_autoscaler_template_test.go ├── examples ├── k8s-service-config-injection │ ├── docker │ │ ├── Dockerfile │ │ └── app.rb │ ├── kubernetes │ │ └── config-map.yaml │ ├── extensions │ │ ├── secret_values.yaml │ │ └── config_map_values.yaml │ ├── values.yaml │ └── README.md ├── README.md └── k8s-service-nginx │ └── values.yaml ├── charts └── k8s-service │ ├── templates │ ├── deployment.yaml │ ├── customresources.yaml │ ├── canarydeployment.yaml │ ├── serviceaccount.yaml │ ├── servicemonitor.yaml │ ├── pdb.yaml │ ├── gmc.yaml │ ├── horizontalpodautoscaler.yaml │ ├── verticalpodautoscaler.yaml │ ├── service.yaml │ ├── NOTES.txt │ ├── _capabilities_helpers.tpl │ ├── _helpers.tpl │ └── ingress.yaml │ ├── .helmignore │ ├── Chart.yaml │ └── linter_values.yaml ├── .gitignore ├── .github ├── ISSUE_TEMPLATE │ ├── feature_request.md │ └── bug_report.md └── pull_request_template.md ├── core-concepts.md ├── CONTRIBUTING.md ├── GRUNTWORK_PHILOSOPHY.md ├── README.adoc ├── .circleci └── config.yml └── LICENSE /CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @Etiene @pras111gw @ryehowell 2 | -------------------------------------------------------------------------------- /_docs/kubernetes-service.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gruntwork-io/helm-kubernetes-services/HEAD/_docs/kubernetes-service.png -------------------------------------------------------------------------------- /_docs/k8s-service-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gruntwork-io/helm-kubernetes-services/HEAD/_docs/k8s-service-architecture.png -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | repos: 2 | - repo: https://github.com/gruntwork-io/pre-commit 3 | rev: v0.1.17 4 | hooks: 5 | - id: helmlint 6 | -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | helm-kubernetes-services 2 | Copyright 2019 Gruntwork, Inc. 
3 | 4 | This product includes software developed at Gruntwork (https://www.gruntwork.io/). 5 | -------------------------------------------------------------------------------- /test/fixtures/canary_deployment_values.yaml: -------------------------------------------------------------------------------- 1 | canary: 2 | enabled: true 3 | replicaCount: 1 4 | containerImage: 5 | repository: nginx 6 | tag: 1.16.0 7 | -------------------------------------------------------------------------------- /examples/k8s-service-config-injection/docker/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ruby:2.7.8 2 | 3 | RUN gem install sinatra json rackup 4 | 5 | COPY app.rb /usr/src/app.rb 6 | WORKDIR /usr/src 7 | 8 | EXPOSE 8080 9 | CMD ["ruby", "app.rb"] 10 | -------------------------------------------------------------------------------- /examples/k8s-service-config-injection/kubernetes/config-map.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: sample-sinatra-app-server-text 6 | data: 7 | server_text: "Hello! I was configured using a ConfigMap!" 8 | -------------------------------------------------------------------------------- /test/fixtures/ingress_values_with_number_port.yaml: -------------------------------------------------------------------------------- 1 | ingress: 2 | enabled: true 3 | path: "/app" 4 | servicePort: 80 5 | additionalPaths: 6 | - path: "/black-hole" 7 | serviceName: "black-hole" 8 | servicePort: 80 9 | -------------------------------------------------------------------------------- /test/fixtures/ingress_values_with_name_port.yaml: -------------------------------------------------------------------------------- 1 | ingress: 2 | enabled: true 3 | path: "/app" 4 | servicePort: "app" 5 | additionalPaths: 6 | - path: "/black-hole" 7 | serviceName: "black-hole" 8 | servicePort: "black-hole" 9 | -------------------------------------------------------------------------------- /test/fixtures/custom_resources_values.yaml: -------------------------------------------------------------------------------- 1 | customResources: 2 | enabled: true 3 | resources: 4 | custom_configmap: | 5 | apiVersion: v1 6 | kind: ConfigMap 7 | metadata: 8 | name: example 9 | data: 10 | key: value 11 | -------------------------------------------------------------------------------- /test/fixtures/service_monitor_values.yaml: -------------------------------------------------------------------------------- 1 | serviceMonitor: 2 | enabled: true 3 | namespace: monitoring 4 | endpoints: 5 | default: 6 | interval: 10s 7 | scrapeTimeout: 10s 8 | honorLabels: true 9 | path: /metrics 10 | port: http 11 | scheme: http 12 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | {{- /* 2 | The main Deployment Controller for the application being deployed. This resource manages the creation and replacement 3 | of the Pods backing your application. 
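The Pod template itself comes from the shared "k8s-service.deploymentSpec" helper, which canarydeployment.yaml reuses
with isCanary set to true.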
4 | */ -}} 5 | {{ include "k8s-service.deploymentSpec" (dict "Values" .Values "isCanary" false "Release" .Release "Chart" .Chart) }} 6 | -------------------------------------------------------------------------------- /examples/k8s-service-config-injection/docker/app.rb: -------------------------------------------------------------------------------- 1 | # A sample backend app built on top of Ruby and Sinatra. It returns JSON. 2 | 3 | require 'sinatra' 4 | require 'json' 5 | 6 | server_port = ENV['SERVER_PORT'] || 8080 7 | server_text = ENV['SERVER_TEXT'] || 'Hello from backend' 8 | 9 | set :port, server_port 10 | set :bind, '0.0.0.0' 11 | 12 | get '/' do 13 | content_type :json 14 | {:text => server_text}.to_json 15 | end 16 | -------------------------------------------------------------------------------- /charts/k8s-service/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /charts/k8s-service/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | name: k8s-service 3 | description: A Helm chart to package your application container for Kubernetes 4 | # This will be updated with the release tag in the CI/CD pipeline before publishing. This has to be a valid semver for 5 | # the linter to accept. 6 | version: 0.0.1-replace 7 | home: https://github.com/gruntwork-io/helm-kubernetes-services 8 | maintainers: 9 | - name: Gruntwork 10 | email: info@gruntwork.io 11 | url: https://gruntwork.io 12 | -------------------------------------------------------------------------------- /test/fixtures/multiple_custom_resources_values.yaml: -------------------------------------------------------------------------------- 1 | customResources: 2 | enabled: true 3 | resources: 4 | custom_configmap: | 5 | apiVersion: v1 6 | kind: ConfigMap 7 | metadata: 8 | name: example-config-map 9 | data: 10 | foo: bar 11 | custom_secret: | 12 | apiVersion: v1 13 | kind: Secret 14 | metadata: 15 | name: example-secret 16 | type: Opaque 17 | data: 18 | secret_text: dmFsdWU= 19 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/customresources.yaml: -------------------------------------------------------------------------------- 1 | {{- /* 2 | If the operator configures the customResources input variable, then create custom resources based on the given 3 | definitions. If a list of definitions is provided, separate them using the YAML separator so they can all be executed 4 | from the same template file. 
5 | */ -}}
6 | 
7 | {{- if .Values.customResources.enabled -}}
8 | {{- range $name, $value := .Values.customResources.resources }}
9 | ---
10 | {{ $value }}
11 | {{- end }}
12 | {{- end }}
13 | 
--------------------------------------------------------------------------------
/test/fixtures/canary_and_main_deployment_values.yaml:
--------------------------------------------------------------------------------
1 | containerImage:
2 |   repository: nginx
3 |   tag: 1.14.2
4 |   pullPolicy: IfNotPresent
5 | applicationName: canary-test
6 | replicaCount: 3
7 | canary:
8 |   enabled: true
9 |   replicaCount: 3
10 |   containerImage:
11 |     repository: nginx
12 |     tag: 1.16.0
13 | livenessProbe:
14 |   httpGet:
15 |     path: /
16 |     port: http
17 | readinessProbe:
18 |   httpGet:
19 |     path: /
20 |     port: http
21 | service:
22 |   type: NodePort
23 | 
--------------------------------------------------------------------------------
/charts/k8s-service/templates/canarydeployment.yaml:
--------------------------------------------------------------------------------
1 | {{- /*
2 | The Canary Deployment Controller for the application being deployed. This resource manages the creation and replacement
3 | of only the canary Pod(s) backing your application. It is intended to be used to test new release candidates
4 | and ensure they are free of issues prior to performing a full rollout.
5 | */ -}}
6 | 
7 | {{- if .Values.canary.enabled -}}
8 | {{ include "k8s-service.deploymentSpec" (dict "Values" .Values "isCanary" true "Release" .Release "Chart" .Chart) }}
9 | {{- end }}
10 | 
--------------------------------------------------------------------------------
/examples/README.md:
--------------------------------------------------------------------------------
1 | # Quickstart Guides and Examples
2 | 
3 | This folder contains various examples that demonstrate how to use the charts provided by this repository. Each example
4 | has a detailed `README` that provides a step-by-step guide on how to deploy the example and verify the deployed
5 | resources. Each example is meant to capture a common use case for the charts in this repo.
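
In practice, each example boils down to a `helm install` of the `k8s-service` chart with the example's values file. As
a rough sketch (the release name and namespace below are illustrative placeholders, and the paths assume you are
running from the root of this repo):

```sh
# Install the chart using the values from the nginx example. The target
# namespace must already exist, or you can add --create-namespace.
helm install my-nginx charts/k8s-service \
  --namespace my-namespace \
  -f examples/k8s-service-nginx/values.yaml
```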
6 | 7 | Here is the list of examples provided in this repo: 8 | 9 | - [Quickstart Guide: K8S Service Nginx Example](./k8s-service-nginx) 10 | - [Quickstart Guide: K8S Service Config Injection Example](./k8s-service-config-injection) 11 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Terraform files 2 | .terraform 3 | terraform.tfstate 4 | terraform.tfvars 5 | *.tfstate* 6 | 7 | # OS X files 8 | .history 9 | .DS_Store 10 | 11 | # lambda zip files 12 | lambda.zip 13 | 14 | # IntelliJ files 15 | .idea_modules 16 | *.iml 17 | *.iws 18 | *.ipr 19 | .idea/ 20 | build/ 21 | */build/ 22 | out/ 23 | 24 | # Go best practices dictate that libraries should not include the vendor directory 25 | vendor 26 | 27 | # Python stuff 28 | dist 29 | aws_auth_configmap_generator.* 30 | .python-version 31 | .tox 32 | __pycache__ 33 | *.pyc 34 | 35 | # Folder used to store temporary test data by Terratest 36 | .test-data 37 | 38 | # Generic temporary files 39 | /tmp 40 | 41 | # goenv file 42 | .go-version -------------------------------------------------------------------------------- /charts/k8s-service/templates/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.serviceAccount.create }} 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: {{ .Values.serviceAccount.name }} 6 | namespace: {{ $.Release.Namespace }} 7 | labels: 8 | app: {{ template "k8s-service.name" . }} 9 | {{- if .Values.serviceAccount.labels }} 10 | {{- toYaml .Values.serviceAccount.labels | nindent 4 }} 11 | {{- end }} 12 | {{- if .Values.serviceAccount.annotations }} 13 | annotations: 14 | {{ toYaml .Values.serviceAccount.annotations | indent 4 }} 15 | {{- end }} 16 | {{- if gt (len .Values.imagePullSecrets) 0 }} 17 | imagePullSecrets: 18 | {{- range $secretName := .Values.imagePullSecrets }} 19 | - name: {{ $secretName }} 20 | {{- end }} 21 | {{- end }} 22 | {{- end }} 23 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature request 3 | about: Submit a feature request for this repo. 4 | title: '' 5 | labels: enhancement 6 | assignees: '' 7 | 8 | --- 9 | 10 | 14 | 15 | **Describe the solution you'd like** 16 | A clear and concise description of what you want to happen. 17 | 18 | **Describe alternatives you've considered** 19 | A clear and concise description of any alternative solutions or features you've considered. 20 | 21 | **Additional context** 22 | Add any other context or screenshots about the feature request here. 23 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/servicemonitor.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.serviceMonitor.enabled }} 2 | apiVersion: monitoring.coreos.com/v1 3 | kind: ServiceMonitor 4 | metadata: 5 | name: {{ template "k8s-service.fullname" . }} 6 | {{- if .Values.serviceMonitor.namespace }} 7 | namespace: {{ .Values.serviceMonitor.namespace }} 8 | {{- end }} 9 | labels: 10 | chart: {{ template "k8s-service.chart" . }} 11 | app: {{ template "k8s-service.name" . 
}}
12 |     heritage: "{{ .Release.Service }}"
13 |     {{- if .Values.serviceMonitor.labels }}
14 |     {{- toYaml .Values.serviceMonitor.labels | nindent 4 }}
15 |     {{- end }}
16 | spec:
17 |   endpoints:
18 |     {{- values .Values.serviceMonitor.endpoints | toYaml | nindent 6 }}
19 |   selector:
20 |     matchLabels:
21 |       app.kubernetes.io/name: {{ template "k8s-service.name" . }}
22 | {{- end }}
23 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a bug report to help us improve.
4 | title: ''
5 | labels: bug
6 | assignees: ''
7 | 
8 | ---
9 | 
10 | 
14 | 
15 | **Describe the bug**
16 | A clear and concise description of what the bug is.
17 | 
18 | **To Reproduce**
19 | Steps to reproduce the behavior, including the relevant Terraform/Terragrunt/Packer version number and any code snippets and module inputs you used.
20 | 
21 | ```hcl
22 | // paste code snippets here
23 | ```
24 | 
25 | **Expected behavior**
26 | A clear and concise description of what you expected to happen.
27 | 
28 | **Nice to have**
29 | - [ ] Terminal output
30 | - [ ] Screenshots
31 | 
32 | **Additional context**
33 | Add any other context about the problem here.
34 | 
--------------------------------------------------------------------------------
/charts/k8s-service/templates/pdb.yaml:
--------------------------------------------------------------------------------
1 | {{- /*
2 | If there is a specification for the minimum number of Pods that should be available, create a PodDisruptionBudget.
3 | */ -}}
4 | {{- if .Values.minPodsAvailable -}}
5 | apiVersion: {{ include "gruntwork.pdb.apiVersion" . }}
6 | kind: PodDisruptionBudget
7 | metadata:
8 |   name: {{ include "k8s-service.fullname" . }}
9 |   labels:
10 |     gruntwork.io/app-name: {{ .Values.applicationName }}
11 |     # These labels are required by helm. You can read more about required labels in the chart best practices guide:
12 |     # https://docs.helm.sh/chart_best_practices/#standard-labels
13 |     app.kubernetes.io/name: {{ include "k8s-service.name" . }}
14 |     helm.sh/chart: {{ include "k8s-service.chart" . }}
15 |     app.kubernetes.io/instance: {{ .Release.Name }}
16 |     app.kubernetes.io/managed-by: {{ .Release.Service }}
17 | spec:
18 |   minAvailable: {{ int .Values.minPodsAvailable }}
19 |   selector:
20 |     matchLabels:
21 |       app.kubernetes.io/name: {{ include "k8s-service.name" . }}
22 |       app.kubernetes.io/instance: {{ .Release.Name }}
23 | {{- end }}
24 | 
--------------------------------------------------------------------------------
/.github/pull_request_template.md:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | ## Description
4 | 
5 | Fixes #000.
6 | 
7 | 
8 | 
9 | ## TODOs
10 | 
11 | Read the [Gruntwork contribution guidelines](https://gruntwork.notion.site/Gruntwork-Coding-Methodology-02fdcd6e4b004e818553684760bf691e).
12 | 
13 | - [ ] Update the docs.
14 | - [ ] Run the relevant tests successfully, including pre-commit checks.
15 | - [ ] Ensure any 3rd party code adheres to our [license policy](https://www.notion.so/gruntwork/Gruntwork-licenses-and-open-source-usage-policy-f7dece1f780341c7b69c1763f22b1378), or delete this line if it's not applicable.
16 | - [ ] Include release notes. If this PR is backward incompatible, include a migration guide.
17 | 
18 | ## Release Notes (draft)
19 | 
20 | 
21 | Added / Removed / Updated [X].
22 | 23 | ### Migration Guide 24 | 25 | 26 | -------------------------------------------------------------------------------- /examples/k8s-service-config-injection/extensions/secret_values.yaml: -------------------------------------------------------------------------------- 1 | #---------------------------------------------------------------------------------------------------------------------- 2 | # CHART PARAMETERS TO AUGMENT values.yaml WITH SECRET 3 | # This file declares additional input parameters to set the SERVER_TEXT environment variable in the application 4 | # container from a Secret. 5 | # 6 | # This values file is intended to be used with the values.yaml file: 7 | # helm install k8s-service -f values.yaml -f ./extensions/secret_values.yaml 8 | # 9 | # See the README for more details 10 | #---------------------------------------------------------------------------------------------------------------------- 11 | 12 | # secrets is a map that specifies which Secret resources should be injected into the application container. There 13 | # are two ways that a Secret can be injected: 14 | # - as an environment variable 15 | # - as a file on the file system 16 | # In this example, we inject the sample-sinatra-app-server-text Secret defined in kubernetes/secret.yaml to set 17 | # the SERVER_TEXT environment variable on the application container. 18 | secrets: 19 | sample-sinatra-app-server-text: 20 | as: environment 21 | items: 22 | server_text: 23 | envVarName: SERVER_TEXT 24 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/gmc.yaml: -------------------------------------------------------------------------------- 1 | {{- /* 2 | If the operator configures the google.managedCertificate input variable, then also create a ManagedCertificate resource 3 | that will provision a Google managed SSL certificate. 4 | */ -}} 5 | {{- if .Values.google.managedCertificate.enabled -}} 6 | {{- /* 7 | We declare some variables defined on the Values. These are reused in `with` and `range` blocks where the scoped variable 8 | (`.`) is rebound within the block. 9 | */ -}} 10 | {{- $domainName := .Values.google.managedCertificate.domainName -}} 11 | {{- $certificateName := .Values.google.managedCertificate.name -}} 12 | apiVersion: networking.gke.io/v1beta1 13 | kind: ManagedCertificate 14 | metadata: 15 | name: {{ $certificateName }} 16 | labels: 17 | gruntwork.io/app-name: {{ .Values.applicationName }} 18 | # These labels are required by helm. You can read more about required labels in the chart best practices guide: 19 | # https://docs.helm.sh/chart_best_practices/#standard-labels 20 | app.kubernetes.io/name: {{ include "k8s-service.name" . }} 21 | helm.sh/chart: {{ include "k8s-service.chart" . }} 22 | app.kubernetes.io/instance: {{ .Release.Name }} 23 | app.kubernetes.io/managed-by: {{ .Release.Service }} 24 | spec: 25 | domains: 26 | - {{ $domainName }} 27 | {{- end }} 28 | -------------------------------------------------------------------------------- /examples/k8s-service-config-injection/extensions/config_map_values.yaml: -------------------------------------------------------------------------------- 1 | #---------------------------------------------------------------------------------------------------------------------- 2 | # CHART PARAMETERS TO AUGMENT values.yaml WITH CONFIG MAP 3 | # This file declares additional input parameters to set the SERVER_TEXT environment variable in the application 4 | # container from a ConfigMap. 
5 | # 6 | # This values file is intended to be used with the values.yaml file: 7 | # helm install k8s-service -f values.yaml -f ./extensions/config_map_values.yaml 8 | # 9 | # See the README for more details 10 | #---------------------------------------------------------------------------------------------------------------------- 11 | 12 | # configMaps is a map that specifies which ConfigMap resources should be injected into the application container. There 13 | # are two ways that a ConfigMap can be injected: 14 | # - as an environment variable 15 | # - as a file on the file system 16 | # In this example, we inject the sample-sinatra-app-server-text ConfigMap defined in kubernetes/config-map.yaml to set 17 | # the SERVER_TEXT environment variable on the application container. 18 | configMaps: 19 | sample-sinatra-app-server-text: 20 | as: environment 21 | items: 22 | server_text: 23 | envVarName: SERVER_TEXT 24 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/horizontalpodautoscaler.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.horizontalPodAutoscaler.enabled }} 2 | apiVersion: {{ include "gruntwork.horizontalPodAutoscaler.apiVersion" . }} 3 | kind: HorizontalPodAutoscaler 4 | metadata: 5 | name: {{ include "k8s-service.fullname" . }} 6 | namespace: {{ $.Release.Namespace }} 7 | spec: 8 | scaleTargetRef: 9 | apiVersion: apps/v1 10 | kind: Deployment 11 | name: {{ include "k8s-service.fullname" . }} 12 | minReplicas: {{ .Values.horizontalPodAutoscaler.minReplicas }} 13 | maxReplicas: {{ .Values.horizontalPodAutoscaler.maxReplicas }} 14 | metrics: 15 | {{- if .Values.horizontalPodAutoscaler.avgCpuUtilization }} 16 | - type: Resource 17 | resource: 18 | name: cpu 19 | target: 20 | type: Utilization 21 | averageUtilization: {{ .Values.horizontalPodAutoscaler.avgCpuUtilization }} 22 | {{- end }} 23 | {{- if .Values.horizontalPodAutoscaler.avgMemoryUtilization }} 24 | - type: Resource 25 | resource: 26 | name: memory 27 | target: 28 | type: Utilization 29 | averageUtilization: {{ .Values.horizontalPodAutoscaler.avgMemoryUtilization }} 30 | {{- end }} 31 | {{- if .Values.horizontalPodAutoscaler.customMetrics }} 32 | {{ toYaml .Values.horizontalPodAutoscaler.customMetrics | indent 4 }} 33 | {{- end }} 34 | {{- if .Values.horizontalPodAutoscaler.behavior }} 35 | behavior: 36 | {{ tpl (toYaml .Values.horizontalPodAutoscaler.behavior) $ | indent 4 }} 37 | {{- end }} 38 | {{- end }} -------------------------------------------------------------------------------- /charts/k8s-service/templates/verticalpodautoscaler.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.verticalPodAutoscaler.enabled }} 2 | apiVersion: {{ include "gruntwork.verticalPodAutoscaler.apiVersion" . }} 3 | kind: VerticalPodAutoscaler 4 | metadata: 5 | name: {{ include "k8s-service.fullname" . }} 6 | namespace: {{ $.Release.Namespace }} 7 | spec: 8 | targetRef: 9 | apiVersion: apps/v1 10 | kind: Deployment 11 | name: {{ include "k8s-service.fullname" . }} 12 | updatePolicy: 13 | updateMode: {{ .Values.verticalPodAutoscaler.updateMode | quote }} 14 | minReplicas: {{ .Values.verticalPodAutoscaler.minReplicas }} 15 | resourcePolicy: 16 | containerPolicies: 17 | - containerName: {{ include "k8s-service.fullname" . 
}} 18 | {{- if .Values.verticalPodAutoscaler.mainContainerResourcePolicy.minAllowed }} 19 | minAllowed: 20 | {{- toYaml .Values.verticalPodAutoscaler.mainContainerResourcePolicy.minAllowed | nindent 8 }} 21 | {{- end }} 22 | 23 | {{- if .Values.verticalPodAutoscaler.mainContainerResourcePolicy.maxAllowed }} 24 | maxAllowed: 25 | {{- toYaml .Values.verticalPodAutoscaler.mainContainerResourcePolicy.maxAllowed | nindent 8 }} 26 | {{- end }} 27 | 28 | {{- if .Values.verticalPodAutoscaler.mainContainerResourcePolicy.controlledResources }} 29 | controlledResources: 30 | {{- toYaml .Values.verticalPodAutoscaler.mainContainerResourcePolicy.controlledResources | nindent 8 }} 31 | {{- end }} 32 | 33 | {{- if .Values.verticalPodAutoscaler.mainContainerResourcePolicy.controlledValues }} 34 | controlledValues: {{ .Values.verticalPodAutoscaler.mainContainerResourcePolicy.controlledValues }} 35 | {{- end }} 36 | 37 | {{- if .Values.verticalPodAutoscaler.extraResourcePolicy }} 38 | {{- toYaml .Values.verticalPodAutoscaler.extraResourcePolicy | nindent 4 }} 39 | {{- end }} 40 | {{- end }} 41 | -------------------------------------------------------------------------------- /test/k8s_service_volume_secret_store_csi_template_test.go: -------------------------------------------------------------------------------- 1 | //go:build all || tpl 2 | // +build all tpl 3 | 4 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 5 | // run just the template tests. See the test README for more information. 6 | 7 | package test 8 | 9 | import ( 10 | "testing" 11 | 12 | "github.com/stretchr/testify/assert" 13 | "github.com/stretchr/testify/require" 14 | ) 15 | 16 | func TestK8SServiceDeploymentCheckSecretStoreCSIBlock(t *testing.T) { 17 | t.Parallel() 18 | 19 | deployment := renderK8SServiceDeploymentWithSetValues( 20 | t, 21 | map[string]string{ 22 | "secrets.dbsettings.as": "csi", 23 | "secrets.dbsettings.mountPath": "/etc/db", 24 | "secrets.dbsettings.readOnly": "true", 25 | 26 | "secrets.dbsettings.csi.driver": "secrets-store.csi.k8s.io", 27 | "secrets.dbsettings.csi.secretProviderClass": "secret-provider-class", 28 | 29 | "secrets.dbsettings.items.host.envVarName": "DB_HOST", 30 | "secrets.dbsettings.items.port.envVarName": "DB_PORT", 31 | }, 32 | ) 33 | 34 | // Verify that there is only one container and only one volume 35 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 36 | require.Equal(t, len(renderedPodContainers), 1) 37 | renderedPodVolumes := deployment.Spec.Template.Spec.Volumes 38 | require.Equal(t, len(renderedPodVolumes), 1) 39 | podVolume := renderedPodVolumes[0] 40 | 41 | // Check that the pod volume has a correct name 42 | assert.Equal(t, podVolume.Name, "dbsettings-volume") 43 | 44 | // Check that the pod volume has CSI block 45 | assert.NotNil(t, podVolume.CSI) 46 | 47 | // Check that the pod volume has correct CSI driver and attributes 48 | assert.Equal(t, podVolume.CSI.Driver, "secrets-store.csi.k8s.io") 49 | assert.NotNil(t, podVolume.CSI.VolumeAttributes) 50 | assert.Equal(t, podVolume.CSI.VolumeAttributes, map[string]string{ 51 | "secretProviderClass": "secret-provider-class", 52 | }) 53 | } 54 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/service.yaml: -------------------------------------------------------------------------------- 1 | {{- /* 2 | If the operator configures the service input variable, then also create a Service resource that 
exposes the Pod as a 3 | stable endpoint that can be routed within the Kubernetes cluster. 4 | */ -}} 5 | {{- if .Values.service.enabled -}} 6 | apiVersion: v1 7 | kind: Service 8 | metadata: 9 | name: {{ include "k8s-service.fullname" . }} 10 | labels: 11 | # These labels are required by helm. You can read more about required labels in the chart best practices guide: 12 | # https://docs.helm.sh/chart_best_practices/#standard-labels 13 | app.kubernetes.io/name: {{ include "k8s-service.name" . }} 14 | helm.sh/chart: {{ include "k8s-service.chart" . }} 15 | app.kubernetes.io/instance: {{ .Release.Name }} 16 | app.kubernetes.io/managed-by: {{ .Release.Service }} 17 | {{- if .Values.service.annotations }} 18 | {{- with .Values.service.annotations }} 19 | annotations: 20 | {{ toYaml . | indent 4 }} 21 | {{- end }} 22 | {{- end }} 23 | spec: 24 | type: {{ .Values.service.type | default "ClusterIP" }} 25 | ports: 26 | {{- range $key, $value := .Values.service.ports }} 27 | - name: {{ $key }} 28 | {{ toYaml $value | indent 6 }} 29 | {{- end }} 30 | {{- if .Values.service.clusterIP }} 31 | clusterIP: {{ .Values.service.clusterIP }} 32 | {{- end }} 33 | selector: 34 | app.kubernetes.io/name: {{ include "k8s-service.name" . }} 35 | app.kubernetes.io/instance: {{ .Release.Name }} 36 | {{- if .Values.service.externalTrafficPolicy }} 37 | externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }} 38 | {{- end}} 39 | {{- if .Values.service.internalTrafficPolicy }} 40 | internalTrafficPolicy: {{ .Values.service.internalTrafficPolicy }} 41 | {{- end}} 42 | {{- if .Values.service.sessionAffinity }} 43 | sessionAffinity: {{ .Values.service.sessionAffinity }} 44 | {{- if .Values.service.sessionAffinityConfig }} 45 | {{- with .Values.service.sessionAffinityConfig }} 46 | sessionAffinityConfig: 47 | {{ toYaml . | indent 4 }} 48 | {{- end}} 49 | {{- end}} 50 | {{- end}} 51 | {{- end }} 52 | -------------------------------------------------------------------------------- /test/k8s_service_custom_resources_template_test.go: -------------------------------------------------------------------------------- 1 | // +build all tpl 2 | 3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 4 | // run just the template tests. See the test README for more information. 5 | 6 | package test 7 | 8 | import ( 9 | "path/filepath" 10 | "testing" 11 | 12 | "github.com/ghodss/yaml" 13 | "github.com/gruntwork-io/terratest/modules/helm" 14 | "github.com/stretchr/testify/require" 15 | corev1 "k8s.io/api/core/v1" 16 | ) 17 | 18 | // Test that setting customResources.enabled = false will cause the helm template to not render any custom resources 19 | func TestK8SServiceCustomResourcesEnabledFalseDoesNotCreateCustomResources(t *testing.T) { 20 | t.Parallel() 21 | 22 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 23 | require.NoError(t, err) 24 | 25 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 26 | // We then use SetValues to override all the defaults. 
27 |     options := &helm.Options{
28 |         ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")},
29 |         SetValues:   map[string]string{"customResources.enabled": "false"},
30 |     }
31 |     _, err = helm.RenderTemplateE(t, options, helmChartPath, "customresources", []string{"templates/customresources.yaml"})
32 |     require.Error(t, err)
33 | }
34 | 
35 | // Test that configuring a custom resource (a ConfigMap in the fixture) will render correctly
36 | func TestK8SServiceCustomResourcesEnabledCreatesCustomResources(t *testing.T) {
37 |     t.Parallel()
38 | 
39 |     helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service"))
40 |     require.NoError(t, err)
41 | 
42 |     // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined.
43 |     options := &helm.Options{
44 |         ValuesFiles: []string{
45 |             filepath.Join("..", "charts", "k8s-service", "linter_values.yaml"),
46 |             filepath.Join("fixtures", "custom_resources_values.yaml"),
47 |         },
48 |     }
49 |     out := helm.RenderTemplate(t, options, helmChartPath, "customresources", []string{"templates/customresources.yaml"})
50 | 
51 |     // We unmarshal the output into a ConfigMap object to validate that it rendered correctly
52 |     renderedConfigMap := corev1.ConfigMap{}
53 | 
54 |     require.NoError(t, yaml.Unmarshal([]byte(out), &renderedConfigMap))
55 | }
56 | 
--------------------------------------------------------------------------------
/test/sample_app_test_helpers.go:
--------------------------------------------------------------------------------
1 | // +build all integration
2 | 
3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently
4 | // run just the template tests. See the test README for more information.
5 | 
6 | package test
7 | 
8 | import (
9 |     "encoding/json"
10 |     "fmt"
11 |     "os"
12 |     "path/filepath"
13 |     "strings"
14 |     "testing"
15 | 
16 |     "github.com/gruntwork-io/terratest/modules/logger"
17 |     "github.com/gruntwork-io/terratest/modules/shell"
18 | )
19 | 
20 | // Expected response from the sample app is JSON
21 | type SampleAppResponse struct {
22 |     Text string `json:"text"`
23 | }
24 | 
25 | // createSampleAppDockerImage builds the sample app docker image into the minikube environment, tagging it using the
26 | // unique ID.
27 | func createSampleAppDockerImage(t *testing.T, uniqueID string, examplePath string) {
28 |     dockerWorkingDir := filepath.Join(examplePath, "docker")
29 |     cmdsToRun := []string{}
30 |     // Build the docker environment to talk to the minikube daemon.
31 |     // In CircleCI, we have to run minikube in none driver mode. In this mode, minikube runs directly on the machine
32 |     // using the existing docker, so no environment prep is necessary. For all other environments, this is necessary.
33 |     isInCircle := os.Getenv("CIRCLECI") == "true"
34 |     if !isInCircle {
35 |         cmdsToRun = append(cmdsToRun, "eval $(minikube docker-env)")
36 |     }
37 |     cmdsToRun = append(
38 |         cmdsToRun,
39 |         // Build the image and tag using the unique ID
40 |         fmt.Sprintf("docker build -t gruntwork-io/sample-sinatra-app:%s .", uniqueID),
41 |     )
42 |     cmd := shell.Command{
43 |         Command: "sh",
44 |         Args: []string{
45 |             "-c",
46 |             strings.Join(cmdsToRun, " && "),
47 |         },
48 |         WorkingDir: dockerWorkingDir,
49 |     }
50 |     shell.RunCommand(t, cmd)
51 | }
52 | 
53 | // sampleAppValidationFunctionGenerator will output a validation function that can be used with the pod verification
54 | // code in k8s_service_example_test_helpers.go.
55 | func sampleAppValidationFunctionGenerator(t *testing.T, expectedText string) func(int, string) bool {
56 |     return func(statusCode int, body string) bool {
57 |         if statusCode != 200 {
58 |             return false
59 |         }
60 | 
61 |         var resp SampleAppResponse
62 |         err := json.Unmarshal([]byte(body), &resp)
63 |         if err != nil {
64 |             logger.Logf(t, "Error unmarshalling sample app response: %s", err)
65 |             return false
66 |         }
67 |         return resp.Text == expectedText
68 |     }
69 | }
70 | 
--------------------------------------------------------------------------------
/charts/k8s-service/templates/NOTES.txt:
--------------------------------------------------------------------------------
1 | 
2 | Check the status of your Deployment by running this command:
3 | 
4 | kubectl get deployments --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "k8s-service.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
5 | 
6 | 
7 | List the related Pods with the following command:
8 | 
9 | kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "k8s-service.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
10 | 
11 | 
12 | Use the following command to view information about the Service:
13 | 
14 | kubectl get services --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "k8s-service.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
15 | 
16 | 
17 | {{ if .Values.containerPorts -}}
18 | {{- $serviceType := .Values.service.type | default "ClusterIP" -}}
19 | Get the application URL by running these commands:
20 | 
21 | {{- if .Values.ingress.enabled }}
22 | {{- range .Values.ingress.hosts }}
23 | http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
24 | {{- end }}
25 | {{- else if contains "NodePort" $serviceType }}
26 | export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "k8s-service.fullname" . }})
27 | export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
28 | echo http://$NODE_IP:$NODE_PORT
29 | {{- else if contains "LoadBalancer" $serviceType }}
30 | NOTE: It may take a few minutes for the LoadBalancer IP to be available.
31 | You can watch the status of it by running 'kubectl get svc -w {{ include "k8s-service.fullname" . }}'
32 | export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "k8s-service.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
33 | echo http://$SERVICE_IP:{{ .Values.service.port }}
34 | {{- else if contains "ClusterIP" $serviceType }}
35 | export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "k8s-service.name" .
}},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") 36 | {{- range $portName, $portSpec := .Values.containerPorts }} 37 | {{- if not $portSpec.disabled }} 38 | echo "Visit http://127.0.0.1:80{{ $portSpec.port }} to use your application container serving port {{ $portName }}" 39 | kubectl port-forward $POD_NAME 80{{ $portSpec.port }}:{{ $portSpec.port }} 40 | {{- end }} 41 | {{- end }} 42 | {{- end }} 43 | {{- end }} 44 | -------------------------------------------------------------------------------- /test/k8s_service_volume_test.go: -------------------------------------------------------------------------------- 1 | //go:build all || integration 2 | // +build all integration 3 | 4 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 5 | // run just the template tests. See the test README for more information. 6 | 7 | package test 8 | 9 | import ( 10 | "fmt" 11 | "path/filepath" 12 | "strings" 13 | "testing" 14 | 15 | "github.com/gruntwork-io/terratest/modules/helm" 16 | "github.com/gruntwork-io/terratest/modules/k8s" 17 | "github.com/gruntwork-io/terratest/modules/random" 18 | "github.com/stretchr/testify/require" 19 | metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" 20 | ) 21 | 22 | func TestK8SServiceScratchSpaceIsTmpfs(t *testing.T) { 23 | t.Parallel() 24 | 25 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 26 | require.NoError(t, err) 27 | 28 | // Create a test namespace to deploy resources into, to avoid colliding with other tests 29 | kubectlOptions := k8s.NewKubectlOptions("", "", "") 30 | uniqueID := random.UniqueId() 31 | testNamespace := fmt.Sprintf("k8s-service-scratch-%s", strings.ToLower(uniqueID)) 32 | k8s.CreateNamespace(t, kubectlOptions, testNamespace) 33 | defer k8s.DeleteNamespace(t, kubectlOptions, testNamespace) 34 | kubectlOptions.Namespace = testNamespace 35 | 36 | // Construct the values to run a pod with scratch space 37 | releaseName := fmt.Sprintf("k8s-service-scratch-%s", strings.ToLower(uniqueID)) 38 | appName := "scratch-tester" 39 | options := &helm.Options{ 40 | KubectlOptions: kubectlOptions, 41 | SetValues: map[string]string{ 42 | "applicationName": appName, 43 | "containerImage.repository": "alpine", 44 | "containerImage.tag": "3.13", 45 | "containerImage.pullPolicy": "IfNotPresent", 46 | "containerCommand[0]": "sh", 47 | "containerCommand[1]": "-c", 48 | "containerCommand[2]": "mount && sleep 9999999", 49 | "scratchPaths.scratch-mnt": "/mnt/scratch", 50 | }, 51 | } 52 | defer helm.Delete(t, options, releaseName, true) 53 | helm.Install(t, options, helmChartPath, releaseName) 54 | 55 | // Make sure all the pods are deployed and available 56 | verifyPodsCreatedSuccessfully(t, kubectlOptions, appName, releaseName, 1) 57 | 58 | // Get the logs from the pod to verify /mnt/scratch is mounted as tmpfs. 
59 | pods := k8s.ListPods(t, kubectlOptions, metav1.ListOptions{}) 60 | require.Equal(t, 1, len(pods)) 61 | pod := pods[0] 62 | logs, err := k8s.RunKubectlAndGetOutputE(t, kubectlOptions, "logs", pod.Name) 63 | require.NoError(t, err) 64 | require.Contains(t, logs, "tmpfs on /mnt/scratch type tmpfs (rw,relatime") 65 | } 66 | -------------------------------------------------------------------------------- /charts/k8s-service/linter_values.yaml: -------------------------------------------------------------------------------- 1 | #---------------------------------------------------------------------------------------------------------------------- 2 | # CHART PARAMETERS TO USE WITH HELM LINT 3 | # This file declares a complete configuration value for this chart, with required values defined so that it can be used 4 | # with helm lint to lint the chart. This should only specify the required values of the chart, and be combined with the 5 | # default values of the chart. 6 | # This is a YAML-formatted file. 7 | #---------------------------------------------------------------------------------------------------------------------- 8 | 9 | #---------------------------------------------------------------------------------------------------------------------- 10 | # REQUIRED VALUES 11 | # These values are expected to be defined and passed in by the operator when deploying this helm chart. 12 | #---------------------------------------------------------------------------------------------------------------------- 13 | 14 | # containerImage is a map that describes the container image that should be used to serve the application managed by 15 | # this chart. 16 | # The expected keys are: 17 | # - repository (string) (required) : The container image repository that should be used. 18 | # E.g `nginx` ; `gcr.io/kubernetes-helm/tiller` 19 | # - tag (string) (required) : The tag of the image (e.g `latest`) that should be used. We recommend using a 20 | # fixed tag or the SHA of the image. Avoid using the tags `latest`, `head`, 21 | # `canary`, or other tags that are designed to be “floating”. 22 | # - pullPolicy (string) : The image pull policy to employ. Determines when the image will be pulled in. See 23 | # the official Kubernetes docs for more info. If undefined, this will default to 24 | # `IfNotPresent`. 25 | # 26 | # The following example deploys the `nginx:stable` image with a `IfNotPresent` image pull policy, which indicates that 27 | # the image should only be pulled if it has not been pulled previously. 28 | # 29 | # EXAMPLE: 30 | # 31 | # containerImage: 32 | # repository: nginx 33 | # tag: stable 34 | # pullPolicy: IfNotPresent 35 | containerImage: 36 | repository: nginx 37 | tag: stable 38 | pullPolicy: IfNotPresent 39 | 40 | # applicationName is a string that names the application. This is used to label the pod and to name the main application 41 | # container in the pod spec. The label is keyed under "gruntwork.io/app-name" 42 | applicationName: "linter" 43 | -------------------------------------------------------------------------------- /test/k8s_service_canary_deployment_test.go: -------------------------------------------------------------------------------- 1 | // +build all integration 2 | 3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 4 | // run just the template tests. See the test README for more information. 
5 | 6 | package test 7 | 8 | import ( 9 | "fmt" 10 | "path/filepath" 11 | "strings" 12 | "testing" 13 | 14 | "github.com/gruntwork-io/terratest/modules/helm" 15 | "github.com/gruntwork-io/terratest/modules/k8s" 16 | "github.com/gruntwork-io/terratest/modules/random" 17 | "github.com/stretchr/testify/require" 18 | ) 19 | 20 | // Test that: 21 | // 22 | // 1. Setting the canary.enabled input variable results in canary pods being created 23 | // 2. The deployment succeeds without errors 24 | // 3. The canary pods are correctly labeled with gruntwork.io/deployment-type=canary 25 | // 4. Enabling canary deployment does not interfere with main deployment. Main pods come up cleanly as well 26 | // 5. As configured, the canary and main deployments are running separate image tags 27 | func TestK8SServiceCanaryDeployment(t *testing.T) { 28 | t.Parallel() 29 | 30 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 31 | require.NoError(t, err) 32 | 33 | // Create a test namespace to deploy resources into, to avoid colliding with other tests 34 | kubectlOptions := k8s.NewKubectlOptions("", "", "") 35 | uniqueID := random.UniqueId() 36 | testNamespace := fmt.Sprintf("k8s-service-canary-%s", strings.ToLower(uniqueID)) 37 | k8s.CreateNamespace(t, kubectlOptions, testNamespace) 38 | defer k8s.DeleteNamespace(t, kubectlOptions, testNamespace) 39 | kubectlOptions.Namespace = testNamespace 40 | 41 | // Use the values file in the example and deploy the chart in the test namespace 42 | // Set a random release name 43 | releaseName := fmt.Sprintf("k8s-service-canary-%s", strings.ToLower(uniqueID)) 44 | options := &helm.Options{ 45 | KubectlOptions: kubectlOptions, 46 | ValuesFiles: []string{ 47 | filepath.Join("..", "charts", "k8s-service", "linter_values.yaml"), 48 | filepath.Join("fixtures", "canary_and_main_deployment_values.yaml"), 49 | }, 50 | } 51 | 52 | defer helm.Delete(t, options, releaseName, true) 53 | helm.Install(t, options, helmChartPath, releaseName) 54 | 55 | // Uses label filters including gruntwork.io/deployment-type=canary to ensure the correct pods were created 56 | verifyCanaryAndMainPodsCreatedSuccessfully(t, kubectlOptions, "canary-test", releaseName) 57 | verifyAllPodsAvailable(t, kubectlOptions, "canary-test", releaseName, nginxValidationFunction) 58 | verifyDifferentContainerTagsForCanaryPods(t, kubectlOptions, releaseName) 59 | verifyServiceRoutesToMainAndCanaryPods(t, kubectlOptions, "canary-test", releaseName) 60 | } 61 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/_capabilities_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* Allow KubeVersion to be overridden. This is mostly used for testing purposes. 
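For example, the template tests set kubeVersionOverride to pin the rendered API versions to a specific Kubernetes release.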
*/}}
2 | {{- define "gruntwork.kubeVersion" -}}
3 | {{- default .Capabilities.KubeVersion.Version .Values.kubeVersionOverride -}}
4 | {{- end -}}
5 | 
6 | {{/* Get Ingress API Version */}}
7 | {{- define "gruntwork.ingress.apiVersion" -}}
8 | {{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" (include "gruntwork.kubeVersion" .)) -}}
9 | {{- print "networking.k8s.io/v1" -}}
10 | {{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}}
11 | {{- print "networking.k8s.io/v1beta1" -}}
12 | {{- else -}}
13 | {{- print "extensions/v1beta1" -}}
14 | {{- end -}}
15 | {{- end -}}
16 | 
17 | {{/* Ingress API version aware ingress backend */}}
18 | {{- define "gruntwork.ingress.backend" -}}
19 | {{/* NOTE: The leading whitespace is significant, as it is the specific yaml indentation for injection into the ingress resource. */}}
20 | {{- if eq .ingressAPIVersion "networking.k8s.io/v1" }}
21 | service:
22 |   name: {{ if .serviceName }}{{ .serviceName }}{{ else }}{{ .fullName }}{{ end }}
23 |   port:
24 | {{- if int .servicePort }}
25 |     number: {{ .servicePort }}
26 | {{- else }}
27 |     name: {{ .servicePort }}
28 | {{- end }}
29 | {{- else }}
30 | serviceName: {{ if .serviceName }}{{ .serviceName }}{{ else }}{{ .fullName }}{{ end }}
31 | servicePort: {{ .servicePort }}
32 | {{- end }}
33 | {{- end -}}
34 | 
35 | {{/* Get PodDisruptionBudget API Version */}}
36 | {{- define "gruntwork.pdb.apiVersion" -}}
37 | {{- if and (.Capabilities.APIVersions.Has "policy/v1") (semverCompare ">= 1.21-0" (include "gruntwork.kubeVersion" .)) -}}
38 | {{- print "policy/v1" -}}
39 | {{- else -}}
40 | {{- print "policy/v1beta1" -}}
41 | {{- end -}}
42 | {{- end -}}
43 | 
44 | {{/* Get HorizontalPodAutoscaler API Version */}}
45 | {{- define "gruntwork.horizontalPodAutoscaler.apiVersion" -}}
46 | {{- if and (.Capabilities.APIVersions.Has "autoscaling/v2") (semverCompare ">= 1.23-0" (include "gruntwork.kubeVersion" .)) -}}
47 | {{- print "autoscaling/v2" -}}
48 | {{- else -}}
49 | {{- print "autoscaling/v2beta2" -}}
50 | {{- end -}}
51 | {{- end -}}
52 | 
53 | {{/* Get VerticalPodAutoscaler API Version */}}
54 | {{- define "gruntwork.verticalPodAutoscaler.apiVersion" -}}
55 | {{- if and (.Capabilities.APIVersions.Has "autoscaling.k8s.io/v1") (semverCompare ">= 1.23-0" (include "gruntwork.kubeVersion" .)) -}}
56 | {{- print "autoscaling.k8s.io/v1" -}}
57 | {{- else -}}
58 | {{- print "autoscaling.k8s.io/v1beta2" -}}
59 | {{- end -}}
60 | {{- end -}}
61 | 
--------------------------------------------------------------------------------
/test/k8s_service_canary_deployment_template_test.go:
--------------------------------------------------------------------------------
1 | // +build all tpl
2 | 
3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently
4 | // run just the template tests. See the test README for more information.
5 | 6 | package test 7 | 8 | import ( 9 | "fmt" 10 | "path/filepath" 11 | "strings" 12 | "testing" 13 | 14 | "github.com/ghodss/yaml" 15 | "github.com/gruntwork-io/terratest/modules/helm" 16 | "github.com/stretchr/testify/assert" 17 | "github.com/stretchr/testify/require" 18 | ) 19 | 20 | // Test that setting canary.enabled = false will cause the helm template to not render the canary Deployment resource 21 | func TestK8SServiceCanaryEnabledFalseDoesNotCreateCanaryDeployment(t *testing.T) { 22 | t.Parallel() 23 | 24 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 25 | require.NoError(t, err) 26 | 27 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 28 | // We then use SetValues to override all the defaults. 29 | options := &helm.Options{ 30 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 31 | SetValues: map[string]string{"canary.enabled": "false"}, 32 | } 33 | _, err = helm.RenderTemplateE(t, options, helmChartPath, "canary", []string{"templates/canarydeployment.yaml"}) 34 | require.Error(t, err) 35 | } 36 | 37 | // Test that configuring a canary deployment will render to a manifest with a container that is clearly a canary 38 | func TestK8SServiceCanaryEnabledCreatesCanaryDeployment(t *testing.T) { 39 | t.Parallel() 40 | 41 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 42 | require.NoError(t, err) 43 | 44 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 45 | // We then use SetValues to override all the defaults. 46 | options := &helm.Options{ 47 | ValuesFiles: []string{ 48 | filepath.Join("..", "charts", "k8s-service", "linter_values.yaml"), 49 | filepath.Join("fixtures", "canary_deployment_values.yaml"), 50 | }, 51 | } 52 | out := helm.RenderTemplate(t, options, helmChartPath, "canary", []string{"templates/canarydeployment.yaml"}) 53 | 54 | // We take the output and render it to a map to validate it has created a canary deployment or not 55 | rendered := map[string]interface{}{} 56 | err = yaml.Unmarshal([]byte(out), &rendered) 57 | assert.NoError(t, err) 58 | assert.NotEqual(t, 0, len(rendered)) 59 | 60 | // Inspect the name of the rendered canary deployment 61 | nameField := rendered["spec"].(map[string]interface{})["template"].(map[string]interface{})["spec"].(map[string]interface{})["containers"].([]interface{})[0].(map[string]interface{})["name"] 62 | nameString := fmt.Sprintf("%v", nameField) 63 | 64 | // Ensure the name contains the string "-canary" 65 | assert.True(t, strings.Contains(string(nameString), "-canary")) 66 | } 67 | -------------------------------------------------------------------------------- /test/k8s_service_service_monitor_template_test.go: -------------------------------------------------------------------------------- 1 | // +build all tpl 2 | 3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 4 | // run just the template tests. See the test README for more information. 
5 | 
6 | package test
7 | 
8 | import (
9 |     "path/filepath"
10 |     "testing"
11 | 
12 |     "github.com/ghodss/yaml"
13 |     "github.com/gruntwork-io/terratest/modules/helm"
14 |     prometheus_operator_v1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
15 |     "github.com/stretchr/testify/assert"
16 |     "github.com/stretchr/testify/require"
17 | )
18 | 
19 | // Test that setting serviceMonitor.enabled = false will cause the helm template to not render the Service Monitor resource
20 | func TestK8SServiceServiceMonitorEnabledFalseDoesNotCreateServiceMonitor(t *testing.T) {
21 |     t.Parallel()
22 | 
23 |     helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service"))
24 |     require.NoError(t, err)
25 | 
26 |     // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined.
27 |     // We then use SetValues to override all the defaults.
28 |     options := &helm.Options{
29 |         ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")},
30 |         SetValues:   map[string]string{"serviceMonitor.enabled": "false"},
31 |     }
32 |     _, err = helm.RenderTemplateE(t, options, helmChartPath, "servicemonitor", []string{"templates/servicemonitor.yaml"})
33 |     require.Error(t, err)
34 | }
35 | 
36 | // Test that configuring a service monitor will render correctly to something that will be accepted by the Prometheus
37 | // operator
38 | func TestK8SServiceServiceMonitorEnabledCreatesServiceMonitor(t *testing.T) {
39 |     t.Parallel()
40 | 
41 |     helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service"))
42 |     require.NoError(t, err)
43 | 
44 |     // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined.
45 |     // We then use SetValues to override all the defaults.
46 |     options := &helm.Options{
47 |         ValuesFiles: []string{
48 |             filepath.Join("..", "charts", "k8s-service", "linter_values.yaml"),
49 |             filepath.Join("fixtures", "service_monitor_values.yaml"),
50 |         },
51 |     }
52 |     out := helm.RenderTemplate(t, options, helmChartPath, "servicemonitor", []string{"templates/servicemonitor.yaml"})
53 | 
54 |     // We unmarshal the output into a ServiceMonitor object so we can validate the rendered endpoint configuration
55 |     rendered := prometheus_operator_v1.ServiceMonitor{}
56 |     require.NoError(t, yaml.Unmarshal([]byte(out), &rendered))
57 |     require.Equal(t, 1, len(rendered.Spec.Endpoints))
58 | 
59 |     // check the default endpoint properties
60 |     defaultEndpoint := rendered.Spec.Endpoints[0]
61 |     assert.Equal(t, "10s", defaultEndpoint.Interval)
62 |     assert.Equal(t, "10s", defaultEndpoint.ScrapeTimeout)
63 |     assert.Equal(t, "/metrics", defaultEndpoint.Path)
64 |     assert.Equal(t, "http", defaultEndpoint.Port)
65 |     assert.Equal(t, "http", defaultEndpoint.Scheme)
66 | }
67 | 
--------------------------------------------------------------------------------
/test/k8s_service_custom_resources_example_test.go:
--------------------------------------------------------------------------------
1 | // +build all integration
2 | 
3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently
4 | // run just the template tests. See the test README for more information.
5 | 6 | package test 7 | 8 | import ( 9 | "context" 10 | "fmt" 11 | "path/filepath" 12 | "strings" 13 | "testing" 14 | 15 | "github.com/gruntwork-io/terratest/modules/helm" 16 | "github.com/gruntwork-io/terratest/modules/k8s" 17 | "github.com/gruntwork-io/terratest/modules/random" 18 | "github.com/stretchr/testify/require" 19 | corev1 "k8s.io/api/core/v1" 20 | metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" 21 | ) 22 | 23 | // Test the base case of the k8s-service-custom-resources example. 24 | // This test will: 25 | // 26 | // 1. Render a chart with multiple custom resources. 27 | // 2. Run `kubectl apply` with the rendered chart. 28 | // 3. Verify that the custom resources were deployed, by checking the k8s API. 29 | func TestK8SServiceCustomResourcesExample(t *testing.T) { 30 | t.Parallel() 31 | 32 | // Setup paths for testing the example chart 33 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 34 | require.NoError(t, err) 35 | 36 | // Create a test namespace to deploy resources into, to avoid colliding with other tests 37 | kubectlOptions := k8s.NewKubectlOptions("", "", "") 38 | uniqueID := random.UniqueId() 39 | testNamespace := fmt.Sprintf("k8s-service-custom-resources-%s", strings.ToLower(uniqueID)) 40 | k8s.CreateNamespace(t, kubectlOptions, testNamespace) 41 | defer k8s.DeleteNamespace(t, kubectlOptions, testNamespace) 42 | kubectlOptions.Namespace = testNamespace 43 | 44 | // Use the values file in the fixtures 45 | options := &helm.Options{ 46 | ValuesFiles: []string{ 47 | filepath.Join(helmChartPath, "linter_values.yaml"), 48 | filepath.Join("fixtures", "multiple_custom_resources_values.yaml"), 49 | }, 50 | } 51 | 52 | // Render the chart 53 | out := helm.RenderTemplate(t, options, helmChartPath, "customresources", []string{"templates/customresources.yaml"}) 54 | 55 | defer k8s.KubectlDeleteFromString(t, kubectlOptions, out) 56 | 57 | // Deploy a subset of the chart, just the ConfigMap and Secret 58 | k8s.KubectlApplyFromString(t, kubectlOptions, out) 59 | 60 | // Verify that ConfigMap and Secret got created, but do nothing with the output that is returned. 61 | // We only care that these functions do not error. 62 | k8s.GetSecret(t, kubectlOptions, "example-secret") 63 | getConfigMap(t, kubectlOptions, "example-config-map") 64 | } 65 | 66 | // getConfigMap should be implemented in Terratest 67 | func getConfigMap(t *testing.T, options *k8s.KubectlOptions, name string) corev1.ConfigMap { 68 | clientset, err := k8s.GetKubernetesClientFromOptionsE(t, options) 69 | require.NoError(t, err) 70 | 71 | configMap, err := clientset.CoreV1().ConfigMaps(options.Namespace).Get(context.Background(), name, metav1.GetOptions{}) 72 | require.NoError(t, err) 73 | require.NotNil(t, configMap) 74 | 75 | return *configMap 76 | } 77 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | 3 | {{/* 4 | Expand the name of the chart. 5 | */}} 6 | {{- define "k8s-service.name" -}} 7 | {{- .Values.applicationName | required "applicationName is required" | trunc 63 | trimSuffix "-" -}} 8 | {{- end -}} 9 | 10 | {{/* 11 | Create a default fully qualified app name. 12 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 13 | If release name contains chart name it will be used as a full name. 
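For example, a release named "prod" with applicationName "nginx" renders the fullname "prod-nginx", while a release
named "nginx-prod" renders just "nginx-prod", because the release name already contains the application name.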
14 | */}}
15 | {{- define "k8s-service.fullname" -}}
16 | {{- $name := required "applicationName is required" .Values.applicationName -}}
17 | {{- if .Values.fullnameOverride -}}
18 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
19 | {{- else if contains $name .Release.Name -}}
20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}}
21 | {{- else -}}
22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
23 | {{- end -}}
24 | {{- end -}}
25 | 
26 | {{/*
27 | Create chart name and version as used by the chart label.
28 | */}}
29 | {{- define "k8s-service.chart" -}}
30 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
31 | {{- end -}}
32 | 
33 | {{/*
34 | Convert octal to decimal (e.g. 644 => 420). For file permission modes, many people are more familiar with octal notation.
35 | However, due to yaml/json limitations, all the Kubernetes resources require file modes to be reported in decimal.
36 | */}}
37 | {{- define "k8s-service.fileModeOctalToDecimal" -}}
38 | {{- $digits := splitList "" (toString .) -}}
39 | 
40 | {{/* Make sure there are exactly 3 digits */}}
41 | {{- if ne (len $digits) 3 -}}
42 | {{- fail (printf "File mode octal expects exactly 3 digits: %s" .) -}}
43 | {{- end -}}
44 | 
45 | {{/* Go Templates do not support variable updating, so we simulate it using dictionaries */}}
46 | {{- $accumulator := dict "res" 0 -}}
47 | {{- range $idx, $digit := $digits -}}
48 | {{- $digitI := atoi $digit -}}
49 | 
50 | {{/* atoi from sprig swallows conversion errors, so we double check to make sure it is a valid conversion */}}
51 | {{- if and (eq $digitI 0) (ne $digit "0") -}}
52 | {{- fail (printf "Digit %d of %s is not a number: %s" $idx . $digit) -}}
53 | {{- end -}}
54 | 
55 | {{/* Make sure each digit is less than 8 */}}
56 | {{- if ge $digitI 8 -}}
57 | {{- fail (printf "%s is not a valid octal digit" $digit) -}}
58 | {{- end -}}
59 | 
60 | {{/* Since we don't have math.Pow, we hard code the place values */}}
61 | {{- if eq $idx 0 -}}
62 | {{/* 8^2 */}}
63 | {{- $_ := set $accumulator "res" (add (index $accumulator "res") (mul $digitI 64)) -}}
64 | {{- else if eq $idx 1 -}}
65 | {{/* 8^1 */}}
66 | {{- $_ := set $accumulator "res" (add (index $accumulator "res") (mul $digitI 8)) -}}
67 | {{- else -}}
68 | {{/* 8^0 */}}
69 | {{- $_ := set $accumulator "res" (add (index $accumulator "res") (mul $digitI 1)) -}}
70 | {{- end -}}
71 | {{- end -}}
72 | {{- "res" | index $accumulator | toString | printf -}}
73 | {{- end -}}
74 | 
--------------------------------------------------------------------------------
/core-concepts.md:
--------------------------------------------------------------------------------
1 | # Background
2 | 
3 | ## What is Kubernetes?
4 | 
5 | [Kubernetes](https://kubernetes.io) is an open source container management system for deploying, scaling, and managing
6 | containerized applications. Kubernetes was originally built by Google, based on its internal container management
7 | systems (Borg and Omega). Kubernetes provides a cloud-agnostic platform to deploy your containerized applications, with
8 | built-in support for common operational tasks such as replication, autoscaling, self-healing, and rolling deployments.
9 | 
10 | You can learn more about Kubernetes from [the official documentation](https://kubernetes.io/docs/tutorials/kubernetes-basics/).
11 | 
12 | 
13 | ## What is Helm?
14 | 
15 | [Helm](https://helm.sh/) is a package and module manager for Kubernetes that allows you to define, install, and manage
16 | Kubernetes applications as reusable packages called Charts. Helm also maintains a repository of official charts that
17 | includes popular applications such as Jenkins, MySQL, and Consul. Gruntwork uses Helm
18 | under the hood for the Kubernetes modules in this package.
19 | 
20 | The Helm client is a command line utility that is the primary interface for installing and managing Charts as releases
21 | in the Helm ecosystem. In addition to providing operational interfaces (e.g. install, upgrade, list, etc.), the client
22 | also provides utilities to support local development of Charts in the form of a scaffolding command and repository
23 | management (e.g. uploading a Chart).
24 | 
25 | 
26 | ## How do you run applications on Kubernetes?
27 | 
28 | There are three different ways you can schedule your application on a Kubernetes cluster. In all three, your application
29 | Docker containers are packaged as a [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/), the
30 | smallest deployable unit in Kubernetes, which represents one or more tightly coupled Docker containers. Containers
31 | in a Pod share certain elements of the kernel space that are traditionally isolated between containers, such as the
32 | network namespace (all containers in the Pod share an IP, and thus the available ports), the IPC namespace, and, in
33 | some cases, the PID namespace.
34 | 
35 | Pods are considered to be relatively ephemeral, disposable entities in the Kubernetes ecosystem. This is because Pods are
36 | designed to be mobile across the cluster so that you can design a scalable, fault-tolerant system. As such, Pods are
37 | generally scheduled with
38 | [Controllers](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#pods-and-controllers) that manage the
39 | lifecycle of a Pod. Using Controllers, you can schedule your Pods as:
40 | 
41 | - Jobs, which are Pods with a controller that will guarantee the Pods run to completion. See the [k8s-job
42 | chart](/charts/k8s-job) for more information.
43 | - Deployments behind a Service, which are Pods with a controller that implement lifecycle rules to provide replication
44 | and self-healing capabilities. Deployments will automatically reprovision failed Pods and migrate Pods off of failed
45 | nodes onto healthy ones. A Service provides a consistent endpoint that can be used to access the Deployment. See
46 | the [k8s-service chart](/charts/k8s-service) for more information.
47 | - Daemon Sets, which are Pods that are scheduled on all worker nodes. Daemon Sets schedule exactly one instance of a Pod
48 | on each node. Like Deployments, Daemon Sets will reprovision failed Pods and schedule new ones automatically on
49 | new nodes that join the cluster. See the [k8s-daemon-set chart](/charts/k8s-daemon-set) for more information.
--------------------------------------------------------------------------------
/test/README.md:
--------------------------------------------------------------------------------
1 | # Tests
2 | 
3 | This folder contains automated tests for this Module. All of the tests are written in [Go](https://golang.org/).
4 | 
5 | There are three tiers of tests for Helm charts:
6 | 
7 | - Template tests: These are tests designed to test the logic of the templates. These tests should run `helm template`
8 | with various input values and parse the yaml to validate any logic embedded in the templates (e.g. by reading them in
9 | using client-go). Since templates are not statically typed, the goal of these tests is to promote fast cycle time
10 | while catching some of the common bugs from typos or logic errors before getting to the slower integration tests.
11 | - Integration tests: These are tests that are designed to deploy the infrastructure and validate that it actually
12 | works as expected. If you consider the template tests to be syntactic tests, these are semantic tests that validate
13 | the behavior of the deployed resources.
14 | - Production tests (helm tests): These are tests that are run with the helm chart after it is deployed to validate that
15 | the chart installed and deployed correctly. These should be smoke tests with minimal validation to ensure that the common
16 | operator errors during deployment are captured as early as possible. Note that because these tests run even on a
17 | production system, they should be passive and not destructive.
18 | 
19 | This folder contains the "template tests" and "integration tests". Both types of tests use a helper library called
20 | [Terratest](https://github.com/gruntwork-io/terratest). While "template tests" do not need any infrastructure, the
21 | "integration tests" deploy the charts to a Kubernetes cluster.
22 | 
23 | 
24 | 
25 | ## WARNING WARNING WARNING
26 | 
27 | **Note #1**: Many of these tests create real resources in a Kubernetes cluster and then try to clean those resources up at
28 | the end of a test run. That means these tests may potentially pollute your Kubernetes cluster with unnecessary
29 | resources! When adding tests, please be considerate of the resources you create and take extra care to clean everything
30 | up when you're done!
31 | 
32 | **Note #2**: Never forcefully shut the tests down (e.g. by hitting `CTRL + C`) or the cleanup tasks won't run!
33 | 
34 | **Note #3**: We set `-timeout 60m` on all tests not because they necessarily take that long, but because Go has a
35 | default test timeout of 10 minutes, after which it forcefully kills the tests with a `SIGQUIT`, preventing the cleanup
36 | tasks from running. Therefore, we set an overly long timeout to make sure all tests have enough time to finish and
37 | clean up.
38 | 
39 | 
40 | 
41 | ## Running the tests
42 | 
43 | ### Prerequisites
44 | 
45 | - Install the latest version of [Go](https://golang.org/).
46 | - Set up a Kubernetes cluster. We recommend using a local version for fast iteration:
47 |     - Linux: [minikube](https://github.com/kubernetes/minikube)
48 |     - macOS: [Kubernetes on Docker For Mac](https://docs.docker.com/docker-for-mac/kubernetes/)
49 |     - Windows: [Kubernetes on Docker For Windows](https://docs.docker.com/docker-for-windows/kubernetes/)
50 | - Install and set up [helm](https://docs.helm.sh/using_helm/#installing-helm)
51 | 
52 | 
53 | ### Run all the tests
54 | 
55 | We use build tags to categorize the tests.
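Each test file opts in to its categories with a build constraint at the top of the file (e.g. `//go:build all || tpl`).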
The tags are: 56 | 57 | - `all`: Run all the tests 58 | - `tpl`: Run the template tests 59 | - `integration`: Run the integration tests 60 | 61 | You can run all the tests by passing the `all` build tag: 62 | 63 | ```bash 64 | cd test 65 | go test -v -tags all -timeout 60m 66 | ``` 67 | 68 | ### Run a specific test 69 | 70 | To run a specific test called `TestFoo`: 71 | 72 | ```bash 73 | cd test 74 | go test -v -timeout 60m -tags all -run TestFoo 75 | ``` 76 | 77 | ### Run just the template tests 78 | 79 | Since the integration tests require infrastructure, they can be considerably slower than the unit tests. As such, to 80 | promote fast test cycles, you may want to test just the template tests. To do so, you can pass the `tpl` build tag: 81 | 82 | ```bash 83 | cd test 84 | go test -v -tags tpl 85 | ``` 86 | -------------------------------------------------------------------------------- /charts/k8s-service/templates/ingress.yaml: -------------------------------------------------------------------------------- 1 | {{- /* 2 | If the operator configures the ingress input variable, then also create an Ingress resource that will route to the 3 | service. Note that Ingress can only route to a Service, so the operator must also configure a Service. 4 | */ -}} 5 | {{- if .Values.ingress.enabled -}} 6 | 7 | {{- /* 8 | We declare some variables defined on the Values. These are reused in `with` and `range` blocks where the scoped variable 9 | (`.`) is rebound within the block. 10 | */ -}} 11 | {{- $fullName := include "k8s-service.fullname" . -}} 12 | {{- $ingressAPIVersion := include "gruntwork.ingress.apiVersion" . -}} 13 | {{- $ingressPath := .Values.ingress.path -}} 14 | {{- $ingressPathType := .Values.ingress.pathType -}} 15 | {{- $additionalPathsHigherPriority := .Values.ingress.additionalPathsHigherPriority }} 16 | {{- $additionalPaths := .Values.ingress.additionalPaths }} 17 | {{- $servicePort := .Values.ingress.servicePort -}} 18 | {{- $baseVarsForBackend := dict "fullName" $fullName "ingressAPIVersion" $ingressAPIVersion -}} 19 | 20 | apiVersion: {{ $ingressAPIVersion }} 21 | kind: Ingress 22 | metadata: 23 | name: {{ $fullName }} 24 | labels: 25 | gruntwork.io/app-name: {{ .Values.applicationName }} 26 | # These labels are required by helm. You can read more about required labels in the chart best practices guide: 27 | # https://docs.helm.sh/chart_best_practices/#standard-labels 28 | app.kubernetes.io/name: {{ include "k8s-service.name" . }} 29 | helm.sh/chart: {{ include "k8s-service.chart" . }} 30 | app.kubernetes.io/instance: {{ .Release.Name }} 31 | app.kubernetes.io/managed-by: {{ .Release.Service }} 32 | {{- if .Values.ingress.annotations }} 33 | {{- with .Values.ingress.annotations }} 34 | annotations: 35 | {{ toYaml . | indent 4 }} 36 | {{- end }} 37 | {{- end }} 38 | spec: 39 | {{- if .Values.ingress.ingressClassName }} 40 | ingressClassName: {{ .Values.ingress.ingressClassName }} 41 | {{- end }} 42 | {{- if .Values.ingress.tls }} 43 | {{- with .Values.ingress.tls }} 44 | tls: 45 | {{ toYaml . | indent 4}} 46 | {{- end }} 47 | {{- end }} 48 | rules: 49 | {{- if .Values.ingress.hosts }} 50 | {{- range .Values.ingress.hosts }} 51 | - host: {{ . | quote }} 52 | http: 53 | paths: 54 | {{- range $additionalPathsHigherPriority }} 55 | - path: {{ .path }} 56 | {{- if and (eq $ingressAPIVersion "networking.k8s.io/v1") .pathType }} 57 | pathType: {{ .pathType }} 58 | {{- end }} 59 | backend: 60 | {{- include "gruntwork.ingress.backend" (merge . 
$baseVarsForBackend) }} 61 | {{- end }} 62 | - path: {{ $ingressPath }} 63 | {{- if and (eq $ingressAPIVersion "networking.k8s.io/v1") $ingressPathType }} 64 | pathType: {{ $ingressPathType }} 65 | {{- end }} 66 | backend: 67 | {{- include "gruntwork.ingress.backend" (dict "serviceName" $fullName "servicePort" $servicePort | merge $baseVarsForBackend) }} 68 | {{- range $additionalPaths }} 69 | - path: {{ .path }} 70 | {{- if and (eq $ingressAPIVersion "networking.k8s.io/v1") .pathType }} 71 | pathType: {{ .pathType }} 72 | {{- end }} 73 | backend: 74 | {{- include "gruntwork.ingress.backend" (merge . $baseVarsForBackend) }} 75 | {{- end }} 76 | {{- end }} 77 | {{- else }} 78 | - http: 79 | paths: 80 | {{- range $additionalPathsHigherPriority }} 81 | - path: {{ .path }} 82 | {{- if and (eq $ingressAPIVersion "networking.k8s.io/v1") .pathType }} 83 | pathType: {{ .pathType }} 84 | {{- end }} 85 | backend: 86 | {{- include "gruntwork.ingress.backend" (merge . $baseVarsForBackend) }} 87 | {{- end }} 88 | - path: {{ $ingressPath }} 89 | {{- if and (eq $ingressAPIVersion "networking.k8s.io/v1") $ingressPathType }} 90 | pathType: {{ $ingressPathType }} 91 | {{- end }} 92 | backend: 93 | {{- include "gruntwork.ingress.backend" (dict "serviceName" $fullName "servicePort" $servicePort | merge $baseVarsForBackend) }} 94 | {{- range $additionalPaths }} 95 | - path: {{ .path }} 96 | {{- if and (eq $ingressAPIVersion "networking.k8s.io/v1") .pathType }} 97 | pathType: {{ .pathType }} 98 | {{- end }} 99 | backend: 100 | {{- include "gruntwork.ingress.backend" (merge . $baseVarsForBackend) }} 101 | {{- end }} 102 | 103 | {{- end }} 104 | {{- end }} 105 | -------------------------------------------------------------------------------- /test/k8s_service_volume_template_test.go: -------------------------------------------------------------------------------- 1 | //go:build all || tpl 2 | // +build all tpl 3 | 4 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 5 | // run just the template tests. See the test README for more information. 
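//
// As a rough manual equivalent, you can inspect what these tests render by running the chart through Helm directly,
// e.g. (a sketch using standard Helm 3 flags; the release name is arbitrary):
//
//	helm template my-release ../charts/k8s-service \
//	    -f ../charts/k8s-service/linter_values.yaml \
//	    --set scratchPaths.scratch=/mnt/scratch \
//	    --show-only templates/deployment.yaml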
6 | 7 | package test 8 | 9 | import ( 10 | "fmt" 11 | "strconv" 12 | "testing" 13 | 14 | "github.com/stretchr/testify/assert" 15 | "github.com/stretchr/testify/require" 16 | corev1 "k8s.io/api/core/v1" 17 | ) 18 | 19 | func TestK8SServiceDeploymentAddingScratchVolumes(t *testing.T) { 20 | t.Parallel() 21 | 22 | volName := "scratch" 23 | volMountPath := "/mnt/scratch" 24 | 25 | deployment := renderK8SServiceDeploymentWithSetValues( 26 | t, 27 | map[string]string{ 28 | fmt.Sprintf("scratchPaths.%s", volName): volMountPath, 29 | }, 30 | ) 31 | 32 | // Verify that there is only one container 33 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 34 | require.Equal(t, len(renderedPodContainers), 1) 35 | podContainer := renderedPodContainers[0] 36 | 37 | // Verify that a mount has been created for the scratch path 38 | mounts := podContainer.VolumeMounts 39 | assert.Equal(t, len(mounts), 1) 40 | mount := mounts[0] 41 | assert.Equal(t, volName, mount.Name) 42 | assert.Equal(t, volMountPath, mount.MountPath) 43 | 44 | // Verify that a volume has been declared for the scratch path and is using tmpfs 45 | volumes := deployment.Spec.Template.Spec.Volumes 46 | assert.Equal(t, len(volumes), 1) 47 | volume := volumes[0] 48 | assert.Equal(t, volName, volume.Name) 49 | assert.Equal(t, corev1.StorageMediumMemory, volume.EmptyDir.Medium) 50 | 51 | } 52 | 53 | func TestK8SServiceDeploymentAddingPersistentVolumes(t *testing.T) { 54 | t.Parallel() 55 | 56 | volName := "pv-1" 57 | volClaim := "claim-1" 58 | volMountPath := "/mnt/path/1" 59 | 60 | deployment := renderK8SServiceDeploymentWithSetValues( 61 | t, 62 | map[string]string{ 63 | "persistentVolumes.pv-1.claimName": volClaim, 64 | "persistentVolumes.pv-1.mountPath": volMountPath, 65 | }, 66 | ) 67 | 68 | // Verify that there is only one container 69 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 70 | require.Equal(t, len(renderedPodContainers), 1) 71 | 72 | // Verify that a mount has been created for the PV 73 | mounts := renderedPodContainers[0].VolumeMounts 74 | assert.Equal(t, len(mounts), 1) 75 | mount := mounts[0] 76 | assert.Equal(t, volName, mount.Name) 77 | assert.Equal(t, volMountPath, mount.MountPath) 78 | 79 | // Verify that a volume has been declared for the PV 80 | volumes := deployment.Spec.Template.Spec.Volumes 81 | assert.Equal(t, len(volumes), 1) 82 | volume := volumes[0] 83 | assert.Equal(t, volName, volume.Name) 84 | assert.Equal(t, volClaim, volume.PersistentVolumeClaim.ClaimName) 85 | } 86 | 87 | func TestK8SServiceDeploymentAddingEmptyDirs(t *testing.T) { 88 | t.Parallel() 89 | 90 | volName := "empty-dir" 91 | volMountPath := "/mnt/empty" 92 | 93 | deployment := renderK8SServiceDeploymentWithSetValues( 94 | t, 95 | map[string]string{ 96 | fmt.Sprintf("emptyDirs.%s", volName): volMountPath, 97 | }, 98 | ) 99 | 100 | // Verify that there is only one container 101 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 102 | require.Equal(t, len(renderedPodContainers), 1) 103 | podContainer := renderedPodContainers[0] 104 | 105 | // Verify that a mount has been created for the emptyDir 106 | mounts := podContainer.VolumeMounts 107 | assert.Equal(t, len(mounts), 1) 108 | mount := mounts[0] 109 | assert.Equal(t, volName, mount.Name) 110 | assert.Equal(t, volMountPath, mount.MountPath) 111 | 112 | // Verify that a volume has been declared for the emptyDir 113 | volumes := deployment.Spec.Template.Spec.Volumes 114 | assert.Equal(t, len(volumes), 1) 115 | volume := volumes[0] 116 | 
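	// Unlike scratchPaths (which render an emptyDir with the Memory medium, as verified above), a plain emptyDir is
	// expected to render with no medium set, so the EmptyDir source should be its zero value here.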
assert.Equal(t, volName, volume.Name)
117 | 	assert.Empty(t, volume.EmptyDir)
118 | }
119 | 
120 | func TestK8SServiceDeploymentAddingTerminationGracePeriod(t *testing.T) {
121 | 	t.Parallel()
122 | 
123 | 	gracePeriod := "30"
124 | 
125 | 	deployment := renderK8SServiceDeploymentWithSetValues(
126 | 		t,
127 | 		map[string]string{
128 | 			"terminationGracePeriodSeconds": gracePeriod,
129 | 		},
130 | 	)
131 | 
132 | 	// Verify that there is only one container
133 | 	renderedPodContainers := deployment.Spec.Template.Spec.Containers
134 | 	require.Equal(t, len(renderedPodContainers), 1)
135 | 
136 | 	expectedGracePeriodInt64, err := strconv.ParseInt(gracePeriod, 10, 64)
137 | 	require.NoError(t, err)
138 | 
139 | 	// Verify that the termination grace period has been set on the pod spec
140 | 	renderedTerminationGracePeriodSeconds := deployment.Spec.Template.Spec.TerminationGracePeriodSeconds
141 | 	require.Equal(t, expectedGracePeriodInt64, *renderedTerminationGracePeriodSeconds)
142 | }
--------------------------------------------------------------------------------
/examples/k8s-service-nginx/values.yaml:
--------------------------------------------------------------------------------
1 | #----------------------------------------------------------------------------------------------------------------------
2 | # CHART PARAMETERS FOR NGINX EXAMPLE
3 | # This file declares the required values for the k8s-service helm chart to deploy nginx.
4 | # This is a YAML-formatted file.
5 | #----------------------------------------------------------------------------------------------------------------------
6 | 
7 | #----------------------------------------------------------------------------------------------------------------------
8 | # REQUIRED VALUES OF CHART
9 | # These are the required values defined by the k8s-service chart. Here we will set them to deploy an nginx container.
10 | #----------------------------------------------------------------------------------------------------------------------
11 | 
12 | # containerImage is a map that describes the container image that should be used to serve the application managed by
13 | # the k8s-service chart.
14 | # The expected keys are:
15 | #   - repository (string) (required) : The container image repository that should be used.
16 | #                                      E.g. `nginx`; `gcr.io/kubernetes-helm/tiller`
17 | #   - tag (string) (required)        : The tag of the image (e.g. `latest`) that should be used. We recommend using a
18 | #                                      fixed tag or the SHA of the image. Avoid using the tags `latest`, `head`,
19 | #                                      `canary`, or other tags that are designed to be “floating”.
20 | #   - pullPolicy (string)            : The image pull policy to employ. Determines when the image will be pulled. See
21 | #                                      the official Kubernetes docs for more info. If undefined, this will default to
22 | #                                      `IfNotPresent`.
23 | #
24 | # The following example deploys the `nginx:1.14.2` image with an `IfNotPresent` image pull policy, which indicates that
25 | # the image should only be pulled if it has not been pulled previously. We deploy a specific, locked tag so that we
26 | # don't inadvertently upgrade nginx during a deployment that changes some other unrelated input value.
27 | containerImage:
28 |   repository: nginx
29 |   tag: 1.14.2
30 |   pullPolicy: IfNotPresent
31 | 
32 | # applicationName is a string that names the application. This is used to label the pod and to name the main application
33 | # container in the pod spec. Here we use nginx as the name since we are deploying nginx.
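# NOTE: the chart's fullname helper combines this value with the release name and truncates the result to 63
# characters to stay within the Kubernetes DNS label limit.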
34 | applicationName: "nginx"
35 | 
36 | #----------------------------------------------------------------------------------------------------------------------
37 | # OVERRIDE OPTIONAL VALUES
38 | # These values have defaults in the k8s-service chart, but we override a few of them for the purposes of this demo.
39 | #----------------------------------------------------------------------------------------------------------------------
40 | 
41 | # replicaCount can be used to configure the number of replica pods that should be deployed and maintained at any given
42 | # point in time. Here we set this to 3 to signal Kubernetes (via the Deployment controller) to maintain 3 pods.
43 | replicaCount: 3
44 | 
45 | # livenessProbe is a map that specifies the liveness probe of the main application container. Liveness probes indicate
46 | # when a container has reached a fatal state where it needs to be restarted to recover. When the liveness probe fails,
47 | # the container is automatically recreated. You can read more about container liveness probes in the official docs:
48 | # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
49 | # NOTE: This variable is injected directly into the container spec.
50 | #
51 | # The following example specifies an HTTP GET-based liveness probe that issues an HTTP GET request to
52 | # the port bound to the name `http` (port 80 in the default settings) on the path `/`.
53 | livenessProbe:
54 |   httpGet:
55 |     path: /
56 |     port: http
57 | 
58 | # readinessProbe is a map that specifies the readiness probe of the main application container. Readiness probes
59 | # indicate when a container is unable to serve traffic. When the readiness probe fails, the container is removed from
60 | # the pool of containers backing the `Service`. You can read more about readiness probes in the official docs:
61 | # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
62 | # NOTE: This variable is injected directly into the container spec.
63 | #
64 | # The following example specifies an HTTP GET-based readiness probe that issues an HTTP GET request to
65 | # the port bound to the name `http` (see description on `containerPorts`) on the path `/`.
66 | readinessProbe:
67 |   httpGet:
68 |     path: /
69 |     port: http
70 | 
71 | # We override the service type to use NodePort so that we can access the Service from outside the Kubernetes cluster.
72 | service:
73 |   type: NodePort
74 | 
--------------------------------------------------------------------------------
/examples/k8s-service-config-injection/values.yaml:
--------------------------------------------------------------------------------
1 | #----------------------------------------------------------------------------------------------------------------------
2 | # CHART PARAMETERS FOR CONFIG INJECTION EXAMPLE
3 | # This file declares the required values for the k8s-service helm chart to deploy the sample ruby app in the docker
4 | # folder.
5 | # This is a YAML-formatted file.
6 | #----------------------------------------------------------------------------------------------------------------------
7 | 
8 | #----------------------------------------------------------------------------------------------------------------------
9 | # REQUIRED VALUES OF CHART
10 | # These are the required values defined by the k8s-service chart. Here we will set them to deploy the container in the
11 | # docker folder, assuming it was tagged as "sample-sinatra-app".
12 | #----------------------------------------------------------------------------------------------------------------------
13 | 
14 | # containerImage is a map that describes the container image that should be used to serve the application managed by
15 | # the k8s-service chart.
16 | # The expected keys are:
17 | #   - repository (string) (required) : The container image repository that should be used.
18 | #                                      E.g. `nginx`; `gcr.io/kubernetes-helm/tiller`
19 | #   - tag (string) (required)        : The tag of the image (e.g. `latest`) that should be used. We recommend using a
20 | #                                      fixed tag or the SHA of the image. Avoid using the tags `latest`, `head`,
21 | #                                      `canary`, or other tags that are designed to be “floating”.
22 | #   - pullPolicy (string)            : The image pull policy to employ. Determines when the image will be pulled. See
23 | #                                      the official Kubernetes docs for more info. If undefined, this will default to
24 | #                                      `IfNotPresent`.
25 | #
26 | # The following example deploys the `sample-sinatra-app:latest` image with an `IfNotPresent` image pull policy, which
27 | # indicates that the image should only be pulled if it has not been pulled previously.
28 | containerImage:
29 |   repository: gruntwork-io/sample-sinatra-app
30 |   tag: latest
31 |   pullPolicy: IfNotPresent
32 | 
33 | # applicationName is a string that names the application. This is used to label the pod and to name the main application
34 | # container in the pod spec.
35 | applicationName: "sample-sinatra-app"
36 | 
37 | #----------------------------------------------------------------------------------------------------------------------
38 | # OVERRIDE OPTIONAL VALUES
39 | # These values have defaults in the k8s-service chart, but we override a few of them for the purposes of this demo.
40 | #----------------------------------------------------------------------------------------------------------------------
41 | 
42 | # envVars can be used to configure the environment variables that should be set in the application container. Here,
43 | # we override the port that the application will listen on to use port 80, which is what the k8s-service chart assumes.
44 | envVars:
45 |   SERVER_PORT: 80
46 | 
47 | # replicaCount can be used to configure the number of replica pods that should be deployed and maintained at any given
48 | # point in time. Here we set this to 3 to signal Kubernetes (via the Deployment controller) to maintain 3 pods.
49 | replicaCount: 3
50 | 
51 | # livenessProbe is a map that specifies the liveness probe of the main application container. Liveness probes indicate
52 | # when a container has reached a fatal state where it needs to be restarted to recover. When the liveness probe fails,
53 | # the container is automatically recreated. You can read more about container liveness probes in the official docs:
54 | # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
55 | # NOTE: This variable is injected directly into the container spec.
56 | #
57 | # The following example specifies an HTTP GET-based liveness probe that issues an HTTP GET request to
58 | # the port bound to the name `http` (port 80 in the default settings) on the path `/`.
59 | livenessProbe:
60 |   httpGet:
61 |     path: /
62 |     port: http
63 | 
64 | # readinessProbe is a map that specifies the readiness probe of the main application container. Readiness probes
65 | # indicate when a container is unable to serve traffic. When the readiness probe fails, the container is removed from
66 | # the pool of containers backing the `Service`. You can read more about readiness probes in the official docs:
67 | # https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
68 | # NOTE: This variable is injected directly into the container spec.
69 | #
70 | # The following example specifies an HTTP GET-based readiness probe that issues an HTTP GET request to
71 | # the port bound to the name `http` (see description on `containerPorts`) on the path `/`.
72 | readinessProbe:
73 |   httpGet:
74 |     path: /
75 |     port: http
76 | 
77 | # We override the service type to use NodePort so that we can access the Service from outside the Kubernetes cluster.
78 | service:
79 |   type: NodePort
80 | 
--------------------------------------------------------------------------------
/test/go.mod:
--------------------------------------------------------------------------------
1 | module github.com/gruntwork-io/helm-kubernetes-services/test
2 | 
3 | go 1.18
4 | 
5 | require (
6 | 	github.com/GoogleCloudPlatform/gke-managed-certs v1.0.5
7 | 	github.com/ghodss/yaml v1.0.0
8 | 	github.com/gruntwork-io/terratest v0.41.9
9 | 	github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.51.2
10 | 	github.com/stretchr/testify v1.8.1
11 | 	golang.org/x/mod v0.8.0
12 | 	k8s.io/api v0.24.9
13 | 	k8s.io/apimachinery v0.24.9
14 | )
15 | 
16 | require (
17 | 	cloud.google.com/go v0.110.0 // indirect
18 | 	cloud.google.com/go/compute v1.19.1 // indirect
19 | 	cloud.google.com/go/compute/metadata v0.2.3 // indirect
20 | 	cloud.google.com/go/iam v0.13.0 // indirect
21 | 	cloud.google.com/go/storage v1.28.1 // indirect
22 | 	github.com/PuerkitoBio/purell v1.1.1 // indirect
23 | 	github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
24 | 	github.com/agext/levenshtein v1.2.3 // indirect
25 | 	github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
26 | 	github.com/aws/aws-sdk-go v1.44.122 // indirect
27 | 	github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect
28 | 	github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc // indirect
29 | 	github.com/cpuguy83/go-md2man/v2 v2.0.0 // indirect
30 | 	github.com/davecgh/go-spew v1.1.1 // indirect
31 | 	github.com/emicklei/go-restful v2.16.0+incompatible // indirect
32 | 	github.com/go-errors/errors v1.0.2-0.20180813162953-d98b870cc4e0 // indirect
33 | 	github.com/go-logr/logr v1.2.0 // indirect
34 | 	github.com/go-openapi/jsonpointer v0.19.5 // indirect
35 | 	github.com/go-openapi/jsonreference v0.19.5 // indirect
36 | 	github.com/go-openapi/swag v0.19.14 // indirect
37 | 	github.com/go-sql-driver/mysql v1.4.1 // indirect
38 | 	github.com/gogo/protobuf v1.3.2 // indirect
39 | 	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
40 | 	github.com/golang/protobuf v1.5.3 // indirect
41 | 	github.com/google/gnostic v0.5.7-v3refs // indirect
42 | 	github.com/google/go-cmp v0.5.9 // indirect
43 | 	github.com/google/gofuzz v1.1.0 // indirect
44 | 	github.com/google/uuid v1.3.0 // indirect
45 | 	github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect
46 | 	github.com/googleapis/gax-go/v2 v2.7.1 // indirect
47 | 	github.com/gruntwork-io/go-commons v0.8.0 // indirect
48 | 	github.com/hashicorp/errwrap v1.0.0 // indirect
49 | 	github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
50 | 	github.com/hashicorp/go-getter v1.7.5 // indirect
51 | 
github.com/hashicorp/go-multierror v1.1.0 // indirect 52 | github.com/hashicorp/go-safetemp v1.0.0 // indirect 53 | github.com/hashicorp/go-version v1.6.0 // indirect 54 | github.com/hashicorp/hcl/v2 v2.9.1 // indirect 55 | github.com/hashicorp/terraform-json v0.13.0 // indirect 56 | github.com/imdario/mergo v0.3.11 // indirect 57 | github.com/jinzhu/copier v0.0.0-20190924061706-b57f9002281a // indirect 58 | github.com/jmespath/go-jmespath v0.4.0 // indirect 59 | github.com/josharian/intern v1.0.0 // indirect 60 | github.com/json-iterator/go v1.1.12 // indirect 61 | github.com/klauspost/compress v1.15.11 // indirect 62 | github.com/mailru/easyjson v0.7.6 // indirect 63 | github.com/mattn/go-zglob v0.0.2-0.20190814121620-e3c945676326 // indirect 64 | github.com/mitchellh/go-homedir v1.1.0 // indirect 65 | github.com/mitchellh/go-testing-interface v1.14.1 // indirect 66 | github.com/mitchellh/go-wordwrap v1.0.1 // indirect 67 | github.com/moby/spdystream v0.2.0 // indirect 68 | github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect 69 | github.com/modern-go/reflect2 v1.0.2 // indirect 70 | github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect 71 | github.com/pmezard/go-difflib v1.0.0 // indirect 72 | github.com/pquerna/otp v1.2.0 // indirect 73 | github.com/russross/blackfriday/v2 v2.1.0 // indirect 74 | github.com/spf13/pflag v1.0.5 // indirect 75 | github.com/tmccombs/hcl2json v0.3.3 // indirect 76 | github.com/ulikunitz/xz v0.5.10 // indirect 77 | github.com/urfave/cli v1.22.2 // indirect 78 | github.com/zclconf/go-cty v1.9.1 // indirect 79 | go.opencensus.io v0.24.0 // indirect 80 | golang.org/x/crypto v0.21.0 // indirect 81 | golang.org/x/net v0.23.0 // indirect 82 | golang.org/x/oauth2 v0.7.0 // indirect 83 | golang.org/x/sys v0.18.0 // indirect 84 | golang.org/x/term v0.18.0 // indirect 85 | golang.org/x/text v0.14.0 // indirect 86 | golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect 87 | golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect 88 | google.golang.org/api v0.114.0 // indirect 89 | google.golang.org/appengine v1.6.7 // indirect 90 | google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect 91 | google.golang.org/grpc v1.56.3 // indirect 92 | google.golang.org/protobuf v1.33.0 // indirect 93 | gopkg.in/inf.v0 v0.9.1 // indirect 94 | gopkg.in/yaml.v2 v2.4.0 // indirect 95 | gopkg.in/yaml.v3 v3.0.1 // indirect 96 | k8s.io/client-go v0.24.9 // indirect 97 | k8s.io/klog/v2 v2.60.1 // indirect 98 | k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 // indirect 99 | k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 // indirect 100 | sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect 101 | sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect 102 | sigs.k8s.io/yaml v1.2.0 // indirect 103 | ) 104 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contribution Guidelines 2 | 3 | Contributions to this Package are very welcome! We follow a fairly standard [pull request process]( 4 | https://help.github.com/articles/about-pull-requests/) for contributions, subject to the following guidelines: 5 | 6 | 1. [File a GitHub issue](#file-a-github-issue) 7 | 1. [Update the documentation](#update-the-documentation) 8 | 1. [Update the tests](#update-the-tests) 9 | 1. [Update the code](#update-the-code) 10 | 1. 
[Create a pull request](#create-a-pull-request) 11 | 1. [Merge and release](#merge-and-release) 12 | 13 | 14 | ## File a GitHub issue 15 | 16 | Before starting any work, we recommend filing a GitHub issue in this repo. This is your chance to ask questions and 17 | get feedback from the maintainers and the community before you sink a lot of time into writing (possibly the wrong) 18 | code. If there is anything you're unsure about, just ask! 19 | 20 | 21 | ## Update the documentation 22 | 23 | We recommend updating the documentation *before* updating any code (see [Readme Driven 24 | Development](http://tom.preston-werner.com/2010/08/23/readme-driven-development.html)). This ensures the documentation 25 | stays up to date and allows you to think through the problem at a high level before you get lost in the weeds of 26 | coding. 27 | 28 | 29 | ## Update the tests 30 | 31 | We also recommend updating the automated tests *before* updating any code (see [Test Driven 32 | Development](https://en.wikipedia.org/wiki/Test-driven_development)). That means you add or update a test case, 33 | verify that it's failing with a clear error message, and *then* make the code changes to get that test to pass. This 34 | ensures the tests stay up to date and verify all the functionality in this Module, including whatever new 35 | functionality you're adding in your contribution. Check out the 36 | [tests](https://github.com/gruntwork-io/helm-kubernetes-services/tree/main/test) folder for instructions on running 37 | the automated tests. 38 | 39 | 40 | ## Update the code 41 | 42 | At this point, make your code changes and use your new test case to verify that everything is working. As you work, 43 | keep in mind two things: 44 | 45 | 1. Backwards compatibility 46 | 1. Downtime 47 | 48 | ### Backwards compatibility 49 | 50 | Please make every effort to avoid unnecessary backwards incompatible changes. With Helm charts, this means: 51 | 52 | 1. Do not delete, rename, or change the type of input variables. 53 | 1. If you add an input variable, set a default in `values.yaml`. 54 | 1. Do not delete, rename, or change the type of output variables. 55 | 1. Do not delete or rename a chart in the `charts` folder. 56 | 57 | If a backwards incompatible change cannot be avoided, please make sure to call that out when you submit a pull request, 58 | explaining why the change is absolutely necessary. 59 | 60 | ### Downtime 61 | 62 | Bear in mind that the Helm charts in this Module are used by real companies to run real infrastructure in 63 | production, and certain types of changes could cause downtime. If downtime cannot be avoided, please make sure to call 64 | that out when you submit a pull request. 65 | 66 | ### Code style 67 | 68 | We follow [the official Chart best practices](https://docs.helm.sh/chart_best_practices/) documented by the community. 69 | Please read through the guidelines if you have not already. 70 | 71 | Additionally, Gruntwork has a few extra guidelines not stated in the best practices guide: 72 | 73 | - Chart `values.yaml` should separate required inputs from optional inputs. Required inputs should be documented as 74 | comments. 75 | - Provide example required inputs in a separate `linter_values.yaml` file so that `helm lint` can be used to lint the 76 | charts. 77 | - Any input value that is rendered directly in the yaml templates (using `toYaml`) should be explicitly called out in 78 | the `values.yaml` file. 79 | - Any input value that is a map should explicitly call out the expected fields. 
Additionally, the fields should be 80 | labeled by type and whether or not the value is required. 81 | - Any required input that is a map, or optional input that is a map with an empty default, should have an example in the 82 | comments. 83 | - Every chart should have both helm tests and terratests. The helm tests test if the chart came up successfully in an 84 | install, while the terratest is used for integration testing of the chart before releasing the chart. 85 | 86 | ### Formatting and pre-commit hooks 87 | 88 | You must run `helm lint` on the code before committing. You can configure your computer to do this automatically 89 | using pre-commit hooks managed using [pre-commit](http://pre-commit.com/): 90 | 91 | 1. [Install pre-commit](http://pre-commit.com/#install). E.g.: `brew install pre-commit`. 92 | 1. Install the hooks: `pre-commit install`. 93 | 1. Make sure you have the helm client installed. See [the official docs](https://docs.helm.sh/using_helm/#install-helm) 94 | for instructions. 95 | 96 | Now write your code, and every time you commit, `helm lint` will be run on the charts that you modify. 97 | 98 | 99 | ## Create a pull request 100 | 101 | [Create a pull request](https://help.github.com/articles/creating-a-pull-request/) with your changes. Please make sure 102 | to include the following: 103 | 104 | 1. A description of the change, including a link to your GitHub issue. 105 | 1. The output of your automated test run, preferably in a [GitHub Gist](https://gist.github.com/). We cannot run 106 | automated tests for pull requests automatically due to [security 107 | concerns](https://circleci.com/docs/fork-pr-builds/#security-implications), so we need you to manually provide this 108 | test output so we can verify that everything is working. 109 | 1. Any notes on backwards incompatibility or downtime. 110 | 111 | 112 | ## Merge and release 113 | 114 | The maintainers for this repo will review your code and provide feedback. If everything looks good, they will merge the 115 | code and release a new version, which you'll be able to find in the [releases page](../../releases). 116 | -------------------------------------------------------------------------------- /test/k8s_service_service_account_template_test.go: -------------------------------------------------------------------------------- 1 | // +build all tpl 2 | 3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 4 | // run just the template tests. See the test README for more information. 
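//
// For reference, the serviceAccount inputs exercised below look roughly like this in YAML form (a sketch; the
// name and annotation are arbitrary placeholders):
//
//	serviceAccount:
//	  create: true
//	  name: my-service-account
//	  automountServiceAccountToken: false
//	  annotations:
//	    example-annotation: example-value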
5 | 6 | package test 7 | 8 | import ( 9 | "path/filepath" 10 | "strings" 11 | "testing" 12 | 13 | "github.com/gruntwork-io/terratest/modules/helm" 14 | "github.com/gruntwork-io/terratest/modules/random" 15 | "github.com/stretchr/testify/assert" 16 | "github.com/stretchr/testify/require" 17 | ) 18 | 19 | // Test that setting serviceAccount.create = true will cause the helm template to render the Service Account resource 20 | func TestK8SServiceAccountCreateTrueCreatesServiceAccount(t *testing.T) { 21 | t.Parallel() 22 | randomSAName := strings.ToLower(random.UniqueId()) 23 | 24 | serviceaccount := renderK8SServiceAccountWithSetValues( 25 | t, 26 | map[string]string{ 27 | "serviceAccount.create": "true", 28 | "serviceAccount.name": randomSAName, 29 | }, 30 | ) 31 | 32 | assert.Equal(t, serviceaccount.Name, randomSAName) 33 | } 34 | 35 | // Test that setting serviceAccount.create = false will cause the helm template to not render the Service Account 36 | // resource 37 | func TestK8SServiceAccountCreateFalse(t *testing.T) { 38 | t.Parallel() 39 | 40 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 41 | require.NoError(t, err) 42 | 43 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 44 | // We then use SetValues to override all the defaults. 45 | options := &helm.Options{ 46 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 47 | SetValues: map[string]string{ 48 | "serviceAccount.create": "false", 49 | }, 50 | } 51 | _, err = helm.RenderTemplateE(t, options, helmChartPath, "serviceaccount", []string{"templates/serviceaccount.yaml"}) 52 | require.Error(t, err) 53 | } 54 | 55 | func TestK8SServiceServiceAccountInjection(t *testing.T) { 56 | t.Parallel() 57 | randomSAName := strings.ToLower(random.UniqueId()) 58 | deployment := renderK8SServiceDeploymentWithSetValues( 59 | t, 60 | map[string]string{ 61 | "serviceAccount.name": randomSAName, 62 | }, 63 | ) 64 | renderedServiceAccountName := deployment.Spec.Template.Spec.ServiceAccountName 65 | assert.Equal(t, renderedServiceAccountName, randomSAName) 66 | } 67 | 68 | func TestK8SServiceServiceAccountNoNameIsEmpty(t *testing.T) { 69 | t.Parallel() 70 | deployment := renderK8SServiceDeploymentWithSetValues( 71 | t, 72 | map[string]string{}, 73 | ) 74 | renderedServiceAccountName := deployment.Spec.Template.Spec.ServiceAccountName 75 | assert.Equal(t, renderedServiceAccountName, "") 76 | } 77 | 78 | func TestK8SServiceServiceAccountAutomountTokenTrueInjection(t *testing.T) { 79 | t.Parallel() 80 | deployment := renderK8SServiceDeploymentWithSetValues( 81 | t, 82 | map[string]string{ 83 | "serviceAccount.automountServiceAccountToken": "true", 84 | }, 85 | ) 86 | renderedServiceAccountTokenAutomountSetting := deployment.Spec.Template.Spec.AutomountServiceAccountToken 87 | require.NotNil(t, renderedServiceAccountTokenAutomountSetting) 88 | assert.True(t, *renderedServiceAccountTokenAutomountSetting) 89 | } 90 | 91 | func TestK8SServiceServiceAccountAutomountTokenFalseInjection(t *testing.T) { 92 | t.Parallel() 93 | deployment := renderK8SServiceDeploymentWithSetValues( 94 | t, 95 | map[string]string{ 96 | "serviceAccount.automountServiceAccountToken": "false", 97 | }, 98 | ) 99 | renderedServiceAccountTokenAutomountSetting := deployment.Spec.Template.Spec.AutomountServiceAccountToken 100 | require.NotNil(t, renderedServiceAccountTokenAutomountSetting) 101 | assert.False(t, 
*renderedServiceAccountTokenAutomountSetting) 102 | } 103 | 104 | func TestK8SServiceServiceAccountOmitAutomountToken(t *testing.T) { 105 | t.Parallel() 106 | deployment := renderK8SServiceDeploymentWithSetValues( 107 | t, 108 | map[string]string{}, 109 | ) 110 | renderedServiceAccountTokenAutomountSetting := deployment.Spec.Template.Spec.AutomountServiceAccountToken 111 | assert.Nil(t, renderedServiceAccountTokenAutomountSetting) 112 | } 113 | 114 | // Test that the Annotations of a service account are correctly rendered 115 | func TestK8SServiceAccountAnnotationRendering(t *testing.T) { 116 | t.Parallel() 117 | 118 | serviceAccountAnnotationKey := "testAnnotation" 119 | serviceAccountAnnotationValue := strings.ToLower(random.UniqueId()) 120 | 121 | serviceaccount := renderK8SServiceAccountWithSetValues( 122 | t, 123 | map[string]string{ 124 | "serviceAccount.create": "true", 125 | "serviceAccount.annotations." + serviceAccountAnnotationKey: serviceAccountAnnotationValue, 126 | }, 127 | ) 128 | 129 | renderedAnnotation := serviceaccount.Annotations 130 | assert.Equal(t, len(renderedAnnotation), 1) 131 | assert.Equal(t, renderedAnnotation[serviceAccountAnnotationKey], serviceAccountAnnotationValue) 132 | } 133 | 134 | // Test that default imagePullSecrets do not render any 135 | func TestK8SServiceAccountNoImagePullSecrets(t *testing.T) { 136 | t.Parallel() 137 | 138 | serviceaccount := renderK8SServiceAccountWithSetValues( 139 | t, 140 | map[string]string{ 141 | "serviceAccount.create": "true", 142 | }, 143 | ) 144 | 145 | renderedImagePullSecrets := serviceaccount.ImagePullSecrets 146 | require.Equal(t, len(renderedImagePullSecrets), 0) 147 | } 148 | 149 | // Test that multiple imagePullSecrets renders each one correctly 150 | func TestK8SServiceAccountMultipleImagePullSecrets(t *testing.T) { 151 | t.Parallel() 152 | 153 | serviceaccount := renderK8SServiceAccountWithSetValues( 154 | t, 155 | map[string]string{ 156 | "serviceAccount.create": "true", 157 | "imagePullSecrets[0]": "docker-private-registry-key", 158 | "imagePullSecrets[1]": "gcr-registry-key", 159 | }, 160 | ) 161 | 162 | renderedImagePullSecrets := serviceaccount.ImagePullSecrets 163 | require.Equal(t, len(renderedImagePullSecrets), 2) 164 | assert.Equal(t, renderedImagePullSecrets[0].Name, "docker-private-registry-key") 165 | assert.Equal(t, renderedImagePullSecrets[1].Name, "gcr-registry-key") 166 | } 167 | -------------------------------------------------------------------------------- /test/k8s_service_lifecycle_hooks_template_test.go: -------------------------------------------------------------------------------- 1 | //go:build all || tpl 2 | // +build all tpl 3 | 4 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 5 | // run just the template tests. See the test README for more information. 
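//
// For reference, the lifecycleHooks inputs exercised below look roughly like this in YAML form (a sketch):
//
//	lifecycleHooks:
//	  enabled: true
//	  postStart:
//	    exec:
//	      command: ["echo", "run after start"]
//	  preStop:
//	    exec:
//	      command: ["echo", "run before stop"]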
6 | 7 | package test 8 | 9 | import ( 10 | "testing" 11 | 12 | "github.com/stretchr/testify/assert" 13 | "github.com/stretchr/testify/require" 14 | ) 15 | 16 | // Test that setting shutdownDelay to 0 will disable the preStop hook 17 | func TestK8SServiceShutdownDelayZeroDisablesPreStopHook(t *testing.T) { 18 | t.Parallel() 19 | 20 | deployment := renderK8SServiceDeploymentWithSetValues(t, map[string]string{"shutdownDelay": "0"}) 21 | 22 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 23 | require.Equal(t, len(renderedPodContainers), 1) 24 | appContainer := renderedPodContainers[0] 25 | assert.Nil(t, appContainer.Lifecycle) 26 | } 27 | 28 | // Test that setting shutdownDelay to something greater than 0 will include a preStop hook 29 | func TestK8SServiceNonZeroShutdownDelayIncludesPreStopHook(t *testing.T) { 30 | t.Parallel() 31 | 32 | deployment := renderK8SServiceDeploymentWithSetValues(t, map[string]string{"shutdownDelay": "5"}) 33 | 34 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 35 | require.Equal(t, len(renderedPodContainers), 1) 36 | appContainer := renderedPodContainers[0] 37 | require.NotNil(t, appContainer.Lifecycle) 38 | require.NotNil(t, appContainer.Lifecycle.PreStop) 39 | require.NotNil(t, appContainer.Lifecycle.PreStop.Exec) 40 | require.Equal(t, appContainer.Lifecycle.PreStop.Exec.Command, []string{"sleep", "5"}) 41 | } 42 | 43 | func TestK8SServiceDeploymentAddingOnlyPostStartLifecycleHooks(t *testing.T) { 44 | t.Parallel() 45 | 46 | deployment := renderK8SServiceDeploymentWithSetValues( 47 | t, 48 | map[string]string{ 49 | // Disable shutdown delay to ensure it doesn't enable preStop hooks. 50 | "shutdownDelay": "0", 51 | 52 | "lifecycleHooks.enabled": "true", 53 | "lifecycleHooks.postStart.exec.command[0]": "echo", 54 | "lifecycleHooks.postStart.exec.command[1]": "run after start", 55 | }, 56 | ) 57 | 58 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 59 | require.Equal(t, len(renderedPodContainers), 1) 60 | appContainer := renderedPodContainers[0] 61 | require.NotNil(t, appContainer.Lifecycle) 62 | 63 | assert.Nil(t, appContainer.Lifecycle.PreStop) 64 | 65 | require.NotNil(t, appContainer.Lifecycle.PostStart) 66 | require.NotNil(t, appContainer.Lifecycle.PostStart.Exec) 67 | require.Equal(t, appContainer.Lifecycle.PostStart.Exec.Command, []string{"echo", "run after start"}) 68 | } 69 | 70 | func TestK8SServiceDeploymentAddingOnlyPreStopLifecycleHooks(t *testing.T) { 71 | t.Parallel() 72 | 73 | deployment := renderK8SServiceDeploymentWithSetValues( 74 | t, 75 | map[string]string{ 76 | // Disable shutdown delay to ensure it doesn't enable preStop hooks. 
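			// (A non-zero shutdownDelay would otherwise render a sleep-based preStop hook, as verified in
			// TestK8SServiceNonZeroShutdownDelayIncludesPreStopHook above.)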
77 | "shutdownDelay": "0", 78 | 79 | "lifecycleHooks.enabled": "true", 80 | "lifecycleHooks.preStop.exec.command[0]": "echo", 81 | "lifecycleHooks.preStop.exec.command[1]": "run before stop", 82 | }, 83 | ) 84 | 85 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 86 | require.Equal(t, len(renderedPodContainers), 1) 87 | appContainer := renderedPodContainers[0] 88 | require.NotNil(t, appContainer.Lifecycle) 89 | 90 | assert.Nil(t, appContainer.Lifecycle.PostStart) 91 | 92 | require.NotNil(t, appContainer.Lifecycle.PreStop) 93 | require.NotNil(t, appContainer.Lifecycle.PreStop.Exec) 94 | require.Equal(t, appContainer.Lifecycle.PreStop.Exec.Command, []string{"echo", "run before stop"}) 95 | } 96 | 97 | func TestK8SServiceDeploymentAddingBothLifecycleHooks(t *testing.T) { 98 | t.Parallel() 99 | 100 | deployment := renderK8SServiceDeploymentWithSetValues( 101 | t, 102 | map[string]string{ 103 | // Disable shutdown delay to ensure it doesn't enable preStop hooks. 104 | "shutdownDelay": "0", 105 | 106 | "lifecycleHooks.enabled": "true", 107 | "lifecycleHooks.postStart.exec.command[0]": "echo", 108 | "lifecycleHooks.postStart.exec.command[1]": "run after start", 109 | "lifecycleHooks.preStop.exec.command[0]": "echo", 110 | "lifecycleHooks.preStop.exec.command[1]": "run before stop", 111 | }, 112 | ) 113 | 114 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 115 | require.Equal(t, len(renderedPodContainers), 1) 116 | appContainer := renderedPodContainers[0] 117 | require.NotNil(t, appContainer.Lifecycle) 118 | 119 | require.NotNil(t, appContainer.Lifecycle.PostStart) 120 | require.NotNil(t, appContainer.Lifecycle.PostStart.Exec) 121 | require.Equal(t, appContainer.Lifecycle.PostStart.Exec.Command, []string{"echo", "run after start"}) 122 | 123 | require.NotNil(t, appContainer.Lifecycle.PreStop) 124 | require.NotNil(t, appContainer.Lifecycle.PreStop.Exec) 125 | require.Equal(t, appContainer.Lifecycle.PreStop.Exec.Command, []string{"echo", "run before stop"}) 126 | } 127 | 128 | func TestK8SServiceDeploymentPreferExplicitPreStopOverShutdownDelay(t *testing.T) { 129 | t.Parallel() 130 | 131 | deployment := renderK8SServiceDeploymentWithSetValues( 132 | t, 133 | map[string]string{ 134 | "shutdownDelay": "5", 135 | "lifecycleHooks.enabled": "true", 136 | "lifecycleHooks.preStop.exec.command[0]": "echo", 137 | "lifecycleHooks.preStop.exec.command[1]": "run before stop", 138 | }, 139 | ) 140 | 141 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 142 | require.Equal(t, len(renderedPodContainers), 1) 143 | appContainer := renderedPodContainers[0] 144 | require.NotNil(t, appContainer.Lifecycle) 145 | 146 | assert.Nil(t, appContainer.Lifecycle.PostStart) 147 | 148 | require.NotNil(t, appContainer.Lifecycle.PreStop) 149 | require.NotNil(t, appContainer.Lifecycle.PreStop.Exec) 150 | require.Equal(t, appContainer.Lifecycle.PreStop.Exec.Command, []string{"echo", "run before stop"}) 151 | } 152 | 153 | func TestK8SServiceDeploymentEnabledFalseDisablesLifecycleHooksEvenWhenAddingBoth(t *testing.T) { 154 | t.Parallel() 155 | 156 | deployment := renderK8SServiceDeploymentWithSetValues( 157 | t, 158 | map[string]string{ 159 | // Disable shutdown delay to ensure it doesn't enable preStop hooks. 
160 | "shutdownDelay": "0", 161 | 162 | "lifecycleHooks.enabled": "false", 163 | "lifecycleHooks.postStart.exec.command[0]": "echo", 164 | "lifecycleHooks.postStart.exec.command[1]": "run after start", 165 | "lifecycleHooks.preStop.exec.command[0]": "echo", 166 | "lifecycleHooks.preStop.exec.command[1]": "run before stop", 167 | }, 168 | ) 169 | 170 | renderedPodContainers := deployment.Spec.Template.Spec.Containers 171 | require.Equal(t, len(renderedPodContainers), 1) 172 | appContainer := renderedPodContainers[0] 173 | require.Nil(t, appContainer.Lifecycle) 174 | } 175 | -------------------------------------------------------------------------------- /GRUNTWORK_PHILOSOPHY.md: -------------------------------------------------------------------------------- 1 | # Gruntwork Philosophy 2 | 3 | 4 | 5 | At Gruntwork, we strive to accelerate the deployment of production grade infrastructure by prodiving a library of 6 | stable, reusable, and battle tested infrastructure as code organized into a series of [modules](#what-is-a-module) with 7 | [submodules](#what-is-a-submodule). Each module represents a particular set of infrastructure that is componentized into 8 | smaller pieces represented by the submodules within the module. By doing so, we have built a composable library that can 9 | be combined into building out everything from simple single service deployments to complicated microservice setups so 10 | that your infrastructure can grow with your business needs. Every module we provide is built with the [production grade 11 | infrastruture checklist](#production-grade-infrastructure-checklist) in mind, ensuring that the services you deploy are 12 | resilient, fault tolerant, and scalable. 13 | 14 | 15 | ## What is a Module? 16 | 17 | A Module is a reusable, tested, documented, configurable, best-practices definition of a single piece of Infrastructure 18 | (e.g., Docker cluster, VPC, Jenkins, Consul), written using a combination of [Terraform](https://www.terraform.io/), Go, 19 | and Bash. A module contains a set of automated tests, documentation, and examples that have been proven in production, 20 | providing the underlying infrastructure for [Gruntwork's customers](https://www.gruntwork.io/customers). 21 | 22 | Instead of figuring out the details of how to run a piece of infrastructure from scratch, you can reuse existing code 23 | that has been proven in production. And instead of maintaining all that infrastructure code yourself, you can leverage 24 | the work of the community to pick up infrastructure improvements through a version number bump. 25 | 26 | 27 | ## What is a Submodule? 28 | 29 | Each Infrastructure Module consists of one or more orthogonal Submodules that handle some specific aspect of that 30 | Infrastructure Module's functionality. Breaking the code up into multiple submodules makes it easier to reuse and 31 | compose to handle many different use cases. Although Modules are designed to provide an end to end solution to manage 32 | the relevant infrastructure by combining the Submodules defined in the Module, Submodules can be used independently for 33 | specific functionality that you need in your infrastructure code. 34 | 35 | 36 | ## Production Grade Infrastructure Checklist 37 | 38 | At Gruntwork, we have learned over the years that it is not enough to just get the services up and running in a publicly 39 | accessible space to call your application "production-ready." 
There are many more things to consider, and oftentimes 40 | many of these considerations are missing in the deployment plan of applications. These topics come up as afterthoughts, 41 | and are learned the hard way after the fact. That is why we codified all of them into a checklist that can be used as a 42 | reference to help ensure that they are considered before your application goes to production, and conscious decisions 43 | are made to neglect particular components if needed, as opposed to accidentally omitting them from consideration. 44 | 45 | 49 | 50 | | Task | Description | Example tools | 51 | |--------------------|-------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------| 52 | | Install | Install the software binaries and all dependencies. | Bash, Chef, Ansible, Puppet | 53 | | Configure | Configure the software at runtime. Includes port settings, TLS certs, service discovery, leaders, followers, replication, etc. | Bash, Chef, Ansible, Puppet | 54 | | Provision | Provision the infrastructure. Includes EC2 instances, load balancers, network topology, security gr oups, IAM permissions, etc. | Terraform, CloudFormation | 55 | | Deploy | Deploy the service on top of the infrastructure. Roll out updates with no downtime. Includes blue-green, rolling, and canary deployments. | Scripts, Orchestration tools (ECS, k8s, Nomad) | 56 | | High availability | Withstand outages of individual processes, EC2 instances, services, Availability Zones, and regions. | Multi AZ, multi-region, replication, ASGs, ELBs | 57 | | Scalability | Scale up and down in response to load. Scale horizontally (more servers) and/or vertically (bigger servers). | ASGs, replication, sharding, caching, divide and conquer | 58 | | Performance | Optimize CPU, memory, disk, network, GPU, and usage. Includes query tuning, benchmarking, load testing, and profiling. | Dynatrace, valgrind, VisualVM, ab, Jmeter | 59 | | Networking | Configure static and dynamic IPs, ports, service discovery, firewalls, DNS, SSH access, and VPN access. | EIPs, ENIs, VPCs, NACLs, SGs, Route 53, OpenVPN | 60 | | Security | Encryption in transit (TLS) and on disk, authentication, authorization, secrets management, server hardening. | ACM, EBS Volumes, Cognito, Vault, CIS | 61 | | Metrics | Availability metrics, business metrics, app metrics, server metrics, events, observability, tracing, and alerting. | CloudWatch, DataDog, New Relic, Honeycomb | 62 | | Logs | Rotate logs on disk. Aggregate log data to a central location. | CloudWatch logs, ELK, Sumo Logic, Papertrail | 63 | | Backup and Restore | Make backups of DBs, caches, and other data on a scheduled basis. Replicate to separate region/account. | RDS, ElastiCache, ec2-snapper, Lambda | 64 | | Cost optimization | Pick proper instance types, use spot and reserved instances, use auto scaling, and nuke unused resources. | ASGs, spot instances, reserved instances | 65 | | Documentation | Document your code, architecture, and practices. Create playbooks to respond to incidents. | READMEs, wikis, Slack | 66 | | Tests | Write automated tests for your infrastructure code. Run tests after every commit and nightly. 
| Terratest | 67 | -------------------------------------------------------------------------------- /README.adoc: -------------------------------------------------------------------------------- 1 | :type: service 2 | :name: Kubernetes Service 3 | :description: Deploy a Kubernetes service with zero-downtime, rolling deployment, RBAC, auto scaling, secrets management, and more. 4 | :icon: /_docs/kubernetes-service.png 5 | :category: docker-services 6 | :cloud: k8s 7 | :tags: docker, orchestration, kubernetes, containers 8 | :license: gruntwork 9 | :built-with: helm 10 | 11 | // AsciiDoc TOC settings 12 | :toc: 13 | :toc-placement!: 14 | :toc-title: 15 | 16 | // GitHub specific settings. See https://gist.github.com/dcode/0cfbf2699a1fe9b46ff04c41721dda74 for details. 17 | ifdef::env-github[] 18 | :tip-caption: :bulb: 19 | :note-caption: :information_source: 20 | :important-caption: :heavy_exclamation_mark: 21 | :caution-caption: :fire: 22 | :warning-caption: :warning: 23 | endif::[] 24 | 25 | = Kubernetes Service 26 | 27 | image:https://img.shields.io/badge/maintained%20by-gruntwork.io-%235849a6.svg[link="https://gruntwork.io/?ref=repo_k8s_service"] 28 | 29 | This repo contains Helm Charts for deploying your applications on Kubernetes clusters with 30 | https://helm.sh[Helm] (hosted at https://helmcharts.gruntwork.io[helmcharts.gruntwork.io], implemented via https://github.com/gruntwork-io/helmcharts[gruntwork-io/helmcharts]). 31 | 32 | image::/_docs/k8s-service-architecture.png?raw=true[K8S Service architecture] 33 | 34 | toc::[] 35 | 36 | 37 | 38 | 39 | == Features 40 | 41 | * Deploy your application containers on to Kubernetes 42 | * Zero-downtime rolling deployments 43 | * Auto scaling and auto healing 44 | * Configuration management and Secrets management 45 | ** Secrets as Environment/Volumes/Secret Store CSI 46 | * Ingress and Service endpoints 47 | 48 | 49 | 50 | 51 | == Learn 52 | 53 | NOTE: This repo is a part of https://gruntwork.io/infrastructure-as-code-library/[the Gruntwork Infrastructure as Code 54 | Library], a collection of reusable, battle-tested, production ready infrastructure code. If you've never used the Infrastructure as Code Library before, make sure to read https://gruntwork.io/guides/foundations/how-to-use-gruntwork-infrastructure-as-code-library/[How to use the Gruntwork Infrastructure as Code Library]! 55 | 56 | === Core concepts 57 | 58 | * https://gruntwork.io/guides/kubernetes/how-to-deploy-production-grade-kubernetes-cluster-aws/#core_concepts[Kubernetes core concepts]: learn about Kubernetes architecture (control plane, worker nodes), access control (authentication, authorization), resources (pods, controllers, services, config, secrets), and more. 59 | * link:/core-concepts.md#how-do-you-run-applications-on-kubernetes[How do you run applications on Kubernetes?] 60 | * link:/core-concepts.md#what-is-helm[What is Helm?] 61 | * _https://www.manning.com/books/kubernetes-in-action[Kubernetes in Action]_: the best book we've found for getting up and running with Kubernetes. 62 | * link:/charts/k8s-service/README.md##how-to-use-this-chart[How to use this chart?] 63 | * link:/charts/k8s-service/README.md#what-resources-does-this-helm-chart-deploy[What resources does this Helm Chart deploy?] 64 | * link:/charts/k8s-service/README.md#what-is-a-sidecar-container[What is a sidecar container?] 65 | 66 | === Repo organization 67 | 68 | * link:/charts[charts]: the main implementation code for this repo, broken down into multiple standalone, orthogonal Helm charts. 
69 | * link:/examples[examples]: This folder contains working examples of how to use the submodules. 70 | * link:/test[test]: Automated tests for the modules and examples. 71 | 72 | 73 | == Deploy 74 | 75 | === Non-production deployment (quick start for learning) 76 | 77 | If you just want to try this repo out for experimenting and learning, check out the following resources: 78 | 79 | * link:/examples[examples folder]: The `examples` folder contains sample code optimized for learning, experimenting, and testing (but not production usage). 80 | 81 | === Production deployment 82 | 83 | If you want to deploy this repo in production, check out the following resources: 84 | 85 | * **Gruntwork Subscriber Only** https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/main/examples/for-production/infrastructure-live/prod/us-west-2/prod/services/k8s-sample-app-frontend/terragrunt.hcl[k8s-service in the example Reference Architecture]: Production-ready sample code from the Reference Architecture example. 86 | 87 | 88 | 89 | 90 | == Manage 91 | 92 | === Day-to-day operations 93 | 94 | * link:/charts/k8s-service/README.md#how-do-i-deploy-additional-services-not-managed-by-the-chart[How do I deploy additional services not managed by the chart?] 95 | * link:/charts/k8s-service/README.md#how-do-i-expose-my-application-internally-to-the-cluster[How do I expose my application internally to the cluster?] 96 | * link:/charts/k8s-service/README.md#how-do-i-expose-my-application-externally-outside-of-the-cluster[How do I expose my application externally, outside of the cluster?] 97 | * link:/charts/k8s-service/README.md#how-do-i-deploy-a-worker-service[How do I deploy a worker service?] 98 | * link:/charts/k8s-service/README.md#how-do-i-check-the-status-of-the-rollout[How do I check the status of the rollout?] 99 | * link:/charts/k8s-service/README.md#how-do-i-set-and-share-configurations-with-the-application[How do I set and share configurations with the application?] 100 | * link:/charts/k8s-service/README.md#why-does-the-pod-have-a-prestop-hook-with-a-shutdown-delay[Why does the Pod have a preStop hook with a Shutdown Delay?] 101 | * link:/charts/k8s-service/README.md#how-do-i-use-a-private-registry[How do I use a private registry?] 102 | * link:/charts/k8s-service/README.md#how-do-i-verify-my-canary-deployment[How do I verify my canary deployment?] 103 | * link:/charts/k8s-service/README.md#how-do-i-roll-back-a-canary-deployment[How do I roll back a canary deployment?] 104 | 105 | === Major changes 106 | 107 | * link:/charts/k8s-service/README.md#how-do-you-update-the-application-to-a-new-version[How do you update the application to a new version?] 108 | * link:/charts/k8s-service/README.md#how-do-i-ensure-a-minimum-number-of-pods-are-available-across-node-maintenance[How do I ensure a minimum number of Pods are available across node maintenance?] 109 | 110 | 111 | 112 | 113 | == Support 114 | 115 | If you need help with this repo or anything else related to infrastructure or DevOps, Gruntwork offers https://gruntwork.io/support/[Commercial Support] via Slack, email, and phone/video. If you're already a Gruntwork customer, hop on Slack and ask away! If not, https://www.gruntwork.io/pricing/[subscribe now]. If you're not sure, feel free to email us at link:mailto:support@gruntwork.io[support@gruntwork.io]. 116 | 117 | 118 | 119 | 120 | == Contributions 121 | 122 | Contributions to this repo are very welcome and appreciated! 
If you find a bug or want to add a new feature or even contribute an entirely new module, we are very happy to accept pull requests, provide feedback, and run your changes through our automated test suite. 123 | 124 | Please see https://gruntwork.io/guides/foundations/how-to-use-gruntwork-infrastructure-as-code-library/#contributing-to-the-gruntwork-infrastructure-as-code-library[Contributing to the Gruntwork Infrastructure as Code Library] for instructions. 125 | 126 | 127 | 128 | 129 | == License 130 | 131 | Please see link:LICENSE[LICENSE] for details on how the code in this repo is licensed. 132 | -------------------------------------------------------------------------------- /test/k8s_service_template_render_helpers_for_test.go: -------------------------------------------------------------------------------- 1 | //go:build all || tpl 2 | // +build all tpl 3 | 4 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 5 | // run just the template tests. See the test README for more information. 6 | 7 | package test 8 | 9 | import ( 10 | "path/filepath" 11 | "testing" 12 | 13 | "github.com/gruntwork-io/terratest/modules/helm" 14 | "github.com/stretchr/testify/require" 15 | appsv1 "k8s.io/api/apps/v1" 16 | corev1 "k8s.io/api/core/v1" 17 | extv1beta1 "k8s.io/api/extensions/v1beta1" 18 | networkingv1 "k8s.io/api/networking/v1" 19 | 20 | certapi "github.com/GoogleCloudPlatform/gke-managed-certs/pkg/apis/networking.gke.io/v1beta1" 21 | ) 22 | 23 | func renderK8SServiceDeploymentWithSetValues(t *testing.T, setValues map[string]string) appsv1.Deployment { 24 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 25 | require.NoError(t, err) 26 | 27 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 28 | options := &helm.Options{ 29 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 30 | SetValues: setValues, 31 | } 32 | // Render just the deployment resource 33 | out := helm.RenderTemplate(t, options, helmChartPath, "deployment", []string{"templates/deployment.yaml"}) 34 | 35 | // Parse the deployment and return it 36 | var deployment appsv1.Deployment 37 | helm.UnmarshalK8SYaml(t, out, &deployment) 38 | return deployment 39 | } 40 | 41 | func renderK8SServiceCanaryDeploymentWithSetValues(t *testing.T, setValues map[string]string) appsv1.Deployment { 42 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 43 | require.NoError(t, err) 44 | 45 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 
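	// Roughly what this helper does on the CLI (a sketch, assuming Helm v3):
	//
	//	helm template deployment ../charts/k8s-service \
	//	  -f ../charts/k8s-service/linter_values.yaml \
	//	  --set <key>=<value> \
	//	  --show-only templates/deployment.yaml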
46 | options := &helm.Options{ 47 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 48 | SetValues: setValues, 49 | } 50 | // Render just the canary deployment resource 51 | out := helm.RenderTemplate(t, options, helmChartPath, "canarydeployment", []string{"templates/canarydeployment.yaml"}) 52 | 53 | // Parse the canary deployment and return it 54 | var canarydeployment appsv1.Deployment 55 | helm.UnmarshalK8SYaml(t, out, &canarydeployment) 56 | return canarydeployment 57 | } 58 | 59 | func renderK8SServiceIngressWithSetValues(t *testing.T, setValues map[string]string) networkingv1.Ingress { 60 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 61 | require.NoError(t, err) 62 | 63 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 64 | options := &helm.Options{ 65 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 66 | SetValues: setValues, 67 | } 68 | // Render just the ingress resource 69 | out := helm.RenderTemplate(t, options, helmChartPath, "ingress", []string{"templates/ingress.yaml"}) 70 | 71 | // Parse the ingress and return it 72 | var ingress networkingv1.Ingress 73 | helm.UnmarshalK8SYaml(t, out, &ingress) 74 | return ingress 75 | } 76 | 77 | func renderK8SServiceIngressWithValuesFile(t *testing.T, valuesFilePath string) networkingv1.Ingress { 78 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 79 | require.NoError(t, err) 80 | 81 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 82 | options := &helm.Options{ 83 | ValuesFiles: []string{ 84 | filepath.Join("..", "charts", "k8s-service", "linter_values.yaml"), 85 | valuesFilePath, 86 | }, 87 | } 88 | // Render just the ingress resource 89 | out := helm.RenderTemplate(t, options, helmChartPath, "ingress", []string{"templates/ingress.yaml"}) 90 | 91 | // Parse the ingress and return it 92 | var ingress networkingv1.Ingress 93 | helm.UnmarshalK8SYaml(t, out, &ingress) 94 | return ingress 95 | } 96 | 97 | func renderK8SServiceExtV1Beta1IngressWithSetValues(t *testing.T, setValues map[string]string) extv1beta1.Ingress { 98 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 99 | require.NoError(t, err) 100 | 101 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 102 | options := &helm.Options{ 103 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 104 | SetValues: setValues, 105 | } 106 | // Render just the ingress resource 107 | out := helm.RenderTemplate(t, options, helmChartPath, "ingress", []string{"templates/ingress.yaml"}) 108 | 109 | // Parse the ingress and return it 110 | var ingress extv1beta1.Ingress 111 | helm.UnmarshalK8SYaml(t, out, &ingress) 112 | return ingress 113 | } 114 | 115 | func renderK8SServiceManagedCertificateWithSetValues(t *testing.T, setValues map[string]string) certapi.ManagedCertificate { 116 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 117 | require.NoError(t, err) 118 | 119 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 
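	// (ManagedCertificate is a GKE-specific CRD under networking.gke.io/v1beta1; the
	// typed struct used below comes from the gke-managed-certs package imported above.)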
120 | options := &helm.Options{ 121 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 122 | SetValues: setValues, 123 | } 124 | // Render just the google managed certificate resource 125 | out := helm.RenderTemplate(t, options, helmChartPath, "gmc", []string{"templates/gmc.yaml"}) 126 | 127 | // Parse the google managed certificate and return it 128 | var cert certapi.ManagedCertificate 129 | helm.UnmarshalK8SYaml(t, out, &cert) 130 | return cert 131 | } 132 | 133 | func renderK8SServiceAccountWithSetValues(t *testing.T, setValues map[string]string) corev1.ServiceAccount { 134 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 135 | require.NoError(t, err) 136 | 137 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 138 | options := &helm.Options{ 139 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 140 | SetValues: setValues, 141 | } 142 | // Render just the service account resource 143 | out := helm.RenderTemplate(t, options, helmChartPath, "serviceaccount", []string{"templates/serviceaccount.yaml"}) 144 | 145 | // Parse the service account and return it 146 | var serviceaccount corev1.ServiceAccount 147 | helm.UnmarshalK8SYaml(t, out, &serviceaccount) 148 | return serviceaccount 149 | } 150 | 151 | func renderK8SServiceWithSetValues(t *testing.T, setValues map[string]string) corev1.Service { 152 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 153 | require.NoError(t, err) 154 | 155 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 156 | options := &helm.Options{ 157 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 158 | SetValues: setValues, 159 | } 160 | // Render just the service resource 161 | out := helm.RenderTemplate(t, options, helmChartPath, "service", []string{"templates/service.yaml"}) 162 | 163 | // Parse the service and return it 164 | var service corev1.Service 165 | helm.UnmarshalK8SYaml(t, out, &service) 166 | return service 167 | } 168 | -------------------------------------------------------------------------------- /.circleci/config.yml: -------------------------------------------------------------------------------- 1 | defaults: &defaults 2 | machine: 3 | enabled: true 4 | image: ubuntu-2004:202111-02 5 | 6 | base_env: &base_env 7 | GRUNTWORK_INSTALLER_VERSION: v0.0.38 8 | TERRATEST_LOG_PARSER_VERSION: v0.40.6 9 | HELM_VERSION: v3.8.0 10 | MODULE_CI_VERSION: v0.55.1 11 | MINIKUBE_VERSION: v1.28.0 12 | TERRAFORM_VERSION: NONE 13 | TERRAGRUNT_VERSION: NONE 14 | PACKER_VERSION: NONE 15 | GOLANG_VERSION: 1.18 16 | GO111MODULE: auto 17 | KUBECONFIG: /home/circleci/.kube/config 18 | 19 | install_helm_client: &install_helm_client 20 | name: install helm 21 | command: | 22 | # install helm 23 | curl -Lo helm.tar.gz https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz 24 | tar -xvf helm.tar.gz 25 | chmod +x linux-amd64/helm 26 | sudo mv linux-amd64/helm /usr/local/bin/ 27 | 28 | # Initialize stable repository 29 | helm repo add stable https://charts.helm.sh/stable 30 | 31 | install_gruntwork_utils: &install_gruntwork_utils 32 | name: install gruntwork utils 33 | command: | 34 | curl -Ls https://raw.githubusercontent.com/gruntwork-io/gruntwork-installer/master/bootstrap-gruntwork-installer.sh | bash /dev/stdin --version 
"${GRUNTWORK_INSTALLER_VERSION}" 35 | gruntwork-install --module-name "gruntwork-module-circleci-helpers" --repo "https://github.com/gruntwork-io/terraform-aws-ci" --tag "${MODULE_CI_VERSION}" 36 | gruntwork-install --module-name "kubernetes-circleci-helpers" --repo "https://github.com/gruntwork-io/terraform-aws-ci" --tag "${MODULE_CI_VERSION}" 37 | gruntwork-install --binary-name "terratest_log_parser" --repo "https://github.com/gruntwork-io/terratest" --tag "${TERRATEST_LOG_PARSER_VERSION}" 38 | configure-environment-for-gruntwork-module \ 39 | --terraform-version ${TERRAFORM_VERSION} \ 40 | --terragrunt-version ${TERRAGRUNT_VERSION} \ 41 | --packer-version ${PACKER_VERSION} \ 42 | --go-version ${GOLANG_VERSION} \ 43 | 44 | integration_test_steps: &integration_test_steps 45 | steps: 46 | - attach_workspace: 47 | at: /home/circleci 48 | - run: 49 | <<: *install_gruntwork_utils 50 | - run: 51 | command: | 52 | sudo apt-get update 53 | sudo DEBIAN_FRONTEND=noninteractive apt-get install -y conntrack 54 | setup-minikube --minikube-version "${MINIKUBE_VERSION}" --k8s-version "${KUBERNETES_VERSION}" ${CRI_DOCKERD_ARG} 55 | - run: 56 | <<: *install_helm_client 57 | - run: 58 | name: run tests 59 | command: | 60 | mkdir -p /tmp/logs 61 | cd test 62 | go mod tidy 63 | run-go-tests --packages "-tags integration ." --timeout 60m | tee /tmp/logs/all.log 64 | no_output_timeout: 3600s 65 | - run: 66 | command: terratest_log_parser --testlog /tmp/logs/all.log --outputdir /tmp/logs 67 | when: always 68 | - store_artifacts: 69 | path: /tmp/logs 70 | - store_test_results: 71 | path: /tmp/logs 72 | 73 | version: 2 74 | jobs: 75 | setup: 76 | environment: 77 | <<: *base_env 78 | docker: 79 | - image: 087285199408.dkr.ecr.us-east-1.amazonaws.com/circle-ci-test-image-base:go1.21.9-tf1.5-tg39.1-pck1.8-ci54.0 80 | steps: 81 | - checkout 82 | # Install gruntwork utilities 83 | - run: 84 | <<: *install_gruntwork_utils 85 | - run: 86 | <<: *install_helm_client 87 | # Fail the build if the pre-commit hooks don't pass. Note: if you run pre-commit install locally, these hooks will 88 | # execute automatically every time before you commit, ensuring the build never fails at this step! 89 | - run: 90 | command: | 91 | pre-commit install 92 | pre-commit run --all-files 93 | - persist_to_workspace: 94 | root: /home/circleci 95 | paths: 96 | - project 97 | tpl_tests: 98 | <<: *defaults 99 | environment: 100 | <<: *base_env 101 | steps: 102 | - attach_workspace: 103 | at: /home/circleci 104 | - run: 105 | <<: *install_gruntwork_utils 106 | - run: 107 | <<: *install_helm_client 108 | - run: 109 | name: run tests 110 | command: | 111 | mkdir -p /tmp/logs 112 | cd test 113 | go mod tidy 114 | run-go-tests --packages "-tags tpl ." 
--timeout 60m | tee /tmp/logs/all.log
115 |             no_output_timeout: 3600s
116 |       - run:
117 |           command: terratest_log_parser --testlog /tmp/logs/all.log --outputdir /tmp/logs
118 |           when: always
119 |       - store_artifacts:
120 |           path: /tmp/logs
121 |       - store_test_results:
122 |           path: /tmp/logs
123 | 
124 |   test_k8s124:
125 |     <<: [*defaults, *integration_test_steps]
126 |     environment:
127 |       <<: *base_env
128 |       KUBERNETES_VERSION: v1.24.8
129 |       CRI_DOCKERD_ARG: "--cri-dockerd-version 0.3.0"
130 |       MINIKUBE_VERSION: v1.28.0
131 |       MODULE_CI_VERSION: v0.51.0
132 | 
133 |   test_k8s121:
134 |     <<: [*defaults, *integration_test_steps]
135 |     environment:
136 |       <<: *base_env
137 |       KUBERNETES_VERSION: v1.21.7
138 |       CRI_DOCKERD_ARG: ""
139 |       MINIKUBE_VERSION: v1.22.0
140 |       MODULE_CI_VERSION: v0.50.0
141 | 
142 |   deploy:
143 |     <<: *defaults
144 |     environment:
145 |       <<: *base_env
146 |     steps:
147 |       - attach_workspace:
148 |           at: /home/circleci
149 |       - run:
150 |           <<: *install_gruntwork_utils
151 |       - run:
152 |           <<: *install_helm_client
153 |       - run:
154 |           name: Generate chart packages
155 |           command: |
156 |             mkdir -p assets
157 |             assets_dir="$(python -c "import os; print(os.path.abspath('./assets'))")"
158 |             version_tag="$(echo "$CIRCLE_TAG" | sed -E "s/^v?//")"  # -E so "?" makes the leading "v" optional
159 |             for chart in charts/*/; do
160 |               chart_name="$(basename "$chart")"
161 |               echo "Packaging chart ${chart_name}"
162 |               # Update version tag
163 |               sed -i "s/0.0.1-replace/${version_tag}/" "${chart}/Chart.yaml"
164 |               # TODO: Figure out provenance strategy
165 |               (cd "charts" && helm package "${chart_name}" -d "${assets_dir}")
166 |             done
167 |       - run:
168 |           name: Generate chart repo index
169 |           command: |
170 |             cd assets
171 |             helm repo index --url "https://github.com/gruntwork-io/helm-kubernetes-services/releases/download/${CIRCLE_TAG}" .
172 |       - run:
173 |           command: upload-github-release-assets ./assets/*
174 | workflows:
175 |   version: 2
176 |   test-and-deploy:
177 |     jobs:
178 |       - setup:
179 |           filters:
180 |             tags:
181 |               only: /^v.*/
182 |           context:
183 |             - AWS__PHXDEVOPS__circle-ci-test
184 |             - GITHUB__PAT__gruntwork-ci
185 | 
186 |       - tpl_tests:
187 |           requires:
188 |             - setup
189 |           filters:
190 |             tags:
191 |               only: /^v.*/
192 |           context:
193 |             - AWS__PHXDEVOPS__circle-ci-test
194 |             - GITHUB__PAT__gruntwork-ci
195 | 
196 |       - test_k8s124:
197 |           requires:
198 |             - setup
199 |           filters:
200 |             tags:
201 |               only: /^v.*/
202 |           context:
203 |             - AWS__PHXDEVOPS__circle-ci-test
204 |             - GITHUB__PAT__gruntwork-ci
205 | 
206 |       - test_k8s121:
207 |           requires:
208 |             - setup
209 |           filters:
210 |             tags:
211 |               only: /^v.*/
212 |           context:
213 |             - AWS__PHXDEVOPS__circle-ci-test
214 |             - GITHUB__PAT__gruntwork-ci
215 | 
216 |       - deploy:
217 |           requires:
218 |             - tpl_tests
219 |             - test_k8s124
220 |             - test_k8s121
221 |           filters:
222 |             tags:
223 |               only: /^v.*/
224 |             branches:
225 |               ignore: /.*/
226 |           context:
227 |             - AWS__PHXDEVOPS__circle-ci-test
228 |             - GITHUB__PAT__gruntwork-ci
229 | 
--------------------------------------------------------------------------------
/test/k8s_service_vertical_pod_autoscaler_template_test.go:
--------------------------------------------------------------------------------
1 | //go:build all || tpl
2 | // +build all tpl
3 | 
4 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently
5 | // run just the template tests. See the test README for more information.
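// For example, to run only these template tests locally (a sketch, assuming a
// standard Go toolchain and the build tags declared above):
//
//	go test -tags tpl ./test/...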
6 | 7 | package test 8 | 9 | import ( 10 | "path/filepath" 11 | "strconv" 12 | "testing" 13 | 14 | "github.com/ghodss/yaml" 15 | "github.com/gruntwork-io/terratest/modules/helm" 16 | "github.com/stretchr/testify/assert" 17 | "github.com/stretchr/testify/require" 18 | ) 19 | 20 | // Test that setting verticalPodAutoscaler.enabled = true will cause the helm template to render the Vertical Pod 21 | // Autoscaler resource with main pod configuration 22 | func TestK8SServiceVerticalPodAutoscalerCreateTrueCreatesVerticalPodAutoscalerWithMainPodConfiguration(t *testing.T) { 23 | t.Parallel() 24 | updateMode := "Initial" 25 | minReplicas := "20" 26 | controlledValues := "RequestsOnly" 27 | 28 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 29 | require.NoError(t, err) 30 | 31 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 32 | // We then use SetValues to override all the defaults. 33 | options := &helm.Options{ 34 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 35 | SetValues: map[string]string{ 36 | "verticalPodAutoscaler.enabled": "true", 37 | "verticalPodAutoscaler.updateMode": updateMode, 38 | "verticalPodAutoscaler.minReplicas": minReplicas, 39 | "verticalPodAutoscaler.mainContainerResourcePolicy.controlledValues": controlledValues, 40 | }, 41 | } 42 | out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/verticalpodautoscaler.yaml"}) 43 | 44 | // We take the output and render it to a map to validate it has created a Vertical Pod Autoscaler output or not 45 | rendered := map[string]interface{}{} 46 | err = yaml.Unmarshal([]byte(out), &rendered) 47 | assert.NoError(t, err) 48 | assert.NotEqual(t, 0, len(rendered)) 49 | min, err := strconv.ParseFloat(minReplicas, 64) 50 | assert.Equal(t, updateMode, rendered["spec"].(map[string]interface{})["updatePolicy"].(map[string]interface{})["updateMode"]) 51 | assert.Equal(t, min, rendered["spec"].(map[string]interface{})["updatePolicy"].(map[string]interface{})["minReplicas"]) 52 | assert.Equal(t, controlledValues, rendered["spec"].(map[string]interface{})["resourcePolicy"].(map[string]interface{})["containerPolicies"].([]interface{})[0].(map[string]interface{})["controlledValues"]) 53 | } 54 | 55 | // Test that setting verticalPodAutoscaler.enabled = false will cause the helm template to not render the Vertical 56 | // Pod Autoscaler resource 57 | func TestK8SServiceVerticalPodAutoscalerCreateFalse(t *testing.T) { 58 | t.Parallel() 59 | 60 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 61 | require.NoError(t, err) 62 | 63 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 64 | // We then use SetValues to override all the defaults. 
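	// With verticalPodAutoscaler.enabled=false the template renders no resources, so we
	// use RenderTemplateE and expect an error rather than empty output.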
65 | 	options := &helm.Options{
66 | 		ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")},
67 | 		SetValues: map[string]string{
68 | 			"verticalPodAutoscaler.enabled": "false",
69 | 		},
70 | 	}
71 | 	_, err = helm.RenderTemplateE(t, options, helmChartPath, "hpa", []string{"templates/verticalpodautoscaler.yaml"})
72 | 	require.Error(t, err)
73 | }
74 | 
75 | // Test that setting verticalPodAutoscaler.enabled = true will cause the helm template to render the Vertical Pod
76 | // Autoscaler resource with maxAllowed
77 | func TestK8SServiceVerticalPodAutoscalerCreateTrueCreatesVerticalPodAutoscalerWithMaxAllowed(t *testing.T) {
78 | 	t.Parallel()
79 | 	maxCPU := "1000m"
80 | 	maxMemory := "1000Mi"
81 | 
82 | 	helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service"))
83 | 	require.NoError(t, err)
84 | 
85 | 	// We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined.
86 | 	// We then use SetValues to override all the defaults.
87 | 	options := &helm.Options{
88 | 		ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")},
89 | 		SetValues: map[string]string{
90 | 			"verticalPodAutoscaler.enabled": "true",
91 | 			"verticalPodAutoscaler.mainContainerResourcePolicy.maxAllowed.cpu":    maxCPU,
92 | 			"verticalPodAutoscaler.mainContainerResourcePolicy.maxAllowed.memory": maxMemory,
93 | 		},
94 | 	}
95 | 	out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/verticalpodautoscaler.yaml"})
96 | 
97 | 	// We take the output and render it to a map to validate it has created a Vertical Pod Autoscaler output or not
98 | 	rendered := map[string]interface{}{}
99 | 	err = yaml.Unmarshal([]byte(out), &rendered)
100 | 	assert.NoError(t, err)
101 | 	assert.NotEqual(t, 0, len(rendered))
102 | 	assert.Equal(t, maxCPU, rendered["spec"].(map[string]interface{})["resourcePolicy"].(map[string]interface{})["containerPolicies"].([]interface{})[0].(map[string]interface{})["maxAllowed"].(map[string]interface{})["cpu"])
103 | 	assert.Equal(t, maxMemory, rendered["spec"].(map[string]interface{})["resourcePolicy"].(map[string]interface{})["containerPolicies"].([]interface{})[0].(map[string]interface{})["maxAllowed"].(map[string]interface{})["memory"])
104 | }
105 | 
106 | // Test that setting verticalPodAutoscaler.enabled = true will cause the helm template to render the Vertical Pod
107 | // Autoscaler resource with minAllowed
108 | func TestK8SServiceVerticalPodAutoscalerCreateTrueCreatesVerticalPodAutoscalerWithMinAllowed(t *testing.T) {
109 | 	t.Parallel()
110 | 	minCPU := "1000m"
111 | 	minMemory := "1000Mi"
112 | 
113 | 	helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service"))
114 | 	require.NoError(t, err)
115 | 
116 | 	// We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined.
117 | 	// We then use SetValues to override all the defaults.
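	// Rough shape of the rendered VPA being asserted on below (a sketch, not the
	// chart's verbatim output):
	//
	//	spec:
	//	  resourcePolicy:
	//	    containerPolicies:
	//	      - minAllowed:
	//	          cpu: 1000m
	//	          memory: 1000Mi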
118 | 	options := &helm.Options{
119 | 		ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")},
120 | 		SetValues: map[string]string{
121 | 			"verticalPodAutoscaler.enabled": "true",
122 | 			"verticalPodAutoscaler.mainContainerResourcePolicy.minAllowed.cpu":    minCPU,
123 | 			"verticalPodAutoscaler.mainContainerResourcePolicy.minAllowed.memory": minMemory,
124 | 		},
125 | 	}
126 | 	out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/verticalpodautoscaler.yaml"})
127 | 
128 | 	// We take the output and render it to a map to validate it has created a Vertical Pod Autoscaler output or not
129 | 	rendered := map[string]interface{}{}
130 | 	err = yaml.Unmarshal([]byte(out), &rendered)
131 | 	assert.NoError(t, err)
132 | 	assert.NotEqual(t, 0, len(rendered))
133 | 	assert.Equal(t, minCPU, rendered["spec"].(map[string]interface{})["resourcePolicy"].(map[string]interface{})["containerPolicies"].([]interface{})[0].(map[string]interface{})["minAllowed"].(map[string]interface{})["cpu"])
134 | 	assert.Equal(t, minMemory, rendered["spec"].(map[string]interface{})["resourcePolicy"].(map[string]interface{})["containerPolicies"].([]interface{})[0].(map[string]interface{})["minAllowed"].(map[string]interface{})["memory"])
135 | }
136 | 
137 | // Test that setting verticalPodAutoscaler.enabled = true will cause the helm template to render the Vertical Pod
138 | // Autoscaler resource with the default updateMode of "Off"
139 | func TestK8SServiceVerticalPodAutoscalerCreateTrueCreatesVerticalPodAutoscalerWithUpdateModeOff(t *testing.T) {
140 | 	t.Parallel()
141 | 	updateMode := "Off"
142 | 
143 | 	helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service"))
144 | 	require.NoError(t, err)
145 | 
146 | 	// We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined.
147 | 	// We then use SetValues to override all the defaults.
148 | 	options := &helm.Options{
149 | 		ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")},
150 | 		SetValues: map[string]string{
151 | 			"verticalPodAutoscaler.enabled": "true",
152 | 		},
153 | 	}
154 | 	out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/verticalpodautoscaler.yaml"})
155 | 
156 | 	// We take the output and render it to a map to validate it has created a Vertical Pod Autoscaler output or not
157 | 	rendered := map[string]interface{}{}
158 | 	err = yaml.Unmarshal([]byte(out), &rendered)
159 | 	assert.NoError(t, err)
160 | 	assert.NotEqual(t, 0, len(rendered))
161 | 	assert.Equal(t, updateMode, rendered["spec"].(map[string]interface{})["updatePolicy"].(map[string]interface{})["updateMode"])
162 | }
163 | 
--------------------------------------------------------------------------------
/test/k8s_service_nginx_example_test.go:
--------------------------------------------------------------------------------
1 | //go:build all || integration
2 | // +build all integration
3 | 
4 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently
5 | // run just the template tests. See the test README for more information.
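// To run only the integration tests (a sketch; a working Kubernetes context such as
// minikube is assumed, mirroring the CircleCI config in this repo):
//
//	go test -tags integration -timeout 60m ./test/...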
6 | 
7 | package test
8 | 
9 | import (
10 | 	"fmt"
11 | 	"path/filepath"
12 | 	"strings"
13 | 	"testing"
14 | 
15 | 	"github.com/gruntwork-io/terratest/modules/helm"
16 | 	http_helper "github.com/gruntwork-io/terratest/modules/http-helper"
17 | 	"github.com/gruntwork-io/terratest/modules/k8s"
18 | 	"github.com/gruntwork-io/terratest/modules/random"
19 | 	test_structure "github.com/gruntwork-io/terratest/modules/test-structure"
20 | 	"github.com/stretchr/testify/require"
21 | 	"golang.org/x/mod/semver"
22 | )
23 | 
24 | // Test that:
25 | //
26 | // 1. We can deploy the example
27 | // 2. The deployment succeeds without errors
28 | // 3. We can open a port forward to one of the Pods and access nginx
29 | // 4. We can access nginx via the service endpoint
30 | // 5. We can access nginx via the ingress endpoint
31 | // 6. If we set a lower priority path, the application path takes precedence over the black hole service
32 | // 7. If we set a higher priority path, that takes precedence over the nginx service
33 | func TestK8SServiceNginxExample(t *testing.T) {
34 | 	t.Parallel()
35 | 
36 | 	workingDir := filepath.Join(".", "stages", t.Name())
37 | 
38 | 	//os.Setenv("SKIP_setup", "true")
39 | 	//os.Setenv("SKIP_create_namespace", "true")
40 | 	//os.Setenv("SKIP_install", "true")
41 | 	//os.Setenv("SKIP_validate_initial_deployment", "true")
42 | 	//os.Setenv("SKIP_upgrade", "true")
43 | 	//os.Setenv("SKIP_validate_upgrade", "true")
44 | 	//os.Setenv("SKIP_delete", "true")
45 | 	//os.Setenv("SKIP_delete_namespace", "true")
46 | 
47 | 	helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service"))
48 | 	require.NoError(t, err)
49 | 	examplePath, err := filepath.Abs(filepath.Join("..", "examples", "k8s-service-nginx"))
50 | 	require.NoError(t, err)
51 | 
52 | 	// Create a test namespace to deploy resources into, to avoid colliding with other tests
53 | 	test_structure.RunTestStage(t, "setup", func() {
54 | 		kubectlOptions := k8s.NewKubectlOptions("", "", "")
55 | 		test_structure.SaveKubectlOptions(t, workingDir, kubectlOptions)
56 | 
57 | 		uniqueID := random.UniqueId()
58 | 		test_structure.SaveString(t, workingDir, "uniqueID", uniqueID)
59 | 	})
60 | 	kubectlOptions := test_structure.LoadKubectlOptions(t, workingDir)
61 | 	uniqueID := test_structure.LoadString(t, workingDir, "uniqueID")
62 | 	testNamespace := fmt.Sprintf("k8s-service-nginx-%s", strings.ToLower(uniqueID))
63 | 
64 | 	defer test_structure.RunTestStage(t, "delete_namespace", func() {
65 | 		k8s.DeleteNamespace(t, kubectlOptions, testNamespace)
66 | 	})
67 | 
68 | 	test_structure.RunTestStage(t, "create_namespace", func() {
69 | 		k8s.CreateNamespace(t, kubectlOptions, testNamespace)
70 | 	})
71 | 
72 | 	kubectlOptions.Namespace = testNamespace
73 | 
74 | 	// Use the values file in the example and deploy the chart in the test namespace
75 | 	// Set a random release name
76 | 	releaseName := fmt.Sprintf("k8s-service-nginx-%s", strings.ToLower(uniqueID))
77 | 	options := &helm.Options{
78 | 		KubectlOptions: kubectlOptions,
79 | 		ValuesFiles:    []string{filepath.Join(examplePath, "values.yaml")},
80 | 		SetValues: map[string]string{
81 | 			"ingress.enabled":     "true",
82 | 			"ingress.path":        "/app",
83 | 			"ingress.pathType":    "Prefix",
84 | 			"ingress.servicePort": "http",
85 | 			"ingress.annotations.kubernetes\\.io/ingress\\.class":                  "nginx",
86 | 			"ingress.annotations.nginx\\.ingress\\.kubernetes\\.io/rewrite-target": "/",
87 | 			"ingress.additionalPaths[0].path":        "/app",
88 | 			"ingress.additionalPaths[0].pathType":    "Prefix",
89 | 			"ingress.additionalPaths[0].serviceName": "black-hole",
90 | 
"ingress.additionalPaths[0].servicePort": "80", 91 | "ingress.additionalPaths[1].path": "/black-hole", 92 | "ingress.additionalPaths[1].pathType": "Prefix", 93 | "ingress.additionalPaths[1].serviceName": "black-hole", 94 | "ingress.additionalPaths[1].servicePort": "80", 95 | }, 96 | } 97 | 98 | defer test_structure.RunTestStage(t, "delete", func() { 99 | helm.Delete(t, options, releaseName, true) 100 | }) 101 | 102 | test_structure.RunTestStage(t, "install", func() { 103 | helm.Install(t, options, helmChartPath, releaseName) 104 | }) 105 | 106 | test_structure.RunTestStage(t, "validate_initial_deployment", func() { 107 | verifyPodsCreatedSuccessfully(t, kubectlOptions, "nginx", releaseName, NumPodsExpected) 108 | verifyAllPodsAvailable(t, kubectlOptions, "nginx", releaseName, nginxValidationFunction) 109 | verifyServiceAvailable(t, kubectlOptions, "nginx", releaseName, nginxValidationFunction) 110 | 111 | // We expect this to succeed, because the black hole service that overlaps with the nginx service is added as lower 112 | // priority. 113 | verifyIngressAvailable(t, kubectlOptions, releaseName, "/app", nginxValidationFunction) 114 | 115 | // On the other hand, we expect this to fail because the black hole service does not exist 116 | verifyIngressAvailable(t, kubectlOptions, releaseName, "/black-hole", serviceUnavailableValidationFunction) 117 | }) 118 | 119 | test_structure.RunTestStage(t, "upgrade", func() { 120 | // Now redeploy with higher priority path and make sure it fails 121 | options.SetValues["ingress.additionalPathsHigherPriority[0].path"] = "/app" 122 | options.SetValues["ingress.additionalPathsHigherPriority[0].pathType"] = "Prefix" 123 | options.SetValues["ingress.additionalPathsHigherPriority[0].serviceName"] = "black-hole" 124 | options.SetValues["ingress.additionalPathsHigherPriority[0].servicePort"] = "80" 125 | helm.Upgrade(t, options, helmChartPath, releaseName) 126 | }) 127 | 128 | test_structure.RunTestStage(t, "validate_upgrade", func() { 129 | // We expect the service to still come up cleanly 130 | verifyPodsCreatedSuccessfully(t, kubectlOptions, "nginx", releaseName, NumPodsExpected) 131 | verifyAllPodsAvailable(t, kubectlOptions, "nginx", releaseName, nginxValidationFunction) 132 | verifyServiceAvailable(t, kubectlOptions, "nginx", releaseName, nginxValidationFunction) 133 | 134 | // ... but now the nginx service via ingress should be unavailable because of the higher priority black hole path 135 | verifyIngressAvailable(t, kubectlOptions, releaseName, "/app", serviceUnavailableValidationFunction) 136 | }) 137 | } 138 | 139 | // nginxValidationFunction checks that we get a 200 response with the nginx welcome page. 140 | func nginxValidationFunction(statusCode int, body string) bool { 141 | return statusCode == 200 && strings.Contains(body, "Welcome to nginx") 142 | } 143 | 144 | // serviceUnavailableValidationFunction checks that we get a 503 response and the maintenance page 145 | func serviceUnavailableValidationFunction(statusCode int, body string) bool { 146 | return statusCode == 503 && strings.Contains(body, "Service Temporarily Unavailable") 147 | } 148 | 149 | func verifyIngressAvailable( 150 | t *testing.T, 151 | kubectlOptions *k8s.KubectlOptions, 152 | ingressName string, 153 | path string, 154 | validationFunction func(int, string) bool, 155 | ) { 156 | version, err := k8s.GetKubernetesClusterVersionWithOptionsE(t, kubectlOptions) 157 | require.NoError(t, err) 158 | 159 | // If the actual cluster version is >= v1.19.0, use networkingv1 functions. 
Otherwise, use networkingv1beta1 160 | // functions. 161 | var ingressEndpoint string 162 | if semver.Compare(version, "v1.19.0") >= 0 { 163 | // Get the ingress and wait until it is available 164 | k8s.WaitUntilIngressAvailable( 165 | t, 166 | kubectlOptions, 167 | ingressName, 168 | WaitTimerRetries, 169 | WaitTimerSleep, 170 | ) 171 | 172 | // Now hit the service endpoint to verify it is accessible 173 | ingress := k8s.GetIngress(t, kubectlOptions, ingressName) 174 | if ingress.Status.LoadBalancer.Ingress[0].IP == "" { 175 | ingressEndpoint = ingress.Status.LoadBalancer.Ingress[0].Hostname 176 | } else { 177 | ingressEndpoint = ingress.Status.LoadBalancer.Ingress[0].IP 178 | } 179 | } else { 180 | // Get the ingress and wait until it is available 181 | k8s.WaitUntilIngressAvailableV1Beta1( 182 | t, 183 | kubectlOptions, 184 | ingressName, 185 | WaitTimerRetries, 186 | WaitTimerSleep, 187 | ) 188 | 189 | // Now hit the service endpoint to verify it is accessible 190 | ingress := k8s.GetIngressV1Beta1(t, kubectlOptions, ingressName) 191 | if ingress.Status.LoadBalancer.Ingress[0].IP == "" { 192 | ingressEndpoint = ingress.Status.LoadBalancer.Ingress[0].Hostname 193 | } else { 194 | ingressEndpoint = ingress.Status.LoadBalancer.Ingress[0].IP 195 | } 196 | } 197 | 198 | http_helper.HttpGetWithRetryWithCustomValidation( 199 | t, 200 | fmt.Sprintf("http://%s%s", ingressEndpoint, path), 201 | nil, 202 | WaitTimerRetries, 203 | WaitTimerSleep, 204 | validationFunction, 205 | ) 206 | } 207 | -------------------------------------------------------------------------------- /test/k8s_service_config_injection_example_test.go: -------------------------------------------------------------------------------- 1 | // +build all integration 2 | 3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 4 | // run just the template tests. See the test README for more information. 5 | 6 | package test 7 | 8 | import ( 9 | "fmt" 10 | "path/filepath" 11 | "strings" 12 | "testing" 13 | 14 | "github.com/gruntwork-io/terratest/modules/helm" 15 | "github.com/gruntwork-io/terratest/modules/k8s" 16 | "github.com/gruntwork-io/terratest/modules/random" 17 | "github.com/stretchr/testify/require" 18 | ) 19 | 20 | // Test the base case of the k8s-service-config-injection example, where the server port is set using hard coded 21 | // environment variables. This test will check that: 22 | // 23 | // 1. The docker container can be built 24 | // 2. The base values.yaml file can be used to deploy the docker container 25 | // 3. The deployed container responds to web requests with the default server text. 
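// (The expected default text below presumably comes from the example's values.yaml;
// the ConfigMap and Secret variants later in this file override it via injected
// configuration.)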
26 | func TestK8SServiceConfigInjectionBaseExample(t *testing.T) { 27 | t.Parallel() 28 | 29 | // Setup paths for testing the example chart 30 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 31 | require.NoError(t, err) 32 | examplePath, err := filepath.Abs(filepath.Join("..", "examples", "k8s-service-config-injection")) 33 | require.NoError(t, err) 34 | 35 | // Create a test namespace to deploy resources into, to avoid colliding with other tests 36 | kubectlOptions := k8s.NewKubectlOptions("", "", "") 37 | uniqueID := random.UniqueId() 38 | testNamespace := fmt.Sprintf("k8s-service-config-injection-%s", strings.ToLower(uniqueID)) 39 | k8s.CreateNamespace(t, kubectlOptions, testNamespace) 40 | defer k8s.DeleteNamespace(t, kubectlOptions, testNamespace) 41 | kubectlOptions.Namespace = testNamespace 42 | 43 | // Build the docker image 44 | createSampleAppDockerImage(t, uniqueID, examplePath) 45 | 46 | // Install the base chart 47 | // Set a random release name here so we can track it later 48 | releaseName := fmt.Sprintf("k8s-service-config-injection-%s", strings.ToLower(uniqueID)) 49 | // Use the values file in the example and deploy the chart in the test namespace 50 | options := &helm.Options{ 51 | KubectlOptions: kubectlOptions, 52 | ValuesFiles: []string{filepath.Join(examplePath, "values.yaml")}, 53 | // Override the image tag 54 | SetValues: map[string]string{"containerImage.tag": uniqueID}, 55 | } 56 | defer helm.Delete(t, options, releaseName, true) 57 | helm.Install(t, options, helmChartPath, releaseName) 58 | 59 | // Verify the app comes up cleanly and returns the expected text 60 | expectedText := "Hello from backend" 61 | validationFunction := sampleAppValidationFunctionGenerator(t, expectedText) 62 | verifyPodsCreatedSuccessfully(t, kubectlOptions, "sample-sinatra-app", releaseName, NumPodsExpected) 63 | verifyAllPodsAvailable(t, kubectlOptions, "sample-sinatra-app", releaseName, validationFunction) 64 | verifyServiceAvailable(t, kubectlOptions, "sample-sinatra-app", releaseName, validationFunction) 65 | 66 | } 67 | 68 | // Test the ConfigMap case of the k8s-service-config-injection example, where the server text is derived from a 69 | // ConfigMap that is injected as an environment variable. This test will check that: 70 | // 71 | // 1. The docker container can be built 72 | // 2. The provided kubernetes resource file can be used to create a ConfigMap containing a modified server text. 73 | // 3. The base values.yaml file can be combined with the extension for pulling in the server text from a config map. 74 | // 4. 
The deployed docker container responds with the server text derived from the config map 75 | func TestK8SServiceConfigInjectionConfigMapExample(t *testing.T) { 76 | t.Parallel() 77 | 78 | // Setup paths for testing the example chart 79 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 80 | require.NoError(t, err) 81 | examplePath, err := filepath.Abs(filepath.Join("..", "examples", "k8s-service-config-injection")) 82 | require.NoError(t, err) 83 | 84 | // Create a test namespace to deploy resources into, to avoid colliding with other tests 85 | kubectlOptions := k8s.NewKubectlOptions("", "", "") 86 | uniqueID := random.UniqueId() 87 | testNamespace := fmt.Sprintf("k8s-service-config-injection-%s", strings.ToLower(uniqueID)) 88 | k8s.CreateNamespace(t, kubectlOptions, testNamespace) 89 | defer k8s.DeleteNamespace(t, kubectlOptions, testNamespace) 90 | kubectlOptions.Namespace = testNamespace 91 | 92 | // Build the docker image 93 | createSampleAppDockerImage(t, uniqueID, examplePath) 94 | 95 | // Install the configmap 96 | kubeResourceConfigPath := filepath.Join(examplePath, "kubernetes", "config-map.yaml") 97 | defer k8s.KubectlDelete(t, kubectlOptions, kubeResourceConfigPath) 98 | k8s.KubectlApply(t, kubectlOptions, kubeResourceConfigPath) 99 | 100 | // Install the chart with the base values and config map values 101 | // Set a random release name here so we can track it later 102 | releaseName := fmt.Sprintf("k8s-service-config-injection-%s", strings.ToLower(uniqueID)) 103 | // Use the values file in the example, override it with the configmap extension and deploy the chart in the test 104 | // namespace 105 | options := &helm.Options{ 106 | KubectlOptions: kubectlOptions, 107 | ValuesFiles: []string{ 108 | // Base example values 109 | filepath.Join(examplePath, "values.yaml"), 110 | // Example config map extensions values 111 | filepath.Join(examplePath, "extensions", "config_map_values.yaml"), 112 | }, 113 | // Override the image tag 114 | SetValues: map[string]string{"containerImage.tag": uniqueID}, 115 | } 116 | defer helm.Delete(t, options, releaseName, true) 117 | helm.Install(t, options, helmChartPath, releaseName) 118 | 119 | // Verify the app comes up cleanly and returns the expected text 120 | expectedText := "Hello! I was configured using a ConfigMap!" 121 | validationFunction := sampleAppValidationFunctionGenerator(t, expectedText) 122 | verifyPodsCreatedSuccessfully(t, kubectlOptions, "sample-sinatra-app", releaseName, NumPodsExpected) 123 | verifyAllPodsAvailable(t, kubectlOptions, "sample-sinatra-app", releaseName, validationFunction) 124 | verifyServiceAvailable(t, kubectlOptions, "sample-sinatra-app", releaseName, validationFunction) 125 | } 126 | 127 | // Test the Secret case of the k8s-service-config-injection example, where the server text is derived from a 128 | // Secret that is injected as an environment variable. This test will check that: 129 | // 130 | // 1. The docker container can be built 131 | // 2. Create a Secret used to inject the server text. 132 | // 3. The base values.yaml file can be combined with the extension for pulling in the server text from a secret. 133 | // 4. 
The deployed docker container responds with the server text derived from the secret 134 | func TestK8SServiceConfigInjectionSecretExample(t *testing.T) { 135 | t.Parallel() 136 | 137 | // Setup paths for testing the example chart 138 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 139 | require.NoError(t, err) 140 | examplePath, err := filepath.Abs(filepath.Join("..", "examples", "k8s-service-config-injection")) 141 | require.NoError(t, err) 142 | 143 | // Create a test namespace to deploy resources into, to avoid colliding with other tests 144 | kubectlOptions := k8s.NewKubectlOptions("", "", "") 145 | uniqueID := random.UniqueId() 146 | testNamespace := fmt.Sprintf("k8s-service-config-injection-%s", strings.ToLower(uniqueID)) 147 | k8s.CreateNamespace(t, kubectlOptions, testNamespace) 148 | defer k8s.DeleteNamespace(t, kubectlOptions, testNamespace) 149 | kubectlOptions.Namespace = testNamespace 150 | 151 | // Build the docker image 152 | createSampleAppDockerImage(t, uniqueID, examplePath) 153 | 154 | // Create a secret using kubectl 155 | // Make sure to delete the secret in the undeploy process 156 | defer k8s.RunKubectl(t, kubectlOptions, "delete", "secret", "sample-sinatra-app-server-text") 157 | // Create Secret from a string literal 158 | // kubectl create secret generic sample-sinatra-app-server-text --from-literal server_text='Hello! I was configured using a Secret!' 159 | k8s.RunKubectl( 160 | t, 161 | kubectlOptions, 162 | "create", 163 | "secret", 164 | "generic", 165 | "sample-sinatra-app-server-text", 166 | "--from-literal", 167 | "server_text=Hello! I was configured using a Secret!", 168 | ) 169 | 170 | // Install the chart with the base values and config map values 171 | // Set a random release name here so we can track it later 172 | releaseName := fmt.Sprintf("k8s-service-config-injection-%s", strings.ToLower(uniqueID)) 173 | // Use the values file in the example, override it with the secret extension and deploy the chart in the test 174 | // namespace 175 | options := &helm.Options{ 176 | KubectlOptions: kubectlOptions, 177 | ValuesFiles: []string{ 178 | // Base example values 179 | filepath.Join(examplePath, "values.yaml"), 180 | // Example config map extensions values 181 | filepath.Join(examplePath, "extensions", "secret_values.yaml"), 182 | }, 183 | // Override the image tag 184 | SetValues: map[string]string{"containerImage.tag": uniqueID}, 185 | } 186 | defer helm.Delete(t, options, releaseName, true) 187 | helm.Install(t, options, helmChartPath, releaseName) 188 | 189 | // Verify the app comes up cleanly and returns the expected text 190 | expectedText := "Hello! I was configured using a Secret!" 191 | validationFunction := sampleAppValidationFunctionGenerator(t, expectedText) 192 | verifyPodsCreatedSuccessfully(t, kubectlOptions, "sample-sinatra-app", releaseName, NumPodsExpected) 193 | verifyAllPodsAvailable(t, kubectlOptions, "sample-sinatra-app", releaseName, validationFunction) 194 | verifyServiceAvailable(t, kubectlOptions, "sample-sinatra-app", releaseName, validationFunction) 195 | } 196 | -------------------------------------------------------------------------------- /test/k8s_service_example_test_helpers.go: -------------------------------------------------------------------------------- 1 | // +build all integration 2 | 3 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 4 | // run just the template tests. 
See the test README for more information. 5 | 6 | package test 7 | 8 | import ( 9 | "fmt" 10 | "io/ioutil" 11 | "net/http" 12 | "path/filepath" 13 | "strings" 14 | "testing" 15 | "time" 16 | 17 | "github.com/stretchr/testify/assert" 18 | 19 | "github.com/ghodss/yaml" 20 | 21 | http_helper "github.com/gruntwork-io/terratest/modules/http-helper" 22 | "github.com/gruntwork-io/terratest/modules/k8s" 23 | "github.com/gruntwork-io/terratest/modules/logger" 24 | "github.com/gruntwork-io/terratest/modules/retry" 25 | "github.com/stretchr/testify/require" 26 | corev1 "k8s.io/api/core/v1" 27 | metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" 28 | ) 29 | 30 | const ( 31 | WaitTimerRetries = 60 32 | WaitTimerSleep = 5 * time.Second 33 | NumPodsExpected = 3 34 | ) 35 | 36 | // verifyPodsCreatedSuccessfully waits until the pods for the given helm release are created. 37 | func verifyPodsCreatedSuccessfully( 38 | t *testing.T, 39 | kubectlOptions *k8s.KubectlOptions, 40 | appName string, 41 | releaseName string, 42 | numPods int, 43 | ) { 44 | // Get the pods and wait until they are all ready 45 | filters := metav1.ListOptions{ 46 | LabelSelector: fmt.Sprintf("app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s", appName, releaseName), 47 | } 48 | 49 | k8s.WaitUntilNumPodsCreated(t, kubectlOptions, filters, numPods, WaitTimerRetries, WaitTimerSleep) 50 | pods := k8s.ListPods(t, kubectlOptions, filters) 51 | 52 | for _, pod := range pods { 53 | k8s.WaitUntilPodAvailable(t, kubectlOptions, pod.Name, WaitTimerRetries, WaitTimerSleep) 54 | } 55 | } 56 | 57 | // verifyCanaryAndMainPodsCreatedSuccessfully uses gruntwork.io/deployment-type labels to ensure availability of both main and canary pods of a given release 58 | func verifyCanaryAndMainPodsCreatedSuccessfully( 59 | t *testing.T, 60 | kubectlOptions *k8s.KubectlOptions, 61 | appName string, 62 | releaseName string, 63 | ) { 64 | 65 | filters := metav1.ListOptions{ 66 | LabelSelector: fmt.Sprintf("app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s,gruntwork.io/deployment-type=canary", appName, releaseName), 67 | } 68 | 69 | k8s.WaitUntilNumPodsCreated(t, kubectlOptions, filters, NumPodsExpected, WaitTimerRetries, WaitTimerSleep) 70 | pods := k8s.ListPods(t, kubectlOptions, filters) 71 | 72 | for _, pod := range pods { 73 | k8s.WaitUntilPodAvailable(t, kubectlOptions, pod.Name, WaitTimerRetries, WaitTimerSleep) 74 | } 75 | 76 | mainDeploymentFilters := metav1.ListOptions{ 77 | LabelSelector: fmt.Sprintf("app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s,gruntwork.io/deployment-type=main", appName, releaseName), 78 | } 79 | 80 | k8s.WaitUntilNumPodsCreated(t, kubectlOptions, mainDeploymentFilters, NumPodsExpected, WaitTimerRetries, WaitTimerSleep) 81 | mainPods := k8s.ListPods(t, kubectlOptions, mainDeploymentFilters) 82 | 83 | for _, mainPod := range mainPods { 84 | k8s.WaitUntilPodAvailable(t, kubectlOptions, mainPod.Name, WaitTimerRetries, WaitTimerSleep) 85 | } 86 | 87 | } 88 | 89 | // verifyDifferentContainerTagsForCanaryPods ensures that the pods that comprise the main deployment 90 | // and the pods that comprise the canary deployment are running different image tags 91 | func verifyDifferentContainerTagsForCanaryPods( 92 | t *testing.T, 93 | kubectlOptions *k8s.KubectlOptions, 94 | releaseName string, 95 | ) { 96 | // Ensure that the canary deployment is running a separate tag from the main deployment, as configured 97 | canaryFilters := metav1.ListOptions{ 98 | LabelSelector: 
fmt.Sprintf("app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s,gruntwork.io/deployment-type=canary", "canary-test", releaseName), 99 | } 100 | 101 | canaryPods := k8s.ListPods(t, kubectlOptions, canaryFilters) 102 | canaryTag := canaryPods[0].Spec.Containers[0].Image 103 | 104 | mainFilters := metav1.ListOptions{ 105 | LabelSelector: fmt.Sprintf("app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s,gruntwork.io/deployment-type=main", "canary-test", releaseName), 106 | } 107 | 108 | mainPods := k8s.ListPods(t, kubectlOptions, mainFilters) 109 | mainTag := mainPods[0].Spec.Containers[0].Image 110 | 111 | assert.NotEqual(t, canaryTag, mainTag) 112 | } 113 | 114 | // verifyAllPodsAvailable waits until all the pods from the release are up and ready to serve traffic. The 115 | // validationFunction is used to verify a successful response from the Pod. 116 | func verifyAllPodsAvailable( 117 | t *testing.T, 118 | kubectlOptions *k8s.KubectlOptions, 119 | appName string, 120 | releaseName string, 121 | validationFunction func(int, string) bool, 122 | ) { 123 | filters := metav1.ListOptions{ 124 | LabelSelector: fmt.Sprintf("app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s", appName, releaseName), 125 | } 126 | pods := k8s.ListPods(t, kubectlOptions, filters) 127 | for _, pod := range pods { 128 | verifySinglePodAvailable(t, kubectlOptions, pod, validationFunction) 129 | } 130 | } 131 | 132 | // verifySinglePodAvailable waits until the given pod is ready to serve traffic. Does so by pinging port 80 on the Pod 133 | // container. The validationFunction is used to verify a successful response from the Pod. 134 | func verifySinglePodAvailable( 135 | t *testing.T, 136 | kubectlOptions *k8s.KubectlOptions, 137 | pod corev1.Pod, 138 | validationFunction func(int, string) bool, 139 | ) { 140 | // Open a tunnel from any available port locally 141 | localPort := k8s.GetAvailablePort(t) 142 | tunnel := k8s.NewTunnel(kubectlOptions, k8s.ResourceTypePod, pod.Name, localPort, 80) 143 | defer tunnel.Close() 144 | tunnel.ForwardPort(t) 145 | 146 | // Try to access the service on the local port, retrying until we get a good response for up to 5 minutes 147 | http_helper.HttpGetWithRetryWithCustomValidation( 148 | t, 149 | fmt.Sprintf("http://%s", tunnel.Endpoint()), 150 | nil, 151 | WaitTimerRetries, 152 | WaitTimerSleep, 153 | validationFunction, 154 | ) 155 | } 156 | 157 | // verifyServiceAvailable waits until the service associated with the helm release is available and ready to serve 158 | // traffic. The validationFunction is used to verify a successful response from the Pod. 
159 | func verifyServiceAvailable( 160 | 	t *testing.T, 161 | 	kubectlOptions *k8s.KubectlOptions, 162 | 	appName string, 163 | 	releaseName string, 164 | 	validationFunction func(int, string) bool, 165 | ) { 166 | 	// Get the service and wait until it is available 167 | 	filters := metav1.ListOptions{ 168 | 		LabelSelector: fmt.Sprintf("app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s", appName, releaseName), 169 | 	} 170 | 	services := k8s.ListServices(t, kubectlOptions, filters) 171 | 	require.Equal(t, len(services), 1) 172 | 	service := services[0] 173 | 	k8s.WaitUntilServiceAvailable(t, kubectlOptions, service.Name, WaitTimerRetries, WaitTimerSleep) 174 | 175 | 	// Now hit the service endpoint to verify it is accessible 176 | 	// Refresh service object in memory 177 | 	service = *k8s.GetService(t, kubectlOptions, service.Name) 178 | 	serviceEndpoint := k8s.GetServiceEndpoint(t, kubectlOptions, &service, 80) 179 | 	http_helper.HttpGetWithRetryWithCustomValidation( 180 | 		t, 181 | 		fmt.Sprintf("http://%s", serviceEndpoint), 182 | 		nil, 183 | 		WaitTimerRetries, 184 | 		WaitTimerSleep, 185 | 		validationFunction, 186 | 	) 187 | } 188 | 189 | // verifyServiceRoutesToMainAndCanaryPods ensures that the service is routing to both the main and the canary pods 190 | // It does this by repeatedly issuing requests to the service and inspecting the nginx Server header 191 | // Once both nginx tags have been seen in this header - we can be confident that we've reached both types of pod via the service 192 | func verifyServiceRoutesToMainAndCanaryPods( 193 | 	t *testing.T, 194 | 	kubectlOptions *k8s.KubectlOptions, 195 | 	appName string, 196 | 	releaseName string, 197 | ) { 198 | 199 | 	// Get the service and wait until it is available 200 | 	filters := metav1.ListOptions{ 201 | 		LabelSelector: fmt.Sprintf("app.kubernetes.io/name=%s,app.kubernetes.io/instance=%s", appName, releaseName), 202 | 	} 203 | 	services := k8s.ListServices(t, kubectlOptions, filters) 204 | 	require.Equal(t, len(services), 1) 205 | 	service := services[0] 206 | 	k8s.WaitUntilServiceAvailable(t, kubectlOptions, service.Name, WaitTimerRetries, WaitTimerSleep) 207 | 208 | 	var availableService = *k8s.GetService(t, kubectlOptions, service.Name) 209 | 	serviceEndpoint := k8s.GetServiceEndpoint(t, kubectlOptions, &availableService, 80) 210 | 211 | 	// Ensure that the service routes to both the main and canary deployment pods 212 | 	// Read the latest values dynamically in case the fixtures file changes 213 | 	valuesFile, err := ioutil.ReadFile(filepath.Join("fixtures", "canary_and_main_deployment_values.yaml")) 214 | 	assert.NoError(t, err) 215 | 216 | 	rendered := map[string]interface{}{} 217 | 	err = yaml.Unmarshal([]byte(valuesFile), &rendered) 218 | 	require.NoError(t, err) 219 | 	mainImageTag := rendered["containerImage"].(map[string]interface{})["tag"].(string) 220 | 221 | 	canaryImageTag := rendered["canary"].(map[string]interface{})["containerImage"].(map[string]interface{})["tag"].(string) 222 | 223 | 	// We haven't seen either tag come back in the nginx Server header yet 224 | 	seen := make(map[string]bool) 225 | 	seen[mainImageTag] = false 226 | 	seen[canaryImageTag] = false 227 | 228 | 	// This will take 30 seconds to timeout (30 max tries with a 1 second sleep between each try) 229 | 	maxRetries := 30 230 | 	sleepDuration := 1 * time.Second 231 | 232 | 	retry.DoWithRetry(t, "Read Server header returned by nginx", maxRetries, sleepDuration, func() (string, error) { 233 | 		resp, err := http.Get(fmt.Sprintf("http://%s", serviceEndpoint)) 234 | 		require.NoError(t, err) 235 | 236 | 		serverNginxHeader 
:= resp.Header.Get("Server") 237 | // Nginx returns a server header in the format "nginx/1.16.0" 238 | serverTag := strings.ReplaceAll(serverNginxHeader, "nginx/", "") 239 | 240 | // When we see a header value, update it as seen 241 | seen[serverTag] = true 242 | 243 | if seen[mainImageTag] && seen[canaryImageTag] { 244 | logger.Logf(t, "Successfully saw both main and canary nginx tags via service: %v", seen) 245 | return "", nil 246 | } 247 | 248 | return "", fmt.Errorf("Still waiting to see both nginx tags returned: %v", seen) 249 | }) 250 | } 251 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright 2019 Gruntwork, Inc 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /test/k8s_service_horizontal_pod_autoscaler_template_test.go: -------------------------------------------------------------------------------- 1 | //go:build all || tpl 2 | // +build all tpl 3 | 4 | // NOTE: We use build flags to differentiate between template tests and integration tests so that you can conveniently 5 | // run just the template tests. See the test README for more information. 
6 | 7 | package test 8 | 9 | import ( 10 | "path/filepath" 11 | "strconv" 12 | "testing" 13 | 14 | "github.com/ghodss/yaml" 15 | "github.com/gruntwork-io/terratest/modules/helm" 16 | "github.com/stretchr/testify/assert" 17 | "github.com/stretchr/testify/require" 18 | ) 19 | 20 | // Test that setting horizontalPodAutoscaler.enabled = true will cause the helm template to render the Horizontal Pod 21 | // Autoscaler resource with both metrics 22 | func TestK8SServiceHorizontalPodAutoscalerCreateTrueCreatesHorizontalPodAutoscalerWithAllMetrics(t *testing.T) { 23 | t.Parallel() 24 | minReplicas := "20" 25 | maxReplicas := "30" 26 | avgCpuUtil := "55" 27 | avgMemoryUtil := "65" 28 | 29 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 30 | require.NoError(t, err) 31 | 32 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 33 | // We then use SetValues to override all the defaults. 34 | options := &helm.Options{ 35 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 36 | SetValues: map[string]string{ 37 | "horizontalPodAutoscaler.enabled": "true", 38 | "horizontalPodAutoscaler.minReplicas": minReplicas, 39 | "horizontalPodAutoscaler.maxReplicas": maxReplicas, 40 | "horizontalPodAutoscaler.avgCpuUtilization": avgCpuUtil, 41 | "horizontalPodAutoscaler.avgMemoryUtilization": avgMemoryUtil, 42 | }, 43 | } 44 | out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/horizontalpodautoscaler.yaml"}) 45 | 46 | // We take the output and render it to a map to validate it has created a Horizontal Pod Autoscaler output or not 47 | rendered := map[string]interface{}{} 48 | err = yaml.Unmarshal([]byte(out), &rendered) 49 | assert.NoError(t, err) 50 | assert.NotEqual(t, 0, len(rendered)) 51 | min, err := strconv.ParseFloat(minReplicas, 64) 52 | max, err := strconv.ParseFloat(maxReplicas, 64) 53 | avgCpu, err := strconv.ParseFloat(avgCpuUtil, 64) 54 | avgMem, err := strconv.ParseFloat(avgMemoryUtil, 64) 55 | assert.Equal(t, min, rendered["spec"].(map[string]interface{})["minReplicas"]) 56 | assert.Equal(t, max, rendered["spec"].(map[string]interface{})["maxReplicas"]) 57 | assert.Equal(t, avgCpu, rendered["spec"].(map[string]interface{})["metrics"].([]interface{})[0].(map[string]interface{})["resource"].(map[string]interface{})["target"].(map[string]interface{})["averageUtilization"]) 58 | assert.Equal(t, avgMem, rendered["spec"].(map[string]interface{})["metrics"].([]interface{})[1].(map[string]interface{})["resource"].(map[string]interface{})["target"].(map[string]interface{})["averageUtilization"]) 59 | } 60 | 61 | // Test that setting horizontalPodAutoscaler.enabled = false will cause the helm template to not render the Horizontal 62 | // Pod Autoscaler resource 63 | func TestK8SServiceHorizontalPodAutoscalerCreateFalse(t *testing.T) { 64 | t.Parallel() 65 | 66 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 67 | require.NoError(t, err) 68 | 69 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 70 | // We then use SetValues to override all the defaults. 
71 | options := &helm.Options{ 72 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 73 | SetValues: map[string]string{ 74 | "horizontalPodAutoscaler.enabled": "false", 75 | }, 76 | } 77 | _, err = helm.RenderTemplateE(t, options, helmChartPath, "hpa", []string{"templates/horizontalpodautoscaler.yaml"}) 78 | require.Error(t, err) 79 | } 80 | 81 | // Test that setting horizontalPodAutoscaler.enabled = true will cause the helm template to render the Horizontal Pod 82 | // Autoscaler resource with the cpu metric 83 | func TestK8SServiceHorizontalPodAutoscalerCreateTrueCreatesHorizontalPodAutoscalerWithCpuMetric(t *testing.T) { 84 | t.Parallel() 85 | minReplicas := "20" 86 | maxReplicas := "30" 87 | avgCpuUtil := "55" 88 | 89 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 90 | require.NoError(t, err) 91 | 92 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 93 | // We then use SetValues to override all the defaults. 94 | options := &helm.Options{ 95 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 96 | SetValues: map[string]string{ 97 | "horizontalPodAutoscaler.enabled": "true", 98 | "horizontalPodAutoscaler.minReplicas": minReplicas, 99 | "horizontalPodAutoscaler.maxReplicas": maxReplicas, 100 | "horizontalPodAutoscaler.avgCpuUtilization": avgCpuUtil, 101 | }, 102 | } 103 | out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/horizontalpodautoscaler.yaml"}) 104 | 105 | // We take the output and render it to a map to validate it has created a Horizontal Pod Autoscaler output or not 106 | rendered := map[string]interface{}{} 107 | err = yaml.Unmarshal([]byte(out), &rendered) 108 | assert.NoError(t, err) 109 | assert.NotEqual(t, 0, len(rendered)) 110 | min, err := strconv.ParseFloat(minReplicas, 64) 111 | max, err := strconv.ParseFloat(maxReplicas, 64) 112 | avgCpu, err := strconv.ParseFloat(avgCpuUtil, 64) 113 | assert.Equal(t, min, rendered["spec"].(map[string]interface{})["minReplicas"]) 114 | assert.Equal(t, max, rendered["spec"].(map[string]interface{})["maxReplicas"]) 115 | assert.Equal(t, avgCpu, rendered["spec"].(map[string]interface{})["metrics"].([]interface{})[0].(map[string]interface{})["resource"].(map[string]interface{})["target"].(map[string]interface{})["averageUtilization"]) 116 | } 117 | 118 | // Test that setting horizontalPodAutoscaler.enabled = true will cause the helm template to render the Horizontal Pod 119 | // Autoscaler resource with the memory metric 120 | func TestK8SServiceHorizontalPodAutoscalerCreateTrueCreatesHorizontalPodAutoscalerWithMemoryMetric(t *testing.T) { 121 | t.Parallel() 122 | minReplicas := "20" 123 | maxReplicas := "30" 124 | avgMemoryUtil := "65" 125 | 126 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 127 | require.NoError(t, err) 128 | 129 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 130 | // We then use SetValues to override all the defaults. 
131 | 	options := &helm.Options{ 132 | 		ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 133 | 		SetValues: map[string]string{ 134 | 			"horizontalPodAutoscaler.enabled":              "true", 135 | 			"horizontalPodAutoscaler.minReplicas":          minReplicas, 136 | 			"horizontalPodAutoscaler.maxReplicas":          maxReplicas, 137 | 			"horizontalPodAutoscaler.avgMemoryUtilization": avgMemoryUtil, 138 | 		}, 139 | 	} 140 | 	out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/horizontalpodautoscaler.yaml"}) 141 | 142 | 	// We take the output and render it to a map to validate it has created a Horizontal Pod Autoscaler output or not 143 | 	rendered := map[string]interface{}{} 144 | 	err = yaml.Unmarshal([]byte(out), &rendered) 145 | 	assert.NoError(t, err) 146 | 	assert.NotEqual(t, 0, len(rendered)) 147 | 	min, err := strconv.ParseFloat(minReplicas, 64) 148 | 	max, err := strconv.ParseFloat(maxReplicas, 64) 149 | 	avgMem, err := strconv.ParseFloat(avgMemoryUtil, 64) 150 | 	assert.Equal(t, min, rendered["spec"].(map[string]interface{})["minReplicas"]) 151 | 	assert.Equal(t, max, rendered["spec"].(map[string]interface{})["maxReplicas"]) 152 | 	assert.Equal(t, avgMem, rendered["spec"].(map[string]interface{})["metrics"].([]interface{})[0].(map[string]interface{})["resource"].(map[string]interface{})["target"].(map[string]interface{})["averageUtilization"]) 153 | } 154 | 155 | // Test that setting horizontalPodAutoscaler.enabled = true will cause the helm template to render the Horizontal Pod 156 | // Autoscaler resource with no metrics 157 | func TestK8SServiceHorizontalPodAutoscalerCreateTrueCreatesHorizontalPodAutoscalerWithNoMetrics(t *testing.T) { 158 | 	t.Parallel() 159 | 	minReplicas := "20" 160 | 	maxReplicas := "30" 161 | 162 | 	helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 163 | 	require.NoError(t, err) 164 | 165 | 	// We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 166 | 	// We then use SetValues to override all the defaults. 
167 | options := &helm.Options{ 168 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 169 | SetValues: map[string]string{ 170 | "horizontalPodAutoscaler.enabled": "true", 171 | "horizontalPodAutoscaler.minReplicas": minReplicas, 172 | "horizontalPodAutoscaler.maxReplicas": maxReplicas, 173 | }, 174 | } 175 | out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/horizontalpodautoscaler.yaml"}) 176 | 177 | // We take the output and render it to a map to validate it has created a Horizontal Pod Autoscaler output or not 178 | rendered := map[string]interface{}{} 179 | err = yaml.Unmarshal([]byte(out), &rendered) 180 | assert.NoError(t, err) 181 | assert.NotEqual(t, 0, len(rendered)) 182 | min, err := strconv.ParseFloat(minReplicas, 64) 183 | max, err := strconv.ParseFloat(maxReplicas, 64) 184 | assert.Equal(t, min, rendered["spec"].(map[string]interface{})["minReplicas"]) 185 | assert.Equal(t, max, rendered["spec"].(map[string]interface{})["maxReplicas"]) 186 | } 187 | 188 | // Test that the apiVersion of the Horizontal Pod Autoscaler is correct for Kubernetes < 1.23 189 | func TestK8SServiceHorizontalPodAutoscalerDisplaysBetaApiVersion(t *testing.T) { 190 | t.Parallel() 191 | expectedApiVersion := "autoscaling/v2beta2" 192 | 193 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 194 | require.NoError(t, err) 195 | 196 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 197 | // We then use SetValues to override all the defaults. 198 | options := &helm.Options{ 199 | ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 200 | SetValues: map[string]string{ 201 | "horizontalPodAutoscaler.enabled": "true", 202 | }, 203 | } 204 | out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/horizontalpodautoscaler.yaml"}, "--kube-version", "1.22") 205 | 206 | // We take the output and render it to a map to validate it has created a Horizontal Pod Autoscaler output or not 207 | rendered := map[string]interface{}{} 208 | err = yaml.Unmarshal([]byte(out), &rendered) 209 | assert.NoError(t, err) 210 | assert.NotEqual(t, 0, len(rendered)) 211 | assert.Equal(t, expectedApiVersion, rendered["apiVersion"]) 212 | } 213 | 214 | // Test that the apiVersion of the Horizontal Pod Autoscaler is correct for Kubernetes >= 1.23 215 | func TestK8SServiceHorizontalPodAutoscalerDisplaysStableApiVersion(t *testing.T) { 216 | t.Parallel() 217 | expectedApiVersion := "autoscaling/v2" 218 | 219 | helmChartPath, err := filepath.Abs(filepath.Join("..", "charts", "k8s-service")) 220 | require.NoError(t, err) 221 | 222 | // We make sure to pass in the linter_values.yaml values file, which we assume has all the required values defined. 223 | // We then use SetValues to override all the defaults. 
224 | 	options := &helm.Options{ 225 | 		ValuesFiles: []string{filepath.Join("..", "charts", "k8s-service", "linter_values.yaml")}, 226 | 		SetValues: map[string]string{ 227 | 			"horizontalPodAutoscaler.enabled": "true", 228 | 		}, 229 | 	} 230 | 	out := helm.RenderTemplate(t, options, helmChartPath, "hpa", []string{"templates/horizontalpodautoscaler.yaml"}, "--kube-version", "1.23", "--api-versions", "autoscaling/v2") 231 | 232 | 	// We take the output and render it to a map to validate it has created a Horizontal Pod Autoscaler output or not 233 | 	rendered := map[string]interface{}{} 234 | 	err = yaml.Unmarshal([]byte(out), &rendered) 235 | 	assert.NoError(t, err) 236 | 	assert.NotEqual(t, 0, len(rendered)) 237 | 	assert.Equal(t, expectedApiVersion, rendered["apiVersion"]) 238 | } 239 | -------------------------------------------------------------------------------- /examples/k8s-service-config-injection/README.md: -------------------------------------------------------------------------------- 1 | # Quickstart Guide: K8S Service Config Injection Example 2 | 3 | This quickstart guide uses the `k8s-service` Helm Chart to deploy a sample web app that is configured using environment 4 | variables. In this guide, we will walk through the different ways to set environment variables on the application 5 | container deployed using the `k8s-service` Helm Chart. 6 | 7 | This guide is meant to demonstrate how you might pass in external values such as dependent resource URLs and various 8 | secrets that your application needs. 9 | 10 | 11 | ## Prerequisites 12 | 13 | This guide assumes that you are familiar with the defaults provided in the `k8s-service` Helm Chart. Please refer to the 14 | [k8s-service-nginx](../k8s-service-nginx) example for an introduction to the core features of the Helm Chart. 15 | 16 | 17 | ## Overview 18 | 19 | In this guide, we will walk through the steps to: 20 | 21 | - Deploy a dockerized sample app on a Kubernetes cluster. We will use `minikube` for this guide. 22 | - Use the `envVars` input value to set the port that the container listens on. 23 | - Create a [`ConfigMap`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) that 24 |   provides some server text for the application to use. 25 | - Use the `configMaps` input value to set the server text returned by the application from the `ConfigMap`. 26 | - Create a [`Secret`](https://kubernetes.io/docs/concepts/configuration/secret/) that 27 |   provides some server text for the application to use. 28 | - Use the `secrets` input value to set the server text returned by the application from the `Secret`. 29 | 30 | At the end of this guide, you should be familiar with the three ways provided by `k8s-service` to configure your 31 | application. 32 | 33 | 34 | ## Outline 35 | 36 | 1. [Install and set up `minikube`](#setting-up-your-kubernetes-cluster-minikube) 37 | 1. [Install and set up `helm`](#setting-up-helm-on-minikube) 38 | 1. [Package the sample app docker container for `minikube`](#package-the-sample-app-docker-container-for-minikube) 39 | 1. [Deploy the sample app docker container with `k8s-service`](#deploy-the-sample-app-docker-container-with-k8s-service) 40 | 1. [Setting the server text using a ConfigMap](#setting-the-server-text-using-a-configmap) 41 | 1. [Setting the server text using a Secret](#setting-the-server-text-using-a-secret) 42 | 43 | **NOTE:** This guide assumes you are running the steps in this directory. 
If you are at the root of the repo, be sure to 44 | change directory before starting: 45 | 46 | ``` 47 | cd examples/k8s-service-config-injection 48 | ``` 49 | 50 | 51 | ## Setting up your Kubernetes cluster: Minikube 52 | 53 | In this guide, we will use `minikube` as our Kubernetes cluster. [Minikube](https://kubernetes.io/docs/setup/minikube/) 54 | is an official tool maintained by the Kubernetes community to be able to provision and run Kubernetes locally on your 55 | machine. By having a local environment you can have fast iteration cycles while you develop and play with Kubernetes 56 | before deploying to production. 57 | 58 | To set up `minikube`: 59 | 60 | 1. [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) 61 | 1. [Install the minikube utility](https://kubernetes.io/docs/tasks/tools/install-minikube/) 62 | 1. Run `minikube start` to provision a new `minikube` instance on your local machine. 63 | 1. Verify setup with `kubectl`: `kubectl cluster-info` 64 | 65 | 66 | ## Setting up Helm on Minikube 67 | 68 | In order to install Helm Charts, we need to have the Helm CLI. First install the [`helm` 69 | client](https://docs.helm.sh/using_helm/#installing-helm). Make sure the binary is discoverable in your `PATH` variable. 70 | See [this stackoverflow post](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux-unix) 71 | for instructions on setting up your `PATH` on Unix, and [this 72 | post](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) for instructions on 73 | Windows. 74 | 75 | Verify your installation by running `helm version`: 76 | 77 | ```bash 78 | $ helm version 79 | version.BuildInfo{Version:"v3.1+unreleased", GitCommit:"c12a9aee02ec07b78dce07274e4816d9863d765e", GitTreeState:"clean", GoVersion:"go1.13.9"} 80 | ``` 81 | 82 | 83 | ## Package the sample app docker container for Minikube 84 | 85 | For this guide, we will need a docker container that provides a web service and is configurable using environment 86 | variables. 87 | 88 | We provide a sample app built using [Sinatra](http://sinatrarb.com/) on Ruby that returns some server text set using the 89 | environment variable `SERVER_TEXT`. You can see the full code for the server in [docker/app.rb](./docker/app.rb). 90 | 91 | In order to be able to deploy this on Kubernetes, we will need to package the app into a Docker container. To do so, we 92 | need to first authenticate the docker client to be able to access the Docker Daemon running on `minikube`: 93 | 94 | ```bash 95 | eval $(minikube docker-env) 96 | ``` 97 | 98 | The above step extracts the host information of the Docker Daemon running on your `minikube` virtual machine, and 99 | configures the `docker` client using environment variables. 100 | 
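If you are curious what gets set, you can run the command without `eval` to print the variables. This is illustrative output only; the exact values depend on your `minikube` version and VM driver:

```bash
$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
# Run this command to configure your shell:
# eval $(minikube docker-env)
```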
101 | You can verify that you can reach the `minikube` Docker Daemon by running `docker ps`. You should see output similar to 102 | below, listing a bunch of docker containers related to Kubernetes: 103 | 104 | ``` 105 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 106 | 5f6131b6b3ca gcr.io/k8s-minikube/storage-provisioner "/storage-provisioner" About an hour ago Up About an hour k8s_storage-provisioner_storage-provisioner_kube-system_2c2465d6-41c3-11e9-af90-0800274e6ff3_0 107 | 481b954a22b6 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_storage-provisioner_kube-system_2c2465d6-41c3-11e9-af90-0800274e6ff3_0 108 | 8ec108f9948f f59dcacceff4 "/coredns -conf /etc…" About an hour ago Up About an hour k8s_coredns_coredns-86c58d9df4-rr262_kube-system_2a971ff2-41c3-11e9-af90-0800274e6ff3_0 109 | 84fd48fa4fa5 f59dcacceff4 "/coredns -conf /etc…" About an hour ago Up About an hour k8s_coredns_coredns-86c58d9df4-kqn7c_kube-system_2a806542-41c3-11e9-af90-0800274e6ff3_0 110 | b1a8616de7f6 98db19758ad4 "/usr/local/bin/kube…" About an hour ago Up About an hour k8s_kube-pro 111 | ..... SNIPPED FOR BREVITY ... 112 | ``` 113 | 114 | Once your `docker` client is able to talk to the `minikube` Docker Daemon, we can now build our sample app container so 115 | that it is available to `minikube` to use: 116 | 117 | ```bash 118 | docker build -t gruntwork-io/sample-sinatra-app ./docker 119 | ``` 120 | 121 | This will build a container that has the runtime environment for running Sinatra in the `minikube` virtual machine. 122 | Once the container is created, we tag it as `gruntwork-io/sample-sinatra-app` so that it is easy to reference later. 123 | Note that because this is built in the `minikube` virtual machine directly, the image will be cached within the VM. This 124 | is why `minikube` is able to use the built container when you reference it in `k8s-service`. 125 | 126 | 127 | ## Deploy the sample app Docker container with k8s-service 128 | 129 | Now that we have a working Kubernetes cluster with Helm installed and a sample Docker container to deploy, we are ready 130 | to deploy our application using the `k8s-service` chart. 131 | 132 | This folder contains predefined input values you can use with the `k8s-service` chart to deploy the sample app 133 | container. Like the [k8s-service-nginx](../k8s-service-nginx) example, these values define the container image to use as 134 | part of the deployment, and augment the default values of the chart by defining a `livenessProbe` and `readinessProbe` 135 | for the main container (which in this case will be `gruntwork-io/sample-sinatra-app:latest`, the one we built in the 136 | previous step). Take a look at the provided [`values.yaml`](./values.yaml) file to see how the values are defined. 137 | 138 | However, the values in this example also set an environment variable to configure the application. By default, the 139 | application listens for web requests on port 8080. However, most of the default values for the `k8s-service` Helm Chart 140 | assume the container listens for requests on port 80. While we can update the port that the chart uses, here we opt to 141 | update the application container instead to provide an example of how you can hard code environment variables to pass 142 | into the container in the `values.yaml` file. 
We use the `envVars` input map to set the `SERVER_PORT` to `80` in the 143 | container: 144 | 145 | ```yaml 146 | envVars: 147 |   SERVER_PORT: 80 148 | ``` 149 | 150 | Each key in the `envVars` input map represents an environment variable, with the keys and values directly mapping to the 151 | environment. 152 | 153 | We will now instruct helm to install the `k8s-service` chart using these values. To do so, we will use the `helm 154 | install` command. Note that Helm 3 requires a release name, so first set `RELEASE_NAME` to a name of your choosing: 155 | 156 | ``` 157 | helm install "$RELEASE_NAME" -f values.yaml ../../charts/k8s-service --wait 158 | ``` 159 | 160 | The above command will instruct the `helm` client to install the Helm Chart defined in the relative path 161 | `../../charts/k8s-service`, merging the input values defined in `values.yaml` with the ones provided by the chart. 162 | Additionally, we provide the `--wait` flag to ensure the command doesn't exit until the `Deployment` resource 163 | completes the rollout of the containers. 164 | 165 | At the end of this command, you should be able to access the sample web app via the `Service`. To hit the `Service`, get 166 | the selected node port and hit the `minikube` IP on that port: 167 | 168 | ```bash 169 | # NOTE: RELEASE_NAME is the release name you chose above 170 | export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services "$RELEASE_NAME-sample-sinatra-app") 171 | export NODE_IP=$(minikube ip) 172 | curl "http://$NODE_IP:$NODE_PORT" 173 | ``` 174 | 175 | The above `curl` call should return the default server text set on the application container in JSON format: 176 | 177 | ```json 178 | {"text":"Hello from backend"} 179 | ``` 180 | 181 | 182 | ## Setting the server text using a ConfigMap 183 | 184 | The previous step showed you how you can hard code environment variable settings into the `values.yaml` file. The 185 | disadvantage of hardcoding the environment values is that you will need separate `values.yaml` files for each deployment 186 | environment (e.g., dev vs production), and manage them independently. This can be cumbersome if you have a lot of common 187 | settings you want to share between the two environments. 188 | 189 | Kubernetes provides [`ConfigMaps`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) to 190 | help decouple application configuration from the deployment settings. `ConfigMaps` are objects that hold key-value pairs 191 | that can then be injected into the application container at deploy time as environment variables, or as files on the 192 | file system. 193 | 194 | The `k8s-service` Helm Chart supports both modes of operation. In this guide, we will show you how to set an environment 195 | variable from a `ConfigMap` key. To do so, we first need to create the `ConfigMap`. 196 | 197 | ### Creating the ConfigMap 198 | 199 | For this example we will update the server text of our application using a `ConfigMap`. We will use `kubectl` to create 200 | our `ConfigMap` on our Kubernetes cluster. 201 | 202 | Take a look at [the provided resource file in the `kubernetes` folder](./kubernetes/config-map.yaml) that defines a 203 | `ConfigMap`. This resource file will create a `ConfigMap` resource named `sample-sinatra-app-server-text` containing a 204 | single key `server_text` holding the value `Hello! I was configured using a ConfigMap!`. 
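As an aside, you could create an equivalent `ConfigMap` imperatively with `kubectl create configmap`, mirroring the way we create a `Secret` later in this guide. This alternative is shown only for comparison; the checked-in resource file is preferred because it lives in source control:

```bash
# Alternative: create the same ConfigMap without a resource file
kubectl create configmap sample-sinatra-app-server-text \
    --from-literal server_text='Hello! I was configured using a ConfigMap!'
```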
We can create this `ConfigMap` 205 | by using `kubectl apply` to apply the resource file: 206 | 207 | ``` 208 | kubectl apply -f ./kubernetes/config-map.yaml 209 | ``` 210 | 211 | To verify the `ConfigMap` was created, you can use the `kubectl get` command to get a list of available `ConfigMaps` on 212 | your cluster: 213 | 214 | ``` 215 | $ kubectl get configmap 216 | NAME DATA AGE 217 | sample-sinatra-app-server-text 1 57s 218 | ``` 219 | 220 | ### Injecting the ConfigMap into the application 221 | 222 | Now that we have created a `ConfigMap` containing the server text config, let's augment our Helm Chart input value to 223 | set the `SERVER_TEXT` environment variable from the `ConfigMap`. Take a look at the 224 | [extensions/config_map_values.yaml](./extensions/config_map_values.yaml) file. This values file defines an entry for the 225 | `configMaps` input map: 226 | 227 | ```yaml 228 | configMaps: 229 |   sample-sinatra-app-server-text: 230 |     as: environment 231 |     items: 232 |       server_text: 233 |         envVarName: SERVER_TEXT 234 | ``` 235 | 236 | Each key at the root of the `configMaps` map value specifies a `ConfigMap` by name. Then, the value is another map that 237 | specifies how that `ConfigMap` should be included in the application container. You can either include it as a file 238 | (`as: volume`) or as an environment variable (`as: environment`). Here we include it as an environment variable, setting the 239 | variable `SERVER_TEXT` to the value of the `server_text` key of the `ConfigMap`. You can refer to the documentation in 240 | the chart's [`values.yaml`](/charts/k8s-service/values.yaml) for details on how to set the input map. 241 | 242 | To deploy this, we will pass it in alongside the root `values.yaml` file to merge the two inputs together. We will 243 | use `helm upgrade` here instead of `helm install` so that we can update our previous deployment: 244 | 245 | ``` 246 | helm upgrade "$RELEASE_NAME" ../../charts/k8s-service -f values.yaml -f ./extensions/config_map_values.yaml --wait 247 | ``` 248 | 249 | When you pass in multiple `-f` options, `helm` will combine all the yaml files into one, preferring the right value over 250 | the left (e.g., if there is overlap, then `helm` will choose the value defined in `./extensions/config_map_values.yaml` 251 | over the one defined in `values.yaml`). 252 | 253 | When this deployment completes and you hit the server again, you should get the server text defined in the `ConfigMap`: 254 | 255 | ``` 256 | $ curl "http://$NODE_IP:$NODE_PORT" 257 | {"text":"Hello! I was configured using a ConfigMap!"} 258 | ``` 259 | 260 | 261 | ## Setting the server text using a Secret 262 | 263 | `ConfigMaps` and hard coded environment variables are great for application configuration values, but are not very 264 | secure. Hard coded environment variables leak into your code, and thus risk being checked in to source control, while 265 | `ConfigMaps` are not stored encrypted on the Kubernetes server and report their values in plain text in the shell. 266 | 267 | Kubernetes provides a built-in secrets manager in the form of the 268 | [`Secret`](https://kubernetes.io/docs/concepts/configuration/secret/) resource. Unlike `ConfigMaps`, `Secrets`: 269 | 270 | - Can be stored in encrypted form in `etcd`. 271 | - Are only sent to a node if a pod on that node requires them, and are only available in memory (using `tmpfs`). 272 | - Obfuscate the text using `base64` to avoid "shoulder surfing" leakage (see the example below). 
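Once you have created the `Secret` in the next section, you can see this obfuscation for yourself: the stored value is merely base64-encoded, not encrypted, so anyone with read access to the `Secret` can recover the plain text. A quick illustration (on some BSD/macOS systems the decode flag is spelled `-D`):

```bash
# Print the obfuscated (base64-encoded) value stored in the Secret
kubectl get secret sample-sinatra-app-server-text -o jsonpath='{.data.server_text}'

# Decode it to recover the plain text
kubectl get secret sample-sinatra-app-server-text -o jsonpath='{.data.server_text}' | base64 --decode
```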
273 | 274 | **NOTE: Be aware of [the risks with using `Secrets` as your secrets 275 | manager](https://kubernetes.io/docs/concepts/configuration/secret/#risks).** 276 | 277 | Like `ConfigMaps`, `Secrets` can be injected into the application container as environment variables or as files, and 278 | the `k8s-service` Helm Chart supports both modes of operation. 279 | 280 | In this guide, we will use a `Secret` to replace the server text config set using a `ConfigMap` in the previous step. 281 | 282 | ### Creating the Secret 283 | 284 | Since `Secrets` contain sensitive information, it is typically recommended to create `Secrets` manually using the command line. 285 | 286 | To create a `Secret`, we can use `kubectl create secret`. Here, we will create a new secret containing the key 287 | `server_text` set to `Hello! I was configured using a Secret!`: 288 | 289 | ``` 290 | kubectl create secret generic sample-sinatra-app-server-text --from-literal server_text='Hello! I was configured using a Secret!' 291 | ``` 292 | 293 | To verify the `Secret` was created, you can use the `kubectl get` command to get a list of available `Secrets`: 294 | 295 | ``` 296 | $ kubectl get secrets 297 | NAME TYPE DATA AGE 298 | default-token-wmb57 kubernetes.io/service-account-token 3 27m 299 | sample-sinatra-app-server-text Opaque 1 1m 300 | ``` 301 | 302 | ### Injecting the Secret into the application 303 | 304 | Now that we have created a `Secret` containing the server text config, let's try to inject it into the application 305 | container. The settings to inject `Secrets` are specified in a very similar manner to `ConfigMaps`. Take a look at the 306 | [extensions/secret_values.yaml](./extensions/secret_values.yaml) file. This file defines a single input value `secrets`, 307 | which sets the `SERVER_TEXT` environment variable to the `server_text` key on the `sample-sinatra-app-server-text` 308 | `Secret`: 309 | 310 | ```yaml 311 | secrets: 312 |   sample-sinatra-app-server-text: 313 |     as: environment 314 |     items: 315 |       server_text: 316 |         envVarName: SERVER_TEXT 317 | ``` 318 | 319 | Compare this configuration with [extensions/config_map_values.yaml](./extensions/config_map_values.yaml). Note how the 320 | only thing that differs is the input key: `secrets` as opposed to `configMaps`. This is because both `ConfigMaps` and 321 | `Secrets` behave in a very similar manner in Kubernetes, and so the `k8s-service` Helm Chart intentionally exposes a 322 | similar interface to configure the two. 323 | 324 | Deploying this config is very similar to how we deployed the `config_map_values.yaml` extension. We need to combine this 325 | with the root `values.yaml` file to get a complete input and update our existing release: 326 | 327 | ``` 328 | helm upgrade "$RELEASE_NAME" ../../charts/k8s-service -f values.yaml -f ./extensions/secret_values.yaml --wait 329 | ``` 330 | 331 | When this deployment completes and you hit the server again, you should get the server text defined in the `Secret`: 332 | 333 | ``` 334 | $ curl "http://$NODE_IP:$NODE_PORT" 335 | {"text":"Hello! I was configured using a Secret!"} 336 | ``` 337 | 338 | 339 | ## Summary 340 | 341 | Congratulations! At this point, you have: 342 | 343 | - Set up `minikube` to have a local Kubernetes dev environment. 344 | - Installed and set up Helm for `minikube`. 345 | - Packaged a sample application using Docker. 346 | - Deployed the dockerized application onto `minikube` using the `k8s-service` Helm Chart. 
347 | - Configured the application using hard coded environment variables in the input values. 348 | - Configured the application using `ConfigMaps`. 349 | - Configured the application using `Secrets`. 350 | 351 | To learn more about the `k8s-service` Helm Chart, refer to [the chart documentation](/charts/k8s-service). 352 | --------------------------------------------------------------------------------