├── test ├── boilerplate │ ├── boilerplate.py.preamble │ ├── boilerplate.sh.preamble │ ├── boilerplate.xml.preamble │ ├── boilerplate.go.preamble │ ├── boilerplate.html.preamble │ ├── boilerplate.go.txt │ ├── boilerplate.tf.txt │ ├── boilerplate.bzl.txt │ ├── boilerplate.py.txt │ ├── boilerplate.sh.txt │ ├── boilerplate.yaml.txt │ ├── boilerplate.BUILD.txt │ ├── boilerplate.Makefile.txt │ ├── boilerplate.WORKSPACE.txt │ ├── boilerplate.bazel.txt │ ├── boilerplate.xml.txt │ ├── boilerplate.Dockerfile.txt │ ├── boilerplate.html.txt │ ├── boilerplate.css.txt │ ├── boilerplate.java.txt │ ├── boilerplate.js.txt │ ├── boilerplate.scss.txt │ └── boilerplate.ts.txt ├── make.sh ├── test_verify_boilerplate.py └── verify_boilerplate.py ├── img ├── architecture.png └── bastion_proxy.png ├── OWNERS ├── .gitignore ├── terraform ├── provider.tf ├── outputs.tf ├── variables.tf ├── postgres.tf ├── main.tf └── network.tf ├── CONTRIBUTING.md ├── scripts ├── destroy.sh ├── create.sh ├── proxy.sh ├── validate.sh ├── common.sh ├── deploy.sh └── generate-tfvars.sh ├── Makefile ├── manifests └── pgadmin-deployment.yaml ├── Jenkinsfile ├── LICENSE └── README.md /test/boilerplate/boilerplate.py.preamble: -------------------------------------------------------------------------------- 1 | #! 2 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.sh.preamble: -------------------------------------------------------------------------------- 1 | #! 
2 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.xml.preamble: -------------------------------------------------------------------------------- 1 | 16 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.Dockerfile.txt: -------------------------------------------------------------------------------- 1 | # Copyright 2018 Google LLC 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # https://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.html.txt: -------------------------------------------------------------------------------- 1 | <!-- 2 | Copyright 2018 Google LLC 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | https://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | --> 16 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.css.txt: -------------------------------------------------------------------------------- 1 | // Copyright 2018 Google LLC 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // https://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.java.txt: -------------------------------------------------------------------------------- 1 | // Copyright 2018 Google LLC 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // https://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.js.txt: -------------------------------------------------------------------------------- 1 | // Copyright 2018 Google LLC 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // https://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 
14 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.scss.txt: -------------------------------------------------------------------------------- 1 | // Copyright 2018 Google LLC 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // https://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | -------------------------------------------------------------------------------- /test/boilerplate/boilerplate.ts.txt: -------------------------------------------------------------------------------- 1 | // Copyright 2018 Google LLC 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // https://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 
14 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # OSX leaves these everywhere on SMB shares 2 | ._* 3 | 4 | # OSX trash 5 | .DS_Store 6 | 7 | # Emacs save files 8 | *~ 9 | \#*\# 10 | .\#* 11 | 12 | # Vim-related files 13 | [._]*.s[a-w][a-z] 14 | [._]s[a-w][a-z] 15 | *.un~ 16 | Session.vim 17 | .netrwhist 18 | 19 | ### https://raw.github.com/github/gitignore/90f149de451a5433aebd94d02d11b0e28843a1af/Terraform.gitignore 20 | 21 | # Local .terraform directories 22 | **/.terraform/* 23 | 24 | # .tfstate files 25 | *.tfstate 26 | *.tfstate.* 27 | 28 | # kubectl config 29 | scripts/kubeconfig 30 | 31 | # Crash log files 32 | crash.log 33 | 34 | # Ignore any .tfvars files that are generated automatically for each Terraform run. Most 35 | # .tfvars files are managed as part of configuration and so should be included in 36 | # version control. 37 | # 38 | # example.tfvars 39 | 40 | terraform.tfvars 41 | 42 | # gcloud service account credentials 43 | credentials.json 44 | 45 | # postgres db name 46 | .instance 47 | 48 | -------------------------------------------------------------------------------- /terraform/provider.tf: -------------------------------------------------------------------------------- 1 | /* 2 | Copyright 2018 Google LLC 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | https://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 
15 | */ 16 | 17 | provider "google" { 18 | version = "~> 2.12.0" 19 | project = var.project 20 | region = var.region 21 | zone = var.zone 22 | } 23 | 24 | provider "google-beta" { 25 | version = "~> 2.12.0" 26 | project = var.project 27 | region = var.region 28 | zone = var.zone 29 | } 30 | 31 | data "google_client_config" "current" {} 32 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | 3 | We'd love to accept your patches and contributions to this project. There are 4 | just a few small guidelines you need to follow. 5 | 6 | ## Contributor License Agreement 7 | Contributions to this project must be accompanied by a Contributor License 8 | Agreement. You (or your employer) retain the copyright to your contribution; 9 | this simply gives us permission to use and redistribute your contributions as 10 | part of the project. Head over to https://cla.developers.google.com/ to see your 11 | current agreements on file or to sign a new one. 12 | 13 | You generally only need to submit a CLA once, so if you've already submitted one 14 | (even if it was for a different project), you probably don't need to do it again. 15 | 16 | ## Code reviews 17 | All submissions, including submissions by project members, require review. We 18 | use GitHub pull requests for this purpose. Consult GitHub Help for more 19 | information on using pull requests. 20 | 21 | ## Community Guidelines 22 | This project follows 23 | [Google's Open Source Community Guidelines](CODE-OF-CONDUCT.md). 
24 | -------------------------------------------------------------------------------- /scripts/destroy.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Copyright 2018 Google LLC 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # https://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # "---------------------------------------------------------" 18 | # "- -" 19 | # "- Teardown removes all resources -" 20 | # "- -" 21 | # "---------------------------------------------------------" 22 | 23 | # Do not set errexit as it makes partial deletes impossible 24 | set -o nounset 25 | set -o pipefail 26 | 27 | # Locate the root directory 28 | ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." 
&& pwd )" 29 | 30 | # shellcheck source=scripts/common.sh 31 | source "$ROOT/scripts/common.sh" 32 | 33 | # Tear down Terraform-managed resources and remove generated tfvars 34 | cd "$ROOT/terraform" || exit; 35 | 36 | # Perform the destroy 37 | terraform destroy -input=false -auto-approve 38 | 39 | # Remove the tfvars file generated during "make create" 40 | # TODO rm -f "$ROOT/terraform/terraform.tfvars" 41 | -------------------------------------------------------------------------------- /scripts/create.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Copyright 2018 Google LLC 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # https://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | ############################################################################### 18 | # 19 | # Creates all resources with Terraform. 20 | # 21 | ############################################################################### 22 | 23 | # Bash safeties: exit on error, no unset variables, pipelines can't hide errors 24 | set -o errexit 25 | set -o nounset 26 | set -o pipefail 27 | 28 | # Locate the root directory 29 | ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." 
&& pwd )" 30 | 31 | # shellcheck source=scripts/common.sh 32 | source "${ROOT}/scripts/common.sh" 33 | 34 | # Generate the variables to be used by Terraform 35 | # shellcheck source=scripts/generate-tfvars.sh 36 | # TODO remove this 37 | #source "${ROOT}/scripts/generate-tfvars.sh" 38 | 39 | # Initialize and run Terraform 40 | (cd "${ROOT}/terraform"; terraform init -input=false) 41 | (cd "${ROOT}/terraform"; terraform apply -input=false -auto-approve) 42 | 43 | # Get cluster credentials 44 | GET_CREDS="$(terraform output --state=terraform/terraform.tfstate get_credentials)" 45 | ${GET_CREDS} 46 | -------------------------------------------------------------------------------- /scripts/proxy.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Copyright 2018 Google LLC 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # https://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # "---------------------------------------------------------" 18 | # "- -" 19 | # "- Sets up the gcloud compute ssh proxy to the bastion -" 20 | # "- -" 21 | # "---------------------------------------------------------" 22 | 23 | # Bash safeties: exit on error, no unset variables, pipelines can't hide errors 24 | set -euo pipefail 25 | 26 | # Directory of this script. 27 | ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." 
&& pwd )" 28 | 29 | # shellcheck source=scripts/common.sh 30 | source "$ROOT"/scripts/common.sh 31 | 32 | echo "Detecting SSH Bastion Tunnel/Proxy" 33 | if [[ ! "$(pgrep -f L8888:127.0.0.1:8888)" ]]; then 34 | echo "Did not detect a running SSH tunnel. Opening a new one." 35 | # shellcheck disable=SC2091 36 | BASTION_CMD="$(terraform output --state=terraform/terraform.tfstate bastion_ssh)" 37 | $BASTION_CMD -f tail -f /dev/null 38 | echo "SSH Tunnel/Proxy is now running." 39 | else 40 | echo "Detected a running SSH tunnel. Skipping." 41 | fi 42 | -------------------------------------------------------------------------------- /scripts/validate.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Copyright 2018 Google LLC 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # https://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | ############################################################################### 18 | # 19 | # This file validates that all resources have been created and work as expected. 20 | # 21 | ############################################################################### 22 | 23 | # Bash safeties: exit on error, no unset variables, pipelines can't hide errors 24 | set -euo pipefail 25 | 26 | # Directory of this script. 27 | ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." 
&& pwd )" 28 | 29 | # shellcheck source=scripts/common.sh 30 | source "$ROOT"/scripts/common.sh 31 | 32 | # Ensure the bastion SSH tunnel/proxy is up/running 33 | # shellcheck source=scripts/proxy.sh 34 | source "$ROOT"/scripts/proxy.sh 35 | 36 | # Set the HTTPS_PROXY env var to allow kubectl to bounce through 37 | # the bastion host over the locally forwarded port 8888. 38 | export HTTPS_PROXY=localhost:8888 39 | 40 | test_des "pgAdmin is deployed on the cluster" 41 | test_cmd "$(kubectl rollout status --timeout=10s \ 42 | -f "${ROOT}/manifests/pgadmin-deployment.yaml" 2>&1)" 43 | 44 | test_des "pgAdmin is able to connect to the database instance" 45 | test_cmd "$(kubectl exec -it -n default \ 46 | "$(kubectl get pod -l 'app=pgadmin4' \ 47 | -ojsonpath='{.items[].metadata.name}')" -c pgadmin4 \ 48 | -- pg_isready -h localhost -t 10 2>&1)" 49 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # Copyright 2018 Google LLC 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # https://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # Make will use bash instead of sh 16 | SHELL := /usr/bin/env bash 17 | ROOT := ${CURDIR} 18 | 19 | .PHONY: help 20 | help: 21 | @echo 'Usage:' 22 | @echo ' make create Create or update GCP resources.' 23 | @echo ' make teardown Destroy all GCP resources.' 
24 | @echo ' make validate Check that installed resources work as expected.' 25 | @echo ' make lint Check syntax of all scripts.' 26 | @echo 27 | 28 | # create/delete/validate targets for the demo 29 | .PHONY: create 30 | create: 31 | @$(ROOT)/scripts/create.sh 32 | 33 | .PHONY: deploy 34 | deploy: 35 | @$(ROOT)/scripts/deploy.sh 36 | 37 | .PHONY: teardown 38 | teardown: 39 | @$(ROOT)/scripts/destroy.sh 40 | 41 | .PHONY: validate 42 | validate: 43 | @${ROOT}/scripts/validate.sh 44 | 45 | 46 | ###################################### 47 | # Linting 48 | ###################################### 49 | 50 | .PHONY: lint 51 | lint: check_shell check_python check_golang check_terraform check_docker check_base_files check_headers check_trailing_whitespace 52 | 53 | # The .PHONY directive tells make that this isn't a real target and so 54 | # the presence of a file named 'check_shell' won't cause this target to stop 55 | # working 56 | .PHONY: check_shell 57 | check_shell: 58 | @source test/make.sh && check_shell 59 | 60 | .PHONY: check_python 61 | check_python: 62 | @source test/make.sh && check_python 63 | 64 | .PHONY: check_golang 65 | check_golang: 66 | @source test/make.sh && golang 67 | 68 | .PHONY: check_terraform 69 | check_terraform: 70 | @source test/make.sh && check_terraform 71 | 72 | .PHONY: check_docker 73 | check_docker: 74 | @source test/make.sh && docker 75 | 76 | .PHONY: check_base_files 77 | check_base_files: 78 | @source test/make.sh && basefiles 79 | 80 | .PHONY: check_shebangs 81 | check_shebangs: 82 | @source test/make.sh && check_bash 83 | 84 | .PHONY: check_trailing_whitespace 85 | check_trailing_whitespace: 86 | @source test/make.sh && check_trailing_whitespace 87 | 88 | .PHONY: check_headers 89 | check_headers: 90 | @echo "Checking file headers" 91 | @python3 test/verify_boilerplate.py 92 | -------------------------------------------------------------------------------- /scripts/common.sh: 
-------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | # Copyright 2018 Google LLC 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # https://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # "---------------------------------------------------------" 18 | # "- -" 19 | # "- Common commands for all scripts -" 20 | # "- -" 21 | # "---------------------------------------------------------" 22 | 23 | # Locate the root directory. Used by scripts that source this one. 24 | # shellcheck disable=SC2034 25 | ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." && pwd )" 26 | 27 | # git is required for this tutorial 28 | # https://git-scm.com/book/en/v2/Getting-Started-Installing-Git 29 | command -v git >/dev/null 2>&1 || { \ 30 | echo >&2 "I require git but it's not installed. Aborting." 31 | echo >&2 "Refer to: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git" 32 | exit 1 33 | } 34 | 35 | # gcloud is required for this tutorial 36 | # https://cloud.google.com/sdk/install 37 | command -v gcloud >/dev/null 2>&1 || { \ 38 | echo >&2 "I require gcloud but it's not installed. Aborting." 39 | echo >&2 "Refer to: https://cloud.google.com/sdk/install" 40 | exit 1 41 | } 42 | 43 | # Make sure kubectl is installed. If not, refer to: 44 | # https://kubernetes.io/docs/tasks/tools/install-kubectl/ 45 | command -v kubectl >/dev/null 2>&1 || { \ 46 | echo >&2 "I require kubectl but it's not installed. Aborting."
47 | echo >&2 "Refer to: https://kubernetes.io/docs/tasks/tools/install-kubectl/" 48 | exit 1 49 | } 50 | 51 | # Simple test helpers that avoid eval and complex quoting. Note that stderr is 52 | # redirected to stdout so we can properly handle output. 53 | # Usage: test_des "description" 54 | test_des() { 55 | echo -n "Checking that $1... " 56 | } 57 | 58 | # Usage: test_cmd "$(command string 2>&1)" 59 | test_cmd() { 60 | local result=$? 61 | local output="$1" 62 | 63 | # If the command completes successfully, output "pass" and continue. 64 | if [[ $result == 0 ]]; then 65 | echo "pass" 66 | 67 | # If the command fails, output the error code, command output and exit. 68 | else 69 | echo -e "fail ($result)\\n" 70 | cat <<<"$output" 71 | exit $result 72 | fi 73 | } 74 | -------------------------------------------------------------------------------- /test/make.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Copyright 2018 Google LLC 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # https://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # This function checks to make sure that every 18 | # shebang has a '-e' flag, which causes it 19 | # to exit on error 20 | function check_bash() { 21 | find .
-name "*.sh" -print0 | while IFS= read -r -d '' file; 22 | do 23 | if [[ "$(head -n 1 "$file")" != *"bash -e"* ]]; 24 | then 25 | echo "$file is missing shebang with -e"; 26 | exit 1; 27 | fi; 28 | done; 29 | } 30 | 31 | # This function makes sure that the required files for 32 | # releasing to OSS are present 33 | function basefiles() { 34 | echo "Checking for required files" 35 | test -f CONTRIBUTING.md || echo "Missing CONTRIBUTING.md" 36 | test -f LICENSE || echo "Missing LICENSE" 37 | test -f README.md || echo "Missing README.md" 38 | } 39 | 40 | # This function runs the hadolint linter on 41 | # every file named 'Dockerfile' 42 | function docker() { 43 | echo "Running hadolint on Dockerfiles" 44 | find . -name "Dockerfile" -exec hadolint {} \; 45 | } 46 | 47 | # This function runs 'terraform validate' against all 48 | # files ending in '.tf' 49 | function check_terraform() { 50 | echo "Running terraform validate" 51 | REPO_ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." && pwd )" 52 | cd "${REPO_ROOT}/terraform" || exit 53 | terraform init 54 | terraform validate 55 | } 56 | 57 | # This function runs 'go fmt' and 'go vet' on every file 58 | # that ends in '.go' 59 | function golang() { 60 | echo "Running go fmt and go vet" 61 | find . -name "*.go" -exec go fmt {} \; 62 | find . -name "*.go" -exec go vet {} \; 63 | } 64 | 65 | # This function runs the flake8 linter on every file 66 | # ending in '.py' 67 | function check_python() { 68 | echo "Running flake8" 69 | find . -name "*.py" -exec flake8 {} \; 70 | } 71 | 72 | # This function runs the shellcheck linter on every 73 | # file ending in '.sh' 74 | function check_shell() { 75 | echo "Running shellcheck" 76 | find . -name "*.sh" -exec shellcheck -x {} \; 77 | } 78 | 79 | # This function makes sure that there is no trailing whitespace 80 | # in any files in the project.
81 | # There are some exclusions 82 | function check_trailing_whitespace() { 83 | echo "Checking for trailing whitespace" 84 | grep -r '[[:blank:]]$' --exclude-dir=".terraform" --exclude="*.png" --exclude-dir=".git" --exclude="*.pyc" . 85 | rc=$? 86 | if [ $rc = 0 ]; then 87 | exit 1 88 | fi 89 | } 90 | -------------------------------------------------------------------------------- /terraform/outputs.tf: -------------------------------------------------------------------------------- 1 | /* 2 | Copyright 2018 Google LLC 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | https://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | */ 16 | 17 | // Used to identify the cluster in validate.sh. 18 | output "cluster_name" { 19 | description = "Convenience output to obtain the GKE Cluster name" 20 | value = google_container_cluster.cluster.name 21 | } 22 | 23 | // Email of the GCP service account created for Postgres access. 24 | output "gcp_serviceaccount" { 25 | description = "The email/name of the GCP service account" 26 | value = google_service_account.access_postgres.email 27 | } 28 | 29 | // Used when setting up the GKE cluster to talk to Postgres.
30 | output "postgres_instance" { 31 | description = "The generated name of the Cloud SQL instance" 32 | value = google_sql_database_instance.default.name 33 | } 34 | 35 | // Full connection string for the Postgres DB. 36 | output "postgres_connection" { 37 | description = "The connection string dynamically generated for storage inside the Kubernetes configmap" 38 | value = format("%s:%s:%s", data.google_client_config.current.project, var.region, google_sql_database_instance.default.name) 39 | } 40 | 41 | // Postgres DB username. 42 | output "postgres_user" { 43 | description = "The Cloud SQL Instance User name" 44 | value = google_sql_user.default.name 45 | } 46 | 47 | // Postgres DB password. 48 | output "postgres_pass" { 49 | sensitive = true 50 | description = "The Cloud SQL Instance Password (Generated)" 51 | value = google_sql_user.default.password 52 | } 53 | 54 | output "cluster_endpoint" { 55 | description = "Cluster endpoint" 56 | value = google_container_cluster.cluster.endpoint 57 | } 58 | 59 | output "cluster_ca_certificate" { 60 | sensitive = true 61 | description = "Cluster CA certificate (base64 encoded)" 62 | value = google_container_cluster.cluster.master_auth[0].cluster_ca_certificate 63 | } 64 | 65 | output "get_credentials" { 66 | description = "Gcloud get-credentials command" 67 | value = format("gcloud container clusters get-credentials --project %s --region %s --internal-ip %s", var.project, var.region, var.cluster_name) 68 | } 69 | output "bastion_ssh" { 70 | description = "Gcloud compute ssh to the bastion host command" 71 | value = format("gcloud compute ssh %s --project %s --zone %s -- -L8888:127.0.0.1:8888", google_compute_instance.bastion.name, var.project, google_compute_instance.bastion.zone) 72 | } 73 | 74 | output "bastion_kubectl" { 75 | description = "kubectl command using the local proxy once the bastion_ssh command is running" 76 | value = "HTTPS_PROXY=localhost:8888 kubectl get pods --all-namespaces" 77 | } 78 |
-------------------------------------------------------------------------------- /scripts/deploy.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # Copyright 2018 Google LLC 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # https://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | 16 | # Apply the PgAdmin configmap, secret, and deployment manifests to the cluster. 17 | 18 | # "---------------------------------------------------------" 19 | # "- -" 20 | # "- Apply the PgAdmin configmap, secret, and deployment -" 21 | # "- manifests to the cluster. -" 22 | # "- -" 23 | # "---------------------------------------------------------" 24 | 25 | # Bash safeties: exit on error, no unset variables, pipelines can't hide errors 26 | set -euo pipefail 27 | 28 | # Directory of this script. 29 | ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." && pwd )" 30 | 31 | # shellcheck source=scripts/common.sh 32 | source "$ROOT"/scripts/common.sh 33 | 34 | # Ensure the bastion SSH tunnel/proxy is up/running 35 | # shellcheck source=scripts/proxy.sh 36 | source "$ROOT"/scripts/proxy.sh 37 | 38 | # Set the HTTPS_PROXY env var to allow kubectl to bounce through 39 | # the bastion host over the locally forwarded port 8888. 40 | export HTTPS_PROXY=localhost:8888 41 | 42 | # Create the configmap that includes the connection string for the DB. 
43 | echo 'Creating the PgAdmin Configmap' 44 | POSTGRES_CONNECTION="$(cd terraform && terraform output postgres_connection)" 45 | kubectl create configmap connectionname \ 46 | --from-literal=connectionname="${POSTGRES_CONNECTION}" \ 47 | --dry-run -o yaml | kubectl apply -f - 48 | 49 | # Create the secret that includes the user/pass for pgadmin 50 | echo 'Creating the PgAdmin Console secret' 51 | POSTGRES_USER="$(cd terraform && terraform output postgres_user)" 52 | POSTGRES_PASS="$(cd terraform && terraform output postgres_pass)" 53 | kubectl create secret generic pgadmin-console \ 54 | --from-literal=user="${POSTGRES_USER}" \ 55 | --from-literal=password="${POSTGRES_PASS}" \ 56 | --dry-run -o yaml | kubectl apply -f - 57 | 58 | # Create the service account 59 | kubectl create serviceaccount postgres -n default \ 60 | --dry-run -o yaml | kubectl apply -f - 61 | 62 | # Annotate it 63 | GCP_SA="$(cd terraform && terraform output gcp_serviceaccount)" 64 | kubectl annotate serviceaccount -n default postgres --overwrite=true \ 65 | iam.gke.io/gcp-service-account="${GCP_SA}" 66 | 67 | # Deployment of the pgadmin container with the cloud-sql-proxy "sidecar". 68 | echo 'Deploying PgAdmin' 69 | kubectl apply -f "${ROOT}/manifests/pgadmin-deployment.yaml" 70 | 71 | # Make sure it is running successfully. 72 | echo 'Waiting for rollout to complete and pod available.' 73 | kubectl rollout status --timeout=5m deployment/pgadmin4-deployment 74 | -------------------------------------------------------------------------------- /terraform/variables.tf: -------------------------------------------------------------------------------- 1 | /* 2 | Copyright 2018 Google LLC 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 
6 | You may obtain a copy of the License at 7 | 8 | https://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | */ 16 | 17 | // Required values to be set in terraform.tfvars 18 | variable "project" { 19 | description = "The project in which to hold the components" 20 | type = "string" 21 | } 22 | 23 | variable "region" { 24 | description = "The region in which to create the VPC network" 25 | type = "string" 26 | } 27 | 28 | variable "zone" { 29 | description = "The zone in which to create the Kubernetes cluster. Must match the region" 30 | type = "string" 31 | } 32 | 33 | 34 | // Optional values that can be overridden or appended to if desired. 35 | variable "cluster_name" { 36 | description = "The name to give the new Kubernetes cluster." 37 | type = "string" 38 | default = "private-cluster" 39 | } 40 | 41 | variable "bastion_tags" { 42 | description = "A list of tags applied to your bastion instance." 
43 | type = "list" 44 | default = ["bastion"] 45 | } 46 | 47 | variable "k8s_namespace" { 48 | description = "The namespace to use for the deployment and workload identity binding" 49 | type = "string" 50 | default = "default" 51 | } 52 | 53 | variable "k8s_sa_name" { 54 | description = "The k8s service account name to use for the deployment and workload identity binding" 55 | type = "string" 56 | default = "postgres" 57 | } 58 | 59 | variable "db_username" { 60 | description = "The name for the DB connection" 61 | type = "string" 62 | default = "postgres" 63 | } 64 | 65 | variable "service_account_iam_roles" { 66 | type = "list" 67 | 68 | default = [ 69 | "roles/logging.logWriter", 70 | "roles/monitoring.metricWriter", 71 | "roles/monitoring.viewer", 72 | ] 73 | description = <<-EOF 74 | List of the default IAM roles to attach to the service account on the 75 | GKE Nodes. 76 | EOF 77 | } 78 | 79 | variable "service_account_custom_iam_roles" { 80 | type = "list" 81 | default = [] 82 | 83 | description = <<-EOF 84 | List of arbitrary additional IAM roles to attach to the service account on 85 | the GKE nodes. 86 | EOF 87 | } 88 | 89 | variable "project_services" { 90 | type = "list" 91 | 92 | default = [ 93 | "cloudresourcemanager.googleapis.com", 94 | "servicenetworking.googleapis.com", 95 | "container.googleapis.com", 96 | "compute.googleapis.com", 97 | "iam.googleapis.com", 98 | "logging.googleapis.com", 99 | "monitoring.googleapis.com", 100 | "sqladmin.googleapis.com", 101 | "securetoken.googleapis.com", 102 | ] 103 | description = <<-EOF 104 | The GCP APIs that should be enabled in this project. 
105 | EOF 106 | } 107 | -------------------------------------------------------------------------------- /scripts/generate-tfvars.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # Copyright 2018 Google LLC 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # https://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | 16 | # "---------------------------------------------------------" 17 | # "- -" 18 | # "- Helper script to generate terraform variables -" 19 | # "- file based on gcloud defaults. -" 20 | # "- -" 21 | # "---------------------------------------------------------" 22 | 23 | # Stop immediately if something goes wrong 24 | set -euo pipefail 25 | 26 | ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." && pwd )" 27 | 28 | # shellcheck source=scripts/common.sh 29 | source "$ROOT/scripts/common.sh" 30 | 31 | TFVARS_FILE="./terraform/terraform.tfvars" 32 | 33 | # Obtain the needed env variables. Variables are only created if they are 34 | # currently empty. This allows users to set environment variables if they 35 | # would prefer to do so. 36 | # 37 | # The - in the initial variable check prevents the script from exiting due 38 | # to referencing an unset variable. 
39 | 40 | [[ -z "${REGION-}" ]] && REGION="$(gcloud config get-value compute/region)" 41 | if [[ -z "${REGION}" ]]; then 42 | echo "https://cloud.google.com/compute/docs/regions-zones/changing-default-zone-region" 1>&2 43 | echo "gcloud cli must be configured with a default region." 1>&2 44 | echo "run 'gcloud config set compute/region REGION'." 1>&2 45 | echo "replace 'REGION' with the region name like us-west1." 1>&2 46 | exit 1; 47 | fi 48 | 49 | [[ -z "${ZONE-}" ]] && ZONE="$(gcloud config get-value compute/zone)" 50 | if [[ -z "${ZONE}" ]]; then 51 | echo "https://cloud.google.com/compute/docs/regions-zones/changing-default-zone-region" 1>&2 52 | echo "gcloud cli must be configured with a default zone." 1>&2 53 | echo "run 'gcloud config set compute/zone ZONE'." 1>&2 54 | echo "replace 'ZONE' with the zone name like us-west1-a." 1>&2 55 | exit 1; 56 | fi 57 | 58 | [[ -z "${PROJECT-}" ]] && PROJECT="$(gcloud config get-value core/project)" 59 | if [[ -z "${PROJECT}" ]]; then 60 | echo "gcloud cli must be configured with a default project." 1>&2 61 | echo "run 'gcloud config set core/project PROJECT'." 1>&2 62 | echo "replace 'PROJECT' with the project name." 1>&2 63 | exit 1; 64 | fi 65 | 66 | # If Terraform is run without this file, the user will be prompted for values. 67 | # We don't want to overwrite a pre-existing tfvars file 68 | if [[ -f "${TFVARS_FILE}" ]] 69 | then 70 | echo "${TFVARS_FILE} already exists." 1>&2 71 | echo "Please remove or rename before regenerating." 
1>&2 72 | exit 1; 73 | fi 74 | 75 | # Write out all the values we gathered into a tfvars file so you don't 76 | # have to enter the values manually 77 | cat <<EOF > "${TFVARS_FILE}" 78 | project="${PROJECT}" 79 | zone="${ZONE}" 80 | region="${REGION}" 81 | EOF 82 | -------------------------------------------------------------------------------- /manifests/pgadmin-deployment.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2018 Google LLC 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # https://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # This is the only manifest in the project. It sets up a single pod with 1 16 | # replica. That pod contains two containers, pgAdmin and Cloud SQL Proxy. 17 | # There is no Service because we are using a very simple port-forward to 18 | # access the pod. 19 | 20 | # Documentation https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ 21 | # This deploys the pgAdmin web application used for PostgreSQL database 22 | # management. 
23 | apiVersion: apps/v1 24 | kind: Deployment 25 | metadata: 26 | name: pgadmin4-deployment 27 | namespace: default 28 | labels: 29 | app: pgadmin4 30 | spec: 31 | replicas: 1 32 | selector: 33 | matchLabels: 34 | app: pgadmin4 35 | template: 36 | metadata: 37 | labels: 38 | app: pgadmin4 39 | spec: 40 | serviceAccount: postgres 41 | containers: 42 | # This is the official pgAdmin 4 container 43 | - image: dpage/pgadmin4 44 | name: pgadmin4 45 | # You can make environment variables from GKE secrets 46 | # You can read them directly using 'secretKeyRef' 47 | env: 48 | - name: PGADMIN_DEFAULT_EMAIL 49 | valueFrom: 50 | secretKeyRef: 51 | name: pgadmin-console 52 | key: user 53 | - name: PGADMIN_DEFAULT_PASSWORD 54 | valueFrom: 55 | secretKeyRef: 56 | name: pgadmin-console 57 | key: password 58 | ports: 59 | - containerPort: 80 60 | name: pgadmin4 61 | # We are pulling the Cloud SQL Proxy container from the official Google 62 | # container repository 63 | - image: gcr.io/cloudsql-docker/gce-proxy:1.14 64 | imagePullPolicy: Always 65 | name: cloudsql-proxy 66 | securityContext: 67 | runAsUser: 2 # non-root user 68 | allowPrivilegeEscalation: false 69 | # You can make environment variables from GKE configurations 70 | # You can read them from a configmap directly with configMapKeyRef 71 | env: 72 | - name: INSTANCE_CONNECTION 73 | valueFrom: 74 | configMapKeyRef: 75 | name: connectionname 76 | key: connectionname 77 | # Connecting to the PRIVATE IP of the DB instance. 78 | # We will be getting the GCP credentials dynamically via the metadata 79 | # service proxied by the workload identity agent running inside GKE 80 | # that maps the Kubernetes Service Account to the GCP Service account 81 | # automatically. 
82 | command: [ 83 | "/cloud_sql_proxy", 84 | "-instances=$(INSTANCE_CONNECTION)=tcp:5432", 85 | "-ip_address_types=PRIVATE" 86 | ] 87 | -------------------------------------------------------------------------------- /Jenkinsfile: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env groovy 2 | 3 | /* 4 | Copyright 2018 Google LLC 5 | 6 | Licensed under the Apache License, Version 2.0 (the "License"); 7 | you may not use this file except in compliance with the License. 8 | You may obtain a copy of the License at 9 | 10 | https://www.apache.org/licenses/LICENSE-2.0 11 | 12 | Unless required by applicable law or agreed to in writing, software 13 | distributed under the License is distributed on an "AS IS" BASIS, 14 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | See the License for the specific language governing permissions and 16 | limitations under the License. 17 | */ 18 | 19 | // Reference: https://github.com/jenkinsci/kubernetes-plugin 20 | // set up pod label and GOOGLE_APPLICATION_CREDENTIALS (for Terraform) 21 | def containerName = "private-cluster" 22 | def GOOGLE_APPLICATION_CREDENTIALS = '/home/jenkins/dev/jenkins-deploy-dev-infra.json' 23 | def jenkins_container_version = env.JENKINS_CONTAINER_VERSION 24 | 25 | podTemplate( 26 | containers: [ 27 | containerTemplate(name: "${containerName}", 28 | image: "gcr.io/pso-helmsman-cicd/jenkins-k8s-node:${jenkins_container_version}", 29 | command: 'tail -f /dev/null', 30 | resourceRequestCpu: '1000m', 31 | resourceLimitCpu: '2000m', 32 | resourceRequestMemory: '1Gi', 33 | resourceLimitMemory: '2Gi' 34 | ) 35 | ], 36 | volumes: [secretVolume(mountPath: '/home/jenkins/dev', 37 | secretName: 'jenkins-deploy-dev-infra' 38 | ), 39 | hostPathVolume(mountPath: '/dev/random', hostPath: '/dev/urandom') 40 | ] 41 | ) { 42 | node(POD_LABEL) { 43 | try { 44 | // Options covers all other job properties or wrapper functions that apply to entire 
Pipeline. 45 | properties([disableConcurrentBuilds()]) 46 | // set env variable GOOGLE_APPLICATION_CREDENTIALS for Terraform 47 | env.GOOGLE_APPLICATION_CREDENTIALS = GOOGLE_APPLICATION_CREDENTIALS 48 | 49 | stage('Setup') { 50 | container(containerName) { 51 | // checkout code from scm i.e. commits related to the PR 52 | checkout scm 53 | 54 | // Setup gcloud service account access 55 | sh "gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}" 56 | sh "gcloud config set compute/zone ${env.ZONE}" 57 | sh "gcloud config set core/project ${env.PROJECT_ID}" 58 | sh "gcloud config set compute/region ${env.REGION}" 59 | } 60 | } 61 | stage('Lint') { 62 | container(containerName) { 63 | sh "make lint" 64 | } 65 | } 66 | stage('Create') { 67 | container(containerName) { 68 | sh "make create" 69 | } 70 | } 71 | stage('Deploy') { 72 | container(containerName) { 73 | sh "make deploy" 74 | } 75 | } 76 | stage('Validate') { 77 | container(containerName) { 78 | sh "make validate" 79 | } 80 | } 81 | } catch (err) { 82 | // if any exception occurs, mark the build as failed 83 | // and display a detailed message on the Jenkins console output 84 | currentBuild.result = 'FAILURE' 85 | echo "FAILURE caught: ${err}" 86 | throw err 87 | } finally { 88 | stage('Teardown') { 89 | container(containerName) { 90 | sh "make teardown" 91 | } 92 | } 93 | } 94 | } 95 | } 96 | -------------------------------------------------------------------------------- /terraform/postgres.tf: -------------------------------------------------------------------------------- 1 | /* 2 | Copyright 2018 Google LLC 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 
6 | You may obtain a copy of the License at 7 | 8 | https://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | */ 16 | 17 | resource "google_compute_global_address" "private_ip_address" { 18 | provider = "google-beta" 19 | 20 | name = format("%s-priv-ip", var.cluster_name) 21 | purpose = "VPC_PEERING" 22 | address_type = "INTERNAL" 23 | prefix_length = 16 24 | network = google_compute_network.network.self_link 25 | } 26 | 27 | resource "google_service_networking_connection" "private_vpc_connection" { 28 | provider = "google-beta" 29 | 30 | network = google_compute_network.network.self_link 31 | service = "servicenetworking.googleapis.com" 32 | reserved_peering_ranges = [google_compute_global_address.private_ip_address.name] 33 | } 34 | 35 | // Create the Google SA 36 | resource "google_service_account" "access_postgres" { 37 | account_id = format("%s-pg-sa", var.cluster_name) 38 | } 39 | 40 | // Make an IAM policy that allows the K8S SA to be a workload identity user 41 | data "google_iam_policy" "access_postgres" { 42 | binding { 43 | role = "roles/iam.workloadIdentityUser" 44 | 45 | members = [ 46 | format("serviceAccount:%s.svc.id.goog[%s/%s]", var.project, var.k8s_namespace, var.k8s_sa_name) 47 | ] 48 | } 49 | } 50 | 51 | // Bind the workload identity IAM policy to the GSA 52 | resource "google_service_account_iam_policy" "access_postgres" { 53 | service_account_id = google_service_account.access_postgres.name 54 | policy_data = data.google_iam_policy.access_postgres.policy_data 55 | } 56 | 57 | // Attach cloudsql access permissions to the Google SA. 
58 | resource "google_project_iam_binding" "access_postgres" { 59 | project = var.project 60 | role = "roles/cloudsql.client" 61 | 62 | members = [ 63 | format("serviceAccount:%s", google_service_account.access_postgres.email) 64 | ] 65 | } 66 | 67 | resource "random_id" "db_name_suffix" { 68 | byte_length = 4 69 | } 70 | 71 | resource "google_sql_database_instance" "default" { 72 | project = var.project 73 | name = format("%s-pg-%s", var.cluster_name, random_id.db_name_suffix.hex) 74 | database_version = "POSTGRES_9_6" 75 | region = var.region 76 | 77 | depends_on = [ 78 | "google_service_networking_connection.private_vpc_connection" 79 | ] 80 | 81 | settings { 82 | tier = "db-f1-micro" 83 | activation_policy = "ALWAYS" 84 | availability_type = "ZONAL" 85 | 86 | ip_configuration { 87 | ipv4_enabled = "false" 88 | private_network = google_compute_network.network.self_link 89 | // TODO Pull exact pod subnet 90 | authorized_networks { 91 | name = "GKE Pod IPs" 92 | value = "10.0.0.0/8" 93 | } 94 | } 95 | 96 | disk_autoresize = false 97 | disk_size = "10" 98 | disk_type = "PD_SSD" 99 | pricing_plan = "PER_USE" 100 | 101 | location_preference { 102 | zone = var.zone 103 | } 104 | } 105 | 106 | timeouts { 107 | create = "10m" 108 | update = "10m" 109 | delete = "10m" 110 | } 111 | } 112 | 113 | resource "google_sql_database" "default" { 114 | name = "default" 115 | project = var.project 116 | instance = google_sql_database_instance.default.name 117 | collation = "en_US.UTF8" 118 | depends_on = ["google_sql_database_instance.default"] 119 | } 120 | 121 | resource "random_id" "user-password" { 122 | keepers = { 123 | name = google_sql_database_instance.default.name 124 | } 125 | 126 | byte_length = 8 127 | depends_on = ["google_sql_database_instance.default"] 128 | } 129 | 130 | resource "google_sql_user" "default" { 131 | name = var.db_username 132 | project = var.project 133 | instance = google_sql_database_instance.default.name 134 | password = 
random_id.user-password.hex 135 | depends_on = ["google_sql_database_instance.default"] 136 | } 137 | -------------------------------------------------------------------------------- /test/test_verify_boilerplate.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | # Copyright 2018 Google LLC 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # https://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | ''' A simple test for the verify_boilerplate python script. 18 | This will create a set of test files, both valid and invalid, 19 | and confirm that the has_valid_header call returns the correct 20 | value. 21 | 22 | It also checks the number of files that are found by the 23 | get_files call. 24 | ''' 25 | from copy import deepcopy 26 | from tempfile import mkdtemp 27 | from shutil import rmtree 28 | import unittest 29 | from verify_boilerplate import has_valid_header, get_refs, get_regexs, \ 30 | get_args, get_files 31 | 32 | 33 | class AllTestCase(unittest.TestCase): 34 | """ 35 | All of the setup, teardown, and tests are contained in this 36 | class. 37 | """ 38 | 39 | def write_file(self, filename, content, expected): 40 | """ 41 | A utility method that creates test files, and adds them to 42 | the cases that will be tested. 43 | 44 | Args: 45 | filename: (string) the file name (path) to be created. 46 | content: (list of strings) the contents of the file. 
47 | expected: (boolean) True if the header is expected to be valid, 48 | false if not. 49 | """ 50 | 51 | file = open(filename, 'w+') 52 | for line in content: 53 | file.write(line + "\n") 54 | file.close() 55 | self.cases[filename] = expected 56 | 57 | def create_test_files(self, tmp_path, extension, header): 58 | """ 59 | Creates 2 test files for .tf, .xml, .go, etc and one for 60 | Dockerfile, and Makefile. 61 | 62 | The reason for the difference is that Makefile and Dockerfile 63 | don't have an extension. It would be substantially more 64 | difficult to create negative test cases for these, unless the 65 | files were written, deleted, and re-written. 66 | 67 | Args: 68 | tmp_path: (string) the path in which to create the files 69 | extension: (string) the file extension 70 | header: (list of strings) the header/boilerplate content 71 | """ 72 | 73 | content = "\n...blah \ncould be code or could be garbage\n" 74 | special_cases = ["Dockerfile", "Makefile"] 75 | header_template = deepcopy(header) 76 | valid_filename = tmp_path + extension 77 | header_template.append(content)  # append mutates in place and returns None 78 | if extension not in special_cases: 79 | # Invalid test cases for non-*file files (.tf|.py|.sh|.yaml|.xml..) 80 | invalid_header = [] 81 | for line in header_template: 82 | if "2018" in line: 83 | invalid_header.append(line.replace('2018', 'YEAR')) 84 | else: 85 | invalid_header.append(line) 86 | invalid_header.append(content) 87 | invalid_content = invalid_header 88 | invalid_filename = tmp_path + "invalid." + extension 89 | self.write_file(invalid_filename, invalid_content, False) 90 | valid_filename = tmp_path + "testfile." + extension 91 | 92 | valid_content = header_template 93 | self.write_file(valid_filename, valid_content, True) 94 | 95 | def setUp(self): 96 | """ 97 | Set initial counts and values, and initializes the setup of the 98 | test files. 
99 | """ 100 | self.cases = {} 101 | self.tmp_path = mkdtemp() + "/" 102 | self.my_args = get_args() 103 | self.my_refs = get_refs(self.my_args) 104 | self.my_regex = get_regexs() 105 | self.preexisting_file_count = len( 106 | get_files(self.my_refs.keys(), self.my_args)) 107 | for key in self.my_refs: 108 | self.create_test_files(self.tmp_path, key, 109 | self.my_refs.get(key)) 110 | 111 | def tearDown(self): 112 | """ Delete the test directory. """ 113 | rmtree(self.tmp_path) 114 | 115 | def test_files_headers(self): 116 | """ 117 | Confirms that the expected output of has_valid_header is correct. 118 | """ 119 | for case in self.cases: 120 | if self.cases[case]: 121 | self.assertTrue(has_valid_header(case, self.my_refs, 122 | self.my_regex)) 123 | else: 124 | self.assertFalse(has_valid_header(case, self.my_refs, 125 | self.my_regex)) 126 | 127 | def test_invalid_count(self): 128 | """ 129 | Test that the initial file count isn't zero, which would indicate 130 | a problem with the code. 131 | """ 132 | self.assertNotEqual(self.preexisting_file_count, 0) 133 | 134 | 135 | if __name__ == "__main__": 136 | unittest.main() 137 | -------------------------------------------------------------------------------- /terraform/main.tf: -------------------------------------------------------------------------------- 1 | /* 2 | Copyright 2018 Google LLC 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | https://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 
15 | */ 16 | 17 | resource "google_container_cluster" "cluster" { 18 | provider = "google-beta" 19 | 20 | name = var.cluster_name 21 | project = var.project 22 | location = var.region 23 | 24 | network = google_compute_network.network.self_link 25 | subnetwork = google_compute_subnetwork.subnetwork.self_link 26 | 27 | logging_service = "logging.googleapis.com/kubernetes" 28 | monitoring_service = "monitoring.googleapis.com/kubernetes" 29 | 30 | // Decouple the default node pool lifecycle from the cluster object lifecycle 31 | // by removing the node pool and specifying a dedicated node pool in a 32 | // separate resource below. 33 | remove_default_node_pool = "true" 34 | initial_node_count = 1 35 | 36 | // Configure various addons 37 | addons_config { 38 | // Disable the Kubernetes dashboard, which is often an attack vector. The 39 | // cluster can still be managed via the GKE UI. 40 | kubernetes_dashboard { 41 | disabled = true 42 | } 43 | 44 | // Enable network policy (Calico) 45 | network_policy_config { 46 | disabled = false 47 | } 48 | } 49 | 50 | // Enable workload identity 51 | workload_identity_config { 52 | identity_namespace = format("%s.svc.id.goog", var.project) 53 | } 54 | 55 | // Disable basic authentication and cert-based authentication. 56 | // Empty fields for username and password are how to "disable" the 57 | // credentials from being generated. 58 | master_auth { 59 | username = "" 60 | password = "" 61 | 62 | client_certificate_config { 63 | issue_client_certificate = "false" 64 | } 65 | } 66 | 67 | // Enable network policy enforcement (Calico). This is needed in addition to 68 | // the network_policy_config addon above, which only installs the addon. 
69 | network_policy { 70 | enabled = "true" 71 | } 72 | 73 | // Allocate IPs in our subnetwork 74 | ip_allocation_policy { 75 | use_ip_aliases = true 76 | cluster_secondary_range_name = google_compute_subnetwork.subnetwork.secondary_ip_range.0.range_name 77 | services_secondary_range_name = google_compute_subnetwork.subnetwork.secondary_ip_range.1.range_name 78 | } 79 | 80 | // Specify the list of CIDRs which can access the master's API 81 | master_authorized_networks_config { 82 | cidr_blocks { 83 | display_name = "bastion" 84 | cidr_block = format("%s/32", google_compute_instance.bastion.network_interface.0.network_ip) 85 | } 86 | } 87 | // Configure the cluster to have private nodes and private control plane access only 88 | private_cluster_config { 89 | enable_private_endpoint = "true" 90 | enable_private_nodes = "true" 91 | master_ipv4_cidr_block = "172.16.0.16/28" 92 | } 93 | 94 | // Allow plenty of time for each operation to finish (default was 10m) 95 | timeouts { 96 | create = "30m" 97 | update = "30m" 98 | delete = "30m" 99 | } 100 | 101 | depends_on = [ 102 | "google_project_service.service", 103 | "google_project_iam_member.service-account", 104 | "google_project_iam_member.service-account-custom", 105 | "google_compute_router_nat.nat", 106 | ] 107 | 108 | } 109 | 110 | // A dedicated/separate node pool where workloads will run. A regional node pool 111 | // will have "node_count" nodes per zone, and will use 3 zones. This node pool 112 | // will be 3 nodes in size and use a non-default service-account with minimal 113 | // Oauth scope permissions. 
114 | resource "google_container_node_pool" "private-np-1" { 115 | provider = "google-beta" 116 | 117 | name = "private-np-1" 118 | location = var.region 119 | cluster = google_container_cluster.cluster.name 120 | node_count = "1" 121 | 122 | // Repair any issues but don't auto upgrade node versions 123 | management { 124 | auto_repair = "true" 125 | auto_upgrade = "false" 126 | } 127 | 128 | node_config { 129 | machine_type = "n1-standard-2" 130 | disk_type = "pd-ssd" 131 | disk_size_gb = 30 132 | image_type = "COS" 133 | 134 | // Use the cluster created service account for this node pool 135 | service_account = google_service_account.gke-sa.email 136 | 137 | // Use the minimal oauth scopes needed 138 | oauth_scopes = [ 139 | "https://www.googleapis.com/auth/devstorage.read_only", 140 | "https://www.googleapis.com/auth/logging.write", 141 | "https://www.googleapis.com/auth/monitoring", 142 | "https://www.googleapis.com/auth/servicecontrol", 143 | "https://www.googleapis.com/auth/service.management.readonly", 144 | "https://www.googleapis.com/auth/trace.append", 145 | ] 146 | 147 | labels = { 148 | cluster = var.cluster_name 149 | } 150 | 151 | // Enable workload identity on this node pool 152 | workload_metadata_config { 153 | node_metadata = "GKE_METADATA_SERVER" 154 | } 155 | 156 | metadata = { 157 | // Set metadata on the VM to supply more entropy 158 | google-compute-enable-virtio-rng = "true" 159 | // Explicitly remove GCE legacy metadata API endpoint 160 | disable-legacy-endpoints = "true" 161 | } 162 | } 163 | 164 | depends_on = [ 165 | "google_container_cluster.cluster", 166 | ] 167 | } 168 | -------------------------------------------------------------------------------- /terraform/network.tf: -------------------------------------------------------------------------------- 1 | /* 2 | Copyright 2018 Google LLC 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License. 
6 | You may obtain a copy of the License at 7 | 8 | https://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. 15 | */ 16 | 17 | // Create the GKE service account 18 | resource "google_service_account" "gke-sa" { 19 | account_id = format("%s-node-sa", var.cluster_name) 20 | display_name = "GKE Security Service Account" 21 | project = var.project 22 | } 23 | 24 | // Add the service account to the project 25 | resource "google_project_iam_member" "service-account" { 26 | count = length(var.service_account_iam_roles) 27 | project = var.project 28 | role = element(var.service_account_iam_roles, count.index) 29 | member = format("serviceAccount:%s", google_service_account.gke-sa.email) 30 | } 31 | 32 | // Add user-specified roles 33 | resource "google_project_iam_member" "service-account-custom" { 34 | count = length(var.service_account_custom_iam_roles) 35 | project = var.project 36 | role = element(var.service_account_custom_iam_roles, count.index) 37 | member = format("serviceAccount:%s", google_service_account.gke-sa.email) 38 | } 39 | 40 | // Enable required services on the project 41 | resource "google_project_service" "service" { 42 | count = length(var.project_services) 43 | project = var.project 44 | service = element(var.project_services, count.index) 45 | 46 | // Do not disable the service on destroy. On destroy, we are going to 47 | // destroy the project, but we need the APIs available to destroy the 48 | // underlying resources. 
49 | disable_on_destroy = false 50 | } 51 | 52 | // Create a network for GKE 53 | resource "google_compute_network" "network" { 54 | name = format("%s-network", var.cluster_name) 55 | project = var.project 56 | auto_create_subnetworks = false 57 | 58 | depends_on = [ 59 | "google_project_service.service", 60 | ] 61 | } 62 | 63 | // Create subnets 64 | resource "google_compute_subnetwork" "subnetwork" { 65 | name = format("%s-subnet", var.cluster_name) 66 | project = var.project 67 | network = google_compute_network.network.self_link 68 | region = var.region 69 | ip_cidr_range = "10.0.0.0/24" 70 | 71 | private_ip_google_access = true 72 | 73 | secondary_ip_range { 74 | range_name = format("%s-pod-range", var.cluster_name) 75 | ip_cidr_range = "10.1.0.0/16" 76 | } 77 | 78 | secondary_ip_range { 79 | range_name = format("%s-svc-range", var.cluster_name) 80 | ip_cidr_range = "10.2.0.0/20" 81 | } 82 | } 83 | // Create an external NAT IP 84 | resource "google_compute_address" "nat" { 85 | name = format("%s-nat-ip", var.cluster_name) 86 | project = var.project 87 | region = var.region 88 | 89 | depends_on = [ 90 | "google_project_service.service", 91 | ] 92 | } 93 | 94 | // Create a cloud router for use by the Cloud NAT 95 | resource "google_compute_router" "router" { 96 | name = format("%s-cloud-router", var.cluster_name) 97 | project = var.project 98 | region = var.region 99 | network = google_compute_network.network.self_link 100 | 101 | bgp { 102 | asn = 64514 103 | } 104 | } 105 | 106 | // Create a NAT router so the nodes can reach DockerHub, etc 107 | resource "google_compute_router_nat" "nat" { 108 | name = format("%s-cloud-nat", var.cluster_name) 109 | project = var.project 110 | router = google_compute_router.router.name 111 | region = var.region 112 | 113 | nat_ip_allocate_option = "MANUAL_ONLY" 114 | 115 | nat_ips = [google_compute_address.nat.self_link] 116 | 117 | source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS" 118 | 119 | subnetwork { 120 | name = 
google_compute_subnetwork.subnetwork.self_link 121 | source_ip_ranges_to_nat = ["PRIMARY_IP_RANGE", "LIST_OF_SECONDARY_IP_RANGES"] 122 | 123 | secondary_ip_range_names = [ 124 | google_compute_subnetwork.subnetwork.secondary_ip_range.0.range_name, 125 | google_compute_subnetwork.subnetwork.secondary_ip_range.1.range_name, 126 | ] 127 | } 128 | } 129 | 130 | // Bastion Host 131 | locals { 132 | hostname = format("%s-bastion", var.cluster_name) 133 | } 134 | 135 | // Dedicated service account for the Bastion instance 136 | resource "google_service_account" "bastion" { 137 | account_id = format("%s-bastion-sa", var.cluster_name) 138 | display_name = "GKE Bastion SA" 139 | } 140 | 141 | // Allow access to the Bastion Host via SSH 142 | resource "google_compute_firewall" "bastion-ssh" { 143 | name = format("%s-bastion-ssh", var.cluster_name) 144 | network = google_compute_network.network.name 145 | direction = "INGRESS" 146 | project = var.project 147 | source_ranges = ["0.0.0.0/0"] 148 | 149 | allow { 150 | protocol = "tcp" 151 | ports = ["22"] 152 | } 153 | 154 | target_tags = ["bastion"] 155 | } 156 | 157 | // The user-data script on Bastion instance provisioning 158 | data "template_file" "startup_script" { 159 | template = <<-EOF 160 | sudo apt-get update -y 161 | sudo apt-get install -y tinyproxy 162 | EOF 163 | 164 | } 165 | 166 | // The Bastion Host 167 | resource "google_compute_instance" "bastion" { 168 | name = local.hostname 169 | machine_type = "g1-small" 170 | zone = var.zone 171 | project = var.project 172 | tags = ["bastion"] 173 | 174 | // Specify the Operating System Family and version. 175 | boot_disk { 176 | initialize_params { 177 | image = "debian-cloud/debian-9" 178 | } 179 | } 180 | 181 | // Ensure that when the bastion host is booted, it will have tinyproxy 182 | metadata_startup_script = data.template_file.startup_script.rendered 183 | 184 | // Define a network interface in the correct subnet. 
185 | network_interface { 186 | subnetwork = google_compute_subnetwork.subnetwork.name 187 | 188 | // Add an ephemeral external IP. 189 | access_config { 190 | // Ephemeral IP 191 | } 192 | } 193 | 194 | // Allow the instance to be stopped by Terraform when updating configuration 195 | allow_stopping_for_update = true 196 | 197 | service_account { 198 | email = google_service_account.bastion.email 199 | scopes = ["cloud-platform"] 200 | } 201 | 202 | // local-exec provisioners may run before the host has fully initialized. However, they 203 | // are run sequentially in the order they were defined. 204 | // 205 | // This provisioner is used to block the subsequent provisioners until the instance 206 | // is available. 207 | provisioner "local-exec" { 208 | command = < len(data): 203 | return False 204 | # truncate our file to the same number of lines as the reference file 205 | data = data[:len(ref)] 206 | 207 | # if we don't match the reference at this point, fail 208 | if ref != data: 209 | return False 210 | 211 | return True 212 | 213 | 214 | def get_file_parts(filename): 215 | """Extracts the basename and extension parts of a filename. 216 | Identifies the extension as everything after the last period in filename. 217 | Args: 218 | filename: string containing the filename 219 | Returns: 220 | A tuple of: 221 | A string containing the basename 222 | A string containing the extension in lowercase 223 | """ 224 | extension = os.path.splitext(filename)[1].split(".")[-1].lower() 225 | basename = os.path.basename(filename) 226 | return basename, extension 227 | 228 | 229 | def normalize_files(files, args): 230 | """Extracts the files that require boilerplate checking from the files 231 | argument. 232 | A new list will be built. Each path from the original files argument will 233 | be added unless it is within one of SKIPPED_PATHS.
All relative paths will 234 | be converted to absolute paths by prepending the rootdir path parsed from 235 | the command line, or its default value. 236 | Args: 237 | files: a list of file path strings 238 | Returns: 239 | A modified copy of the files list where any path in a skipped 240 | directory is removed, and all paths have been made absolute. 241 | """ 242 | newfiles = [f for f in files if not any(s in f for s in SKIPPED_PATHS)] 243 | 244 | for idx, pathname in enumerate(newfiles): 245 | if not os.path.isabs(pathname): 246 | newfiles[idx] = os.path.join(args.rootdir, pathname) 247 | return newfiles 248 | 249 | 250 | def get_files(extensions, args): 251 | """Generates a list of paths whose boilerplate should be verified. 252 | If a list of file names has been provided on the command line, it will be 253 | treated as the initial set to search. Otherwise, all paths within rootdir 254 | will be discovered and used as the initial set. 255 | Once the initial set of files is identified, it is normalized via 256 | normalize_files() and further stripped of any file name whose extension is 257 | not in extensions. 258 | Args: 259 | extensions: a list of file extensions indicating which file types 260 | should have their boilerplate verified 261 | Returns: 262 | A list of absolute file paths 263 | """ 264 | files = [] 265 | if args.filenames: 266 | files = args.filenames 267 | else: 268 | for root, dirs, walkfiles in os.walk(args.rootdir): 269 | # don't visit certain dirs. This is just a performance improvement 270 | # as we would prune these later in normalize_files().
But doing it 271 | # cuts down the amount of filesystem walking we do and cuts down 272 | # the size of the file list 273 | for dpath in SKIPPED_PATHS: 274 | if dpath in dirs: 275 | dirs.remove(dpath) 276 | for name in walkfiles: 277 | pathname = os.path.join(root, name) 278 | files.append(pathname) 279 | files = normalize_files(files, args) 280 | outfiles = [] 281 | for pathname in files: 282 | basename, extension = get_file_parts(pathname) 283 | extension_present = extension in extensions or basename in extensions 284 | if args.force_extension or extension_present: 285 | outfiles.append(pathname) 286 | return outfiles 287 | 288 | 289 | def main(args): 290 | """Identifies and verifies files that should have the desired boilerplate. 291 | Retrieves the lists of files to be validated and tests each one in turn. 292 | If all files contain correct boilerplate, this function terminates 293 | normally. Otherwise it prints the name of each non-conforming file and 294 | exits with a non-zero status code.
295 | """ 296 | refs = get_references(args) 297 | preambles = get_preambles(args) 298 | filenames = get_files(refs.keys(), args) 299 | nonconforming_files = [] 300 | for filename in filenames: 301 | if not has_valid_header(filename, refs, preambles, REGEXES, args): 302 | nonconforming_files.append(filename) 303 | if nonconforming_files: 304 | print('%d files have incorrect boilerplate headers:' % len( 305 | nonconforming_files)) 306 | for filename in sorted(nonconforming_files): 307 | print(os.path.relpath(filename, args.rootdir)) 308 | sys.exit(1) 309 | else: 310 | print('All files examined have correct boilerplate.') 311 | 312 | 313 | if __name__ == "__main__": 314 | ARGS = get_args() 315 | main(ARGS) 316 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # How to use a Private Cluster in Kubernetes Engine 2 | 3 | ## Table of Contents 4 | 5 | 6 | * [Introduction](#introduction) 7 | * [Public Clusters](#public-clusters) 8 | * [Private Clusters](#private-clusters) 9 | * [Workload Identity Overview](#workload-identity-overview) 10 | * [Demo Architecture](#demo-architecture) 11 | * [Bastion Host](#bastion-host) 12 | * [Workload Identity](#workload-identity) 13 | * [Prerequisites](#prerequisites) 14 | * [Cloud Project](#cloud-project) 15 | * [Required GCP APIs](#required-gcp-apis) 16 | * [Run Demo in a Google Cloud Shell](#run-demo-in-a-google-cloud-shell) 17 | * [Install Terraform](#install-terraform) 18 | * [Install Cloud SDK](#install-cloud-sdk) 19 | * [Install kubectl CLI](#install-kubectl-cli) 20 | * [Authenticate gcloud](#authenticate-gcloud) 21 | * [Configure gcloud settings](#configure-gcloud-settings) 22 | * [Create Resources](#create-resources) 23 | * [Validation](#validation) 24 | * [Tear Down](#tear-down) 25 | * [Troubleshooting](#troubleshooting) 26 | * [Relevant Material](#relevant-material) 27 | 28 | 29 | ## Introduction 30 | 
31 | This guide demonstrates creating a Kubernetes private cluster in [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview) (GKE) running a sample Kubernetes workload that connects to a [Cloud SQL](https://cloud.google.com/sql/docs/postgres/) instance using the [cloud-sql-proxy](https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine) "sidecar". In addition, the [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) (currently in Beta) feature is used to provide credentials directly to the `cloud-sql-proxy` container to facilitate secure tunneling to the Cloud SQL instance without having to handle GCP credentials manually. 32 | 33 | ### Public Clusters 34 | 35 | By default, GKE clusters are created with a public IP address in front of the Kubernetes API (aka "masters" or "the control plane"). In addition, the GCE instances that serve as the worker nodes are given both private and ephemeral public IP addresses. This facilitates ease of administration for using tools like `kubectl` to access the Kubernetes API and `SSH` to access the GCE instances for troubleshooting purposes. Assuming the GKE cluster was created on a subnet in the `default` VPC network of a project, the default access control allows "any" or `0.0.0.0/0` to reach the Kubernetes API and the default firewall rules allow "any" or `0.0.0.0/0` to reach the worker nodes via `SSH`. These clusters are commonly referred to as "public clusters". 36 | 37 | While the authentication and authorization mechanisms for accessing the Kubernetes API over `TLS` and worker nodes via `SSH` offer strong protection against unauthorized access, it is strongly recommended that additional steps are taken to limit the scope of potential access: 38 | 39 | 1.
Restrict to a known list of source subnets for access to the Kubernetes cluster API via `TLS` (tcp/443) using the [master_authorized_networks](https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks) access list configuration setting. 40 | 1. Restrict to a known list of source subnets or remove the default firewall rules allowing `SSH` (tcp/22) to the worker nodes. 41 | 42 | This provides several key benefits from a [defense-in-depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)) perspective: 43 | 44 | 1. Reducing the scope of source IPs that can potentially perform a Denial of Service or exploit against the Kubernetes API server or the `SSH` daemon running on the worker nodes. 45 | 1. Reducing the scope of source IPs that can leverage credentials stolen from a developer laptop compromise, credentials found in source code repositories, or credentials/tokens obtained from resources inside the cluster. 46 | 1. Decreasing the likelihood of a newly discovered vulnerability being exploitable and/or granting more time to the team to devise a patching/upgrade strategy. 47 | 48 | However, from an operational perspective, managing these access control lists may not be feasible in every organization. Larger organizations may already have remote access solutions in place either on-premise or in the cloud that they prefer to leverage. They may also have [dedicated interconnects](https://cloud.google.com/interconnect/docs/concepts/overview) which provide direct, high-bandwidth access from their office network to their GCP environments. In these cases, there is no need for the GKE clusters to be publicly accessible. 49 | 50 | ### Private Clusters 51 | 52 | GKE offers two configuration items that combine to form what is commonly known as "[private clusters](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster)": 53 | 54 | 1. The GCE instances that serve as the Kubernetes worker nodes do not get assigned a public IP.
Instead, they are only assigned a private IP from the VPC node subnet. This is the `enable_private_nodes` configuration setting. 55 | 1. The Kubernetes API/GKE API/GKE Control Plane IP is assigned a private IP address from a dedicated subnet for this purpose and is automatically accessible from the node and pod subnets. This is the `enable_private_endpoint` configuration setting. 56 | 57 | When both of these are configured on a GKE cluster, a few key behaviors change: 58 | 59 | 1. The worker nodes no longer have egress to the Internet, which prevents the `nodes` and `pods` from having external access. In addition, `pods` defined using containers from public container image registries like [DockerHub](https://hub.docker.com/search) will not be able to pull those images. To restore this access, this demo implements a [Cloud NAT](https://cloud.google.com/nat/docs/overview) router to provide egress [Network Address Translation](https://en.wikipedia.org/wiki/Network_address_translation) functionality. 60 | 1. Access to other Google Cloud Platform (GCP) APIs like Google Cloud Storage (GCS) and Google Cloud SQL requires enabling [private API access](https://cloud.google.com/vpc/docs/private-access-options) on the VPC Subnet, a private routing configuration which directs traffic headed to GCP APIs entirely over the internal GCP network. This demo connects to the Cloud SQL instance via [private access](https://cloud.google.com/sql/docs/postgres/private-ip). 61 | 1. Access to the Kubernetes API/GKE Cluster Control Plane will only be possible from within the VPC subnets. This demo deploys what is known as a [bastion host](https://cloud.google.com/solutions/connecting-securely) as a dedicated GCE instance in the VPC subnet to allow for an administrator/developer to use [SSH Tunneling](https://www.ssh.com/ssh/tunneling/example) to support `kubectl` access.
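For illustration only, these two settings map directly to `gcloud` flags when creating a private cluster by hand. This demo provisions its cluster via Terraform instead, and the cluster name and master CIDR below are placeholder values, not the demo's:

```console
# Sketch only: a private cluster with private nodes and a private endpoint.
# The cluster name and the /28 master CIDR are placeholders.
gcloud container clusters create example-private-cluster \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-master-authorized-networks \
  --master-ipv4-cidr 172.16.0.32/28
```

With `--enable-private-endpoint` set, `kubectl` access must originate from inside the VPC, which is why a bastion host is needed.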
62 | 63 | ### Workload Identity Overview 64 | 65 | The [current guide](https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine) for how to configure the `cloud-sql-proxy` with the necessary GCP credentials involves creating a [service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) in JSON format, storing that in a Kubernetes-native `secret` inside the `namespace` where the `pod` is to run, and configuring the `pod` to mount that secret on a particular file path inside the pod. However, there are a few downsides to this approach: 66 | 67 | 1. The credentials inside this JSON file are essentially static keys that don't expire unless manually revoked via the GCP APIs. 68 | 1. The act of exporting the credential file to JSON means it touches the disk of the administrator and/or CI/CD system. 69 | 1. Replacing the credential means re-exporting a new Service Account key to JSON, replacing the contents of the Kubernetes `secret` with the updated contents, and restarting the `pod` for the `cloud-sql-proxy` to make use of the new contents. 70 | 71 | [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) helps remove several manual steps and ensures that the `cloud-sql-proxy` is always using a short-lived credential that auto-rotates on its own. Workload Identity, when configured inside a GKE cluster, allows for a Kubernetes Service Account (KSA) to be mapped to a GCP Service Account (GSA) via a process called "Federation". It then installs a proxy on each GKE worker `node` that intercepts all requests to the [GCE Metadata API](https://cloud.google.com/compute/docs/storing-retrieving-metadata) where the dynamic credentials are accessible and returns the current credentials for that GCP Service Account to the process in the `pod` instead of the credentials normally associated with the underlying GCE Instance.
As long as the proper IAM configuration is made to map the KSA to the GSA, the `pod` can be given a dedicated service account with just the permissions needed. 72 | 73 | ## Demo Architecture 74 | 75 | Given a GCP project, the code in this demo will create the following resources via [Terraform](https://terraform.io): 76 | 77 | * A new VPC and new VPC subnets 78 | * A Cloud NAT router for egress access from the VPC subnets 79 | * A Cloud SQL Instance with a Private IP 80 | * A GCE Instance serving as a Bastion Host to support SSH Tunneling 81 | * A GKE Cluster running Workload Identity with no public IPs on either the API or the worker nodes 82 | * A sample Kubernetes deployment that uses the `cloud-sql-proxy` to access the Cloud SQL instance privately and uses Workload Identity to dynamically and securely fetch the GCP credentials. 83 | 84 | ![Demo Architecture Diagram](./img/architecture.png) 85 | 86 | 1. Exposing workloads inside the GKE cluster is done via a standard load balancer strategy. 87 | 1. Accessing the Kubernetes API Server/Control Plane from the Internet is through an `SSH` tunnel on the `Bastion Host`. 88 | 1. GKE worker `nodes` and `pods` running on those `nodes` access the Internet via Cloud NAT through the Cloud Router. 89 | 1. GKE worker `nodes` and `pods` running on those `nodes` access other GCP APIs such as Cloud SQL via Private API Access. 90 | 91 | ### Bastion Host 92 | 93 | Traditionally, a "jump host" or "bastion host" is a dedicated, hardened, and heavily monitored system placed in the [DMZ](https://en.wikipedia.org/wiki/DMZ_(computing)) of a network to allow for secure, remote access. In the cloud, this is commonly deployed as a shared instance that multiple users SSH into and work from when accessing cloud resources. Tools like the [Google Cloud SDK](https://cloud.google.com/sdk/) and [Terraform](https://terraform.io) are often installed on these systems. There are two problems with this approach: 94 | 95 | 1.
If users authenticate to GCP using `gcloud`, their credentials are stored in the `/home` directory on this instance. The same issue applies to users who obtain a valid `kubeconfig` file for accessing a GKE cluster. 96 | 1. If users log in via `SSH` and then use `su` or `sudo` to switch to a shared account to perform privileged operations, the audit logs will no longer be able to directly identify who performed an action. In the case of `sudo` to `root`, that means all GCP credentials in the `/home` directories are available to be used for impersonation attacks (Alex performs malicious actions with Pat's credentials). 97 | 98 | This bastion host attempts to solve both issues. It runs two services: 99 | 100 | 1. An [OpenSSH](https://en.wikipedia.org/wiki/OpenSSH) daemon to support `SSH` access via `gcloud compute ssh` or via [Identity-Aware Proxy](https://cloud.google.com/iap/). 101 | 1. A [TinyProxy](https://github.com/tinyproxy/tinyproxy) daemon listening on `localhost:8888` that provides a simple [HTTP Proxy](https://en.wikipedia.org/wiki/Proxy_server). 102 | 103 | Note: The bastion host is configured to allow `SSH` access from `0.0.0.0/0` via a dedicated firewall rule, but this can and should be restricted to the list of subnets for your needs. 104 | 105 | This means that both `gcloud` and `kubectl` commands can still be run on the local developer/administrator workstation, but `kubectl` commands can be "proxied" through an `SSH Tunnel` made to the bastion on their way to the Kubernetes API without disrupting the TLS connection and certificate verification process. 106 | 107 | From a practical standpoint, using the bastion requires two additional steps for `kubectl` to reach the private cluster's Kubernetes API IP: 108 | 109 | 1. Run `gcloud compute ssh` and forward a local port (`8888`) to the `localhost:8888` on the bastion host where the `tinyproxy` daemon is listening. 110 | 1.
Provide an environment variable (`HTTPS_PROXY=localhost:8888`) when using `kubectl` to instruct it to use the forwarded port that reaches the tinyproxy daemon running on the bastion host on its way to the Kubernetes API. 111 | 112 | ![Bastion SSH Proxy](/img/bastion_proxy.png) 113 | 114 | ### Workload Identity 115 | 116 | The use case of this demo requires that the `pgadmin4` (PostgreSQL Admin UI) container has a `cloud-sql-proxy` "sidecar" that it uses to connect securely to the Cloud SQL instance. The IAM Role that is needed to make this connection is `roles/cloudsql.client`. This demo creates a dedicated GCP Service Account, binds the `roles/cloudsql.client` IAM Role to it at the project level, creates a dedicated Kubernetes Service Account (`postgres`) in the `default` `namespace`, and grants `roles/iam.workloadIdentityUser` on the KSA-to-GSA IAM binding. 117 | 118 | The result is that processes inside `pods` using the `default/postgres` Kubernetes Service Account that reach for the GCE metadata API to retrieve GCP credentials will be given the credentials from the dedicated GCP Service Account with just the Cloud SQL access permissions. There are no static GCP Service Account keys to export, no Kubernetes `secrets` to manage, and the credentials automatically rotate themselves. 119 | 120 | ## Prerequisites 121 | 122 | The steps described in this document require installations of several tools and the proper configuration of authentication to allow them to access your GCP resources. 123 | 124 | ### Cloud Project 125 | 126 | If you do not have a Google Cloud account, please sign up for a free trial [here](https://cloud.google.com). You'll need access to a Google Cloud Project with billing enabled. See [Creating and Managing Projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects) for creating a new project. To make cleanup easier, it's recommended to create a new project.
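As a sketch of that recommendation (the project ID and billing account ID below are placeholders, not values used elsewhere in this demo), a throwaway project can be created and selected with:

```console
# Placeholder project ID and billing account ID; substitute your own.
gcloud projects create my-demo-project-id
gcloud config set project my-demo-project-id
gcloud beta billing projects link my-demo-project-id \
  --billing-account 000000-000000-000000
```

Deleting that one project during cleanup then removes every resource the demo created inside it.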
127 | 128 | ### Required GCP APIs 129 | 130 | The following APIs will be enabled: 131 | 132 | * Compute Engine API 133 | * Kubernetes Engine API 134 | * Cloud SQL Admin API 135 | * Secret Token API 136 | * Stackdriver Logging API 137 | * Stackdriver Monitoring API 138 | * IAM Service Account Credentials API 139 | 140 | ### Run Demo in a Google Cloud Shell 141 | 142 | Click the button below to run the demo in a [Google Cloud Shell](https://cloud.google.com/shell/docs). 143 | 144 | [![Open in Cloud Shell](http://gstatic.com/cloudssh/images/open-btn.svg)](https://console.cloud.google.com/cloudshell/open?cloudshell_git_repo=https://github.com/GoogleCloudPlatform/gke-private-cluster-demo.git&cloudshell_image=gcr.io/graphite-cloud-shell-images/terraform:latest&cloudshell_tutorial=README.md) 145 | 146 | How to check your account's quota is documented here: [quotas](https://cloud.google.com/compute/quotas). 147 | 148 | All the tools for the demo come pre-installed in Cloud Shell. When using Cloud Shell, execute the following command to set up the gcloud CLI, and be sure to set your region and zone when prompted. 149 | 150 | ```console 151 | gcloud init 152 | ``` 153 | 154 | ### Tools 155 | 156 | When not using Cloud Shell, the following tools are required: 157 | 158 | * Access to an existing Google Cloud project. 159 | * Bash and common command line tools (Make, etc.) 160 | * [Terraform v0.12.3+](https://www.terraform.io/downloads.html) 161 | * [gcloud v255.0.0+](https://cloud.google.com/sdk/downloads) 162 | * [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) that matches the latest generally-available GKE cluster version. 163 | 164 | #### Install Terraform 165 | 166 | Terraform is used to automate the manipulation of cloud infrastructure. Its [installation instructions](https://www.terraform.io/intro/getting-started/install.html) are also available online.
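After installing, a quick sanity check is to confirm that the `terraform` binary on your `PATH` meets the v0.12.3+ requirement listed above:

```console
terraform version
```

If multiple Terraform versions are installed, make sure the one that reports here is the one your shell will invoke for this demo.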
167 | 168 | #### Install Cloud SDK 169 | 170 | The Google Cloud SDK is used to interact with your GCP resources. [Installation instructions](https://cloud.google.com/sdk/downloads) for multiple platforms are available online. 171 | 172 | #### Install kubectl CLI 173 | 174 | The kubectl CLI is used to interact with both Kubernetes Engine and Kubernetes in general. [Installation instructions](https://cloud.google.com/kubernetes-engine/docs/quickstart) for multiple platforms are available online. 175 | 176 | ## Deployment 177 | 178 | The steps below will walk you through using Google Kubernetes Engine to create Private Clusters. 179 | 180 | ### Authenticate gcloud 181 | 182 | Prior to running this demo, ensure you have authenticated your gcloud client by running the following command: 183 | 184 | ```console 185 | gcloud auth login 186 | ``` 187 | 188 | ### Configure gcloud settings 189 | 190 | Run `gcloud config list` and make sure that `compute/zone`, `compute/region` and `core/project` are populated with values that work for you. You can choose a [region and zone near you](https://cloud.google.com/compute/docs/regions-zones/). You can set their values with the following commands: 191 | 192 | ```console 193 | # Where the region is us-central1 194 | gcloud config set compute/region us-central1 195 | 196 | Updated property [compute/region]. 197 | ``` 198 | 199 | ```console 200 | # Where the zone inside the region is us-central1-c 201 | gcloud config set compute/zone us-central1-c 202 | 203 | Updated property [compute/zone]. 204 | ``` 205 | 206 | ```console 207 | # Where the project name is my-project-name 208 | gcloud config set project my-project-name 209 | 210 | Updated property [core/project]. 211 | ``` 212 | 213 | ## Create Resources 214 | 215 | To create the entire environment via Terraform, run the following command: 216 | 217 | ```console 218 | make create 219 | 220 | Apply complete! Resources: 33 added, 0 changed, 0 destroyed.
221 | 222 | Outputs: 223 | 224 | ...snip... 225 | bastion_kubectl = HTTPS_PROXY=localhost:8888 kubectl get pods --all-namespaces 226 | bastion_ssh = gcloud compute ssh private-cluster-bastion --project my-project-name --zone us-central1-a -- -L8888:127.0.0.1:8888 227 | cluster_ca_certificate = 228 | cluster_endpoint = 172.16.0.18 229 | cluster_name = private-cluster 230 | gcp_serviceaccount = private-cluster-pg-sa@my-project-name.iam.gserviceaccount.com 231 | get_credentials = gcloud container clusters get-credentials --project my-project-name --region us-central1 --internal-ip private-cluster 232 | postgres_connection = my-project-name:us-central1:private-cluster-pg-410120c4 233 | postgres_instance = private-cluster-pg-410120c4 234 | postgres_pass = 235 | postgres_user = postgres 236 | Fetching cluster endpoint and auth data. 237 | kubeconfig entry generated for private-cluster. 238 | ``` 239 | 240 | Next, review the `pgadmin` `deployment` located in the `/manifests` directory: 241 | 242 | ```console 243 | cat manifests/pgadmin-deployment.yaml 244 | ``` 245 | 246 | The manifest contains comments that explain the key features of the deployment configuration. Now, deploy the application via: 247 | 248 | ```console 249 | make deploy 250 | 251 | Detecting SSH Bastion Tunnel/Proxy 252 | Did not detect a running SSH tunnel. Opening a new one. 253 | Pseudo-terminal will not be allocated because stdin is not a terminal. 254 | SSH Tunnel/Proxy is now running. 255 | Creating the PgAdmin Configmap 256 | configmap/connectionname created 257 | Creating the PgAdmin Console secret 258 | secret/pgadmin-console created 259 | serviceaccount/postgres created 260 | serviceaccount/postgres annotated 261 | Deploying PgAdmin 262 | deployment.apps/pgadmin4-deployment created 263 | Waiting for rollout to complete and pod available. 264 | Waiting for deployment "pgadmin4-deployment" rollout to finish: 0 of 1 updated replicas are available...
265 | deployment "pgadmin4-deployment" successfully rolled out 266 | ``` 267 | 268 | The `make deploy` step ran the contents of `./scripts/deploy.sh` which did a few things: 269 | 270 | 1. Created an SSH tunnel to the Bastion Host (if it wasn't running already) that should still be running in the background. 271 | 1. Used `kubectl` to create a configmap containing the connection string for the correct Cloud SQL Instance and a dedicated service account named `postgres` in the `default` namespace, and added a custom annotation to that service account. 272 | 1. Ran `kubectl` to deploy the `pgadmin4` deployment manifest. 273 | 1. Ran `kubectl` to wait for that deployment to be up and healthy. 274 | 275 | Now, with the SSH tunnel still running in the background, you can interact with the GKE cluster using `kubectl`. For example: 276 | 277 | ```console 278 | HTTPS_PROXY=localhost:8888 kubectl get pods --all-namespaces 279 | ``` 280 | 281 | Because that environment variable must be present for each invocation of `kubectl`, you can `alias` that command to reduce the amount of typing needed each time: 282 | 283 | ```console 284 | alias k="HTTPS_PROXY=localhost:8888 kubectl" 285 | ``` 286 | 287 | And now, using `kubectl` looks like the following: 288 | 289 | ```console 290 | k get pods --all-namespaces 291 | k get namespaces 292 | k get svc --all-namespaces 293 | ``` 294 | 295 | Note: `export`-ing the `HTTPS_PROXY` setting in the current terminal may alter the behavior of other common tools that honor that setting (e.g. `curl` and other web related tools). The shell `alias` helps localize the usage to the current invocation of the command only. 296 | 297 | ## Validation 298 | 299 | If no errors are displayed during deployment, you should see your Kubernetes Engine cluster in the [GCP Console](https://console.cloud.google.com/kubernetes) with the sample application deployed. This may take a few minutes. 300 | 301 | Validation is fully automated.
The validation script checks for the existence of the Postgres DB, Google Kubernetes Engine cluster, and the deployment of pgAdmin. In order to validate that resources are installed and working correctly, run: 302 | 303 | ```console 304 | make validate 305 | 306 | Detecting SSH Bastion Tunnel/Proxy 307 | Detected a running SSH tunnel. Skipping. 308 | Checking that pgAdmin is deployed on the cluster... pass 309 | Checking that pgAdmin is able to connect to the database instance... pass 310 | ``` 311 | 312 | The `make validate` target performs two simple checks that can also be done manually. Checking the status of the `pgadmin4` deployment for health: 313 | 314 | ```console 315 | k rollout status --timeout=10s -f manifests/pgadmin-deployment.yaml 316 | 317 | deployment "pgadmin4-deployment" successfully rolled out 318 | ``` 319 | 320 | And using `kubectl exec` to run the `pg_isready` command from the `pgadmin4` container which performs a test connection to the Postgres database and verifies end-to-end success: 321 | 322 | ```console 323 | k exec -it -n default "$(k get pod -l 'app=pgadmin4' -ojsonpath='{.items[].metadata.name}')" -c pgadmin4 -- pg_isready -h localhost -t 10 324 | 325 | localhost:5432 - accepting connections 326 | ``` 327 | 328 | You may also wish to view the logs of the key `pods` in this deployment. To see the logs of the `pgadmin4` container: 329 | 330 | ```console 331 | k logs -l 'app=pgadmin4' -c pgadmin4 -f 332 | ``` 333 | 334 | To see the logs of the `cloud-sql-proxy` container: 335 | 336 | ```console 337 | k logs -l 'app=pgadmin4' -c cloudsql-proxy -f 338 | ``` 339 | 340 | To see the logs of the `gke-metadata-proxy` containers which handle requests for "Workload Identity": 341 | 342 | ```console 343 | k logs -n kube-system -l 'k8s-app=gke-metadata-server' -f 344 | ``` 345 | 346 | ## Tear Down 347 | 348 | When you are finished with this example you will want to clean up the resources that were created so that you avoid accruing charges.
Teardown is fully automated. The destroy script deletes all resources created using Terraform. Terraform variable configuration and state files are also cleaned up if the Terraform destroy is successful. To delete all created resources in GCP, run: 349 | 350 | ```console 351 | make teardown 352 | ``` 353 | 354 | ## Troubleshooting 355 | 356 | * **The create script fails with a `Permission denied` when running Terraform** - The credentials that Terraform is using do not provide the necessary permissions to create resources in the selected projects. Ensure that the account listed in `gcloud config list` has the necessary permissions to create resources. If it does, regenerate the application default credentials using `gcloud auth application-default login`. 357 | * **Terraform timeouts** - Sometimes resources may take longer than usual to create and Terraform will time out. The solution is to just run `make create` again. Terraform should pick up where it left off. 358 | 359 | ## Relevant Material 360 | 361 | * [Private GKE Clusters](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster) 362 | * [Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) 363 | * [Terraform Google Provider](https://www.terraform.io/docs/providers/google/) 364 | * [Securely Connecting to VM Instances](https://cloud.google.com/solutions/connecting-securely) 365 | * [Cloud NAT](https://cloud.google.com/nat/docs/overview) 366 | * [Kubernetes Engine - Hardening your cluster's security](https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster) 367 | 368 | Note: **This is not an officially supported Google product** 369 | --------------------------------------------------------------------------------