├── .github └── CODEOWNERS ├── .gitignore ├── 001-github-to-scc ├── .goblet │ └── config.json ├── README.md ├── main.py └── requirements.txt ├── 002-cloud-custodian-metric-filters ├── README.md ├── enable_uniform_bucket_access.yml ├── instance_utilization.yml ├── unused_firewalls.yml ├── unused_service_account_keys.yml └── unused_service_accounts.yml ├── 003-cost-alerts ├── .goblet │ └── config.json ├── README.md ├── main.py ├── requirements.txt └── slack_message.png ├── 004-dynamic-serverless-loadbalancer ├── README.md └── main.tf ├── 005-bucket-replication ├── .goblet │ └── config.json ├── README.md ├── bucket_replications.json ├── main.py └── requirements.txt ├── 006-service-account-key-rotater ├── .goblet │ └── config.json ├── README.md ├── constants.py ├── core │ ├── secrets_manager.py │ └── service_account.py ├── main.py ├── plugins │ ├── base.py │ ├── github.py │ └── storage.py ├── requirements.txt ├── setup.py └── tests │ └── test_plugin_base.py ├── 007-bigquery-usage-detection ├── .goblet │ └── config.json ├── README.md ├── main.py └── requirements.txt ├── 008-slack-approval-process ├── IAM-permission-provision │ ├── .goblet │ │ └── config.json │ ├── main.py │ └── requirements.txt ├── IAM-permission-request │ ├── .goblet │ │ └── config.json │ ├── main.py │ └── requirements.txt └── README.md ├── 009-cloudrun-cloudbuild-configs ├── .goblet │ └── config.json ├── README.md ├── main.py └── requirements.txt ├── 010-cloudrun-jobs ├── .goblet │ └── config.json ├── Dockerfile ├── README.md ├── main.py └── requirements.txt ├── 011-cloud-run-cors-anywhere ├── Dockerfile ├── LICENSE ├── README.md ├── index.js ├── package-lock.json └── package.json ├── 012-backend-alerts ├── .gitignore ├── .goblet │ └── config.json ├── README.md ├── main.py └── requirements.txt ├── 013-bigquery-remote-functions ├── README.md ├── main.py └── requirements.txt ├── 014-goblet-private-services ├── .goblet │ └── config.json ├── Dockerfile ├── README.md ├── main.py └── requirements.txt ├── 
015-goblet-cloudtask ├── .gitignore ├── .goblet │ └── config.json.sample ├── Dockerfile ├── README.md ├── main.py └── requirements.txt ├── 016-gcp-metrics-slack-alerts ├── .goblet │ └── config.json.sample ├── README.md ├── main.py ├── preview.png ├── requirements.txt └── utils │ ├── cloudrun.py │ ├── slack_message.py │ └── utility_functions.py ├── 017-goblet-iam ├── .goblet │ ├── autogen_iam_role.json │ └── config.json ├── README.md ├── main.py └── requirements.txt ├── 018-dataform-bq-remote-functions ├── README.md ├── main.py └── requirements.txt ├── LICENSE └── README.md /.github/CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @premisedata/infrastructure infrastructure@premise.com 2 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | venv 2 | __pycache__ 3 | 4 | .DS_Store 5 | **/.goblet/*.zip 6 | .idea 7 | -------------------------------------------------------------------------------- /001-github-to-scc/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "cloudfunction":{ 3 | "environmentVariables": 4 | { 5 | "ORGANIZATION": "ORG_ID", 6 | "SOURCE": "SOURCE_ID", 7 | "GITHUB_SECRET": "GITHUB_SECRET" 8 | }, 9 | "serviceAccountEmail": "SERVICE_ACCOUNT_EMAIL" 10 | }, 11 | "bindings": [ 12 | { 13 | "role": "roles/cloudfunctions.invoker", 14 | "members": [ 15 | "allUsers" 16 | ] 17 | } 18 | ] 19 | } -------------------------------------------------------------------------------- /001-github-to-scc/README.md: -------------------------------------------------------------------------------- 1 | # Github Findings to SCC 2 | 3 | Create/Reopen/Resolve findings from [Github Security](https://docs.github.com/en/code-security) and forward to [GCP Security Command Center](https://cloud.google.com/security-command-center) 4 | 5 | * 
repository_vulnerability_alert 6 | * code_scanning_alert 7 | * secret_scanning_alert 8 | 9 | ## Written Tutorial 10 | 11 | [Tutorial](https://engineering.premise.com/tutorial-publishing-github-findings-to-security-command-center-2d1749f530bc) 12 | 13 | ## Install 14 | 15 | * pip install goblet-gcp 16 | 17 | ## Prerequisites 18 | 19 | * GCP Account with Security Command Center enabled 20 | * Custom Security Command Center Source 21 | * Github Account with at least one repo with security scanning enabled 22 | * Python environment (>3.8) 23 | * Gcloud cli 24 | 25 | ## Setup 26 | 27 | * Set the following variables in `.goblet/config.json` 28 | 29 | * "ORGANIZATION" 30 | * "SOURCE" 31 | * "GITHUB_SECRET" 32 | * "SERVICE_ACCOUNT_EMAIL" 33 | 34 | ## Deploy 35 | 36 | * run ```goblet deploy --project=PROJECT --location LOCATION``` 37 | * Use the generated cloudfunction url and GITHUB_SECRET to create the Github webhook 38 | -------------------------------------------------------------------------------- /001-github-to-scc/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, Response, goblet_entrypoint 2 | from google.cloud import securitycenter 3 | from google.cloud.securitycenter_v1 import CreateFindingRequest, Finding, SetFindingStateRequest 4 | from hashlib import sha1, md5 5 | from hmac import HMAC, compare_digest 6 | import datetime 7 | import os 8 | 9 | app = Goblet(function_name="github-to-scc") 10 | goblet_entrypoint(app) 11 | 12 | ORGANIZATION = os.environ.get("ORGANIZATION") 13 | SOURCE = os.environ.get("SOURCE") 14 | GITHUB_SECRET = os.environ.get("GITHUB_SECRET") 15 | 16 | RESOLVE_ACTIONS = [ 17 | 'dismiss', 18 | 'resolve', 19 | 'fixed', 20 | 'resolved', 21 | 'closed_by_user' 22 | ] 23 | POST_ACTIONS = [ 24 | 'create', 25 | 'created', 26 | 'appeared_in_branch' 27 | ] 28 | 29 | REOPEN_ACTIONS = [ 30 | 'reopened_by_user', 31 | 'reopened' 32 | ] 33 | 34 | # github alerts 
https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads#secret_scanning_alert 35 | 36 | def extract_category(github_alert): 37 | if github_alert.get("instances"): 38 | return "CODE_SCANNING" 39 | if github_alert.get("affected_range"): 40 | return "DEPENDENCY_VULNERABILITY" 41 | if github_alert.get("secret_type"): 42 | return "SECRET_SCANNING" 43 | 44 | def extract_severity(github_alert): 45 | severity = github_alert.get("severity") 46 | if severity == "low": 47 | return Finding.Severity.LOW 48 | if severity == "moderate": 49 | return Finding.Severity.MEDIUM 50 | if severity == "high": 51 | return Finding.Severity.HIGH 52 | if severity == "critical": 53 | return Finding.Severity.CRITICAL 54 | return Finding.Severity.SEVERITY_UNSPECIFIED 55 | 56 | def remove_null(alert): 57 | return {k: v for k, v in alert.items() if v is not None} 58 | 59 | def verify_signature(req): 60 | # default to '' so a request without the header fails verification instead of raising 61 | received_sign = req.headers.get('X-Hub-Signature', '').split('sha1=')[-1].strip() 62 | secret = GITHUB_SECRET.encode() 63 | expected_sign = HMAC(key=secret, msg=req.data, digestmod=sha1).hexdigest() 64 | return compare_digest(received_sign, expected_sign) 65 | 66 | def post_finding(client, github_event, hashed_id): 67 | event_time = datetime.datetime.now() 68 | category = extract_category(github_event["alert"]) 69 | 70 | finding = Finding( 71 | state=Finding.State.ACTIVE, 72 | resource_name=github_event['repository']["full_name"], 73 | severity=extract_severity(github_event['alert']), 74 | category=category, 75 | event_time=event_time, 76 | source_properties=remove_null(github_event['alert']), 77 | external_uri=github_event["repository"]["html_url"] + '/security' 78 | ) 79 | request = CreateFindingRequest( 80 | parent=f"organizations/{ORGANIZATION}/sources/{SOURCE}", 81 | finding_id=hashed_id, 82 | finding=finding, 83 | ) 84 | 85 | return client.create_finding( 86 | request=request 87 | ) 88 | 89 | def set_finding_state(client, state, hashed_id): 90 | start_time = 
datetime.datetime.now() 91 | 92 | request = SetFindingStateRequest( 93 | name=f"organizations/{ORGANIZATION}/sources/{SOURCE}/findings/{hashed_id}", 94 | state=state, 95 | start_time=start_time 96 | ) 97 | return client.set_finding_state(request=request) 98 | 99 | @app.http() 100 | def github_webhook(request): 101 | if not verify_signature(request): 102 | return Response('Forbidden', status_code=403) 103 | github_event = request.json 104 | client = securitycenter.SecurityCenterClient() 105 | 106 | category = extract_category(github_event["alert"]) 107 | finding_id = github_event["alert"].get("id") or github_event["alert"].get("number") 108 | if not finding_id: 109 | return Response('Not a valid Event', status_code=400) 110 | full_id = f"{github_event['repository']['full_name']}{category}{finding_id}" 111 | hashed_id = md5(full_id.encode()).hexdigest() 112 | 113 | if github_event["action"] in POST_ACTIONS: 114 | post_finding(client, github_event, hashed_id) 115 | 116 | if github_event["action"] in RESOLVE_ACTIONS: 117 | set_finding_state(client, Finding.State.INACTIVE, hashed_id) 118 | 119 | if github_event["action"] in REOPEN_ACTIONS: 120 | set_finding_state(client, Finding.State.ACTIVE, hashed_id) 121 | 122 | return 'SUCCESS' 123 | -------------------------------------------------------------------------------- /001-github-to-scc/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 2 | google-cloud-securitycenter==1.23.2 -------------------------------------------------------------------------------- /002-cloud-custodian-metric-filters/README.md: -------------------------------------------------------------------------------- 1 | # Cloud Custodian Metric Filters 2 | 3 | ## Written Tutorial 4 | 5 | [Tutorial](https://engineering.premise.com/cleaning-up-your-google-cloud-environment-safety-guaranteed-2de51fb8620a) 6 | 7 | ## Setting Up Cloud Custodian 8 | 9 | 10 | Cloud Custodian is a Python application 
and supports Python 3 on Linux, MacOS and Windows. 11 | 12 | To install run the following commands: 13 | 14 | `pip install c7n` 15 | `pip install c7n_gcp` 16 | 17 | To run Cloud Custodian you will also need to be authenticated to GCP. If you are not already authenticated you will first need to install gcloud (the GCP Command Line Interface). 18 | 19 | Then run the following command: 20 | 21 | `gcloud auth application-default login` 22 | 23 | Executing the command will open a browser window with prompts to finish configuring your credentials. For more information on this command, view its documentation. 24 | 25 | ## Running Cloud Custodian 26 | 27 | Cloud Custodian uses policies to interact with cloud resources. A policy is simply a YAML file that follows a predetermined schema to describe what you want Custodian to do. 28 | There are three main components to a policy: 29 | 30 | * Resource: the type of resource to run the policy against 31 | 32 | * Filters: criteria to produce a specific subset of resources 33 | 34 | * Actions: directives to take on the filtered set of resources 35 | 36 | The example below illustrates a policy that filters for compute engine resources named test, and then performs the “stop” action on each resource that matches that filter. 37 | 38 | ```yaml 39 | policies: 40 | - name: my-first-policy 41 | description: | 42 | Stops all compute instances that are named "test" 43 | resource: gcp.instance 44 | filters: 45 | - type: value 46 | key: name 47 | value: test 48 | actions: 49 | - type: stop 50 | ``` 51 | 52 | To execute this policy in your project you would simply execute the following command, where custodian.yml is the file that contains your policy. 53 | 54 | `GOOGLE_CLOUD_PROJECT="project-id" custodian run --output-dir=. custodian.yml` 55 | 56 | Adding the flag `--dryrun` is useful since it ensures that the actions will not be run, so you can see what resources would be affected by the policy before executing any destructive actions. 
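

Before a real run, `custodian validate custodian.yml` checks a policy file against the full schema. As a lightweight illustration of the three-part structure described above (this helper is not part of Cloud Custodian, just a sketch over the dict you would get from `yaml.safe_load()` on a policy file):

```python
# Minimal structural check for a parsed Custodian policy document.
# Illustrative only; `custodian validate` performs the real schema check.
def check_policy_shape(doc):
    problems = []
    if "policies" not in doc:
        # a common typo is "polices", which this catches
        problems.append("top-level 'policies' key missing")
    for policy in doc.get("policies", []):
        # every policy needs at least a name and a resource type;
        # filters and actions are optional
        for key in ("name", "resource"):
            if key not in policy:
                problems.append(f"policy missing required key: {key}")
    return problems

print(check_policy_shape({"policies": [{"name": "my-first-policy", "resource": "gcp.instance"}]}))  # []
print(check_policy_shape({"polices": []}))  # ["top-level 'policies' key missing"]
```

Catching a missing `policies` key or a policy without `resource` this way mirrors the kind of error the real schema validation reports.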
57 | 58 | ## Using Metric Filter Policies 59 | 60 | To run any of the included metric policies you would use the following command, where `project-id` is the project you would like to run the policy against and POLICY.yml should be the name of the corresponding policy you would like to run. 61 | 62 | `GOOGLE_CLOUD_PROJECT="project-id" custodian run --output-dir=. POLICY.yml --dryrun` 63 | 64 | Remove the `--dryrun` flag if you are comfortable triggering the actions. 65 | -------------------------------------------------------------------------------- /002-cloud-custodian-metric-filters/enable_uniform_bucket_access.yml: -------------------------------------------------------------------------------- 1 | policies: 2 | - name: enable-uniform-bucket-access 3 | resource: gcp.bucket 4 | filters: 5 | - iamConfiguration.uniformBucketLevelAccess.enabled: false 6 | - type: metrics 7 | name: storage.googleapis.com/authz/acl_based_object_access_count 8 | value: -1 9 | days: 30 10 | missing-value: -1 11 | op: eq 12 | aligner: ALIGN_COUNT 13 | actions: 14 | - type: set-uniform-access 15 | state: true -------------------------------------------------------------------------------- /002-cloud-custodian-metric-filters/instance_utilization.yml: -------------------------------------------------------------------------------- 1 | policies: 2 | - name: instance-utilization 3 | resource: gcp.instance 4 | filters: 5 | - type: metrics 6 | name: compute.googleapis.com/instance/cpu/utilization 7 | aligner: ALIGN_MEAN 8 | days: 30 9 | value: 10 10 | op: less-than -------------------------------------------------------------------------------- /002-cloud-custodian-metric-filters/unused_firewalls.yml: -------------------------------------------------------------------------------- 1 | policies: 2 | - name: unused-firewalls 3 | resource: gcp.firewall 4 | filters: 5 | - type: metrics 6 | name: firewallinsights.googleapis.com/subnet/firewall_hit_count 7 | aligner: ALIGN_COUNT 8 | days: 30 9 | value: 
-1 10 | missing-value: -1 11 | op: eq 12 | actions: 13 | - disable -------------------------------------------------------------------------------- /002-cloud-custodian-metric-filters/unused_service_account_keys.yml: -------------------------------------------------------------------------------- 1 | policies: 2 | - name: unused-service-account-keys 3 | resource: gcp.service-account-key 4 | filters: 5 | - type: value 6 | key: keyType 7 | value: SYSTEM_MANAGED 8 | op: ne 9 | - type: metrics 10 | name: iam.googleapis.com/service_account/key/authn_events_count 11 | value: -1 12 | days: 30 13 | missing-value: -1 14 | op: eq 15 | aligner: ALIGN_SUM 16 | actions: 17 | - delete -------------------------------------------------------------------------------- /002-cloud-custodian-metric-filters/unused_service_accounts.yml: -------------------------------------------------------------------------------- 1 | policies: 2 | - name: unused-service-accounts 3 | resource: gcp.service-account 4 | filters: 5 | - type: metrics 6 | name: iam.googleapis.com/service_account/authn_events_count 7 | value: -1 8 | days: 30 9 | missing-value: -1 10 | op: eq 11 | aligner: ALIGN_SUM 12 | actions: 13 | - disable -------------------------------------------------------------------------------- /003-cost-alerts/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "cloudfunction":{ 3 | "environmentVariables": 4 | { 5 | "SLACK_WEBHOOK": "SLACK_WEBHOOK", 6 | "BILLING_ORG": "BILLING_ORG", 7 | "BILLING_ID": "BILLING_ID", 8 | "BQ_TABLE": "BQ_TABLE" 9 | }, 10 | "serviceAccountEmail": "SERVICE_ACCOUNT_EMAIL" 11 | } 12 | } -------------------------------------------------------------------------------- /003-cost-alerts/README.md: -------------------------------------------------------------------------------- 1 | # Cost Alerts 2 | 3 | Publish alerts daily on a per service basis to a slack channel when any large percentage cost increase or decrease 
occurs. 4 | 5 | ![slack message](slack_message.png) 6 | 7 | ## Written Tutorial 8 | 9 | [Tutorial](https://engineering.premise.com/tutorial-cost-spike-alerting-for-google-cloud-platform-gcp-46fd26ae3f6a) 10 | 11 | ## Install 12 | 13 | * pip install goblet-gcp 14 | 15 | ## Prerequisites 16 | 17 | * GCP Account 18 | * Python environment (>3.8) 19 | * Gcloud cli 20 | * Slack Webhook 21 | 22 | ## Setup 23 | 24 | * Set the following variables in `.goblet/config.json` 25 | 26 | * "SLACK_WEBHOOK" 27 | * "BILLING_ORG" 28 | * "BILLING_ID" 29 | * "BQ_TABLE" 30 | * "SERVICE_ACCOUNT_EMAIL" 31 | 32 | ## Deploy 33 | 34 | * run ```goblet deploy --project=PROJECT --location LOCATION``` -------------------------------------------------------------------------------- /003-cost-alerts/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, goblet_entrypoint 2 | from slack_sdk.webhook import WebhookClient 3 | import os 4 | from datetime import datetime, timezone, timedelta 5 | import urllib.parse 6 | from google.cloud import bigquery 7 | 8 | app = Goblet(function_name="gcp-cost-spike-alert") 9 | goblet_entrypoint(app) 10 | 11 | # Thresholds 12 | AMOUNT = 250 13 | AMOUNT_CHANGED = 500 14 | PERCENTAGE = 1.1 15 | 16 | # ENV Vars 17 | SLACK_WEBHOOK = os.environ.get("SLACK_WEBHOOK") 18 | BILLING_ORG = os.environ.get("BILLING_ORG") 19 | BILLING_ID = os.environ.get("BILLING_ID") 20 | BQ_TABLE = os.environ.get("BQ_TABLE") 21 | 22 | webhook = WebhookClient(SLACK_WEBHOOK) 23 | 24 | # Set Schedule Here 25 | @app.schedule("0 15 * * *", timezone="EST") 26 | def post_slack(): 27 | daily_costs = get_daily_costs() 28 | flagged_costs = parse_cost_changes(daily_costs) 29 | for cost in flagged_costs: 30 | # Add Additional Integrations Here 31 | webhook.send(text="fallback", blocks=slack_block(cost)) 32 | 33 | # Query to get Daily Costs 34 | def get_daily_costs(): 35 | client = bigquery.Client() 36 | now = datetime.now(timezone.utc) 37 | 
prev_utc = now -timedelta(2) 38 | curr_utc = now - timedelta(1) 39 | 40 | QUERY = ( 41 | f"""SELECT 42 | project.name as project, 43 | sku.id as sku_id, 44 | sku.description as sku_def, 45 | service.id as service_id, 46 | service.description as service_def, 47 | SUM(CASE WHEN EXTRACT(DAY FROM usage_start_time) = {prev_utc.day} THEN cost ELSE 0 END) AS prev_day, 48 | SUM(CASE WHEN EXTRACT(DAY FROM usage_start_time) = {curr_utc.day} THEN cost ELSE 0 END) AS curr_day, 49 | FROM `{BQ_TABLE}` 50 | WHERE DATE_TRUNC(usage_start_time, DAY) = "{prev_utc.strftime('%Y-%m-%d')}" or DATE_TRUNC(usage_start_time, DAY) = "{curr_utc.strftime('%Y-%m-%d')}" 51 | GROUP BY 1,2,3,4,5 52 | ORDER BY 1;""" 53 | ) 54 | query_job = client.query(QUERY) # API request 55 | rows = query_job.result() 56 | return rows 57 | 58 | # Extract Large Spikes from Costs 59 | def parse_cost_changes(rows): 60 | flagged_items = [] 61 | for item in rows: 62 | i1 = float(item["prev_day"]) 63 | i2 = float(item["curr_day"]) 64 | if (i1 > AMOUNT or i2 > AMOUNT) and i1 != 0 and i2 != 0 and ((i2 /i1) >= PERCENTAGE or (i1 /i2) >= PERCENTAGE) or abs(i2-i1) >= AMOUNT_CHANGED: 65 | flagged_items.append(item) 66 | return flagged_items 67 | 68 | # Slack Message Formatting 69 | def slack_block(item): 70 | encoded_sku = urllib.parse.quote_plus(f"services/{item['service_id']}/skus/{item['sku_id']}") 71 | billing_link = f"https://console.cloud.google.com/billing/{BILLING_ID}/reports;projects={item['project']};skus={encoded_sku}?organizationId={BILLING_ORG}&orgonly=true" 72 | 73 | now = datetime.now(timezone.utc) 74 | prev_utc = now -timedelta(2) 75 | curr_utc = now - timedelta(1) 76 | 77 | return [ 78 | { 79 | "type": "section", 80 | "text": { 81 | "type": "mrkdwn", 82 | "text": f"*Large Cost Change Detected:*\n*<{billing_link}|View in Billing>*" 83 | } 84 | }, 85 | { 86 | "type": "section", 87 | "text": { 88 | "type": "mrkdwn", 89 | "text": f"*Project:*\n{item['project']}" 90 | } 91 | }, 92 | { 93 | "type": "section", 94 | 
"text": { 95 | "type": "mrkdwn", 96 | "text": f"*SKU:*\n{item['sku_def']}" 97 | } 98 | }, 99 | { 100 | "type": "section", 101 | "text": { 102 | "type": "mrkdwn", 103 | "text": f"*Service:*\n{item['service_def']}" 104 | } 105 | }, 106 | { 107 | "type": "section", 108 | "fields": [ 109 | { 110 | "type": "mrkdwn", 111 | "text": f"*Previous Cost ({prev_utc.strftime('%Y-%m-%d')}):*\n${item['prev_day']:.2f}" 112 | }, 113 | { 114 | "type": "mrkdwn", 115 | "text": f"*Latest Cost ({curr_utc.strftime('%Y-%m-%d')}):*\n${item['curr_day']:.2f}" 116 | } 117 | ] 118 | }, 119 | { 120 | "type": "divider" 121 | } 122 | ] 123 | -------------------------------------------------------------------------------- /003-cost-alerts/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 2 | slack_sdk==3.21.3 3 | google-cloud-bigquery==3.11.4 -------------------------------------------------------------------------------- /003-cost-alerts/slack_message.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/premisedata/gcp-tutorials/99502f88a49edf0f219136832267ea875ff9f2b6/003-cost-alerts/slack_message.png -------------------------------------------------------------------------------- /004-dynamic-serverless-loadbalancer/README.md: -------------------------------------------------------------------------------- 1 | # Dynamic Serverless Loadbalancer 2 | 3 | Google Cloud Platform (GCP) offers support for many different serverless services that include app engine, cloudrun, cloudfunctions, and api gateway. Each service has its pros and cons and often companies will host applications in multiple if not all of these services. One difficulty in managing multiple different types of services is maintaining a robust networking layer. 4 | 5 | Our solution at Premise is to maintain one single load balancer that then routes requests to all of our individual services. 
The difficulty with this approach is that the singular load balancer can become quite complex and tough to maintain. To solve this problem, we leveraged Terraform's GCP serverless load balancer module combined with dynamic blocks. The result is a simple, scalable, and customizable solution to maintain a singular load balancer for any number of serverless applications. 6 | 7 | 8 | ## Written Tutorial 9 | 10 | [Tutorial](https://austennovis.medium.com/e15751853312) 11 | 12 | ## Prerequisites 13 | 14 | * GCP Account 15 | * Terraform 16 | 17 | ## Setup 18 | 19 | Set the following local variables in `main.tf`. 20 | 21 | * "domain" 22 | * "project" 23 | * "region" 24 | * "env" 25 | * "services" 26 | 27 | Specify your backend services in the `services` variable. 28 | 29 | An example service could be 30 | 31 | ``` 32 | { 33 | "service" : "default", 34 | "type" : "app_engine", 35 | "path" : "/default/*" 36 | } 37 | ``` 38 | 39 | 40 | ## Plan 41 | 42 | Test your terraform configuration by running ```terraform plan``` 43 | 44 | ## Deploy 45 | 46 | Deploy your terraform resources by running ```terraform apply``` -------------------------------------------------------------------------------- /004-dynamic-serverless-loadbalancer/main.tf: -------------------------------------------------------------------------------- 1 | provider "google" {} 2 | 3 | terraform { 4 | backend "local" {} 5 | } 6 | 7 | locals { 8 | domain = "DOMAIN.COM" // Your Domain Name 9 | project = "PROJECT" // Your GCP project 10 | region = "us-central1" // Your Region 11 | env = "dev" // Your environment 12 | 13 | // Backend Services 14 | services = [ 15 | { 16 | "service" : "default", 17 | "type" : "app_engine", 18 | "path" : "/default/*" 19 | }, 20 | { 21 | "service" : "service1", 22 | "type" : "app_engine", 23 | "path" : "/service1/*", 24 | "path_prefix_rewrite" : "/" 25 | }, 26 | { 27 | "service" : "service2", 28 | "type" : "cloud_run", 29 | "path" : "/service2/*", 30 | "path_prefix_rewrite" : "/" 31 | } 
32 | ] 33 | } 34 | 35 | 36 | module "lb-http" { 37 | source = "GoogleCloudPlatform/lb-http/google//modules/serverless_negs" 38 | version = "~> 6.1.1" 39 | name = "${local.env}-${local.domain}" 40 | project = local.project 41 | 42 | ssl = true 43 | managed_ssl_certificate_domains = ["${local.env}.${local.domain}"] 44 | https_redirect = true 45 | create_url_map = false 46 | url_map = google_compute_url_map.url-map.self_link 47 | 48 | backends = { 49 | for serviceObj in local.services : 50 | serviceObj.service => { 51 | description = serviceObj.service 52 | groups = [ 53 | { 54 | group = google_compute_region_network_endpoint_group.neg[serviceObj.service].self_link 55 | } 56 | ] 57 | enable_cdn = false 58 | security_policy = null 59 | custom_request_headers = null 60 | custom_response_headers = null 61 | timeout_sec = 300 62 | 63 | iap_config = { 64 | enable = false 65 | oauth2_client_id = "" 66 | oauth2_client_secret = "" 67 | } 68 | log_config = { 69 | enable = false 70 | sample_rate = null 71 | } 72 | } 73 | } 74 | } 75 | 76 | resource "google_compute_region_network_endpoint_group" "neg" { 77 | for_each = { for service in local.services : "${service.service}" => service } 78 | name = "${each.value.service}-${local.env}" 79 | network_endpoint_type = "SERVERLESS" 80 | region = local.region 81 | project = local.project 82 | 83 | dynamic "app_engine" { 84 | for_each = each.value.type == "app_engine" ? [{ "service" : each.value.service }] : [] 85 | content { 86 | service = app_engine.value.service 87 | } 88 | } 89 | 90 | dynamic "cloud_run" { 91 | for_each = each.value.type == "cloud_run" ? 
[{ "service" : each.value.service }] : [] 92 | content { 93 | service = cloud_run.value.service 94 | } 95 | } 96 | 97 | } 98 | 99 | resource "google_compute_url_map" "url-map" { 100 | name = "${local.env}-url-map" 101 | description = "${local.env} url mapping for ${local.domain}" 102 | project = local.project 103 | 104 | default_service = module.lb-http.backend_services["default"].self_link 105 | 106 | host_rule { 107 | hosts = ["${local.env}.${local.domain}"] 108 | path_matcher = "default" 109 | } 110 | path_matcher { 111 | name = "default" 112 | default_service = module.lb-http.backend_services["default"].self_link 113 | 114 | dynamic "path_rule" { 115 | for_each = local.services 116 | content { 117 | paths = [path_rule.value.path] 118 | service = module.lb-http.backend_services[path_rule.value.service].self_link 119 | dynamic "route_action" { 120 | for_each = can(path_rule.value.path_prefix_rewrite) ? [{ "path_prefix_rewrite" : path_rule.value.path_prefix_rewrite }] : [] 121 | content { 122 | url_rewrite { 123 | path_prefix_rewrite = route_action.value.path_prefix_rewrite 124 | } 125 | } 126 | } 127 | } 128 | } 129 | } 130 | } 131 | -------------------------------------------------------------------------------- /005-bucket-replication/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "customFiles": ["bucket_replications.json"], 3 | "function_name": "bucket-replicator", 4 | "cloudfunction": { 5 | "environmentVariables": 6 | { 7 | "GOOGLE_PROJECT": "your-project" 8 | }, 9 | "serviceAccountEmail": "your-service-account@your-account.iam.gserviceaccount.com" 10 | } 11 | } -------------------------------------------------------------------------------- /005-bucket-replication/README.md: -------------------------------------------------------------------------------- 1 | # Bucket Replication 2 | 3 | A quick tutorial for setting up bucket replication 4 | 5 | We leverage goblet's use of decorators to set up 
triggers for Cloud Functions, to treat the service like one Cloud Function with multiple triggers. 6 | 7 | ## Written Tutorial 8 | 9 | [Tutorial](https://engineering.premise.com/tutorial-bucket-replication-for-google-cloud-platform-gcp-cloud-storage-44622c59299c) 10 | 11 | ## Prerequisites 12 | 13 | * GCP Account 14 | * GCP CLI 15 | * Python 3.7 16 | 17 | ## Setup 18 | 19 | Make sure you have a service account with `storage.objectViewer` for the source bucket and `storage.objectAdmin` for the destination bucket. 20 | It is probably best to set this up at the bucket level. 21 | 22 | Replace all the placeholders in `.goblet/config.json` and `bucket_replications.json` with your project, service account, and bucket pairings. 23 | 24 | ## Deploy 25 | 26 | Deploy by running `goblet deploy --project {PROJECT} --location {REGION}` 27 | -------------------------------------------------------------------------------- /005-bucket-replication/bucket_replications.json: -------------------------------------------------------------------------------- 1 | { 2 | "your-project": { 3 | "test-bucket-1": ["test-bucket-2", "test-bucket-3"], 4 | "test-bucket-2": ["test-bucket-3"] 5 | }, 6 | "other-project": { 7 | "...": "..." 8 | } 9 | } -------------------------------------------------------------------------------- /005-bucket-replication/main.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | from goblet import Goblet, goblet_entrypoint 4 | from google.cloud import storage 5 | 6 | PROJECT = os.environ["GOOGLE_PROJECT"] 7 | app = Goblet(function_name="bucket-replicator") 8 | goblet_entrypoint(app) 9 | 10 | with open("bucket_replications.json") as file: 11 | bucket_replications = json.load(file) 12 | 13 | for src in bucket_replications[PROJECT].keys(): 14 | @app.storage(src, "finalize") 15 | def replicate(event): 16 | """Triggered by bucket create/modify blob. 
Copies the blob to all target buckets 17 | 18 | :param event: bucket create/modify blob event 19 | """ 20 | bucket = event["bucket"] 21 | name = event["name"] 22 | if name[-1] == '/': 23 | app.log.info("Folder creation... skipping") 24 | return 25 | storage_client = storage.Client() 26 | src_bucket = storage_client.bucket(bucket) 27 | src_blob = src_bucket.blob(name) 28 | dests = bucket_replications[PROJECT][bucket] 29 | for dest in dests: 30 | app.log.info(f"Copying {bucket}/{name} to {dest}") 31 | dest_bucket = storage_client.bucket(dest) 32 | src_bucket.copy_blob(src_blob, dest_bucket) 33 | 34 | @app.storage(src, "delete") 35 | def delete(event): 36 | """Triggered by bucket delete blob. Deletes the blob in all target buckets 37 | 38 | :param event: bucket delete blob event 39 | """ 40 | bucket = event["bucket"] 41 | name = event["name"] 42 | if name[-1] == '/': 43 | app.log.info("Folder deletion... skipping") 44 | return 45 | storage_client = storage.Client() 46 | dests = bucket_replications[PROJECT][bucket] 47 | for dest in dests: 48 | app.log.info(f"Deleting {dest}/{name}") 49 | dest_bucket = storage_client.bucket(dest) 50 | dest_bucket.delete_blob(name) 51 | -------------------------------------------------------------------------------- /005-bucket-replication/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 -------------------------------------------------------------------------------- /006-service-account-key-rotater/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "customFiles": [ 3 | "service-account-key-rotater.egg-info/*" 4 | ], 5 | "function_name": "service-account-key-rotater", 6 | "cloudfunction": { 7 | "environmentVariables": { 8 | "PROJECT": "your-project", 9 | "KEY_VALID_DAYS": "2", 10 | "GH_ORGANIZATION": "your-gh-organization", 11 | "STORAGE_BUCKET_NAME": "your-storage-bucket-name", 12 | 
"ROTATION_PERIOD_SECONDS": "86400s" 13 | }, 14 | "serviceAccountEmail": "your-service-account@your-account.iam.gserviceaccount.com" 15 | } 16 | } -------------------------------------------------------------------------------- /006-service-account-key-rotater/README.md: -------------------------------------------------------------------------------- 1 | # Secret Key Rotation 2 | 3 | Rotate Service Account Keys in GCP Secret Manager. 4 | 5 | The default age for a key is set by `KEY_VALID_DAYS` with a default of `2`. Keys are rotated every day, so you should always have a current one. 6 | 7 | The default rotation period for a key is set by `ROTATION_PERIOD_SECONDS` with a default of `86400s`, which allows the key to be rotated on a daily basis. 8 | 9 | To create a new key use the `service-account-key-rotater` endpoint. The request requires `service_account_name` and `project`. 10 | 11 | ```{"service_account_name":"NAME", "project":"PROJECT"}``` 12 | 13 | The keys are rotated through a pubsub topic. 14 | 15 | ## Written Tutorial 16 | 17 | [Tutorial](https://engineering.premise.com/tutorial-rotating-service-account-keys-using-secret-manager-5f4dc7142d4b) 18 | 19 | ## Getting Keys 20 | 21 | ### Locally 22 | 23 | Use the gcloud functions call api. 24 | 25 | ```bash 26 | gcloud functions call get-secret-key --project {PROJECT} --region {REGION} --data '{"service_account_name":"SERVICE_ACCOUNT", "project":"PROJECT"}' 27 | ``` 28 | 29 | If you have jq you can pipe the results directly into a service account file 30 | 31 | ```bash 32 | gcloud functions call get-secret-key --project {PROJECT} --region {REGION} --data '{"service_account_name":"SERVICE_ACCOUNT", "project":"PROJECT"}' --format json | jq -r '.result' | base64 --decode > "service-account.json" 33 | ``` 34 | 35 | 36 | ## Github Plugin 37 | 38 | The organization for Github Secret Rotation is set by `GH_ORGANIZATION`. 39 | 40 | This app will automatically update github secrets with the new keys. 
41 | In order to use this flow, make sure each `plugin_data` entry of the creation request includes `type` set to `github`. 42 | 43 | To update the secret for a given repository, include the `ghr` flag, which specifies the GitHub repository to update. To update an organization-level secret, set `gho` to `true`. 44 | 45 | `sn` is an optional flag that specifies the secret name; it defaults to `GCP_SA_KEY`. 46 | 47 | Note: `ghr` must be lowercase and `type` must be `github`. 48 | 49 | ``` 50 | { 51 | "service_account_name":"NAME", 52 | "project":"PROJECT", 53 | "plugin_data" : [{ 54 | "type": "github", 55 | "ghr" : "deployment-dev", 56 | "sn": "GCP_SA_KEY" 57 | },{ 58 | "type": "github", 59 | "gho" : true, 60 | "sn": "GCP_SA_KEY" 61 | }] 62 | } 63 | ``` 64 | 65 | 66 | ## Google Cloud Storage Plugin 67 | 68 | The bucket for key rotation is set by `STORAGE_BUCKET_NAME`. 69 | 70 | This application will automatically update the designated bucket with new keys. In order to use this flow, make sure each `plugin_data` entry of the creation request includes `type` set to `storage`. 71 | 72 | `sn` is an optional flag that specifies the secret name; it defaults to `GCP_SA_KEY`.
73 | 74 | Note: `type` must be `storage`. 75 | 76 | ``` 77 | { 78 | "service_account_name":"NAME", 79 | "project":"PROJECT", 80 | "plugin_data" : [{ 81 | "type": "storage", 82 | "sn": "GCP_SA_KEY" 83 | }] 84 | } 85 | ``` 86 | ### Using the secret manager client 87 | 88 | `base64.b64decode(client.get_latest_secret_version(name=SA_NAME))` 89 | 90 | ## Install 91 | 92 | * pip install goblet-gcp 93 | 94 | ## Deploy 95 | 96 | * set the goblet configs for `PROJECT` and `serviceAccountEmail` 97 | * create a service account for Secret Manager and give it the `Service Account Key Admin` and `Secret Manager Admin` permissions 98 | `gcloud alpha services identity create \ 99 | --service "secretmanager.googleapis.com" \ 100 | --project {PROJECT}` 101 | * create a Pub/Sub topic `secret-rotation` and give the Secret Manager service account publish permissions 102 | * run ```goblet deploy -p {PROJECT} -l {REGION}``` -------------------------------------------------------------------------------- /006-service-account-key-rotater/constants.py: -------------------------------------------------------------------------------- 1 | DEFAULT_SECRET_NAME = "gcp_sa_key" 2 | 3 | SECRET_NAME_FIELD = "sn" 4 | SERVICE_ACCOUNT_NAME_FIELD = "service_account_name" 5 | PROJECT_FIELD = "project" 6 | TYPE_FIELD = "type" 7 | 8 | GH_REPO_FIELD = "ghr" 9 | GH_ORG_FIELD = "gho" 10 | 11 | LABEL_DELIMITER = "-_" 12 | -------------------------------------------------------------------------------- /006-service-account-key-rotater/core/secrets_manager.py: -------------------------------------------------------------------------------- 1 | from google.cloud import secretmanager 2 | from datetime import datetime, timedelta 3 | from os import environ 4 | from google.protobuf.timestamp_pb2 import Timestamp 5 | from google.protobuf.duration_pb2 import Duration 6 | 7 | KEY_VALID_DAYS = int(environ.get("KEY_VALID_DAYS", "2")) 8 | ROTATION_PERIOD_SECONDS = environ.get("ROTATION_PERIOD_SECONDS", "86400s") 9 | PROJECT = environ.get("PROJECT") 10 | 11
| 12 | class SecretManagerClient: 13 | """Base class for Google Cloud Secret Manager API""" 14 | 15 | def __init__(self): 16 | """Constructor for Secret Manager Client""" 17 | self.client = secretmanager.SecretManagerServiceClient() 18 | 19 | def get_latest_secret_version(self, name): 20 | """Get Latest Version of Secret 21 | 22 | Args: 23 | name (str): Name of Secret 24 | 25 | Returns: 26 | Secret Payload 27 | """ 28 | response = self.client.access_secret_version( 29 | name=f"projects/{PROJECT}/secrets/{name}/versions/latest" 30 | ) 31 | return response.payload.data.decode() 32 | 33 | def create_secret(self, service_account_name, project, labels): 34 | """Creates Secret to be rotated 35 | 36 | Args: 37 | service_account_name (str): Name of Service Account to create secret for 38 | project (str): Name of project associated with service account 39 | labels (dict): Dictionary of labels associated with secret to be created 40 | 41 | Returns: 42 | Secret Manager Secret 43 | """ 44 | timestamp = Timestamp() 45 | duration = Duration() 46 | timestamp.FromDatetime( 47 | datetime.now().replace(hour=0, minute=0, second=0, microsecond=0) 48 | + timedelta(1) 49 | ) 50 | duration.FromJsonString(ROTATION_PERIOD_SECONDS)  # e.g. "86400s" = one day 51 | secret = self.client.create_secret( 52 | request={ 53 | "parent": f"projects/{PROJECT}", 54 | "secret_id": f"{service_account_name}-{project}", 55 | "secret": { 56 | "replication": {"automatic": {}}, 57 | "topics": [{"name": f"projects/{PROJECT}/topics/secret-rotation"}], 58 | "rotation": { 59 | "next_rotation_time": timestamp, 60 | "rotation_period": duration, 61 | }, 62 | "labels": { 63 | "type": "service_account_key", 64 | "service_account_name": service_account_name, 65 | "project": project, 66 | **labels, 67 | }, 68 | }, 69 | } 70 | ) 71 | return secret 72 | 73 | def create_secret_version(self, secret_name, key): 74 | """Create Secret Version 75 | 76 | Args: 77 | secret_name (str): Name of Secret 78 | key (str): Encoded Service Account Key 79 | """ 80
| self.client.add_secret_version( 81 | request={"parent": secret_name, "payload": {"data": key}} 82 | ) 83 | 84 | def delete_old_secret_versions(self, secret_name): 85 | """Delete Old (Expired) Secret Versions 86 | 87 | Args: 88 | secret_name (str): Name of Secret 89 | """ 90 | versions = self.client.list_secret_versions(request={"parent": secret_name}) 91 | old = [] 92 | for v in versions: 93 | if ( 94 | v.state.name == "ENABLED" 95 | and (datetime.now().astimezone() - v.create_time).days > KEY_VALID_DAYS 96 | ): 97 | old.append(v) 98 | for old_v in old: 99 | self.client.destroy_secret_version(name=old_v.name) 100 | -------------------------------------------------------------------------------- /006-service-account-key-rotater/core/service_account.py: -------------------------------------------------------------------------------- 1 | from googleapiclient import discovery 2 | from oauth2client.client import GoogleCredentials 3 | from datetime import datetime 4 | from os import environ 5 | 6 | 7 | KEY_VALID_DAYS = int(environ.get("KEY_VALID_DAYS", "2")) 8 | 9 | 10 | class ServiceAccountsClient: 11 | """Base Class for Google Cloud Service Account Interactions""" 12 | 13 | def __init__(self): 14 | """Constructor for Service Account Client""" 15 | credentials = GoogleCredentials.get_application_default() 16 | service = discovery.build("iam", "v1", credentials=credentials) 17 | self.client = service.projects().serviceAccounts() 18 | 19 | def create_key(self, service_account_name): 20 | """Create Key for Service Account 21 | 22 | Args: 23 | service_account_name (str): Name of Service Account 24 | Returns: 25 | Service Account Key 26 | """ 27 | key = self.client.keys().create(name=service_account_name).execute() 28 | return key["privateKeyData"] 29 | 30 | def delete_old_keys(self, service_account_name): 31 | """Delete old keys associated with Service Account 32 | 33 | Args: 34 | service_account_name (str): Name of Service Account 35 | """ 36 | resp = ( 37 | 
self.client.keys() 38 | .list(name=service_account_name, keyTypes=["USER_MANAGED"]) 39 | .execute() 40 | ) 41 | keys = resp.get("keys", []) 42 | for k in keys: 43 | if ( 44 | datetime.now().astimezone() 45 | - datetime.fromisoformat(k["validAfterTime"].replace("Z", "+00:00")) 46 | ).days > KEY_VALID_DAYS: 47 | self.client.keys().delete(name=k["name"]).execute() 48 | -------------------------------------------------------------------------------- /006-service-account-key-rotater/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, Response, goblet_entrypoint 2 | import os 3 | import json 4 | from google.api_core.exceptions import AlreadyExists 5 | from core.secrets_manager import SecretManagerClient 6 | from core.service_account import ServiceAccountsClient 7 | from pkg_resources import iter_entry_points 8 | from constants import ( 9 | SERVICE_ACCOUNT_NAME_FIELD, 10 | PROJECT_FIELD, 11 | SECRET_NAME_FIELD, 12 | DEFAULT_SECRET_NAME, 13 | LABEL_DELIMITER, 14 | ) 15 | import logging 16 | 17 | app = Goblet(function_name="service-account-key-rotater", local="local") 18 | app.log.setLevel(logging.INFO) 19 | goblet_entrypoint(app) 20 | 21 | PROJECT = os.environ.get("PROJECT") 22 | 23 | secret_manager_client = SecretManagerClient() 24 | service_account_client = ServiceAccountsClient() 25 | 26 | plugins = [ 27 | entry_point.load()() 28 | for entry_point in iter_entry_points(group="plugins", name=None) 29 | ] 30 | 31 | LABEL_CHARACTER_LIMIT = 63 32 | 33 | 34 | @app.topic("secret-rotation", attributes={"eventType": "SECRET_ROTATE"}) 35 | def rotate_keys(data): 36 | secret = json.loads(data) 37 | sec_project = secret["labels"][PROJECT_FIELD] 38 | service_account_name = secret["labels"][SERVICE_ACCOUNT_NAME_FIELD] 39 | # generate new key 40 | service_account = f"{service_account_name}@{sec_project}.iam.gserviceaccount.com" 41 | sa_name = f"projects/{sec_project}/serviceAccounts/{service_account}" 42 | 
private_key = service_account_client.create_key(sa_name) 43 | encoded_private_key = private_key.encode() 44 | 45 | # upload new key to secret manager 46 | secret_manager_client.create_secret_version(secret["name"], encoded_private_key) 47 | 48 | for plugin in plugins: 49 | schemas = plugin.extract_labels(secret["labels"]) 50 | plugin.update_key(schemas, private_key) 51 | 52 | # destroy old secret version 53 | secret_manager_client.delete_old_secret_versions(secret["name"]) 54 | 55 | # delete old keys 56 | service_account_client.delete_old_keys(sa_name) 57 | 58 | return "Success" 59 | 60 | 61 | @app.http() 62 | def create_secret(request): 63 | data = request.json 64 | if ( 65 | not data 66 | or not data.get(SERVICE_ACCOUNT_NAME_FIELD) 67 | or not data.get(PROJECT_FIELD) 68 | ): 69 | return Response("Missing service_account_name or project", status_code=400) 70 | 71 | service_account_name = data[SERVICE_ACCOUNT_NAME_FIELD] 72 | project = data[PROJECT_FIELD] 73 | 74 | service_account = f"{service_account_name}@{project}.iam.gserviceaccount.com" 75 | sa_name = f"projects/{project}/serviceAccounts/{service_account}" 76 | 77 | plugin_data = data.get("plugin_data", []) 78 | 79 | for plugin in plugins: 80 | plugin.reset_count() 81 | 82 | labels = {} 83 | for info in plugin_data: 84 | info[SERVICE_ACCOUNT_NAME_FIELD] = data[SERVICE_ACCOUNT_NAME_FIELD] 85 | info[PROJECT_FIELD] = data[PROJECT_FIELD] 86 | info[SECRET_NAME_FIELD] = info.get(SECRET_NAME_FIELD, DEFAULT_SECRET_NAME) 87 | for plugin in plugins: 88 | label_key, label = plugin.write_label(info) 89 | if label_key and label: 90 | labels[label_key] = label.lower() 91 | 92 | for _, value in labels.items(): 93 | if len(value) > LABEL_CHARACTER_LIMIT: 94 | values = value.split(LABEL_DELIMITER) 95 | sni = values.index(SECRET_NAME_FIELD) 96 | return Response( 97 | f"Failed to create label for Secret, please reduce secret {values[sni + 1]} to {len(values[sni + 1]) - (len(value) - LABEL_CHARACTER_LIMIT)} characters", 98 | 
status_code=400, 99 | ) 100 | try: 101 | # create new secret 102 | secret = secret_manager_client.create_secret( 103 | service_account_name, project, labels 104 | ) 105 | except AlreadyExists as e: 106 | return Response(e.message, status_code=409) 107 | 108 | # generate new key 109 | private_key = service_account_client.create_key(sa_name) 110 | encoded_private_key = private_key.encode() 111 | 112 | # upload new key to secret manager 113 | secret_manager_client.create_secret_version(secret.name, encoded_private_key) 114 | 115 | for info in plugin_data: 116 | for plugin in plugins: 117 | plugin.initalize_backend(info, private_key) 118 | return f"Secret Created with name {service_account_name}" 119 | -------------------------------------------------------------------------------- /006-service-account-key-rotater/plugins/base.py: -------------------------------------------------------------------------------- 1 | from abc import ABCMeta, abstractmethod 2 | from constants import ( 3 | SERVICE_ACCOUNT_NAME_FIELD, 4 | PROJECT_FIELD, 5 | LABEL_DELIMITER, 6 | TYPE_FIELD, 7 | ) 8 | 9 | 10 | class BasePlugin(metaclass=ABCMeta): 11 | """Abstract Base Class for Plugins""" 12 | 13 | """ 14 | Please refer to constants.py for more information on common fields used.
15 | These keys are used as shorthand for referencing: 16 | 17 | gho: Github Organization 18 | ghr: Github Repository 19 | sn: Secret Name 20 | """ 21 | 22 | count = 0 23 | 24 | @property 25 | @abstractmethod 26 | def type(self): 27 | """Plugin Type""" 28 | pass 29 | 30 | @property 31 | @abstractmethod 32 | def schema(self): 33 | """Schema for Plugin, used to set labels and default values""" 34 | pass 35 | 36 | def is_type(self, type): 37 | return type == self.type 38 | 39 | def reset_count(self): 40 | self.count = 0 41 | 42 | @abstractmethod 43 | def initalize_backend(self, data, key): 44 | pass 45 | 46 | @abstractmethod 47 | def update_key(self, data, key): 48 | pass 49 | 50 | def write_label(self, data): 51 | # Only hyphens (-), underscores (_), lowercase characters, and numbers are allowed. International characters are allowed. 52 | if not data[TYPE_FIELD] == self.type: 53 | return None, None 54 | return self._write_label(data) 55 | 56 | def _write_label(self, data) -> tuple: 57 | label = "" 58 | for key, value in self.schema.items(): 59 | # Add key and value to label if exists or a default exists 60 | if data.get(key, value): 61 | label += ( 62 | key + LABEL_DELIMITER + str(data.get(key, value)) + LABEL_DELIMITER 63 | ) 64 | label = label[: len(label) - 2] 65 | label_key = f"{self.type}{self.count}" 66 | self.count += 1 67 | return label_key, label 68 | 69 | def extract_labels(self, labels): 70 | """Reads Labels and creates a list of Schemas 71 | 72 | Args: 73 | labels (dict): Dictionary of Labels from Secret Manager 74 | 75 | Returns: 76 | [dict]: List of Schemas for given Plugin type 77 | """ 78 | schemas = [] 79 | for key, value in labels.items(): 80 | if self.type in key: 81 | schema = self.parse_label(value) 82 | schema[SERVICE_ACCOUNT_NAME_FIELD] = labels[SERVICE_ACCOUNT_NAME_FIELD] 83 | schema[PROJECT_FIELD] = labels[PROJECT_FIELD] 84 | schema[TYPE_FIELD] = self.type 85 | schemas.append(schema) 86 | return schemas 87 | 88 | def parse_label(self,
label): 89 | spl_label = label.split(LABEL_DELIMITER) 90 | if len(spl_label) % 2 != 0: 91 | raise ValueError("Label is not valid") 92 | parsed_label = {} 93 | for i in range(0, len(spl_label), 2): 94 | parsed_label[spl_label[i]] = spl_label[i + 1] 95 | return parsed_label 96 | -------------------------------------------------------------------------------- /006-service-account-key-rotater/plugins/github.py: -------------------------------------------------------------------------------- 1 | from github import Github 2 | from plugins.base import BasePlugin 3 | from core.secrets_manager import SecretManagerClient 4 | from os import environ 5 | from constants import ( 6 | GH_ORG_FIELD, 7 | GH_REPO_FIELD, 8 | SECRET_NAME_FIELD, 9 | TYPE_FIELD, 10 | DEFAULT_SECRET_NAME, 11 | ) 12 | 13 | 14 | GH_ORGANIZATION = environ.get("GH_ORGANIZATION") 15 | secrets_manager_client = SecretManagerClient() 16 | 17 | 18 | class GithubPlugin(BasePlugin): 19 | """Base class for Github API""" 20 | 21 | type = "github" 22 | count = 0 23 | 24 | schema = { 25 | GH_ORG_FIELD: "", 26 | GH_REPO_FIELD: "", 27 | SECRET_NAME_FIELD: DEFAULT_SECRET_NAME, 28 | } 29 | 30 | def __init__(self): 31 | """Constructor for Github Client 32 | 33 | Reads the access token from the `GH_TOKEN` secret in Secret Manager 34 | and the organization name from the `GH_ORGANIZATION` environment 35 | variable; takes no arguments 36 | """ 37 | self.client = Github( 38 | secrets_manager_client.get_latest_secret_version("GH_TOKEN") 39 | ) 40 | self.organization = GH_ORGANIZATION 41 | 42 | def create_secret_for_repo(self, repo, secret_name, key): 43 | repo = self.client.get_repo(self.organization + "/" + repo) 44 | repo.create_secret(secret_name.upper(), key) 45 | 46 | def create_secret_for_organization(self, secret_name, key): 47 | organization = self.client.get_organization(self.organization) 48 | organization.create_secret(secret_name.upper(), key, "selected", []) 49 | 50 | def initalize_backend(self, schema, key): 51 | if self.is_type(schema[TYPE_FIELD]): 52 | if schema.get(GH_ORG_FIELD,
False): 53 | self.create_secret_for_organization(schema[SECRET_NAME_FIELD], key) 54 | elif schema[GH_REPO_FIELD]: 55 | self.create_secret_for_repo( 56 | schema[GH_REPO_FIELD], schema[SECRET_NAME_FIELD], key 57 | ) 58 | 59 | def update_key(self, schemas, key): 60 | for schema in schemas: 61 | if self.is_type(schema[TYPE_FIELD]): 62 | if bool(schema.get(GH_ORG_FIELD, False)): 63 | self.create_secret_for_organization(schema[SECRET_NAME_FIELD], key) 64 | elif schema[GH_REPO_FIELD]: 65 | self.create_secret_for_repo( 66 | schema[GH_REPO_FIELD], schema[SECRET_NAME_FIELD], key 67 | ) 68 | 69 | 70 | def initialize(): 71 | return GithubPlugin() 72 | -------------------------------------------------------------------------------- /006-service-account-key-rotater/plugins/storage.py: -------------------------------------------------------------------------------- 1 | from google.cloud import storage 2 | from plugins.base import BasePlugin 3 | from os import environ 4 | from datetime import datetime 5 | from constants import ( 6 | SERVICE_ACCOUNT_NAME_FIELD, 7 | PROJECT_FIELD, 8 | SECRET_NAME_FIELD, 9 | DEFAULT_SECRET_NAME, 10 | TYPE_FIELD, 11 | ) 12 | 13 | STORAGE_BUCKET_NAME = environ.get("STORAGE_BUCKET_NAME") 14 | 15 | class StoragePlugin(BasePlugin): 16 | """Base class for Google Cloud Storage API""" 17 | 18 | type = "storage" 19 | count = 0 20 | schema = {SECRET_NAME_FIELD: DEFAULT_SECRET_NAME} 21 | 22 | def __init__(self): 23 | """Constructor for Storage Client""" 24 | self.bucket = storage.Client().get_bucket(STORAGE_BUCKET_NAME) 25 | 26 | def update_secret_in_bucket(self, schema, key): 27 | """Updates Secret in Bucket 28 | 29 | Args: 30 | schema (dict): Schema for Storage Object 31 | key (str): Service Account Key 32 | """ 33 | service_account_name = schema[SERVICE_ACCOUNT_NAME_FIELD] 34 | project = schema[PROJECT_FIELD] 35 | secret_name = schema[SECRET_NAME_FIELD] 36 | date = datetime.now().strftime("%m%d%Y") 37 | key_blob = self.bucket.blob( 38 | 
f"{service_account_name}-{project}/{secret_name.upper()}_{date}" 39 | ) 40 | key_blob.upload_from_string(key) 41 | 42 | def initalize_backend(self, schema, key): 43 | if self.is_type(schema[TYPE_FIELD]): 44 | self.update_secret_in_bucket(schema, key) 45 | 46 | def update_key(self, schemas, key): 47 | for schema in schemas: 48 | if self.is_type(schema[TYPE_FIELD]): 49 | self.update_secret_in_bucket(schema, key) 50 | 51 | 52 | def initialize(): 53 | return StoragePlugin() 54 | -------------------------------------------------------------------------------- /006-service-account-key-rotater/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 2 | google-cloud-secret-manager==2.16.3 3 | google-api-python-client 4 | oauth2client 5 | google-cloud-storage== 2.10.0 6 | git+https://github.com/PyGithub/PyGithub@master -------------------------------------------------------------------------------- /006-service-account-key-rotater/setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup 2 | 3 | setup( 4 | name="service-account-key-rotater", 5 | entry_points={ 6 | "plugins": [ 7 | "github=plugins.github:initialize", 8 | "storage=plugins.storage:initialize", 9 | ] 10 | }, 11 | ) 12 | -------------------------------------------------------------------------------- /006-service-account-key-rotater/tests/test_plugin_base.py: -------------------------------------------------------------------------------- 1 | from plugins.base import BasePlugin 2 | 3 | 4 | class MockPlugin(BasePlugin): 5 | 6 | schema = { 7 | 'key1': "value1", 8 | 'key2': "value2" 9 | } 10 | type = "mock" 11 | valid_keys = [] 12 | 13 | def initalize_backend(self, data, key): 14 | pass 15 | 16 | def update_key(self, data, key): 17 | pass 18 | 19 | class TestPluginBase: 20 | def test_write_label(self): 21 | base = MockPlugin() 22 | label_key, key = base._write_label({},"") 23 | assert 
label_key == "mock0" 24 | assert key == "key1-_value1-_key2-_value2" 25 | 26 | def test_parse_label(self): 27 | label = "key1-_value1-_key2-_value2" 28 | base = MockPlugin() 29 | parsed_label = base.parse_label(label) 30 | assert parsed_label == base.schema -------------------------------------------------------------------------------- /007-bigquery-usage-detection/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "function_name": "bq-usage-detection", 3 | "cloudfunction": { 4 | "environmentVariables": { 5 | "BUCKET": "your-bucket-name", 6 | "PROJECT": "your-gcp-project-name", 7 | "SLACK_WEBHOOK": "your-slack-webhook-url", 8 | "UPLOAD_PATH": "directory/filename" 9 | }, 10 | "availableMemoryMb": "512", 11 | "serviceAccountEmail": "your-service-account@your-gcp-account.iam.gserviceaccount.com", 12 | "timeout": "300s" 13 | } 14 | } -------------------------------------------------------------------------------- /007-bigquery-usage-detection/README.md: -------------------------------------------------------------------------------- 1 | # BigQuery Usage Detection 2 | 3 | This is a tutorial to set up an alerting service to notify you of your BigQuery usage and identify high-cost jobs. 4 | 5 | Leveraging goblet for ease of deployment and recurring scheduled runs, this code launches a Cloud Function that: 6 | - Analyzes BigQuery's `INFORMATION_SCHEMA` metadata to identify the most expensive BigQuery jobs that have finished running in the current day 7 | - Sends a formatted Slack message summarizing the top 10 most expensive jobs in a concise table 8 | - Generates a CSV file summarizing all above-average cost jobs, and stores this in a GCP Cloud Storage bucket 9 | 10 | The goal of this service is to support users wanting to monitor/remediate BigQuery jobs. Ideally, this information will be used to help reduce BigQuery slots usage and/or minimize GCP costs. 
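The "above-average cost" selection described above can be sketched in a few lines. This toy uses stand-in job tuples; the actual service (see `main.py` in this directory) pulls its rows from `INFORMATION_SCHEMA.JOBS_BY_PROJECT` and converts `total_bytes_processed` to gigabytes:

```python
# Stand-in rows of (job_id, gigabytes_processed); real rows come from
# BigQuery's INFORMATION_SCHEMA.JOBS_BY_PROJECT view.
jobs = [("job-a", 120.0), ("job-b", 3.5), ("job-c", 40.0), ("job-d", 0.5)]

# Mean GB processed across all finished jobs.
mean_gb = sum(gb for _, gb in jobs) / len(jobs)  # 41.0

# Keep only jobs above the mean, most expensive first.
above_avg = sorted(
    (job for job in jobs if job[1] > mean_gb),
    key=lambda job: job[1],
    reverse=True,
)
print(above_avg)  # [('job-a', 120.0)]
```

The same threshold logic then feeds both the Slack table (top 10) and the CSV written to Cloud Storage.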
11 | 12 | ## Written Tutorial 13 | 14 | [Tutorial](https://engineering.premise.com/tutorial-detection-of-high-usage-bigquery-jobs-on-google-cloud-platform-gcp-aadb591eefe5) 15 | 16 | ## Deploy 17 | 18 | * run ```goblet deploy --project {PROJECT} --location {REGION}``` 19 | -------------------------------------------------------------------------------- /007-bigquery-usage-detection/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, goblet_entrypoint 2 | from google.cloud import bigquery, storage 3 | from slack_sdk.webhook import WebhookClient 4 | from texttable import Texttable 5 | from datetime import datetime, timezone, timedelta 6 | import csv 7 | import os 8 | 9 | app = Goblet(function_name="bq-usage-detection", local="local") 10 | goblet_entrypoint(app) 11 | 12 | BUCKET = os.environ.get("BUCKET") 13 | PROJECT = os.environ.get("PROJECT") 14 | SLACK_WEBHOOK = os.environ.get("SLACK_WEBHOOK") 15 | UPLOAD_PATH = os.environ.get("UPLOAD_PATH") 16 | 17 | webhook = WebhookClient(SLACK_WEBHOOK) 18 | storage_client = storage.Client() 19 | 20 | 21 | @app.schedule("45 23 * * *", timezone="UTC") 22 | def main(): 23 | jobs, jobs_usage = retrieve_jobs() 24 | above_avg = [ 25 | job 26 | for job in jobs[: int(len(jobs) / 2)] 27 | if job[2] > jobs_usage.get("mean_gigabytes") 28 | ] 29 | 30 | for job in above_avg: 31 | job[2] = f"{job[2]:.2f}" 32 | 33 | now = datetime.now(timezone.utc) 34 | 35 | with open("/tmp/high_usage_jobs.csv", "w") as f: 36 | write = csv.writer(f) 37 | write.writerow( 38 | [ 39 | "Job ID", 40 | "User Email", 41 | "Gigabytes Processed", 42 | "Total Slot Usage", 43 | "Query", 44 | ] 45 | ) 46 | write.writerows(above_avg) 47 | 48 | upload_to_gcs( 49 | BUCKET, 50 | f"{UPLOAD_PATH}_{now.strftime('%Y-%m-%d')}.csv", 51 | "/tmp/high_usage_jobs.csv", 52 | ) 53 | msg = generate_message(above_avg, jobs_usage, now) 54 | webhook.send(text="daily-bq-usage-alerts", blocks=msg) 55 | 56 | 57 | def 
retrieve_jobs(): 58 | """ 59 | retrieve_jobs retrieves the metadata for all BigQuery jobs that have finished running today 60 | and returns info about these jobs, and info about the mean and median data processed amounts 61 | (in GB) across all of these jobs 62 | 63 | :return: sorted_jobs, jobs_usage 64 | """ 65 | client = bigquery.Client(project=PROJECT) 66 | QUERY = ( 67 | 'SELECT job_id, user_email, total_bytes_processed, total_slot_ms, query FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT ' 68 | 'WHERE EXTRACT(DATE FROM end_time) = current_date() AND state = "DONE" ' 69 | ) 70 | 71 | query_job = client.query(QUERY) 72 | rows = query_job.result() 73 | 74 | all_jobs = [] 75 | total_gb_processed = 0 76 | total_jobs_processed = 0 77 | 78 | for row in rows: 79 | try: 80 | row_data = list(row) 81 | gb_processed = 0 82 | 83 | if row_data[2]: 84 | gb_processed = row_data[2] = row_data[2] / (1024 ** 3) 85 | 86 | if row_data[3]: 87 | slot_time = str(timedelta(milliseconds=row_data[3])).split(":") 88 | row_data[ 89 | 3 90 | ] = f"{slot_time[0]} hr, {slot_time[1]} min, {float(slot_time[2]):.2f} sec" 91 | 92 | if row_data[4]: 93 | row_data[4] = ( 94 | (row_data[4][:512] + "...") 95 | if len(row_data[4]) > 512 96 | else row_data[4] 97 | ) 98 | 99 | if gb_processed > 0: 100 | all_jobs.append(row_data) 101 | total_gb_processed += gb_processed 102 | total_jobs_processed += 1 103 | 104 | except Exception as e: 105 | print(f"Error: {e}") 106 | print( 107 | f"job_id: {row_data[0]} | user_email: {row_data[1]} | total data processed (GB): {row_data[2]} | total_slot_ms: {row_data[3]} | beginning of query: {row_data[4][:100]}" 108 | ) 109 | 110 | sorted_jobs = sorted(all_jobs, key=lambda x: x[2], reverse=True) 111 | jobs_usage = { 112 | "mean_gigabytes": total_gb_processed / total_jobs_processed, 113 | "median_gigabytes": sorted_jobs[int(len(sorted_jobs) / 2)][2], 114 | } 115 | return sorted_jobs, jobs_usage 116 | 117 | 118 | def upload_to_gcs(bucket_name, file_name, data): 119 | 
""" 120 | upload_to_gcs publishes a file to the Google Cloud Storage bucket 121 | 122 | :param bucket_name: bucket we are publishing to 123 | :param file_name: the file path and name we are publishing this file as 124 | :param data: data to be published to the file 125 | """ 126 | bucket = storage_client.bucket(bucket_name) 127 | blob = bucket.blob(file_name) 128 | blob.upload_from_filename(data, content_type="text/csv") 129 | 130 | 131 | def generate_message(jobs, jobs_usage, now): 132 | """ 133 | generate_message formats a Slack message containing info about the 134 | top 10 high-usage BQ jobs that have finished in the current day 135 | 136 | :param jobs: list of above-average usage BQ jobs and relevant info 137 | :param jobs_usage: dict containing median and mean data processed amounts for all BQ jobs 138 | :param now: datetime object containing the current date and time 139 | :return: message 140 | """ 141 | message = [ 142 | { 143 | "type": "section", 144 | "text": { 145 | "type": "mrkdwn", 146 | "text": f"*Most expensive BigQuery jobs today ({now.strftime('%Y-%m-%d')})* (based on total_bytes_processed)", 147 | }, 148 | } 149 | ] 150 | 151 | jobs_description = [["job_id", "user_email", "data processed"]] 152 | 153 | for job in jobs[:10]: 154 | job[0] = (job[0][:21] + "...") if len(job[0]) > 24 else job[0] 155 | job[4] = (job[4][:125] + "...") if len(job[4]) > 128 else job[4] 156 | jobs_description.append([job[0], job[1], f"{job[2]} GB"]) 157 | 158 | table = Texttable() 159 | table.add_rows(jobs_description) 160 | 161 | message.extend( 162 | [ 163 | { 164 | "type": "section", 165 | "text": {"type": "mrkdwn", "text": f"```{table.draw()}```"}, 166 | }, 167 | { 168 | "type": "section", 169 | "text": { 170 | "type": "mrkdwn", 171 | "text": f"Our median BQ job processes only *{float(jobs_usage.get('median_gigabytes')):.2f} GB*, and the average data processed across all of our jobs is *{float(jobs_usage.get('mean_gigabytes')):.2f} GB*.\n\nPlease investigate and 
remediate these high-usage jobs", 172 | }, 173 | }, 174 | {"type": "divider"}, 175 | ] 176 | ) 177 | return message 178 | -------------------------------------------------------------------------------- /007-bigquery-usage-detection/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 2 | google-cloud-bigquery==3.11.4 3 | google-cloud-storage==2.10.0 4 | slack_sdk==3.21.3 5 | texttable==1.6.7 -------------------------------------------------------------------------------- /008-slack-approval-process/IAM-permission-provision/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "function_name": "IAM-permission-provision", 3 | "cloudfunction": { 4 | "environmentVariables": { 5 | "SIGNATURE_SECRET_ID": "{SLACK BOT SIGNING SECRET PATH}", 6 | "WEBHOOK_SECRET_ID": "{STATUS WEBHOOK SECRET PATH}" 7 | }, 8 | "maxInstances": 2, 9 | "serviceAccountEmail": "{SERVICE ACCOUNT}" 10 | } 11 | } -------------------------------------------------------------------------------- /008-slack-approval-process/IAM-permission-provision/main.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | from goblet import Goblet, goblet_entrypoint, Response 4 | from slack_sdk.signature import SignatureVerifier 5 | from slack_sdk import WebhookClient, errors 6 | from google.cloud import secretmanager 7 | 8 | app = Goblet(function_name="IAM-permission-provision") 9 | goblet_entrypoint(app) 10 | 11 | 12 | @app.http() 13 | def provision_iam_request(request): 14 | """Handle a Slack interactive action approving or rejecting an IAM permission request 15 | """ 16 | # get slack signature secret from secret manager 17 | secret_client = secretmanager.SecretManagerServiceClient() 18 | signing_secret = secret_client.access_secret_version( 19 | request={"name":
os.environ.get("SIGNATURE_SECRET_ID")} 20 | ).payload.data.decode("UTF-8") 21 | # validate request using the signature secret 22 | if not is_valid_signature( 23 | request.headers, request.get_data(as_text=True), signing_secret 24 | ): 25 | return Response("Forbidden", status_code=403) 26 | # parse the slack action 27 | payload = json.loads(request.form["payload"]) 28 | action = payload["actions"][0] 29 | action_id = action["action_id"] 30 | action_value = action["value"] 31 | user = ' '.join(payload["user"]["name"].split('.')) 32 | project, resource_type, resource_name, principal, role, region, user_email = action_value.split(",") 33 | # get slack webhooks from secret manager 34 | secret_client = secretmanager.SecretManagerServiceClient() 35 | webhook = secret_client.access_secret_version( 36 | request={"name": os.environ.get("WEBHOOK_SECRET_ID")} 37 | ).payload.data.decode("UTF-8") 38 | # Sends to the status slack channel open to the whole company 39 | status_slack_client = WebhookClient(webhook) 40 | # Edits the original message to show that it has been responded to 41 | response_url = payload["response_url"] 42 | response_slack_client = WebhookClient(response_url) 43 | if action_id == "request_approve": 44 | try: 45 | if resource_type == "bucket": 46 | add_bucket_iam_access(project, resource_name, role, principal) 47 | elif resource_type == "secret": 48 | add_secret_iam_access(project, resource_name, role, principal) 49 | elif resource_type == "topic": 50 | add_topic_iam_access(project, resource_name, role, principal) 51 | elif resource_type == "subscription": 52 | add_subscription_iam_access(project, resource_name, role, principal) 53 | elif resource_type == "bq-table": 54 | add_bq_table_iam_access(project, resource_name, role, principal) 55 | elif resource_type == "function": 56 | add_function_iam_access(project, resource_name, role, principal, region) 57 | elif resource_type == "cloud-run": 58 | add_run_iam_access(project, resource_name, role, principal, 
region) 59 | elif resource_type == "registry": 60 | add_artifact_registry_iam_access(project, resource_name, role, principal, region) 61 | elif resource_type == "project": 62 | add_project_iam_access(project, role, principal) 63 | else: 64 | app.log.error(f"Unsupported resource: {resource_type}") 65 | send_status_message( 66 | status_slack_client, project, resource_type, resource_name, role, principal, region, user_email, "Resource not supported" 67 | ) 68 | return Response("Resource not supported", status_code=400) 69 | except Exception as e: 70 | app.log.error(f"Error while provisioning: {e}") 71 | for client in (status_slack_client, response_slack_client): 72 | send_status_message( 73 | client, project, resource_type, resource_name, role, principal, region, user_email, f"Error while provisioning: {e}" 74 | ) 75 | return Response("Error while provisioning", status_code=400) 76 | app.log.info(f"Added {principal} with {role} to {resource_name} in {project}") 77 | app.log.info("Sending approved status message") 78 | for client in (status_slack_client, response_slack_client): 79 | send_status_message( 80 | client, project, resource_type, resource_name, role, principal, region, user_email, f"Approved by {user}" 81 | ) 82 | elif action_id == "request_reject": 83 | app.log.info("Sending rejected status message") 84 | for client in (status_slack_client, response_slack_client): 85 | send_status_message( 86 | client, project, resource_type, resource_name, role, principal, region, user_email, f"Rejected by {user}" 87 | ) 88 | 89 | 90 | def is_valid_signature(headers, data, signing_secret): 91 | """Validates the request from the Slack integration 92 | """ 93 | timestamp = headers["x-slack-request-timestamp"] 94 | signature = headers["x-slack-signature"] 95 | verifier = SignatureVerifier(signing_secret) 96 | return verifier.is_valid(data, timestamp, signature) 97 | 98 | 99 | def send_status_message(client, project, resource_type, resource_name, role, principal, region, user_email, status): 100 | """Sends request status message 
through the provided slack client 101 | """ 102 | try: 103 | response = client.send( 104 | text="fallback", 105 | blocks=[ 106 | { 107 | "type": "header", 108 | "text": { 109 | "type": "plain_text", 110 | "text": "IAM Request", 111 | "emoji": True, 112 | }, 113 | }, 114 | { 115 | "type": "section", 116 | "text": { 117 | "type": "mrkdwn", 118 | "text": f"Project: {project}" 119 | }, 120 | }, 121 | { 122 | "type": "section", 123 | "text": { 124 | "type": "mrkdwn", 125 | "text": f"Resource Type: {resource_type}" 126 | } 127 | }, 128 | { 129 | "type": "section", 130 | "text": { 131 | "type": "mrkdwn", 132 | "text": f"Resource Name: {resource_name if not region else region + '/' + resource_name}" 133 | }, 134 | }, 135 | { 136 | "type": "section", 137 | "text": { 138 | "type": "mrkdwn", 139 | "text": f"Role: {role}" 140 | }, 141 | }, 142 | { 143 | "type": "section", 144 | "text": { 145 | "type": "mrkdwn", 146 | "text": f"Principal: {principal}" 147 | }, 148 | }, 149 | { 150 | "type": "section", 151 | "text": { 152 | "type": "mrkdwn", 153 | "text": f"Status: {status}" 154 | }, 155 | }, 156 | { 157 | "type": "section", 158 | "text": { 159 | "type": "mrkdwn", 160 | "text": f"Requester: {user_email}" 161 | }, 162 | }, 163 | ], 164 | ) 165 | app.log.info(response.status_code) 166 | except errors.SlackApiError as e: 167 | app.log.error(e) 168 | 169 | 170 | def add_binding(client, resource, role, principal): 171 | """Generic add-binding procedure 172 | Not every resource has this method available though. 
Check the docs. 173 | """ 174 | policy = client.get_iam_policy(request={"resource": resource}) 175 | policy.bindings.add(role=role, members=[principal]) 176 | client.set_iam_policy(request={"resource": resource, "policy": policy}) 177 | 178 | 179 | def add_bucket_iam_access(project, bucket_name, role, principal): 180 | """Adds bucket access for the principal 181 | """ 182 | from google.cloud import storage 183 | 184 | storage_client = storage.Client(project) 185 | bucket = storage_client.bucket(bucket_name) 186 | policy = bucket.get_iam_policy(requested_policy_version=3) 187 | policy.bindings.append({"role": role, "members": {principal}}) 188 | bucket.set_iam_policy(policy) 189 | 190 | 191 | def add_secret_iam_access(project, secret_name, role, principal): 192 | """Adds secret access for the principal 193 | """ 194 | client = secretmanager.SecretManagerServiceClient() 195 | secret_path = f"projects/{project}/secrets/{secret_name}" 196 | add_binding(client, secret_path, role, principal) 197 | 198 | 199 | def add_topic_iam_access(project, topic_name, role, principal): 200 | """Adds topic access for the principal 201 | """ 202 | from google.cloud import pubsub_v1 203 | 204 | client = pubsub_v1.PublisherClient() 205 | topic_path = f"projects/{project}/topics/{topic_name}" 206 | add_binding(client, topic_path, role, principal) 207 | 208 | 209 | def add_subscription_iam_access(project, subscription_name, role, principal): 210 | """Adds subscription access for the principal 211 | """ 212 | from google.cloud import pubsub_v1 213 | 214 | client = pubsub_v1.SubscriberClient() 215 | subscription_path = f"projects/{project}/subscriptions/{subscription_name}" 216 | add_binding(client, subscription_path, role, principal) 217 | 218 | 219 | def add_bq_table_iam_access(project, table_name, role, principal): 220 | """Adds table access for the principal 221 | 222 | :param table_name: {table's dataset}.{table name} 223 | """ 224 | from google.cloud import bigquery 225 | 226
| client = bigquery.Client() 227 | name = f"{project}.{table_name}" 228 | table = client.get_table(name) 229 | policy = client.get_iam_policy(table) 230 | policy.bindings.append({"role": role, "members": {principal}}) 231 | client.set_iam_policy(table, policy) 232 | 233 | 234 | def add_function_iam_access(project, function_name, role, principal, region): 235 | """Adds function access for the principal 236 | """ 237 | from google.cloud import functions_v1 238 | 239 | client = functions_v1.CloudFunctionsServiceClient() 240 | function_path = f"projects/{project}/locations/{region}/functions/{function_name}" 241 | add_binding(client, function_path, role, principal) 242 | 243 | 244 | def add_run_iam_access(project, service_name, role, principal, region): 245 | """Adds cloud run service access for the principal 246 | """ 247 | from googleapiclient import discovery 248 | from oauth2client.client import GoogleCredentials 249 | 250 | credentials = GoogleCredentials.get_application_default() 251 | client = discovery.build("run", "v1", credentials=credentials).projects().locations().services() 252 | # the v1 REST API takes the policy as a plain dict rather than a proto 253 | resource = f"projects/{project}/locations/{region}/services/{service_name}" 254 | policy = client.getIamPolicy(resource=resource).execute() 255 | bindings = policy.get("bindings", []) 256 | bindings.append({"role": role, "members": [principal]}) 257 | policy["bindings"] = bindings 258 | client.setIamPolicy(resource=resource, body={"policy": policy}).execute() 259 | 260 | 261 | def add_project_iam_access(project, role, principal): 262 | """Adds project-level access for the principal 263 | """ 264 | from google.cloud import resourcemanager_v3 265 | 266 | client = resourcemanager_v3.ProjectsClient() 267 | project_path = f"projects/{project}" 268 | add_binding(client, project_path, role, principal) 269 | 270 | 271 | def add_artifact_registry_iam_access(project, registry_name, role, principal, region): 272 | """Adds artifact registry access for the principal 273 | """ 274 | from google.cloud import 
artifactregistry_v1beta2 275 | 276 | client = artifactregistry_v1beta2.ArtifactRegistryClient() 277 | registry_path = f"projects/{project}/locations/{region}/repositories/{registry_name}" 278 | add_binding(client, registry_path, role, principal) 279 | 280 | -------------------------------------------------------------------------------- /008-slack-approval-process/IAM-permission-provision/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.6.3 2 | google-api-python-client==2.33.0 3 | oauth2client==4.1.3 4 | google-cloud-secret-manager==2.4.0 5 | google-cloud-bigquery==2.31.0 6 | google-cloud-storage==1.42.3 7 | google-cloud-pubsub==2.5.0 8 | google-cloud-functions==1.4.0 9 | google-cloud-artifact-registry==1.1.1 10 | google-cloud-resource-manager==1.3.3 11 | slack_sdk==3.13.0 -------------------------------------------------------------------------------- /008-slack-approval-process/IAM-permission-request/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "function_name": "IAM-permission-request", 3 | "cloudfunction": { 4 | "environmentVariables": { 5 | "WEBHOOK_SECRET_ID": "{APPROVERS WEBHOOK SECRET PATH}" 6 | }, 7 | "serviceAccountEmail": "{SERVICE ACCOUNT}" 8 | } 9 | } -------------------------------------------------------------------------------- /008-slack-approval-process/IAM-permission-request/main.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | from goblet import Goblet, goblet_entrypoint 4 | from google.cloud import secretmanager 5 | from slack_sdk import WebhookClient, errors 6 | 7 | app = Goblet(function_name="IAM-permission-request") 8 | goblet_entrypoint(app) 9 | 10 | 11 | @app.http() 12 | def send_iam_request(request): 13 | """ Parses the request and sends a message to #Infrastructure-iam-approvals 14 | """ 15 | json_data = app.current_request.json 16 | 
app.log.info(f"Request: {json_data}") 17 | # extract data from request 18 | project = json_data["project"] 19 | resource_type = json_data["resource_type"] 20 | resource_name = json_data["resource_name"] 21 | role = json_data["role"] 22 | principal = json_data["principal"] 23 | region = json_data.get("region", "") 24 | user_email = json_data["user_email"] 25 | # get webhook from secret manager 26 | secret_client = secretmanager.SecretManagerServiceClient() 27 | webhook = secret_client.access_secret_version( 28 | request={"name": os.environ.get("WEBHOOK_SECRET_ID")} 29 | ).payload.data.decode("UTF-8") 30 | slack_client = WebhookClient(webhook) 31 | # send message 32 | try: 33 | response = slack_client.send( 34 | text="fallback", 35 | blocks=[ 36 | { 37 | "type": "header", 38 | "text": { 39 | "type": "plain_text", 40 | "text": "IAM Request", 41 | "emoji": True, 42 | }, 43 | }, 44 | { 45 | "type": "section", 46 | "text": { 47 | "type": "mrkdwn", 48 | "text": f"Project: {project}" 49 | }, 50 | }, 51 | { 52 | "type": "section", 53 | "text": { 54 | "type": "mrkdwn", 55 | "text": f"Resource Type: {resource_type}" 56 | }, 57 | }, 58 | { 59 | "type": "section", 60 | "text": { 61 | "type": "mrkdwn", 62 | "text": f"Resource Name: {resource_name if not region else region + '/' + resource_name}" 63 | }, 64 | }, 65 | { 66 | "type": "section", 67 | "text": { 68 | "type": "mrkdwn", 69 | "text": f"Role: {role}" 70 | }, 71 | }, 72 | { 73 | "type": "section", 74 | "text": { 75 | "type": "mrkdwn", 76 | "text": f"Principal: {principal}" 77 | }, 78 | }, 79 | { 80 | "type": "section", 81 | "text": { 82 | "type": "mrkdwn", 83 | "text": f"Requester: {user_email}" 84 | } 85 | }, 86 | { 87 | "type": "actions", 88 | "elements": [ 89 | { 90 | "type": "button", 91 | "text": { 92 | "type": "plain_text", 93 | "emoji": True, 94 | "text": "Approve", 95 | }, 96 | "style": "primary", 97 | "action_id": "request_approve", 98 | "value": 
f"{project},{resource_type},{resource_name},{principal},{role},{region},{user_email}", 99 | "confirm": { 100 | "title": { 101 | "type": "plain_text", 102 | "text": "Are you sure?", 103 | }, 104 | "text": { 105 | "type": "mrkdwn", 106 | "text": f"{role} would be added to {principal} for {resource_name} in {project}", 107 | }, 108 | "confirm": {"type": "plain_text", "text": "Do it"}, 109 | "deny": { 110 | "type": "plain_text", 111 | "text": "Stop, I've changed my mind!", 112 | }, 113 | }, 114 | }, 115 | { 116 | "type": "button", 117 | "text": { 118 | "type": "plain_text", 119 | "emoji": True, 120 | "text": "Reject", 121 | }, 122 | "value": f"{project},{resource_type},{resource_name},{principal},{role},{region},{user_email}", 123 | "style": "danger", 124 | "action_id": "request_reject", 125 | }, 126 | ], 127 | }, 128 | ], 129 | ) 130 | app.log.info(response.status_code) 131 | except errors.SlackApiError as e: 132 | app.log.error(e) 133 | -------------------------------------------------------------------------------- /008-slack-approval-process/IAM-permission-request/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.6.3 2 | slack_sdk==3.13.0 3 | google-cloud-secret-manager==2.4.0 -------------------------------------------------------------------------------- /008-slack-approval-process/README.md: -------------------------------------------------------------------------------- 1 | # Slack approval process 2 | 3 | A quick tutorial for setting up a Slack App for handling approval requests 4 | 5 | The service currently supports changing IAM policies at the project level and resource level for: 6 | * gcs buckets 7 | * secret manager secrets 8 | * pubsub topics and subscriptions 9 | * bigquery tables 10 | * cloud functions 11 | * cloud run 12 | * artifact registry 13 | 14 | ## Written Tutorial 15 | 16 | [Tutorial]() 17 | 18 | ## Prerequisites 19 | 20 | * GCP Account 21 | * GCP CLI 22 | * Python 3.7 23 | 24 
| ## Setup 25 | 26 | It will probably be easiest to follow the Medium article, because it has screenshots and snippets. 27 | To run through the steps briefly: 28 | * Create a Slack app 29 | * Create webhooks for the app 30 | * Create secrets in GCP to store the webhooks 31 | * Create service accounts that can modify IAM policies for the resources you want 32 | * Update the config files with all the variables you created above 33 | * Deploy the cloud functions 34 | * Update your Slack app interactivity URL to the endpoint for your provision function 35 | 36 | ## Deploy 37 | 38 | Deploy by running `goblet deploy --project {PROJECT} --location {REGION}` for both the provision and request functions. 39 | -------------------------------------------------------------------------------- /009-cloudrun-cloudbuild-configs/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "cloudrun": { 3 | "traffic": 50 4 | }, 5 | "cloudrun_revision": { 6 | "serviceAccount": "name@PROJECT.iam.gserviceaccount.com" 7 | }, 8 | "cloudrun_container": { 9 | "env": [ 10 | { 11 | "name": "env-variable-name", 12 | "value": "env-variable-value" 13 | }, 14 | { 15 | "name": "env-variable-name-2", 16 | "valueSource": { 17 | "secretKeyRef" : { 18 | "secret": "secret-name", 19 | "version": "secret-version" 20 | } 21 | } 22 | } 23 | ] 24 | }, 25 | "cloudbuild": { 26 | "artifact_registry": "us-central1-docker.pkg.dev/PROJECT/cloud-run-source-deploy/SOURCE_NAME", 27 | "serviceAccount": "projects/{PROJECT}/serviceAccounts/{ACCOUNT}" 28 | } 29 | } -------------------------------------------------------------------------------- /009-cloudrun-cloudbuild-configs/README.md: -------------------------------------------------------------------------------- 1 | # Cloud Run and Cloud Build Configurations 2 | 3 | This is a quick tutorial on how to set traffic and other configurations for Cloud Run deployments. 
4 | 5 | Configurations shown: 6 | * cloudrun 7 | * cloudrun_revision 8 | * cloudrun_container 9 | * cloudbuild 10 | 11 | ## Written Tutorial 12 | 13 | [Tutorial](https://engineering.premise.com/traffic-revisions-and-artifact-registries-in-google-cloud-run-made-easy-with-goblet-1a3fa86de25c) 14 | 15 | ## Install 16 | 17 | * pip install goblet-gcp 18 | 19 | ## Prerequisites 20 | 21 | * GCP Account 22 | * GCP CLI 23 | * Python 3.7 or above 24 | 25 | ## Traffic 26 | 27 | Enter the desired traffic in the `cloudrun` key. It can be any integer from 0 to 100. 28 | 29 | ## Artifact Registry 30 | 31 | If using an external artifact registry, pass it through the `cloudbuild` key. Cross-project artifact registries will also require a service account with permissions for the external artifact registry. 32 | 33 | ## Environment Variables and Secrets 34 | 35 | Pass in environment variables through the `cloudrun_container` key. Secrets can also be passed in as environment variables, but this requires a service account with permissions for each secret, which can be passed in through the `cloudrun_revision` key. 36 | 37 | ## Additional Configurations 38 | 39 | In addition to the configurations listed, you can pass in any other field for [cloudrun_revision](https://cloud.google.com/run/docs/reference/rest/v2/projects.locations.services#RevisionTemplate), [cloudrun_container](https://cloud.google.com/run/docs/reference/rest/v2/Container), and [cloudbuild](https://cloud.google.com/build/docs/api/reference/rest/v1/projects.builds#Build). 40 | 41 | ## Deploy 42 | 43 | Run `goblet deploy --project {PROJECT} --location {REGION}` to deploy your application. 
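Once deployed, both the plain and the secret-backed entries configured under `cloudrun_container` surface to the service as ordinary environment variables. A minimal sketch of reading them at runtime — the variable names mirror the sample `config.json` above and are placeholders:

```python
import os


def read_config(env=None):
    """Read the sample variables; secret-backed values arrive through the
    environment exactly like plain ones, so the application code is identical."""
    env = os.environ if env is None else env
    return {
        "plain": env.get("env-variable-name"),
        "secret": env.get("env-variable-name-2"),
    }
```

Keeping secret access behind env vars like this means the code never needs Secret Manager credentials of its own; the Cloud Run revision's service account does the fetching.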
44 | -------------------------------------------------------------------------------- /009-cloudrun-cloudbuild-configs/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, goblet_entrypoint 2 | 3 | app = Goblet(function_name="cloudrun-cloudbuild-configs", backend="cloudrun", routes_type="cloudrun") 4 | goblet_entrypoint(app) 5 | 6 | @app.route('/hello') 7 | def hello(): 8 | return "world" -------------------------------------------------------------------------------- /009-cloudrun-cloudbuild-configs/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 -------------------------------------------------------------------------------- /010-cloudrun-jobs/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "scheduler": { 3 | "serviceAccount": "SERVICE_ACCOUNT@PROJECT.iam.gserviceaccount.com" 4 | } 5 | } -------------------------------------------------------------------------------- /010-cloudrun-jobs/Dockerfile: -------------------------------------------------------------------------------- 1 | # https://hub.docker.com/_/python 2 | FROM python:3.10-slim 3 | 4 | # setup environment 5 | ENV APP_HOME /app 6 | WORKDIR $APP_HOME 7 | 8 | # Install dependencies. 9 | COPY requirements.txt . 10 | RUN pip install -r requirements.txt 11 | 12 | # Copy local code to the container image. 13 | COPY . . 14 | -------------------------------------------------------------------------------- /010-cloudrun-jobs/README.md: -------------------------------------------------------------------------------- 1 | # Cloud Run Jobs 2 | 3 | This is a tutorial on how to deploy a Cloud Run job using Goblet. 
4 | 5 | ## Written Tutorial 6 | 7 | [Tutorial](https://engineering.premise.com/tutorial-deploying-cloud-run-jobs-9435466b26f5) 8 | 9 | ## Install 10 | 11 | * pip install goblet-gcp 12 | 13 | ## Prerequisites 14 | 15 | * GCP Account 16 | * GCP CLI 17 | * Python 3.7 or above 18 | 19 | ## Config 20 | 21 | Enter a service account in `config.json` if you are deploying a scheduled job. 22 | 23 | ## Deploy 24 | 25 | Run `goblet deploy --project {PROJECT} --location {REGION}` to deploy your application. 26 | 27 | ## Cleanup 28 | 29 | Run `goblet destroy --project {PROJECT} --location {REGION}` to clean up your application. 30 | -------------------------------------------------------------------------------- /010-cloudrun-jobs/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, goblet_entrypoint 2 | import logging 3 | 4 | app = Goblet(function_name="example-job", backend="cloudrun") 5 | goblet_entrypoint(app) 6 | app.log.setLevel(logging.DEBUG) # configure goblet logger level 7 | 8 | 9 | @app.job("test") 10 | def job1_task_1(id): 11 | app.log.info(f"execution test {id}") 12 | return "200" 13 | 14 | @app.job("test", task_id=1) 15 | def job1_task_2(id): 16 | app.log.info(f"second task {id}") 17 | return "200" 18 | 19 | @app.job("test-schedule", schedule="* * * * *") 20 | def job2(id): 21 | app.log.info("execution every min") 22 | return "200" -------------------------------------------------------------------------------- /010-cloudrun-jobs/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 -------------------------------------------------------------------------------- /011-cloud-run-cors-anywhere/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:14-buster-slim 2 | 3 | WORKDIR /app 4 | 5 | COPY . . 
6 | 7 | RUN npm install --production 8 | 9 | CMD [ "node", "index.js" ] -------------------------------------------------------------------------------- /011-cloud-run-cors-anywhere/LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 
34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /011-cloud-run-cors-anywhere/README.md: -------------------------------------------------------------------------------- 1 | # cloud-run-cors-anywhere 2 | cors-anywhere proxy ready to deploy on Cloud Run 3 | 4 | See https://cloud.google.com/run/docs/deploying?hl=es-419#command-line for instructions on how to deploy to Cloud Run -------------------------------------------------------------------------------- /011-cloud-run-cors-anywhere/index.js: -------------------------------------------------------------------------------- 1 | const cors_proxy = require('cors-anywhere'); 2 | 3 | const port= parseInt(process.env.CORS_PORT || '') || 8080; 4 | const host = process.env.CORS_HOST || '0.0.0.0'; 5 | 6 | cors_proxy.createServer({ 7 | originWhitelist: [], // Allow all origins 8 | requireHeader: ['origin', 'x-requested-with'], 9 | removeHeaders: ['cookie', 'cookie2'] 10 | }).listen(port, host, () => { 11 | console.log('Running CORS Anywhere on ' + host + ':' + port); 12 | }); 
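A client uses the proxy above by appending the target URL directly to the service URL, and — because of the `requireHeader` option in index.js — every request must carry `origin` and `x-requested-with` headers. A sketch of composing such a request (in Python for consistency with the rest of this repo; the Cloud Run service URL and origin are placeholders):

```python
def build_proxied_request(proxy_url, target_url):
    """Compose a cors-anywhere request: the target URL is appended to the
    proxy URL, and the headers required by the requireHeader option are set."""
    return {
        "url": proxy_url.rstrip("/") + "/" + target_url,
        "headers": {
            "origin": "https://my-app.example.com",  # placeholder origin
            "x-requested-with": "XMLHttpRequest",
        },
    }
```

The resulting dict can be fed to any HTTP client, e.g. `requests.get(req["url"], headers=req["headers"])`; cors-anywhere forwards the request to the embedded target URL and adds CORS headers to the response.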
-------------------------------------------------------------------------------- /011-cloud-run-cors-anywhere/package-lock.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "cors-anywhere", 3 | "version": "0.0.0", 4 | "lockfileVersion": 1, 5 | "requires": true, 6 | "dependencies": { 7 | "cors-anywhere": { 8 | "version": "0.4.4", 9 | "resolved": "https://registry.npmjs.org/cors-anywhere/-/cors-anywhere-0.4.4.tgz", 10 | "integrity": "sha512-8OBFwnzMgR4mNrAeAyOLB2EruS2z7u02of2bOu7i9kKYlZG+niS7CTHLPgEXKWW2NAOJWRry9RRCaL9lJRjNqg==", 11 | "requires": { 12 | "http-proxy": "1.11.1", 13 | "proxy-from-env": "0.0.1" 14 | } 15 | }, 16 | "eventemitter3": { 17 | "version": "1.2.0", 18 | "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-1.2.0.tgz", 19 | "integrity": "sha512-DOFqA1MF46fmZl2xtzXR3MPCRsXqgoFqdXcrCVYM3JNnfUeHTm/fh/v/iU7gBFpwkuBmoJPAm5GuhdDfSEJMJA==" 20 | }, 21 | "http-proxy": { 22 | "version": "1.11.1", 23 | "resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.11.1.tgz", 24 | "integrity": "sha512-qz7jZarkVG3G6GMq+4VRJPSN4NkIjL4VMTNhKGd8jc25BumeJjWWvnY3A7OkCGa8W1TTxbaK3dcE0ijFalITVA==", 25 | "requires": { 26 | "eventemitter3": "1.x.x", 27 | "requires-port": "0.x.x" 28 | } 29 | }, 30 | "proxy-from-env": { 31 | "version": "0.0.1", 32 | "resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-0.0.1.tgz", 33 | "integrity": "sha512-B9Hnta3CATuMS0q6kt5hEezOPM+V3dgaRewkFtFoaRQYTVNsHqUvFXmndH06z3QO1ZdDnRELv5vfY6zAj/gG7A==" 34 | }, 35 | "requires-port": { 36 | "version": "0.0.1", 37 | "resolved": "https://registry.npmjs.org/requires-port/-/requires-port-0.0.1.tgz", 38 | "integrity": "sha512-AzPDCliPoWDSvEVYRQmpzuPhGGEnPrQz9YiOEvn+UdB9ixBpw+4IOZWtwctmpzySLZTy7ynpn47V14H4yaowtA==" 39 | } 40 | } 41 | } 42 | -------------------------------------------------------------------------------- /011-cloud-run-cors-anywhere/package.json: 
-------------------------------------------------------------------------------- 1 | { 2 | "name": "cors-anywhere", 3 | "version": "0.0.0", 4 | "main": "index.js", 5 | "private": true, 6 | "engines": { 7 | "node": "12 || 14" 8 | }, 9 | "scripts": { 10 | "start": "node index.js" 11 | }, 12 | "dependencies": { 13 | "cors-anywhere": "^0.4.4" 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /012-backend-alerts/.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | *.zip -------------------------------------------------------------------------------- /012-backend-alerts/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "alerts":{ 3 | "notification_channels": ["projects/PROJECT/notificationChannels/CHANNEL_ID"] 4 | } 5 | } -------------------------------------------------------------------------------- /012-backend-alerts/README.md: -------------------------------------------------------------------------------- 1 | # Backend Alerts 2 | 3 | This is a tutorial on how to deploy a cloudfunction as well as various GCP monitoring alerts. 4 | 5 | ## Written Tutorial 6 | 7 | [Tutorial](https://engineering.premise.com/gcp-alerts-the-easy-way-alerting-for-cloudfunctions-and-cloudrun-using-goblet-62bdf2126ef6) 8 | 9 | ## Install 10 | 11 | * pip install goblet-gcp 12 | 13 | ## Prerequisites 14 | 15 | * GCP Account 16 | * GCP CLI 17 | * Python 3.7 or above 18 | 19 | ## Config 20 | 21 | Enter a Notification Channel in `config.json` if you would like the alert to trigger a channel. Otherwise you can remove the alerting section. 22 | 23 | ## Deploy 24 | 25 | Run `goblet deploy --project {PROJECT} --location {REGION}` to deploy your application. 26 | 27 | ## Cleanup 28 | 29 | Run `goblet destroy --project {PROJECT} --location {REGION}` to cleanup your application. 
30 | -------------------------------------------------------------------------------- /012-backend-alerts/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, goblet_entrypoint, Response 2 | from goblet.infrastructures.alerts import ( 3 | MetricCondition, 4 | LogMatchCondition, 5 | CustomMetricCondition, 6 | ) 7 | import logging 8 | import time 9 | 10 | app = Goblet(function_name="example-cloudfunction-alerts") 11 | goblet_entrypoint(app) 12 | 13 | app.log.setLevel(logging.DEBUG) # configure goblet logger level 14 | 15 | # Example Metric Alert for the cloudfunction metric execution_times with a threshold of 1000 ms 16 | app.alert( 17 | "metric", 18 | conditions=[ 19 | MetricCondition( 20 | "test", 21 | metric="cloudfunctions.googleapis.com/function/execution_times", 22 | value=1000, 23 | aggregations=[ 24 | { 25 | "alignmentPeriod": "300s", 26 | "crossSeriesReducer": "REDUCE_NONE", 27 | "perSeriesAligner": "ALIGN_PERCENTILE_50", 28 | } 29 | ], 30 | ) 31 | ], 32 | ) 33 | 34 | # Example Log Match alert that will trigger an incident for any ERROR-severity logs 35 | app.alert("log-error", conditions=[LogMatchCondition("error", "severity>=ERROR")]) 36 | 37 | # Example Metric Alert that creates a custom metric for errors and creates an alert with a threshold of 1 38 | app.alert( 39 | "custom-error", 40 | conditions=[ 41 | CustomMetricCondition( 42 | "custom-metric", 43 | metric_filter="severity=(ERROR OR CRITICAL OR ALERT OR EMERGENCY)", 44 | value=1, 45 | ) 46 | ], 47 | ) 48 | 49 | 50 | @app.http() 51 | def main(request): 52 | # Set a delay 53 | delay = app.current_request.headers.get("X-DELAY", 0) 54 | time.sleep(int(delay)) 55 | 56 | # Raise error 57 | error = app.current_request.headers.get("X-ERROR", None) 58 | if error: 59 | app.log.error("ERROR") 60 | return Response("ERROR", status_code=500) 61 | 62 | return "200" 63 | --------------------------------------------------------------------------------
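The `main` handler above is driven entirely by two request headers: `X-DELAY` sleeps before responding (useful for pushing `execution_times` past the 1000 ms metric threshold), and `X-ERROR` forces an ERROR log plus a 500 response (which trips the log-match and custom-metric alerts). A minimal sketch of that branch logic against a stubbed request object, so the behavior can be checked without deploying:

```python
import time

class StubRequest:
    """Stand-in for the framework request object; only .headers is used here."""
    def __init__(self, headers=None):
        self.headers = headers or {}

def handle(request):
    # Mirrors the handler: optional delay, then optional forced error.
    time.sleep(int(request.headers.get("X-DELAY", 0)))
    if request.headers.get("X-ERROR"):
        return ("ERROR", 500)
    return ("200", 200)

print(handle(StubRequest()))                  # -> ('200', 200)
print(handle(StubRequest({"X-ERROR": "1"})))  # -> ('ERROR', 500)
```

Against the deployed function, the same behavior is exercised by sending the headers with curl, e.g. `-H "X-DELAY: 2"` or `-H "X-ERROR: 1"`.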
/012-backend-alerts/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 -------------------------------------------------------------------------------- /013-bigquery-remote-functions/README.md: -------------------------------------------------------------------------------- 1 | # BigQuery Remote Functions 2 | 3 | This is a tutorial on how to deploy BigQuery remote functions. 4 | 5 | ## Written Tutorial 6 | 7 | [Tutorial](https://engineering.premise.com/tutorial-deploying-bigquery-remote-functions-9040316d9d3e) 8 | 9 | ## Install 10 | 11 | * pip install goblet-gcp 12 | 13 | ## Prerequisites 14 | 15 | * GCP Account 16 | * GCP CLI 17 | * Python 3.7 or above 18 | 19 | ## Config 20 | 21 | You must have a dataset created in BigQuery. In this tutorial the dataset is called `example-data-set`. 22 | 23 | ## Deploy 24 | 25 | Run `goblet deploy --project {PROJECT} --location {REGION}` to deploy your application. 26 | 27 | ## Cleanup 28 | 29 | Run `goblet destroy --project {PROJECT} --location {REGION}` to cleanup your application.
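Under the hood, BigQuery invokes a remote function over HTTP: it POSTs a JSON body whose `calls` field holds one argument list per row, and expects a JSON response with a matching `replies` list. goblet's `bqremotefunction` decorator takes care of this wrapping; the sketch below just illustrates the contract for an `x * y` function:

```python
# Sketch of the BigQuery remote-function request/response contract.
def example_function(x: int, y: int) -> int:
    return x * y

def handle_bq_request(body: dict) -> dict:
    # BigQuery sends {"calls": [[x1, y1], [x2, y2], ...]} and expects
    # {"replies": [r1, r2, ...]} with one reply per call, in order.
    return {"replies": [example_function(*args) for args in body["calls"]]}

print(handle_bq_request({"calls": [[2, 3], [4, 5]]}))  # -> {'replies': [6, 20]}
```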
30 | -------------------------------------------------------------------------------- /013-bigquery-remote-functions/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, goblet_entrypoint 2 | 3 | app = Goblet(function_name="example-remote-function") 4 | goblet_entrypoint(app) 5 | 6 | @app.bqremotefunction( 7 | dataset_id="example-data-set" 8 | ) 9 | def example_function(x: int, y: int) -> int: 10 | return x*y -------------------------------------------------------------------------------- /013-bigquery-remote-functions/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 -------------------------------------------------------------------------------- /014-goblet-private-services/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "cloudrun": {}, 3 | "vpcconnector": { 4 | "ipCidrRange": "10.32.1.0/28" 5 | }, 6 | "redis": { 7 | "connectMode": "PRIVATE_SERVICE_ACCESS", 8 | "authorizedNetwork": "projects/{PROJECT}/global/networks/default" 9 | } 10 | } -------------------------------------------------------------------------------- /014-goblet-private-services/Dockerfile: -------------------------------------------------------------------------------- 1 | # https://hub.docker.com/_/python 2 | FROM python:3.10 3 | 4 | # setup environment 5 | ENV APP_HOME /app 6 | WORKDIR $APP_HOME 7 | 8 | # install keyring backend to handle artifact registry authentication 9 | # RUN pip install keyrings.google-artifactregistry-auth==1.1.1 10 | 11 | # Install dependencies. 12 | COPY requirements.txt . 13 | RUN pip install -r requirements.txt 14 | 15 | # Copy local code to the container image. 16 | COPY . . 
17 | -------------------------------------------------------------------------------- /014-goblet-private-services/README.md: -------------------------------------------------------------------------------- 1 | # Private Service Access using Goblet 2 | 3 | This is a tutorial for connecting to a Redis instance over an internal IP, leveraging GCP Cloud Run and a VPC Connector. 4 | 5 | ## Written Tutorial 6 | [Tutorial](https://engineering.premise.com/tutorial-connecting-cloudrun-and-cloudfunctions-to-redis-and-other-private-services-using-goblet-5782f80da6a0) 7 | 8 | ## Install 9 | 10 | * pip install goblet-gcp 11 | 12 | ## Prerequisites 13 | 14 | * GCP Account 15 | * GCP CLI 16 | * Python 3.7 or above 17 | 18 | ## Config 19 | 20 | For the Redis configuration, specify PROJECT in authorizedNetwork in `config.json`. 21 | 22 | For the VPC Connector configuration, update ipCidrRange in `config.json` if necessary; the default is 10.32.1.0/28. 23 | 24 | ## Deploy 25 | 26 | Run `goblet deploy --project {PROJECT} --location {REGION}` to deploy your application. 27 | 28 | ## Cleanup 29 | 30 | Run `goblet destroy --project {PROJECT} --location {REGION}` to cleanup your application.
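The service in this tutorial reads `REDIS_HOST`/`REDIS_PORT` from the environment and increments a Redis counter on each visit to `/redis`. A sketch of that counter logic against a minimal in-memory stand-in for the Redis client, so the route's behavior can be checked without a live instance:

```python
class FakeRedis:
    """In-memory stand-in exposing only the incr() call the route uses."""
    def __init__(self):
        self._data = {}

    def incr(self, key, amount=1):
        self._data[key] = self._data.get(key, 0) + amount
        return self._data[key]

redis_client = FakeRedis()

def index():
    # Mirrors the /redis route: increment and report the visitor counter.
    return {"Visitor number": redis_client.incr("counter", 1)}

print(index())  # -> {'Visitor number': 1}
print(index())  # -> {'Visitor number': 2}
```

Swapping `FakeRedis` for `redis.StrictRedis` gives the deployed behavior; Redis's `INCR` is atomic, so concurrent visitors each get a distinct count.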
-------------------------------------------------------------------------------- /014-goblet-private-services/main.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import redis 4 | from goblet import Goblet, goblet_entrypoint 5 | 6 | app = Goblet( 7 | function_name="goblet-private-services", backend="cloudrun", routes_type="cloudrun" 8 | ) 9 | goblet_entrypoint(app) 10 | 11 | app.vpcconnector("goblet-vpcconnector") 12 | app.redis("goblet-redis") 13 | 14 | redis_host = os.environ.get("REDIS_HOST", "localhost") 15 | redis_port = int(os.environ.get("REDIS_PORT", 6379)) 16 | redis_client = redis.StrictRedis(host=redis_host, port=redis_port) 17 | 18 | 19 | @app.route("/redis", methods=["GET"]) 20 | def index(): 21 | value = redis_client.incr("counter", 1) 22 | return {"Visitor number": value} 23 | -------------------------------------------------------------------------------- /014-goblet-private-services/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 2 | redis==4.6.0 -------------------------------------------------------------------------------- /015-goblet-cloudtask/.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | .idea 3 | *.zip 4 | *_openapi_spec.yml 5 | .goblet/config.json -------------------------------------------------------------------------------- /015-goblet-cloudtask/.goblet/config.json.sample: -------------------------------------------------------------------------------- 1 | { 2 | "cloudrun_revision": { 3 | "serviceAccount": "cloudrun@{PROJECT}.iam.gserviceaccount.com" 4 | }, 5 | "cloudtask": { 6 | "serviceAccount": "cloudtask@{PROJECT}.iam.gserviceaccount.com" 7 | }, 8 | "cloudtaskqueue": { 9 | "my-queue": { 10 | "rateLimits": { 11 | "maxDispatchesPerSecond": 500, 12 | "maxBurstSize": 100, 13 | "maxConcurrentDispatches": 1000 14 | }, 15 | "retryConfig": { 16 | 
"maxAttempts": 10, 17 | "minBackoff": "0.100s", 18 | "maxBackoff": "3600s", 19 | "maxDoublings": 16 20 | } 21 | } 22 | } 23 | } -------------------------------------------------------------------------------- /015-goblet-cloudtask/Dockerfile: -------------------------------------------------------------------------------- 1 | # https://hub.docker.com/_/python 2 | FROM python:3.10-slim 3 | 4 | # setup environment 5 | ENV APP_HOME /app 6 | WORKDIR $APP_HOME 7 | 8 | # install keyring backend to handle artifact registry authentication 9 | # RUN pip install keyrings.google-artifactregistry-auth==1.1.1 10 | 11 | # Install dependencies. 12 | COPY requirements.txt . 13 | RUN pip install -r requirements.txt 14 | 15 | # Copy local code to the container image. 16 | COPY . . 17 | -------------------------------------------------------------------------------- /015-goblet-cloudtask/README.md: -------------------------------------------------------------------------------- 1 | # CloudTasks and CloudTaskQueues using Goblet 2 | 3 | This is a tutorial on how to use Goblet to create and handle [CloudTasks](https://cloud.google.com/tasks/docs). 
4 | 5 | ## Written Tutorial 6 | [Tutorial](https://engineering.premise.com/deploy-and-handle-gcp-cloudtasks-with-goblet-in-minutes-ee138e9dd2c5) 7 | 8 | ## Prerequisites 9 | 10 | * GCP Account 11 | * GCP CLI 12 | * Python 3.7 or above 13 | 14 | ## Setup 15 | ### Environment Variables 16 | ```shell 17 | export GOBLET_PROJECT="" # goblet-cloudtask 18 | export GOBLET_LOCATION="" # us-central1 19 | export MY_GCP_ACCOUNT=`gcloud auth list --filter=status:ACTIVE --format="value(account)"` 20 | # set the project 21 | gcloud config set project $GOBLET_PROJECT 22 | ``` 23 | ### Enable APIs 24 | ```shell 25 | gcloud services enable run.googleapis.com 26 | gcloud services enable cloudfunctions.googleapis.com 27 | gcloud services enable cloudbuild.googleapis.com 28 | gcloud services enable iam.googleapis.com 29 | gcloud services enable artifactregistry.googleapis.com 30 | gcloud services enable cloudresourcemanager.googleapis.com 31 | gcloud services enable apigateway.googleapis.com 32 | gcloud services enable cloudtasks.googleapis.com 33 | ``` 34 | 35 | ### Create and Configure Service Accounts 36 | ```shell 37 | # service account to deploy infrastructure and code 38 | gcloud iam service-accounts create deployer --display-name="deployer" 39 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 40 | --member="serviceAccount:deployer@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 41 | --role="roles/apigateway.admin" 42 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 43 | --member="serviceAccount:deployer@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 44 | --role="roles/cloudbuild.serviceAgent" 45 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 46 | --member="serviceAccount:deployer@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 47 | --role="roles/run.serviceAgent" 48 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 49 | --member="serviceAccount:deployer@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 50 | --role="roles/cloudfunctions.developer" 
51 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 52 | --member="serviceAccount:deployer@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 53 | --role="roles/run.developer" 54 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 55 | --member="serviceAccount:deployer@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 56 | --role="roles/cloudtasks.queueAdmin" 57 | 58 | # service account to authenticate the CloudTask against CloudRun 59 | gcloud iam service-accounts create cloudtask --display-name="cloudtask" 60 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 61 | --member="serviceAccount:cloudtask@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 62 | --role="roles/run.invoker" 63 | 64 | # service account to run the CloudRun Revision 65 | gcloud iam service-accounts create cloudrun --display-name="cloudrun" 66 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 67 | --member="serviceAccount:cloudrun@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 68 | --role="roles/cloudtasks.enqueuer" 69 | gcloud projects add-iam-policy-binding goblet-cloudtask \ 70 | --member="serviceAccount:cloudrun@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 71 | --role="roles/run.viewer" 72 | gcloud iam service-accounts add-iam-policy-binding \ 73 | cloudtask@${GOBLET_PROJECT}.iam.gserviceaccount.com \ 74 | --member="serviceAccount:cloudrun@${GOBLET_PROJECT}.iam.gserviceaccount.com" \ 75 | --role="roles/iam.serviceAccountUser" 76 | ``` 77 | ### Impersonate `deployer` Service Account 78 | ```shell 79 | ##### impersonate deployer service account 80 | gcloud iam service-accounts add-iam-policy-binding \ 81 | deployer@${GOBLET_PROJECT}.iam.gserviceaccount.com \ 82 | --member="user:${MY_GCP_ACCOUNT}" \ 83 | --role="roles/iam.serviceAccountTokenCreator" 84 | gcloud auth application-default login \ 85 | --impersonate-service-account=deployer@${GOBLET_PROJECT}.iam.gserviceaccount.com 86 | ``` 87 | 88 | ## Deploy 89 | ```shell 90 | git clone 
https://github.com/premisedata/gcp-tutorials 91 | cd gcp-tutorials/015-goblet-cloudtask 92 | sed "s/{PROJECT}/$GOBLET_PROJECT/g" .goblet/config.json.sample > .goblet/config.json 93 | 94 | python3 -m venv venv 95 | . venv/bin/activate 96 | (venv) pip install -r requirements.txt 97 | (venv) goblet deploy -l $GOBLET_LOCATION -p $GOBLET_PROJECT 98 | ``` 99 | 100 | ## Test 101 | ```shell 102 | SERVICE_URL=`gcloud run services list \ 103 | --filter=SERVICE:cloudtask-example --format="value(URL)"` 104 | 105 | curl $SERVICE_URL/enqueue \ 106 | -H "Authorization: Bearer $(gcloud auth print-identity-token)" 107 | ``` -------------------------------------------------------------------------------- /015-goblet-cloudtask/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, goblet_entrypoint 2 | 3 | app = Goblet(function_name="cloudtask-example", backend="cloudrun") 4 | goblet_entrypoint(app) 5 | 6 | client = app.cloudtaskqueue("my-queue") 7 | 8 | 9 | @app.route("/enqueue", methods=["GET"]) 10 | def enqueue(): 11 | payload = {"message": {"title": "enqueue"}} 12 | client.enqueue(target="my_target", payload=payload) 13 | return {} 14 | 15 | 16 | @app.cloudtasktarget(name="my_target") 17 | def my_target_handler(request): 18 | app.log.info(request.json) 19 | return {} 20 | -------------------------------------------------------------------------------- /015-goblet-cloudtask/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 2 | -------------------------------------------------------------------------------- /016-gcp-metrics-slack-alerts/.goblet/config.json.sample: -------------------------------------------------------------------------------- 1 | { 2 | "function_name": "{FUNCTION_NAME}", 3 | "cloudfunction": { 4 | "environmentVariables": { 5 | "CLOUDRUN_CPU_MIN_VALUE": "1000", 6 | "CLOUDRUN_MEMORY_MIN_VALUE": "512", 7 |
"CLOUDRUN_CPU_MIN_THRESHOLD": "10%", 8 | "CLOUDRUN_MEMORY_MIN_THRESHOLD": "10%", 9 | "SLACK_CHANNEL_ID": "{SLACK_CHANNEL_ID}", 10 | "CRON_EXPRESSION": "0 10 * 1 *", 11 | "DEBUG": "true" 12 | }, 13 | "secretEnvironmentVariables":[ 14 | { 15 | "key": "SLACK_BOT_TOKEN", 16 | "secret": "{SLACK_TOKEN}", 17 | "version": "latest" 18 | } 19 | ], 20 | "serviceAccountEmail": "{SERVICE_ACCOUNT_EMAIL}", 21 | "timeout": "540s" 22 | } 23 | } -------------------------------------------------------------------------------- /016-gcp-metrics-slack-alerts/README.md: -------------------------------------------------------------------------------- 1 | # Goblet GCP Metrics Slack Alerts 2 | This code sets up a scheduled job that checks the CPU and memory utilization of services in Google Cloud Platform (GCP) projects and sends a Slack message for any service that is under a defined utilization threshold. 3 | 4 | ## Written Tutorial 5 | [Tutorial](https://engineering.premise.com/tutorial-low-usage-alerting-on-slack-for-google-cloud-platform-gcp-cc68ac8ca4d) 6 | 7 | ## Prerequisites 8 | - Python 3.7 or later 9 | - A GCP project with active services 10 | - A Slack bot token for sending messages (the code posts via the Slack Web API, not a webhook) 11 | 12 | ## Installation 13 | 1. Clone this repository. 14 |
Install the required packages using the following command: 15 | ``` 16 | pip install -r requirements.txt 17 | ``` 18 | 19 | ## Usage 20 | You can run this project locally with `goblet local` after setting the required environment variables: 21 | 22 | - `CLOUDRUN_CPU_MIN_THRESHOLD` 23 | - `CLOUDRUN_MEMORY_MIN_THRESHOLD` 24 | - `SLACK_CHANNEL_ID` 25 | - `CRON_EXPRESSION` 26 | - `SLACK_BOT_TOKEN` 27 | 28 | ## Preview 29 | ![Slack message](./preview.png) -------------------------------------------------------------------------------- /016-gcp-metrics-slack-alerts/main.py: -------------------------------------------------------------------------------- 1 | import os 2 | import logging 3 | from goblet import Goblet, jsonify, goblet_entrypoint 4 | from goblet_gcp_client import Client 5 | from utils.cloudrun import CloudRun 6 | from utils.slack_message import send_slack_message 7 | from utils.utility_functions import merge_lists 8 | 9 | # goblet setup 10 | app = Goblet(function_name="goblet-gcp-metrics-slack-alerts") 11 | goblet_entrypoint(app) 12 | 13 | # set debug level 14 | app.log.setLevel(logging.DEBUG if os.environ.get("DEBUG", "false") == "true" else logging.INFO) 15 | 16 | # gcp client 17 | metrics_client = Client( 18 | "monitoring", 19 | "v3", 20 | calls="projects.timeSeries", 21 | parent_schema="projects/{project_id}" 22 | ) 23 | resource_manager_client = Client( 24 | "cloudresourcemanager", 25 | "v1", 26 | calls="projects", 27 | ) 28 | 29 | # scheduled job 30 | @app.schedule(os.environ.get("CRON_EXPRESSION", "0 15 * * 5")) 31 | def scheduled_job(): 32 | # Get active project ids 33 | active_projects = resource_manager_client.execute("list")["projects"] 34 | active_projects = filter( 35 | lambda project: project["lifecycleState"] == "ACTIVE", active_projects) 36 | active_projects = filter( 37 | lambda project: "sys" not in project["projectId"], active_projects) 38 | active_projects = list( 39 | map(lambda project: project["projectId"], active_projects)) 40 | 41 | for
project_id in active_projects: 42 | app.log.info(f"Checking project: {project_id}") 43 | services_under_cpu_threshold = [] 44 | services_under_memory_threshold = [] 45 | 46 | # Get services under cpu threshold 47 | services_under_cpu_threshold = CloudRun( 48 | app=app, 49 | metrics_client=metrics_client, 50 | query=""" 51 | fetch cloud_run_revision 52 | | metric 'run.googleapis.com/container/cpu/utilizations' 53 | | group_by 1w, 54 | [value_utilizations_percentile: percentile(value.utilizations, 99)] 55 | | every 1w 56 | | group_by [resource.service_name, resource.location], 57 | [value_utilizations_percentile_max: max(value_utilizations_percentile)] 58 | """, 59 | min_threshold=float(os.environ.get( 60 | "CLOUDRUN_CPU_MIN_THRESHOLD", "10%").replace("%", "")), 61 | min_config=int(os.environ.get("CLOUDRUN_CPU_MIN_VALUE", "1000")), 62 | project_id=project_id, 63 | value_field_name="cpu", 64 | ).services_under_threshold 65 | 66 | # Get services under memory threshold 67 | services_under_memory_threshold = CloudRun( 68 | app=app, 69 | metrics_client=metrics_client, 70 | query=""" 71 | fetch cloud_run_revision 72 | | metric 'run.googleapis.com/container/memory/utilizations' 73 | | group_by 1w, 74 | [value_utilizations_percentile: percentile(value.utilizations, 99)] 75 | | every 1w 76 | | group_by [resource.service_name, resource.location], 77 | [value_utilizations_percentile_max: max(value_utilizations_percentile)] 78 | """, 79 | min_threshold=float(os.environ.get( 80 | "CLOUDRUN_MEMORY_MIN_THRESHOLD", "10%").replace("%", "")), 81 | min_config=int(os.environ.get("CLOUDRUN_MEMORY_MIN_VALUE", "512")), 82 | project_id=project_id, 83 | value_field_name="memory", 84 | ).services_under_threshold 85 | 86 | services = merge_lists(services_under_cpu_threshold, services_under_memory_threshold) 87 | if services: 88 | send_slack_message(app, project_id, services) 89 | return jsonify("success") 90 | -------------------------------------------------------------------------------- 
/016-gcp-metrics-slack-alerts/preview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/premisedata/gcp-tutorials/99502f88a49edf0f219136832267ea875ff9f2b6/016-gcp-metrics-slack-alerts/preview.png -------------------------------------------------------------------------------- /016-gcp-metrics-slack-alerts/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 2 | goblet-gcp-client==0.1.6 3 | slack-sdk==3.21.3 4 | python-dotenv==1.0.0 -------------------------------------------------------------------------------- /016-gcp-metrics-slack-alerts/utils/cloudrun.py: -------------------------------------------------------------------------------- 1 | import math 2 | import re 3 | from goblet_gcp_client import Client 4 | 5 | def convert_units(value, value_field_name):
 6 | number = int(re.findall(r"\d+", value)[0]) 7 | if value_field_name == "cpu": 8 | if "m" not in value: 9 | return number * 1000 10 | else: 11 | return number 12 | elif value_field_name == "memory": 13 | if "Gi" in value: 14 | return number * 1024 15 | else: 16 | return number 17 | 18 | class CloudRun: 19 | """Cloud Run Usage Alerts""" 20 | 21 | def __init__(self, app, metrics_client, query, min_threshold, min_config, project_id="{project_id}", value_field_name="value"): 22 | """Constructor for Cloud Run usage alerts""" 23 | self.services_under_threshold = self.get_services_under_threshold( 24 | app, metrics_client, query, min_threshold, min_config, project_id, value_field_name) 25 | 26 | 27 | def get_services_under_threshold(self, app, metrics_client, query, min_threshold, min_config, project_id="{project_id}", value_field_name="value"): 28 | cloud_run_client = Client( 29 | "run", 30 | "v2", 31 | calls="projects.locations.services", 32 | ) 33 | 34 | # get cpu usage by service 35 | usage_by_service = metrics_client.execute("query", parent_key="name",
parent_schema=f"projects/{project_id}", params={ 36 | "body": { 37 | "query": query, 38 | }, 39 | }) 40 | 41 | app.log.debug(usage_by_service) 42 | services = [] 43 | if usage_by_service["timeSeriesDescriptor"]: 44 | # get unit to get the correct percentage 45 | point_unit = eval(usage_by_service["timeSeriesDescriptor"] 46 | ["pointDescriptors"][0]["unit"].split(".")[0].replace("^", "**")) 47 | # Build a list of services that are under the threshold 48 | for service in usage_by_service["timeSeriesData"]: 49 | service_name = service["labelValues"][0]["stringValue"] 50 | location = service["labelValues"][1]["stringValue"] 51 | 52 | try: 53 | containers = cloud_run_client.execute( 54 | "get", parent_key="name", parent_schema=f"projects/{project_id}/locations/{location}/services/{service_name}")["template"]["containers"] 55 | except Exception as e: 56 | app.log.info( 57 | f"Skipping service {service_name} because it does not exist") 58 | continue 59 | 60 | if len(containers) > 1: 61 | app.log.info( 62 | f"Skipping service {service_name} because it has more than one container") 63 | continue 64 | 65 | limit = convert_units(containers[0]['resources']['limits'][value_field_name], value_field_name) 66 | app.log.debug(f"service {service_name} {value_field_name} limit: {limit} original: {containers[0]['resources']['limits'][value_field_name]}") 67 | 68 | should_alert = ( 69 | service["pointData"][0]["values"][0]["doubleValue"] * point_unit < min_threshold and 70 | limit > min_config 71 | ) 72 | 73 | if should_alert: 74 | services.append({ 75 | "name": service_name, 76 | f"{value_field_name}": math.ceil(service["pointData"][0]["values"][0]["doubleValue"] * point_unit), 77 | }) 78 | 79 | return services 80 | -------------------------------------------------------------------------------- /016-gcp-metrics-slack-alerts/utils/slack_message.py: -------------------------------------------------------------------------------- 1 | import certifi 2 | import ssl 3 | import os 4 | from 
slack_sdk import WebClient 5 | # slack client (Slack Web API, authenticated with a bot token) 6 | ssl_context = ssl.create_default_context(cafile=certifi.where()) 7 | slack_client = WebClient(token=os.environ.get( 8 | "SLACK_BOT_TOKEN"), ssl=ssl_context) 9 | slack_channel_id = os.environ.get("SLACK_CHANNEL_ID") 10 | 11 | 12 | def send_slack_message(app, project_id, services): 13 | blocks = [ 14 | { 15 | "type": "header", 16 | "text": {"type": "plain_text", "text": f"GCP Services Low Usage Alert - Project: {project_id}", "emoji": True,}, 17 | }, 18 | { 19 | "type": "divider", 20 | }, 21 | { 22 | "type": "section", 23 | "text": { 24 | "type": "mrkdwn", 25 | "text": "The following services are under the threshold:", 26 | }, 27 | }, 28 | ] 29 | 30 | fields = [] 31 | counter = 0 32 | for service in services: 33 | field = { 34 | "type": "mrkdwn", 35 | "text": f"> *Service*: {service['name']}", 36 | } 37 | if 'cpu' in service: 38 | field["text"] += f"\n> *CPU*: {service['cpu']}%" 39 | if 'memory' in service: 40 | field["text"] += f"\n> *Memory*: {service['memory']}%" 41 | fields.append(field) 42 | 43 | counter += 1 44 | # if the counter is divisible by 10, create a new block 45 | if counter % 10 == 0: 46 | blocks.append({ 47 | "type": "section", 48 | "fields": fields 49 | }) 50 | 51 | # reset the fields list 52 | fields = [] 53 | 54 | # append the final block (if there are any remaining fields) 55 | if fields: 56 | blocks.append({ 57 | "type": "section", 58 | "fields": fields 59 | }) 60 | 61 | app.log.debug(f"Sending slack message: {blocks}") 62 | 63 | slack_client.chat_postMessage( 64 | channel=slack_channel_id, text="fallback", blocks=blocks, 65 | ) 66 | -------------------------------------------------------------------------------- /016-gcp-metrics-slack-alerts/utils/utility_functions.py: -------------------------------------------------------------------------------- 1 | def merge_lists(list_1, list_2): 2 | for dict1 in list_1: 3 | for dict2 in list_2: 4 | if dict1['name'] == dict2['name']: 5 |
dict1.update(dict2) 6 | return list_1 -------------------------------------------------------------------------------- /017-goblet-iam/.goblet/autogen_iam_role.json: -------------------------------------------------------------------------------- 1 | { 2 | "roleId": "Goblet_Deployment_Role_goblet_iam_tutorial", 3 | "role": { 4 | "title": "Deployment role for goblet-iam-tutorial", 5 | "description": "Goblet generated role", 6 | "includedPermissions": [ 7 | "cloudfunctions.functions.create", 8 | "cloudfunctions.functions.delete", 9 | "cloudfunctions.functions.get", 10 | "cloudfunctions.functions.getIamPolicy", 11 | "cloudfunctions.functions.list", 12 | "cloudfunctions.functions.setIamPolicy", 13 | "cloudfunctions.functions.sourceCodeSet", 14 | "cloudfunctions.functions.update", 15 | "cloudfunctions.operations.get", 16 | "cloudscheduler.jobs.create", 17 | "cloudscheduler.jobs.delete", 18 | "cloudscheduler.jobs.get", 19 | "cloudscheduler.jobs.list", 20 | "cloudscheduler.jobs.update", 21 | "pubsub.subscriptions.create", 22 | "pubsub.subscriptions.delete", 23 | "pubsub.subscriptions.get", 24 | "pubsub.subscriptions.list", 25 | "pubsub.subscriptions.update", 26 | "pubsub.topics.create", 27 | "pubsub.topics.delete", 28 | "pubsub.topics.get", 29 | "pubsub.topics.list", 30 | "pubsub.topics.update" 31 | ], 32 | "stage": "GA" 33 | } 34 | } -------------------------------------------------------------------------------- /017-goblet-iam/.goblet/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "pubsub": { 3 | "serviceAccountEmail": "sa_pubsub@goblet.iam.gserviceaccount.com" 4 | }, 5 | "scheduler": { 6 | "serviceAccount": "sa_scheduler@goblet.iam.gserviceaccount.com" 7 | } 8 | } -------------------------------------------------------------------------------- /017-goblet-iam/README.md: -------------------------------------------------------------------------------- 1 | # Goblet IAM 2 | 3 | A tutorial on how Goblet simplifies GCP
IAM by auto-generating required deployment permissions and creating a deployment service account. 4 | 5 | ## Written Tutorial 6 | 7 | [Tutorial](https://engineering.premise.com/easily-manage-iam-policies-for-serverless-rest-applications-in-gcp-with-goblet-f1580a97b74) 8 | 9 | ## Install 10 | 11 | * pip install goblet-gcp 12 | 13 | ## Prerequisites 14 | 15 | * GCP Account 16 | * GCP CLI 17 | * Python 3.7 or above 18 | 19 | 20 | ## Check and Enable GCP Services 21 | 22 | Run `goblet services check` to view all GCP services that are needed for this application. 23 | Run `goblet services enable` to enable any required services that are not yet enabled. 24 | 25 | ## View Required Permissions 26 | 27 | Run `goblet services autogen_iam`. This will create an IAM policy in `.goblet/autogen_iam_role.json`. 28 | In this example, this will create a role with permissions to deploy a `cloudfunction`, `scheduled job`, `pubsub_topic`, and `pubsub subscription`. Goblet will also add invoker permissions on the `cloudfunction` for the service accounts assigned to the scheduled job and pubsub subscription. 29 | 30 | ## Create Service Account 31 | 32 | Run `goblet services create_service_account -p PROJECT`. 33 | This will create a custom role and service account based on your application name.
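As a quick sanity check before deploying, the generated role file can be inspected with a few lines of plain Python. This is a minimal sketch; the permission list below is abbreviated from the `.goblet/autogen_iam_role.json` shown above:

```python
import json

# Abbreviated copy of the .goblet/autogen_iam_role.json generated by
# `goblet services autogen_iam` (see the full file above)
autogen_role = json.loads("""
{
  "roleId": "Goblet_Deployment_Role_goblet_iam_tutorial",
  "role": {
    "title": "Deployment role for goblet-iam-tutorial",
    "includedPermissions": [
      "cloudfunctions.functions.create",
      "cloudscheduler.jobs.create",
      "pubsub.subscriptions.create",
      "pubsub.topics.create"
    ],
    "stage": "GA"
  }
}
""")

# Group permissions by service prefix to see which GCP APIs the deployment touches
services = sorted({p.split(".")[0] for p in autogen_role["role"]["includedPermissions"]})
print(services)  # ['cloudfunctions', 'cloudscheduler', 'pubsub']
```

Each service prefix here maps to one of the resources declared in `main.py` (the cloudfunction itself, the scheduled job, and the pubsub topic/subscription).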
34 | 35 | ## Deploy 36 | 37 | Run `goblet deploy -p PROJECT -l LOCATION` 38 | 39 | ## Cleanup 40 | 41 | Run `goblet destroy -p PROJECT -l LOCATION` 42 | -------------------------------------------------------------------------------- /017-goblet-iam/main.py: -------------------------------------------------------------------------------- 1 | import logging 2 | from goblet import Goblet, goblet_entrypoint 3 | 4 | # goblet setup 5 | app = Goblet(function_name="goblet-iam-tutorial") 6 | goblet_entrypoint(app) 7 | 8 | # set debug level 9 | app.log.setLevel(logging.DEBUG) 10 | 11 | # Define resources 12 | 13 | # Pubsub Topic 14 | app.pubsub_topic("test") 15 | 16 | # Pubsub Subscription 17 | @app.pubsub_subscription(topic="test", use_subscription=True) 18 | def subscription(data): 19 | return "success" 20 | 21 | # Cloud Scheduler 22 | @app.schedule("1 * * * *") 23 | def schedule(): 24 | return "success" -------------------------------------------------------------------------------- /017-goblet-iam/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.3 -------------------------------------------------------------------------------- /018-dataform-bq-remote-functions/README.md: -------------------------------------------------------------------------------- 1 | # Dataform and BigQuery Remote Functions 2 | 3 | A tutorial on how to deploy BigQuery Remote Functions for a Dataform pipeline.
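The remote function deployed in this tutorial (see `main.py` below) is a simple value transform. Its core logic can be exercised locally as plain Python, without the goblet decorator:

```python
# Local sketch of the bqremotefunction handler from main.py,
# minus the @app.bqremotefunction decorator
def tutorial(value: int, type: str) -> int:
    if type == "double":
        return value * 2
    if type == "square":
        return value**2
    if type == "zero":
        return 0
    return value


print(tutorial(3, "double"))  # 6
print(tutorial(3, "square"))  # 9
```

Once deployed, the same transform is invoked per value from BigQuery SQL through the registered remote function in the `tutorial` dataset.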
4 | 5 | ## Written Tutorial 6 | 7 | [Tutorial](https://engineering.premise.com/serverless-data-pipelines-in-gcp-using-dataform-and-bigquery-remote-functions-9ee235d0cb18) 8 | 9 | ## Install 10 | 11 | * pip install goblet-gcp 12 | 13 | ## Deploy 14 | 15 | Run `goblet deploy -p PROJECT -l LOCATION` 16 | 17 | ## Cleanup 18 | 19 | Run `goblet destroy -p PROJECT -l LOCATION` 20 | -------------------------------------------------------------------------------- /018-dataform-bq-remote-functions/main.py: -------------------------------------------------------------------------------- 1 | from goblet import Goblet, goblet_entrypoint 2 | 3 | app = Goblet(function_name="goblet") 4 | goblet_entrypoint(app) 5 | 6 | 7 | @app.bqremotefunction(dataset_id="tutorial", location="US") 8 | def tutorial(value: int, type: str) -> int: 9 | if type == "double": 10 | return value * 2 11 | if type == "square": 12 | return value**2 13 | if type == "zero": 14 | return 0 15 | else: 16 | return value 17 | -------------------------------------------------------------------------------- /018-dataform-bq-remote-functions/requirements.txt: -------------------------------------------------------------------------------- 1 | goblet-gcp==0.11.4 -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 
14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # GCP tutorials 2 | 3 | The repo contains the source code for GCP tutorials written by Premise. 4 | 5 | Please clone and star this repo to stay up to date on changes. 6 | 7 | ## Packages by Premise 8 | 9 | [Goblet](https://github.com/goblet/goblet): An easy-to-use framework that enables developers to quickly spin up fully featured REST APIs with python on GCP 10 | 11 | [Goblet Workflows](https://github.com/goblet/goblet_workflows): A wrapper around GCP Workflows, which is a fully-managed orchestration platform that executes services in an order that you define: a workflow. These workflows can combine services including custom services hosted on Cloud Run or Cloud Functions, Google Cloud services such as Cloud Vision AI and BigQuery, and any HTTP-based API. 
12 | 13 | [Slack Approvals](https://github.com/premisedata/slack-approval): This Python library serves as the basis for managing and deploying a lightweight approval workflow based on Slack and GCP. This library contains two key classes, SlackRequest and SlackProvision, as well as the logic to deploy them to GCP (see tutorial 008). 14 | 15 | [Goblet GCP Client](https://github.com/goblet/goblet_gcp_client): Goblet GCP Client is a util library with support for creating GCP resource clients, GCP integration tests, and other related utils. 16 | 17 | [Backstage Dynamic Data Form Extension](https://github.com/premisedata/dynamic-data-form-extension): Backstage plugins that allow you to fetch data dynamically from an endpoint and populate Backstage templates. 18 | 19 | ## Tutorials 20 | **001 - Forward Github Security Findings to Security Command Center** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/001-github-to-scc)] [[Written](https://engineering.premise.com/tutorial-publishing-github-findings-to-security-command-center-2d1749f530bc)] - This tutorial shows how to automatically forward Github security findings to GCP's Security Command Center. 21 | 22 | **002 - Using Cloud Custodian Metric Filters to safely find and remove cloud resources** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/002-cloud-custodian-metric-filters)] [[Written](https://engineering.premise.com/cleaning-up-your-google-cloud-environment-safety-guaranteed-2de51fb8620a)] - Learn how to combine GCP's powerful built-in metrics with Cloud Custodian to easily discover potential security risks, evaluate the impact those resources currently have in your environment, and then safely address those issues.
23 | 24 | **003 - Setup Daily Cost Alerts on your Daily GCP Spend** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/003-cost-alerts)] [[Written](https://engineering.premise.com/tutorial-cost-spike-alerting-for-google-cloud-platform-gcp-46fd26ae3f6a)] - Deploy a cloudfunction that checks your daily GCP spend for any large cost increases or decreases in a service and notifies a Slack channel. 25 | 26 | **004 - Create Dynamic Serverless Loadbalancers** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/004-dynamic-serverless-loadbalancer)] [[Written](https://austennovis.medium.com/e15751853312)] - A simple, scalable, and customizable solution to deploy and maintain load balancers for any number of serverless applications. 27 | 28 | **005 - Bucket Replication** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/005-bucket-replication)] [[Written](https://engineering.premise.com/tutorial-bucket-replication-for-google-cloud-platform-gcp-cloud-storage-44622c59299c)] - A simple way to set up bucket replication, leveraging goblet's use of decorators to set up cloud function triggers. 29 | 30 | **006 - Service Account Key Rotater** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/006-service-account-key-rotater)] [[Written](https://engineering.premise.com/tutorial-rotating-service-account-keys-using-secret-manager-5f4dc7142d4b)] - A pluginable solution to automate Service Account Key rotations that currently supports Github Secrets and Cloud Storage. 31 | 32 | **007 - BigQuery Usage Detection** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/007-bigquery-usage-detection)] [[Written](https://engineering.premise.com/tutorial-detection-of-high-usage-bigquery-jobs-on-google-cloud-platform-gcp-aadb591eefe5)] - A cloud function to retrieve processing info about your BigQuery jobs, send a Slack message to inform of most expensive jobs, and upload a detailed summary to a Cloud Storage bucket.
33 | 34 | **008 - Slack Approval Process** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/008-slack-approval-process)] [[Written](https://engineering.premise.com/tutorial-setting-up-approval-processes-with-slack-apps-d325aee31763)] - A quick walk-through on setting up an approval process facilitated by Slack. 35 | 36 | **009 - Cloudrun Cloudbuild Configs** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/009-cloudrun-cloudbuild-configs)] [[ Written ](https://engineering.premise.com/traffic-revisions-and-artifact-registries-in-google-cloud-run-made-easy-with-goblet-1a3fa86de25c)] - A quick walk-through on configuring cloudrun builds with goblet, including traffic, secrets, and artifact registries. 37 | 38 | **010 - Cloudrun Jobs** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/010-cloudrun-jobs)] [[ Written ](https://medium.com/engineering-at-premise/tutorial-deploying-cloud-run-jobs-9435466b26f5)] - A tutorial on how to deploy cloudrun jobs using goblet. 39 | 40 | **011 - Cloudrun Cors Anywhere** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/011-cloud-run-cors-anywhere)] [[ Written ](https://engineering.premise.com/tutorial-handling-cors-in-backstage-api-swagger-documentation-hosted-on-cloud-run-gcp-65584811ec0d)] - A tutorial on how to deploy cors-anywhere proxy on Cloud Run. 41 | 42 | **012 - Backend Alerts** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/012-backend-alerts)] [[ Written ](https://engineering.premise.com/gcp-alerts-the-easy-way-alerting-for-cloudfunctions-and-cloudrun-using-goblet-62bdf2126ef6)] - A tutorial on how to deploy backend alerts alongside your codebase using goblet.
43 | 44 | **013 - BigQuery Remote Functions** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/013-bigquery-remote-functions)] [[ Written ](https://engineering.premise.com/tutorial-deploying-bigquery-remote-functions-9040316d9d3e)] - A tutorial on how to deploy BigQuery Remote Functions on GCP using goblet. 45 | 46 | **014 - Private Service Access using Goblet** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/014-goblet-private-services)] [[ Written ](https://engineering.premise.com/tutorial-connecting-cloudrun-and-cloudfunctions-to-redis-and-other-private-services-using-goblet-5782f80da6a0)] - A tutorial on deploying and connecting to private GCP services using Goblet. 47 | 48 | **015 - CloudTasks with Goblet** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/015-goblet-cloudtask)] [[ Written ](https://engineering.premise.com/deploy-and-handle-gcp-cloudtasks-with-goblet-in-minutes-ee138e9dd2c5)] - A tutorial on how to deploy CloudTaskQueues, enqueue CloudTasks and handle CloudTasks all with Goblet. 49 | 50 | **016 - GCP Cloud Run Low Usage Slack Alerts** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/016-gcp-metrics-slack-alerts)] [[ Written ](https://engineering.premise.com/tutorial-low-usage-alerting-on-slack-for-google-cloud-platform-gcp-cc68ac8ca4d)] - A tutorial on how to deploy a Scheduled Job with Goblet that monitors low usage and sends recommendations for downscaling through Slack. 51 | 52 | **017 - Goblet IAM** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/017-goblet-iam)] [[ Written ](https://engineering.premise.com/easily-manage-iam-policies-for-serverless-rest-applications-in-gcp-with-goblet-f1580a97b74)] - A tutorial on how Goblet simplifies GCP IAM by auto-generating required deployment permissions and creating a deployment service account.
53 | 54 | **018 - Dataform and BigQuery Remote Functions** [[ Source ](https://github.com/premisedata/gcp-tutorials/tree/main/018-dataform-bq-remote-functions)] [[ Written ](https://engineering.premise.com/serverless-data-pipelines-in-gcp-using-dataform-and-bigquery-remote-functions-9ee235d0cb18)] - A tutorial on how to deploy BigQuery Remote Functions for a Dataform pipeline. --------------------------------------------------------------------------------