├── Cloud Source Repositories: Qwik Start ├── README.md ├── Google Cloud Essential Skills: Challenge Lab ├── App Engine: Qwik Start - PHP ├── Getting Started with Cloud Shell and gcloud.md ├── Analyze BigQuery data in Connected Sheets: Challenge Lab.md ├── Dataprep: Qwik Start.md ├── Creating a Looker Modeled Query and Working with Quick Start ├── Generative AI with Vertex AI: Getting Started.md ├── Generative AI with Vertex AI: Prompt Design.md ├── App Engine: Qwik Start - Python.md ├── Use Functions, Formulas, and Charts in Google Sheets: Challenge Lab ├── APIs Explorer: Qwik Start.md ├── Cloud IAM: Qwik Start ├── Google Fly Cup Challenge: Champion ├── Google Cloud Pub\Sub: Qwik Start - Python.md ├── Scale Out and Update a Containerized Application on a Kubernetes Cluster ├── Dataplex: Qwik Start - Console ├── BigQuery: Qwik Start - Console ├── App Engine: Qwik Start - Go.md ├── App Dev: Storing Application Data in Cloud Datastore - Python.md ├── Cloud Storage: Qwik Start - CLI ├── Kubernetes Engine: Qwik Start.md ├── Cloud Functions: Qwik Start - Command Line.md ├── APIs Explorer: Cloud Storage.md ├── Cloud Natural Language API: Qwik Start.md ├── Cloud Natural Language API: Qwik Start ├── Dataproc: Qwik Start - Console.md ├── Tagging Dataplex Assets ├── Build and Deploy a Docker Image to a Kubernetes Cluster ├── Creating a Virtual Machine.md ├── Google Cloud Speech API: Qwik Start.md ├── App Engine: 3 Ways: Challenge Lab.md ├── Exploring the Public Cryptocurrency Datasets Available in BigQuery LAB ├── Google Cloud - BigQuery - Remote Functions ├── Video Intelligence: Qwik Start.md ├── Detect Labels, Faces, and Landmarks in Images with the Cloud Vision API.md ├── Explore Machine Learning Models with Explainable AI: Challenge Lab.md ├── Get Started with Looker: Challenge Lab ├── Deploying a Python Flask Web Application to App Engine Flexible ├── Entity and Sentiment Analysis with the Natural Language API.md ├── Migrate a MySQL Database to Google Cloud SQL ├── Stream Processing with Cloud Pub-Sub and Dataflow: Qwik Start ├── BigLake: Qwik Start.md ├── Dataplex: Qwik Start - Command Line ├── Use Go Code to Work with Google Cloud Data Sources ├── Cloud Composer: Copying BigQuery Tables Across Different Locations ├── Create a Streaming Data Lake on Cloud Storage: Challenge Lab ├── VPC Networking Fundamentals ├── Deploy and Manage Cloud Environments with Google Cloud: Challenge Lab ├── VPC Networks - Controlling Access.md ├── Serverless Firebase Development: Challenge Lab ├── Engineer Data in Google Cloud: Challenge Lab ├── Setting up a Private Kubernetes Cluster.md ├── Analyze Images with the Cloud Vision API: Challenge Lab.md ├── Perform Foundational Data, ML, and AI Tasks in Google Cloud: Challenge Lab ├── MongoDB Atlas with Natural Language API and Cloud Run ├── Deploy to Kubernetes in Google Cloud: Challenge Lab ├── Serverless Cloud Run Development: Challenge Lab ├── Getting Started with BigQuery Machine Learning ├── Create and Manage Cloud Resources: Challenge Lab ├── Check Point: Next-Gen Data Center Security CloudGuard for Google Cloud ├── Set Up and Configure a Cloud Environment in Google Cloud: Challenge Lab ├── Extract, Analyze, and Translate Text from Images with the Cloud ML APIs.md ├── Google Kubernetes Engine Pipeline using Cloud Build ├── Ensure Access & Identity in Google Cloud: Challenge Lab ├── Implement DevOps in Google Cloud: Challenge Lab ├── Build a Website on Google Cloud: Challenge Lab ├── Create ML Models with BigQuery ML: Challenge Lab ├── Networking Fundamentals on 
Google Cloud: Challenge Lab ├── Importing Data to a Firestore Database ├── Create an Internal Load Balancer.md ├── Predict Taxi Fare with a BigQuery ML Forecasting Model ├── SAP Landing Zone: Plan and Deploy the SAP Network ├── Predict Visitor Purchases with a Classification Model in BigQuery ML ├── Vertex AI PaLM API: Qwik Start.md └── HTTP Load Balancer with Cloud Armor.md /Cloud Source Repositories: Qwik Start: -------------------------------------------------------------------------------- 1 | // Run this in cloudshell // 2 | 3 | gcloud source repos create REPO_DEMO 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ##### My Google Cloud Profile -> https://www.cloudskillsboost.google/public_profiles/4fe0ea1c-e4c0-4309-9c81-6c2bdc9b95da 2 | -------------------------------------------------------------------------------- /Google Cloud Essential Skills: Challenge Lab: -------------------------------------------------------------------------------- 1 | // 2nd TASK // 2 | 3 | sudo su - 4 | 5 | apt-get update 6 | 7 | apt-get install apache2 -y 8 | 9 | service --status-all 10 | -------------------------------------------------------------------------------- /App Engine: Qwik Start - PHP: -------------------------------------------------------------------------------- 1 | gcloud services enable appengine.googleapis.com 2 | 3 | git clone https://github.com/GoogleCloudPlatform/php-docs-samples.git 4 | 5 | cd php-docs-samples/appengine/standard/helloworld 6 | 7 | gcloud app deploy 8 | -------------------------------------------------------------------------------- /Getting Started with Cloud Shell and gcloud.md: -------------------------------------------------------------------------------- 1 | # Getting Started with Cloud Shell and gcloud 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | export ZONE= 6 | ``` 7 | 8 | ```cmd 9 | gcloud compute instances create gcelab2 --machine-type e2-medium --zone $ZONE 10 | ``` 11 | -------------------------------------------------------------------------------- /Analyze BigQuery data in Connected Sheets: Challenge Lab.md: -------------------------------------------------------------------------------- 1 | # Analyze BigQuery data in Connected Sheets: Challenge Lab 2 | 3 | ## Task 2 4 | ```cmd 5 | =COUNTIF(tlc_yellow_trips_2022!airport_fee,">0") 6 | ``` 7 | 8 | ## Task 5 9 | ```cmd 10 | =IF(fare_amount>0, tolls_amount/fare_amount*100, 0) 11 | ``` 12 | -------------------------------------------------------------------------------- /Dataprep: Qwik Start.md: -------------------------------------------------------------------------------- 1 | # Dataprep: Qwik Start 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | gsutil mb gs://$DEVSHELL_PROJECT_ID/ 6 | ``` 7 | ### Go to `Navigation menu` > `Dataprep` 8 | ### Click Accept > Agree and Continue > Allow 9 | ### Select your lab username 10 | ### Click Allow > Accept > Continue 11 | -------------------------------------------------------------------------------- /Creating a Looker Modeled Query and Working with Quick Start: -------------------------------------------------------------------------------- 1 | // Paste this at line number 42 // 2 | 3 | 4 | explore: +order_items { 5 | 6 | query: start_from_here { 7 | dimensions: [products.department, users.state] 8 | measures: [order_count, users.count] 9 | filters: [users.country: "USA"] 10 | } 11 | 12 | 13 | 14 | } 15 | 
-------------------------------------------------------------------------------- /Generative AI with Vertex AI: Getting Started.md: -------------------------------------------------------------------------------- 1 | # Generative AI with Vertex AI: Getting Started 2 | 3 | ### Search `vertex ai` > User-Managed Notebooks > Open JupyterLab 4 | 5 | ### Generative-ai > Language > prompts > intro_prompt_design.ipynb (Run each cell one at a time) 6 | 7 | ### Go back to language 8 | 9 | ### getting-started > intro_palm_api.ipynb (Run each cell one at a time) 10 | -------------------------------------------------------------------------------- /Generative AI with Vertex AI: Prompt Design.md: -------------------------------------------------------------------------------- 1 | # Generative AI with Vertex AI: Prompt Design 2 | 3 | ## Open this link 4 | ```cmd 5 | https://console.cloud.google.com/vertex-ai/workbench/user-managed? 6 | ``` 7 | ### Open JupyterLab 8 | 9 | ### Generative-ai > Language > prompts > intro_prompt_design.ipynb (Run each cell one at a time) 10 | 11 | ### examples > ideation.ipynb (Run each cell one at a time) 12 | 13 | -------------------------------------------------------------------------------- /App Engine: Qwik Start - Python.md: -------------------------------------------------------------------------------- 1 | # App Engine: Qwik Start - Python 2 | 3 | ## Run in Cloudshell 4 | ```cmd 5 | gcloud services enable appengine.googleapis.com 6 | git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git 7 | cd python-docs-samples/appengine/standard_python3/hello_world 8 | sed -i 's/Hello World!/Hello, Cruel World!/g' main.py 9 | gcloud app deploy 10 | ``` 11 | 12 | ### Select the Region from task 5 and press y when asked 13 | -------------------------------------------------------------------------------- /Use Functions, Formulas, and Charts in Google Sheets: Challenge Lab: -------------------------------------------------------------------------------- 1 | \\ Task 1 \\ 2 | 3 | =PROPER(A2) 4 | 5 | =ISEMAIL(D2) 6 | 7 | Sort sheet A-Z 8 | 9 | 10 | \\ Task 2 \\ 11 | 12 | =SUBSTITUTE(I2,"Oct-5","Oct-10") 13 | 14 | =VLOOKUP(A2, A2:D16, 4, False) 15 | 16 | 17 | \\ Task 4 \\ 18 | 19 | =AVERAGE(C2:C101) 20 | 21 | =MEDIAN(C2:C101) 22 | 23 | =MAX(C2:C101)-MIN(C2:C101) 24 | 25 | =STDEV(C2:C101) 26 | 27 | 28 | \\ Task 5 \\ 29 | 30 | 'Customer Ratings'!A1:C101 31 | -------------------------------------------------------------------------------- /APIs Explorer: Qwik Start.md: -------------------------------------------------------------------------------- 1 | # APIs Explorer: Qwik Start 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | export PROJECT_ID=$(gcloud config get-value project) 6 | gsutil mb -p $PROJECT_ID -c regional -l us-central1 gs://$PROJECT_ID-bucket 7 | curl -O https://raw.githubusercontent.com/siddharth7000/practice/main/sign.jpg 8 | mv sign.jpg demo-image.jpg 9 | gsutil cp demo-image.jpg gs://$PROJECT_ID-bucket/demo-image.jpg 10 | gsutil acl ch -u allUsers:R gs://$PROJECT_ID-bucket/demo-image.jpg 11 | ``` 12 | -------------------------------------------------------------------------------- /Cloud IAM: Qwik Start: -------------------------------------------------------------------------------- 1 | // Login with username 1 // 2 | 3 | // In cloudshell // 4 | 5 | export USERNAME2=[paste your USERNAME2 from lab] 6 | 7 | 8 | touch sample.txt 9 | gsutil mb gs://$DEVSHELL_PROJECT_ID 10 | gsutil cp sample.txt gs://$DEVSHELL_PROJECT_ID 11 | gcloud projects remove-iam-policy-binding $DEVSHELL_PROJECT_ID --member="user:$USERNAME2" --role="roles/viewer" 12 | gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member="user:$USERNAME2" --role="roles/storage.objectViewer" 13 | 
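// Optional check (not a lab step, a sketch assuming the bucket and sample.txt above): run as USERNAME2 to confirm the new bindings //

gsutil cat gs://$DEVSHELL_PROJECT_ID/sample.txt  # should still succeed through roles/storage.objectViewer
gcloud compute instances list  # should now be denied, since roles/viewer was removed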
-------------------------------------------------------------------------------- /Google Fly Cup Challenge: Champion: -------------------------------------------------------------------------------- 1 | // Task 1 // 2 | 3 | gsutil cp gs://spls/solutions/DRL3.ipynb . 4 | 5 | 6 | // Task 8 // 7 | 8 | In cloud shell: 9 | 10 | gcloud auth configure-docker us-docker.pkg.dev 11 | docker build . -t us-docker.pkg.dev/$DEVSHELL_PROJECT_ID/drl-lap-time-predictor/webapp:0.2 12 | gcloud artifacts repositories create drl-lap-time-predictor --repository-format=docker --location=us 13 | docker push us-docker.pkg.dev/$DEVSHELL_PROJECT_ID/drl-lap-time-predictor/webapp:0.2 14 | -------------------------------------------------------------------------------- /Google Cloud Pub\Sub: Qwik Start - Python.md: -------------------------------------------------------------------------------- 1 | # Google Cloud Pub\Sub: Qwik Start - Python 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | sudo apt-get install -y virtualenv 6 | python3 -m venv venv 7 | source venv/bin/activate 8 | pip install --upgrade google-cloud-pubsub 9 | git clone https://github.com/googleapis/python-pubsub.git 10 | cd python-pubsub/samples/snippets 11 | python publisher.py $GOOGLE_CLOUD_PROJECT create MyTopic 12 | python subscriber.py $GOOGLE_CLOUD_PROJECT create MyTopic MySub 13 | ``` 14 | -------------------------------------------------------------------------------- /Scale Out and Update a Containerized Application on a Kubernetes Cluster: -------------------------------------------------------------------------------- 1 | gsutil cp gs://sureskills-ql/challenge-labs/ch05-k8s-scale-and-update/echo-web-v2.tar.gz . 2 | 3 | tar xvzf echo-web-v2.tar.gz 4 | 5 | 6 | // 1st TASK // 7 | 8 | gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v2 . 9 | 10 | 11 | // 4TH TASK // 12 | 13 | kubectl edit deploy echo-web 14 | 15 | // Change image version 'v1' to 'v2' 16 | 17 | 18 | // 3RD TASK // 19 | 20 | kubectl scale deploy echo-web --replicas=2 21 | -------------------------------------------------------------------------------- /Dataplex: Qwik Start - Console: -------------------------------------------------------------------------------- 1 | export LOCATION= 2 | export ID= 3 | 4 | gcloud dataplex lakes create sensors \ 5 | --location=$LOCATION \ 6 | --display-name="sensors" \ 7 | --description="Ecommerce Domain" 8 | 9 | 10 | gcloud dataplex zones create temperature-raw-data \ 11 | --location=$LOCATION \ 12 | --lake=sensors \ 13 | --display-name="temperature raw data" \ 14 | --type=RAW \ 15 | --resource-location-type=SINGLE_REGION \ 16 | --discovery-enabled 17 | 18 | gsutil mb -p $ID -c REGIONAL -l $LOCATION gs://$ID 19 | -------------------------------------------------------------------------------- /BigQuery: Qwik Start - Console: -------------------------------------------------------------------------------- 1 | bq query --use_legacy_sql=false \ 2 | ' 3 | SELECT 4 | weight_pounds, state, year, gestation_weeks 5 | FROM 6 | `bigquery-public-data.samples.natality` 7 | ORDER BY weight_pounds DESC LIMIT 10; 8 | ' 9 | 10 | bq mk babynames 11 | 12 | 13 | 14 | gsutil cp gs://spls/gsp072/baby-names.zip .
15 | 16 | unzip baby-names.zip 17 | 18 | bq load --autodetect --source_format=CSV babynames.names_2014 gs://spls/gsp072/baby-names/yob2014.txt name:string,gender:string,count:integer 19 | -------------------------------------------------------------------------------- /App Engine: Qwik Start - Go.md: -------------------------------------------------------------------------------- 1 | # App Engine: Qwik Start - Go 2 | 3 | ### Select your Region from task 3 4 | ```cmd 5 | export REGION= 6 | ``` 7 | 8 | ## Run in cloudshell 9 | ```cmd 10 | gcloud config set compute/region $REGION 11 | gcloud services enable appengine.googleapis.com 12 | git clone https://github.com/GoogleCloudPlatform/golang-samples.git 13 | cd golang-samples/appengine/go11x/helloworld 14 | sudo apt-get install google-cloud-sdk-app-engine-go 15 | gcloud app deploy 16 | gcloud app browse 17 | ``` 18 | 19 | ### Select the Region from task 3 and press y when asked 20 | -------------------------------------------------------------------------------- /App Dev: Storing Application Data in Cloud Datastore - Python.md: -------------------------------------------------------------------------------- 1 | # App Dev: Storing Application Data in Cloud Datastore - Python 2 | 3 | ## Run in Cloudshell 4 | ```cmd 5 | gcloud auth list 6 | git clone https://github.com/GoogleCloudPlatform/training-data-analyst 7 | cd ~/training-data-analyst/courses/developingapps/python/cloudstorage/start 8 | . prepare_environment.sh 9 | gsutil mb gs://$DEVSHELL_PROJECT_ID-media 10 | wget https://storage.googleapis.com/cloud-training/quests/Google_Cloud_Storage_logo.png 11 | gsutil cp Google_Cloud_Storage_logo.png gs://$DEVSHELL_PROJECT_ID-media 12 | ``` 13 | -------------------------------------------------------------------------------- /Cloud Storage: Qwik Start - CLI: -------------------------------------------------------------------------------- 1 | gcloud storage buckets create gs://$tanaydas 2 | 3 | curl https://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Ada_Lovelace_portrait.jpg/800px-Ada_Lovelace_portrait.jpg --output ada.jpg 4 | 5 | gsutil cp ada.jpg gs://$tanaydas 6 | 7 | rm ada.jpg 8 | 9 | gsutil cp -r gs://$tanaydas/ada.jpg . 
10 | 11 | gsutil cp gs://$tanaydas/ada.jpg gs://$tanaydas/image-folder/ 12 | 13 | gsutil ls gs://$tanaydas 14 | 15 | gsutil ls -l gs://$tanaydas/ada.jpg 16 | 17 | gsutil acl ch -u AllUsers:R gs://$tanaydas/ada.jpg 18 | 19 | gsutil acl ch -d AllUsers gs://$tanaydas/ada.jpg 20 | 21 | gsutil rm gs://$tanaydas/ada.jpg 22 | -------------------------------------------------------------------------------- /Kubernetes Engine: Qwik Start.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Engine: Qwik Start 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | export ZONE= 6 | ``` 7 | 8 | ```cmd 9 | export REGION=${ZONE::-2} 10 | gcloud config set compute/region $REGION 11 | gcloud config set compute/zone $ZONE 12 | gcloud container clusters create lab-cluster --machine-type=e2-medium --zone=$ZONE 13 | gcloud container clusters get-credentials lab-cluster 14 | kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0 15 | kubectl expose deployment hello-server --type=LoadBalancer --port 8080 16 | ``` 17 | 18 | ```cmd 19 | gcloud container clusters delete lab-cluster 20 | ``` 21 | -------------------------------------------------------------------------------- /Cloud Functions: Qwik Start - Command Line.md: -------------------------------------------------------------------------------- 1 | # GSP080 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | export REGION= 6 | ``` 7 | 8 | ```cmd 9 | gcloud config set compute/region $REGION 10 | mkdir gcf_hello_world 11 | cd gcf_hello_world 12 | cat > index.js << EOF 13 | exports.helloWorld = (data, context) => { 14 | const pubSubMessage = data; 15 | const name = pubSubMessage.data 16 | ? Buffer.from(pubSubMessage.data, 'base64').toString() : "Hello World"; 17 | console.log("My Cloud Function: "+name); 18 | }; 19 | EOF 20 | gsutil mb -p $DEVSHELL_PROJECT_ID gs://$DEVSHELL_PROJECT_ID 21 | gcloud functions deploy helloWorld \ 22 | --stage-bucket $DEVSHELL_PROJECT_ID \ 23 | --trigger-topic hello_world \ 24 | --runtime nodejs20 25 | ``` 26 | 
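### Optional: trigger the function and read its logs (a sketch, not a graded step; uses the topic and function names from above)
```cmd
gcloud pubsub topics publish hello_world --message="Hello"
gcloud functions logs read helloWorld --limit 10
```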
-------------------------------------------------------------------------------- /APIs Explorer: Cloud Storage.md: -------------------------------------------------------------------------------- 1 | # APIs Explorer: Cloud Storage 2 | 3 | ## Run this in cloudshell 4 | ```cmd 5 | gsutil mb gs://$DEVSHELL_PROJECT_ID 6 | gsutil mb gs://$DEVSHELL_PROJECT_ID-2 7 | curl -o demo-image1.png https://cdn.qwiklabs.com/E4%2BSx10I0HBeOFPB15BFPzf9%2F%2FOK%2Btf7S0Mbn6aQ8fw%3D 8 | curl -o demo-image2.png https://cdn.qwiklabs.com/Hr8ohUSBSeAiMUJe1J998ydGcTu%2FrF4BUjZ2J%2BbiKps%3D 9 | curl -o demo-image1-copy.png https://cdn.qwiklabs.com/E4%2BSx10I0HBeOFPB15BFPzf9%2F%2FOK%2Btf7S0Mbn6aQ8fw%3D 10 | gsutil cp demo-image1.png gs://$DEVSHELL_PROJECT_ID/demo-image1.png 11 | gsutil cp demo-image2.png gs://$DEVSHELL_PROJECT_ID/demo-image2.png 12 | gsutil cp demo-image1-copy.png gs://$DEVSHELL_PROJECT_ID-2/demo-image1-copy.png 13 | ``` 14 | -------------------------------------------------------------------------------- /Cloud Natural Language API: Qwik Start.md: -------------------------------------------------------------------------------- 1 | # Cloud Natural Language API: Qwik Start 2 | 3 | ## Run this in cloudshell 4 | ```cmd 5 | export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value core/project) 6 | gcloud iam service-accounts create my-natlang-sa \ 7 | --display-name "my natural language service account" 8 | gcloud iam service-accounts keys create ~/key.json \ 9 | --iam-account my-natlang-sa@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com 10 | gcloud compute ssh linux-instance 11 | ``` 12 | 13 | ### Press `y` , `ENTER` > `ENTER` > `ENTER` > `n`, `ENTER` 14 | 15 | ```cmd 16 | gcloud ml language analyze-entities \ 17 | --content="Michelangelo Caravaggio, Italian painter, is known for 'The Calling of Saint Matthew'." > result.json 18 | ``` 19 | -------------------------------------------------------------------------------- /Cloud Natural Language API: Qwik Start: -------------------------------------------------------------------------------- 1 | // 1st Task // 2 | 3 | gcloud auth list 4 | gcloud config list project 5 | export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value core/project) 6 | gcloud iam service-accounts create my-natlang-sa \ 7 | --display-name "my natural language service account" 8 | gcloud iam service-accounts keys create ~/key.json \ 9 | --iam-account my-natlang-sa@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com 10 | export GOOGLE_APPLICATION_CREDENTIALS="/home/${USER}/key.json" 11 | gcloud compute ssh --zone "us-central1-a" "linux-instance" --project "$GOOGLE_CLOUD_PROJECT" 12 | 13 | 14 | // 2nd Task // 15 | 16 | gcloud ml language analyze-entities --content="Michelangelo Caravaggio, Italian painter, is known for 'The Calling of Saint Matthew'." > result.json 17 | 
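// Optional: the same gcloud helper can also score sentiment, a sketch, not required by the lab //

gcloud ml language analyze-sentiment --content="Michelangelo Caravaggio, Italian painter, is known for 'The Calling of Saint Matthew'." > sentiment.json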
-------------------------------------------------------------------------------- /Dataproc: Qwik Start - Console.md: -------------------------------------------------------------------------------- 1 | # Dataproc: Qwik Start - Console 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | export ZONE= 6 | ``` 7 | 8 | ```cmd 9 | export REGION=${ZONE::-2} 10 | gcloud dataproc clusters create example-cluster \ 11 | --region=$REGION \ 12 | --zone=$ZONE \ 13 | --master-machine-type=e2-standard-2 \ 14 | --master-boot-disk-size=50GB \ 15 | --num-workers=2 \ 16 | --worker-machine-type=e2-standard-2 \ 17 | --worker-boot-disk-size=50GB 18 | gcloud dataproc jobs submit spark \ 19 | --region=$REGION \ 20 | --cluster=example-cluster \ 21 | --class=org.apache.spark.examples.SparkPi \ 22 | --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \ 23 | -- 1000 24 | gcloud dataproc clusters update example-cluster \ 25 | --num-workers=4 \ 26 | --region=$REGION 27 | ``` 28 | -------------------------------------------------------------------------------- /Tagging Dataplex Assets: -------------------------------------------------------------------------------- 1 | export LOCATION= 2 | export ID= 3 | 4 | gcloud dataplex lakes create orders-lake \ 5 | --location=$LOCATION \ 6 | --display-name="Orders Lake" 7 | 8 | 9 | gcloud dataplex zones create customer-curated-zone \ 10 | --location=$LOCATION \ 11 | --lake=orders-lake \ 12 | --display-name="Customer Curated Zone" \ 13 | --type=CURATED \ 14 | --resource-location-type=SINGLE_REGION 15 | 16 | 17 | gcloud dataplex assets create customer-details-dataset \ 18 | --location=$LOCATION \ 19 | --lake=orders-lake \ 20 | --zone=customer-curated-zone \ 21 | --display-name="Customer Details Dataset" \ 22 | --resource-type=BIGQUERY_DATASET \ 23 | --resource-name=projects/$ID/datasets/customers 24 | 25 | 26 | gcloud services enable datacatalog.googleapis.com 27 | -------------------------------------------------------------------------------- /Build and Deploy a Docker Image to a Kubernetes Cluster: -------------------------------------------------------------------------------- 1 | // First, download the application archive // 2 | 3 | gsutil cp gs://sureskills-ql/challenge-labs/ch04-kubernetes-app-deployment/echo-web.tar.gz . 4 | 5 | // Then unzip it // 6 | 7 | tar -xvf echo-web.tar.gz 8 | 9 | 10 | 11 | // 2ND TASK // 12 | 13 | gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v1 . 14 | 15 | 16 | // 1ST TASK // 17 | 18 | gcloud container clusters create echo-cluster --num-nodes 2 --zone us-central1-a --machine-type n1-standard-2 19 | 20 | 21 | // 3RD TASK // 22 | 23 | kubectl create deployment echo-web --image=gcr.io/qwiklabs-resources/echo-app:v1 24 | 25 | 26 | // 4TH TASK // 27 | 28 | kubectl expose deployment echo-web --type=LoadBalancer --port=80 --target-port=8000 29 | 30 | kubectl get svc 31 | -------------------------------------------------------------------------------- /Creating a Virtual Machine.md: -------------------------------------------------------------------------------- 1 | # Creating a Virtual Machine 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | export ZONE= 6 | ``` 7 | 8 | ```cmd 9 | gcloud compute instances create gcelab \ 10 | --zone=$ZONE \ 11 | --machine-type=e2-medium \ 12 | --image-project=debian-cloud \ 13 | --image-family=debian-11 \ 14 | --tags=http-server 15 | gcloud compute firewall-rules create allow-http \ 16 | --action=ALLOW \ 17 | --direction=INGRESS \ 18 | --rules=tcp:80 \ 19 | --source-ranges=0.0.0.0/0 \ 20 | --target-tags=http-server 21 | gcloud compute instances create gcelab2 --machine-type e2-medium --zone=$ZONE 22 | gcloud compute ssh gcelab --zone=$ZONE 23 | ``` 24 | 25 | ## When asked press y , ENTER > ENTER > ENTER 26 | 27 | ```cmd 28 | sudo apt-get update 29 | sudo apt-get install -y nginx 30 | ps auwx | grep nginx 31 | exit 32 | ``` 33 | 
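### Optional check (a sketch, assuming the firewall rule above): fetch gcelab's external IP and confirm nginx answers on port 80
```cmd
EXTERNAL_IP=$(gcloud compute instances describe gcelab --zone=$ZONE --format='value(networkInterfaces[0].accessConfigs[0].natIP)')
curl http://$EXTERNAL_IP
```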
-------------------------------------------------------------------------------- /Google Cloud Speech API: Qwik Start.md: -------------------------------------------------------------------------------- 1 | # Google Cloud Speech API: Qwik Start 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | gcloud compute ssh linux-instance 6 | ``` 7 | 8 | ### Press `y` , `ENTER` > `ENTER` > `ENTER` > `n`, `ENTER` 9 | 10 | ### Get API KEY from `Navigation menu` > `APIs & services` > `Credentials` > `Create credentials` > `API key` 11 | 12 | ```cmd 13 | export API_KEY= 14 | ``` 15 | 16 | ```cmd 17 | cat > request.json <<EOF 18 | { 19 | "config": { 20 | "encoding":"FLAC", 21 | "languageCode": "en-US" 22 | }, 23 | "audio": { 24 | "uri":"gs://cloud-samples-data/speech/brooklyn_bridge.flac" 25 | } 26 | } 27 | EOF 28 | 29 | curl -s -X POST -H "Content-Type: application/json" --data-binary @request.json "https://speech.googleapis.com/v1/speech:recognize?key=${API_KEY}" > result.json 30 | ``` 31 | -------------------------------------------------------------------------------- /App Engine: 3 Ways: Challenge Lab.md: -------------------------------------------------------------------------------- 1 | # App Engine: 3 Ways: Challenge Lab 2 | 3 | ### Select your message from Task 4 4 | ```cmd 5 | export MESSAGE="" 6 | ``` 7 | 8 | ### Navigation Menu > Compute Engine > VM instance 9 | 10 | ### Click the SSH dropdown arrow > View gcloud command > Run in Cloud Shell > append ` --quiet` to the same command 11 | ```cmd 12 | gcloud services enable appengine.googleapis.com 13 | git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git 14 | cd python-docs-samples/appengine/standard_python3/hello_world 15 | exit 16 | ``` 17 | 18 | ```cmd 19 | git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git 20 | cd python-docs-samples/appengine/standard_python3/hello_world 21 | gcloud app deploy 22 | sed -i 's/Hello World!/'"$MESSAGE"'/g' main.py 23 | gcloud app deploy 24 | ``` 25 | 26 | ### Select the Region from task 3 and press y when asked 27 | -------------------------------------------------------------------------------- /Exploring the Public Cryptocurrency Datasets Available in BigQuery LAB: -------------------------------------------------------------------------------- 1 | // 1ST QUERY // 2 | 3 | CREATE OR REPLACE TABLE lab.51 (transaction_hash STRING) as 4 | SELECT transaction_id FROM `bigquery-public-data.bitcoin_blockchain.transactions` , UNNEST( outputs ) as outputs 5 | where outputs.output_satoshis = 19499300000000 6 | 7 | 8 | // 2ND QUERY // 9 | 10 | CREATE OR REPLACE TABLE lab.52 (balance NUMERIC) as 11 | WITH double_entry_book AS ( 12 | -- debits 13 | SELECT 14 | array_to_string(inputs.addresses, ",") as address 15 | , -inputs.value as value 16 | FROM `bigquery-public-data.crypto_bitcoin.inputs` as inputs 17 | UNION ALL 18 | -- credits 19 | SELECT 20 | array_to_string(outputs.addresses, ",") as address 21 | , outputs.value as value 22 | FROM `bigquery-public-data.crypto_bitcoin.outputs` as outputs 23 | 24 | ) 25 | SELECT 26 | sum(value) as balance 27 | FROM double_entry_book 28 | where address = "1XPTgDRhN8RFnzniWCddobD9iKZatrvH4" 29 | 
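// The queries above run in the BigQuery console; they can also be run from Cloud Shell, a sketch assuming the `lab` dataset exists //

bq query --use_legacy_sql=false 'SELECT COUNT(*) AS row_count FROM `lab.51`'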
-------------------------------------------------------------------------------- /Google Cloud - BigQuery - Remote Functions: -------------------------------------------------------------------------------- 1 | // TASK 3 // 2 | 3 | connection name: myfunction 4 | 5 | CREATE FUNCTION my_dataset.remote_function(x FLOAT64, y FLOAT64) 6 | RETURNS FLOAT64 7 | REMOTE WITH CONNECTION us.myfunction 8 | OPTIONS(endpoint="https://us-west1-qwiklabs-gcp-00-460641210213.cloudfunctions.net/remote_add"); 9 | 10 | 11 | // TASK 4 // 12 | 13 | import json 14 | 15 | _MAX_LOSSLESS=9007199254740992 16 | 17 | def batch_add(request): 18 | try: 19 | return_value = [] 20 | request_json = request.get_json() 21 | calls = request_json['calls'] 22 | for call in calls: 23 | return_value.append(sum([int(x) if isinstance(x, str) else x for x in call if x is not None])) 24 | replies = [str(x) if x > _MAX_LOSSLESS or x < -_MAX_LOSSLESS else x for x in return_value] 25 | return_json = json.dumps( { "replies" : replies} ) 26 | return return_json 27 | except Exception as inst: 28 | return json.dumps( { "errorMessage": 'something unexpected in input' } ), 400 29 | 
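// Once created, the remote function is callable like any SQL function, a sketch assuming the TASK 3 definition above //

bq query --use_legacy_sql=false 'SELECT my_dataset.remote_function(1.0, 2.0)'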
-------------------------------------------------------------------------------- /Video Intelligence: Qwik Start.md: -------------------------------------------------------------------------------- 1 | # Video Intelligence: Qwik Start 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | gcloud services enable videointelligence.googleapis.com 6 | gcloud iam service-accounts create quickstart 7 | gcloud iam service-accounts keys create key.json --iam-account quickstart@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com 8 | gcloud auth activate-service-account --key-file key.json 9 | gcloud auth print-access-token 10 | cat > request.json <<EOF 11 | { 12 | "inputUri":"gs://spls/gsp154/video/train.mp4", 13 | "features": ["LABEL_DETECTION"] 14 | } 15 | EOF 16 | curl -s -H 'Content-Type: application/json' -H 'Authorization: Bearer '$(gcloud auth print-access-token)'' 'https://videointelligence.googleapis.com/v1/videos:annotate' -d @request.json 17 | ``` 18 | -------------------------------------------------------------------------------- /Detect Labels, Faces, and Landmarks in Images with the Cloud Vision API.md: -------------------------------------------------------------------------------- 1 | # Detect Labels, Faces, and Landmarks in Images with the Cloud Vision API 2 | 3 | ### Navigation menu > APIs & Services > Credentials > Create Credentials > API key > Close 4 | 5 | ## Run in cloudshell 6 | ```cmd 7 | gcloud alpha services api-keys create --display-name="Tanay" 8 | KEY_NAME=$(gcloud alpha services api-keys list --format="value(name)" --filter "displayName=Tanay") 9 | export API_KEY=$(gcloud alpha services api-keys get-key-string $KEY_NAME --format="value(keyString)") 10 | export PROJECT_ID=$(gcloud config list --format 'value(core.project)') 11 | gsutil mb gs://$PROJECT_ID 12 | git clone https://github.com/siddharth7000/practice.git 13 | gsutil cp practice/donuts.png gs://$PROJECT_ID 14 | gsutil cp practice/selfie.png gs://$PROJECT_ID 15 | gsutil cp practice/city.png gs://$PROJECT_ID 16 | gsutil acl ch -u AllUsers:R gs://$PROJECT_ID/donuts.png 17 | gsutil acl ch -u AllUsers:R gs://$PROJECT_ID/selfie.png 18 | gsutil acl ch -u AllUsers:R gs://$PROJECT_ID/city.png 19 | ``` 20 | -------------------------------------------------------------------------------- /Explore Machine Learning Models with Explainable AI: Challenge Lab.md: -------------------------------------------------------------------------------- 1 | # Explore Machine Learning Models with Explainable AI: Challenge Lab 2 | 3 | ## Task 1 4 | ```cmd 5 | model = Sequential() 6 | model.add(layers.Dense(200, input_shape=(input_size,), activation='relu')) 7 | model.add(layers.Dense(50, activation='relu')) 8 | model.add(layers.Dense(20, activation='relu')) 9 | model.add(layers.Dense(1, activation='sigmoid')) 10 | model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy']) 11 | model.fit(train_data, train_labels, epochs=10, batch_size=2048, validation_split=0.1) 12 | ``` 13 | 14 | ## Task 2 15 | ```cmd 16 | limited_model = Sequential() 17 | limited_model.add(layers.Dense(200, input_shape=(input_size,), activation='relu')) 18 | limited_model.add(layers.Dense(50, activation='relu')) 19 | limited_model.add(layers.Dense(20, activation='relu')) 20 | limited_model.add(layers.Dense(1, activation='sigmoid')) 21 | limited_model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy']) 22 | limited_model.fit(limited_train_data, limited_train_labels, epochs=10, batch_size=2048, validation_split=0.1) 23 | ``` 24 | -------------------------------------------------------------------------------- /Get Started with Looker: Challenge Lab: -------------------------------------------------------------------------------- 1 | // TASK 2 // 2 | 3 | view: users_region { 4 | derived_table: { 5 | sql: SELECT 6 | id, 7 | state, 8 | country 9 | FROM 10 | `cloud-training-demos.looker_ecomm.users` ;; 11 | } 12 | 13 | dimension: id { 14 | type: number 15 | primary_key: yes 16 | sql: ${TABLE}.id ;; 17 | } 18 | 19 | dimension: state { 20 | type: string 21 | sql: ${TABLE}.state ;; 22 | } 23 | 24 | dimension: country { 25 | type: string 26 | sql: ${TABLE}.country ;; 27 | } 28 | 29 | measure: count { 30 | type: count 31 | drill_fields: [id, state, country] 32 | } 33 | } 34 | 35 | 36 | // MODELS >> IN "training ecommerce model" >> LINE 58 // 37 | 38 | join: users_region { 39 | type: left_outer 40 | sql_on: ${events.user_id} = ${users_region.id};; 41 | relationship: many_to_one 42 | } 43 | -------------------------------------------------------------------------------- /Deploying a Python Flask Web Application to App Engine Flexible: -------------------------------------------------------------------------------- 1 | gcloud auth list 2 | gcloud config list project 3 | gcloud storage cp -r gs://spls/gsp023/flex_and_vision/ . 4 | cd flex_and_vision 5 | export PROJECT_ID=$(gcloud config get-value project) 6 | gcloud iam service-accounts create qwiklab \ 7 | --display-name "My Qwiklab Service Account" 8 | gcloud projects add-iam-policy-binding ${PROJECT_ID} \ 9 | --member serviceAccount:qwiklab@${PROJECT_ID}.iam.gserviceaccount.com \ 10 | --role roles/owner 11 | gcloud iam service-accounts keys create ~/key.json \ 12 | --iam-account qwiklab@${PROJECT_ID}.iam.gserviceaccount.com 13 | export GOOGLE_APPLICATION_CREDENTIALS="/home/${USER}/key.json" 14 | virtualenv -p python3 env 15 | source env/bin/activate 16 | pip install -r requirements.txt 17 | gcloud app create 18 | export CLOUD_STORAGE_BUCKET=${PROJECT_ID} 19 | gsutil mb gs://${PROJECT_ID} 20 | python main.py 21 | 22 | 23 | runtime: python 24 | env: flex 25 | entrypoint: gunicorn -b :$PORT main:app 26 | runtime_config: 27 | python_version: 3.7 28 | env_variables: 29 | CLOUD_STORAGE_BUCKET: [YOUR_BUCKET_NAME] 30 | manual_scaling: 31 | instances: 1 32 | 33 | 34 | gcloud config set app/cloud_build_timeout 1000 35 | gcloud app deploy 36 | -------------------------------------------------------------------------------- /Entity and Sentiment Analysis with the Natural Language API.md: -------------------------------------------------------------------------------- 1 | # Entity and Sentiment Analysis with the Natural Language API 2 | 3 | ## Navigation Menu > APIs & Services > Credentials > Create Credentials > API key > Close 4 | 5 | ## Run in cloudshell 6 | ```cmd 7 | gcloud beta compute ssh linux-instance 8 | ``` 9 | 10 | ### Press y Enter > Enter > Enter > Press n Enter 11 | ```cmd 12 | gcloud services enable apikeys.googleapis.com 13 | gcloud alpha services api-keys create --display-name="Tanay" 14 | KEY_NAME=$(gcloud alpha services api-keys list --format="value(name)" --filter "displayName=Tanay") 15 | API_KEY=$(gcloud alpha services api-keys get-key-string $KEY_NAME --format="value(keyString)") 16 | echo $API_KEY 17 | touch request.json 18 | tee request.json <<EOF 19 | { 20 | "document":{ 21 | "type":"PLAIN_TEXT", 22 | "content":"Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Potter fantasy series." 23 | }, 24 | "encodingType":"UTF8" 25 | } 26 | EOF 27 | 28 | curl "https://language.googleapis.com/v1/documents:analyzeEntities?key=${API_KEY}" -s -X POST -H "Content-Type: application/json" --data-binary @request.json > result.json 29 | cat result.json 30 | ``` 31 | 
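### The lab's sentiment task reuses the same request body against a different endpoint, a sketch assuming request.json from above
```cmd
curl "https://language.googleapis.com/v1/documents:analyzeSentiment?key=${API_KEY}" \
  -s -X POST -H "Content-Type: application/json" --data-binary @request.json
```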
-------------------------------------------------------------------------------- /Migrate a MySQL Database to Google Cloud SQL: -------------------------------------------------------------------------------- 1 | // 1ST TASK // 2 | 3 | 4 | export ZONE=us-central1-a 5 | 6 | gcloud sql instances create wordpress --tier=db-n1-standard-1 --activation-policy=ALWAYS --gce-zone $ZONE 7 | 8 | // 2ND TASK // 9 | 10 | 11 | gcloud sql users set-password --host % root --instance wordpress --password Password1* 12 | 13 | export ADDRESS=[EXTERNAL_IP_OF_BLOG_VM]/32  # the blog VM's external IP, in CIDR form 14 | 15 | gcloud sql instances patch wordpress --authorized-networks $ADDRESS --quiet 16 | 17 | gcloud compute ssh blog --zone=us-central1-a 18 | 19 | MYSQLIP=$(gcloud sql instances describe wordpress --format="value(ipAddresses.ipAddress)") 20 | 21 | mysql --host=$MYSQLIP \ 22 | --user=root --password 23 | 24 | CREATE DATABASE wordpress; 25 | CREATE USER 'blogadmin'@'%' IDENTIFIED BY 'Password1*'; 26 | GRANT ALL PRIVILEGES ON wordpress.* TO 'blogadmin'@'%'; 27 | FLUSH PRIVILEGES; 28 | 29 | exit 30 | 31 | // 3RD TASK // 32 | 33 | 34 | sudo mysqldump -u root -pPassword1* wordpress > wordpress_backup.sql 35 | 36 | mysql --host=$MYSQLIP --user=root -pPassword1* --verbose wordpress < wordpress_backup.sql 37 | 38 | sudo service apache2 restart 39 | 40 | // 4TH TASK // 41 | 42 | 43 | cd /var/www/html/wordpress 44 | 45 | sudo nano wp-config.php 46 | -------------------------------------------------------------------------------- /Stream Processing with Cloud Pub-Sub and Dataflow: Qwik Start: -------------------------------------------------------------------------------- 1 | gcloud auth list 2 | 3 | gcloud config list project 4 | 5 | gcloud config set compute/region "REGION" 6 | 7 | gcloud services disable dataflow.googleapis.com 8 | 9 | gcloud services enable dataflow.googleapis.com 10 | 11 | PROJECT_ID=$(gcloud config get-value project) 12 | BUCKET_NAME="${PROJECT_ID}-bucket" 13 | TOPIC_ID=my-id 14 | REGION=us-central1 15 | AE_REGION=us-central 16 | 17 | gsutil mb gs://$BUCKET_NAME 18 | 19 | gcloud pubsub topics create $TOPIC_ID 20 | 21 | gcloud app create --region=$AE_REGION 22 | 23 | gcloud scheduler jobs create pubsub publisher-job --schedule="* * * * *" \ 24 | --topic=$TOPIC_ID --message-body="Hello!"
25 | 26 | gcloud scheduler jobs run publisher-job 27 | 28 | git clone https://github.com/GoogleCloudPlatform/java-docs-samples.git 29 | cd java-docs-samples/pubsub/streaming-analytics 30 | 31 | 32 | mvn compile exec:java \ 33 | -Dexec.mainClass=com.examples.pubsub.streaming.PubSubToGcs \ 34 | -Dexec.cleanupDaemonThreads=false \ 35 | -Dexec.args=" \ 36 | --project=$PROJECT_ID \ 37 | --region=$REGION \ 38 | --inputTopic=projects/$PROJECT_ID/topics/$TOPIC_ID \ 39 | --output=gs://$BUCKET_NAME/samples/output \ 40 | --runner=DataflowRunner \ 41 | --windowSize=2" 42 | -------------------------------------------------------------------------------- /BigLake: Qwik Start.md: -------------------------------------------------------------------------------- 1 | # BigLake: Qwik Start 2 | 3 | ## Run this in cloudshell 4 | ```cmd 5 | bq mk --connection --location=US --project_id=$DEVSHELL_PROJECT_ID \ 6 | --connection_type=CLOUD_RESOURCE my-connection 7 | SERVICE_ACCOUNT_ID=$(bq show --format=json --connection $DEVSHELL_PROJECT_ID.US.my-connection | jq -r '.cloudResource.serviceAccountId') 8 | gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_ID" --role="roles/storage.objectViewer" 9 | bq mk demo_dataset 10 | bq mkdef --source_format=CSV --autodetect=true \ 11 | gs://$DEVSHELL_PROJECT_ID/customer.csv > mytable_def 12 | bq mk --table --external_table_definition=mytable_def \ 13 | demo_dataset.biglake_table 14 | bq mkdef --source_format=CSV --autodetect=true \ 15 | gs://$DEVSHELL_PROJECT_ID/invoice.csv > mytable_deff 16 | bq mk --table --external_table_definition=mytable_deff \ 17 | demo_dataset.external_table 18 | bq mkdef \ 19 | --autodetect \ 20 | --connection_id=$DEVSHELL_PROJECT_ID.US.my-connection \ 21 | --source_format=CSV \ 22 | "gs://$DEVSHELL_PROJECT_ID/invoice.csv" > /tmp/tabledef.json 23 | bq show --schema --format=prettyjson demo_dataset.external_table > /tmp/schema 24 | bq update --external_table_definition=/tmp/tabledef.json --schema=/tmp/schema demo_dataset.external_table 25 | ``` 26 | -------------------------------------------------------------------------------- /Dataplex: Qwik Start - Command Line: -------------------------------------------------------------------------------- 1 | gcloud auth list 2 | 3 | gcloud config list project 4 | 5 | gcloud services enable \ 6 | dataplex.googleapis.com 7 | 8 | export PROJECT_ID=$(gcloud config get-value project) 9 | 10 | gcloud config set compute/region $REGION 11 | 12 | gcloud dataplex lakes create ecommerce \ 13 | --location=$REGION \ 14 | --display-name="Ecommerce" \ 15 | --description="Ecommerce Domain" 16 | 17 | gcloud dataplex zones create orders-curated-zone \ 18 | --location=$REGION \ 19 | --lake=ecommerce \ 20 | --display-name="Orders Curated Zone" \ 21 | --resource-location-type=SINGLE_REGION \ 22 | --type=CURATED \ 23 | --discovery-enabled \ 24 | --discovery-schedule="0 * * * *" 25 | 26 | bq mk --location=$REGION --dataset orders 27 | 28 | gcloud dataplex assets create orders-curated-dataset \ 29 | --location=$REGION \ 30 | --lake=ecommerce \ 31 | --zone=orders-curated-zone \ 32 | --display-name="Orders Curated Dataset" \ 33 | --resource-type=BIGQUERY_DATASET \ 34 | --resource-name=projects/$PROJECT_ID/datasets/orders \ 35 | --discovery-enabled 36 | 37 | gcloud dataplex assets delete orders-curated-dataset --location=$REGION --zone=orders-curated-zone --lake=ecommerce 38 | 39 | gcloud dataplex zones delete orders-curated-zone --location=$REGION --lake=ecommerce 40 | 41 | gcloud 
dataplex lakes delete ecommerce --location=$REGION 42 | -------------------------------------------------------------------------------- /Use Go Code to Work with Google Cloud Data Sources: -------------------------------------------------------------------------------- 1 | // TASK 3 // 2 | 3 | export PROJECT_ID=$(gcloud info --format="value(config.project)") 4 | 5 | git clone https://github.com/GoogleCloudPlatform/DIY-Tools.git 6 | 7 | gcloud firestore import gs://$PROJECT_ID-firestore/prd-back 8 | 9 | cd ~/DIY-Tools/gcp-data-drive/cmd/webserver 10 | 11 | go build -mod=readonly -v -o gcp-data-drive 12 | 13 | ./gcp-data-drive 14 | 15 | 16 | // TASK 5 // 17 | 18 | // In the Cloud Shell toolbar, click the + icon next to the first Cloud Shell tab to open a second Cloud Shell tab // 19 | 20 | export PROJECT_ID=$(gcloud info --format="value(config.project)") 21 | 22 | export PREVIEW_URL=[REPLACE_WITH_WEB_PREVIEW_URL] 23 | 24 | echo $PREVIEW_URL/fs/$PROJECT_ID/symbols/product/symbol 25 | 26 | 27 | // Task 6 // 28 | 29 | // In the Cloud Shell toolbar, click the + icon next to the first Cloud Shell tab to open a second Cloud Shell tab // 30 | 31 | export PROJECT_ID=$(gcloud info --format="value(config.project)") 32 | 33 | cd ~/DIY-Tools/gcp-data-drive/cmd/webserver 34 | 35 | gcloud app deploy app.yaml --project $PROJECT_ID -q 36 | 37 | export TARGET_URL=https://$(gcloud app describe --format="value(defaultHostname)") 38 | 39 | curl $TARGET_URL/fs/$PROJECT_ID/symbols/product/symbol 40 | 41 | curl $TARGET_URL/fs/$PROJECT_ID/symbols/product/symbol/008888166900 42 | 43 | curl $TARGET_URL/bq/$PROJECT_ID/publicviews/ca_zip_codes 44 | -------------------------------------------------------------------------------- /Cloud Composer: Copying BigQuery Tables Across Different Locations: -------------------------------------------------------------------------------- 1 | // Run this in cloudshell // 2 | 3 | export PROJECT_ID=$(gcloud config list --format 'value(core.project)') 4 | gsutil mb gs://$PROJECT_ID-us 5 | gsutil mb -l eu gs://$PROJECT_ID-eu 6 | bq mk --dataset_id=nyc_tlc_EU --location=EU 7 | gcloud composer environments create composer-advanced-lab \ 8 | --location=us-east1 \ 9 | --image-version=composer-1.20.12-airflow-2.4.3 \ 10 | --zone=us-east1-c 11 | sudo apt-get install -y virtualenv 12 | python3 -m venv venv 13 | source venv/bin/activate 14 | DAGS_BUCKET=$(gcloud storage buckets list gs://us-east1* --format="value(name)") 15 | gcloud composer environments run composer-advanced-lab \ 16 | --location us-east1 variables -- \ 17 | set table_list_file_path /home/airflow/gcs/dags/bq_copy_eu_to_us_sample.csv 18 | gcloud composer environments run composer-advanced-lab \ 19 | --location us-east1 variables -- \ 20 | set gcs_source_bucket $PROJECT_ID-us 21 | gcloud composer environments run composer-advanced-lab \ 22 | --location us-east1 variables -- \ 23 | set gcs_dest_bucket $PROJECT_ID-eu 24 | cd ~ 25 | gsutil -m cp -r gs://spls/gsp283/python-docs-samples . 
26 | gsutil cp -r python-docs-samples/third_party/apache-airflow/plugins/* gs://$DAGS_BUCKET/plugins 27 | gsutil cp python-docs-samples/composer/workflows/bq_copy_across_locations.py gs://$DAGS_BUCKET/dags 28 | gsutil cp python-docs-samples/composer/workflows/bq_copy_eu_to_us_sample.csv gs://$DAGS_BUCKET/dags 29 | -------------------------------------------------------------------------------- /Create a Streaming Data Lake on Cloud Storage: Challenge Lab: -------------------------------------------------------------------------------- 1 | export BUCKET_NAME= 2 | export TOPIC_ID= 3 | export PROJECT_ID= 4 | 5 | gcloud config set compute/region "[REGION]" 6 | 7 | gcloud services disable dataflow.googleapis.com 8 | 9 | gcloud services enable dataflow.googleapis.com 10 | 11 | PROJECT_ID=$(gcloud config get-value project) 12 | REGION=us-central1 13 | AE_REGION=us-central 14 | 15 | 16 | // Task 1 // 17 | 18 | gcloud pubsub topics create $TOPIC_ID 19 | 20 | 21 | // Task 2 // 22 | 23 | gcloud app create --region=$AE_REGION 24 | 25 | gcloud scheduler jobs create pubsub publisher-job --schedule="* * * * *" \ 26 | --topic=$TOPIC_ID --message-body="[MESSAGE]" 27 | 28 | gcloud scheduler jobs run publisher-job 29 | 30 | 31 | // Task 3 // 32 | 33 | gsutil mb gs://$BUCKET_NAME 34 | 35 | 36 | // Task 4 // 37 | 38 | docker run -it -e DEVSHELL_PROJECT_ID=$DEVSHELL_PROJECT_ID python:3.7 /bin/bash 39 | 40 | git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git 41 | 42 | cd python-docs-samples/pubsub/streaming-analytics 43 | 44 | pip install -U -r requirements.txt # Install Apache Beam dependencies 45 | 46 | python PubSubToGCS.py \ 47 | --project=PROJECT_ID \ 48 | --region=REGION \ 49 | --input_topic=projects/PROJECT_ID/topics/TOPIC_ID \ 50 | --output_path=gs://BUCKET_NAME/samples/output \ 51 | --runner=DataflowRunner \ 52 | --window_size=2 \ 53 | --num_shards=2 \ 54 | --temp_location=gs://BUCKET_NAME/temp 55 | -------------------------------------------------------------------------------- /VPC Networking Fundamentals: -------------------------------------------------------------------------------- 1 | export ZONE=[paste your ZONE from lab] 2 | 3 | 4 | // Run all these commands on cloudshell // 5 | 6 | gcloud compute networks create mynetwork --project=$DEVSHELL_PROJECT_ID --subnet-mode=auto --mtu=1460 --bgp-routing-mode=regional 7 | gcloud compute instances create mynet-us-vm --project=$DEVSHELL_PROJECT_ID --zone=$ZONE --machine-type=e2-micro --network-interface=network-tier=PREMIUM,stack-type=IPV4_ONLY,subnet=mynetwork --metadata=enable-oslogin=true --maintenance-policy=MIGRATE --provisioning-model=STANDARD --create-disk=auto-delete=yes,boot=yes,device-name=mynet-us-vm,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230509,mode=rw,size=10,type=projects/$DEVSHELL_PROJECT_ID/zones/$ZONE/diskTypes/pd-balanced --no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring --labels=goog-ec-src=vm_add-gcloud --reservation-affinity=any 8 | gcloud compute instances create mynet-eu-vm --project=$DEVSHELL_PROJECT_ID --zone=europe-west1-b --machine-type=e2-micro --network-interface=network-tier=PREMIUM,stack-type=IPV4_ONLY,subnet=mynetwork --metadata=enable-oslogin=true --maintenance-policy=MIGRATE --provisioning-model=STANDARD --create-disk=auto-delete=yes,boot=yes,device-name=mynet-eu-vm,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230509,mode=rw,size=10,type=projects/$DEVSHELL_PROJECT_ID/zones/europe-west1-b/diskTypes/pd-balanced 
--no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring --labels=goog-ec-src=vm_add-gcloud --reservation-affinity=any 9 | -------------------------------------------------------------------------------- /Deploy and Manage Cloud Environments with Google Cloud: Challenge Lab: -------------------------------------------------------------------------------- 1 | // TASK 1 // 2 | 3 | cd /work/dm 4 | sed -i s/SET_REGION/us-east1/g prod-network.yaml 5 | 6 | gcloud deployment-manager deployments create prod-network --config=prod-network.yaml 7 | 8 | gcloud config set compute/zone us-east1-b 9 | 10 | gcloud container clusters create kraken-prod \ 11 | --num-nodes 2 \ 12 | --network kraken-prod-vpc \ 13 | --subnetwork kraken-prod-subnet\ 14 | --zone us-east1-b 15 | 16 | 17 | gcloud container clusters get-credentials kraken-prod 18 | 19 | cd /work/k8s 20 | 21 | for F in $(ls *.yaml); do kubectl create -f $F; done 22 | 23 | 24 | // TASK 2 // 25 | 26 | gcloud config set compute/zone us-east1-b 27 | 28 | gcloud compute instances create kraken-admin --network-interface="subnet=kraken-mgmt-subnet" --network-interface="subnet=kraken-prod-subnet" 29 | 30 | 31 | // TASK 3 // 32 | 33 | gcloud config set compute/zone us-east1-b 34 | 35 | gcloud container clusters get-credentials spinnaker-tutorial 36 | 37 | DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}") 38 | 39 | kubectl port-forward --namespace default $DECK_POD 8080:9000 >> /dev/null & 40 | 41 | 42 | // AT THE VERY END // 43 | 44 | gcloud config set compute/zone us-east1-b 45 | 46 | gcloud source repos clone sample-app 47 | 48 | cd sample-app 49 | touch a 50 | 51 | git config --global user.email "$(gcloud config get-value account)" 52 | git config --global user.name "Student" 53 | git commit -a -m "change" 54 | git tag v1.0.1 55 | git push --tags 56 | -------------------------------------------------------------------------------- /VPC Networks - Controlling Access.md: -------------------------------------------------------------------------------- 1 | # VPC Networks - Controlling Access 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | export ZONE= 6 | ``` 7 | 8 | ```cmd 9 | export PROJECT_ID=$(gcloud config get-value project) 10 | gcloud compute instances create blue \ 11 | --zone=$ZONE \ 12 | --machine-type=e2-medium \ 13 | --tags=web-server 14 | gcloud compute instances create green \ 15 | --zone=$ZONE \ 16 | --machine-type=e2-medium 17 | gcloud compute firewall-rules create allow-http-web-server \ 18 | --network=default \ 19 | --action=ALLOW \ 20 | --direction=INGRESS \ 21 | --source-ranges=0.0.0.0/0 \ 22 | --target-tags=web-server \ 23 | --rules=tcp:80,icmp 24 | gcloud compute instances create test-vm --machine-type=e2-micro --subnet=default --zone=$ZONE 25 | ``` 26 | 27 | ### IAM & admin > Service Accounts > Create service account > Name= `Network-admin` > Role > Compute Engine > Compute Network Admin > Save 28 | 29 | ```cmd 30 | gcloud iam service-accounts keys create key.json --iam-account=network-admin@$PROJECT_ID.iam.gserviceaccount.com 31 | gcloud compute ssh blue --zone=$ZONE --quiet 32 | ``` 33 | 34 | ```cmd 35 | sudo apt-get install nginx-light -y 36 | sudo sed -i 's#
<h1>Welcome to nginx!</h1>#<h1>Welcome to the blue server!</h1>#' /var/www/html/index.nginx-debian.html 37 | cat /var/www/html/index.nginx-debian.html 38 | exit 39 | ``` 40 | 41 | ```cmd 42 | gcloud beta compute ssh green --zone=$ZONE --quiet 43 | ``` 44 | 45 | ```cmd 46 | sudo apt-get install nginx-light -y 47 | sudo sed -i 's#<h1>Welcome to nginx!</h1>#<h1>Welcome to the Green server!</h1>#' /var/www/html/index.nginx-debian.html 48 | cat /var/www/html/index.nginx-debian.html 49 | ```
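### Optional check (a sketch): confirm the rule exists and that only the tagged instance is matched
```cmd
gcloud compute firewall-rules describe allow-http-web-server
gcloud compute instances list --filter="tags.items=web-server"
```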
-------------------------------------------------------------------------------- /Serverless Firebase Development: Challenge Lab: -------------------------------------------------------------------------------- 1 | Let's define some environment variables! 2 | 3 | export DATASET_SERVICE_NAME= 4 | export FRONTEND_STAGING_SERVICE= 5 | export FRONTEND_PRODUCTION_SERVICE= 6 | 7 | Then clone the repository! 8 | 9 | 10 | // TASK 2 // 11 | 12 | cd pet-theory/lab06/firebase-import-csv/solution 13 | npm install 14 | node index.js netflix_titles_original.csv 15 | 16 | 17 | // TASK 3 // 18 | 19 | cd ~/pet-theory/lab06/firebase-rest-api/solution-01 20 | npm install 21 | gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.1 22 | gcloud beta run deploy $DATASET_SERVICE_NAME --image gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.1 --allow-unauthenticated 23 | 24 | 25 | // TASK 4 // 26 | 27 | cd ~/pet-theory/lab06/firebase-rest-api/solution-02 28 | npm install 29 | gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.2 30 | gcloud beta run deploy $DATASET_SERVICE_NAME --image gcr.io/$GOOGLE_CLOUD_PROJECT/rest-api:0.2 --allow-unauthenticated 31 | 32 | SERVICE_URL= 33 | curl -X GET $SERVICE_URL/2019 34 | 35 | 36 | // TASK 5 // 37 | 38 | cd ~/pet-theory/lab06/firebase-frontend/public 39 | nano app.js 40 | 41 | npm install 42 | cd ~/pet-theory/lab06/firebase-frontend 43 | gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-staging:0.1 44 | gcloud beta run deploy $FRONTEND_STAGING_SERVICE --image gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-staging:0.1 45 | 46 | 47 | // TASK 6 // 48 | 49 | gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-production:0.1 50 | gcloud beta run deploy $FRONTEND_PRODUCTION_SERVICE --image gcr.io/$GOOGLE_CLOUD_PROJECT/frontend-production:0.1
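// The blank SERVICE_URL above can also be filled from gcloud instead of copied by hand, a sketch assuming the managed platform and your lab's region //

SERVICE_URL=$(gcloud run services describe $DATASET_SERVICE_NAME --platform managed --region [REGION] --format="value(status.url)")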
-------------------------------------------------------------------------------- /Engineer Data in Google Cloud: Challenge Lab: -------------------------------------------------------------------------------- 1 | // Task 1 // 2 | 3 | CREATE OR REPLACE TABLE 4 | taxirides.[TRAINING_TABLE_FROM_LAB] AS 5 | SELECT 6 | (tolls_amount + fare_amount) AS [FARE_COLUMN_FROM_LAB], 7 | pickup_datetime, 8 | pickup_longitude AS pickuplon, 9 | pickup_latitude AS pickuplat, 10 | dropoff_longitude AS dropofflon, 11 | dropoff_latitude AS dropofflat, 12 | passenger_count AS passengers, 13 | FROM 14 | taxirides.historical_taxi_rides_raw 15 | WHERE 16 | RAND() < 0.001 17 | AND trip_distance > 3 {change as mentioned in lab} 18 | AND fare_amount >= 2.0 {change as mentioned in lab} 19 | AND pickup_longitude > -78 20 | AND pickup_longitude < -70 21 | AND dropoff_longitude > -78 22 | AND dropoff_longitude < -70 23 | AND pickup_latitude > 37 24 | AND pickup_latitude < 45 25 | AND dropoff_latitude > 37 26 | AND dropoff_latitude < 45 27 | AND passenger_count > 3 {change as mentioned in lab} 28 | 29 | 30 | // Task 2 // 31 | 32 | CREATE OR REPLACE MODEL taxirides.[MODEL_NAME_FROM_LAB] 33 | TRANSFORM( 34 | * EXCEPT(pickup_datetime) 35 | 36 | , ST_Distance(ST_GeogPoint(pickuplon, pickuplat), ST_GeogPoint(dropofflon, dropofflat)) AS euclidean 37 | , CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING) AS dayofweek 38 | , CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING) AS hourofday 39 | ) 40 | OPTIONS(input_label_cols=['[FARE_COLUMN_FROM_LAB]'], model_type='linear_reg') 41 | AS 42 | 43 | SELECT * FROM taxirides.[TRAINING_TABLE_FROM_LAB] 44 | 45 | 46 | // Task 3 // 47 | 48 | CREATE OR REPLACE TABLE taxirides.2015_fare_amount_predictions 49 | AS 50 | SELECT * FROM ML.PREDICT(MODEL taxirides.[MODEL_NAME_FROM_LAB],( 51 | SELECT * FROM taxirides.report_prediction_data) 52 | ) 53 | -------------------------------------------------------------------------------- /Setting up a Private Kubernetes Cluster.md: -------------------------------------------------------------------------------- 1 | # Setting up a Private Kubernetes Cluster 2 | 3 | ## Run in cloudshell (Zone from Task 1 step 3) 4 | ```cmd 5 | export ZONE= 6 | ``` 7 | 8 | ```cmd 9 | gcloud config set compute/zone $ZONE 10 | export REGION=${ZONE%-*} 11 | gcloud beta container clusters create private-cluster \ 12 | --enable-private-nodes \ 13 | --master-ipv4-cidr 172.16.0.16/28 \ 14 | --enable-ip-alias \ 15 | --create-subnetwork "" 16 | gcloud compute instances create source-instance --zone=$ZONE --scopes 'https://www.googleapis.com/auth/cloud-platform' 17 | NAT_IAP_TANAY=$(gcloud compute instances describe source-instance --zone=$ZONE | grep natIP | awk '{print $2}') 18 | gcloud container clusters update private-cluster \ 19 | --enable-master-authorized-networks \ 20 | --master-authorized-networks $NAT_IAP_TANAY/32 21 | gcloud container clusters delete private-cluster --zone=$ZONE --quiet 22 | gcloud compute networks subnets create my-subnet \ 23 | --network default \ 24 | --range 10.0.4.0/22 \ 25 | --enable-private-ip-google-access \ 26 | --region=$REGION \ 27 | --secondary-range my-svc-range=10.0.32.0/20,my-pod-range=10.4.0.0/14 28 | gcloud beta container clusters create private-cluster2 \ 29 | --enable-private-nodes \ 30 | --enable-ip-alias \ 31 | --master-ipv4-cidr 172.16.0.32/28 \ 32 | --subnetwork my-subnet \ 33 | --services-secondary-range-name my-svc-range \ 34 | --cluster-secondary-range-name my-pod-range \ 35 | --zone=$ZONE 36 | NAT_IAP_Cloud=$(gcloud compute instances describe source-instance --zone=$ZONE | grep natIP | awk '{print $2}') 37 | gcloud container clusters update private-cluster2 \ 38 | --enable-master-authorized-networks \ 39 | --zone=$ZONE \ 40 | --master-authorized-networks $NAT_IAP_Cloud/32 41 | ``` 42 | 
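### Optional check (a sketch): confirm the remaining cluster really has private nodes enabled
```cmd
gcloud container clusters describe private-cluster2 --zone=$ZONE --format="value(privateClusterConfig.enablePrivateNodes)"
```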
spear named Gungnir and wearing a cloak and a broad hat." > language.json 34 | 35 | gsutil cp language.json gs://$DEVSHELL_PROJECT_ID-marking/ 36 | 37 | Google Video Intelligence ---> 38 | 39 | 40 | wget https://raw.githubusercontent.com/guys-in-the-cloud/cloud-skill-boosts/main/Challenge-labs/Perform%20Foundational%20Data%2C%20ML%2C%20and%20AI%20Tasks%20in%20Google%20Cloud%3A%20Challenge%20Lab/video-intelligence-request.json 41 | 42 | curl -s -H 'Content-Type: application/json' \ 43 | -H 'Authorization: Bearer '$(gcloud auth print-access-token)'' \ 44 | 'https://videointelligence.googleapis.com/v1/videos:annotate' \ 45 | -d @video-intelligence-request.json > video.json 46 | 47 | 48 | gsutil cp video.json gs://$DEVSHELL_PROJECT_ID-marking/ 49 | -------------------------------------------------------------------------------- /MongoDB Atlas with Natural Language API and Cloud Run: -------------------------------------------------------------------------------- 1 | 1st - Name the database Bakery and the collection cakes, then click Create 2 | 3 | 2nd - Delete what is currently in the box, add the following cake document, and then press Insert: 4 | 5 | 6 | 7 | { 8 | "name":"Chocolate Cake", 9 | "shortDescription":"Chocolate cake is a cake flavored with melted chocolate, cocoa powder, or sometimes both.", 10 | "description":"Chocolate cake is made with chocolate; it can be made with other ingredients, as well. These ingredients include fudge, vanilla creme, and other sweeteners. The history of chocolate cake goes back to 1764, when Dr. James Baker discovered how to make chocolate by grinding cocoa beans between two massive circular millstones.", 11 | "image":"https://addapinch.com/wp-content/uploads/2020/04/chocolate-cake-DSC_1768.jpg", 12 | "ingredients":[ 13 | "flour", 14 | "sugar", 15 | "cocoa powder" 16 | ], 17 | "stock": 25 18 | } 19 | 20 | 21 | 22 | { 23 | "name":"Cheese Cake", 24 | "shortDescription":"Cheesecake is a sweet dessert consisting of one or more layers. The main, and thickest, layer consists of a mixture of a soft, fresh cheese (typically cottage cheese, cream cheese or ricotta), eggs, and sugar. ", 25 | "description":"Cheesecake is a sweet dessert consisting of one or more layers. The main, and thickest, layer consists of a mixture of a soft, fresh cheese (typically cottage cheese, cream cheese or ricotta), eggs, and sugar.
If there is a bottom layer, it most often consists of a crust or base made from crushed cookies (or digestive biscuits), graham crackers, pastry, or sometimes sponge cake. Cheesecake may be baked or unbaked (and is usually refrigerated).", 26 | "image":"https://sallysbakingaddiction.com/wp-content/uploads/2018/05/perfect-cheesecake-recipe.jpg", 27 | "ingredients":[ 28 | "flour", 29 | "sugar", 30 | "eggs" 31 | ], 32 | "stock": 40 33 | } 34 | 35 | 36 | 37 | // Add a new document to this new comments collection and add the following code // 38 | 39 | { 40 | "cakeId": "", 41 | "date": "...", 42 | "name":"", 43 | "text":"" 44 | } 45 | -------------------------------------------------------------------------------- /Deploy to Kubernetes in Google Cloud: Challenge Lab: -------------------------------------------------------------------------------- 1 | // 1st Task // 2 | 3 | source <(gsutil cat gs://cloud-training/gsp318/marking/setup_marking_v2.sh) 4 | gcloud source repos clone valkyrie-app --project=[Your Project Id] 5 | cd valkyrie-app 6 | cat > Dockerfile <> /dev/null & 50 | printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo 51 | 52 | gcloud source repos list 53 | 54 | sed -i "s/green/orange/g" source/html.go 55 | sed -i "s/YOUR_PROJECT/$GOOGLE_CLOUD_PROJECT/g" Jenkinsfile 56 | git config --global user.email "you@example.com" 57 | git config --global user.name "student" 58 | git add . 59 | git commit -m "built pipeline init" 60 | git push 61 | -------------------------------------------------------------------------------- /Serverless Cloud Run Development: Challenge Lab: -------------------------------------------------------------------------------- 1 | First, let's define some environment variables (values come from your lab) -> 2 | 3 | export PUBLIC_BILLING_SERVICE= 4 | export FRONTEND_STAGING_SERVICE= 5 | export PRIVATE_BILLING_SERVICE= 6 | export BILLING_SERVICE_ACCOUNT= 7 | export BILLING_PROD_SERVICE= 8 | export FRONTEND_SERVICE_ACCOUNT= 9 | export FRONTEND_PRODUCTION_SERVICE= 10 | 11 | 12 | // SOME STARTING BASIC COMMANDS FOR THIS LAB -> // 13 | 14 | gcloud config set run/region us-central1 15 | gcloud config set run/platform managed 16 | 17 | git clone https://github.com/rosera/pet-theory.git && cd pet-theory/lab07 18 | 19 | 20 | // TASK 1 // 21 | 22 | cd ~/pet-theory/lab07/unit-api-billing 23 | 24 | gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/billing-staging-api:0.1 25 | gcloud run deploy $PUBLIC_BILLING_SERVICE --image gcr.io/$DEVSHELL_PROJECT_ID/billing-staging-api:0.1 26 | 27 | 28 | // TASK 2 // 29 | 30 | cd ~/pet-theory/lab07/staging-frontend-billing 31 | 32 | gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/frontend-staging:0.1 33 | gcloud run deploy $FRONTEND_STAGING_SERVICE --image gcr.io/$DEVSHELL_PROJECT_ID/frontend-staging:0.1 34 | 35 | 36 | // TASK 3 // 37 | 38 | cd ~/pet-theory/lab07/staging-api-billing 39 | 40 | gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/billing-staging-api:0.2 41 | gcloud run deploy $PRIVATE_BILLING_SERVICE --image gcr.io/$DEVSHELL_PROJECT_ID/billing-staging-api:0.2 42 | 43 | 44 | // TASK 4 // 45 | 46 | gcloud iam service-accounts create $BILLING_SERVICE_ACCOUNT --display-name "Billing Service Account Cloud Run" 47 | 48 | 49 | // TASK 5 // 50 | 51 | cd ~/pet-theory/lab07/prod-api-billing 52 | 53 | gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/billing-prod-api:0.1 54 | gcloud run deploy $BILLING_PROD_SERVICE --image
gcr.io/$DEVSHELL_PROJECT_ID/billing-prod-api:0.1 55 | 56 | 57 | // TASK 6 // 58 | 59 | gcloud iam service-accounts create $FRONTEND_SERVICE_ACCOUNT --display-name "Billing Service Account Cloud Run Invoker" 60 | 61 | 62 | // TASK 7 // 63 | 64 | cd ~/pet-theory/lab07/prod-frontend-billing 65 | 66 | gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/frontend-prod:0.1 67 | gcloud run deploy $FRONTEND_PRODUCTION_SERVICE --image gcr.io/$DEVSHELL_PROJECT_ID/frontend-prod:0.1 68 | -------------------------------------------------------------------------------- /Getting Started with BigQuery Machine Learning: -------------------------------------------------------------------------------- 1 | // Run this code in cloudshell // 2 | 3 | bq mk bqml_lab 4 | bq query --use_legacy_sql=false \ 5 | 'CREATE OR REPLACE MODEL `bqml_lab.sample_model` 6 | OPTIONS(model_type="logistic_reg") AS 7 | SELECT 8 | IF(totals.transactions IS NULL, 0, 1) AS label, 9 | IFNULL(device.operatingSystem, "") AS os, 10 | device.isMobile AS is_mobile, 11 | IFNULL(geoNetwork.country, "") AS country, 12 | IFNULL(totals.pageviews, 0) AS pageviews 13 | FROM 14 | `bigquery-public-data.google_analytics_sample.ga_sessions_*` 15 | WHERE 16 | _TABLE_SUFFIX BETWEEN "20160801" AND "20170631" 17 | LIMIT 100000;' 18 | bq query --use_legacy_sql=false \ 19 | 'SELECT 20 | * 21 | FROM 22 | ML.EVALUATE(MODEL `bqml_lab.sample_model`, ( 23 | SELECT 24 | IF(totals.transactions IS NULL, 0, 1) AS label, 25 | IFNULL(device.operatingSystem, "") AS os, 26 | device.isMobile AS is_mobile, 27 | IFNULL(geoNetwork.country, "") AS country, 28 | IFNULL(totals.pageviews, 0) AS pageviews 29 | FROM 30 | `bigquery-public-data.google_analytics_sample.ga_sessions_*` 31 | WHERE 32 | _TABLE_SUFFIX BETWEEN "20170701" AND "20170801"));' 33 | bq query --use_legacy_sql=false \ 34 | 'SELECT 35 | country, 36 | SUM(predicted_label) AS total_predicted_purchases 37 | FROM 38 | ML.PREDICT(MODEL `bqml_lab.sample_model`, ( 39 | SELECT 40 | IFNULL(device.operatingSystem, "") AS os, 41 | device.isMobile AS is_mobile, 42 | IFNULL(totals.pageviews, 0) AS pageviews, 43 | IFNULL(geoNetwork.country, "") AS country 44 | FROM 45 | `bigquery-public-data.google_analytics_sample.ga_sessions_*` 46 | WHERE 47 | _TABLE_SUFFIX BETWEEN "20170701" AND "20170801")) 48 | GROUP BY country 49 | ORDER BY total_predicted_purchases DESC 50 | LIMIT 10;' 51 | bq query --use_legacy_sql=false \ 52 | 'SELECT 53 | fullVisitorId, 54 | SUM(predicted_label) AS total_predicted_purchases 55 | FROM 56 | ML.PREDICT(MODEL `bqml_lab.sample_model`, ( 57 | SELECT 58 | IFNULL(device.operatingSystem, "") AS os, 59 | device.isMobile AS is_mobile, 60 | IFNULL(totals.pageviews, 0) AS pageviews, 61 | IFNULL(geoNetwork.country, "") AS country, 62 | fullVisitorId 63 | FROM 64 | `bigquery-public-data.google_analytics_sample.ga_sessions_*` 65 | WHERE 66 | _TABLE_SUFFIX BETWEEN "20170701" AND "20170801")) 67 | GROUP BY fullVisitorId 68 | ORDER BY total_predicted_purchases DESC 69 | LIMIT 10;' 70 | -------------------------------------------------------------------------------- /Create and Manage Cloud Resources: Challenge Lab: -------------------------------------------------------------------------------- 1 | // Disclaimer: after running all the commands, wait 3-5 minutes, then check your progress // 2 | 3 | 4 | // Task 2 --> Create a Kubernetes service cluster // 5 | 6 | gcloud container clusters create nucleus-backend \ 7 | --num-nodes 1 \ 8 | --network nucleus-vpc \ 9 | --zone [ZONE] 10 | 11 | gcloud container
clusters get-credentials nucleus-backend \ 12 | --zone [ZONE] 13 | 14 | kubectl create deployment hello-server \ 15 | --image=gcr.io/google-samples/hello-app:2.0 16 | 17 | kubectl expose deployment hello-server \ 18 | --type=LoadBalancer \ 19 | --port [PORT_NO] 20 | 21 | 22 | // Task 3 --> Set up an HTTP load balancer // 23 | 24 | gcloud compute instance-templates create web-server-template \ 25 | --metadata-from-file startup-script=startup.sh \ 26 | --network nucleus-vpc \ 27 | --machine-type g1-small \ 28 | --region us-east1 29 | 30 | gcloud compute instance-groups managed create web-server-group \ 31 | --base-instance-name web-server \ 32 | --size 2 \ 33 | --template web-server-template \ 34 | --region us-east1 35 | 36 | gcloud compute firewall-rules create [FIREWALL_RULE] \ 37 | --allow tcp:80 \ 38 | --network nucleus-vpc 39 | 40 | gcloud compute http-health-checks create http-basic-check 41 | 42 | gcloud compute instance-groups managed \ 43 | set-named-ports web-server-group \ 44 | --named-ports http:80 \ 45 | --region us-east1 46 | 47 | gcloud compute backend-services create web-server-backend \ 48 | --protocol HTTP \ 49 | --http-health-checks http-basic-check \ 50 | --global 51 | 52 | gcloud compute backend-services add-backend web-server-backend \ 53 | --instance-group web-server-group \ 54 | --instance-group-region us-east1 \ 55 | --global 56 | 57 | gcloud compute url-maps create web-server-map \ 58 | --default-service web-server-backend 59 | 60 | gcloud compute target-http-proxies create http-lb-proxy \ 61 | --url-map web-server-map 62 | 63 | gcloud compute forwarding-rules create http-content-rule \ 64 | --global \ 65 | --target-http-proxy http-lb-proxy \ 66 | --ports 80 67 | 68 | gcloud compute forwarding-rules list 69 | -------------------------------------------------------------------------------- /Check Point: Next-Gen Data Center Security CloudGuard for Google Cloud: -------------------------------------------------------------------------------- 1 | export ZONE=[paste your ZONE from lab] 2 | 3 | 4 | // Run all these commands on cloudshell // 5 | 6 | export REGION=${ZONE::-2} 7 | gcloud compute networks create vpc-cluster --bgp-routing-mode=regional --subnet-mode=custom 8 | gcloud compute networks subnets create cluster --network=vpc-cluster --range=192.168.110.0/24 --region=$REGION --enable-private-ip-google-access 9 | gcloud compute networks create vpc-management --bgp-routing-mode=regional --subnet-mode=custom 10 | gcloud compute networks subnets create management --network=vpc-management --range=192.168.120.0/24 --region=$REGION --enable-private-ip-google-access 11 | gcloud compute networks create vpc-prod --bgp-routing-mode=regional --subnet-mode=custom 12 | gcloud compute networks subnets create prod --network=vpc-prod --range=10.0.0.0/24 --region=$REGION 13 | gcloud compute networks create vpc-qa --bgp-routing-mode=regional --subnet-mode=custom 14 | gcloud compute networks subnets create qa --network=vpc-qa --range=10.0.1.0/24 --region=$REGION 15 | gcloud compute firewall-rules create ingress-qa --action allow --direction=INGRESS --source-ranges=0.0.0.0/0 --network=vpc-qa --rules all 16 | gcloud compute firewall-rules create ingress-prod --action allow --direction=INGRESS --source-ranges=0.0.0.0/0 --network=vpc-prod --rules all 17 | gcloud compute firewall-rules create rdp-management --action allow --direction=INGRESS --source-ranges=0.0.0.0/0 --network=vpc-management --rules tcp:3389 18 | gcloud compute instances create rdp-client \ 19 | --zone=$ZONE \ 20 | 
--machine-type=n1-standard-4 \ 21 | --image-project=qwiklabs-resources \ 22 | --image=sap-rdp-image \ 23 | --network=vpc-management \ 24 | --subnet=management \ 25 | --tags=rdp,http-server,https-server \ 26 | --boot-disk-type=pd-ssd 27 | gcloud compute instances create linux-qa --zone $ZONE --image-project=debian-cloud --image-family=debian-11 --custom-cpu 1 --custom-memory 4 --network-interface subnet=qa,private-network-ip=10.0.1.4,no-address --metadata startup-script="#! /bin/bash 28 | useradd -m -p sa1trmaMoZ25A cp 29 | " 30 | gcloud compute instances create linux-prod --zone $ZONE --image-project=debian-cloud --image-family=debian-11 --custom-cpu 1 --custom-memory 4 --network-interface subnet=prod,private-network-ip=10.0.0.4,no-address --metadata startup-script="#! /bin/bash 31 | useradd -m -p sa1trmaMoZ25A cp 32 | " 33 | -------------------------------------------------------------------------------- /Set Up and Configure a Cloud Environment in Google Cloud: Challenge Lab: -------------------------------------------------------------------------------- 1 | export USER_NAME2= 2 | 3 | 4 | // Task 1 // 5 | 6 | gcloud compute networks create griffin-dev-vpc --subnet-mode custom 7 | 8 | gcloud compute networks subnets create griffin-dev-wp --network=griffin-dev-vpc --region us-east1 --range=192.168.16.0/20 9 | 10 | gcloud compute networks subnets create griffin-dev-mgmt --network=griffin-dev-vpc --region us-east1 --range=192.168.32.0/20 11 | 12 | 13 | // Task 2 // 14 | 15 | gsutil cp -r gs://cloud-training/gsp321/dm . 16 | 17 | cd dm 18 | 19 | sed -i s/SET_REGION/us-east1/g prod-network.yaml 20 | 21 | gcloud deployment-manager deployments create prod-network \ 22 | --config=prod-network.yaml 23 | 24 | cd .. 25 | 26 | 27 | // Task 3 // 28 | 29 | gcloud compute instances create bastion --network-interface=network=griffin-dev-vpc,subnet=griffin-dev-mgmt --network-interface=network=griffin-prod-vpc,subnet=griffin-prod-mgmt --tags=ssh --zone=us-east1-b 30 | 31 | gcloud compute firewall-rules create fw-ssh-dev --source-ranges=0.0.0.0/0 --target-tags ssh --allow=tcp:22 --network=griffin-dev-vpc 32 | 33 | gcloud compute firewall-rules create fw-ssh-prod --source-ranges=0.0.0.0/0 --target-tags ssh --allow=tcp:22 --network=griffin-prod-vpc 34 | 35 | 36 | // Task 4 // 37 | 38 | gcloud sql instances create griffin-dev-db --root-password password --region=us-east1 --database-version=MYSQL_5_7 39 | 40 | gcloud sql connect griffin-dev-db 41 | 42 | CREATE DATABASE wordpress; 43 | GRANT ALL PRIVILEGES ON wordpress.* TO "wp_user"@"%" IDENTIFIED BY "stormwind_rules"; 44 | FLUSH PRIVILEGES; 45 | 46 | exit 47 | 48 | 49 | // Task 5 // 50 | 51 | gcloud container clusters create griffin-dev \ 52 | --network griffin-dev-vpc \ 53 | --subnetwork griffin-dev-wp \ 54 | --machine-type n1-standard-4 \ 55 | --num-nodes 2 \ 56 | --zone us-east1-b 57 | 58 | 59 | // Task 6 // 60 | 61 | gsutil cp -r gs://cloud-training/gsp321/wp-k8s .
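// Quick check before editing: the copied wp-k8s folder should contain the wp-env.yaml used below plus the wp-deployment.yaml and wp-service.yaml used in Task 7 — verify with: //

ls wp-k8s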
62 | 63 | cd wp-k8s 64 | 65 | sed -i s/username_goes_here/wp_user/g wp-env.yaml 66 | 67 | sed -i s/password_goes_here/stormwind_rules/g wp-env.yaml 68 | 69 | ==> Password is stormwind_rules 70 | 71 | kubectl apply -f wp-env.yaml 72 | 73 | Run the service account key and secret creation commands provided in the lab instructions 74 | 75 | 76 | // Task 7 // 77 | 78 | nano wp-deployment.yaml 79 | 80 | kubectl create -f wp-deployment.yaml 81 | 82 | kubectl create -f wp-service.yaml 83 | -------------------------------------------------------------------------------- /Extract, Analyze, and Translate Text from Images with the Cloud ML APIs.md: -------------------------------------------------------------------------------- 1 | # Extract, Analyze, and Translate Text from Images with the Cloud ML APIs 2 | 3 | ## Run in cloudshell 4 | ```cmd 5 | gcloud alpha services api-keys create --display-name="Tanay" 6 | KEY_NAME=$(gcloud alpha services api-keys list --format="value(name)" --filter "displayName=Tanay") 7 | API_KEY=$(gcloud alpha services api-keys get-key-string $KEY_NAME --format="value(keyString)") 8 | export PROJECT_ID=$(gcloud config list --format 'value(core.project)') 9 | gsutil mb -p $PROJECT_ID -c regional -l us-central1 gs://$PROJECT_ID 10 | curl -O https://raw.githubusercontent.com/siddharth7000/practice/main/sign.jpg 11 | gsutil cp sign.jpg gs://$PROJECT_ID/sign.jpg 12 | gsutil acl ch -u AllUsers:R gs://$PROJECT_ID/sign.jpg 13 | touch ocr-request.json 14 | tee ocr-request.json </tmp/hello-cloudbuild-env-policy.yaml < 2 | 3 | export CUSTOM_SECURIY_ROLE= 4 | 5 | export SERVICE_ACCOUNT= 6 | 7 | export CLUSTER_NAME= 8 | 9 | 10 | // TASK 1 // 11 | 12 | wget https://raw.githubusercontent.com/guys-in-the-cloud/cloud-skill-boosts/main/Challenge-labs/Ensure%20Access%20%26%20Identity%20in%20Google%20Cloud%3A%20Challenge%20Lab/role-definition.yaml 13 | 14 | sed -i "s//$CUSTOM_SECURIY_ROLE/g" role-definition.yaml 15 | 16 | gcloud iam service-accounts create orca-private-cluster-sa --display-name "${SERVICE_ACCOUNT} Service Account" 17 | gcloud iam roles create $CUSTOM_SECURIY_ROLE --project $DEVSHELL_PROJECT_ID --file role-definition.yaml 18 | 19 | 20 | // TASK 2 // 21 | 22 | gcloud iam service-accounts create $SERVICE_ACCOUNT --display-name "${SERVICE_ACCOUNT} Service Account" 23 | 24 | 25 | // TASK 3 // 26 | 27 | gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/monitoring.viewer 28 | 29 | gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/monitoring.metricWriter 30 | 31 | gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/logging.logWriter 32 | 33 | gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role projects/$DEVSHELL_PROJECT_ID/roles/$CUSTOM_SECURIY_ROLE 34 | 35 | 36 | // TASK 4 // 37 | 38 | gcloud config set compute/zone us-east1-b 39 | 40 | gcloud container clusters create $CLUSTER_NAME --num-nodes 1 --master-ipv4-cidr=172.16.0.64/28 --network orca-build-vpc --subnetwork orca-build-subnet --enable-master-authorized-networks --master-authorized-networks 192.168.10.2/32 --enable-ip-alias --enable-private-nodes --enable-private-endpoint --service-account $SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --zone us-east1-b 41 | 42 | 43 | //
TASK 5 // 44 | 45 | gcloud compute ssh --zone "us-east1-b" "orca-jumphost" 46 | 47 | gcloud config set compute/zone us-east1-b 48 | 49 | export CLUSTER_NAME= 50 | 51 | gcloud container clusters get-credentials $CLUSTER_NAME --internal-ip 52 | kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0 53 | 54 | kubectl expose deployment hello-server --name orca-hello-service --type LoadBalancer --port 80 --target-port 8080 55 | -------------------------------------------------------------------------------- /Implement DevOps in Google Cloud: Challenge Lab: -------------------------------------------------------------------------------- 1 | // 1st Task // 2 | 3 | gcloud config set compute/zone us-east1-b 4 | 5 | git clone https://source.developers.google.com/p/$DEVSHELL_PROJECT_ID/r/sample-app 6 | 7 | gcloud container clusters get-credentials jenkins-cd 8 | 9 | kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account) 10 | 11 | helm repo add stable https://charts.helm.sh/stable 12 | 13 | helm repo update 14 | 15 | helm install cd stable/jenkins 16 | 17 | export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=cd" -o jsonpath="{.items[0].metadata.name}") 18 | kubectl port-forward $POD_NAME 8080:8080 >> /dev/null & 19 | printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo 20 | 21 | cd sample-app 22 | kubectl create ns production 23 | kubectl apply -f k8s/production -n production 24 | kubectl apply -f k8s/canary -n production 25 | kubectl apply -f k8s/services -n production 26 | 27 | kubectl get svc 28 | kubectl get service gceme-frontend -n production 29 | 30 | After these commands, complete the Jenkins part of the lab in the UI, then run the remaining commands - 31 | 32 | git init 33 | git config credential.helper gcloud.sh 34 | git remote add origin https://source.developers.google.com/p/$DEVSHELL_PROJECT_ID/r/sample-app 35 | git config --global user.email "" 36 | git config --global user.name "" 37 | git add . 38 | git commit -m "initial commit" 39 | git push origin master 40 | 41 | 42 | // 2nd Task // 43 | 44 | git checkout -b new-feature 45 | 46 | After this command, make the required changes (a hedged sketch follows below)!
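// The exact edits come from your lab instructions. As a sketch of the usual changes for this task -- the project ID in the Jenkinsfile, the card color in html.go, and the version string in main.go. The REPLACE_WITH_YOUR_PROJECT_ID token and the blue->orange swap are assumptions here, so verify names and values against your lab before running: //

sed -i "s/REPLACE_WITH_YOUR_PROJECT_ID/$DEVSHELL_PROJECT_ID/g" Jenkinsfile
sed -i "s/card blue/card orange/g" html.go
sed -i "s/1.0.0/2.0.0/g" main.go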
47 | 48 | git add Jenkinsfile html.go main.go 49 | git commit -m "Version 2.0.0" 50 | git push origin new-feature 51 | 52 | 53 | // 3rd Task // 54 | 55 | curl http://localhost:8001/api/v1/namespaces/new-feature/services/gceme-frontend:80/proxy/version 56 | kubectl get service gceme-frontend -n production 57 | git checkout -b canary 58 | git push origin canary 59 | export FRONTEND_SERVICE_IP=$(kubectl get -o \ 60 | jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend) 61 | git checkout master 62 | git push origin master 63 | 64 | 65 | // 4th Task // 66 | 67 | export FRONTEND_SERVICE_IP=$(kubectl get -o \ 68 | jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend) 69 | while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done 70 | 71 | kubectl get service gceme-frontend -n production 72 | 73 | git merge canary 74 | git push origin master 75 | export FRONTEND_SERVICE_IP=$(kubectl get -o \ 76 | jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend) 77 | -------------------------------------------------------------------------------- /Build a Website on Google Cloud: Challenge Lab: -------------------------------------------------------------------------------- 1 | export MONOLITH_IDENTIFIER= 2 | export CLUSTER_NAME= 3 | export ORDERS_IDENTIFIER= 4 | export PRODUCTS_IDENTIFIER= 5 | export FRONTEND_IDENTIFIER= 6 | 7 | gcloud services enable cloudbuild.googleapis.com 8 | gcloud services enable container.googleapis.com 9 | 10 | 11 | // TASK 1 // 12 | 13 | git clone https://github.com/googlecodelabs/monolith-to-microservices.git 14 | 15 | cd ~/monolith-to-microservices 16 | ./setup.sh 17 | 18 | cd ~/monolith-to-microservices/monolith 19 | gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${MONOLITH_IDENTIFIER}:1.0.0 . 20 | 21 | 22 | // TASK 2 // 23 | 24 | gcloud config set compute/zone us-central1-a 25 | 26 | gcloud container clusters create $CLUSTER_NAME --num-nodes 3 27 | 28 | gcloud container clusters get-credentials $CLUSTER_NAME 29 | 30 | kubectl create deployment $MONOLITH_IDENTIFIER --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/${MONOLITH_IDENTIFIER}:1.0.0 31 | kubectl expose deployment $MONOLITH_IDENTIFIER --type=LoadBalancer --port 80 --target-port 8080 32 | 33 | kubectl get svc -w 34 | 35 | 36 | // TASK 3 // 37 | 38 | cd ~/monolith-to-microservices/microservices/src/orders 39 | gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${ORDERS_IDENTIFIER}:1.0.0 . 40 | 41 | cd ~/monolith-to-microservices/microservices/src/products 42 | gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${PRODUCTS_IDENTIFIER}:1.0.0 . 
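// Optional sanity check before deploying in Task 4 -- list the images Cloud Build just pushed; both the orders and products identifiers you exported above should appear: //

gcloud container images list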
43 | 44 | 45 | // TASK 4 // 46 | 47 | kubectl create deployment $ORDERS_IDENTIFIER --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/${ORDERS_IDENTIFIER}:1.0.0 48 | kubectl expose deployment $ORDERS_IDENTIFIER --type=LoadBalancer --port 80 --target-port 8081 49 | 50 | kubectl get svc -w 51 | 52 | kubectl create deployment $PRODUCTS_IDENTIFIER --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/${PRODUCTS_IDENTIFIER}:1.0.0 53 | kubectl expose deployment $PRODUCTS_IDENTIFIER --type=LoadBalancer --port 80 --target-port 8082 54 | 55 | kubectl get svc -w 56 | 57 | 58 | // TASK 5 // 59 | 60 | export ORDERS_SERVICE_IP=$(kubectl get services -o jsonpath="{.items[1].status.loadBalancer.ingress[0].ip}") 61 | export PRODUCTS_SERVICE_IP=$(kubectl get services -o jsonpath="{.items[2].status.loadBalancer.ingress[0].ip}") 62 | 63 | cd ~/monolith-to-microservices/react-app 64 | sed -i "s/localhost:8081/$ORDERS_SERVICE_IP/g" .env 65 | sed -i "s/localhost:8082/$PRODUCTS_SERVICE_IP/g" .env 66 | npm run build 67 | 68 | 69 | // TASK 6 // 70 | 71 | cd ~/monolith-to-microservices/microservices/src/frontend 72 | gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${FRONTEND_IDENTIFIER}:1.0.0 . 73 | 74 | 75 | // TASK 7 // 76 | 77 | kubectl create deployment $FRONTEND_IDENTIFIER --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/${FRONTEND_IDENTIFIER}:1.0.0 78 | kubectl expose deployment $FRONTEND_IDENTIFIER --type=LoadBalancer --port 80 --target-port 8080 79 | 80 | kubectl get svc -w 81 | -------------------------------------------------------------------------------- /Create ML Models with BigQuery ML: Challenge Lab: -------------------------------------------------------------------------------- 1 | // 1ST TASK // 2 | 3 | bq mk austin 4 | 5 | 6 | // 2ND TASK // 7 | 8 | CREATE OR REPLACE MODEL austin.location_model 9 | OPTIONS 10 | (model_type='linear_reg', labels=['duration_minutes']) AS 11 | SELECT 12 | start_station_name, 13 | EXTRACT(HOUR FROM start_time) AS start_hour, 14 | EXTRACT(DAYOFWEEK FROM start_time) AS day_of_week, 15 | duration_minutes, 16 | address as location 17 | FROM 18 | `bigquery-public-data.austin_bikeshare.bikeshare_trips` AS trips 19 | JOIN 20 | `bigquery-public-data.austin_bikeshare.bikeshare_stations` AS stations 21 | ON 22 | trips.start_station_name = stations.name 23 | WHERE 24 | EXTRACT(YEAR FROM start_time) = 25 | AND duration_minutes > 0 26 | 27 | 28 | // 3RD TASK // 29 | 30 | CREATE OR REPLACE MODEL austin.subscriber_model 31 | OPTIONS 32 | (model_type='linear_reg', labels=['duration_minutes']) AS 33 | SELECT 34 | start_station_name, 35 | EXTRACT(HOUR FROM start_time) AS start_hour, 36 | subscriber_type, 37 | duration_minutes 38 | FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips` AS trips 39 | WHERE EXTRACT(YEAR FROM start_time) = 40 | 54 | 55 | // 4TH TASK // 56 | 57 | // 1ST MODEL // 58 | 59 | -- Evaluation metrics for location_model 60 | SELECT 61 | SQRT(mean_squared_error) AS rmse, 62 | mean_absolute_error 63 | FROM 64 | ML.EVALUATE(MODEL austin.location_model, ( 65 | SELECT 66 | start_station_name, 67 | EXTRACT(HOUR FROM start_time) AS start_hour, 68 | EXTRACT(DAYOFWEEK FROM start_time) AS
day_of_week, 69 | duration_minutes, 70 | address as location 71 | FROM 72 | `bigquery-public-data.austin_bikeshare.bikeshare_trips` AS trips 73 | JOIN 74 | `bigquery-public-data.austin_bikeshare.bikeshare_stations` AS stations 75 | ON 76 | trips.start_station_name = stations.name 77 | WHERE EXTRACT(YEAR FROM start_time) = ) 78 | ) 79 | 80 | // 2ND MODEL // 81 | 82 | -- Evaluation metrics for subscriber_model 83 | SELECT 84 | SQRT(mean_squared_error) AS rmse, 85 | mean_absolute_error 86 | FROM 87 | ML.EVALUATE(MODEL austin.subscriber_model, ( 88 | SELECT 89 | start_station_name, 90 | EXTRACT(HOUR FROM start_time) AS start_hour, 91 | subscriber_type, 92 | duration_minutes 93 | FROM 94 | `bigquery-public-data.austin_bikeshare.bikeshare_trips` AS trips 95 | WHERE 96 | EXTRACT(YEAR FROM start_time) = ) 97 | ) 98 | 99 | 100 | // 5TH TASK // 101 | 102 | // 1ST ONE // 103 | 104 | SELECT 105 | start_station_name, 106 | COUNT(*) AS trips 107 | FROM 108 | `bigquery-public-data.austin_bikeshare.bikeshare_trips` 109 | WHERE 110 | EXTRACT(YEAR FROM start_time) = 111 | GROUP BY 112 | start_station_name 113 | ORDER BY 114 | trips DESC 115 | 116 | // 2ND ONE // 117 | 118 | SELECT AVG(predicted_duration_minutes) AS average_predicted_trip_length 119 | FROM ML.predict(MODEL austin.subscriber_model, ( 120 | SELECT 121 | start_station_name, 122 | EXTRACT(HOUR FROM start_time) AS start_hour, 123 | subscriber_type, 124 | duration_minutes 125 | FROM 126 | `bigquery-public-data.austin_bikeshare.bikeshare_trips` 127 | WHERE 128 | EXTRACT(YEAR FROM start_time) = 129 | AND subscriber_type = 'Single Trip' 130 | AND start_station_name = '21st & Speedway @PCL')) 131 | -------------------------------------------------------------------------------- /Networking Fundamentals on Google Cloud: Challenge Lab: -------------------------------------------------------------------------------- 1 | export REGION=[REGION] 2 | 3 | export ZONE=[ZONE] 4 | 5 | 6 | // Task 1 - Create multiple web server instances // 7 | 8 | gcloud compute instances create web1 \ 9 | --zone=$ZONE \ 10 | --machine-type=e2-small \ 11 | --tags=network-lb-tag \ 12 | --image-family=debian-11 \ 13 | --image-project=debian-cloud \ 14 | --metadata=startup-script='#!/bin/bash 15 | apt-get update 16 | apt-get install apache2 -y 17 | service apache2 restart 18 | echo "
<h3>Web Server: web1</h3>
" | tee /var/www/html/index.html' 19 | 20 | gcloud compute instances create web2 \ 21 | --zone=$ZONE \ 22 | --machine-type=e2-small \ 23 | --tags=network-lb-tag \ 24 | --image-family=debian-11 \ 25 | --image-project=debian-cloud \ 26 | --metadata=startup-script='#!/bin/bash 27 | apt-get update 28 | apt-get install apache2 -y 29 | service apache2 restart 30 | echo "
<h3>Web Server: web2</h3>
" | tee /var/www/html/index.html' 31 | 32 | gcloud compute instances create web3 \ 33 | --zone=$ZONE \ 34 | --machine-type=e2-small \ 35 | --tags=network-lb-tag \ 36 | --image-family=debian-11 \ 37 | --image-project=debian-cloud \ 38 | --metadata=startup-script='#!/bin/bash 39 | apt-get update 40 | apt-get install apache2 -y 41 | service apache2 restart 42 | echo "
<h3>Web Server: web3</h3>
" | tee /var/www/html/index.html' 43 | 44 | gcloud compute firewall-rules create www-firewall-network-lb \ 45 | --target-tags network-lb-tag --allow tcp:80 46 | 47 | 48 | // Task 2 - Configure the load balancing service // 49 | 50 | gcloud compute addresses create network-lb-ip-1 \ 51 | --region $REGION 52 | 53 | gcloud compute http-health-checks create basic-check 54 | 55 | gcloud compute target-pools create www-pool \ 56 | --region $REGION --http-health-check basic-check 57 | 58 | gcloud compute target-pools add-instances www-pool \ 59 | --instances web1,web2,web3 --zone $ZONE 60 | 61 | gcloud compute forwarding-rules create www-rule \ 62 | --region $REGION \ 63 | --ports 80 \ 64 | --address network-lb-ip-1 \ 65 | --target-pool www-pool 66 | 67 | 68 | // Task 3 - Create an HTTP load balancer // 69 | 70 | gcloud compute instance-templates create lb-backend-template \ 71 | --region=$REGION \ 72 | --network=default \ 73 | --subnet=default \ 74 | --tags=allow-health-check \ 75 | --machine-type=e2-medium \ 76 | --image-family=debian-11 \ 77 | --image-project=debian-cloud \ 78 | --metadata=startup-script='#!/bin/bash 79 | apt-get update 80 | apt-get install apache2 -y 81 | a2ensite default-ssl 82 | a2enmod ssl 83 | vm_hostname="$(curl -H "Metadata-Flavor:Google" \ 84 | http://169.254.169.254/computeMetadata/v1/instance/name)" 85 | echo "Page served from: $vm_hostname" | \ 86 | tee /var/www/html/index.html 87 | systemctl restart apache2' 88 | 89 | gcloud compute instance-groups managed create lb-backend-group \ 90 | --template=lb-backend-template --size=2 --zone $ZONE 91 | 92 | gcloud compute firewall-rules create fw-allow-health-check \ 93 | --network=default \ 94 | --action=allow \ 95 | --direction=ingress \ 96 | --source-ranges=130.211.0.0/22,35.191.0.0/16 \ 97 | --target-tags=allow-health-check \ 98 | --rules=tcp:80 99 | 100 | 101 | gcloud compute addresses create lb-ipv4-1 \ 102 | --ip-version=IPV4 \ 103 | --global 104 | 105 | gcloud compute addresses describe lb-ipv4-1 \ 106 | --format="get(address)" \ 107 | --global 108 | 109 | gcloud compute health-checks create http http-basic-check \ 110 | --port 80 111 | 112 | gcloud compute backend-services create web-backend-service \ 113 | --protocol=HTTP \ 114 | --port-name=http \ 115 | --health-checks=http-basic-check \ 116 | --global 117 | 118 | gcloud compute backend-services add-backend web-backend-service \ 119 | --instance-group=lb-backend-group \ 120 | --instance-group-zone=$ZONE \ 121 | --global 122 | 123 | gcloud compute url-maps create web-map-http \ 124 | --default-service web-backend-service 125 | 126 | gcloud compute target-http-proxies create http-lb-proxy \ 127 | --url-map web-map-http 128 | 129 | gcloud compute forwarding-rules create http-content-rule \ 130 | --address=lb-ipv4-1\ 131 | --global \ 132 | --target-http-proxy=http-lb-proxy \ 133 | --ports=80 134 | -------------------------------------------------------------------------------- /Importing Data to a Firestore Database: -------------------------------------------------------------------------------- 1 | // Run this in cloudshell // 2 | 3 | gcloud auth list 4 | git clone https://github.com/rosera/pet-theory 5 | cd pet-theory/lab01 6 | npm install @google-cloud/firestore 7 | npm install @google-cloud/logging 8 | updated_content=$(cat < { 29 | var docRef = db.collection("customers").doc(record.email); 30 | batch.set(docRef, record); 31 | if ((i + 1) % 500 === 0) { 32 | console.log(\`Writing record \${i + 1}\`); 33 | batchCommits.push(batch.commit()); 34 | batch = 
db.batch(); 35 | } 36 | }); 37 | batchCommits.push(batch.commit()); 38 | return Promise.all(batchCommits); 39 | } 40 | function writeToDatabase(records) { 41 | records.forEach((record, i) => { 42 | console.log(\`ID: \${record.id} Email: \${record.email} Name: \${record.name} Phone: \${record.phone}\`); 43 | }); 44 | return ; 45 | } 46 | async function importCsv(csvFileName) { 47 | const fileContents = await readFile(csvFileName, "utf8"); 48 | const records = await parse(fileContents, { columns: true }); 49 | try { 50 | await writeToFirestore(records); 51 | } catch (e) { 52 | console.error(e); 53 | process.exit(1); 54 | } 55 | console.log(\`Wrote \${records.length} records\`); 56 | success_message = \`Success: importTestData - Wrote \${records.length} records\`; 57 | const entry = log.entry( 58 | { resource: resource }, 59 | { message: \`\${success_message}\` } 60 | ); 61 | log.write([entry]); 62 | } 63 | importCsv(process.argv[2]).catch(e => console.error(e)); 64 | EOF 65 | ) 66 | echo "$updated_content" > importTestData.js 67 | npm install faker@5.5.3 68 | updated_contents=$(cat < createTestData.js 118 | gcloud firestore databases create --location=us-east1 119 | node createTestData 1000 120 | node importTestData customers_1000.csv 121 | node createTestData 20000 122 | node importTestData customers_20000.csv 123 | -------------------------------------------------------------------------------- /Create an Internal Load Balancer.md: -------------------------------------------------------------------------------- 1 | # Create an Internal Load Balancer 2 | 3 | ## Run in cloudshell 4 | 5 | ### Zone from task 2 >`Create the managed instance groups`> step 3 6 | ```cmd 7 | export ZONE= 8 | ``` 9 | 10 | ```cmd 11 | export REGION=${ZONE::-2} 12 | last_char="${ZONE: -1}" 13 | if [ "$last_char" == "a" ]; then 14 | export NZONE="${ZONE%?}b" 15 | elif [ "$last_char" == "b" ]; then 16 | export NZONE="${ZONE%?}c" 17 | elif [ "$last_char" == "c" ]; then 18 | export NZONE="${ZONE%?}b" 19 | elif [ "$last_char" == "d" ]; then 20 | export NZONE="${ZONE%?}b" 21 | fi 22 | gcloud compute firewall-rules create app-allow-http --network my-internal-app --action allow --direction INGRESS --target-tags lb-backend --source-ranges 0.0.0.0/0 --rules tcp:80 23 | gcloud compute firewall-rules create app-allow-health-check --network default --action allow --direction INGRESS --target-tags lb-backend --source-ranges 130.211.0.0/22,35.191.0.0/16 --rules tcp 24 | gcloud compute instance-templates create instance-template-1 --machine-type=e2-medium --network=my-internal-app --region $REGION --subnet=subnet-a --tags=lb-backend --metadata=startup-script-url=gs://cloud-training/gcpnet/ilb/startup.sh 25 | gcloud compute instance-templates create instance-template-2 --machine-type=e2-medium --network=my-internal-app --region $REGION --subnet=subnet-b --tags=lb-backend --metadata=startup-script-url=gs://cloud-training/gcpnet/ilb/startup.sh 26 | gcloud compute instance-groups managed create instance-group-1 --base-instance-name=instance-group-1 --template=instance-template-1 --zone=$ZONE --size=1 27 | gcloud compute instance-groups managed set-autoscaling instance-group-1 --zone=$ZONE --cool-down-period=45 --max-num-replicas=5 --min-num-replicas=1 --target-cpu-utilization=0.8 28 | gcloud compute instance-groups managed create instance-group-2 --base-instance-name=instance-group-2 --template=instance-template-2 --zone=$NZONE --size=1 29 | gcloud compute instance-groups managed set-autoscaling instance-group-2 --zone=$NZONE 
--cool-down-period=45 --max-num-replicas=5 --min-num-replicas=1 --target-cpu-utilization=0.8 30 | gcloud compute instances create utility-vm --zone=$ZONE --machine-type=e2-micro --image-family=debian-10 --image-project=debian-cloud --boot-disk-size=10GB --boot-disk-type=pd-standard --network=my-internal-app --subnet=subnet-a --private-network-ip=10.10.20.50 31 | gcloud compute health-checks create tcp my-ilb-health-check \ 32 | --description="Follow To Dev-tanay" \ 33 | --check-interval=5s \ 34 | --timeout=5s \ 35 | --unhealthy-threshold=2 \ 36 | --healthy-threshold=2 \ 37 | --port=80 \ 38 | --proxy-header=NONE 39 | TOKEN=$(gcloud auth application-default print-access-token) 40 | cat > 1.json < 2.json < 0 28 | AND fare_amount/trip_distance BETWEEN 2 29 | AND 10 30 | AND dropoff_datetime > pickup_datetime 31 | GROUP BY 32 | 1 33 | ORDER BY 34 | 1' 35 | bq query --nouse_legacy_sql ' 36 | #standardSQL 37 | WITH params AS ( 38 | SELECT 39 | 1 AS TRAIN, 40 | 2 AS EVAL 41 | ), 42 | daynames AS 43 | (SELECT ["Sun", "Mon", "Tues", "Wed", "Thurs", "Fri", "Sat"] AS daysofweek), 44 | taxitrips AS ( 45 | SELECT 46 | (tolls_amount + fare_amount) AS total_fare, 47 | daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek, 48 | EXTRACT(HOUR FROM pickup_datetime) AS hourofday, 49 | pickup_longitude AS pickuplon, 50 | pickup_latitude AS pickuplat, 51 | dropoff_longitude AS dropofflon, 52 | dropoff_latitude AS dropofflat, 53 | passenger_count AS passengers 54 | FROM 55 | `nyc-tlc.yellow.trips`, daynames, params 56 | WHERE 57 | trip_distance > 0 AND fare_amount > 0 58 | AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),1000) = params.TRAIN 59 | ) 60 | SELECT * 61 | FROM taxitrips' 62 | bq mk taxi 63 | bq query --nouse_legacy_sql ' 64 | CREATE or REPLACE MODEL taxi.taxifare_model 65 | OPTIONS 66 | (model_type="linear_reg", labels=["total_fare"]) AS 67 | WITH params AS ( 68 | SELECT 69 | 1 AS TRAIN, 70 | 2 AS EVAL 71 | ), 72 | daynames AS 73 | (SELECT ["Sun", "Mon", "Tues", "Wed", "Thurs", "Fri", "Sat"] AS daysofweek), 74 | taxitrips AS ( 75 | SELECT 76 | (tolls_amount + fare_amount) AS total_fare, 77 | daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek, 78 | EXTRACT(HOUR FROM pickup_datetime) AS hourofday, 79 | pickup_longitude AS pickuplon, 80 | pickup_latitude AS pickuplat, 81 | dropoff_longitude AS dropofflon, 82 | dropoff_latitude AS dropofflat, 83 | passenger_count AS passengers 84 | FROM 85 | `nyc-tlc.yellow.trips`, daynames, params 86 | WHERE 87 | trip_distance > 0 AND fare_amount > 0 88 | AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),1000) = params.TRAIN 89 | ) 90 | SELECT * 91 | FROM taxitrips' 92 | bq query --nouse_legacy_sql ' 93 | #standardSQL 94 | SELECT 95 | SQRT(mean_squared_error) AS rmse 96 | FROM 97 | ML.EVALUATE(MODEL taxi.taxifare_model, 98 | ( 99 | WITH params AS ( 100 | SELECT 101 | 1 AS TRAIN, 102 | 2 AS EVAL 103 | ), 104 | daynames AS 105 | (SELECT ["Sun", "Mon", "Tues", "Wed", "Thurs", "Fri", "Sat"] AS daysofweek), 106 | taxitrips AS ( 107 | SELECT 108 | (tolls_amount + fare_amount) AS total_fare, 109 | daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek, 110 | EXTRACT(HOUR FROM pickup_datetime) AS hourofday, 111 | pickup_longitude AS pickuplon, 112 | pickup_latitude AS pickuplat, 113 | dropoff_longitude AS dropofflon, 114 | dropoff_latitude AS dropofflat, 115 | passenger_count AS passengers 116 | FROM 117 | `nyc-tlc.yellow.trips`, daynames, params 118 | WHERE 119 | trip_distance > 0 AND 
fare_amount > 0 120 | AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),1000) = params.EVAL 121 | ) 122 | SELECT * 123 | FROM taxitrips 124 | ))' 125 | bq query --nouse_legacy_sql ' 126 | #standardSQL 127 | SELECT 128 | * 129 | FROM 130 | ml.PREDICT(MODEL `taxi.taxifare_model`, 131 | ( 132 | WITH params AS ( 133 | SELECT 134 | 1 AS TRAIN, 135 | 2 AS EVAL 136 | ), 137 | daynames AS 138 | (SELECT ["Sun", "Mon", "Tues", "Wed", "Thurs", "Fri", "Sat"] AS daysofweek), 139 | taxitrips AS ( 140 | SELECT 141 | (tolls_amount + fare_amount) AS total_fare, 142 | daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek, 143 | EXTRACT(HOUR FROM pickup_datetime) AS hourofday, 144 | pickup_longitude AS pickuplon, 145 | pickup_latitude AS pickuplat, 146 | dropoff_longitude AS dropofflon, 147 | dropoff_latitude AS dropofflat, 148 | passenger_count AS passengers 149 | FROM 150 | `nyc-tlc.yellow.trips`, daynames, params 151 | WHERE 152 | trip_distance > 0 AND fare_amount > 0 153 | AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),1000) = params.EVAL 154 | ) 155 | SELECT * 156 | FROM taxitrips 157 | ));' 158 | -------------------------------------------------------------------------------- /SAP Landing Zone: Plan and Deploy the SAP Network: -------------------------------------------------------------------------------- 1 | export Your_Project_Id="Insert your GCP Id here" 2 | 3 | gcloud compute networks create xall-vpc--vpc-01 \ 4 | --description="XYZ-all VPC network = Standard VPC network - 01" \ 5 | \ 6 | --project=$Your_Project_Id \ 7 | \ 8 | --subnet-mode=custom \ 9 | --bgp-routing-mode=global \ 10 | --mtu=1460 11 | 12 | 13 | gcloud compute networks subnets create xgl-subnet--cerps-bau-nonprd--be1-01 \ 14 | --description="XYZ-Global subnet = CERPS-BaU-NonProd - Belgium 1 (GCP) - 01" \ 15 | \ 16 | --project=$Your_Project_Id \ 17 | --network=xall-vpc--vpc-01 \ 18 | --region=us-central1 \ 19 | \ 20 | --range=10.1.1.0/24 \ 21 | --enable-private-ip-google-access \ 22 | --enable-flow-logs 23 | 24 | 25 | gcloud compute firewall-rules create xall-vpc--vpc-01--xall-fw--user--a--linux--v01 \ 26 | --description="xall-vpc--vpc-01 - XYZ-all firewall rule = User access - ALLOW standard linux access - version 01" \ 27 | \ 28 | --project=$Your_Project_Id \ 29 | --network=xall-vpc--vpc-01 \ 30 | \ 31 | --priority=1000 \ 32 | --direction=ingress \ 33 | --action=allow \ 34 | --target-tags=xall-vpc--vpc-01--xall-fw--user--a--linux--v01 \ 35 | --source-ranges=0.0.0.0/0 \ 36 | --rules=tcp:22,icmp 37 | 38 | 39 | gcloud compute firewall-rules create xall-vpc--vpc-01--xall-fw--user--a--windows--v01 \ 40 | --description="xall-vpc--vpc-01 - XYZ-all firewall rule = User access - ALLOW standard windows access - version 01" \ 41 | \ 42 | --project=$Your_Project_Id \ 43 | --network=xall-vpc--vpc-01 \ 44 | \ 45 | --priority=1000 \ 46 | --direction=ingress \ 47 | --action=allow \ 48 | --target-tags=xall-vpc--vpc-01--xall-fw--user--a--windows--v01 \ 49 | --source-ranges=0.0.0.0/0 \ 50 | --rules=tcp:3389,icmp 51 | 52 | 53 | gcloud compute firewall-rules create xall-vpc--vpc-01--xall-fw--user--a--sapgui--v01 \ 54 | --description="xall-vpc--vpc-01 - XYZ-all firewall rule = User access - ALLOW SAPGUI access - version 01" \ 55 | \ 56 | --project=$Your_Project_Id \ 57 | --network=xall-vpc--vpc-01 \ 58 | \ 59 | --priority=1000 \ 60 | --direction=ingress \ 61 | --action=allow \ 62 | --target-tags=xall-vpc--vpc-01--xall-fw--user--a--sapgui--v01 \ 63 | --source-ranges=0.0.0.0/0 \
--rules=tcp:3200-3299,tcp:3600-3699 65 | 66 | 67 | 68 | 69 | gcloud compute firewall-rules create xall-vpc--vpc-01--xall-fw--user--a--sap-fiori--v01 \ 70 | --description="xall-vpc--vpc-01 - XYZ-all firewall rule = User access - ALLOW SAP Fiori access - version 01" \ 71 | \ 72 | --project=$Your_Project_Id \ 73 | --network=xall-vpc--vpc-01 \ 74 | \ 75 | --priority=1000 \ 76 | --direction=ingress \ 77 | --action=allow \ 78 | --target-tags=xall-vpc--vpc-01--xall-fw--user--a--sap-fiori--v01 \ 79 | --source-ranges=0.0.0.0/0 \ 80 | --rules=tcp:80,tcp:8000-8099,tcp:443,tcp:4300-44300 81 | 82 | 83 | 84 | 85 | gcloud compute firewall-rules create xall-vpc--vpc-01--xgl-fw--cerps-bau-dev--a-env--v01 \ 86 | --description="xall-vpc--vpc-01 - XYZ-Global firewall rule = CERPS-BaU-Dev - ALLOW environment wide access - version 01" \ 87 | \ 88 | --project=$Your_Project_Id \ 89 | --network=xall-vpc--vpc-01 \ 90 | \ 91 | --priority=1000 \ 92 | --direction=ingress \ 93 | --action=allow \ 94 | --target-tags=xall-vpc--vpc-01--xgl-fw--cerps-bau-dev--a-env--v01 \ 95 | --source-tags=xall-vpc--vpc-01--xgl-fw--cerps-bau-dev--a-env--v01 \ 96 | --rules=tcp:3200-3299,tcp:3300-3399,tcp:4800-4899,tcp:80,tcp:8000-8099,tcp:443,tcp:44300-44399,tcp:3600-3699,tcp:8100-8199,tcp:44400-44499,tcp:50000-59999,tcp:30000-39999,tcp:4300-4399,tcp:40000-49999,tcp:1128-1129,tcp:5050,tcp:8000-8499,tcp:515,icmp 97 | 98 | 99 | 100 | gcloud compute firewall-rules create xall-vpc--vpc-01--xgl-fw--cerps-bau-dev--a-ds4--v01 \ 101 | --description="xall-vpc--vpc-01 - XYZ-Global firewall rule = CERPS-BaU-Dev - ALLOW SAP S4 (DS4) system wide access - version 01" \ 102 | \ 103 | --project=$Your_Project_Id \ 104 | --network=xall-vpc--vpc-01 \ 105 | \ 106 | --priority=1000 \ 107 | --direction=ingress \ 108 | --action=allow \ 109 | --target-tags=xall-vpc--vpc-01--xgl-fw--cerps-bau-dev--a-ds4--v01 \ 110 | --source-tags=xall-vpc--vpc-01--xgl-fw--cerps-bau-dev--a-ds4--v01 \ 111 | --rules=tcp,udp,icmp 112 | 113 | 114 | 115 | gcloud compute addresses create xgl-ip-address--cerps-bau-dev--dh1--d-cerpshana1 \ 116 | --description="XYZ-Global reserved IP address = CERPS-BaU-Dev - SAP HANA 1 (DH1) - d-cerpshana1" \ 117 | \ 118 | --project=$Your_Project_Id \ 119 | --region=us-central1 \ 120 | --subnet=xgl-subnet--cerps-bau-nonprd--be1-01 \ 121 | \ 122 | --addresses=10.1.1.100 123 | 124 | 125 | 126 | 127 | gcloud compute addresses create xgl-ip-address--cerps-bau-dev--ds4--d-cerpss4db \ 128 | --description="XYZ-Global reserved IP address = CERPS-BaU-Dev - SAP S4 (DS4) - d-cerpss4db" \ 129 | \ 130 | --project=$Your_Project_Id \ 131 | --region=us-central1 \ 132 | --subnet=xgl-subnet--cerps-bau-nonprd--be1-01 \ 133 | \ 134 | --addresses=10.1.1.101 135 | 136 | 137 | 138 | gcloud compute addresses create xgl-ip-address--cerps-bau-dev--ds4--d-cerpss4scs \ 139 | --description="XYZ-Global reserved IP address = CERPS-BaU-Dev - SAP S4 (DS4) - d-cerpss4scs" \ 140 | \ 141 | --project=$Your_Project_Id \ 142 | --region=us-central1 \ 143 | --subnet=xgl-subnet--cerps-bau-nonprd--be1-01 \ 144 | \ 145 | --addresses=10.1.1.102 146 | 147 | 148 | 149 | 150 | gcloud compute addresses create xgl-ip-address--cerps-bau-dev--ds4--d-cerpss4app1 \ 151 | --description="XYZ-Global reserved IP address = CERPS-BaU-Dev - SAP S4 (DS4) - d-cerpss4app1" \ 152 | \ 153 | --project=$Your_Project_Id \ 154 | --region=us-central1 \ 155 | --subnet=xgl-subnet--cerps-bau-nonprd--be1-01 \ 156 | \ 157 | --addresses=10.1.1.103 158 | 159 | 160 | 161 | 162 | gcloud compute routers create 
xall-vpc--vpc-01--xall-router--shared-nat--de1-01 \ 163 | --description="xall-vpc--vpc-01 - XYZ-Global router = Shared NAT - Germany 1 (GCP) - 01" \ 164 | \ 165 | --project=$Your_Project_Id \ 166 | --region=us-central1 \ 167 | --network=xall-vpc--vpc-01 168 | 169 | 170 | 171 | 172 | gcloud compute routers nats create xall-vpc--vpc-01--xall-nat-gw--shared-nat--de1-01 \ 173 | \ 174 | --project=$Your_Project_Id \ 175 | --region=us-central1 \ 176 | --router=xall-vpc--vpc-01--xall-router--shared-nat--de1-01 \ 177 | \ 178 | --auto-allocate-nat-external-ips \ 179 | --nat-all-subnet-ip-ranges \ 180 | --enable-logging 181 | -------------------------------------------------------------------------------- /Predict Visitor Purchases with a Classification Model in BigQuery ML: -------------------------------------------------------------------------------- 1 | // Run this in cloudshell // 2 | 3 | bq mk ecommerce 4 | bq query --nouse_legacy_sql ' 5 | CREATE OR REPLACE MODEL `ecommerce.classification_model` 6 | OPTIONS 7 | ( 8 | model_type="logistic_reg", 9 | labels = ["will_buy_on_return_visit"] 10 | ) 11 | AS 12 | #standardSQL 13 | SELECT 14 | * EXCEPT(fullVisitorId) 15 | FROM 16 | # features 17 | (SELECT 18 | fullVisitorId, 19 | IFNULL(totals.bounces, 0) AS bounces, 20 | IFNULL(totals.timeOnSite, 0) AS time_on_site 21 | FROM 22 | `data-to-insights.ecommerce.web_analytics` 23 | WHERE 24 | totals.newVisits = 1 25 | AND date BETWEEN "20160801" AND "20170430") # train on first 9 months 26 | JOIN 27 | (SELECT 28 | fullvisitorid, 29 | IF(COUNTIF(totals.transactions > 0 AND totals.newVisits IS NULL) > 0, 1, 0) AS will_buy_on_return_visit 30 | FROM 31 | `data-to-insights.ecommerce.web_analytics` 32 | GROUP BY fullvisitorid) 33 | USING (fullVisitorId);' 34 | bq query --nouse_legacy_sql ' 35 | SELECT 36 | roc_auc, 37 | CASE 38 | WHEN roc_auc > .9 THEN "good" 39 | WHEN roc_auc > .8 THEN "fair" 40 | WHEN roc_auc > .7 THEN "decent" 41 | WHEN roc_auc > .6 THEN "not great" 42 | ELSE "poor" END AS model_quality 43 | FROM 44 | ML.EVALUATE(MODEL ecommerce.classification_model, ( 45 | SELECT 46 | * EXCEPT(fullVisitorId) 47 | FROM 48 | # features 49 | (SELECT 50 | fullVisitorId, 51 | IFNULL(totals.bounces, 0) AS bounces, 52 | IFNULL(totals.timeOnSite, 0) AS time_on_site 53 | FROM 54 | `data-to-insights.ecommerce.web_analytics` 55 | WHERE 56 | totals.newVisits = 1 57 | AND date BETWEEN "20170501" AND "20170630") # eval on 2 months 58 | JOIN 59 | (SELECT 60 | fullvisitorid, 61 | IF(COUNTIF(totals.transactions > 0 AND totals.newVisits IS NULL) > 0, 1, 0) AS will_buy_on_return_visit 62 | FROM 63 | `data-to-insights.ecommerce.web_analytics` 64 | GROUP BY fullvisitorid) 65 | USING (fullVisitorId) 66 | ));' 67 | bq query --nouse_legacy_sql ' 68 | CREATE OR REPLACE MODEL `ecommerce.classification_model_2` 69 | OPTIONS 70 | (model_type="logistic_reg", labels = ["will_buy_on_return_visit"]) AS 71 | WITH all_visitor_stats AS ( 72 | SELECT 73 | fullvisitorid, 74 | IF(COUNTIF(totals.transactions > 0 AND totals.newVisits IS NULL) > 0, 1, 0) AS will_buy_on_return_visit 75 | FROM `data-to-insights.ecommerce.web_analytics` 76 | GROUP BY fullvisitorid 77 | ) 78 | # add in new features 79 | SELECT * EXCEPT(unique_session_id) FROM ( 80 | SELECT 81 | CONCAT(fullvisitorid, CAST(visitId AS STRING)) AS unique_session_id, 82 | # labels 83 | will_buy_on_return_visit, 84 | MAX(CAST(h.eCommerceAction.action_type AS INT64)) AS latest_ecommerce_progress, 85 | # behavior on the site 86 | IFNULL(totals.bounces, 0) AS bounces, 87 | 
IFNULL(totals.timeOnSite, 0) AS time_on_site, 88 | IFNULL(totals.pageviews, 0) AS pageviews, 89 | # where the visitor came from 90 | trafficSource.source, 91 | trafficSource.medium, 92 | channelGrouping, 93 | # mobile or desktop 94 | device.deviceCategory, 95 | # geographic 96 | IFNULL(geoNetwork.country, "") AS country 97 | FROM `data-to-insights.ecommerce.web_analytics`, 98 | UNNEST(hits) AS h 99 | JOIN all_visitor_stats USING(fullvisitorid) 100 | WHERE 1=1 101 | # only predict for new visits 102 | AND totals.newVisits = 1 103 | AND date BETWEEN "20160801" AND "20170430" # train 9 months 104 | GROUP BY 105 | unique_session_id, 106 | will_buy_on_return_visit, 107 | bounces, 108 | time_on_site, 109 | totals.pageviews, 110 | trafficSource.source, 111 | trafficSource.medium, 112 | channelGrouping, 113 | device.deviceCategory, 114 | country 115 | );' 116 | bq query --nouse_legacy_sql ' 117 | #standardSQL 118 | SELECT 119 | roc_auc, 120 | CASE 121 | WHEN roc_auc > .9 THEN "good" 122 | WHEN roc_auc > .8 THEN "fair" 123 | WHEN roc_auc > .7 THEN "decent" 124 | WHEN roc_auc > .6 THEN "not great" 125 | ELSE "poor" END AS model_quality 126 | FROM 127 | ML.EVALUATE(MODEL ecommerce.classification_model_2, ( 128 | WITH all_visitor_stats AS ( 129 | SELECT 130 | fullvisitorid, 131 | IF(COUNTIF(totals.transactions > 0 AND totals.newVisits IS NULL) > 0, 1, 0) AS will_buy_on_return_visit 132 | FROM `data-to-insights.ecommerce.web_analytics` 133 | GROUP BY fullvisitorid 134 | ) 135 | # add in new features 136 | SELECT * EXCEPT(unique_session_id) FROM ( 137 | SELECT 138 | CONCAT(fullvisitorid, CAST(visitId AS STRING)) AS unique_session_id, 139 | # labels 140 | will_buy_on_return_visit, 141 | MAX(CAST(h.eCommerceAction.action_type AS INT64)) AS latest_ecommerce_progress, 142 | # behavior on the site 143 | IFNULL(totals.bounces, 0) AS bounces, 144 | IFNULL(totals.timeOnSite, 0) AS time_on_site, 145 | totals.pageviews, 146 | # where the visitor came from 147 | trafficSource.source, 148 | trafficSource.medium, 149 | channelGrouping, 150 | # mobile or desktop 151 | device.deviceCategory, 152 | # geographic 153 | IFNULL(geoNetwork.country, "") AS country 154 | FROM `data-to-insights.ecommerce.web_analytics`, 155 | UNNEST(hits) AS h 156 | JOIN all_visitor_stats USING(fullvisitorid) 157 | WHERE 1=1 158 | # only predict for new visits 159 | AND totals.newVisits = 1 160 | AND date BETWEEN "20170501" AND "20170630" # eval 2 months 161 | GROUP BY 162 | unique_session_id, 163 | will_buy_on_return_visit, 164 | bounces, 165 | time_on_site, 166 | totals.pageviews, 167 | trafficSource.source, 168 | trafficSource.medium, 169 | channelGrouping, 170 | device.deviceCategory, 171 | country 172 | ) 173 | ));' 174 | bq query --nouse_legacy_sql ' 175 | SELECT 176 | * 177 | FROM 178 | ml.PREDICT(MODEL `ecommerce.classification_model_2`, 179 | ( 180 | WITH all_visitor_stats AS ( 181 | SELECT 182 | fullvisitorid, 183 | IF(COUNTIF(totals.transactions > 0 AND totals.newVisits IS NULL) > 0, 1, 0) AS will_buy_on_return_visit 184 | FROM `data-to-insights.ecommerce.web_analytics` 185 | GROUP BY fullvisitorid 186 | ) 187 | SELECT 188 | CONCAT(fullvisitorid, "-",CAST(visitId AS STRING)) AS unique_session_id, 189 | # labels 190 | will_buy_on_return_visit, 191 | MAX(CAST(h.eCommerceAction.action_type AS INT64)) AS latest_ecommerce_progress, 192 | # behavior on the site 193 | IFNULL(totals.bounces, 0) AS bounces, 194 | IFNULL(totals.timeOnSite, 0) AS time_on_site, 195 | totals.pageviews, 196 | # where the visitor came from 197 | 
trafficSource.source, 198 | trafficSource.medium, 199 | channelGrouping, 200 | # mobile or desktop 201 | device.deviceCategory, 202 | # geographic 203 | IFNULL(geoNetwork.country, "") AS country 204 | FROM `data-to-insights.ecommerce.web_analytics`, 205 | UNNEST(hits) AS h 206 | JOIN all_visitor_stats USING(fullvisitorid) 207 | WHERE 208 | # only predict for new visits 209 | totals.newVisits = 1 210 | AND date BETWEEN "20170701" AND "20170801" # test 1 month 211 | GROUP BY 212 | unique_session_id, 213 | will_buy_on_return_visit, 214 | bounces, 215 | time_on_site, 216 | totals.pageviews, 217 | trafficSource.source, 218 | trafficSource.medium, 219 | channelGrouping, 220 | device.deviceCategory, 221 | country 222 | ) 223 | ) 224 | ORDER BY 225 | predicted_will_buy_on_return_visit DESC;' 226 | -------------------------------------------------------------------------------- /Vertex AI PaLM API: Qwik Start.md: -------------------------------------------------------------------------------- 1 | # Vertex AI PaLM API: Qwik Start 2 | 3 | ### APIs & Services > Search `Vertex AI API` > Enable 4 | 5 | ## Run in Cloudshell 6 | ```cmd 7 | MODEL_ID="text-bison" 8 | PROJECT_ID=$DEVSHELL_PROJECT_ID 9 | curl \ 10 | -X POST \ 11 | -H "Authorization: Bearer $(gcloud auth print-access-token)" \ 12 | -H "Content-Type: application/json" \ 13 | https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \ 14 | $'{ 15 | "instances": [ 16 | { "prompt": "Provide a summary with about two sentences for the following article: 17 | The efficient-market hypothesis (EMH) is a hypothesis in financial \ 18 | economics that states that asset prices reflect all available \ 19 | information. A direct implication is that it is impossible to \ 20 | \\"beat the market\\" consistently on a risk-adjusted basis since market \ 21 | prices should only react to new information. Because the EMH is \ 22 | formulated in terms of risk adjustment, it only makes testable \ 23 | predictions when coupled with a particular model of risk. As a \ 24 | result, research in financial economics since at least the 1990s has \ 25 | focused on market anomalies, that is, deviations from specific \ 26 | models of risk. The idea that financial market returns are difficult \ 27 | to predict goes back to Bachelier, Mandelbrot, and Samuelson, but \ 28 | is closely associated with Eugene Fama, in part due to his \ 29 | influential 1970 review of the theoretical and empirical research. \ 30 | The EMH provides the basic logic for modern risk-based theories of \ 31 | asset prices, and frameworks such as consumption-based asset pricing \ 32 | and intermediary asset pricing can be thought of as the combination \ 33 | of a model of risk with the EMH. Many decades of empirical research \ 34 | on return predictability has found mixed evidence. Research in the \ 35 | 1950s and 1960s often found a lack of predictability (e.g. Ball and \ 36 | Brown 1968; Fama, Fisher, Jensen, and Roll 1969), yet the \ 37 | 1980s-2000s saw an explosion of discovered return predictors (e.g. \ 38 | Rosenberg, Reid, and Lanstein 1985; Campbell and Shiller 1988; \ 39 | Jegadeesh and Titman 1993). 
MODEL_ID="text-bison"
PROJECT_ID=$DEVSHELL_PROJECT_ID
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \
$'{
"instances": [
{ "prompt": "Give me ten interview questions for the role of program manager."}
],
"parameters": {
"temperature": 0.2,
"maxOutputTokens": 1024,
"topK": 40,
"topP": 0.8
}
}'
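# The next two requests repeat the same two prompts, but redirect each response
# to a .txt file so it can be copied into the Cloud Storage bucket with gsutil
# at the end of this block.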
MODEL_ID="text-bison"
PROJECT_ID=$DEVSHELL_PROJECT_ID
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \
$'{
"instances": [
{ "prompt": "Provide a summary with about two sentences for the following article:
The efficient-market hypothesis (EMH) is a hypothesis in financial \
economics that states that asset prices reflect all available \
information. A direct implication is that it is impossible to \
\\"beat the market\\" consistently on a risk-adjusted basis since market \
prices should only react to new information. Because the EMH is \
formulated in terms of risk adjustment, it only makes testable \
predictions when coupled with a particular model of risk. As a \
result, research in financial economics since at least the 1990s has \
focused on market anomalies, that is, deviations from specific \
models of risk. The idea that financial market returns are difficult \
to predict goes back to Bachelier, Mandelbrot, and Samuelson, but \
is closely associated with Eugene Fama, in part due to his \
influential 1970 review of the theoretical and empirical research. \
The EMH provides the basic logic for modern risk-based theories of \
asset prices, and frameworks such as consumption-based asset pricing \
and intermediary asset pricing can be thought of as the combination \
of a model of risk with the EMH. Many decades of empirical research \
on return predictability has found mixed evidence. Research in the \
1950s and 1960s often found a lack of predictability (e.g. Ball and \
Brown 1968; Fama, Fisher, Jensen, and Roll 1969), yet the \
1980s-2000s saw an explosion of discovered return predictors (e.g. \
Rosenberg, Reid, and Lanstein 1985; Campbell and Shiller 1988; \
Jegadeesh and Titman 1993). Since the 2010s, studies have often \
found that return predictability has become more elusive, as \
predictability fails to work out-of-sample (Goyal and Welch 2008), \
or has been weakened by advances in trading technology and investor \
learning (Chordia, Subrahmanyam, and Tong 2014; McLean and Pontiff \
2016; Martineau 2021).
Summary:
"}
],
"parameters": {
"temperature": 0.2,
"maxOutputTokens": 256,
"topK": 40,
"topP": 0.95
}
}' > summarization_prompt_example.txt
MODEL_ID="text-bison"
PROJECT_ID=$DEVSHELL_PROJECT_ID
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \
$'{
"instances": [
{ "prompt": "Give me ten interview questions for the role of program manager."}
],
"parameters": {
"temperature": 0.2,
"maxOutputTokens": 1024,
"topK": 40,
"topP": 0.8
}
}' > ideation_prompt_example.txt
export PROJECT_ID=$(gcloud config get-value project)
gsutil cp *.txt gs://$PROJECT_ID
MODEL_ID="chat-bison"
PROJECT_ID=$DEVSHELL_PROJECT_ID
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \
'{
"instances": [{
"context": "My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.",
"examples": [ {
"input": {"content": "Who do you work for?"},
"output": {"content": "I work for Ned."}
},
{
"input": {"content": "What do I like?"},
"output": {"content": "Ned likes watching movies."}
}],
"messages": [
{
"author": "user",
"content": "Are my favorite movies based on a book series?"
}]
}],
"parameters": {
"temperature": 0.3,
"maxDecodeSteps": 200,
"topP": 0.8,
"topK": 40
}
}'
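# A chat-bison request has three parts: "context" sets up the assistant's
# persona, "examples" supplies few-shot input/output pairs, and "messages"
# carries the conversation turns. The same request is sent again below, this
# time saving the response to a file.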
MODEL_ID="chat-bison"
PROJECT_ID=$DEVSHELL_PROJECT_ID
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \
'{
"instances": [{
"context": "My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.",
"examples": [ {
"input": {"content": "Who do you work for?"},
"output": {"content": "I work for Ned."}
},
{
"input": {"content": "What do I like?"},
"output": {"content": "Ned likes watching movies."}
}],
"messages": [
{
"author": "user",
"content": "Are my favorite movies based on a book series?"
}]
}],
"parameters": {
"temperature": 0.3,
"maxDecodeSteps": 200,
"topP": 0.8,
"topK": 40
}
}' > sample_chat_prompts.txt
export PROJECT_ID=$(gcloud config get-value project)
gsutil cp sample_chat_prompts.txt gs://$PROJECT_ID
```
--------------------------------------------------------------------------------
/HTTP Load Balancer with Cloud Armor.md:
--------------------------------------------------------------------------------
# HTTP Load Balancer with Cloud Armor

## Run in cloudshell

### Get REGION 1 AND REGION 2 from Task 2
```cmd
export REGION1=
```

```cmd
export REGION2=
```

### Get VM ZONE from TASK 4 Step 3
```cmd
export VM_ZONE=
```

#### After this, there's one more command block at the end
```cmd
gcloud compute --project=$DEVSHELL_PROJECT_ID firewall-rules create default-allow-http \
--direction=INGRESS \
--priority=1000 \
--network=default \
--action=ALLOW \
--rules=tcp:80 \
--source-ranges=0.0.0.0/0 \
--target-tags=http-server
gcloud compute firewall-rules create default-allow-health-check \
--project=$DEVSHELL_PROJECT_ID \
--direction=INGRESS \
--priority=1000 \
--network=default \
--action=ALLOW \
--rules=tcp \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=http-server
gcloud compute instance-templates create $REGION1-template \
--project=$DEVSHELL_PROJECT_ID \
--machine-type=e2-micro \
--network-interface=network-tier=PREMIUM,subnet=default \
--metadata=startup-script-url=gs://cloud-training/gcpnet/httplb/startup.sh,enable-oslogin=true \
--maintenance-policy=MIGRATE \
--provisioning-model=STANDARD \
--region=$REGION1 \
--tags=http-server,https-server \
--create-disk=auto-delete=yes,boot=yes,device-name=$REGION1-template,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230629,mode=rw,size=10,type=pd-balanced \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
gcloud compute instance-templates create $REGION2-template \
--project=$DEVSHELL_PROJECT_ID \
--machine-type=e2-micro \
--network-interface=network-tier=PREMIUM,subnet=default \
--metadata=startup-script-url=gs://cloud-training/gcpnet/httplb/startup.sh,enable-oslogin=true \
--maintenance-policy=MIGRATE \
--provisioning-model=STANDARD \
--region=$REGION2 \
--tags=http-server,https-server \
--create-disk=auto-delete=yes,boot=yes,device-name=$REGION2-template,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230629,mode=rw,size=10,type=pd-balanced \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
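# The regional MIGs below are built from the two instance templates above and
# autoscale between 1 and 2 replicas at a target CPU utilization of 80%.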
gcloud beta compute instance-groups managed create $REGION1-mig \
--project=$DEVSHELL_PROJECT_ID \
--base-instance-name=$REGION1-mig \
--size=1 \
--template=$REGION1-template \
--region=$REGION1 \
--target-distribution-shape=EVEN \
--instance-redistribution-type=PROACTIVE \
--list-managed-instances-results=PAGELESS \
--no-force-update-on-repair
gcloud beta compute instance-groups managed set-autoscaling $REGION1-mig \
--project=$DEVSHELL_PROJECT_ID \
--region=$REGION1 \
--cool-down-period=45 \
--max-num-replicas=2 \
--min-num-replicas=1 \
--mode=on \
--target-cpu-utilization=0.8
gcloud beta compute instance-groups managed create $REGION2-mig \
--project=$DEVSHELL_PROJECT_ID \
--base-instance-name=$REGION2-mig \
--size=1 \
--template=$REGION2-template \
--region=$REGION2 \
--target-distribution-shape=EVEN \
--instance-redistribution-type=PROACTIVE \
--list-managed-instances-results=PAGELESS \
--no-force-update-on-repair
gcloud beta compute instance-groups managed set-autoscaling $REGION2-mig \
--project=$DEVSHELL_PROJECT_ID \
--region=$REGION2 \
--cool-down-period=45 \
--max-num-replicas=2 \
--min-num-replicas=1 \
--mode=on \
--target-cpu-utilization=0.8
gcloud compute health-checks create tcp http-health-check \
--description="Follow To Dev-tanay" \
--check-interval=5s \
--timeout=5s \
--unhealthy-threshold=2 \
--healthy-threshold=2 \
--port=80 \
--proxy-header=NONE
TOKEN=$(gcloud auth application-default print-access-token)
# (truncated in the source: the file continued by writing five REST request
# bodies, 1.json through 5.json, apparently via `cat > N.json <<EOF` heredocs
# whose contents were not preserved)
```
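
### Remaining steps (sketch)
The source cuts off before the REST calls that would have consumed `1.json`–`5.json`, i.e. the backend service, URL map, frontend, and Cloud Armor policy. The block below is a hedged `gcloud` sketch of those steps, not the lab's original REST requests: every resource name in it (`http-backend`, `http-lb`, `http-lb-proxy`, `http-lb-rule`, `denylist-siege`) and the `$SIEGE_IP` variable are assumptions, so rename them to whatever your lab instructions require.
```cmd
# ASSUMED NAMES throughout; the original file used REST requests instead.
# Expose port 80 under the named port "http" on both MIGs:
gcloud compute instance-groups set-named-ports $REGION1-mig \
--named-ports=http:80 --region=$REGION1
gcloud compute instance-groups set-named-ports $REGION2-mig \
--named-ports=http:80 --region=$REGION2
# Global backend service wired to the health check created above:
gcloud compute backend-services create http-backend \
--protocol=HTTP \
--port-name=http \
--health-checks=http-health-check \
--global
gcloud compute backend-services add-backend http-backend \
--instance-group=$REGION1-mig \
--instance-group-region=$REGION1 \
--balancing-mode=RATE \
--max-rate-per-instance=50 \
--global
gcloud compute backend-services add-backend http-backend \
--instance-group=$REGION2-mig \
--instance-group-region=$REGION2 \
--balancing-mode=UTILIZATION \
--max-utilization=0.8 \
--global
# URL map + proxy + global forwarding rule = the HTTP frontend on port 80:
gcloud compute url-maps create http-lb --default-service=http-backend
gcloud compute target-http-proxies create http-lb-proxy --url-map=http-lb
gcloud compute forwarding-rules create http-lb-rule \
--global \
--target-http-proxy=http-lb-proxy \
--ports=80
# Cloud Armor: denylist the siege VM's external IP and attach the policy:
export SIEGE_IP=    # paste the siege VM's external IP here
gcloud compute security-policies create denylist-siege
gcloud compute security-policies rules create 1000 \
--security-policy=denylist-siege \
--src-ip-ranges=$SIEGE_IP \
--action=deny-403
gcloud compute backend-services update http-backend \
--security-policy=denylist-siege \
--global
```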