├── .gitignore
├── Readme.md
├── concourse
│   └── pipeline
│       ├── optional-private-repo-overlay.yaml
│       ├── secrets.yaml
│       └── spring-petclinic.yaml
├── demo-script.md
├── docs
│   ├── 00-tkg-lab-foundation-gitops.md
│   ├── 00-tkg-lab-foundation.md
│   ├── 01-environment-config-gitops.md
│   ├── 01-environment-config.md
│   ├── 02-tbs-base-install.md
│   ├── 03-tbs-custom-dependencies.md
│   ├── 04-petclinic-workspace.md
│   ├── 05-petclinic-tbs-namespace.md
│   ├── 06-petclinic-db.md
│   ├── 07-petclinic-repos.md
│   ├── 08-petclinic-pipeline-gitops.md
│   ├── 08-petclinic-pipeline.md
│   ├── 09-petclinic-dashboard.md
│   ├── 10-tbs-stack-update.md
│   ├── 11-load-generation.md
│   ├── custom-dashboard.png
│   ├── demo.md
│   ├── locust-test-running.png
│   ├── locust-test-setup.png
│   ├── one-off.md
│   ├── petclinic-db.png
│   ├── petclinic-rebase.png
│   └── tanzu-e2e-cicd.png
├── local-config
│   └── .gitkeep
├── scripts
│   ├── set-pipeline.sh
│   ├── tbs-in-depth-gitops.sh
│   ├── tbs-in-depth-take-2.sh
│   ├── tbs-in-depth.sh
│   ├── tbs-manual-dotnetcore.sh
│   └── tkg-day-2-ops.sh
├── tbs
│   └── demo-cluster-builder-order.yaml
└── traffic-generator
    └── locustfile.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | local-config/*
3 | !local-config/.gitkeep
4 | __pycache__
5 | *.secret
--------------------------------------------------------------------------------
/Readme.md:
--------------------------------------------------------------------------------
1 | # End to End Tanzu Demo
2 |
3 | ## Overview
4 |
5 | This repo provides an end-to-end Tanzu experience showcasing developer and platform operator perspectives. The end state enables a demo flow that starts with an update to the Spring Pet Clinic Spring Boot application and ends with the updated application in production. Along the way, the CI process compiles, tests, and packages the application, then triggers Tanzu Build Service to containerize the app and push it to Harbor. Harbor scans the image for vulnerabilities. The CD process identifies the new image, publishes a deploy event to Tanzu Observability, and then applies the updated configuration for the app in Tanzu Kubernetes Grid. The demo also highlights the DevOps experience of provisioning the MySQL database used by Spring Pet Clinic via Kubeapps and the Tanzu Application Catalog, as well as monitoring the app-specific Tanzu Observability dashboard with deploy events visible on the charts. Finally, we showcase the platform operator experience of setting up daily backups for the Spring Pet Clinic workspace in Tanzu Mission Control and then accessing Kubernetes cluster dashboards in Tanzu Observability.
6 |
7 | 
8 |
9 | Big shoutout to my peers who created the demo that served as the foundation for this repo: [https://github.com/Pivotal-Field-Engineering/tanzu-gitops](https://github.com/Pivotal-Field-Engineering/tanzu-gitops)
10 |
11 | ## How to Get Everything Setup
12 |
13 | 0. [Setup Foundational Lab Environment and Bonus Labs](docs/00-tkg-lab-foundation.md)
14 | 1. [Setup Environment Specific Params Yaml](docs/01-environment-config.md)
15 | 2. [Install TBS And OOTB Dependencies](docs/02-tbs-base-install.md)
16 | 3. [Setup TBS Demo Stack and Cluster Builder](docs/03-tbs-custom-dependencies.md)
17 | 4. [Setup Workspace and Pet Clinic Namespace](docs/04-petclinic-workspace.md)
18 | 5. [Setup Spring Pet Clinic TBS Project Namespace](docs/05-petclinic-tbs-namespace.md)
19 | 6. [Deploy Spring Pet Clinic MySql Database](docs/06-petclinic-db.md)
20 | 7. [Setup spring-petclinic code and config repositories](docs/07-petclinic-repos.md)
21 | 8. [Create Concourse Pipeline for Spring Pet Clinic](docs/08-petclinic-pipeline.md)
22 | 9. [Create TO Wavefront Dashboard](docs/09-petclinic-dashboard.md)
23 | 10. [Update TBS Stack to Remediate CVEs](docs/10-tbs-stack-update.md)
24 | 11. [Setup Load Generation for More Interesting Dashboards](docs/11-load-generation.md)
25 |
26 | ## Execute the Demo
27 |
28 | With the above in place, you are now set to deliver an awesome short demo showcasing Tanzu!
29 |
30 | [Execute the Demo](docs/demo.md)
31 |
32 | ## Key Capabilities Explained
33 |
34 | 1. Custom Events in Tanzu Observability
35 | 2. Concourse / Tanzu Build Service Integration
36 | 3. TBS Rebase Resolves Vulnerabilities
37 |
38 | ## One-off Activities
39 |
40 | [One-off Operations](docs/one-off.md)
41 |
--------------------------------------------------------------------------------
/concourse/pipeline/optional-private-repo-overlay.yaml:
--------------------------------------------------------------------------------
1 | #@ load("@ytt:overlay", "overlay")
2 | #@ load("@ytt:data", "data")
3 |
4 | #! Only apply this overlay if gitUsername is present in the data values file, otherwise NOOP
5 | #@ if/end hasattr(data.values.petclinic, "gitUsername") :
6 | #@overlay/match by=overlay.all, expects="1+"
7 | ---
8 | #@overlay/match missing_ok=True
9 | resources:
10 | #@overlay/match by=overlay.map_key("name")
11 | - name: source-code
12 |   source:
13 |     #@overlay/match missing_ok=True
14 |     username: ((petclinic.gitUsername))
15 |     #@overlay/match missing_ok=True
16 |     password: ((petclinic.gitPassword))
17 | #@overlay/match by=overlay.map_key("name")
18 | - name: config-repo
19 |   source:
20 |     #@overlay/match missing_ok=True
21 |     username: ((petclinic.gitUsername))
22 |     #@overlay/match missing_ok=True
23 |     password: ((petclinic.gitPassword))
24 |
--------------------------------------------------------------------------------
/concourse/pipeline/secrets.yaml:
--------------------------------------------------------------------------------
1 |
2 | #@ load("@ytt:data", "data")
3 | #@ load("@ytt:base64", "base64")
4 | apiVersion: v1
5 | kind: Secret
6 | metadata:
7 |   name: common-secrets
8 |   namespace: concourse-main
9 | type: Opaque
10 | data:
11 |   harborDomain: #@ base64.encode(data.values.commonSecrets.harborDomain)
12 |   harborUser: #@ base64.encode(data.values.commonSecrets.harborUser)
13 |   harborPassword: #@ base64.encode(data.values.commonSecrets.harborPassword)
14 |   kubeconfigBuildServer: #@ base64.encode(data.values.commonSecrets.kubeconfigBuildServer)
15 |   kubeconfigAppServer: #@ base64.encode(data.values.commonSecrets.kubeconfigAppServer)
16 |   concourseHelperImage: #@ base64.encode(data.values.commonSecrets.concourseHelperImage)
17 | ---
18 | apiVersion: v1
19 | kind: Secret
20 | metadata:
21 |   name: petclinic
22 |   namespace: concourse-main
23 | type: Opaque
24 | data:
25 |   host: #@ base64.encode(data.values.petclinic.host)
26 |   image: #@ base64.encode(data.values.petclinic.image)
27 |   tbsNamespace: #@ base64.encode(data.values.petclinic.tbs.namespace)
28 |   wavefrontApplicationName: #@ base64.encode(data.values.petclinic.wavefront.applicationName)
29 |   wavefrontUri: #@ base64.encode(data.values.petclinic.wavefront.uri)
30 |   wavefrontApiToken: #@ base64.encode(data.values.petclinic.wavefront.apiToken)
31 |   wavefrontDeployEventName: #@ base64.encode(data.values.petclinic.wavefront.deployEventName)
32 |   configRepo: #@ base64.encode(data.values.petclinic.configRepo)
33 |   codeRepo: #@ base64.encode(data.values.petclinic.codeRepo)
34 |   #@ if hasattr(data.values.petclinic, "gitUsername") :
35 |   gitUsername: #@ base64.encode(data.values.petclinic.gitUsername)
36 |   gitPassword: #@ base64.encode(data.values.petclinic.gitPassword)
37 |   #@ end
38 |
--------------------------------------------------------------------------------
/concourse/pipeline/spring-petclinic.yaml:
--------------------------------------------------------------------------------
1 | resources:
2 | - name: source-code
3 |   type: git
4 |   source:
5 |     uri: ((petclinic.codeRepo))
6 |     branch: main
7 | - name: config-repo
8 |   type: git
9 |   source:
10 |     uri: ((petclinic.configRepo))
11 |     branch: master
12 |     paths:
13 |     - "k8s/**"
14 | - name: spring-petclinic-image
15 |   type: docker-image
16 |   source:
17 |     repository: ((petclinic.image))
18 |     tag: latest
19 |
20 | jobs:
21 | - name: continuous-integration
22 |   plan:
23 |   - get: source-code
24 |     trigger: true
25 |   - task: compile-and-test
26 |     output_mapping:
27 |       target: target
28 |     config:
29 |       platform: linux
30 |       image_resource:
31 |         type: docker-image
32 |         source:
33 |           repository: adoptopenjdk
34 |           tag: 11-jdk-hotspot
35 |       inputs:
36 |       - name: source-code
37 |       outputs:
38 |       - name: target
39 |       caches:
40 |       - path: source-code/maven
41 |       run:
42 |         path: /bin/bash
43 |         args:
44 |         - -c
45 |         - |
46 |           cd source-code
47 |           if [[ -d $PWD/maven && ! -d $HOME/.m2 ]]; then
48 |             ln -s "$PWD/maven" "$HOME/.m2"
49 |           fi
50 |           # Added -DskipTests and -Dcheckstyle.skip to speed up task for demo purpose
51 |           # They should not be included in a proper test pipeline
52 |           ./mvnw clean package -DskipTests -Dcheckstyle.skip
53 |           cp target/*.jar ../target
54 |   - task: update-build-service-image
55 |     params:
56 |       KUBECONFIG_JSON: ((common-secrets.kubeconfigBuildServer))
57 |     input_mapping:
58 |       target: target
59 |     config:
60 |       platform: linux
61 |       image_resource:
62 |         type: docker-image
63 |         source:
64 |           repository: ((common-secrets.concourseHelperImage))
65 |           tag: latest
66 |       inputs:
67 |       - name: target
68 |       run:
69 |         path: /bin/bash
70 |         args:
71 |         - -c
72 |         - |
73 |           docker login ((common-secrets.harborDomain)) -u '((common-secrets.harborUser))' -p '((common-secrets.harborPassword))'
74 |           echo $KUBECONFIG_JSON>kubeconfig.json
75 |           export KUBECONFIG=kubeconfig.json
76 |           set +e
77 |           kp image list -n ((petclinic.tbsNamespace)) | grep "spring-petclinic"
78 |           exists=$?
79 |           set -e
80 |           if [ $exists -eq 0 ]; then
81 |             kp image patch spring-petclinic \
82 |               --namespace ((petclinic.tbsNamespace)) \
83 |               --wait \
84 |               --local-path target/*.jar
85 |           else
86 |             kp image create spring-petclinic \
87 |               --tag ((petclinic.image)) \
88 |               --cluster-builder demo-cluster-builder \
89 |               --namespace ((petclinic.tbsNamespace)) \
90 |               --wait \
91 |               --local-path target/*.jar
92 |           fi
93 |
94 |
95 | - name: continuous-deployment
96 |   public: true
97 |   serial: true
98 |   plan:
99 |   - get: spring-petclinic-image
100 |     trigger: true
101 |   - get: config-repo
102 |   - task: create-wavefront-event
103 |     params:
104 |       WAVEFRONT_API_TOKEN: ((petclinic.wavefrontApiToken))
105 |       WAVEFRONT_URL: ((petclinic.wavefrontUri))
106 |       WAVEFRONT_DEPLOY_EVENT_NAME: ((petclinic.wavefrontDeployEventName))
107 |     config:
108 |       platform: linux
109 |       image_resource:
110 |         type: docker-image
111 |         source:
112 |           repository: ((common-secrets.concourseHelperImage))
113 |           tag: latest
114 |       run:
115 |         path: /bin/bash
116 |         args:
117 |         - -c
118 |         - |
119 |           set -euo pipefail
120 |
121 |           START_TIME=$(date +%s000)
122 |           sleep 1
123 |           END_TIME=$(date +%s000)
124 |
125 |           curl \
126 |             -X POST \
127 |             --header "Content-Type: application/json" \
128 |             --header "Accept: application/json" \
129 |             --header "Authorization: Bearer ${WAVEFRONT_API_TOKEN}" \
130 |             -d "{
131 |               \"name\": \"${WAVEFRONT_DEPLOY_EVENT_NAME}\",
132 |               \"annotations\": {
133 |                 \"severity\": \"info\",
134 |                 \"type\": \"image deploy\",
135 |                 \"details\": \"new spring-petclinic image deployed\"
136 |               },
137 |               \"startTime\": "${START_TIME}",
138 |               \"endTime\": "${END_TIME}"
139 |             }" "${WAVEFRONT_URL}/api/v2/event"
140 |
141 |   - task: deploy-app
142 |     params:
143 |       KUBECONFIG_JSON: ((common-secrets.kubeconfigAppServer))
144 |     config:
145 |       platform: linux
146 |       image_resource:
147 |         type: docker-image
148 |         source:
149 |           repository: ((common-secrets.concourseHelperImage))
150 |           tag: latest
151 |       inputs:
152 |       - name: config-repo
153 |       - name: spring-petclinic-image
154 |       run:
155 |         path: /bin/bash
156 |         args:
157 |         - -c
158 |         - |
159 |           export DIGEST=$(cat spring-petclinic-image/digest)
160 |
161 |           # TODO Need to setup the kubeconfig
162 |           echo $KUBECONFIG_JSON>kubeconfig.json
163 |           export KUBECONFIG=kubeconfig.json
164 |
165 |           cat > config-repo/k8s/values.yml << EOF
166 |           #@data/values
167 |           ---
168 |           petclinic:
169 |             host: ((petclinic.host))
170 |             image: ((petclinic.image))@$DIGEST
171 |             wavefront:
172 |               applicationName: ((petclinic.wavefrontApplicationName))
173 |               uri: ((petclinic.wavefrontUri))
174 |               apiToken: ((petclinic.wavefrontApiToken))
175 |           EOF
176 |           cat config-repo/k8s/values.yml
177 |
178 |           ytt -f config-repo/k8s --ignore-unknown-comments | kapp deploy -n petclinic -a petclinic -y -f -
179 |
--------------------------------------------------------------------------------
/demo-script.md:
--------------------------------------------------------------------------------
1 | Make code update, commit and push
2 | > Check out the pipeline doing the unit test and trigger image update
3 | > Check out harbor
4 | > Check out the pipeline doing the update to config repo and wavefront event
5 | > Check out the updated app
6 | > Check out the observability app dashboard and event
7 |
8 | A couple of things to point out: how did I get the MySQL database?
9 | > Check out kubeapps
10 |
11 |
12 | ## Operator Perspective
13 |
14 | ### Create Cluster Policies
15 | > Setup Security Policy (type strict, warning only, apply to label acme.io/policy/security-default)
16 | > Setup Quota Policy (warning only, apply to label acme.io/policy/quota-default)
17 |
18 | ## Create BizOps Workspace
19 | > TMC
20 | > Creating a workspace (bizops)
21 | > Set Access Policy edit access to the bizops workspace to the bizops-devs group
22 | > Set Image registry policy on the bizops workspace (will break TAC)
23 | > Harbor
24 | > Create a harbor project for the bizops LOB
25 | > Create a robot account for bizops project
26 | > TBS
27 | > Create a build namespace (build-service-bizops)
28 | > Create a registry secret for (build-service-bizops)
29 | > Concourse
30 | > Create a team
31 |
32 | ## Create Petclinic Space for BizOps Workspace
33 | > Create a petclinic namespace (label it acme.io/policy/quota-default, acme.io/policy/security-default)
34 | > Nightly backup for the petclinic namespace
35 |
36 | ## As an Operator
37 | - What I've Done
38 | - Integrated with corporate membership directory
39 | - Enabled easily routable apps with wildcard DNS
40 | - Provided secure access with wildcard SSL
41 | - Set up a workspace with access and image registry policies
42 | - Added a petclinic namespace
43 | - Added nightly backup for petclinic
44 | - What I will do now
45 | - Have a look at the cluster
46 | - Create a new namespace for petadoptions
47 | - Add nightly backup for petadoptions namespace
48 | - Check on the health of the cluster
49 |
--------------------------------------------------------------------------------
/docs/00-tkg-lab-foundation-gitops.md:
--------------------------------------------------------------------------------
1 | # TKG Lab Foundation for GitOps
2 |
3 | Execute the `Foundational Lab Setup` from [tkg-lab](https://github.com/Pivotal-Field-Engineering/tkg-lab).
4 |
5 | Then exercise the following bonus labs...
6 |
7 | 1. [Deploy ArgoCD to Shared Services Cluster](https://github.com/Pivotal-Field-Engineering/tkg-lab/blob/master/docs/bonus-labs/argocd-kustomize.md)
8 | 2. [Deploy Kubeapps to Shared Services Cluster](https://github.com/Pivotal-Field-Engineering/tkg-lab/blob/master/docs/bonus-labs/kubeapps.md)
9 |
10 | >Note: Previously the Harbor deployment was a bonus lab, but with recent updates to tkg-lab, Harbor is now part of the base install, so it is not specifically referenced here.
11 |
12 | ## Go to Next Step
13 |
14 | [Setup Environment Specific Params Yaml](01-environment-config-gitops.md)
15 |
--------------------------------------------------------------------------------
/docs/00-tkg-lab-foundation.md:
--------------------------------------------------------------------------------
1 | # TKG Lab Foundation
2 |
3 | If using the GitOps (ArgoCD) approach, go to [TKG Lab Foundation for GitOps](00-tkg-lab-foundation-gitops.md) instead.
4 |
5 | Execute the `Foundational Lab Setup` from [tkg-lab](https://github.com/Pivotal-Field-Engineering/tkg-lab).
6 |
7 | >Note: I tested this with 6 worker nodes in the Shared Services cluster and 4 worker nodes in the Workload cluster. Worker nodes were 2 core, 8 GB.
8 |
9 | Then exercise the following bonus labs...
10 |
11 | 1. [Deploy Concourse to Shared Services Cluster](https://github.com/Pivotal-Field-Engineering/tkg-lab/blob/master/docs/bonus-labs/concourse.md)
12 | 2. [Deploy Kubeapps to Workload Cluster](https://github.com/Pivotal-Field-Engineering/tkg-lab/blob/master/docs/bonus-labs/kubeapps.md)
13 |
14 | ## Go to Next Step
15 |
16 | [Setup Environment Specific Params Yaml](01-environment-config.md)
17 |
--------------------------------------------------------------------------------
/docs/01-environment-config-gitops.md:
--------------------------------------------------------------------------------
1 | # Setup Environment Specific Params Yaml
2 |
3 | Set up a local params file containing all the environment-specific values for the e2e demo. This is distinct from the params file you used for the `tkg-lab`. Below is a redacted version of the file I used; I placed it in `local-config/values.yaml`, which is ignored via `.gitignore`.
4 |
5 | ```yaml
6 | #@data/values
7 | ---
8 | cd: argocd # choose concourse or argocd for Continuous Delivery
9 | petclinic:
10 |   host: petclinic.ironislands.tkg-vsphere-lab.winterfell.live # Ingress host for your app
11 |   image: harbor.stormsend.tkg-vsphere-lab.winterfell.live/petclinic/spring-petclinic # image, includes your harbor domain and project
12 |   configRepo: https://github.com/doddatpivotal/spring-petclinic-config.git # your k8s config repo, you could just use mine
13 |   codeRepo: https://github.com/doddatpivotal/spring-petclinic.git # your source code repo
14 |   configRepoLocalPath: /Users/jaguilar/Code/spring-petclinic-config # the location of your config repo cloned locally
15 |   wavefront:
16 |     applicationName: YOUR_PREFIX-petclinic # application name, which appears in the Tanzu Observability Application Status dashboard. I used dpfeffer-petclinic
17 |     uri: https://surf.wavefront.com # Your Tanzu Observability URI
18 |     apiToken: REDACTED # Your Tanzu Observability API token
19 |     deployEventName: YOUR_EVENT_NAME # Mine is dpfeffer-spring-petclinic-deploy, we don't want to conflict here
20 |   tmc:
21 |     workload-cluster: YOUR_WORKLOAD_CLUSTER_NAME_IN_TMC # Mine is dpfeffer-ironislands-vsphere
22 |     shared-services-cluster: YOUR_SHARED_SERVICES_CLUSTER_NAME_IN_TMC # Mine is dpfeffer-stormsend-vsphere
23 |     workspace: YOUR_WORKSPACE_NAME_IN_TMC # Mine is dpfeffer-petclinic
24 |   tbs:
25 |     namespace: tbs-project-petclinic
26 |   argocd:
27 |     applicationName: petclinic # application name, which appears in the ArgoCD dashboard
28 |     server: https://192.168.14.182:6443 # server ArgoCD targets to deploy the application
29 |     path: argocd/cd # path that ArgoCD will sync in the config repo
30 | tbs:
31 |   harborRepository: harbor.stormsend.tkg-vsphere-lab.winterfell.live/tbs/build-service # where you want tbs images to be placed
32 | commonSecrets:
33 |   harborDomain: harbor.stormsend.tkg-vsphere-lab.winterfell.live
34 |   harborUser: REDACTED # Recommend creating a robot account in the harbor project you are pushing petclinic images to
35 |   harborPassword: REDACTED
36 | ```
37 |
38 | ## Export Location of Params YAML
39 |
40 | ```bash
41 | # You can change the location to where you stored your file
42 | export PARAMS_YAML=local-config/values.yaml
43 | ```
44 |
45 | ## Go to Next Step
46 |
47 | [Install TBS](02-tbs-base-install.md)
48 |
--------------------------------------------------------------------------------
/docs/01-environment-config.md:
--------------------------------------------------------------------------------
1 | # Setup Environment Specific Params Yaml
2 |
3 | Set up a local params file containing all the environment-specific values for the e2e demo. This is distinct from the params file you used for the `tkg-lab`. Below is a redacted version of the file I used; I placed it in `local-config/values.yaml`, which is ignored via `.gitignore`.
4 |
5 | >Note: for the kubeconfig references below, I used the following approach to get single line json...
6 |
7 | ```bash
8 | # Set your context to the build server context, then...
9 | kubectl config view --flatten --minify | yq e - --tojson | jq -c .
10 | # Set your context to the app server context, then...
11 | kubectl config view --flatten --minify | yq e - --tojson | jq -c .
12 | ```
13 |
14 | ```yaml
15 | #@data/values
16 | ---
17 | petclinic:
18 |   host: petclinic.ironislands.tkg-vsphere-lab.winterfell.live # Ingress host for your app
19 |   image: harbor.stormsend.tkg-vsphere-lab.winterfell.live/petclinic/spring-petclinic # image, includes your harbor domain and project
20 |   configRepo: https://github.com/doddatpivotal/spring-petclinic-config.git # your k8s config repo, you could just use mine
21 |   codeRepo: https://github.com/doddatpivotal/spring-petclinic.git # your source code repo
22 |   # Uncomment the following lines if you have private repos with http access. Assumes the same un/pw for both. This
23 |   # will configure the concourse resources appropriately
24 |   # gitUsername: REDACTED
25 |   # gitPassword: REDACTED
26 |   wavefront:
27 |     applicationName: YOUR_PREFIX-petclinic # application name, which appears in the Tanzu Observability Application Status dashboard. I used dpfeffer-petclinic
28 |     uri: https://surf.wavefront.com # Your Tanzu Observability URI
29 |     apiToken: REDACTED # Your Tanzu Observability API token
30 |     deployEventName: YOUR_EVENT_NAME # Mine is dpfeffer-spring-petclinic-deploy, we don't want to conflict here
31 |   tmc:
32 |     workload-cluster: YOUR_WORKLOAD_CLUSTER_NAME_IN_TMC # Mine is dpfeffer-ironislands-vsphere
33 |     shared-services-cluster: YOUR_SHARED_SERVICES_CLUSTER_NAME_IN_TMC # Mine is dpfeffer-stormsend-vsphere
34 |     workspace: YOUR_WORKSPACE_NAME_IN_TMC # Mine is dpfeffer-petclinic
35 |   tbs:
36 |     namespace: tbs-project-petclinic
37 | tbs:
38 |   harborRepository: harbor.stormsend.tkg-vsphere-lab.winterfell.live/tbs/build-service # where you want tbs images to be placed
39 |   harborUser: robot$tbs # set this to the harbor user account you want tbs to use to update build service images. Recommend creating a robot account, but could be the admin account
40 |   harborPassword: REDACTED # Password associated with the above account
41 | commonSecrets:
42 |   harborDomain: harbor.stormsend.tkg-vsphere-lab.winterfell.live
43 |   harborUser: REDACTED # Recommend creating a robot account in the harbor project you are pushing petclinic images to
44 |   harborPassword: REDACTED
45 |   kubeconfigBuildServer: 'REDACTED' # This should be a minified json version of your kubeconfig with context set to the cluster where your Tanzu Build Service is deployed. That should be the shared services cluster.
46 |   kubeconfigAppServer: 'REDACTED' # This should be a minified json version of your kubeconfig with context set to the cluster where Pet Clinic is deployed. That should be the workload cluster.
47 |   concourseHelperImage: harbor.stormsend.tkg-vsphere-lab.winterfell.live/concourse/concourse-helper # Your concourse helper image, explained in 08-petclinic-pipeline.md
48 |   concourseAlias: stormsend # Your concourse alias
49 |   concourseUri: https://concourse.stormsend.tkg-vsphere-lab.winterfell.live # Your concourse URI
50 |   concourseUser: REDACTED
51 |   concoursePassword: REDACTED
52 | ```
53 |
54 | ## Export Location of Params YAML
55 |
56 | ```bash
57 | # You can change the location to where you stored your file
58 | export PARAMS_YAML=local-config/values.yaml
59 | ```
60 |
61 | ## [Optional] Automate population of the kubeconfig in your params.yaml
62 |
63 | ```bash
64 | # Set your context to the build server (Shared Services Cluster), then...
65 | export CONFIG=$(kubectl config view --flatten --minify | yq e - --tojson | jq -c .)
66 | yq e -i '.commonSecrets.kubeconfigBuildServer = strenv(CONFIG)' $PARAMS_YAML
67 |
68 | # Change your context to the app server (Workload Cluster), then...
69 | export CONFIG=$(kubectl config view --flatten --minify | yq e - --tojson | jq -c .)
70 | yq e -i '.commonSecrets.kubeconfigAppServer = strenv(CONFIG)' $PARAMS_YAML
71 |
72 | # Add back the document separator that yq removes
73 | sed -i -e '2i\
74 | ---
75 | ' "$PARAMS_YAML"
76 | rm -f "$PARAMS_YAML-e"
77 |
78 | # Clear out the environment variable
79 | unset CONFIG
80 | ```
81 |
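As a quick, optional sanity check, you can confirm the params file still parses and that both kubeconfig fields were populated (this just reuses the yq v4 syntax from above):

```bash
# Both commands should print a non-zero length
yq e '.commonSecrets.kubeconfigBuildServer | length' $PARAMS_YAML
yq e '.commonSecrets.kubeconfigAppServer | length' $PARAMS_YAML
```
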
82 | ## Go to Next Step
83 |
84 | [Install TBS](02-tbs-base-install.md)
85 |
--------------------------------------------------------------------------------
/docs/02-tbs-base-install.md:
--------------------------------------------------------------------------------
1 | # Install TBS And Dependencies
2 |
3 | 1. Create a project in harbor for Tanzu Build Service. I created it as `tbs` and set it as public.
4 |
5 | 2. Create a robot account inside of your harbor project. This can be done in the UI by accessing the project and then selecting the `Robot Accounts` tab. Update `params.yaml` config `.tbs.harborUser` and `.tbs.harborPassword` with the robot account credentials.
6 |
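If you prefer to script that update, something like the following works with the yq v4 syntax used elsewhere in this lab (the robot account name and password here are illustrative placeholders):

```bash
# Record the TBS robot account credentials in your params file
export TBS_ROBOT_USER='robot$tbs'
export TBS_ROBOT_PASSWORD='REDACTED'
yq e -i '.tbs.harborUser = strenv(TBS_ROBOT_USER)' $PARAMS_YAML
yq e -i '.tbs.harborPassword = strenv(TBS_ROBOT_PASSWORD)' $PARAMS_YAML
```
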
7 | 3. Set environment variables for use in the following sections
8 |
9 | ```bash
10 | export TBS_REPOSITORY=$(yq e .tbs.harborRepository $PARAMS_YAML)
11 | # For backward compatibility with the old params.yaml format, if you don't have .tbs.harborUser set, we will set it from .commonSecrets.harborUser. Likewise for the password
12 | export REGISTRY_USER=$(yq e '.tbs.harborUser // .commonSecrets.harborUser' $PARAMS_YAML)
13 | export REGISTRY_PASSWORD=$(yq e '.tbs.harborPassword //.commonSecrets.harborPassword' $PARAMS_YAML)
14 | export TBS_VERSION=1.4.3
15 | ```
16 |
17 | 4. Download Tanzu Build Service and Dependencies from Tanzu Network
18 |
19 | >Note: The demo includes exercising a rebase that resolves base image vulnerabilities. In order to do this, we want to import `version 100.0.255` and `version 100.0.286` of the TBS dependencies, so we can see CVEs resolved in the run image used in the demo.
20 |
21 | ```bash
22 | # Pulled the following from pivnet info icon for tbs 1.4.3
23 | pivnet download-product-files --product-slug='tbs-dependencies' --release-version='100.0.255' --product-file-id=1142215 -d ~/Downloads
24 | pivnet download-product-files --product-slug='tbs-dependencies' --release-version='100.0.286' --product-file-id=1188028 -d ~/Downloads
25 | ```
26 |
27 | 5. Push the TBS images into your local Harbor registry
28 |
29 | >Note: Ensure you have logged into harbor registry with your local docker daemon.
30 |
31 | >Note: Ensure you have also logged into Tanzu Registry (docker login registry.tanzu.vmware.com) with your Tanzu Network credentials.
32 |
33 | >Note: Make sure you have the right Carvel tools versions. This combination worked for me: imgpkg v0.17.0 or anything above v0.18.0 (v0.18.0 itself fails!), kbld v0.31.0, ytt v0.35.1
34 |
35 | ```bash
36 | imgpkg copy -b "registry.tanzu.vmware.com/build-service/bundle:$TBS_VERSION" --to-repo $TBS_REPOSITORY
37 | ```
38 |
39 | Pull the Tanzu Build Service bundle locally:
40 | ```bash
41 | rm -rf /tmp/bundle
42 | imgpkg pull -b "$TBS_REPOSITORY:$TBS_VERSION" -o /tmp/bundle
43 | ```
44 |
45 | 6. Deploy TBS components into your shared services cluster
46 |
47 | >Note: Ensure you have switched your local kube context to your shared services cluster
48 |
49 | >Note: If you specified a new robot account as harbor user, then make sure the account exists and is a member of the $TBS_REPOSITORY
50 |
51 | ```bash
52 | ytt -f /tmp/bundle/config/ \
53 | -v kp_default_repository="$TBS_REPOSITORY" \
54 | -v kp_default_repository_username="$REGISTRY_USER" \
55 | -v kp_default_repository_password="$REGISTRY_PASSWORD" \
56 | --data-value-yaml pull_from_kp_default_repo=true \
57 | | kbld -f /tmp/bundle/.imgpkg/images.yml -f- \
58 | | kapp deploy -a tanzu-build-service -f- -y
59 | ```
60 |
61 | Import the downloaded dependency descriptors:
62 | ```bash
63 | kp import -f ~/Downloads/descriptor-100.0.286.yaml
64 | kp import -f ~/Downloads/descriptor-100.0.255.yaml
65 | ```
66 |
67 | ## Validate
68 |
69 | Verify that the cluster builders are all ready.
70 |
71 | ```bash
72 | kp clusterbuilder list
73 | ```
74 |
75 | ## Go to Next Step
76 |
77 | [Setup TBS Demo Stack and Cluster Builder](03-tbs-custom-dependencies.md)
78 |
--------------------------------------------------------------------------------
/docs/03-tbs-custom-dependencies.md:
--------------------------------------------------------------------------------
1 | # Setup TBS Demo Stack and Cluster Builder
2 |
3 | 1. Set environment variables for use in the following sections
4 |
5 | ```bash
6 | export TBS_REPOSITORY=$(yq e .tbs.harborRepository $PARAMS_YAML)
7 | ```
8 |
9 | 2. Setup custom demo stack and cluster builder
10 |
11 | In order to reliably demonstrate the TBS rebase capability that resolves CVEs identified by Harbor, we create a custom ClusterStack within TBS using a TBS dependency version with known high vulnerabilities, `version 100.0.255`. We downloaded and imported that version earlier; now we will create the ClusterStack `demo-stack` and ClusterBuilder `demo-cluster-builder` using the vulnerable build and run images from that version's `full` stack.
12 |
13 | > Note: You can open the downloaded ~/Downloads/descriptor-100.0.255.yaml and see the image sha256 references used below.
14 |
15 | ```bash
16 | # make it match with 100.0.255
17 | kp clusterstack create demo-stack \
18 | --build-image $TBS_REPOSITORY@sha256:ae63b7c588f3dd728d2d423dd26790af784decc3d3947eaff2696b8fd30bcfb0 \
19 | --run-image $TBS_REPOSITORY@sha256:ec48e083ab3d47df591de02423440c5de7f8af2e4ec6b4263af476812c4e3f85
20 |
21 | kp clusterbuilder create demo-cluster-builder \
22 | --tag $TBS_REPOSITORY:demo-cluster-builder \
23 | --order tbs/demo-cluster-builder-order.yaml \
24 | --stack demo-stack \
25 | --store default
26 | ```
27 |
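The file passed via `--order` above, `tbs/demo-cluster-builder-order.yaml`, uses the standard kp order format: a list of buildpack groups. A minimal sketch follows; the buildpack IDs here are illustrative, and the file in this repo is the authoritative version.

```yaml
- group:
  - id: tanzu-buildpacks/java
- group:
  - id: tanzu-buildpacks/nodejs
```
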
28 | ## Validate
29 |
30 | Verify that the cluster builders are all ready.
31 |
32 | ```bash
33 | kp clusterbuilder list
34 | ```
35 |
36 | ## Go to Next Step
37 |
38 | [Setup Workspace and Pet Clinic Namespace](04-petclinic-workspace.md)
39 |
--------------------------------------------------------------------------------
/docs/04-petclinic-workspace.md:
--------------------------------------------------------------------------------
1 | # Setup Workspace and Pet Clinic Namespaces
2 |
3 | We want to deploy Spring Pet Clinic to your tkg-lab workload cluster. Let's create a workspace in Tanzu Mission Control and then a namespace within the workload cluster. Additionally, we want to create a namespace on our shared services cluster for Tanzu Build Service to create pet clinic images.
4 |
5 | 1. Set environment variables for use in the following sections
6 |
7 | ```bash
8 | export TMC_WORKLOAD_CLUSTER=$(yq e .petclinic.tmc.workload-cluster $PARAMS_YAML)
9 | export TMC_SHARED_SERVICES_CLUSTER=$(yq e .petclinic.tmc.shared-services-cluster $PARAMS_YAML)
10 | export TMC_PETCLINIC_WORKSPACE=$(yq e .petclinic.tmc.workspace $PARAMS_YAML)
11 | ```
12 |
13 | 2. Use the Tanzu Mission Control cli, `tmc`, to create a workspace and namespace for the Spring Pet Clinic app.
14 |
15 | ```bash
16 | tmc workspace create -n $TMC_PETCLINIC_WORKSPACE -d "Workspace for Spring Pet Clinic"
17 | tmc cluster namespace create -c $TMC_WORKLOAD_CLUSTER -n petclinic -d "Namespace for Spring Pet Clinic" -k $TMC_PETCLINIC_WORKSPACE -m attached -p attached
18 | tmc cluster namespace create -c $TMC_SHARED_SERVICES_CLUSTER -n tbs-project-petclinic -d "Namespace for TBS to build Spring Pet Clinic images" -k $TMC_PETCLINIC_WORKSPACE -m attached -p attached
19 | ```
20 |
21 | ## Go to Next Step
22 |
23 | [Setup Spring Pet Clinic TBS Project Namespace](05-petclinic-tbs-namespace.md)
--------------------------------------------------------------------------------
/docs/05-petclinic-tbs-namespace.md:
--------------------------------------------------------------------------------
1 | # Setup Spring Pet Clinic TBS Project Namespace
2 |
3 | In order to manage Spring Pet Clinic images, we have to do some setup. In the previous step, we created the namespace for Tanzu Build Service to do its magic. Now we have to create a project in Harbor for TBS to publish images to, and also create a secret in that namespace with the Harbor credentials required to push the images.
4 |
5 | 1. Create a project in Harbor for Spring Pet Clinic images. I created it as `petclinic` and set it as public.
6 |
7 | 2. Configure the Harbor project to scan images immediately on push. Access project, and choose `Configuration` tab. Check `Automatically scan images on push`
8 |
9 | 3. Create a robot account for build service to use when pushing images to harbor. This can be done in the UI by accessing the project and then selecting the `Robot Accounts` tab. Store these as the credentials in your `params.yaml` file as `.commonSecrets.harborUser` and `.commonSecrets.harborPassword`.
10 |
11 | 4. Set environment variables for use in the following sections
12 |
13 | ```bash
14 | export HARBOR_DOMAIN=$(yq e .commonSecrets.harborDomain $PARAMS_YAML)
15 | export REGISTRY_USER=$(yq e .commonSecrets.harborUser $PARAMS_YAML)
16 | export REGISTRY_PASSWORD=$(yq e .commonSecrets.harborPassword $PARAMS_YAML)
17 | export TBS_NAMESPACE=$(yq e .petclinic.tbs.namespace $PARAMS_YAML)
18 | ```
19 |
20 | 5. Create the secret holding the Harbor credentials
21 |
22 | >Note: Ensure you have switched your local kube context to your shared services cluster
23 |
24 | ```bash
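# kp reads the registry password from the REGISTRY_PASSWORD environment variable exported above (otherwise it prompts for it)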
25 | kp secret create harbor-creds \
26 | --registry $HARBOR_DOMAIN \
27 | --registry-user $REGISTRY_USER \
28 | --namespace $TBS_NAMESPACE
29 | ```
30 |
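You can verify the secret was created in the project namespace:

```bash
kp secret list -n $TBS_NAMESPACE
```
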
31 | ## Go to Next Step
32 |
33 | [Deploy Spring Pet Clinic MySql Database](06-petclinic-db.md)
--------------------------------------------------------------------------------
/docs/06-petclinic-db.md:
--------------------------------------------------------------------------------
1 | # Deploy Spring Pet Clinic MySql Database
2 |
3 | 1. Access Kubeapps on Workload Cluster
4 |
5 | 2. Switch the context to the namespace "petclinic"
6 |
7 | 3. Click Catalog, search for MySQL, and then click MySQL
8 |
9 | 4. Deploy MySql and name it `petclinic-db`
10 |
11 | Use values
12 |
13 | ```yaml
14 | auth:
15 |   database: petclinic
16 |   password: petclinic
17 |   username: petclinic
18 |   rootPassword: petclinic
19 | ```
20 |
21 | 5. Wait for App to be ready
22 |
23 | 
24 |
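If you prefer the command line, a quick way to confirm the database is up (this assumes the release name `petclinic-db` used above):

```bash
kubectl get pods -n petclinic -l app.kubernetes.io/instance=petclinic-db
```
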
25 | ## [Alternate Method] Install directly using helm
26 |
27 | ```bash
28 | helm repo add tac https://charts.trials.tac.bitnami.com/demo
29 | helm install petclinic-db tac/mysql -n petclinic --set auth.database=petclinic,auth.password=petclinic,auth.username=petclinic,auth.rootPassword=petclinic
30 | ```
31 |
32 | ## Go to Next Step
33 |
34 | [Setup Spring Pet Clinic code and config repositories](07-petclinic-repos.md)
--------------------------------------------------------------------------------
/docs/07-petclinic-repos.md:
--------------------------------------------------------------------------------
1 | # Setup Spring Pet Clinic code and config repositories
2 |
3 | You will need to setup two repositories.
4 |
5 | ## Spring Pet Clinic Source Code Repo
6 |
7 | The sample application is based upon the canonical Spring Pet Clinic app, with adjustments made to add Tanzu Observability integration.
8 |
9 | You can fork my repo [https://github.com/doddatpivotal/spring-petclinic](https://github.com/doddatpivotal/spring-petclinic), which may grow stale, and be done with it.
10 |
11 | Or you could fork the Spring Project's repo [https://github.com/spring-projects/spring-petclinic](https://github.com/spring-projects/spring-petclinic) and make the appropriate adjustments.
12 |
13 | 1. Add the following to the Spring Pet Clinic `pom.xml`
14 |
15 | ```xml
16 | <properties>
17 |   ...
18 |   <wavefront.version>2.1.0-SNAPSHOT</wavefront.version>
19 |   <spring-cloud.version>2020.0.0-M6</spring-cloud.version>
20 |   ...
21 | </properties>
22 |
23 | <dependencyManagement>
24 |   <dependencies>
25 |     <dependency>
26 |       <groupId>org.springframework.cloud</groupId>
27 |       <artifactId>spring-cloud-dependencies</artifactId>
28 |       <version>${spring-cloud.version}</version>
29 |       <type>pom</type>
30 |       <scope>import</scope>
31 |     </dependency>
32 |     <dependency>
33 |       <groupId>com.wavefront</groupId>
34 |       <artifactId>wavefront-spring-boot-bom</artifactId>
35 |       <version>${wavefront.version}</version>
36 |       <type>pom</type>
37 |       <scope>import</scope>
38 |     </dependency>
39 |   </dependencies>
40 | </dependencyManagement>
41 |
42 | <dependencies>
43 |   ...
44 |   <dependency>
45 |     <groupId>com.wavefront</groupId>
46 |     <artifactId>wavefront-spring-boot-starter</artifactId>
47 |   </dependency>
48 |   <dependency>
49 |     <groupId>org.springframework.cloud</groupId>
50 |     <artifactId>spring-cloud-starter-sleuth</artifactId>
51 |   </dependency>
52 |   ...
53 | </dependencies>
54 |
55 | ```
56 |
57 | 2. Update PetclinicIntegrationTests.java
58 |
59 | ```java
60 | import org.springframework.test.context.ActiveProfiles;
61 | @ActiveProfiles("test")
62 | ```
63 |
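For context, a minimal sketch of how the annotation sits on the existing test class (the class body itself is unchanged from the upstream repo, and the class name follows the file referenced above):

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;

// Run the integration tests with the "test" profile so the Wavefront exporter stays disabled
@SpringBootTest
@ActiveProfiles("test")
class PetclinicIntegrationTests {
	// existing test methods unchanged
}
```
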
64 | 3. Add test profile properties `/src/test/resources/application-test.properties`
65 |
66 | ```
67 | management.metrics.export.wavefront.enabled=false
68 | management.metrics.export.wavefront.apiToken=foo
69 | ```
70 |
71 | ## Spring Pet Clinic Kubernetes Config Repo
72 |
73 | - Config Repo: Fork [https://github.com/doddatpivotal/spring-petclinic-config](https://github.com/doddatpivotal/spring-petclinic-config)
74 |
75 | - If using the GitOps (ArgoCD) based approach, make sure your config repo has the `/argocd` folder. Example in [https://github.com/jaimegag/spring-petclinic-config](https://github.com/jaimegag/spring-petclinic-config)
76 |
77 | ## Go to Next Step
78 |
79 | [Create Concourse Pipeline for Spring Pet Clinic](08-petclinic-pipeline.md)
80 |
81 | If using the GitOps (ArgoCD) approach, go to [Create ArgoCD Pipeline for Spring Pet Clinic](08-petclinic-pipeline-gitops.md) instead.
82 |
--------------------------------------------------------------------------------
/docs/08-petclinic-pipeline-gitops.md:
--------------------------------------------------------------------------------
1 | # Create ArgoCD Pipeline for Spring Pet Clinic
2 |
3 | > NOTE-TODO: Custom values should be parametrized in the ArgoCD Application (using extVars and something like jsonnet), until then we template it and push to the repo in advance
4 |
5 | > NOTE-TODO: We should use the ArgoCD Application CRD but it doesn't seem to work, so we use the Argocd CLI
6 |
7 | > NOTE-TODO: We need a solution for managing Secrets in a GitOps pipeline (SOPS or SealedSecrets), until then we push the secret directly outside of the CD pipeline
8 |
9 | > NOTE-TODO: With or without the above, we should move the copy & paste complexity below into scripts
10 |
11 |
12 | ## Get custom configuration
13 | Prepare env variables with Petclinic, Wavefront and custom settings
14 | ```bash
15 | export PETCLINIC_CONFIG_LOCAL=$(yq e .petclinic.configRepoLocalPath $PARAMS_YAML)
16 | export PETCLINIC_HOST=$(yq e .petclinic.host $PARAMS_YAML)
17 | export PETCLINIC_IMAGE=$(yq e .petclinic.image $PARAMS_YAML)
18 | export PETCLINIC_CONFIGREPO=$(yq e .petclinic.configRepo $PARAMS_YAML)
19 | export WAVEFRONT_APP_NAME=$(yq e .petclinic.wavefront.applicationName $PARAMS_YAML)
20 | export WAVEFRONT_URI=$(yq e .petclinic.wavefront.uri $PARAMS_YAML)
21 | export WAVEFRONT_APITOKEN=$(yq e .petclinic.wavefront.apiToken $PARAMS_YAML)
22 | export ARGOCD_APPLICATION=$(yq e .petclinic.argocd.applicationName $PARAMS_YAML)
23 | export ARGOCD_CLUSTER=$(yq e .petclinic.argocd.server $PARAMS_YAML)
24 | export ARGOCD_PATH=$(yq e .petclinic.argocd.path $PARAMS_YAML)
25 | ```
26 |
27 | ## Prepare spring-petclinic-config repo for ArgoCD
28 | From the command line, change to your local `spring-petclinic-config` directory and set up folders:
29 | ```bash
30 | cd $PETCLINIC_CONFIG_LOCAL
31 | mkdir -p argocd/cd
32 | mkdir -p argocd/tmp
33 | cp argocd/values.yaml argocd/tmp/values.yaml
34 | ```
35 |
36 | Populate the values.yaml file from the env variables:
37 | ```bash
38 | yq e -i '.petclinic.host = env(PETCLINIC_HOST)' argocd/tmp/values.yaml
39 | yq e -i '.petclinic.image = env(PETCLINIC_IMAGE)' argocd/tmp/values.yaml
40 | yq e -i '.petclinic.wavefront.applicationName = env(WAVEFRONT_APP_NAME)' argocd/tmp/values.yaml
41 | yq e -i '.petclinic.wavefront.uri = env(WAVEFRONT_URI)' argocd/tmp/values.yaml
42 | yq e -i '.petclinic.wavefront.apiToken = env(WAVEFRONT_APITOKEN)' argocd/tmp/values.yaml
43 | # Add back the document separator that yq removes
44 | sed -i -e '2i\
45 | ---\
46 | ' argocd/tmp/values.yaml
47 | rm -f argocd/tmp/values.yaml-e
48 | ```
49 |
50 | Prepare Petclinic App yaml for ArgoCD to manage, and push to repository
51 | ```bash
52 | ytt -f argocd/tmp/values.yaml -f argocd/petclinic-app.yaml --ignore-unknown-comments > argocd/cd/petclinic-app.yaml
53 | git add argocd/cd/petclinic-app.yaml
54 | git commit -m "petclinic argocd custom config"
55 | git push origin master
56 | ```
57 |
58 | Prepare Wavefront Secret to apply directly to your Workload cluster.
59 | > NOTE: Ensure you have switched your local kube context to your workload cluster
60 |
61 | ```bash
62 | ytt -f argocd/tmp/values.yaml -f argocd/to-secret.yaml --ignore-unknown-comments > argocd/tmp/to-secret.yaml
63 | kubectl apply -f argocd/tmp/to-secret.yaml
64 | ```
65 |
66 | ## Create ArgoCD application
67 | Make sure you `argocd login` to refresh your token. Run the following commands to create the ArgoCD application. We won't set it to sync yet since the images might not be ready. A separate lab will patch this app in order to start syncing when the time is right:
68 | ```bash
69 | argocd app create ${ARGOCD_APPLICATION} \
70 | --repo ${PETCLINIC_CONFIGREPO} \
71 | --path ${ARGOCD_PATH} \
72 | --dest-server ${ARGOCD_CLUSTER} \
73 | --sync-policy none
74 | ```
75 |
76 | ## Check App in ArgoCD
77 |
78 | Confirm the application has been registered in ArgoCD and auto-sync is not enabled yet. Run:
79 | ```bash
80 | argocd app get ${ARGOCD_APPLICATION}
81 | ```
82 | Response should look like this:
83 | ```bash
84 | Name: petclinic
85 | Project: default
86 | Server: https://192.168.14.182:6443
87 | Namespace:
88 | URL: https://argocd.example.com/applications/petclinic
89 | Repo: https://github.com/jaimegag/spring-petclinic-config
90 | Target:
91 | Path: argocd/cd
92 | SyncWindow: Sync Allowed
93 | Sync Policy:
94 | Sync Status: OutOfSync from (0cd469e)
95 | Health Status: Missing
96 |
97 | GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
98 | Service petclinic spring-petclinic OutOfSync Missing
99 | apps Deployment petclinic spring-petclinic OutOfSync Missing
100 | extensions Ingress petclinic spring-petclinic OutOfSync Missing
101 | ```
102 |
103 | To enable auto-sync run:
104 | ```bash
105 | argocd app set ${ARGOCD_APPLICATION} --sync-policy automated
106 | ```
107 |
108 | ## Go to Next Step
109 |
110 | [Create TO Wavefront Dashboard](09-petclinic-dashboard.md)
111 |
--------------------------------------------------------------------------------
/docs/08-petclinic-pipeline.md:
--------------------------------------------------------------------------------
1 | # Create Concourse Pipeline for Spring Pet Clinic
2 |
3 | ## Build Concourse Helper Image and Push to Harbor
4 |
5 | I've created a concourse helper image, which is used within the concourse pipeline, with the following utilities:
6 |
7 | - kapp
8 | - ytt
9 | - kubectl
10 | - kp
11 |
12 | You can do the same and push the image to your local harbor repository.
13 |
14 | 1. Create project in Harbor for the image. I named mine `concourse` and set it to public access
15 |
16 | 2. Clone or fork my repository [https://github.com/doddatpivotal/concourse-helper](https://github.com/doddatpivotal/concourse-helper)
17 |
18 | Follow instructions in the [readme](https://github.com/doddatpivotal/concourse-helper/blob/master/Readme.md) to build the image and push to your local repository.
19 |
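Once the image is built and pushed, a quick smoke test along these lines confirms the pipeline will find the tools it needs (this assumes the helper image exposes the CLIs listed above on its PATH and is tagged `latest`):

```bash
docker run --rm "$(yq e .commonSecrets.concourseHelperImage $PARAMS_YAML):latest" \
  /bin/bash -c 'kapp version && ytt version && kubectl version --client && kp version'
```
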
20 | ## Create Pipeline
21 |
22 | The Spring Pet Clinic CI/CD pipeline in concourse heavily relies on environment-specific data.
23 |
24 | 1. Ensure your PARAMS_YAML environment variable from step 01 is set.
25 |
26 | 2. Ensure you have switched your local kube context to your shared services cluster
27 |
28 | 3. Login to concourse, setup pipeline secrets, and create pipeline
29 |
30 | ```bash
31 | fly -t $(yq e .commonSecrets.concourseAlias $PARAMS_YAML) login \
32 | -c $(yq e .commonSecrets.concourseUri $PARAMS_YAML) \
33 | -n main \
34 | -u $(yq e .commonSecrets.concourseUser $PARAMS_YAML) \
35 | -p $(yq e .commonSecrets.concoursePassword $PARAMS_YAML)
36 | ./scripts/set-pipeline.sh
37 | ```
38 |
39 | 4. Check out the pipeline
40 |
41 | ```bash
42 | open $(yq e .commonSecrets.concourseUri $PARAMS_YAML)
43 | ```
44 | Then log in
45 |
46 | 5. Unpause the pipeline
47 |
48 | 6. Notice the `continuous-integration` job is automatically triggered. It may take a minute or two.
49 |
50 | 7. Validate that the image was created in Harbor. This will happen when the continuous-integration job is complete. This may take 8 minutes or so.
51 |
52 | 8. Validate that the CD pipeline was triggered and runs successfully
53 |
54 | 9. Access the Spring Pet Clinic App and Click Around
55 |
56 | ```bash
57 | open https://$(yq e .petclinic.host $PARAMS_YAML)
58 | ```
59 |
60 | ## Go to Next Step
61 |
62 | [Create TO Wavefront Dashboard](09-petclinic-dashboard.md)
63 |
--------------------------------------------------------------------------------
/docs/09-petclinic-dashboard.md:
--------------------------------------------------------------------------------
1 | # Create TO Wavefront Dashboard
2 |
3 | 1. Within Tanzu Observability, access the Dynamically Generated Application Dashboard for Spring Pet Clinic. Application -> Application Status
4 |
5 | 2. Click on your application, then click on the petclinic service. You now see the dynamically generated dashboard.
6 |
7 | 3. Clone the dashboard
8 |
9 | 4. Click "Settings" followed by "Advanced"
10 |
11 | 5. Add the following events query `events(name="DEPLOY_EVENT_NAME")`
12 |
13 | >Note: The DEPLOY_EVENT_NAME value above is from `yq e .petclinic.wavefront.deployEventName $PARAMS_YAML`
14 |
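If helpful, you can print the exact query with your event name substituted:

```bash
echo "events(name=\"$(yq e .petclinic.wavefront.deployEventName $PARAMS_YAML)\")"
```
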
15 | 6. Save the dashboard and name it something specific to your user name. I chose `dpfeffer-petclinic-dashboard`.
16 |
17 | 7. In your dashboard at the top right where it says "Show Events" change it to "From Dashboard Settings". This will cause your events query to be the source of events for all charts in your dashboard.
18 |
19 | ## Go to Next Step
20 |
21 | [Update TBS Stack to Remediate CVEs](10-tbs-stack-update.md)
--------------------------------------------------------------------------------
/docs/10-tbs-stack-update.md:
--------------------------------------------------------------------------------
1 | # Update TBS Stack to Remediate CVEs
2 |
3 | This Lab will demonstrate how TBS can automatically and quickly rebuild images when a stack is updated with a patched version.
4 |
5 | Alternatively, for a more in-depth set of steps that demonstrate TBS capabilities and scenarios, also leveraging the Concourse pipeline, you can check out this [script](/scripts/tbs-in-depth.sh).
6 |
7 |
8 | 1. Set environment variables for use in the following sections
9 |
10 | ```bash
11 | export TBS_REPOSITORY=$(yq e .tbs.harborRepository $PARAMS_YAML)
12 | ```
13 |
14 | 2. Trigger a new build of Spring Pet Clinic by updating the Stack associated with its builder
15 |
16 | >Note: Ensure you have switched your local kube context to your shared services cluster
17 |
18 | ```bash
19 | # This sets the stack to use the patched images from TBS dependencies 100.0.286. You can check by looking at the full stack in the descriptor-100.0.286.yaml that you downloaded in step 02.
20 | kp clusterstack update demo-stack \
21 | --build-image $TBS_REPOSITORY/build@sha256:8be3ca33427c19dc68d3a9a900e99f61487221894a4fde6d4819e5c3026f11a8 \
22 | --run-image $TBS_REPOSITORY/run@sha256:47a7b67d28a0e137b9918fc6380860086966abbac43242057373d346da3e1c76
23 | ```
24 |
25 | 3. Validate that Harbor has been updated
26 |
27 | ```bash
28 | kp build list spring-petclinic -n tbs-project-petclinic
29 | ```
30 |
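The newest build's reason should be `STACK`, indicating TBS performed a rebase rather than a full rebuild; `kp build status` shows the details of the latest build:

```bash
kp build status spring-petclinic -n tbs-project-petclinic
```
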
31 | Harbor should now show a second image created with fewer CVEs.
32 |
33 | 
34 |
35 | ## Go to Next Step
36 |
37 | [Setup Load Generation for More Interesting Dashboards](11-load-generation.md)
38 |
--------------------------------------------------------------------------------
/docs/11-load-generation.md:
--------------------------------------------------------------------------------
1 | # Setup Load Generation for More Interesting Dashboards
2 |
3 | The following approach was borrowed from the TSM demo, which uses [acme-fitness](https://github.com/vmwarecloudadvocacy/acme_fitness_demo/tree/master/traffic-generator).
4 |
5 | ## Requirements
6 |
7 | 1. Local docker daemon
8 |
9 | 2. Spring Pet Clinic
10 |
11 | ## Steps
12 |
13 | 1. Set environment variables for use in the following sections
14 |
15 | ```bash
16 | export PETCLINIC_HOST=$(yq e .petclinic.host $PARAMS_YAML)
17 | ```
18 |
19 | 2. Run locust via docker
20 |
21 | ```bash
22 | docker run -p 8089:8089 -v $PWD:/mnt/locust locustio/locust -f /mnt/locust/traffic-generator/locustfile.py -H https://$PETCLINIC_HOST
23 | ```
24 |
25 | 3. Access the Locust UI
26 |
27 | ```bash
28 | open http://localhost:8089
29 | ```
30 |
31 | 4. Click on 'New Test' and provide the number of users to simulate. I used 10 users with hatch rate 4
32 |
33 | 
34 | 
35 |
36 | 5. Check out your nice data flowing through on your custom TO dashboard
37 |
38 | 
39 |
--------------------------------------------------------------------------------
/docs/custom-dashboard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doddatpivotal/tkg-lab-e2e-adaptation/6ef11bb0104bae106dd4e27b23bde8255ea2f35b/docs/custom-dashboard.png
--------------------------------------------------------------------------------
/docs/demo.md:
--------------------------------------------------------------------------------
1 | # Tanzu End 2 End Demo
2 |
3 | Generally follow the [script](https://github.com/Pivotal-Field-Engineering/tanzu-gitops/blob/gtm-e2e-demo/SCRIPT.md) from the end to end team.
4 |
5 | ## Traffic Generator
6 |
7 | Ensure you are running the traffic generator from [Step 11](11-load-generation.md).
8 |
9 | ## Open the following tabs in your browser
10 |
11 | 1. **Pet Clinic App**: Introduce the application.
12 | 2. **Github Source Code**: Make an edit to the app and commit, triggering the pipeline.
13 | 3. **Concourse Pipeline (petclinic pipeline)**: Show the pipeline and review the steps in the `continuous-integration` job. Move on to the following steps, but come back after a time to show that the `continuous-deployment` job is executing, and describe those steps, including the event push to Tanzu Observability.
14 | 4. **Harbor (Spring Pet Clinic Project)**: Comment on the drop in vulnerabilities in the second image. Drill down into the vulnerabilities and show the project configuration that prevents pulls.
15 | 5. **Kubeapps (petclinic namespace)**: Show the catalog, show the details of the deployed mysql db that serves petclinic.
16 | 6. **Octant (petclinic namespace)**: Show the application tab to give indications of the developer experience, check out the pod level information as well, like logs or exec into container.
17 | 7. **Tanzu Observability (Pet Clinic Custom Dashboard)**: Show the customized dashboard. Check out the event that now appears in the charts. Don't forget to set the `Show Events` setting to `From Dashboard Settings`.
18 | 8. **Tanzu Mission Control**: Show the workload cluster, highlight visibility into workloads, show data protection, and then link into Tanzu Observability to show cluster level metrics.
19 | 9. **Tanzu Application Catalog (MySql Helm Details)**: Discuss how the selection of apps in Kubeapps was curated and how you have visibility into security, functional, and license scans.
20 |
21 | At this point the pipeline should be complete and you can check out the updated Spring Pet Clinic application.
--------------------------------------------------------------------------------
/docs/locust-test-running.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doddatpivotal/tkg-lab-e2e-adaptation/6ef11bb0104bae106dd4e27b23bde8255ea2f35b/docs/locust-test-running.png
--------------------------------------------------------------------------------
/docs/locust-test-setup.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doddatpivotal/tkg-lab-e2e-adaptation/6ef11bb0104bae106dd4e27b23bde8255ea2f35b/docs/locust-test-setup.png
--------------------------------------------------------------------------------
/docs/one-off.md:
--------------------------------------------------------------------------------
1 | # One Off Activities
2 |
3 | ## Reset Demo Stack For use Later in a demo
4 |
5 | ```bash
6 | export TBS_REPOSITORY=$(yq e .tbs.harborRepository $PARAMS_YAML)
7 |
8 | # make it match with 100.0.81
9 | kp clusterstack update demo-stack \
10 | --build-image $TBS_REPOSITORY/build@sha256:e2371eb5092beeb8eada41259e3b070ab2a0037218a28105c0fea590b3b57cb5 \
11 | --run-image $TBS_REPOSITORY/run@sha256:8c61edbd83d1741b4a50478314bfcb6aea7defa65205fe56044db4ed34874155
12 | ```
13 |
14 | ## Teardown Pet Clinic
15 |
16 | - Within TMC
17 | - delete petclinic namespace within petclinic workspace
18 | - delete tbs-project-petclinic namespace within petclinic workspace
19 | - delete the petclinic workspace
20 | - Within Harbor
21 | - delete project: petclinic
22 | - Within Tanzu Observability
23 | - delete custom dashboard for petclinic application
24 | - For Concourse, delete the pipeline and secrets
25 |
26 | ```bash
27 | kapp delete -a concourse-main-secrets -n concourse-main
28 | fly -t $TARGET destroy-pipeline -p petclinic
29 | ```
30 |
31 | ## Teardown TBS
32 |
33 | ```bash
34 | kapp delete -a tanzu-build-service -n tanzu-kapp
35 | ```
36 |
37 | Delete Harbor project: tbs
--------------------------------------------------------------------------------
/docs/petclinic-db.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doddatpivotal/tkg-lab-e2e-adaptation/6ef11bb0104bae106dd4e27b23bde8255ea2f35b/docs/petclinic-db.png
--------------------------------------------------------------------------------
/docs/petclinic-rebase.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doddatpivotal/tkg-lab-e2e-adaptation/6ef11bb0104bae106dd4e27b23bde8255ea2f35b/docs/petclinic-rebase.png
--------------------------------------------------------------------------------
/docs/tanzu-e2e-cicd.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doddatpivotal/tkg-lab-e2e-adaptation/6ef11bb0104bae106dd4e27b23bde8255ea2f35b/docs/tanzu-e2e-cicd.png
--------------------------------------------------------------------------------
/local-config/.gitkeep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/doddatpivotal/tkg-lab-e2e-adaptation/6ef11bb0104bae106dd4e27b23bde8255ea2f35b/local-config/.gitkeep
--------------------------------------------------------------------------------
/scripts/set-pipeline.sh:
--------------------------------------------------------------------------------
1 | set -e
2 |
3 | ytt -f concourse/pipeline/secrets.yaml -f $PARAMS_YAML --ignore-unknown-comments | kapp deploy -n concourse-main -a concourse-main-secrets -y -f -
4 |
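# Update the petclinic pipeline; the second set-pipeline call renders the config through ytt so the optional private-repo overlay is applied when git credentials are present in the params file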
5 | fly -t $(yq e .commonSecrets.concourseAlias $PARAMS_YAML) set-pipeline -p petclinic -c concourse/pipeline/spring-petclinic.yaml -n
6 | ytt \
7 | -f $PARAMS_YAML \
8 | -f concourse/pipeline/optional-private-repo-overlay.yaml \
9 | -f concourse/pipeline/spring-petclinic.yaml \
10 | --ignore-unknown-comments \
11 | | \
12 | fly -t $(yq e .commonSecrets.concourseAlias $PARAMS_YAML) set-pipeline -p petclinic -n -c -
13 |
14 |
--------------------------------------------------------------------------------
/scripts/tbs-in-depth-gitops.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # ---------------------------
4 | # Preparation / Cleanup
5 | # ---------------------------
6 |
7 | # To simplify some of the commands later (depend on your PARAMS_YAML env var)
8 | cd ~/Code/tkg-lab-e2e-adaptation
9 | export PARAMS_YAML=local-config/values.yaml
10 | export TBS_REPOSITORY=$(yq e .tbs.harborRepository $PARAMS_YAML)
11 | export HARBOR_DOMAIN=$(yq e .commonSecrets.harborDomain $PARAMS_YAML)
12 | export PETCLINIC_REPO=$(yq e .petclinic.codeRepo $PARAMS_YAML)
13 | export ARGOCD_APPLICATION=$(yq e .petclinic.argocd.applicationName $PARAMS_YAML)
14 |
15 | # Delete current images
16 | kp image delete spring-petclinic -n tbs-project-petclinic
17 | kp image delete spring-petclinic-demo -n tbs-project-petclinic
18 |
19 | # Update cluster stack to make it match with 100.0.81
20 | kp clusterstack update demo-stack \
21 | --build-image $TBS_REPOSITORY/build@sha256:e2371eb5092beeb8eada41259e3b070ab2a0037218a28105c0fea590b3b57cb5 \
22 | --run-image $TBS_REPOSITORY/run@sha256:8c61edbd83d1741b4a50478314bfcb6aea7defa65205fe56044db4ed34874155
23 |
24 | # Go to your petclinic app local code repo
25 | cd ~/Code/spring-petclinic
26 | # Let's start by creating an image based in source. At this point we have TBS installed and Harbor registry credentials configured
27 | kp image create spring-petclinic --tag $HARBOR_DOMAIN/petclinic/spring-petclinic \
28 | --cluster-builder demo-cluster-builder \
29 | --namespace tbs-project-petclinic \
30 | --wait \
31 | --git $PETCLINIC_REPO \
32 | --git-revision main
33 |
34 | # Sync ArgoCD app
35 | argocd app sync $ARGOCD_APPLICATION
36 | # Check ArgoCD UI to see objects syncing
37 | # Check Petclinic app UI to confirm app is up and running
38 |
39 | # ---------------------------
40 | # TBS Demo for E2E flow
41 | # ---------------------------
42 |
43 | # Go to your petclinic app local code repo
44 | cd ~/Code/spring-petclinic
45 | # Let's create another image based on source, to show the process and the first steps of the build
46 | kp image create spring-petclinic-demo --tag $HARBOR_DOMAIN/petclinic/spring-petclinic-demo \
47 | --cluster-builder demo-cluster-builder \
48 | --namespace tbs-project-petclinic \
49 | --wait \
50 | --git $PETCLINIC_REPO \
51 | --git-revision main
52 | # Let's observe the stages of the build
53 | # ...
54 |
55 | # Check images
56 | kp image list -n tbs-project-petclinic
57 |
58 | # Check builds from existing image (already created)
59 | kp build list spring-petclinic -n tbs-project-petclinic
60 | # Check build logs from existing image (already created)
61 | kp build logs spring-petclinic -b 1 -n tbs-project-petclinic
62 |
63 | # Let's go to harbor and find the images
64 |
65 | # Update cluster stack to make it match with 100.0.101
66 | kp clusterstack update demo-stack \
67 | --build-image $TBS_REPOSITORY/build@sha256:2cd4b7a3bdd76c839a29b0a050476ba150c2639b75ff934bb62b8430440e3ea0 \
68 | --run-image $TBS_REPOSITORY/run@sha256:8e86b77ad25bde9e3f080d30789a4c8987ad81565f56eef54398bc5275070fc2
69 | # Image(s) rebuild automatically against the updated stack
70 | kp build list spring-petclinic -n tbs-project-petclinic
71 | # Check logs
72 | kp build logs spring-petclinic -b 3 -n tbs-project-petclinic
73 |
74 | # ---------------------------
75 | # More TBS
76 | # ---------------------------
77 |
78 | # Let's make a quick code change and push it (within the spring-petclinic repo folder)
79 | # Go to your petclinic app local code repo
80 | cd ~/Code/spring-petclinic && vi src/main/resources/templates/welcome.html
81 | git add . && git commit -m "code change" && git push origin main
82 |
83 | # Check new build kicks in
84 | watch kp build list spring-petclinic -n tbs-project-petclinic
85 |
86 | # Check Harbor again for a new image
87 |
88 | # TODO: Sync changes on ArgoCD, or instead make changes to k8s config objects (e.g. a ConfigMap) that cause the app UI to react.
89 |
90 | # Explore the build service central configuration
91 | # Explore stores
92 | kp clusterstore list
93 | kp clusterstore status default
94 | # Explore stacks
95 | kp clusterstack list
96 | kp clusterstack status demo-stack
97 | # Explore builders
98 | kp clusterbuilder list
99 | kp clusterbuilder status demo-cluster-builder
100 |
101 |
102 | # ------------------------------------------------
103 | # Inspect image for metadata for traceability and auditability
104 | # ------------------------------------------------
105 |
106 | # Show result of a Dockerfile image
107 | docker pull jaimegag/cassandra-demo
108 | docker inspect jaimegag/cassandra-demo
109 |
110 | # Now show a TBS built image
111 |
112 | export MOST_RECENT_SUCCESS_IMAGE=$(kp build list spring-petclinic -n tbs-project-petclinic | grep SUCCESS | tail -1 | awk '{print $(3)}')
113 | docker pull $MOST_RECENT_SUCCESS_IMAGE
114 |
115 | docker inspect $MOST_RECENT_SUCCESS_IMAGE
116 | # Discuss the sheer amount of metadata, baked right into the image itself
117 |
118 | docker inspect $MOST_RECENT_SUCCESS_IMAGE | jq ".[].Config.Labels.\"io.buildpacks.build.metadata\" | fromjson"
119 | # Can be parsed
120 |
121 | docker inspect $MOST_RECENT_SUCCESS_IMAGE | jq ".[].Config.Labels.\"io.buildpacks.build.metadata\" | fromjson | .buildpacks"
122 | # And an even more specific example: which buildpacks were used
123 |
--------------------------------------------------------------------------------
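
After the argocd app sync call above, it can help to block until the application reports synced and healthy before switching to the UI. A small sketch, assuming the argocd CLI session is already logged in:

# Wait (up to 5 minutes) for the app to be both synced and healthy, then list its resources.
argocd app wait "$ARGOCD_APPLICATION" --sync --health --timeout 300
argocd app get "$ARGOCD_APPLICATION"
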
/scripts/tbs-in-depth-take-2.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Lab Setup...
4 |
5 | # Open two terminals
6 | # Terminal 1
7 |
8 | # Set context to tbs cluster
9 |
10 | # Delete current image
11 | kp image delete spring-petclinic -n tbs-project-petclinic
12 |
13 | # Go to harbor and delete all repositories in your petclinic project
14 |
15 | # To simplify some of the commands later (these depend on your PARAMS_YAML env var)
16 | cd ~/workspace/tanzu-e2e/tkg-lab-e2e-adaptation/
17 | export PARAMS_YAML=local-config/values-vsphere.yaml
18 |
19 | export TBS_REPOSITORY=$(yq e .tbs.harborRepository $PARAMS_YAML)
20 | export HARBOR_DOMAIN=$(yq e .commonSecrets.harborDomain $PARAMS_YAML)
21 |
22 | # Update cluster stack to make it match with 100.0.81
23 | kp clusterstack update demo-stack \
24 | --build-image $TBS_REPOSITORY/build@sha256:e2371eb5092beeb8eada41259e3b070ab2a0037218a28105c0fea590b3b57cb5 \
25 | --run-image $TBS_REPOSITORY/run@sha256:8c61edbd83d1741b4a50478314bfcb6aea7defa65205fe56044db4ed34874155
26 |
27 | #-----------------------------------
28 |
29 | cd ~/workspace/spring-petclinic
30 | ./mvnw clean package -DskipTests
31 |
32 | # Terminal 2
33 | watch kp build list spring-petclinic -n tbs-project-petclinic
34 |
35 | # ------------------------------------------------
36 | # Create an image and then update it
37 | # ------------------------------------------------
38 |
39 | # Let's start by creating an image. At this point we have TBS installed and Harbor registry credentials configured
40 | kp image create spring-petclinic --tag $HARBOR_DOMAIN/petclinic/spring-petclinic \
41 | --cluster-builder demo-cluster-builder \
42 | --namespace tbs-project-petclinic \
43 | --local-path target/*.jar
44 |
45 | # Check images
46 | kp image list -n tbs-project-petclinic
47 | # Check builds
48 | kp build list spring-petclinic -n tbs-project-petclinic
49 | # Check build logs
50 | kp build logs spring-petclinic -b 1 -n tbs-project-petclinic
51 | # Let's observe the stages of the build in the logs above
52 |
53 | # Let's go to harbor and find the images
54 |
55 | # Let's make a quick code change and push it
56 | vi src/main/resources/templates/welcome.html
57 | ./mvnw clean package -DskipTests
58 |
59 | kp image patch spring-petclinic \
60 | --namespace tbs-project-petclinic \
61 | --local-path target/spring-petclinic-*.BUILD-SNAPSHOT.jar
62 |
63 | # Check Harbor again for a new image
64 |
65 | kp build logs spring-petclinic -b 2 -n tbs-project-petclinic
66 |
67 | # ------------------------------------------------
68 | # Explore the build service central configuration
69 | # ------------------------------------------------
70 |
71 | # Explore stores
72 | kp clusterstore list
73 | kp clusterstore status default
74 | # Explore stacks
75 | kp clusterstack list
76 | kp clusterstack status demo-stack
77 | # Explore builders
78 | kp clusterbuilder list
79 | kp clusterbuilder status demo-cluster-builder
80 |
81 | # ------------------------------------------------
82 | # Keep TBS dependencies up to date (descriptor import and stack update)
83 | # ------------------------------------------------
84 |
85 | # Discuss how Tanzu continually posts dependency updates
86 | open https://network.pivotal.io/products/tbs-dependencies/
87 |
88 | # Discuss how you can download the descriptor and then run a command like the following (but don't actually run it)
89 | # This process can easily be automated with a CI tool like Concourse
90 | kp import -f ~/Downloads/descriptor-100.0.81.yaml
91 |
92 | # Update cluster stack to make it match with 100.0.125
93 | kp clusterstack update demo-stack \
94 | --build-image $TBS_REPOSITORY/build@sha256:8be3ca33427c19dc68d3a9a900e99f61487221894a4fde6d4819e5c3026f11a8 \
95 | --run-image $TBS_REPOSITORY/run@sha256:47a7b67d28a0e137b9918fc6380860086966abbac43242057373d346da3e1c76
96 | # Image rebuilds automatically against the updated stack
97 |
98 | # Check logs this time
99 | kp build logs spring-petclinic -b 3 -n tbs-project-petclinic
100 |
101 | # Check Harbor again for a new image with fewer vulnerabilities
102 |
103 | # ------------------------------------------------
104 | # Inspect image for metadata for traceability and auditability
105 | # ------------------------------------------------
106 |
107 | # Show result of a Dockerfile image
108 | docker pull $HARBOR_DOMAIN/concourse/concourse-helper
109 | docker inspect $HARBOR_DOMAIN/concourse/concourse-helper
110 |
111 | # Now show a TBS built image
112 |
113 | export MOST_RECENT_SUCCESS_IMAGE=$(kp build list spring-petclinic -n tbs-project-petclinic | grep SUCCESS | tail -1 | awk '{print $(3)}')
114 | docker pull $MOST_RECENT_SUCCESS_IMAGE
115 |
116 | docker inspect $MOST_RECENT_SUCCESS_IMAGE
117 | # Discuss the sheer amount of metadata, baked right into the image itself
118 |
119 | docker inspect $MOST_RECENT_SUCCESS_IMAGE | jq ".[].Config.Labels.\"io.buildpacks.build.metadata\" | fromjson"
120 | # Can be parsed
121 |
122 | docker inspect $MOST_RECENT_SUCCESS_IMAGE | jq ".[].Config.Labels.\"io.buildpacks.build.metadata\" | fromjson | .buildpacks"
123 | # And an even more specific example: which buildpacks were used
124 |
--------------------------------------------------------------------------------
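
The comments above point out that downloading a dependency descriptor and running kp import can be automated in CI. A rough sketch of that automation using the pivnet CLI; the release version is pinned for illustration and the descriptor file name may differ per release:

# Sketch: fetch a TBS dependency descriptor from the Tanzu Network and import it.
pivnet login --api-token "$PIVNET_API_TOKEN"
pivnet download-product-files \
  --product-slug tbs-dependencies \
  --release-version 100.0.125 \
  --glob 'descriptor-*.yaml' \
  --download-dir /tmp
kp import -f /tmp/descriptor-100.0.125.yaml
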
/scripts/tbs-in-depth.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # To simplify some of the commands later (these depend on your PARAMS_YAML env var)
4 | export TBS_REPOSITORY=$(yq e .tbs.harborRepository $PARAMS_YAML)
5 | export HARBOR_DOMAIN=$(yq e .commonSecrets.harborDomain $PARAMS_YAML)
6 |
7 | # Let's start by creating an image. At this point we have TBS installed and Harbor registry credentials configured
8 | kp image create spring-petclinic --tag $HARBOR_DOMAIN/petclinic/spring-petclinic \
9 | --cluster-builder demo-cluster-builder \
10 | --namespace tbs-project-petclinic \
11 | --wait \
12 | --local-path target/spring-petclinic-*.jar
13 |
14 | # Let's observe the stages of the build
15 |
16 | # Check Concourse to see CD triggered
17 |
18 | # Check images
19 | kp image list -n tbs-project-petclinic
20 | # Check builds
21 | kp build list spring-petclinic -n tbs-project-petclinic
22 | # Check build logs
23 | kp build logs spring-petclinic -b 1 -n tbs-project-petclinic
24 |
25 | # Let's check Concourse to see CD triggered
26 |
27 | # Let's make a quick code change and push it
28 | # Check that Concourse CI starts
29 | # Show what's supposed to happen in the Concourse pipeline code
30 | # And check that a new build starts once CI has packaged and pushed the app
31 | watch kp build list spring-petclinic -n tbs-project-petclinic
32 | # Check Harbor again for a new image
33 |
34 |
35 | # Explore stores
36 | kp clusterstore list
37 | kp clusterstore status default
38 | # Explore stacks
39 | kp clusterstack list
40 | # Explore builders
41 | kp clusterbuilder list
42 |
43 | # Update cluster stack to make it match with 100.0.125
44 | kp clusterstack update demo-stack \
45 | --build-image $TBS_REPOSITORY/build@sha256:8be3ca33427c19dc68d3a9a900e99f61487221894a4fde6d4819e5c3026f11a8 \
46 | --run-image $TBS_REPOSITORY/run@sha256:47a7b67d28a0e137b9918fc6380860086966abbac43242057373d346da3e1c76
47 | # Image rebuilds automatically against the updated stack
48 | # Check Concourse to see CD triggered
49 | # Check Harbor again for a new image with fewer vulnerabilities
50 |
51 |
52 | # Inspect image for metadata for traceability and auditability
53 | # Dockerfile image
54 | docker inspect $HARBOR_DOMAIN/concourse/concourse-helper@sha256:e89d7f3359962828ccd1857477448bb56146095215b7e91f028f11a3b5bb1e15
55 |
56 | docker pull $HARBOR_DOMAIN/petclinic/spring-petclinic@sha256:87e7b83d127a8be4afed41b61b35da056b0d97ea2f22f7c424ca46c2092fd606
57 |
58 | docker inspect $HARBOR_DOMAIN/petclinic/spring-petclinic@sha256:87e7b83d127a8be4afed41b61b35da056b0d97ea2f22f7c424ca46c2092fd606
59 |
60 | docker inspect $HARBOR_DOMAIN/petclinic/spring-petclinic@sha256:87e7b83d127a8be4afed41b61b35da056b0d97ea2f22f7c424ca46c2092fd606 | jq ".[].Config.Labels.\"io.buildpacks.build.metadata\" | fromjson"
61 |
62 | docker inspect $HARBOR_DOMAIN/petclinic/spring-petclinic@sha256:87e7b83d127a8be4afed41b61b35da056b0d97ea2f22f7c424ca46c2092fd606 | jq ".[].Config.Labels.\"io.buildpacks.build.metadata\" | fromjson | .buildpacks"
63 |
64 | # Update the image manually from a locally built jar
65 | kp image patch spring-petclinic \
66 | --namespace tbs-project-petclinic \
67 | --wait \
68 | --local-path target/spring-petclinic-*.jar
69 |
70 | # Cleanup
71 |
72 | # Update cluster stack to make it match with 100.0.81 again
73 | kp clusterstack update demo-stack \
74 | --build-image $TBS_REPOSITORY/build@sha256:e2371eb5092beeb8eada41259e3b070ab2a0037218a28105c0fea590b3b57cb5 \
75 | --run-image $TBS_REPOSITORY/run@sha256:8c61edbd83d1741b4a50478314bfcb6aea7defa65205fe56044db4ed34874155
76 |
--------------------------------------------------------------------------------
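
The Concourse checks mentioned in the comments above can also be followed from the CLI rather than the web UI. A sketch, reusing the petclinic pipeline name from scripts/set-pipeline.sh; the job name below is a placeholder and should match whatever concourse/pipeline/spring-petclinic.yaml actually defines:

# List the jobs in the petclinic pipeline and their latest build status.
fly -t $(yq e .commonSecrets.concourseAlias $PARAMS_YAML) jobs -p petclinic
# Tail the output of a specific job (replace the placeholder job name).
fly -t $(yq e .commonSecrets.concourseAlias $PARAMS_YAML) watch -j petclinic/some-job
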
/scripts/tbs-manual-dotnetcore.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # ---------------------------
4 | # Preparation / Cleanup
5 | # ---------------------------
6 |
7 | # To simplify some of the commands later (these depend on your PARAMS_YAML env var)
8 | cd ~/Code/tkg-lab-e2e-adaptation
9 | export PARAMS_YAML=local-config/values.yaml
10 | export HARBOR_DOMAIN=$(yq e .commonSecrets.harborDomain $PARAMS_YAML)
11 | export HARBOR_USER=$(yq e .commonSecrets.harborUser $PARAMS_YAML)
12 | export TBS_REPOSITORY=$(yq e .tbs.harborRepository $PARAMS_YAML)
13 | export TODOS_IMAGE=$(yq e .todos.image $PARAMS_YAML)
14 | export TBS_TODOS_NAMESPACE=$(yq e .todos.tbs.namespace $PARAMS_YAML)
15 | export TODOS_REPO=$(yq e .todos.codeRepo $PARAMS_YAML)
16 | export TODOS_REPO_PATH=$(yq e .todos.codeRepoPath $PARAMS_YAML)
17 | export TODOS_LOCAL_PATH=$(yq e .todos.codeLocalPath $PARAMS_YAML)
18 |
19 | # Ensure the TBS_TODOS_NAMESPACE exists and that the Harbor credentials are created in it via "kp"
20 | # Create namespace if it doesn't exist
21 | kubectl create namespace $TBS_TODOS_NAMESPACE --dry-run=client --output yaml | kubectl apply -f -
22 |
23 | # Create secret for tbs/kpack to be able to push new OCI images to our registry
24 | kp secret create harbor-creds \
25 | --registry $HARBOR_DOMAIN \
26 | --registry-user $HARBOR_USER \
27 | --namespace $TBS_TODOS_NAMESPACE
28 |
29 | # Delete current images
30 | kp image delete todos -n $TBS_TODOS_NAMESPACE
31 |
32 | # Update cluster stack to make it match with 100.0.255
33 | kp clusterstack update demo-stack \
34 | --build-image $TBS_REPOSITORY@sha256:ae63b7c588f3dd728d2d423dd26790af784decc3d3947eaff2696b8fd30bcfb0 \
35 | --run-image $TBS_REPOSITORY@sha256:ec48e083ab3d47df591de02423440c5de7f8af2e4ec6b4263af476812c4e3f85
36 |
37 | # -------------------------------
38 | # Create image from source code
39 | # -------------------------------
40 |
41 | # Let's start by creating an image based on source. At this point we have TBS installed and Harbor registry credentials configured
42 | kp image create todos --tag $TODOS_IMAGE \
43 | --cluster-builder demo-cluster-builder \
44 | --namespace $TBS_TODOS_NAMESPACE \
45 | --wait \
46 | --local-path $TODOS_LOCAL_PATH
47 |
48 | # Let's observe the stages of the build
49 | # ...
50 |
51 | # Check images
52 | kp image list -n $TBS_TODOS_NAMESPACE
53 |
54 | # Check builds from existing image (already created)
55 | kp build list todos -n $TBS_TODOS_NAMESPACE
56 | # Check build logs from existing image (already created)
57 | kp build logs todos -n $TBS_TODOS_NAMESPACE
58 |
59 | # Let's go to harbor and find the images
60 |
61 | # Update cluster stack to make it match with 100.0.286
62 | kp clusterstack update demo-stack \
63 | --build-image $TBS_REPOSITORY@sha256:43c78f6bcbcfb4ddf1ec6c14effdf26414ffbb21d8773519e85c325fd1a2561f \
64 | --run-image $TBS_REPOSITORY@sha256:d20231b7664446896d79d4d9178a62ce04a45d3ce02b3be54964a6c403b1ef06
65 | # Image(s) rebuild automatically against the updated stack
66 | watch kp build list todos -n $TBS_TODOS_NAMESPACE
67 | # Check logs
68 | kp build logs todos -n $TBS_TODOS_NAMESPACE
69 |
70 | # ---------------------------
71 | # Iterate on Source Code
72 | # ---------------------------
73 |
74 | # Let's make a quick code change
75 | # Change some literal string in the following file
76 | vi $TODOS_LOCAL_PATH/Controllers/EmployeesController.cs
77 |
78 | # Patch existing image (this would normally be triggered by the CI pipeline when suitable)
79 | kp image patch todos \
80 | --cluster-builder demo-cluster-builder \
81 | --namespace $TBS_TODOS_NAMESPACE \
82 | --wait \
83 | --local-path $TODOS_LOCAL_PATH
84 |
85 | # Check new build
86 | kp build list todos -n $TBS_TODOS_NAMESPACE
87 |
88 | # Check how only one layer is changed and the rest of the layers are reused from cache
89 |
90 | # Check Harbor again for a new image
91 |
92 | # Explore the build service central configuration
93 | # Explore stores
94 | kp clusterstore list
95 | kp clusterstore status default
96 | # Explore stacks
97 | kp clusterstack list
98 | kp clusterstack status demo-stack
99 | # Explore builders
100 | kp clusterbuilder list
101 | kp clusterbuilder status demo-cluster-builder
102 |
103 |
104 | # ------------------------------------------------
105 | # Inspect image for metadata for traceability and auditability
106 | # ------------------------------------------------
107 |
108 | # Show result of a Dockerfile image
109 | docker pull jaimegag/cassandra-demo
110 | docker inspect jaimegag/cassandra-demo
111 |
112 | # Now show a TBS built image
113 | docker pull $TODOS_IMAGE
114 | docker inspect $TODOS_IMAGE
115 | # Discuss the sheer amount of metadata, baked right into the image itself
116 |
117 | docker inspect $TODOS_IMAGE | jq ".[].Config.Labels.\"io.buildpacks.build.metadata\" | fromjson"
118 | # Can be parsed
119 |
120 | docker inspect $TODOS_IMAGE | jq ".[].Config.Labels.\"io.buildpacks.build.metadata\" | fromjson | .buildpacks"
121 | # And an even more specific example: which buildpacks were used
122 |
123 |
124 | # -------------------------------
125 | # Create image from source code from GitHub
126 | # -------------------------------
127 |
128 | # Let's also see how we can create an image from a source code repo (GitHub)
129 | kp image create todos2 --tag $TODOS_IMAGE \
130 | --cluster-builder demo-cluster-builder \
131 | --namespace $TBS_TODOS_NAMESPACE \
132 | --wait \
133 | --git $TODOS_REPO \
134 | --git-revision main \
135 | --sub-path $TODOS_REPO_PATH
136 |
137 |
138 | # ---------------------------
139 | # Iterate on Source Code into GitHub
140 | # ---------------------------
141 |
142 | # Let's make a quick code change and push it (within the dotnet-core-sql-k8s-demo repo folder)
143 | # Change some literal string in the following file
144 | vi $TODOS_LOCAL_PATH/Controllers/EmployeesController.cs
145 | git add . && git commit -m "code change" && git push origin main
146 |
147 | # Check new build kicks in
148 | watch kp build list todos2 -n $TBS_TODOS_NAMESPACE
149 |
--------------------------------------------------------------------------------
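
The kp secret create step near the top of this script prompts for the Harbor password interactively. For scripted runs, the kp CLI can typically take the password from the REGISTRY_PASSWORD environment variable instead (check kp secret create --help for your version); a sketch, where the harborPassword key is hypothetical and not part of the params files used elsewhere in this repo:

# Sketch: supply the registry password non-interactively (the yq key name is hypothetical).
export REGISTRY_PASSWORD=$(yq e .commonSecrets.harborPassword $PARAMS_YAML)
kp secret create harbor-creds \
  --registry $HARBOR_DOMAIN \
  --registry-user $HARBOR_USER \
  --namespace $TBS_TODOS_NAMESPACE
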
/scripts/tkg-day-2-ops.sh:
--------------------------------------------------------------------------------
1 | # Day 2 Ops
2 |
3 | # Setup (modify appropriately)
4 | export SCALE_CLUSTER_IP=10.213.92.145
5 | export UPGRADE_CLUSTER_IP=10.213.92.150
6 | export TEMP_CLUSTER_IP=10.213.92.140
7 |
8 | tkg create cluster scale-cluster -p dev --vsphere-controlplane-endpoint $SCALE_CLUSTER_IP
9 | tkg create cluster upgrade-cluster -p dev --vsphere-controlplane-endpoint $UPGRADE_CLUSTER_IP --kubernetes-version v1.19.1+vmware.2
10 |
11 | # List clusters
12 | tkg get clusters
13 |
14 | # create a cluster
15 | tkg create cluster temp-cluster -p dev --vsphere-controlplane-endpoint $TEMP_CLUSTER_IP
16 |
17 | # Scale a cluster
18 | tkg scale cluster scale-cluster -w 2
19 |
20 | # TKG Versions
21 | tkg get kubernetesversions
22 |
23 | # Upgrade a cluster
24 | tkg upgrade cluster upgrade-cluster --kubernetes-version v1.19.3+vmware.1
25 |
26 | # Create new Management Cluster
27 | tkg init --config ~/temp/config.yaml --ui
28 |
29 | # Demo UI wizard but don't provision. Use values from your .secrets file
30 | # AZURE_TENANT_ID:
31 | # AZURE_CLIENT_ID:
32 | # AZURE_CLIENT_SECRET:
33 | # AZURE_SUBSCRIPTION_ID:
34 | # AZURE_NODE_MACHINE_TYPE: Standard_D2s_v3
35 | # SSH_KEY: |-
36 | #   ssh-rsa foo-key == email@testmail.com
37 |
38 | # Review Cluster API Components
39 |
40 | kubectl get clusters
41 | kubectl describe cluster upgrade-cluster
42 | kubectl describe vspherecluster upgrade-cluster
43 | kubectl get machines
44 | export SOME_MACHINE=$(kubectl get machine | grep upgrade-cluster-md | tail -1 | awk '{print $(1)}')
45 | kubectl describe machine $SOME_MACHINE
46 | export SOME_VSPHERE_MACHINE=$(kubectl get machine $SOME_MACHINE -o json | jq '.spec.infrastructureRef.name' -r)
47 | kubectl describe vspheremachine $SOME_VSPHERE_MACHINE
48 |
49 | # Cleanup
50 |
51 | tkg delete cluster upgrade-cluster -y
52 | tkg delete cluster scale-cluster -y
53 | tkg delete cluster temp-cluster -y
--------------------------------------------------------------------------------
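
A step that often follows cluster creation or scaling (not shown above) is fetching the workload cluster's kubeconfig and confirming the nodes. A sketch; the admin context name follows the usual <cluster>-admin@<cluster> TKG convention but may differ in your setup:

# Merge the workload cluster credentials into the local kubeconfig, then verify the nodes.
tkg get credentials scale-cluster
kubectl config use-context scale-cluster-admin@scale-cluster
kubectl get nodes
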
/tbs/demo-cluster-builder-order.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | - group:
3 | - id: tanzu-buildpacks/dotnet-core
4 | - group:
5 | - id: tanzu-buildpacks/nodejs
6 | - group:
7 | - id: tanzu-buildpacks/go
8 | - group:
9 | - id: tanzu-buildpacks/php
10 | - group:
11 | - id: tanzu-buildpacks/nginx
12 | - group:
13 | - id: tanzu-buildpacks/httpd
14 | - group:
15 | - id: tanzu-buildpacks/java-native-image
16 | - group:
17 | - id: tanzu-buildpacks/java
18 | - group:
19 | - id: paketo-buildpacks/procfile
20 |
--------------------------------------------------------------------------------
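
This order file lists the buildpack groups the demo cluster builder tries during detection, from first to last. It is consumed when the cluster builder is created or updated (presumably the step covered in docs/03-tbs-custom-dependencies.md); a sketch of that command, with a placeholder image tag:

# Sketch: create or update the demo cluster builder from the order file above (tag is a placeholder).
kp clusterbuilder save demo-cluster-builder \
  --tag $HARBOR_DOMAIN/tbs/demo-cluster-builder \
  --stack demo-stack \
  --store default \
  --order tbs/demo-cluster-builder-order.yaml
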
/traffic-generator/locustfile.py:
--------------------------------------------------------------------------------
1 | # This program generates traffic for the Spring Pet Clinic app. You can run it either from the command line or from
2 | # the web-based UI. Refer to the "locust" documentation for further information.
3 |
4 | from locust import HttpUser, TaskSet, task, between
5 | import random
6 | import logging
7 |
8 | class UserBehavior(TaskSet):
9 |
10 | def on_start(self):
11 | self.home()
12 |
13 | @task(3)
14 | def home(self):
15 | logging.info("Accessing Home Page")
16 | self.client.get("/")
17 |
18 | @task(3)
19 | def findOwners(self):
20 | logging.info("Finding Owners")
21 | self.client.get("/owners/find")
22 |
23 | @task(3)
24 | def listVets(self):
25 | logging.info("Listing Vets")
26 | self.client.get("/vets.html")
27 |
28 | @task(3)
29 | def searchOwners(self):
30 | logging.info("Searching Owners")
31 | self.client.get("owners?lastName=")
32 |
33 | @task(3)
34 | def viewOwner(self):
35 | logging.info("Viewing Single Owner")
36 | self.client.get("/owners/3")
37 |
38 | @task(1)
39 | def genError(self):
40 | logging.info("Generating Error")
41 | self.client.get("/oups")
42 |
43 | class WebSiteUser(HttpUser):
44 |
45 | tasks = [UserBehavior]
46 | userid = ""
47 | wait_time = between(2, 10)  # wait 2-10 seconds between tasks
49 |
50 |
51 |
52 |
--------------------------------------------------------------------------------
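
To drive this locustfile against the deployed app, locust can run with its web UI or fully headless. A sketch, assuming locust 1.0 or newer is installed and using a placeholder Pet Clinic URL:

# Web UI mode: open http://localhost:8089 after starting.
locust -f traffic-generator/locustfile.py --host https://petclinic.example.com

# Headless mode: 10 users, spawning 2 per second, for 10 minutes.
locust -f traffic-generator/locustfile.py --host https://petclinic.example.com \
  --headless -u 10 -r 2 --run-time 10m
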