├── .gitignore ├── .vscode └── settings.json ├── README.md ├── TAP.png ├── acc-values.yaml ├── accelerators ├── api-microservice-boot.yaml ├── data-transfer.yaml ├── datatransform-supplychain.yaml ├── event-driven.yaml ├── frontend-ux.yaml ├── llm-boot.yaml ├── machine-learning.yaml ├── microservices-supplychain.yaml ├── testme.yml └── web-function.yaml ├── aria ├── Apply mission control policies.sls ├── Attach to mission control.sls ├── CIS 1.5.0 benchmark test.sls ├── CIS Kubernetes benchmark.sls ├── Config Runtime Space Profile.sls ├── Create EKS cluster.sls ├── Delete mission control policies.sls ├── Detach cluster from mission control.sls ├── PCI DSS 4.0 checks.sls ├── install-tap-aks.sls └── install-tap-eks.sls ├── backstage ├── apis │ ├── brownfield-apis-catalog.yaml │ ├── datacheck-apis-swagger.yaml │ ├── donations-apis-swagger.yaml │ ├── insurance-apis-swagger.yaml │ ├── locations-apis-swagger.yaml │ ├── payments-apis-swagger.yaml │ ├── sentiment-apis-swagger.yaml │ ├── sms-apis-swagger.yaml │ └── suppliers-apis-swagger.yaml ├── catalog-info.yaml ├── domains │ └── dekt-domain.yaml ├── groups │ └── dekt-teams.yaml ├── resources │ └── brownfield-gns.yaml └── systems │ └── devx-mood.yaml ├── brownfield-apis ├── brownfield-gateway.yaml ├── datacheck.yaml ├── donations.yaml ├── insurance.yaml ├── locations.yaml ├── mood-gw-sso-creds.example ├── payments.yaml ├── sentiment.yaml ├── sms.yaml ├── suppliers.yaml └── tsm-generated-config.yaml ├── builder.sh ├── config-templates ├── custom-supplychains │ ├── dekt-medical-scan.yaml │ └── dekt-medical.yaml ├── dataservices │ ├── aws │ │ ├── aws-provider-config.yaml │ │ ├── aws-provider.yaml │ │ ├── rds-postgres-class.yaml │ │ ├── rds-postgres-composition.yaml │ │ ├── rds-postgres-rbac.yaml │ │ └── rds-postgres-xrd.yaml │ ├── azure │ │ ├── azure-provider-config.yaml │ │ ├── azure-provider.yaml │ │ ├── azuresql-postgres-class.yaml │ │ ├── azuresql-postgres-composition.yaml │ │ ├── azuresql-postgres-instance.yaml │ │ ├── azuresql-postgres-rbac.yaml │ │ ├── azuresql-postgres-xrd.yaml │ │ └── direct-secret-binding.yaml │ ├── gcp │ │ ├── cloudsql-postgres-class.yaml │ │ ├── cloudsql-postgres-composition.yaml │ │ ├── cloudsql-postgres-rbac.yaml │ │ ├── cloudsql-postgres-xrd.yaml │ │ ├── gcp-provider-config.yaml │ │ └── gcp-provider.yaml │ └── oncluster │ │ ├── corp-rabbitmq-class.yaml │ │ ├── corp-rabbitmq-composition.yaml │ │ ├── corp-rabbitmq-rbac.yaml │ │ └── corp-rabbitmq-xrd.yaml ├── demo-values.yaml ├── secrets │ ├── carbonblack-creds.yaml │ ├── git-creds-sa-overlay.yaml │ ├── git-creds.yaml │ ├── ingress-issuer-apps.yaml │ ├── ingress-issuer-sys.yaml │ ├── openai-creds.yaml │ ├── snyk-creds.yaml │ └── viewer-rbac.yaml ├── tap-profiles │ ├── tap-dev.yaml │ ├── tap-prod1.yaml │ ├── tap-prod2.yaml │ ├── tap-stage.yaml │ └── tap-view.yaml └── workloads │ ├── mood-doctor.yaml │ ├── mood-painter.yaml │ ├── mood-portal.yaml │ ├── mood-predictor.yaml │ └── mood-sensors.yaml ├── demo.sh ├── devxmood_aria_hub.png ├── scripts ├── apigrid.sh ├── carvel │ ├── imgpkg │ ├── install.sh │ ├── kapp │ ├── kbld │ ├── uninstall.sh │ └── ytt ├── db-handler.sh ├── dektecho.sh ├── ingress-handler.sh ├── k8s-handler.sh └── tanzu-handler.sh └── tmp.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore Mac system files 2 | .DS_store 3 | 4 | # Ignore the config folder 5 | .config 6 | 7 | # ignore carvel bundle 8 | scripts/carvel 9 | 10 | 
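11 | # note: .config holds the copied demo-values.yaml and the yamls generated by ./builder.sh generate-configs (see the README preparations), so it is kept untracked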
-------------------------------------------------------------------------------- /.vscode/settings.json: -------------------------------------------------------------------------------- 1 | {} -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # Demo of DevX with Tanzu 3 | 4 | This repo contains artifacts to run a demo illustrating the vision and capabilities of Tanzu for Dev, AppOps and Platform Ops personas. 5 | 6 | ## Preparations 7 | 8 | - Install the following 9 | - cloud CLIs for the clouds you plan to use 10 | - az 11 | - eksctl 12 | - gcloud 13 | - tanzu CLI with the apps and services plugins 14 | - version 0.28.1 or above 15 | - Carvel tools (specifically imgpkg and ytt) 16 | - docker CLI 17 | - jq 18 | - yq 19 | 20 | - Export login environment variables for each cloud provider you are using 21 | ``` 22 | # AWS example 23 | export AWS_ACCESS_KEY_ID=CHANGE_ME 24 | export AWS_SECRET_ACCESS_KEY=CHANGE_ME 25 | export AWS_SESSION_TOKEN=CHANGE_ME 26 | ``` 27 | 28 | - Clone the dekt-devx-demo repo ```git clone https://github.com/dektlong/dekt-devx-demo``` 29 | 30 | - Create a folder named ```.config``` in the ```dekt-devx-demo``` directory 31 | 32 | - Copy ```config-templates/demo-values.yaml``` to ```.config/demo-values.yaml``` 33 | 34 | - Update all values in ```.config/demo-values.yaml``` 35 | 36 | - Generate demo config yamls 37 | ``` 38 | ./builder.sh generate-configs 39 | ``` 40 | - Verify all yamls were created successfully in the ```.config``` folder 41 | 42 | - Export Tanzu packages to your private registry 43 | ``` 44 | ./builder.sh export-packages tap (relocates TAP and Cluster Essentials packages) 45 | ``` 46 | 47 | 48 | ## Installation 49 | 50 | ``` 51 | ./builder.sh create-clusters all 52 | ``` 53 | This process may take 15-20 minutes, depending on your k8s providers of choice 54 | 55 | Note: 56 | to create just the dev and stage clusters use ```./builder.sh create-clusters devstage``` 57 | to create just the prod1, prod2 and brownfield clusters use ```./builder.sh create-clusters prod``` 58 | ``` 59 | ./builder.sh install-demo all 60 | ``` 61 | 62 | Note: 63 | to install just dev and stage demo components use ```./builder.sh install-demo devstage``` 64 | to install just prod demo components use ```./builder.sh install-demo prod``` 65 | 66 | This script automates the following: 67 | 68 | - Set k8s contexts and verify the clusters were created successfully 69 | - Install demo components on ```clusters.view.name``` 70 | - Carvel tools 71 | - TAP based on ```.config/tap-profiles/tap-view.yaml``` values 72 | - Custom app accelerators 73 | - Ingress rule to ```dns.sysSubDomain.domain``` 74 | - Install demo components on ```clusters.dev.name``` 75 | - Carvel tools 76 | - TAP based on ```.config/tap-profiles/tap-dev.yaml``` values 77 | - we use namespace-provisioner with the ```dektlong/dekt-gitops/resources``` repo for the following ```ootb-testing``` supplychain resources: 78 | - Java apps testing pipeline 79 | - Nodejs apps testing pipeline 80 | - Golang apps testing pipeline 81 | - Ingress rule to ```dns.devSubDomain.domain``` 82 | - Install demo components on ```clusters.stage.name``` 83 | - Carvel tools 84 | - TAP based on ```.config/tap-profiles/tap-stage.yaml``` values 85 | - we use namespace-provisioner with the ```dektlong/dekt-gitops/resources``` repo for the following ```ootb-testing-scanning``` supplychain resources: 86 | - Java apps testing pipeline 87 | - Nodejs apps testing pipeline
88 | - Golang apps testing pipeline 89 | - Scan policy 90 | - Install CarbonBlack scanner 91 | - Install Snyk scanner 92 | - Install demo components on ```clusters.prod1.name``` and ```clusters.prod2.name``` 93 | - Carvel tools 94 | - TAP based on ```.config/tap-profiles/tap-prod1.yaml``` and ```.config/tap-profiles/tap-prod2.yaml``` values 95 | - Ingress rules to ```dns.prod1SubDomain.domain``` and ```dns.prod2SubDomain.domain``` 96 | - Install demo components on Brownfield cluster 97 | - Spring Cloud Gateway operator 98 | - Brownfield APIs SCGW instances and routes in ```brownfield-apis``` ns 99 | - Add brownfield 'consumer' k8s services to TAP clusters in ```brownfield-apis``` ns 100 | - Attach all clusters to TMC via the TMC API 101 | 102 | 103 | ## Running the demo 104 | 105 | ### Inner loop 106 | 107 | - access tap gui accelerators via the ```cloud-native-devs``` tag 108 | - create the ```mood-sensors``` workload using the api-microservice accelerator 109 | - use ```devx-mood``` as the parent application 110 | - show the service claims abstraction 111 | - (optional) create the ```mood-portal``` workload using the web-function accelerator 112 | - (optional) create the ```mood-doctor``` workload using the node.js accelerator 113 | 114 | 115 | - access the api-portal and highlight how existing APIs are discovered before new ones are created 116 | - if planning to show Brownfield APIs (see below), highlight how a developer can simply access an off-platform external service by calling the 'brownfield URL' directly, e.g. ```sentiment.tanzu-sm.io/v1/check-sentiment``` 117 | 118 | - highlight the simplicity of the ```workload.yaml``` 119 | 120 | - show the demo clusters' TAP packages and profiles 121 | ``` 122 | ./demo.sh info 123 | ``` 124 | 125 | 126 | - Deploy innerloop workloads 127 | ``` 128 | ./demo.sh dev 129 | ``` 130 | 131 | - Track the progress of the supply chains in the TAP gui or CLI 132 | ``` 133 | ./demo.sh track dev [logs] 134 | ``` 135 | - show how the RabbitMQ 'reading' single-instance resource was created 136 | - show how the Bitnami Postgres 'inventory' resource was provisioned 137 | - show the service claims generated for both data services and mapped to the workload 138 | 139 | - access the live URL at mood-portal.```dns.devSubdomain```.```dns.domain``` and show the call back to the mood-sensors APIs and the mood-analyzer outputs in () 140 | 141 | - show the ```devx-mood``` system components 142 | - click on ```mood-sensors``` to show application live view 143 | 144 | ### Outer loop 145 | - 'promote' to the Staging phase 146 | ``` 147 | ./demo.sh stage 148 | ``` 149 | - show the workload created pointing to the ```release``` branch instead of the ```dev``` branch 150 | - show the enhanced supply chain (dekt-src-to-api-with-scan with scanning) progress on multi-cluster Backstage 151 | - show the Deliverables created in your gitops.stage repo, but NO runtime artifacts deployed 152 | - Track provisioned artifacts 153 | ``` 154 | ./demo.sh track stage 155 | ``` 156 | - show how the RDS Postgres 'inventory' resource was provisioned 157 | - note: this will take a few minutes to provision in the RDS console 158 | - show the service claims generated for both data services and mapped to the workload 159 | 160 | - 'promote' to the Run cluster (Deliverable) 161 | ``` 162 | ./demo.sh prod 163 | ``` 164 | - show the deliverables deployed to ```app-namespaces.stageProd``` without building source/scanning 165 | - show that the new Deliverable is deployed on the production domain - mood-portal.```dns.prodSubdomain```.```dns.domain``` 166 | - show how the RDS Postgres 'inventory' resource was provisioned
167 | - note: this will take a few minutes to provision in the RDS console 168 | - Track provisioned data services 169 | 170 | ``` 171 | ./demo.sh services prod 172 | ``` 173 | - show how the RDS Postgres 'inventory' resource was provisioned 174 | - note: this will take a few minutes to provision in the RDS console 175 | 176 | 177 | - The Secured Platform Team 178 | - Showcase how Tanzu supports a secured platform team across the **4Cs of cloud native security** (https://kubernetes.io/docs/concepts/security/overview) 179 | - **Code** (TAP SupplyChain: source scan, build service, pod conventions) 180 | - **Container** (Carbon Black: k8s runtime, app image scanning) 181 | - **Clusters** (TMC OPA, TSM secure connectivity) 182 | - **Clouds** (CSPM with SecureState, Aria Graph showing the 'devx-mood' app security guardrails) 183 | 184 | 185 | 186 | ### Brownfield APIs (optional) 187 | 188 | - Highlight simple developer and staging access on the TAP cluster in the ```brownfield-consumer``` namespace, as if the external services were just local k8s services 189 | - Create a Global Namespace named ```brownfield``` 190 | - Domain: gns.``` 191 | - Map the ```brownfield-provider``` ns in the ```dekt-brownfield``` cluster to the ```brownfield-consumer``` ns in the ```dekt-stage``` cluster 192 | - Skip the option to add gateway instances (they are already created), but highlight that functionality 193 | 194 | ### Create custom supply chains via Accelerators (optional) 195 | 196 | - access tap gui accelerators using the ```cloud-native-devsecops``` tag 197 | - create the ```dekt-src-to-api-with-scan``` supplychain using the microservices-supplychain accelerator with the ```dekt-api``` workload type 198 | - include testing, binding and scanning phases, leveraging the out-of-the-box supply-chain templates 199 | 200 | - highlight the separation of concerns between supplychain (AppOps) and supplychain-templates (Platform Ops) 201 | 202 | - show the applied supply chains using ```./demo.sh supplychains``` 203 | 204 | ## Cleanup 205 | 206 | - partial cleanup to remove workloads and reset configs ```./demo.sh reset``` 207 | 208 | - full cleanup to delete all clusters ```./builder.sh delete-all``` 209 | 210 | ### Enjoy! 211 | 212 | 213 | # Extras 214 | 215 | ## API-grid demo addition 216 | ### Preparations 217 | - Update ```dekt4pets/dekt4pets-backend.yml``` to match ```tap-values-full``` 218 | - Update the ```serverUrl:``` value in ```dekt4pets/gateway/dekt4pets-gatway.yml``` and ```dekt4pets/gateway/dekt4pets-gatway-dev.yml``` to match tap-values 219 | - Update the ```host:``` value in the ```brownfield-apis``` files to match ```tap-values-full``` 220 | 221 | ### Installation 222 | - ```./api-grid.sh init``` 223 | - create the ```dekt4pets-backend``` and ```dekt4pets-frontend``` images 224 | - set up SSO and app configs 225 | - deploy dekt4pets-dev-gateway 226 | 227 | ### Running the demo 228 | - deploy dekt4pets-backend ```./api-grid.sh backend``` 229 | - show in api-portal the dekt4pets-dev item added in real time and how the frontend team can discover and re-use the backend APIs 230 | 231 | - deploy dekt4pets-frontend ```./api-grid.sh frontend``` 232 | - show in api-portal how the frontend routes are added in real time 233 | - deploy a production gateway with ingress access ```./api-grid.sh dekt4pets``` 234 | - show in api-portal the new dekt4pets item 235 | - highlight the separation between routes and gateway runtime (see the sketch below) 236 | - Note! these apps are not using the TAP supply chain
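A minimal sketch of that route/runtime separation, assuming the Spring Cloud Gateway for Kubernetes operator CRDs (resource names here are illustrative; the repo's actual gateway and route definitions live under ```brownfield-apis/``` and the dekt4pets files referenced above):
```
# gateway runtime instance - deployed and owned separately from the routes
apiVersion: "tanzu.vmware.com/v1"
kind: SpringCloudGateway
metadata:
  name: dekt4pets-gateway
---
# route definitions - owned by the app team and updated independently of the runtime
apiVersion: "tanzu.vmware.com/v1"
kind: SpringCloudGatewayRouteConfig
metadata:
  name: dekt4pets-backend-routes
spec:
  routes:
  - predicates:
    - Path=/api/check-adopter
    - Method=GET
    filters:
    - RateLimit=3,60s
---
# mapping - attaches the route config to the gateway instance
apiVersion: "tanzu.vmware.com/v1"
kind: SpringCloudGatewayMapping
metadata:
  name: dekt4pets-backend-mapping
spec:
  gatewayRef:
    name: dekt4pets-gateway
  routeConfigRef:
    name: dekt4pets-backend-routes
```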
237 | 238 | ### Inner loop 239 | - Access the app accelerator developer instance on ```acc..``` 240 | - Development curated start 241 | - Select the ```onlinestore-dev``` tag 242 | - Select the ```Backend API for online-stores``` accelerator 243 | - Select different deployment options and show the generated files 244 | - Select different API-grid options and show the generated files 245 | - ```./demo.sh backend``` 246 | - Show how build service detects git-repo changes and auto re-builds the backend-image (if required) 247 | - Show how the ```dekt4pets-gateway``` micro-gateway starts quickly as just a component of your app 248 | - Access API Hub on ```api-portal..``` 249 | - Show the dekt4Pets API group auto-populated with the API spec you defined 250 | - now the frontend team can easily discover, test and reuse the backend APIs 251 | - Show the other API groups ('brownfield APIs') 252 | - ```./demo.sh frontend``` 253 | - Access Spring Boot Observer at ```http://alv../apps``` to show actuator information on the backend application 254 | - Show the new frontend APIs that were auto-populated to the API portal 255 | 256 | ### Outer loop 257 | - DevOps curated start 258 | - Select the ```onlinestore-devops``` tag 259 | - Select the ```API Driven Microservices workflow``` accelerator 260 | - Select different deployment options and show the generated files 261 | - Select different API-grid options and show the generated files 262 | - Show the supply chain created via ```./demo.sh describe``` 263 | - ```./demo.sh dekt4pets``` 264 | - show how the full supplychain for taking the app to production is manifested 265 | - This phase will also add an ingress rule to the gateway, so now you can show: 266 | - External traffic can only be routed via the micro-gateway 267 | - Frontend and backend microservices still cannot be accessed directly 268 | - Access the application on 269 | ``` 270 | https://dekt4pets.. 271 | ``` 272 | - log in and show the SSO functionality 273 | 274 | ### Brownfield APIs 275 | - now the backend team will leverage the 'brownfield' APIs to add background-check functionality for potential adopters 276 | - access the 'datacheck' API group and test the adoption-history and background-check APIs 277 | - explain that other development teams now know exactly how to use a verified working version of both APIs (no tickets to off-platform teams) 278 | 279 | #### Demo brownfield API use via adding a route and patching the backend app 280 | - In ```workloads/dekt4pets/backend/routes/dekt4pets-backend-routes.yaml``` add 281 | ``` 282 | - predicates: 283 | - Path=/api/check-adopter 284 | - Method=GET 285 | filters: 286 | - RateLimit=3,60s 287 | tags: 288 | - "Pets" 289 | ``` 290 | - In ```workloads/dekt4pets/backend/src/main/.../AnimalController.java``` add 291 | ``` 292 | @GetMapping("/check-adopter") 293 | public String checkAdopter(Principal adopter) { 294 | 295 | if (adopter == null) { 296 | return "Error: Invalid adopter ID"; 297 | } 298 | 299 | String adopterID = adopter.getName(); 300 | 301 | String adoptionHistoryCheckURI = "UPDATE_FROM_API_PORTAL" + adopterID; 302 | 303 | RestTemplate restTemplate = new RestTemplate(); 304 | 305 | try 306 | { 307 | String result = restTemplate.getForObject(adoptionHistoryCheckURI, String.class); 308 | } 309 | catch (Exception e) {} 310 | 311 | return "

Congratulations," + 312 | "Adopter " + adopterID + ", you are cleared to adopt your next best friend."; 313 | } 314 | 315 | ``` 316 | - ```./demo.sh backend -u ``` 317 | - show how build-service invokes a new image build based on the git-commit-id 318 | - run the new check-adopter API 319 | ``` 320 | dekt4pets../api/check-adopter 321 | ``` 322 | - you should see the 'Congratulations...' message with the same token you received following login 323 | #### Demo brownfield API use via a Cloud Native Runtime function 324 | - ```./demo.sh adopter-check ``` 325 | - call the function via curl 326 | ``` 327 | curl -w'\n' -H 'Content-Type: text/plain' adopter-check.dekt-apps.SERVING_SUB_DOMAIN.dekt.io \ 328 | -d "datacheck.tanzu.dekt.io/adoption-history/109141744605375013560" 329 | ``` 330 | - example output 331 | ``` 332 | Running adoption history check.. 333 | 334 | API: datacheck.tanzu.dekt.io/adoption-history/109141744605375013560 335 | Result: APPROVED 336 | 337 | Source: revision 1 of adopter-check 338 | ``` 339 | - show how the function scales to zero after no use for 60 seconds 340 | ``` kubectl get pods -n dekt-apps ``` 341 | - create a new revision 342 | ```./demo.sh adopter-check -u ``` 343 | - show how a new revision receiving 20% of the traffic is created 344 | -------------------------------------------------------------------------------- /TAP.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dektlong/dekt-devx-demo/895007f962fe7c6789b90d3b951dea428aab4bf2/TAP.png -------------------------------------------------------------------------------- /acc-values.yaml: -------------------------------------------------------------------------------- 1 | server: 2 | service_type: "LoadBalancer" 3 | watched_namespace: "accelerator-system" 4 | samples: 5 | include: true -------------------------------------------------------------------------------- /accelerators/api-microservice-boot.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: api-microservice-boot 5 | spec: 6 | git: 7 | interval: 10s 8 | url: https://github.com/dektlong/api-microservice-boot 9 | ref: 10 | branch: main -------------------------------------------------------------------------------- /accelerators/data-transfer.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: my-phyton-data-transform-process 5 | spec: 6 | git: 7 | url: https://github.com/dektlong/python-data-transform 8 | ref: 9 | branch: main -------------------------------------------------------------------------------- /accelerators/datatransform-supplychain.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: datatransform-supplychain 5 | spec: 6 | git: 7 | url: https://github.com/dektlong/datatransform-supplychain 8 | ref: 9 | branch: main -------------------------------------------------------------------------------- /accelerators/event-driven.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: my-kafka-eventing-app 5 | spec: 6 | git: 7 | url: https://github.com/dektlong/spring-kafka 8 | ref: 9 | branch: main
-------------------------------------------------------------------------------- /accelerators/frontend-ux.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: my-frontend-ux 5 | spec: 6 | git: 7 | url: https://github.com/dektlong/frontend-ux 8 | ref: 9 | branch: main -------------------------------------------------------------------------------- /accelerators/llm-boot.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: llm-boot 5 | spec: 6 | git: 7 | interval: 10s 8 | url: https://github.com/0pens0/spring-metal 9 | ref: 10 | branch: main -------------------------------------------------------------------------------- /accelerators/machine-learning.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: my-kube-flow-ml-model 5 | spec: 6 | git: 7 | url: https://github.com/dektlong/kubeflow 8 | ref: 9 | branch: main -------------------------------------------------------------------------------- /accelerators/microservices-supplychain.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: microservices-supplychain 5 | spec: 6 | git: 7 | url: https://github.com/dektlong/microservices-supplychain 8 | ref: 9 | branch: main -------------------------------------------------------------------------------- /accelerators/testme.yml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Fragment 3 | metadata: 4 | name: dekt-test 5 | spec: 6 | git: 7 | ref: 8 | branch: main 9 | url: https://github.com/vmware-tanzu/application-accelerator-samples.git 10 | subPath: fragments/app-sso-client -------------------------------------------------------------------------------- /accelerators/web-function.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: accelerator.apps.tanzu.vmware.com/v1alpha1 2 | kind: Accelerator 3 | metadata: 4 | name: my-web-function 5 | spec: 6 | git: 7 | url: https://github.com/dektlong/web-function 8 | ref: 9 | branch: main -------------------------------------------------------------------------------- /aria/Apply mission control policies.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: TMC - Image registry policy 3 | provider: TMC 4 | category: CONFIG 5 | subcategory: Foundation 6 | template_id: 6a.tmc.1 7 | version: v1 8 | description: enables image registry policy on a cluster namespace 9 | 10 | ## Required Parameters 11 | {% set cluster_name = params.get('cluster_name') %} 12 | {% set workspace_name = params.get('workspace_name') %} 13 | {% set namespace_name = params.get('namespace_name') %} 14 | {% set policy_name = params.get('policy_name') %} 15 | {% set label_env_name = params.get('label_env_name') %} 16 | {% set source_registry = params.get('source_registry') %} 17 | 18 | ## Optional Parameters 19 | {% set policy_type = 'image-policy' %} 20 | {% set policy_recipe = 'custom' %} 21 | 22 | # create workspace 23 | 24 | Create workspace 
in TMC {{workspace_name}}: 25 | META: 26 | name: Create workspace in TMC 27 | parameters: 28 | workspace_name: 29 | name: tmc workspace name 30 | description: name of the workspace to be created in tmc 31 | uiElement: text 32 | label_env_name: 33 | name: value for label env 34 | description: value for label env 35 | uiElement: text 36 | tmc.workspaces.present: 37 | - name: {{workspace_name}} 38 | - description: demo workspace 39 | - labels: 40 | env: {{label_env_name}} 41 | 42 | # create/attach namespace to a workspace 43 | 44 | Create custer namespace {{cluster_name}}.{{namespace_name}}: 45 | META: 46 | name: Create custer namespace 47 | parameters: 48 | cluster_name: 49 | name: cluster name 50 | description: name of the cluster 51 | uiElement: text 52 | namespace_name: 53 | name: namespace name 54 | description: name of the namespace to be created on cluster 55 | uiElement: text 56 | label_env_name: 57 | name: value for label env 58 | description: value for label env 59 | uiElement: text 60 | tmc.cluster_namespaces.present: 61 | - name: {{cluster_name}}.{{namespace_name}} 62 | - cluster_name: {{cluster_name}} 63 | - namespace_name: {{namespace_name}} 64 | - workspace_name: "${tmc.workspaces:Create workspace in TMC {{workspace_name}}:name}" 65 | - description: demo namespace 66 | - labels: 67 | env: {{label_env_name}} 68 | 69 | # create policy on workspace 70 | 71 | Create workspace policy in TMC {{workspace_name}}.{{policy_name}}: 72 | META: 73 | name: Create workspace policy in TMC 74 | parameters: 75 | policy_name: 76 | name: policy name 77 | description: name of the image registry policy 78 | uiElement: text 79 | source_registry: 80 | name: source registry name 81 | description: source registry from which an image can be pulled 82 | uiElement: text 83 | tmc.workspace_policies.present: 84 | - name: {{workspace_name}}.{{policy_name}} 85 | - workspace_name: "${tmc.workspaces:Create workspace in TMC {{workspace_name}}:name}" 86 | - policy_recipe: {{policy_recipe}} 87 | - policy_name: {{policy_name}} 88 | - policy_type: {{policy_type}} 89 | - policy_input: 90 | rules: 91 | - hostname: {{source_registry}} -------------------------------------------------------------------------------- /aria/Attach to mission control.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: TMC - Reference implementation attach AWS EKS cluster to TMC 3 | provider: TMC 4 | category: CONFIG 5 | subcategory: Reference 6 | template_id: 6b.tmc.1 7 | version: v2 8 | description: Attaches AWS EKS cluster to TMC and enables data protection on a cluster 9 | 10 | #Mandatory parameters 11 | {% set group_name = params.get('group_name') %} 12 | {% set cluster_name = params.get('cluster_name') %} 13 | {% set backup_location = params.get('backup_location') %} 14 | {% set region = params.get('region') %} 15 | {% set credential = params.get('credential') %} 16 | {% set target_provider = params.get('target_provider') %} 17 | {% set label_env_name = params.get('label_env_name') %} 18 | 19 | #Optional parameters 20 | {% set provider_name = 'tmc' %} 21 | {% set total_timeout_seconds = 240 %} 22 | {% set delay_interval_seconds = 30 %} 23 | {% set cluster_status = 'READY' %} 24 | {% set cluster_health = 'HEALTHY' %} 25 | 26 | 27 | #searches aws eks cluster 28 | Search AWS EKS cluster {{cluster_name}}: 29 | META: 30 | name: Search AWS EKS cluster 31 | parameters: 32 | cluster_name: 33 | name: cluster name 34 | description: name of the aws eks cluster 35 | uiElement: text 36 | 
exec.run: 37 | - path: aws.eks.cluster.get 38 | - kwargs: 39 | name: Search AWS EKS cluster {{cluster_name}} 40 | resource_id: {{cluster_name}} 41 | 42 | #create cluster group 43 | Create cluster group in TMC {{group_name}}: 44 | META: 45 | name: Create cluster group in TMC 46 | parameters: 47 | group_name: 48 | name: cluster group name 49 | description: name of the tmc cluster group 50 | uiElement: text 51 | label_env_name: 52 | name: value for label env 53 | description: value for label env 54 | uiElement: text 55 | tmc.cluster_groups.present: 56 | - name: {{group_name}} 57 | - description: "demo cluster group" 58 | - labels: 59 | env: {{label_env_name}} 60 | - require: 61 | - exec: Search AWS EKS cluster {{cluster_name}} 62 | 63 | #attaches cluster on tmc 64 | "Attach TMC cluster": 65 | META: 66 | name: Attaches EKS cluster in TMC 67 | parameters: 68 | cluster_name: 69 | name: cluster name 70 | description: name of the cluster to the attached in tmc 71 | uiElement: text 72 | group_name: 73 | name: cluster group name 74 | description: name of the tmc cluster group 75 | uiElement: text 76 | label_env_name: 77 | name: value for label env 78 | description: value for label env 79 | uiElement: text 80 | tmc.clusters.present: 81 | - cluster_name : {{cluster_name}} 82 | - group_name: {{group_name}} 83 | - description: "demo cluster" 84 | - labels: 85 | env: {{label_env_name}} 86 | - require: 87 | - tmc.cluster_groups: Create cluster group in TMC {{group_name}} 88 | 89 | #generate eks token 90 | "Generate EKS token": 91 | META: 92 | name: Generates EKS token 93 | parameters: 94 | cluster_name: 95 | name: cluster name 96 | description: name of the cluster to the attached in tmc 97 | uiElement: text 98 | region: 99 | name: aws region name 100 | description: name of the aws region in which the aws EKS cluster is created 101 | uiElement: text 102 | ekstoken.token.present: 103 | - cluster_name: {{cluster_name}} 104 | - region: {{region}} 105 | - require: 106 | - tmc.clusters: "Attach TMC cluster" 107 | 108 | #apply TMC manifest uri on eks_cluster 109 | "Apply TMC installer on EKS cluster": 110 | META: 111 | name: Apply TMC manifest on EKS cluster 112 | parameters: 113 | cluster_name: 114 | name: cluster name 115 | description: name of the cluster to the attached in tmc 116 | uiElement: text 117 | kubernetes.manifest.present: 118 | - manifest_uri: ${tmc.clusters:Attach TMC cluster:status:installerLink} 119 | - cluster_name: {{cluster_name}} 120 | - cluster_config: 121 | provider: "aws" 122 | cluster_endpoint: ${exec:Search AWS EKS cluster {{cluster_name}}:endpoint} 123 | cluster_cert: ${exec:Search AWS EKS cluster {{cluster_name}}:certificate_authority:data} 124 | eks_cluster_token: ${ekstoken.token:generate eks_token:token} 125 | - require: 126 | - ekstoken.token: "Generate EKS token" 127 | 128 | #checks status of tmc cluster attachment 129 | "Check cluster status": 130 | META: 131 | name: Check status of EKS cluster attachment in TMC 132 | type: tmc_attach_cluster 133 | parameters: 134 | cluster_name: 135 | name: cluster name 136 | description: name of the cluster to the attached in tmc 137 | uiElement: text 138 | group_name: 139 | name: cluster group name 140 | description: name of the tmc cluster group 141 | uiElement: text 142 | tmc.clusters.present: 143 | - cluster_name: {{cluster_name}} 144 | - group_name: {{group_name}} 145 | - check_status: 146 | total_timeout_seconds: {{total_timeout_seconds}} 147 | - require: 148 | - kubernetes.manifest: "Apply TMC installer on EKS cluster" 149 | 150 | 
#create backup location 151 | Create backup location in TMC {{backup_location}}: 152 | META: 153 | name: Create backup location in TMC 154 | parameters: 155 | backup_location: 156 | name: backup_location name 157 | description: name of the backup location that you can use for storage of backups 158 | uiElement: text 159 | credential: 160 | name: credential name 161 | description: name of the credential used to connect to the target backup location 162 | target_provider: 163 | name: target provider name 164 | description: name of the target provider 165 | uiElement: text 166 | tmc.backup_locations.present: 167 | - name: {{backup_location}} 168 | - provider_name: {{provider_name}} 169 | - credential: 170 | name: {{credential}} 171 | - target_provider: {{target_provider}} 172 | - assigned_groups: 173 | - clustergroup: 174 | name: "${tmc.cluster_groups:Create cluster group in TMC {{group_name}}:name}" 175 | - require: 176 | - tmc.clusters: "Check cluster status" 177 | 178 | # Setup/ Enable data protection on cluster as present in TMC_Cluster_1 179 | Enable data protection in TMC {{cluster_name}}.{{backup_location}}: 180 | META: 181 | name: Enable data protection in TMC 182 | tmc.data_protections.present: 183 | - name: {{cluster_name}}.{{backup_location}} 184 | - cluster_name: "${tmc.clusters:Check cluster status:name}" 185 | - backup_location_names: 186 | - "${tmc.backup_locations:Create backup location in TMC {{backup_location}}:name}" 187 | - require: 188 | - tmc.backup_locations: Create backup location in TMC {{backup_location}} -------------------------------------------------------------------------------- /aria/CIS 1.5.0 benchmark test.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: Automation for Secure Clouds - CIS AWS foundation Benchmark 1.5.0 3 | provider: SecureState 4 | category: SECURITY 5 | subcategory: Reference 6 | template_id: 3b.ss_cis_aws.33 7 | version: v1 8 | description: Center for Internet Security AWS Foundations Security Benchmark 1.5.0 is a compliance standard for securing Amazon Web Services resources. 
This template will enable framework and all rules for CIS AWS foundation Benchmark 1.5.0 9 | 10 | {% set severity = 'All' %} 11 | 12 | enable_cis_aws_framework_benchmark: 13 | META: 14 | name: Enable CIS AWS foundation Benchmark 1.5.0 15 | securestate.framework.present: 16 | - name: CIS AWS foundation Benchmark 1.5.0 17 | - id: '91987e9a-86e8-4071-a748-595f1e313237' 18 | - status: Enabled 19 | 20 | enable_all_state_rule: 21 | META: 22 | name: Enable all rules for CIS AWS foundation Benchmark 1.5.0 23 | exec.run: 24 | - require: 25 | - securestate.framework: enable_cis_aws_framework_benchmark 26 | - path: securestate.framework.get_rule_by_frameworkId 27 | - kwargs: 28 | id: '91987e9a-86e8-4071-a748-595f1e313237' 29 | status: Enabled 30 | filter: 31 | severity: {{ severity }} 32 | 33 | #!require:enable_all_state_rule 34 | 35 | {% for rule_id in hub.idem.arg_bind.resolve('${exec.run:enable_all_state_rule}')['results'] %} 36 | {{rule_id}}: 37 | securestate.rule.present: 38 | - id: {{rule_id}} 39 | - status: Enabled 40 | {% endfor %} 41 | -------------------------------------------------------------------------------- /aria/CIS Kubernetes benchmark.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: Automation for Secure Clouds - CIS Kubernetes V1.23 Benchmark 3 | provider: SecureState 4 | category: SECURITY 5 | subcategory: Reference 6 | template_id: 3b.ss_cis_kubernetes.11 7 | version: v1 8 | description: Enables framework CIS Kubernetes V1.23 Benchmark and all of its rules. This framework is a compliance standard for securing Kubernetes resources. This template works with Secure State Paid version only. 9 | 10 | {% set severity = 'All' %} 11 | 12 | enable_cis_kubernetes_benchmark: 13 | META: 14 | name: Enable CIS Kubernetes V1.23 Benchmark 15 | securestate.framework.present: 16 | - name: CIS Kubernetes V1.23 Benchmark 17 | - id: '4e905288-8690-4319-ab08-a555e03d300a' 18 | - status: Enabled 19 | 20 | enable_all_state_rule: 21 | META: 22 | name: Enable all rules for CIS Kubernetes V1.23 Benchmark 23 | exec.run: 24 | - require: 25 | - securestate.framework: enable_cis_kubernetes_benchmark 26 | - path: securestate.framework.get_rule_by_frameworkId 27 | - kwargs: 28 | id: '4e905288-8690-4319-ab08-a555e03d300a' 29 | status: Enabled 30 | filter: 31 | severity: {{ severity }} 32 | 33 | #!require:enable_all_state_rule 34 | 35 | {% for rule_id in hub.idem.arg_bind.resolve('${exec.run:enable_all_state_rule}')['results'] %} 36 | {{rule_id}}: 37 | securestate.rule.present: 38 | - id: {{rule_id}} 39 | - status: Enabled 40 | {% endfor %} 41 | -------------------------------------------------------------------------------- /aria/Config Runtime Space Profile.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: App Runtime Profile (Space) 3 | provider: TMC 4 | category: CONFIG 5 | subcategory: Reference 6 | template_id: 6b.tmc.1 7 | version: v2 8 | description: App Runtime Profile (Space) 9 | 10 | {% set app_runtime = params.get('app_runtime') %} 11 | {% set languages = params.get('languages') %} 12 | {% set service_catalog = params.get('service_catalog') %} 13 | {% set pipeline = params.get('pipeline') %} 14 | {% set deployment = params.get('deployment') %} 15 | 16 | {% set workload_placement = params.get('workload_placement') %} 17 | {% set resource_allocation = params.get('resource_allocation') %} 18 | {% set ha_policy = params.get('ha_policy') %} 19 | {% set data_compliance = 
params.get('data_compliance') %} 20 | {% set service_binding = params.get('service_binding') %} 21 | 22 | {% set k8_namespace = params.get('k8_namespace') %} 23 | {% set advanced_key_value = params.get('advanced_key_value') %} 24 | 25 | 26 | #space Capabilities 27 | Profile Capabilities : 28 | META: 29 | name: Profile Capabilities 30 | parameters: 31 | app_runtime: 32 | name: Runtime 33 | uiElement: select 34 | options: 35 | - name: Tanzu Cloud Native Runtime (kNative) 36 | value: knative 37 | - name: Tanzu App Service Runtime (Cloud Foundry) 38 | value: cf 39 | - name: Vanilla k8s provider 40 | value: k8s 41 | - name: BYO middleware (base image) 42 | value: byo 43 | languages: 44 | name: Builders 45 | uiElement: multiselect 46 | options: 47 | - name: Spring/Java 48 | value: spring 49 | - name: .NET Core 50 | value: dotnet 51 | - name: Node.JS 52 | value: node 53 | - name: GoLang 54 | value: go 55 | - name: Phyton 56 | value: phyton 57 | - name: BYO (docker file) 58 | value: docker 59 | pipeline: 60 | name: Pipelines 61 | uiElement: multiselect 62 | options: 63 | - name: Functional Testing 64 | value: testing 65 | - name: Source Scanning 66 | value: src_scan 67 | - name: Image Scanning 68 | value: img_scan 69 | - name: Open API Validation 70 | value: api_validation 71 | - name: Enforce Pod Convensions 72 | value: conv 73 | - name: Manual deployment approval 74 | value: manual_deployment 75 | service_catalog: 76 | name: Services 77 | uiElement: multiselect 78 | options: 79 | - name: AWS curate marketplace 80 | value: aws 81 | - name: Azure curate marketplace 82 | value: azure 83 | - name: GCP curate marketplace 84 | value: gcp 85 | - name: Tanzu App Catalog (bitnami) 86 | value: bitnami 87 | - name: Helm charts 88 | value: helm 89 | deployment: 90 | name: Deployments 91 | uiElement: multiselect 92 | options: 93 | - name: Progressive delivery (blue/green) 94 | value: bluegreen 95 | - name: Autoscaling 96 | value: autoscaling 97 | - name: Brownfield APIs 98 | value: api 99 | - name: SLO metrics 100 | value: slo 101 | - name: Log aggregation and exfiltration 102 | value: logs 103 | tmc.cluster_groups.present: 104 | - app_runtime: {{app_runtime}} 105 | - languages: {{languages}} 106 | - service_catalog: {{service_catalog}} 107 | - pipeline: {{pipeline}} 108 | - deployment: {{deployment}} 109 | 110 | #policies 111 | Profile Policies: 112 | META: 113 | name: Profile Policies 114 | parameters: 115 | workload_placement: 116 | name: Workloads Placement and distribution 117 | uiElement: select 118 | options: 119 | - name: Best effort 120 | value: besteffort 121 | - name: Performance driven 122 | value: sla 123 | - name: Cost driven 124 | value: cost 125 | - name: Strict 126 | value: strict 127 | resource_allocation: 128 | name: CPU and memory resource allocation 129 | uiElement: select 130 | options: 131 | - name: Best effort 132 | value: besteffort 133 | - name: Performance driven 134 | value: sla 135 | - name: Cost driven 136 | value: cost 137 | - name: Strict 138 | value: strict 139 | ha_policy: 140 | name: High Availability 141 | uiElement: multiselect 142 | options: 143 | - name: Cloud Regions 144 | value: clouds 145 | - name: Avalability zones 146 | value: datacenter 147 | - name: Clusters 148 | value: cluster 149 | - name: App instances 150 | value: app 151 | data_compliance: 152 | name: Data Compliance 153 | uiElement: multiselect 154 | options: 155 | - name: Baseline 156 | value: baseline 157 | - name: GDPR 158 | value: GDPR 159 | - name: HIPAA 160 | value: HIPAA 161 | - name: PCI DSS 162 | 
value: pci 163 | - name: CCPA 164 | value: CCPA 165 | service_binding: 166 | name: Service binding 167 | uiElement: select 168 | options: 169 | - name: Provision and bind 170 | value: provision_bind 171 | - name: Bind only 172 | value: bind 173 | tmc.cluster_groups.present: 174 | - workload_placement: {{workload_placement}} 175 | - resource_allocation: {{resource_allocation}} 176 | - ha_policy: {{ha_policy}} 177 | - data_compliance: {{data_compliance}} 178 | - service_binding: {{service_binding}} 179 | 180 | #advanced 181 | Advanced Configurations {{advanced_key_value}}: 182 | META: 183 | name: Advanced Configurations 184 | parameters: 185 | k8_namespace: 186 | name: Mapped k8s namespaces 187 | uiElement: array 188 | advanced_key_value: 189 | name: Space config key-values 190 | uiElement: array 191 | tmc.cluster_groups.present: 192 | - k8_namespace: {{k8_namespace}} 193 | - advanced_key_value: {{advanced_key_value}} -------------------------------------------------------------------------------- /aria/Create EKS cluster.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: AWS EKS - Cluster and Node Group 3 | provider: AWS 4 | category: CONFIG 5 | subcategory: Reference 6 | template_id: 6b.aws_eks.1 7 | version: v1 8 | description: Create a new EKS Cluster and attach Node Group to this Cluster and create all the pre-requisites required for creation cluster and node group. 9 | 10 | {% set cluster_name = params.get('cluster_name', 'cluster-1') %} 11 | {% set node_group = params.get('node_group','node-group-1') %} 12 | 13 | {% set instance_types = params.get('instance_types', 't3.medium') %} 14 | {% set ami_type = params.get('ami_type','AL2_x86_64') %} 15 | {% set desired_size = params.get('desired_size','2') %} 16 | {% set max_size = params.get('max_size','2') %} 17 | {% set min_size = params.get('min_size','2') %} 18 | {% set disk_size = params.get('disk_size','20') %} 19 | 20 | {% set vpc_name = params.get('vpc_name','vpc-1') %} 21 | {% set public_subnet_1 = params.get('public_subnet_1','public-subnet-1') %} 22 | {% set public_subnet_2 = params.get('public_subnet_2','public-subnet-2') %} 23 | {% set private_subnet_1 = params.get('private_subnet_1','private-subnet-1') %} 24 | {% set private_subnet_2 = params.get('private_subnet_2','private-subnet-2') %} 25 | {% set public_availability_zone_1 = params.get('public_availability_zone_1','us-east-1a') %} 26 | {% set public_availability_zone_2 = params.get('public_availability_zone_2','us-east-1b') %} 27 | {% set private_availability_zone_1 = params.get('private_availability_zone_1','us-east-1c') %} 28 | {% set private_availability_zone_2 = params.get('private_availability_zone_2','us-east-1d') %} 29 | 30 | {% set internet_gateway = params.get('internet_gateway','internet-gateway-1') %} 31 | {% set public_route_table = params.get('public_route_table','public-routetable-1') %} 32 | 33 | {% set cluster_role = params.get('cluster_role','cluster-role-1') %} 34 | {% set worker_role = params.get('worker_role','worker-role-1') %} 35 | 36 | 37 | {{cluster_role}}: 38 | META: 39 | name: Create a Cluster Role 40 | parameters: 41 | cluster_role: 42 | name: Cluster Role 43 | description: Name of the Cluster Role 44 | uiElement: text 45 | aws.iam.role.present: 46 | - path: / 47 | - assume_role_policy_document: '{"Version": "2012-10-17","Statement": {"Effect": "Allow","Principal": {"Service": "eks.amazonaws.com"},"Action": "sts:AssumeRole"}}' 48 | - description: Allows access to other AWS service resources 
that are required to 49 | operate clusters managed by EKS. 50 | - max_session_duration: 3600 51 | 52 | {{worker_role}}: 53 | META: 54 | name: Create a Worker Role 55 | parameters: 56 | worker_role: 57 | name: Worker Role 58 | description: Name of the Worker Role 59 | uiElement: text 60 | aws.iam.role.present: 61 | - path: / 62 | - assume_role_policy_document: '{"Version": "2012-10-17","Statement": {"Effect": "Allow","Principal": {"Service": "ec2.amazonaws.com"},"Action": "sts:AssumeRole"}}' 63 | - description: Allows access to other AWS service resources that are required to 64 | operate clusters managed by EKS. 65 | - max_session_duration: 3600 66 | 67 | {{worker_role}}/AmazonEC2ContainerRegistryReadOnly: 68 | aws.iam.role_policy_attachment.present: 69 | - role_name: "${aws.iam.role:{{worker_role}}:name}" 70 | - policy_arn: arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly 71 | {{worker_role}}//AmazonEKSWorkerNodePolicy: 72 | aws.iam.role_policy_attachment.present: 73 | - role_name: "${aws.iam.role:{{worker_role}}:name}" 74 | - policy_arn: arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy 75 | {{worker_role}}//AmazonEKS_CNI_Policy: 76 | aws.iam.role_policy_attachment.present: 77 | - role_name: "${aws.iam.role:{{worker_role}}:name}" 78 | - policy_arn: arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy 79 | 80 | {{cluster_role}}/AmazonEKSClusterPolicy: 81 | aws.iam.role_policy_attachment.present: 82 | - role_name: "${aws.iam.role:{{cluster_role}}:name}" 83 | - policy_arn: arn:aws:iam::aws:policy/AmazonEKSClusterPolicy 84 | 85 | 86 | {{vpc_name}}: 87 | META: 88 | name: Create VPC for EKS Cluster 89 | parameters: 90 | vpc_name: 91 | name: VPC 92 | description: Name of the VPC 93 | uiElement: text 94 | aws.ec2.vpc.present: 95 | - instance_tenancy: default 96 | - tags: 97 | - Key: Name 98 | Value: {{vpc_name}} 99 | - cidr_block_association_set: 100 | - CidrBlock: 192.168.0.0/16 101 | - enable_dns_hostnames: true 102 | - enable_dns_support: true 103 | 104 | {{public_subnet_1}}: 105 | META: 106 | name: Create a Public Subnet 1 107 | parameters: 108 | public_subnet_1: 109 | name: Public Subnet 1 110 | description: Name of the Public Subnet 111 | uiElement: text 112 | public_availability_zone_1: 113 | name: Public Availability Zone 1 114 | description: Public Availability Zone 115 | uiElement: text 116 | aws.ec2.subnet.present: 117 | - vpc_id: "${aws.ec2.vpc:{{vpc_name}}:resource_id}" 118 | - cidr_block: 192.168.0.0/18 119 | - availability_zone: {{public_availability_zone_1}} 120 | - map_public_ip_on_launch: true 121 | - tags: 122 | - Key: Name 123 | Value: {{vpc_name}}-PublicSubnet01 124 | - Key: kubernetes.io/role/elb 125 | Value: '1' 126 | 127 | {{public_subnet_2}}: 128 | META: 129 | name: Create a Public Subnet 2 130 | parameters: 131 | public_subnet_2: 132 | name: Public Subnet 2 133 | description: Name of the Public Subnet 134 | uiElement: text 135 | public_availability_zone_2: 136 | name: Public Availability Zone 2 137 | description: Public Availability Zone 138 | uiElement: text 139 | aws.ec2.subnet.present: 140 | - vpc_id: "${aws.ec2.vpc:{{vpc_name}}:resource_id}" 141 | - cidr_block: 192.168.64.0/18 142 | - availability_zone: {{public_availability_zone_2}} 143 | - map_public_ip_on_launch: true 144 | - tags: 145 | - Key: Name 146 | Value: {{vpc_name}}-PublicSubnet02 147 | - Key: kubernetes.io/role/elb 148 | Value: '1' 149 | 150 | {{private_subnet_1}}: 151 | META: 152 | name: Create a Private Subnet 1 153 | parameters: 154 | private_subnet_1: 155 | name: Private Subnet 1 156 | 
description: Name of the Private Subnet 157 | uiElement: text 158 | private_availability_zone_1: 159 | name: Private Availability Zone 1 160 | description: Private Availability Zone 161 | uiElement: text 162 | aws.ec2.subnet.present: 163 | - vpc_id: "${aws.ec2.vpc:{{vpc_name}}:resource_id}" 164 | - cidr_block: 192.168.128.0/18 165 | - availability_zone: {{private_availability_zone_1}} 166 | - map_public_ip_on_launch: false 167 | - tags: 168 | - Key: kubernetes.io/role/internal-elb 169 | Value: '1' 170 | - Key: Name 171 | Value: {{vpc_name}}-PrivateSubnet01 172 | 173 | {{private_subnet_2}}: 174 | META: 175 | name: Create a Private Subnet 2 176 | parameters: 177 | private_subnet_2: 178 | name: Private Subnet 2 179 | description: Name of the Private Subnet 180 | uiElement: text 181 | private_availability_zone_2: 182 | name: Private Availability Zone 2 183 | description: Private Availability Zone 184 | uiElement: text 185 | aws.ec2.subnet.present: 186 | - vpc_id: "${aws.ec2.vpc:{{vpc_name}}:resource_id}" 187 | - cidr_block: 192.168.192.0/18 188 | - availability_zone: {{private_availability_zone_2}} 189 | - map_public_ip_on_launch: false 190 | - tags: 191 | - Key: Name 192 | Value: {{vpc_name}}-PrivateSubnet02 193 | - Key: kubernetes.io/role/internal-elb 194 | Value: '1' 195 | 196 | {{internet_gateway}}: 197 | META: 198 | name: Create a Internet Gateway 199 | parameters: 200 | internet_gateway: 201 | name: Internet Gateway 202 | description: Name of the Internet Gateway 203 | uiElement: text 204 | aws.ec2.internet_gateway.present: 205 | - vpc_id: 206 | - "${aws.ec2.vpc:{{vpc_name}}:resource_id}" 207 | 208 | {{public_route_table}}: 209 | META: 210 | name: Create a Public Route Table 211 | parameters: 212 | public_route_table: 213 | name: Public Route Table 214 | description: Name of the Public Route Table 215 | uiElement: text 216 | aws.ec2.route_table.present: 217 | - require: 218 | - aws.ec2.internet_gateway: {{internet_gateway}} 219 | - routes: 220 | - DestinationCidrBlock: 0.0.0.0/0 221 | GatewayId: "${aws.ec2.internet_gateway:{{internet_gateway}}:resource_id}" 222 | - tags: 223 | - Key: Name 224 | Value: Public Subnets 225 | - Key: Network 226 | Value: Public 227 | - vpc_id: "${aws.ec2.vpc:{{vpc_name}}:resource_id}" 228 | 229 | "RoutetableAssociation-{{public_subnet_1}}": 230 | META: 231 | name: Associates route table with {{public_subnet_1}} 232 | aws.ec2.route_table_association.present: 233 | - require: 234 | - aws.ec2.route_table: {{public_route_table}} 235 | - route_table_id: "${aws.ec2.route_table:{{public_route_table}}:resource_id}" 236 | - subnet_id: "${aws.ec2.subnet:{{public_subnet_1}}:resource_id}" 237 | 238 | "RoutetableAssociation-{{public_subnet_2}}": 239 | META: 240 | name: Associates route table with {{public_subnet_2}} 241 | aws.ec2.route_table_association.present: 242 | - require: 243 | - aws.ec2.route_table: {{public_route_table}} 244 | - route_table_id: "${aws.ec2.route_table:{{public_route_table}}:resource_id}" 245 | - subnet_id: "${aws.ec2.subnet:{{public_subnet_2}}:resource_id}" 246 | 247 | 248 | {{cluster_name}}: 249 | META: 250 | name: Create EKS Cluster 251 | aws.eks.cluster.present: 252 | - require: 253 | - aws.ec2.subnet: {{public_subnet_1}} 254 | - aws.ec2.subnet: {{public_subnet_2}} 255 | - aws.ec2.subnet: {{private_subnet_1}} 256 | - aws.ec2.subnet: {{private_subnet_2}} 257 | - role_arn: "${aws.iam.role:{{cluster_role}}:arn}" 258 | - version: '1.21' 259 | - resources_vpc_config: 260 | endpointPrivateAccess: false 261 | endpointPublicAccess: true 262 | 
publicAccessCidrs: 263 | - 0.0.0.0/0 264 | securityGroupIds: [] 265 | subnetIds: 266 | - "${aws.ec2.subnet:{{public_subnet_1}}:resource_id}" 267 | - "${aws.ec2.subnet:{{public_subnet_2}}:resource_id}" 268 | - "${aws.ec2.subnet:{{private_subnet_1}}:resource_id}" 269 | - "${aws.ec2.subnet:{{private_subnet_2}}:resource_id}" 270 | - kubernetes_network_config: 271 | ipFamily: ipv4 272 | serviceIpv4Cidr: 10.100.0.0/16 273 | - logging: 274 | clusterLogging: 275 | - enabled: false 276 | types: 277 | - api 278 | - audit 279 | - authenticator 280 | - controllerManager 281 | - scheduler 282 | - tags: {} 283 | 284 | {{node_group}}: 285 | META: 286 | name: Create EKS Node Group 287 | parameters: 288 | cluster_name: 289 | name: Cluster Name 290 | description: Name of the cluster 291 | uiElement: text 292 | desired_size: 293 | name: Desired Size 294 | description: The current number of nodes that the managed node group should maintain. 295 | uiElement: int 296 | max_size: 297 | name: Max Size 298 | description: The maximum number of nodes that the managed node group can scale out to. 299 | uiElement: int 300 | min_size: 301 | name: Min Size 302 | description: The minimum number of nodes that the managed node group can scale in to. 303 | uiElement: int 304 | instance_types: 305 | name: Instance Types 306 | description: Specify the instance types for a node group. 307 | uiElement: text 308 | ami_type: 309 | name: AMI Type 310 | description: The AMI type for your node group. 311 | uiElement: text 312 | disk_size: 313 | name: Disk Size 314 | description: The root device disk size (in GiB) for your node group instances. 315 | uiElement: int 316 | aws.eks.nodegroup.present: 317 | - require: 318 | - aws.eks.cluster: {{cluster_name}} 319 | - cluster_name: {{cluster_name}} 320 | - version: '1.21' 321 | - release_version: 1.21.5-20220406 322 | - capacity_type: ON_DEMAND 323 | - scaling_config: 324 | desiredSize: {{desired_size}} 325 | maxSize: {{max_size}} 326 | minSize: {{min_size}} 327 | - instance_types: 328 | - {{instance_types}} 329 | - subnets: 330 | - "${aws.ec2.subnet:{{public_subnet_1}}:resource_id}" 331 | - "${aws.ec2.subnet:{{public_subnet_2}}:resource_id}" 332 | - ami_type: {{ami_type}} 333 | - node_role: "${aws.iam.role:{{worker_role}}:arn}" 334 | - labels: {} 335 | - disk_size: {{disk_size}} 336 | - update_config: 337 | maxUnavailable: 1 338 | - tags: {} -------------------------------------------------------------------------------- /aria/Delete mission control policies.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: TMC - Image registry policy 3 | provider: TMC 4 | category: CONFIG 5 | subcategory: Foundation 6 | template_id: 6a.tmc.1 7 | version: v1 8 | description: enables image registry policy on a cluster namespace 9 | 10 | ## Required Parameters 11 | {% set cluster_name = params.get('cluster_name') %} 12 | {% set workspace_name = params.get('workspace_name') %} 13 | {% set namespace_name = params.get('namespace_name') %} 14 | {% set policy_name = params.get('policy_name') %} 15 | {% set label_env_name = params.get('label_env_name') %} 16 | {% set source_registry = params.get('source_registry') %} 17 | 18 | ## Optional Parameters 19 | {% set policy_type = 'image-policy' %} 20 | {% set policy_recipe = 'custom' %} 21 | 22 | # create policy on workspace 23 | 24 | {{workspace_name}}.{{policy_name}}: 25 | META: 26 | name: Create workspace policy in TMC. 
27 | parameters: 28 | policy_name: 29 | name: policy name 30 | description: name of the image registry policy 31 | uiElement: text 32 | source_registry: 33 | name: source registry name 34 | description: source registry from which an image can be pulled 35 | uiElement: text 36 | tmc.workspace_policies.absent: 37 | - workspace_name: {{workspace_name}} 38 | - policy_recipe: {{policy_recipe}} 39 | - policy_name: {{policy_name}} 40 | - policy_type: {{policy_type}} 41 | - policy_input: 42 | rules: 43 | - hostname: {{source_registry}} 44 | 45 | # create/attach namespace to a workspace 46 | 47 | {{cluster_name}}.{{namespace_name}}: 48 | META: 49 | name: Create custer namespace 50 | parameters: 51 | cluster_name: 52 | name: cluster name 53 | description: name of the cluster 54 | uiElement: text 55 | namespace_name: 56 | name: namespace name 57 | description: name of the namespace to be created on cluster 58 | uiElement: text 59 | label_env_name: 60 | name: value for label env 61 | description: value for label env 62 | uiElement: text 63 | tmc.cluster_namespaces.absent: 64 | - cluster_name: {{cluster_name}} 65 | - namespace_name: {{namespace_name}} 66 | - workspace_name: {{workspace_name}} 67 | - description: demo namespace 68 | - labels: 69 | env: {{label_env_name}} 70 | - require: 71 | - tmc.workspace_policies: {{workspace_name}}.{{policy_name}} 72 | 73 | 74 | # create workspace 75 | 76 | {{workspace_name}}: 77 | META: 78 | name: Create workspace in TMC 79 | parameters: 80 | workspace_name: 81 | name: tmc workspace name 82 | description: name of the workspace to be created in tmc 83 | uiElement: text 84 | label_env_name: 85 | name: value for label env 86 | description: value for label env 87 | uiElement: text 88 | tmc.workspaces.absent: 89 | - name: {{workspace_name}} 90 | - description: demo workspace 91 | - labels: 92 | env: {{label_env_name}} 93 | - require: 94 | - tmc.cluster_namespaces: {{cluster_name}}.{{namespace_name}} -------------------------------------------------------------------------------- /aria/Detach cluster from mission control.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: TMC - Reference implementation attach AWS EKS cluster to TMC 3 | provider: TMC 4 | category: CONFIG 5 | subcategory: Reference 6 | template_id: 6b.tmc.1 7 | version: v2 8 | description: Dettaches AWS EKS cluster attached to TMC 9 | #Mandatory parameters 10 | {% set group_name = params.get('group_name') %} 11 | {% set cluster_name = params.get('cluster_name') %} 12 | #attaches cluster on tmc 13 | "Attach TMC cluster": 14 | META: 15 | name: Attaches EKS cluster in TMC 16 | parameters: 17 | cluster_name: 18 | name: cluster name 19 | description: name of the cluster to the attached in tmc 20 | uiElement: text 21 | group_name: 22 | name: cluster group name 23 | description: name of the tmc cluster group 24 | uiElement: text 25 | label_env_name: 26 | name: value for label env 27 | description: value for label env 28 | uiElement: text 29 | tmc.clusters.absent: 30 | - cluster_name : {{cluster_name}} 31 | - group_name: {{group_name}} 32 | - description: "demo cluster" -------------------------------------------------------------------------------- /aria/PCI DSS 4.0 checks.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: Automation for Secure Clouds - PCI DSS 4.0 3 | provider: SecureState 4 | category: SECURITY 5 | subcategory: Reference 6 | template_id: 3b.ss_pci_dss.11 7 | version: v1 8 | description: 
This template will enable PCI DSS 4.0 framework and all of its rules. This template works with Secure State Paid version only. 9 | 10 | {% set severity = 'All' %} 11 | 12 | enable_cis_pci_dss: 13 | META: 14 | name: Enable PCI DSS 4.0 15 | securestate.framework.present: 16 | - name: PCI DSS 4.0 17 | - id: 'a68daabd-4c88-4da8-a726-f55de82f681b' 18 | - status: Enabled 19 | 20 | enable_all_state_rule: 21 | META: 22 | name: Enable all rules for PCI DSS 4.0 23 | exec.run: 24 | - require: 25 | - securestate.framework: enable_cis_pci_dss 26 | - path: securestate.framework.get_rule_by_frameworkId 27 | - kwargs: 28 | id: 'a68daabd-4c88-4da8-a726-f55de82f681b' 29 | status: Enabled 30 | filter: 31 | severity: {{ severity }} 32 | 33 | #!require:enable_all_state_rule 34 | 35 | {% for rule_id in hub.idem.arg_bind.resolve('${exec.run:enable_all_state_rule}')['results'] %} 36 | {{rule_id}}: 37 | securestate.rule.present: 38 | - id: {{rule_id}} 39 | - status: Enabled 40 | {% endfor %} 41 | -------------------------------------------------------------------------------- /aria/install-tap-aks.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: Install TAP on AKS 3 | provider: SaltStack 4 | category: SECURITY 5 | subcategory: Reference 6 | template_id: 3b.ssc.11 7 | version: v1 8 | description: Install TAP on AKS 9 | 10 | {% set tgt_name = params.get('tgt_name') %} 11 | {% set tgt_name1 = params.get('tgt_name1') %} 12 | {% set tgt_desc = params.get('tgt_desc') %} 13 | {% set policy_name = params.get('policy_name') %} 14 | {% set tgt_type = params.get('tgt_type') %} 15 | {% set tgt_value = params.get('tgt_value') %} 16 | {% set remediate = params.get('remediate') %} 17 | {% set benchmark_names = params.get('benchmark_names') %} 18 | 19 | 20 | Create Target {{ tgt_name }}: 21 | META: 22 | name: Install TAP 23 | parameters: 24 | tgt_name: 25 | description: Name of TAP cluster 26 | name: Cluster name 27 | uiElement: text 28 | tgt_desc: 29 | description: AKS nodes 30 | name: AKS nodes 31 | uiElement: text 32 | tgt_type: 33 | description: AKS region 34 | name: AKS region 35 | uiElement: select 36 | options: 37 | - name: US West 38 | value: westus 39 | - name: UK 40 | value: ukwest 41 | - name: Europe 42 | value: eu-central-1 43 | - name: South east asia 44 | value: ap-southeast1 45 | tgt_value: 46 | description: TAP profile 47 | name: TAP profile 48 | uiElement: select 49 | options: 50 | - name: iterate 51 | value: iterate 52 | - name: build 53 | value: build 54 | - name: run 55 | value: run 56 | - name: full 57 | value: full 58 | - name: view 59 | value: view 60 | saltstack.target.present: 61 | - name: {{ tgt_name }} 62 | - desc: {{ tgt_desc }} 63 | - tgt_type: {{ tgt_type }} 64 | - tgt: {{tgt_value}} 65 | 66 | Create Policy on target {{ policy_name }}: 67 | META: 68 | name: Configure TAP 69 | parameters: 70 | policy_name: 71 | description: Deployment domain 72 | name: Deployment domain 73 | uiElement: text 74 | tgt_name1: 75 | description: Installed scanners 76 | name: Installed scanners 77 | uiElement: multiselect 78 | options: 79 | - name: Aqua 80 | value: Aqua 81 | - name: Carbon black 82 | value: carbonblack 83 | - name: Grype 84 | value: grype 85 | - name: Snyk 86 | value: snyk 87 | remediate: 88 | description: Data-services connector 89 | name: Data-services connector 90 | uiElement: multiselect 91 | options: 92 | - name: Crossplane 93 | value: crossplane 94 | - name: Services Toolkit 95 | value: svc-toolkit 96 | - name: K8s secrets 97 | value: k8s-secret 
98 | benchmark_names: 99 | description: TAP install values 100 | name: TAP install values 101 | uiElement: array 102 | saltstack.policy.present: 103 | - require: 104 | - saltstack.target: Create Target {{ tgt_name1 }} 105 | - name: {{ policy_name }} 106 | - tgt_name1: {{ tgt_name1 }} 107 | - remediate: {{ remediate }} 108 | - benchmark_names: {{ benchmark_names }} -------------------------------------------------------------------------------- /aria/install-tap-eks.sls: -------------------------------------------------------------------------------- 1 | META: 2 | name: Install TAP on EKS 3 | provider: SaltStack 4 | category: SECURITY 5 | subcategory: Reference 6 | template_id: 3b.ssc.11 7 | version: v1 8 | description: Install TAP on EKS 9 | 10 | {% set tgt_name = params.get('tgt_name') %} 11 | {% set tgt_name1 = params.get('tgt_name1') %} 12 | {% set tgt_desc = params.get('tgt_desc') %} 13 | {% set policy_name = params.get('policy_name') %} 14 | {% set tgt_type = params.get('tgt_type') %} 15 | {% set tgt_value = params.get('tgt_value') %} 16 | {% set remediate = params.get('remediate') %} 17 | {% set benchmark_names = params.get('benchmark_names') %} 18 | 19 | 20 | Create Target {{ tgt_name }}: 21 | META: 22 | name: Install TAP 23 | parameters: 24 | tgt_name: 25 | description: Name of TAP cluster 26 | name: Cluster name 27 | uiElement: text 28 | tgt_desc: 29 | description: EKS nodes 30 | name: EKS nodes 31 | uiElement: text 32 | tgt_type: 33 | description: EKS region 34 | name: EKS region 35 | uiElement: select 36 | options: 37 | - name: US West 38 | value: us-west-1 39 | - name: US East 40 | value: us-east-1 41 | - name: Europe 42 | value: eu-central 43 | - name: East Asia 44 | value: eastasia 45 | - name: Australia 46 | value: australiasoutheast 47 | tgt_value: 48 | description: TAP profile 49 | name: TAP profile 50 | uiElement: select 51 | options: 52 | - name: iterate 53 | value: iterate 54 | - name: build 55 | value: build 56 | - name: run 57 | value: run 58 | - name: full 59 | value: full 60 | - name: view 61 | value: view 62 | saltstack.target.present: 63 | - name: {{ tgt_name }} 64 | - desc: {{ tgt_desc }} 65 | - tgt_type: {{ tgt_type }} 66 | - tgt: {{tgt_value}} 67 | 68 | Create Policy on target {{ policy_name }}: 69 | META: 70 | name: Configure TAP 71 | parameters: 72 | policy_name: 73 | description: Deployment domain 74 | name: Deployment domain 75 | uiElement: text 76 | tgt_name1: 77 | description: Installed scanners 78 | name: Installed scanners 79 | uiElement: multiselect 80 | options: 81 | - name: Aqua 82 | value: Aqua 83 | - name: Carbon black 84 | value: carbonblack 85 | - name: Grype 86 | value: grype 87 | - name: Snyk 88 | value: snyk 89 | remediate: 90 | description: Data-services connector 91 | name: Data-services connector 92 | uiElement: multiselect 93 | options: 94 | - name: Crossplane 95 | value: crossplane 96 | - name: Services Toolkit 97 | value: svc-toolkit 98 | - name: K8s secrets 99 | value: k8s-secret 100 | benchmark_names: 101 | description: TAP install values 102 | name: TAP install values 103 | uiElement: array 104 | saltstack.policy.present: 105 | - require: 106 | - saltstack.target: Create Target {{ tgt_name1 }} 107 | - name: {{ policy_name }} 108 | - tgt_name1: {{ tgt_name1 }} 109 | - remediate: {{ remediate }} 110 | - benchmark_names: {{ benchmark_names }} -------------------------------------------------------------------------------- /backstage/apis/brownfield-apis-catalog.yaml: 
-------------------------------------------------------------------------------- 1 | apiVersion: backstage.io/v1alpha1 2 | kind: API 3 | metadata: 4 | name: datacheck-apis 5 | description: APIs to interact with datacheck systems 6 | spec: 7 | type: openapi 8 | lifecycle: integration 9 | owner: dekt-app-ops 10 | system: brownfield-apis 11 | definition: 12 | $text: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/apis/datacheck-apis-swagger.yaml 13 | --- 14 | apiVersion: backstage.io/v1alpha1 15 | kind: API 16 | metadata: 17 | name: donations-apis 18 | description: APIs to interact with donations systems 19 | spec: 20 | type: openapi 21 | lifecycle: integration 22 | owner: dekt-app-ops 23 | system: brownfield-apis 24 | definition: 25 | $text: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/apis/donations-apis-swagger.yaml 26 | --- 27 | apiVersion: backstage.io/v1alpha1 28 | kind: API 29 | metadata: 30 | name: insurance-apis 31 | description: APIs to interact with insurance systems 32 | spec: 33 | type: openapi 34 | lifecycle: integration 35 | owner: dekt-app-ops 36 | system: brownfield-apis 37 | definition: 38 | $text: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/apis/insurance-apis-swagger.yaml 39 | --- 40 | apiVersion: backstage.io/v1alpha1 41 | kind: API 42 | metadata: 43 | name: payments-apis 44 | description: APIs to interact with payments systems 45 | spec: 46 | type: openapi 47 | lifecycle: integration 48 | owner: dekt-app-ops 49 | system: brownfield-apis 50 | definition: 51 | $text: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/apis/payments-apis-swagger.yaml 52 | --- 53 | apiVersion: backstage.io/v1alpha1 54 | kind: API 55 | metadata: 56 | name: suppliers-apis 57 | description: APIs to interact with suppliers systems 58 | spec: 59 | type: openapi 60 | lifecycle: integration 61 | owner: dekt-app-ops 62 | system: brownfield-apis 63 | definition: 64 | $text: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/apis/suppliers-apis-swagger.yaml 65 | --- 66 | apiVersion: backstage.io/v1alpha1 67 | kind: API 68 | metadata: 69 | name: locations-apis 70 | description: APIs to interact with locations systems 71 | spec: 72 | type: openapi 73 | lifecycle: integration 74 | owner: dekt-app-ops 75 | system: public-apis 76 | definition: 77 | $text: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/apis/locations-apis-swagger.yaml 78 | --- 79 | apiVersion: backstage.io/v1alpha1 80 | kind: API 81 | metadata: 82 | name: sentiment-apis 83 | description: APIs to interact with sentiment systems 84 | spec: 85 | type: openapi 86 | lifecycle: integration 87 | owner: dekt-app-ops 88 | system: public-apis 89 | definition: 90 | $text: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/apis/sentiment-apis-swagger.yaml 91 | --- 92 | apiVersion: backstage.io/v1alpha1 93 | kind: API 94 | metadata: 95 | name: sms-apis 96 | description: APIs to interact with sms systems 97 | spec: 98 | type: openapi 99 | lifecycle: integration 100 | owner: dekt-app-ops 101 | system: public-apis 102 | definition: 103 | $text: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/apis/sms-apis-swagger.yaml -------------------------------------------------------------------------------- /backstage/apis/datacheck-apis-swagger.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | swagger: "2.0" 3 | info: 4 | description: "Swagger API definition for the datacheck apis" 5 | version: 
"0.0.2" 6 | title: "APIs to interact with datacheck external systems" 7 | host: datacheck.gns.dekt.io 8 | basePath: "/v1" 9 | tags: 10 | - name: "Data collection" 11 | description: "Data collection (read-only) APIs" 12 | - name: "Data manipulation" 13 | description: "Data manipulation (write) APIs" 14 | schemes: 15 | - "https" 16 | paths: 17 | /good-credit/{cadidateID}: 18 | get: 19 | tags: 20 | - "Data collection" 21 | summary: "Verify a good credit score" 22 | consumes: 23 | - "application/json" 24 | produces: 25 | - "application/json" 26 | parameters: 27 | - name: "cadidateID" 28 | in: "path" 29 | description: "ID of candidate to collect data on" 30 | required: true 31 | type: "integer" 32 | format: "int64" 33 | responses: 34 | 200: 35 | description: "successful data collection" 36 | 400: 37 | description: "Invalid candidate identifier" 38 | /medical-history/{cadidateID}: 39 | get: 40 | tags: 41 | - "Data collection" 42 | summary: "Verify a clean medical history" 43 | consumes: 44 | - "application/json" 45 | produces: 46 | - "application/json" 47 | parameters: 48 | - name: "cadidateID" 49 | in: "path" 50 | description: "ID of candidate to collect data on" 51 | required: true 52 | type: "integer" 53 | format: "int64" 54 | responses: 55 | 200: 56 | description: "successful data collection" 57 | 400: 58 | description: "Invalid candidate identifier" 59 | /crimanl-record/{cadidateID}: 60 | post: 61 | tags: 62 | - "Data manipulation" 63 | summary: "Initiate a criminal record background check" 64 | consumes: 65 | - "application/json" 66 | produces: 67 | - "application/json" 68 | parameters: 69 | - name: "cadidateID" 70 | in: "path" 71 | description: "ID of candidate to update data for" 72 | required: true 73 | type: "integer" 74 | format: "int64" 75 | responses: 76 | 200: 77 | description: "successful data collection" 78 | 400: 79 | description: "Invalid candidate identifier" -------------------------------------------------------------------------------- /backstage/apis/donations-apis-swagger.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | swagger: "2.0" 3 | info: 4 | description: "Swagger API definition for the donations apis" 5 | version: "0.0.2" 6 | title: "APIs to interact with donations external systems" 7 | host: donations.gns.dekt.io 8 | basePath: "/v1" 9 | tags: 10 | - name: "Data collection" 11 | description: "Data collection (read-only) APIs" 12 | - name: "Data manipulation" 13 | description: "Data manipulation (write) APIs" 14 | schemes: 15 | - "https" 16 | paths: 17 | /list-donations: 18 | get: 19 | tags: 20 | - "List current donations" 21 | summary: "api1" 22 | produces: 23 | - "application/json" 24 | responses: 25 | 200: 26 | description: "successful data collection" 27 | 400: 28 | description: "Invalid candidate identifier" 29 | /process-donation/{cadidateID}: 30 | post: 31 | tags: 32 | - "Data manipulation" 33 | summary: "Process a new donation" 34 | consumes: 35 | - "application/json" 36 | produces: 37 | - "application/json" 38 | parameters: 39 | - name: "cadidateID" 40 | in: "path" 41 | description: "ID of candidate to update data for" 42 | required: true 43 | type: "integer" 44 | format: "int64" 45 | responses: 46 | 200: 47 | description: "successful data collection" 48 | 400: 49 | description: "Invalid candidate identifier" -------------------------------------------------------------------------------- /backstage/apis/insurance-apis-swagger.yaml: 
-------------------------------------------------------------------------------- 1 | --- 2 | swagger: "2.0" 3 | info: 4 | description: "Swagger API definition for the insurance apis" 5 | version: "0.0.2" 6 | title: "APIs to interact with insurance external systems" 7 | host: insurance.gns.dekt.io 8 | basePath: "/v1" 9 | tags: 10 | - name: "Data collection" 11 | description: "Data collection (read-only) APIs" 12 | - name: "Data manipulation" 13 | description: "Data manipulation (write) APIs" 14 | schemes: 15 | - "https" 16 | paths: 17 | /list-providers: 18 | get: 19 | tags: 20 | - "Data collection" 21 | summary: "List available insurance providers" 22 | produces: 23 | - "application/json" 24 | responses: 25 | 200: 26 | description: "successful data collection" 27 | 400: 28 | description: "Invalid candidate identifier" 29 | /process-claim/{cadidateID}: 30 | post: 31 | tags: 32 | - "Data manipulation" 33 | summary: "Process a new insurance claim" 34 | consumes: 35 | - "application/json" 36 | produces: 37 | - "application/json" 38 | parameters: 39 | - name: "cadidateID" 40 | in: "path" 41 | description: "ID of candidate to update data for" 42 | required: true 43 | type: "integer" 44 | format: "int64" 45 | responses: 46 | 200: 47 | description: "successful data collection" 48 | 400: 49 | description: "Invalid candidate identifier" -------------------------------------------------------------------------------- /backstage/apis/locations-apis-swagger.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | swagger: "2.0" 3 | info: 4 | description: "Swagger API definition for the locations apis" 5 | version: "0.0.2" 6 | title: "APIs to interact with locations external systems" 7 | host: locations.gns.dekt.io 8 | basePath: "/v1" 9 | tags: 10 | - name: "Data collection" 11 | description: "Data collection (read-only) APIs" 12 | - name: "Data manipulation" 13 | description: "Data manipulation (write) APIs" 14 | schemes: 15 | - "http" 16 | - "https" 17 | paths: 18 | /list-providers: 19 | get: 20 | tags: 21 | - "Data collection" 22 | summary: "List available locations" 23 | produces: 24 | - "application/json" 25 | responses: 26 | 200: 27 | description: "successful data collection" 28 | 400: 29 | description: "Invalid candidate identifier" 30 | /team-location/{teamID}: 31 | post: 32 | tags: 33 | - "Data manipulation" 34 | summary: "Update a team location" 35 | consumes: 36 | - "application/json" 37 | produces: 38 | - "application/json" 39 | parameters: 40 | - name: "teamID" 41 | in: "path" 42 | description: "ID of candidate to update data for" 43 | required: true 44 | type: "integer" 45 | format: "int64" 46 | responses: 47 | 200: 48 | description: "successful data collection" 49 | 400: 50 | description: "Invalid candidate identifier" -------------------------------------------------------------------------------- /backstage/apis/payments-apis-swagger.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | swagger: "2.0" 3 | info: 4 | description: "Swagger API definition for the payments apis" 5 | version: "0.0.2" 6 | title: "APIs to interact with payments external systems" 7 | host: payments.gns.dekt.io 8 | basePath: "/v1" 9 | tags: 10 | - name: "Data collection" 11 | description: "Data collection (read-only) APIs" 12 | - name: "Data manipulation" 13 | description: "Data manipulation (write) APIs" 14 | schemes: 15 | - "https" 16 | paths: 17 | /list-providers: 18 | get: 19 | tags: 20 | - "Data collection" 21 | summary: 
"List available payments providers" 22 | produces: 23 | - "application/json" 24 | responses: 25 | 200: 26 | description: "successful data collection" 27 | 400: 28 | description: "Invalid candidate identifier" 29 | /process-payment/{cadidateID}: 30 | post: 31 | tags: 32 | - "Data manipulation" 33 | summary: "Process a new payments claim" 34 | consumes: 35 | - "application/json" 36 | produces: 37 | - "application/json" 38 | parameters: 39 | - name: "cadidateID" 40 | in: "path" 41 | description: "ID of candidate to update data for" 42 | required: true 43 | type: "integer" 44 | format: "int64" 45 | responses: 46 | 200: 47 | description: "successful data collection" 48 | 400: 49 | description: "Invalid candidate identifier" -------------------------------------------------------------------------------- /backstage/apis/sentiment-apis-swagger.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | swagger: "2.0" 3 | info: 4 | description: "Swagger API definition for the sentiment apis" 5 | version: "0.0.2" 6 | title: "APIs to interact with sentiment external systems" 7 | host: sentiment.gns.dekt.io 8 | basePath: "/v1" 9 | tags: 10 | - name: "Data collection" 11 | description: "Data collection (read-only) APIs" 12 | - name: "Data manipulation" 13 | description: "Data manipulation (write) APIs" 14 | schemes: 15 | - "http" 16 | - "https" 17 | paths: 18 | /list-media-sources: 19 | get: 20 | tags: 21 | - "Data collection" 22 | summary: "List available social media sources actively reporting sentiment" 23 | produces: 24 | - "application/json" 25 | responses: 26 | 200: 27 | description: "successful data collection" 28 | 400: 29 | description: "Invalid candidate identifier" 30 | /social-sentiment/{cadidateID}: 31 | get: 32 | tags: 33 | - "Data collection" 34 | summary: "Check the aggregated social sentiment for a cadidate" 35 | consumes: 36 | - "application/json" 37 | produces: 38 | - "application/json" 39 | parameters: 40 | - name: "cadidateID" 41 | in: "path" 42 | description: "ID of candidate to collect data on" 43 | required: true 44 | type: "integer" 45 | format: "int64" 46 | responses: 47 | 200: 48 | description: "successful data collection" 49 | 400: 50 | description: "Invalid candidate identifier" 51 | /activate-sentiment-probes/{socialMediaSource}: 52 | post: 53 | tags: 54 | - "Data manipulation" 55 | summary: "Activate the sentiment probes by social media source" 56 | consumes: 57 | - "application/json" 58 | produces: 59 | - "application/json" 60 | parameters: 61 | - name: "cadidateID" 62 | in: "path" 63 | description: "Social media source ID" 64 | required: true 65 | type: "integer" 66 | format: "int64" 67 | responses: 68 | 200: 69 | description: "successful data collection" 70 | 400: 71 | description: "Invalid candidate identifier" -------------------------------------------------------------------------------- /backstage/apis/sms-apis-swagger.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | swagger: "2.0" 3 | info: 4 | description: "Swagger API definition for the sms apis" 5 | version: "0.0.2" 6 | title: "APIs to interact with sms external systems" 7 | host: sms.gns.dekt.io 8 | basePath: "/v1" 9 | tags: 10 | - name: "Data collection" 11 | description: "Data collection (read-only) APIs" 12 | - name: "Data manipulation" 13 | description: "Data manipulation (write) APIs" 14 | schemes: 15 | - "http" 16 | - "https" 17 | paths: 18 | /list-providers: 19 | get: 20 | tags: 21 | - "Data collection" 22 | 
summary: "List available sms providers" 23 | produces: 24 | - "application/json" 25 | responses: 26 | 200: 27 | description: "successful data collection" 28 | 400: 29 | description: "Invalid candidate identifier" 30 | /send-notification/{mobileNumber}: 31 | post: 32 | tags: 33 | - "Data manipulation" 34 | summary: "Send notification to a given mobile number" 35 | consumes: 36 | - "application/json" 37 | produces: 38 | - "application/json" 39 | parameters: 40 | - name: "cadidateID" 41 | in: "path" 42 | description: "Mobile numberr" 43 | required: true 44 | type: "integer" 45 | format: "int64" 46 | responses: 47 | 200: 48 | description: "successful data collection" 49 | 400: 50 | description: "Invalid candidate identifier" -------------------------------------------------------------------------------- /backstage/apis/suppliers-apis-swagger.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | swagger: "2.0" 3 | info: 4 | description: "Swagger API definition for the suppliers apis" 5 | version: "0.0.2" 6 | title: "APIs to interact with suppliers external systems" 7 | host: suppliers.gns.dekt.io 8 | basePath: "/v1" 9 | tags: 10 | - name: "Data collection" 11 | description: "Data collection (read-only) APIs" 12 | - name: "Data manipulation" 13 | description: "Data manipulation (write) APIs" 14 | schemes: 15 | - "https" 16 | paths: 17 | /list-suppliers: 18 | get: 19 | tags: 20 | - "Data collection" 21 | summary: "List available suppliers" 22 | produces: 23 | - "application/json" 24 | responses: 25 | 200: 26 | description: "successful data collection" 27 | 400: 28 | description: "Invalid candidate identifier" 29 | /add-supplier/{supplierID}: 30 | post: 31 | tags: 32 | - "Data manipulation" 33 | summary: "Add a new supplier" 34 | consumes: 35 | - "application/json" 36 | produces: 37 | - "application/json" 38 | parameters: 39 | - name: "supplierID" 40 | in: "path" 41 | description: "ID of supplier to update data for" 42 | required: true 43 | type: "integer" 44 | format: "int64" 45 | responses: 46 | 200: 47 | description: "successful data collection" 48 | 400: 49 | description: "Invalid candidate identifier" -------------------------------------------------------------------------------- /backstage/catalog-info.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: backstage.io/v1alpha1 2 | kind: Location 3 | metadata: 4 | name: dekt-devx-catalog 5 | description: A catalog for dekt DevX demo 6 | annotations: 7 | 'backstage.io/techdocs-ref': dir:. 
8 | spec: 9 | targets: 10 | - ./apis/brownfield-apis-catalog.yaml 11 | - ./domains/dekt-domain.yaml 12 | - ./groups/dekt-teams.yaml 13 | - ./resources/brownfield-gns.yaml 14 | - ./systems/devx-mood.yaml 15 | - https://github.com/dektlong/mood-portal/blob/dev/backstage/catalog-info.yaml 16 | - https://github.com/dektlong/mood-sensors/blob/dev/backstage/catalog-info.yaml 17 | - https://github.com/dektlong/mood-predictor-openai/blob/main/catalog/catalog-info.yaml -------------------------------------------------------------------------------- /backstage/domains/dekt-domain.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: backstage.io/v1alpha1 2 | kind: Domain 3 | metadata: 4 | name: dekt-domain 5 | description: Domain for all apps in the dekt line of business 6 | links: 7 | - url: http://apps.dekt.io/dashboard 8 | title: Dekt LoB app dashboard 9 | icon: dashboard 10 | annotations: 11 | 'backstage.io/techdocs-ref': dir:. 12 | spec: 13 | owner: dekt-platform-ops -------------------------------------------------------------------------------- /backstage/groups/dekt-teams.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: backstage.io/v1alpha1 2 | kind: Group 3 | metadata: 4 | name: dekt-dev-team1 5 | description: Development team1 6 | spec: 7 | type: team 8 | profile: 9 | displayName: dekt-dev-team1 10 | email: dekel@vmware.com 11 | picture: https://avatars.dicebear.com/api/identicon/team-a@example.com.svg?background=%23fff&margin=25 12 | parent: default-org 13 | children: [] 14 | --- 15 | apiVersion: backstage.io/v1alpha1 16 | kind: Group 17 | metadata: 18 | name: dekt-dev-team2 19 | description: Development team2 20 | spec: 21 | type: team 22 | profile: 23 | displayName: dekt-dev-team2 24 | email: dekel@vmware.com 25 | picture: https://avatars.dicebear.com/api/identicon/team-a@example.com.svg?background=%23fff&margin=25 26 | parent: default-org 27 | children: [] 28 | --- 29 | apiVersion: backstage.io/v1alpha1 30 | kind: Group 31 | metadata: 32 | name: dekt-app-ops 33 | description: Applications operation team 34 | spec: 35 | type: team 36 | profile: 37 | displayName: Applications operation team 38 | email: dekel@vmware.com 39 | picture: https://avatars.dicebear.com/api/identicon/team-a@example.com.svg?background=%23fff&margin=25 40 | parent: default-org 41 | children: [] 42 | --- 43 | apiVersion: backstage.io/v1alpha1 44 | kind: Group 45 | metadata: 46 | name: dekt-platform-ops 47 | description: Platform operation team 48 | spec: 49 | type: team 50 | profile: 51 | displayName: Platform operation team 52 | email: dekel@vmware.com 53 | picture: https://avatars.dicebear.com/api/identicon/team-a@example.com.svg?background=%23fff&margin=25 54 | parent: default-org 55 | children: [] -------------------------------------------------------------------------------- /backstage/resources/brownfield-gns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: backstage.io/v1alpha1 2 | kind: Resource 3 | metadata: 4 | name: brownfield 5 | description: TSM Brownfield global namespace setup 6 | annotations: 7 | 'backstage.io/techdocs-ref': dir:.
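# NOTE (illustrative comment, not part of the original file): other catalog entities can link to
# this Resource with a standard Backstage relation, which makes the global-namespace dependency
# visible in TAP GUI. A minimal sketch for a Component entity (the Component shown is only an
# example; the real mood-portal catalog-info lives in its own repo):
#
#   apiVersion: backstage.io/v1alpha1
#   kind: Component
#   metadata:
#     name: mood-portal
#   spec:
#     type: service
#     lifecycle: experimental
#     owner: dekt-dev-team1
#     system: devx-mood
#     dependsOn:
#       - resource:brownfield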
8 | spec: 9 | type: network 10 | owner: dekt-platform-ops 11 | system: devx-mood -------------------------------------------------------------------------------- /backstage/systems/devx-mood.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: backstage.io/v1alpha1 2 | kind: System 3 | metadata: 4 | name: devx-mood 5 | description: The DevX Mood Application 6 | annotations: 7 | 'backstage.io/techdocs-ref': dir:. 8 | spec: 9 | owner: dekt-app-ops 10 | domain: dekt-domain -------------------------------------------------------------------------------- /brownfield-apis/brownfield-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: tanzu.vmware.com/v1 2 | kind: SpringCloudGateway 3 | metadata: 4 | name: mood-brownfield-gateway 5 | spec: 6 | count: 2 7 | 8 | server: 9 | tls: 10 | - hosts: 11 | - datacheck.gns.dekt.io 12 | - sentiment.gns.dekt.io 13 | secretName: mood-tls-cert 14 | client: 15 | tls: 16 | secretNames: mood-tls-cert 17 | 18 | api: 19 | serverUrl: https://gns.dekt.io 20 | title: Micro-gateway to control 3rd party/legacy API access 21 | version: 0.1.0 22 | groupId: brownfield-apis 23 | 24 | sso: 25 | secret: mood-gw-sso-creds 26 | 27 | cors: 28 | allowedOrigins: 29 | - tap-gui.sys.dekt.io 30 | -------------------------------------------------------------------------------- /brownfield-apis/datacheck.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGatewayRouteConfig 3 | metadata: 4 | name: datacheck-routes-config 5 | #======== supply-chain decorations ("design once API anywhere") 6 | labels: 7 | tsm-scg: datacheck-gns 8 | spec: 9 | service: 10 | name: datacheck-api 11 | predicates: 12 | - Header=X-Request-Id 13 | ssoEnabled: true 14 | filters: 15 | - RateLimit=2,60s 16 | #========= 17 | routes: 18 | - predicates: 19 | - Path=/api/medical-history/{personID} 20 | - Method=GET 21 | title: "Verify a clean medical history" 22 | description: "Verify a medical history for a given social security / taxId number " 23 | tags: 24 | - Compatibility 25 | - predicates: 26 | - Path=/api/good-credit/{personID} 27 | - Method=GET 28 | title: "Verify a good credit score" 29 | description: "Verify a good credit score for a given adopter's social security / taxId number " 30 | tags: 31 | - Identity 32 | - predicates: 33 | - Path=/api/criminal-record/{personID} 34 | - Method=GET 35 | ssoEnabled: true 36 | tokenRelay: true 37 | title: "Run criminal-record check" 38 | description: "Run a criminal record check for a given adopter's social security / taxId number" 39 | tags: 40 | - Identity 41 | - predicates: 42 | - Path=/api/house-visit-request/{personID}/date/{visitDate} 43 | - Method=POST,PUT,DELETE 44 | ssoEnabled: true 45 | tokenRelay: true 46 | title: "Manage house-visits requests" 47 | tags: 48 | - Compatibility 49 | --- 50 | apiVersion: v1 51 | kind: Service 52 | metadata: 53 | name: datacheck-api 54 | spec: 55 | ports: 56 | - port: 80 57 | targetPort: 80 -------------------------------------------------------------------------------- /brownfield-apis/donations.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGatewayRouteConfig 3 | metadata: 4 | name: donations-routes-config 5 | spec: 6 | service: 7 | name: donations-api 8 | routes: 9 | - predicates: 10 | - Path=/list-donations 11 | - Method=GET 12 | ssoEnabled: true 13 |
- predicates: 14 | - Path=/process-donation/{personID}/donation/{donationDetails} 15 | - Method=POST 16 | ssoEnabled: true 17 | filters: 18 | - CircuitBreaker="donationsCircuitBreaker,forward:/alternate-donation-service" 19 | --- 20 | apiVersion: v1 21 | kind: Service 22 | metadata: 23 | name: donations-api 24 | spec: 25 | ports: 26 | - port: 80 27 | targetPort: 80 28 | -------------------------------------------------------------------------------- /brownfield-apis/insurance.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGatewayRouteConfig 3 | metadata: 4 | name: insurance-routes-config 5 | spec: 6 | service: 7 | name: insurance-api 8 | routes: 9 | - predicates: 10 | - Path=/list-insurance 11 | - Method=GET 12 | ssoEnabled: true 13 | - predicates: 14 | - Path=/process-claim/{personID}/policy/{policyDetails} 15 | - Method=POST 16 | ssoEnabled: true 17 | filters: 18 | - CircuitBreaker="insuranceCircuitBreaker,forward:/alternate-donation-service" 19 | --- 20 | apiVersion: v1 21 | kind: Service 22 | metadata: 23 | name: insurance-api 24 | spec: 25 | ports: 26 | - port: 80 27 | targetPort: 80 -------------------------------------------------------------------------------- /brownfield-apis/locations.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGatewayRouteConfig 3 | metadata: 4 | name: locations-routes-config 5 | spec: 6 | service: 7 | name: locations-api 8 | routes: 9 | - predicates: 10 | - Path=/list-locations 11 | - Method=GET 12 | ssoEnabled: true 13 | - predicates: 14 | - Path=/team-location/{teamID} 15 | - Method=POST 16 | ssoEnabled: true 17 | filters: 18 | - CircuitBreaker="locationsCircuitBreaker,forward:/alternate-donation-service" 19 | --- 20 | apiVersion: v1 21 | kind: Service 22 | metadata: 23 | name: locations-api 24 | spec: 25 | ports: 26 | - port: 80 27 | targetPort: 80 -------------------------------------------------------------------------------- /brownfield-apis/mood-gw-sso-creds.example: -------------------------------------------------------------------------------- 1 | scope=openid,profile,email 2 | client-id={your_client_id} 3 | client-secret={your_client_secret} 4 | issuer-uri={your-issuer-uri} 5 | 6 | $ kubectl create secret generic mood-gw-sso-creds --from-env-file=mood-gw-sso-creds.example 7 | -------------------------------------------------------------------------------- /brownfield-apis/payments.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGatewayRouteConfig 3 | metadata: 4 | name: payments-routes-config 5 | spec: 6 | service: 7 | name: payments-api 8 | routes: 9 | - predicates: 10 | - Path=/list-payments 11 | - Method=GET 12 | ssoEnabled: true 13 | - predicates: 14 | - Path=/process-payment/{teamID} 15 | ssoEnabled: true 16 | filters: 17 | - CircuitBreaker="paymentsCircuitBreaker,forward:/alternate-donation-service" 18 | --- 19 | apiVersion: v1 20 | kind: Service 21 | metadata: 22 | name: payments-api 23 | spec: 24 | ports: 25 | - port: 80 26 | targetPort: 80 -------------------------------------------------------------------------------- /brownfield-apis/sentiment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGatewayRouteConfig 3 | metadata: 4 | name: 
sentiment-routes-config 5 | #======== supply-chain decorations ("design once API anywhere") 6 | labels: 7 | tsm-scg: sentiment-gns 8 | spec: 9 | service: 10 | name: sentiment-api 11 | predicates: 12 | - Header=X-Request-Id 13 | ssoEnabled: true 14 | #========= 15 | routes: 16 | - predicates: 17 | - Path=/list-sentiment 18 | - Method=GET 19 | - predicates: 20 | - Path=/process-sentiment/{teamID}/sensor/{sensorID} 21 | - Method=POST 22 | filters: 23 | - CircuitBreaker="sentimentCircuitBreaker,forward:/alternate-donation-service" 24 | --- 25 | apiVersion: v1 26 | kind: Service 27 | metadata: 28 | name: sentiment-api 29 | spec: 30 | ports: 31 | - port: 80 32 | targetPort: 80 33 | -------------------------------------------------------------------------------- /brownfield-apis/sms.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGatewayRouteConfig 3 | metadata: 4 | name: sms-routes-config 5 | spec: 6 | service: 7 | name: sms-api 8 | routes: 9 | - predicates: 10 | - Path=/list-sms 11 | - Method=GET 12 | ssoEnabled: true 13 | - predicates: 14 | - Path=/process-donation/*/adoption-request/** 15 | - Method=POST 16 | ssoEnabled: true 17 | filters: 18 | - CircuitBreaker="smsCircuitBreaker,forward:/alternate-donation-service" 19 | --- 20 | apiVersion: v1 21 | kind: Service 22 | metadata: 23 | name: sms-api 24 | spec: 25 | ports: 26 | - port: 80 27 | targetPort: 80 28 | -------------------------------------------------------------------------------- /brownfield-apis/suppliers.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGatewayRouteConfig 3 | metadata: 4 | name: suppliers-routes-config 5 | spec: 6 | service: 7 | name: suppliers-api 8 | routes: 9 | - predicates: 10 | - Path=/list-outgoing-payments/ 11 | - Method=GET 12 | filters: 13 | - RateLimit=2,10s 14 | - predicates: 15 | - Path=/process-payment/*/supplier/** 16 | - Method=POST,PUT,DELETE 17 | ssoEnabled: true 18 | --- 19 | apiVersion: v1 20 | kind: Service 21 | metadata: 22 | name: suppliers-api 23 | spec: 24 | ports: 25 | - port: 80 26 | targetPort: 80 -------------------------------------------------------------------------------- /brownfield-apis/tsm-generated-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "tanzu.vmware.com/v1" 2 | kind: SpringCloudGateway 3 | metadata: 4 | name: datacheck-gateway 5 | spec: 6 | count: 2 7 | api: 8 | serverUrl: datacheck.gns.dekt.io 9 | title: User Verification Services 10 | description: External APIs to provide user verification services 11 | version: 0.5.0 12 | cors: 13 | allowedOrigins: 14 | - api-portal.sys.dekt.io 15 | --- 16 | apiVersion: "tanzu.vmware.com/v1" 17 | kind: SpringCloudGatewayMapping 18 | metadata: 19 | name: datacheck-routes-mapping 20 | spec: 21 | gatewayRef: 22 | name: datacheck-gateway 23 | routeConfigRef: 24 | name: datacheck-routes-config 25 | --- 26 | apiVersion: "tanzu.vmware.com/v1" 27 | kind: SpringCloudGateway 28 | metadata: 29 | name: donations-gateway 30 | spec: 31 | count: 2 32 | api: 33 | serverUrl: donations.gns.dekt.io 34 | title: Donation Services 35 | description: External APIs to help with donations processing 36 | version: 0.5.0 37 | cors: 38 | allowedOrigins: 39 | - api-portal.sys.dekt.io 40 | --- 41 | apiVersion: "tanzu.vmware.com/v1" 42 | kind: SpringCloudGatewayMapping 43 | metadata: 44 | name: 
donations-routes-mapping 45 | spec: 46 | gatewayRef: 47 | name: donations-gateway 48 | routeConfigRef: 49 | name: donations-routes-config 50 | --- 51 | apiVersion: "tanzu.vmware.com/v1" 52 | kind: SpringCloudGateway 53 | metadata: 54 | name: insurance-gateway 55 | spec: 56 | count: 1 57 | api: 58 | serverUrl: insurance.gns.dekt.io 59 | title: Insurance Services 60 | description: External APIs to help with insurance processing 61 | version: 0.5.0 62 | cors: 63 | allowedOrigins: 64 | - api-portal.sys.dekt.io 65 | --- 66 | apiVersion: "tanzu.vmware.com/v1" 67 | kind: SpringCloudGatewayMapping 68 | metadata: 69 | name: insurance-routes-mapping 70 | spec: 71 | gatewayRef: 72 | name: insurance-gateway 73 | routeConfigRef: 74 | name: insurance-routes-config 75 | --- 76 | apiVersion: "tanzu.vmware.com/v1" 77 | kind: SpringCloudGateway 78 | metadata: 79 | name: location-gateway 80 | spec: 81 | count: 1 82 | api: 83 | serverUrl: location.gns.dekt.io 84 | title: Location Services 85 | description: External APIs to help with location processing 86 | version: 0.5.0 87 | cors: 88 | allowedOrigins: 89 | - api-portal.sys.dekt.io 90 | --- 91 | apiVersion: "tanzu.vmware.com/v1" 92 | kind: SpringCloudGatewayMapping 93 | metadata: 94 | name: location-routes-mapping 95 | spec: 96 | gatewayRef: 97 | name: location-gateway 98 | routeConfigRef: 99 | name: location-routes-config 100 | --- 101 | apiVersion: "tanzu.vmware.com/v1" 102 | kind: SpringCloudGateway 103 | metadata: 104 | name: payment-gateway 105 | spec: 106 | count: 1 107 | api: 108 | serverUrl: payment.gns.dekt.io 109 | title: Payment Services 110 | description: External APIs to help with payment processing 111 | version: 0.5.0 112 | cors: 113 | allowedOrigins: 114 | - api-portal.sys.dekt.io 115 | --- 116 | apiVersion: "tanzu.vmware.com/v1" 117 | kind: SpringCloudGatewayMapping 118 | metadata: 119 | name: payment-routes-mapping 120 | spec: 121 | gatewayRef: 122 | name: payment-gateway 123 | routeConfigRef: 124 | name: payment-routes-config 125 | --- 126 | apiVersion: "tanzu.vmware.com/v1" 127 | kind: SpringCloudGateway 128 | metadata: 129 | name: sentiment-gateway 130 | spec: 131 | count: 2 132 | api: 133 | serverUrl: sentiment.gns.dekt.io 134 | title: Sentiment Services 135 | description: External APIs to help with sentiment processing 136 | version: 0.5.0 137 | cors: 138 | allowedOrigins: 139 | - api-portal.sys.dekt.io 140 | --- 141 | apiVersion: "tanzu.vmware.com/v1" 142 | kind: SpringCloudGatewayMapping 143 | metadata: 144 | name: sentiment-routes-mapping 145 | spec: 146 | gatewayRef: 147 | name: sentiment-gateway 148 | routeConfigRef: 149 | name: sentiment-routes-config 150 | --- 151 | apiVersion: "tanzu.vmware.com/v1" 152 | kind: SpringCloudGateway 153 | metadata: 154 | name: sms-gateway 155 | spec: 156 | count: 1 157 | api: 158 | serverUrl: sms.gns.dekt.io 159 | title: Messaging Services 160 | description: External APIs to help with sms processing 161 | version: 0.5.0 162 | cors: 163 | allowedOrigins: 164 | - api-portal.sys.dekt.io 165 | --- 166 | apiVersion: "tanzu.vmware.com/v1" 167 | kind: SpringCloudGatewayMapping 168 | metadata: 169 | name: sms-routes-mapping 170 | spec: 171 | gatewayRef: 172 | name: sms-gateway 173 | routeConfigRef: 174 | name: sms-routes-config -------------------------------------------------------------------------------- /config-templates/custom-supplychains/dekt-medical-scan.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: 
carto.run/v1alpha1 4 | kind: ClusterSupplyChain 5 | metadata: 6 | name: dekt-medical 7 | spec: 8 | params: 9 | - name: maven_repository_url 10 | value: https://repo.maven.apache.org/maven2 11 | - default: main 12 | name: gitops_branch 13 | - default: supplychain 14 | name: gitops_user_name 15 | - default: supplychain 16 | name: gitops_user_email 17 | - default: #@ "Generated by dekt-medical custom supply chain " + data.values.clusters.stage.name + " cluster" 18 | name: gitops_commit_message 19 | - default: "" 20 | name: gitops_ssh_secret 21 | - default: #@ data.values.gitops.server 22 | name: gitops_server_address 23 | - default: #@ data.values.gitops.owner 24 | name: gitops_repository_owner 25 | - default: #@ data.values.gitops.deliverables.devRepo 26 | name: gitops_repository_name 27 | resources: 28 | - name: source-provider 29 | params: 30 | - default: default 31 | name: serviceAccount 32 | - default: go-git 33 | name: gitImplementation 34 | templateRef: 35 | kind: ClusterSourceTemplate 36 | name: source-template 37 | - name: clinical-trials 38 | sources: 39 | - name: source 40 | resource: source-provider 41 | templateRef: 42 | kind: ClusterSourceTemplate 43 | name: testing-pipeline 44 | - name: patient-imaging 45 | params: 46 | - default: default 47 | name: serviceAccount 48 | - name: registry 49 | value: 50 | ca_cert_data: "" 51 | repository: #@ data.values.private_registry.repo + "/workloads" 52 | server: #@ data.values.private_registry.host 53 | - default: default 54 | name: clusterBuilder 55 | - default: ./Dockerfile 56 | name: dockerfile 57 | - default: ./ 58 | name: docker_build_context 59 | - default: [] 60 | name: docker_build_extra_args 61 | sources: 62 | - name: source 63 | resource: patient-imaging 64 | templateRef: 65 | kind: ClusterImageTemplate 66 | options: 67 | - name: kpack-template 68 | selector: 69 | matchFields: 70 | - key: spec.params[?(@.name=="dockerfile")] 71 | operator: DoesNotExist 72 | - name: kaniko-template 73 | selector: 74 | matchFields: 75 | - key: spec.params[?(@.name=="dockerfile")] 76 | operator: Exists 77 | - images: 78 | - name: image 79 | resource: patient-imaging 80 | name: image-scanner 81 | params: 82 | - default: scan-policy 83 | name: scanning_image_policy 84 | - default: private-image-scan-template 85 | name: scanning_image_template 86 | templateRef: 87 | kind: ClusterImageTemplate 88 | name: image-scanner-template 89 | - images: 90 | - name: image 91 | resource: image-scanner 92 | name: hippa-configs 93 | params: 94 | - default: default 95 | name: serviceAccount 96 | templateRef: 97 | kind: ClusterConfigTemplate 98 | name: convention-template 99 | - configs: 100 | - name: config 101 | resource: hippa-configs 102 | name: app-config 103 | templateRef: 104 | kind: ClusterConfigTemplate 105 | options: 106 | - name: config-template 107 | selector: 108 | matchLabels: 109 | apps.tanzu.vmware.com/workload-type: dekt-medical 110 | - name: server-template 111 | selector: 112 | matchLabels: 113 | apps.tanzu.vmware.com/workload-type: dekt-medical-server 114 | - name: worker-template 115 | selector: 116 | matchLabels: 117 | apps.tanzu.vmware.com/workload-type: dekt-medical-worker 118 | - configs: 119 | - name: app_def 120 | resource: app-config 121 | name: medical-services 122 | templateRef: 123 | kind: ClusterConfigTemplate 124 | name: service-bindings 125 | - configs: 126 | - name: config 127 | resource: medical-services 128 | name: config-writer 129 | params: 130 | - default: default 131 | name: serviceAccount 132 | - name: registry 133 | value: 134 | 
ca_cert_data: "" 135 | repository: #@ data.values.private_registry.repo + "/workloads" 136 | server: #@ data.values.private_registry.host 137 | templateRef: 138 | kind: ClusterTemplate 139 | name: config-writer-template 140 | - name: deliverable 141 | params: 142 | - name: registry 143 | value: 144 | ca_cert_data: "" 145 | repository: #@ data.values.private_registry.repo + "/workloads" 146 | server: #@ data.values.private_registry.host 147 | - default: go-git 148 | name: gitImplementation 149 | templateRef: 150 | kind: ClusterTemplate 151 | name: deliverable-template 152 | selector: 153 | apps.tanzu.vmware.com/has-tests: "true" 154 | selectorMatchExpressions: 155 | - key: apps.tanzu.vmware.com/workload-type 156 | operator: In 157 | values: 158 | - dekt-medical 159 | - dekt-medical-server 160 | - dekt-medical-worker 161 | -------------------------------------------------------------------------------- /config-templates/custom-supplychains/dekt-medical.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: carto.run/v1alpha1 4 | kind: ClusterSupplyChain 5 | metadata: 6 | name: dekt-medical 7 | spec: 8 | params: 9 | - name: maven_repository_url 10 | value: https://repo.maven.apache.org/maven2 11 | - default: main 12 | name: gitops_branch 13 | - default: supplychain 14 | name: gitops_user_name 15 | - default: supplychain 16 | name: gitops_user_email 17 | - default: #@ "Generated by dekt-medical custom supply chain on " + data.values.clusters.dev.name + " cluster" 18 | name: gitops_commit_message 19 | - default: "" 20 | name: gitops_ssh_secret 21 | - default: #@ data.values.gitops.server 22 | name: gitops_server_address 23 | - default: #@ data.values.gitops.owner 24 | name: gitops_repository_owner 25 | - default: #@ data.values.gitops.deliverables.devRepo 26 | name: gitops_repository_name 27 | resources: 28 | - name: source-provider 29 | params: 30 | - default: default 31 | name: serviceAccount 32 | - default: go-git 33 | name: gitImplementation 34 | templateRef: 35 | kind: ClusterSourceTemplate 36 | name: source-template 37 | - name: clinical-trials 38 | sources: 39 | - name: source 40 | resource: source-provider 41 | templateRef: 42 | kind: ClusterSourceTemplate 43 | name: testing-pipeline 44 | - name: patient-imaging 45 | params: 46 | - default: default 47 | name: serviceAccount 48 | - name: registry 49 | value: 50 | ca_cert_data: "" 51 | repository: #@ data.values.private_registry.repo + "/workloads" 52 | server: #@ data.values.private_registry.host 53 | - default: default 54 | name: clusterBuilder 55 | - default: ./Dockerfile 56 | name: dockerfile 57 | - default: ./ 58 | name: docker_build_context 59 | - default: [] 60 | name: docker_build_extra_args 61 | sources: 62 | - name: source 63 | resource: clinical-trials 64 | templateRef: 65 | kind: ClusterImageTemplate 66 | options: 67 | - name: kpack-template 68 | selector: 69 | matchFields: 70 | - key: spec.params[?(@.name=="dockerfile")] 71 | operator: DoesNotExist 72 | - name: kaniko-template 73 | selector: 74 | matchFields: 75 | - key: spec.params[?(@.name=="dockerfile")] 76 | operator: Exists 77 | - images: 78 | - name: image 79 | resource: patient-imaging 80 | name: hippa-configs 81 | params: 82 | - default: default 83 | name: serviceAccount 84 | templateRef: 85 | kind: ClusterConfigTemplate 86 | name: convention-template 87 | - configs: 88 | - name: config 89 | resource: hippa-configs 90 | name: app-config 91 | templateRef: 92 | kind: 
ClusterConfigTemplate 93 | options: 94 | - name: config-template 95 | selector: 96 | matchLabels: 97 | apps.tanzu.vmware.com/workload-type: dekt-medical 98 | - name: server-template 99 | selector: 100 | matchLabels: 101 | apps.tanzu.vmware.com/workload-type: dekt-medical-server 102 | - name: worker-template 103 | selector: 104 | matchLabels: 105 | apps.tanzu.vmware.com/workload-type: dekt-medical-worker 106 | - configs: 107 | - name: app_def 108 | resource: app-config 109 | name: medical-services 110 | templateRef: 111 | kind: ClusterConfigTemplate 112 | name: service-bindings 113 | - configs: 114 | - name: config 115 | resource: medical-services 116 | name: config-writer 117 | params: 118 | - default: default 119 | name: serviceAccount 120 | - name: registry 121 | value: 122 | ca_cert_data: "" 123 | repository: #@ data.values.private_registry.repo + "/workloads" 124 | server: #@ data.values.private_registry.host 125 | templateRef: 126 | kind: ClusterTemplate 127 | name: config-writer-template 128 | - name: deliverable 129 | params: 130 | - name: registry 131 | value: 132 | ca_cert_data: "" 133 | repository: #@ data.values.private_registry.repo + "/workloads" 134 | server: #@ data.values.private_registry.host 135 | - default: go-git 136 | name: gitImplementation 137 | templateRef: 138 | kind: ClusterTemplate 139 | name: deliverable-template 140 | selector: 141 | apps.tanzu.vmware.com/has-tests: "true" 142 | selectorMatchExpressions: 143 | - key: apps.tanzu.vmware.com/workload-type 144 | operator: In 145 | values: 146 | - dekt-medical 147 | - dekt-medical-server 148 | - dekt-medical-worker 149 | -------------------------------------------------------------------------------- /config-templates/dataservices/aws/aws-provider-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: aws.upbound.io/v1beta1 2 | kind: ProviderConfig 3 | metadata: 4 | name: default 5 | spec: 6 | credentials: 7 | source: Secret 8 | secretRef: 9 | namespace: crossplane-system 10 | name: aws-secret 11 | key: creds -------------------------------------------------------------------------------- /config-templates/dataservices/aws/aws-provider.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # The AWS "family" Provider - manages the ProviderConfig for all other Providers in the same family. 3 | # Does not have to be created explicitly, if not created explicitly it will be installed by the first Provider created 4 | # in the family. 5 | apiVersion: pkg.crossplane.io/v1 6 | kind: Provider 7 | metadata: 8 | name: upbound-provider-family-aws 9 | spec: 10 | package: xpkg.upbound.io/upbound/provider-family-aws:v0.36.0 11 | controllerConfigRef: 12 | name: upbound-provider-family-aws 13 | --- 14 | # The AWS RDS Provider - just one of the many Providers in the AWS family. 15 | # You can add as few or as many additional Providers in the same family as you wish. 16 | apiVersion: pkg.crossplane.io/v1 17 | kind: Provider 18 | metadata: 19 | name: upbound-provider-aws-rds 20 | spec: 21 | package: xpkg.upbound.io/upbound/provider-aws-rds:v0.36.0 22 | controllerConfigRef: 23 | name: upbound-provider-family-aws 24 | --- 25 | # The ControllerConfig applies settings to a Provider Pod. 26 | # With family Providers each Provider is a unique Pod running in the cluster. 
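# NOTE (illustrative comment, not part of the original file): the ProviderConfig in
# aws-provider-config.yaml reads its credentials from a Secret named aws-secret in the
# crossplane-system namespace, under the key "creds". A minimal sketch of creating that
# Secret, assuming a local ./aws-credentials.txt in the standard AWS credentials-file
# format ([default] section with aws_access_key_id and aws_secret_access_key):
#
#   kubectl create secret generic aws-secret \
#     --namespace crossplane-system \
#     --from-file=creds=./aws-credentials.txt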
27 | apiVersion: pkg.crossplane.io/v1alpha1 28 | kind: ControllerConfig 29 | metadata: 30 | name: upbound-provider-family-aws -------------------------------------------------------------------------------- /config-templates/dataservices/aws/rds-postgres-class.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 2 | kind: ClusterInstanceClass 3 | metadata: 4 | name: postgres-rds-corp 5 | spec: 6 | description: 7 | short: AWS RDS Postgresql corporate SLA database 8 | provisioner: 9 | crossplane: 10 | compositeResourceDefinition: xpostgresqlinstances.database.rds.example.org -------------------------------------------------------------------------------- /config-templates/dataservices/aws/rds-postgres-composition.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.crossplane.io/v1 2 | kind: Composition 3 | metadata: 4 | labels: 5 | provider: "aws" 6 | vpc: "default" 7 | name: xpostgresqlinstances.database.rds.example.org 8 | spec: 9 | compositeTypeRef: 10 | apiVersion: database.rds.example.org/v1alpha1 11 | kind: XPostgreSQLInstance 12 | publishConnectionDetailsWithStoreConfigRef: 13 | name: default 14 | resources: 15 | - base: 16 | apiVersion: rds.aws.upbound.io/v1beta1 17 | kind: Instance 18 | spec: 19 | forProvider: 20 | instanceClass: db.t3.micro 21 | autoGeneratePassword: true 22 | passwordSecretRef: 23 | key: password 24 | namespace: crossplane-system 25 | engine: postgres 26 | engineVersion: "13.7" 27 | name: postgres 28 | username: masteruser 29 | publiclyAccessible: true 30 | region: UPDATE_IN_RUNTIME 31 | skipFinalSnapshot: true 32 | writeConnectionSecretToRef: 33 | namespace: crossplane-system 34 | connectionDetails: 35 | - name: type 36 | value: postgresql 37 | - name: provider 38 | value: aws 39 | - name: database 40 | value: postgres 41 | - fromConnectionSecretKey: username 42 | - fromConnectionSecretKey: password 43 | - name: host 44 | fromConnectionSecretKey: endpoint 45 | - fromConnectionSecretKey: port 46 | name: instance 47 | patches: 48 | - fromFieldPath: metadata.uid 49 | toFieldPath: spec.forProvider.passwordSecretRef.name 50 | transforms: 51 | - string: 52 | fmt: '%s-postgresql-pw' 53 | type: Format 54 | type: string 55 | type: FromCompositeFieldPath 56 | - fromFieldPath: metadata.uid 57 | toFieldPath: spec.writeConnectionSecretToRef.name 58 | transforms: 59 | - string: 60 | fmt: '%s-postgresql' 61 | type: Format 62 | type: string 63 | type: FromCompositeFieldPath 64 | - fromFieldPath: spec.storageGB 65 | toFieldPath: spec.forProvider.allocatedStorage 66 | type: FromCompositeFieldPath 67 | -------------------------------------------------------------------------------- /config-templates/dataservices/aws/rds-postgres-rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: app-operator-claim-aws-rds-psql 5 | labels: 6 | apps.tanzu.vmware.com/aggregate-to-app-operator-cluster-access: "true" 7 | rules: 8 | - apiGroups: 9 | - "services.apps.tanzu.vmware.com" 10 | resources: 11 | - clusterinstanceclasses 12 | resourceNames: 13 | - aws-rds-psql 14 | verbs: 15 | - claim -------------------------------------------------------------------------------- /config-templates/dataservices/aws/rds-postgres-xrd.yaml: -------------------------------------------------------------------------------- 1 | 
apiVersion: apiextensions.crossplane.io/v1 2 | kind: CompositeResourceDefinition 3 | metadata: 4 | name: xpostgresqlinstances.database.rds.example.org 5 | spec: 6 | claimNames: 7 | kind: PostgreSQLInstance 8 | plural: postgresqlinstances 9 | connectionSecretKeys: 10 | - type 11 | - provider 12 | - host 13 | - port 14 | - database 15 | - username 16 | - password 17 | group: database.rds.example.org 18 | names: 19 | kind: XPostgreSQLInstance 20 | plural: xpostgresqlinstances 21 | versions: 22 | - name: v1alpha1 23 | referenceable: true 24 | schema: 25 | openAPIV3Schema: 26 | properties: 27 | spec: 28 | properties: 29 | storageGB: 30 | type: integer 31 | default: 20 32 | type: object 33 | type: object 34 | served: true -------------------------------------------------------------------------------- /config-templates/dataservices/azure/azure-provider-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: azure.jet.crossplane.io/v1alpha1 2 | kind: ProviderConfig 3 | metadata: 4 | name: default 5 | spec: 6 | credentials: 7 | source: Secret 8 | secretRef: 9 | namespace: crossplane-system 10 | name: jet-azure-creds 11 | key: creds -------------------------------------------------------------------------------- /config-templates/dataservices/azure/azure-provider.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: pkg.crossplane.io/v1alpha1 2 | kind: ControllerConfig 3 | metadata: 4 | name: jet-azure-config 5 | spec: 6 | image: crossplane/provider-jet-azure-controller:v0.12.0 7 | args: ["-d"] 8 | --- 9 | apiVersion: pkg.crossplane.io/v1 10 | kind: Provider 11 | metadata: 12 | name: provider-jet-azure 13 | spec: 14 | package: crossplane/provider-jet-azure:v0.12.0 15 | controllerConfigRef: 16 | name: jet-azure-config 17 | -------------------------------------------------------------------------------- /config-templates/dataservices/azure/azuresql-postgres-class.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 2 | kind: ClusterInstanceClass 3 | metadata: 4 | name: postgres-azuresql-corp 5 | spec: 6 | description: 7 | short: Azure Postgresql corporate SLA database 8 | pool: 9 | kind: Secret 10 | labelSelector: 11 | matchLabels: 12 | services.apps.tanzu.vmware.com/class: postgres-azuresql-corp 13 | fieldSelector: type=connection.crossplane.io/v1alpha1 -------------------------------------------------------------------------------- /config-templates/dataservices/azure/azuresql-postgres-composition.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.crossplane.io/v1 2 | kind: Composition 3 | metadata: 4 | labels: 5 | provider: azure 6 | name: xpostgresqlinstances.bindable.gcp.database.example.org 7 | spec: 8 | compositeTypeRef: 9 | apiVersion: bindable.database.example.org/v1alpha1 10 | kind: XPostgreSQLInstance 11 | publishConnectionDetailsWithStoreConfigRef: 12 | name: default 13 | resources: 14 | - name: dbinstance 15 | base: 16 | apiVersion: dbforpostgresql.azure.jet.crossplane.io/v1alpha2 17 | kind: FlexibleServer 18 | spec: 19 | forProvider: 20 | administratorLogin: myPgAdmin 21 | administratorPasswordSecretRef: 22 | name: "" 23 | namespace: crossplane-system 24 | key: password 25 | location: UPDATE_IN_RUNTIME 26 | skuName: GP_Standard_D2s_v3 27 | version: "12" 28 | resourceGroupName: tap-psql-demo 29 | writeConnectionSecretToRef: 30 | 
namespace: crossplane-system 31 | connectionDetails: 32 | - name: type 33 | value: postgresql 34 | - name: provider 35 | value: azure 36 | - name: database 37 | value: postgres 38 | - name: username 39 | fromFieldPath: spec.forProvider.administratorLogin 40 | - name: password 41 | fromConnectionSecretKey: "attribute.administrator_password" 42 | - name: host 43 | fromFieldPath: status.atProvider.fqdn 44 | - name: port 45 | type: FromValue 46 | value: "5432" 47 | patches: 48 | - fromFieldPath: metadata.uid 49 | toFieldPath: spec.writeConnectionSecretToRef.name 50 | transforms: 51 | - string: 52 | fmt: '%s-postgresql' 53 | type: Format 54 | type: string 55 | type: FromCompositeFieldPath 56 | - type: FromCompositeFieldPath 57 | fromFieldPath: metadata.name 58 | toFieldPath: spec.forProvider.administratorPasswordSecretRef.name 59 | - fromFieldPath: spec.parameters.storageGB 60 | toFieldPath: spec.forProvider.storageMb 61 | type: FromCompositeFieldPath 62 | transforms: 63 | - type: math 64 | math: 65 | multiply: 1024 66 | - name: dbfwrule 67 | base: 68 | apiVersion: dbforpostgresql.azure.jet.crossplane.io/v1alpha2 69 | kind: FlexibleServerFirewallRule 70 | spec: 71 | forProvider: 72 | serverIdSelector: 73 | matchControllerRef: true 74 | #! not recommended for production deployments! 75 | startIpAddress: 0.0.0.0 76 | endIpAddress: 255.255.255.255 77 | - name: password 78 | base: 79 | apiVersion: kubernetes.crossplane.io/v1alpha1 80 | kind: Object 81 | spec: 82 | forProvider: 83 | manifest: 84 | apiVersion: secretgen.k14s.io/v1alpha1 85 | kind: Password 86 | metadata: 87 | name: "" 88 | namespace: crossplane-system 89 | spec: 90 | length: 64 91 | secretTemplate: 92 | type: Opaque 93 | stringData: 94 | password: $(value) 95 | patches: 96 | - type: FromCompositeFieldPath 97 | fromFieldPath: metadata.name 98 | toFieldPath: spec.forProvider.manifest.metadata.name -------------------------------------------------------------------------------- /config-templates/dataservices/azure/azuresql-postgres-instance.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: bindable.database.example.org/v1alpha1 2 | kind: PostgreSQLInstance 3 | metadata: 4 | name: inventory-db 5 | spec: 6 | parameters: 7 | #! 
supported storage sizes: 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768 8 | storageGB: 32 9 | compositionSelector: 10 | matchLabels: 11 | provider: azure 12 | publishConnectionDetailsTo: 13 | name: inventory-db 14 | metadata: 15 | labels: 16 | services.apps.tanzu.vmware.com/class: postgres-azuresql-corp -------------------------------------------------------------------------------- /config-templates/dataservices/azure/azuresql-postgres-rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: stk-secret-reader 5 | labels: 6 | servicebinding.io/controller: "true" 7 | rules: 8 | - apiGroups: 9 | - "" 10 | resources: 11 | - secrets 12 | verbs: 13 | - get 14 | - list 15 | - watch -------------------------------------------------------------------------------- /config-templates/dataservices/azure/azuresql-postgres-xrd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.crossplane.io/v1 2 | kind: CompositeResourceDefinition 3 | metadata: 4 | name: xpostgresqlinstances.bindable.database.example.org 5 | spec: 6 | claimNames: 7 | kind: PostgreSQLInstance 8 | plural: postgresqlinstances 9 | connectionSecretKeys: 10 | - type 11 | - provider 12 | - host 13 | - port 14 | - database 15 | - username 16 | - password 17 | group: bindable.database.example.org 18 | names: 19 | kind: XPostgreSQLInstance 20 | plural: xpostgresqlinstances 21 | versions: 22 | - name: v1alpha1 23 | referenceable: true 24 | schema: 25 | openAPIV3Schema: 26 | properties: 27 | spec: 28 | properties: 29 | parameters: 30 | properties: 31 | storageGB: 32 | type: integer 33 | required: 34 | - storageGB 35 | type: object 36 | required: 37 | - parameters 38 | type: object 39 | type: object 40 | served: true -------------------------------------------------------------------------------- /config-templates/dataservices/azure/direct-secret-binding.yaml: -------------------------------------------------------------------------------- 1 | # external-azure-db-binding-compatible.yaml 2 | --- 3 | apiVersion: v1 4 | kind: Secret 5 | metadata: 6 | name: external-azure-db-binding-compatible 7 | type: Opaque 8 | stringData: 9 | type: postgresql 10 | provider: azure 11 | host: CHANGE_ME 12 | port: "CHANGE_ME" 13 | database: "CHANGE_ME" 14 | username: "CHANGE_ME" 15 | password: "CHANGE_ME" 16 | -------------------------------------------------------------------------------- /config-templates/dataservices/gcp/cloudsql-postgres-class.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 2 | kind: ClusterInstanceClass 3 | metadata: 4 | name: postgres-cloudsql-corp 5 | spec: 6 | description: 7 | short: GCP CloudSQL Postgresql corporate SLA database 8 | pool: 9 | kind: Secret 10 | labelSelector: 11 | matchLabels: 12 | services.apps.tanzu.vmware.com/class: postgres-cloudsql-corp 13 | fieldSelector: type=connection.crossplane.io/v1alpha1 -------------------------------------------------------------------------------- /config-templates/dataservices/gcp/cloudsql-postgres-composition.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.crossplane.io/v1 2 | kind: Composition 3 | metadata: 4 | labels: 5 | provider: gcp 6 | name: xpostgresqlinstances.bindable.gcp.database.example.org 7 | spec: 8 | compositeTypeRef: 
9 | apiVersion: bindable.database.example.org/v1alpha1 10 | kind: XPostgreSQLInstance 11 | publishConnectionDetailsWithStoreConfigRef: 12 | name: default 13 | resources: 14 | - base: 15 | apiVersion: database.gcp.crossplane.io/v1beta1 16 | kind: CloudSQLInstance 17 | spec: 18 | forProvider: 19 | databaseVersion: POSTGRES_14 20 | region: UPDATE_IN_RUNTIME 21 | settings: 22 | dataDiskType: PD_SSD 23 | ipConfiguration: 24 | authorizedNetworks: 25 | - value: 0.0.0.0/0 26 | ipv4Enabled: true 27 | tier: db-custom-1-3840 28 | writeConnectionSecretToRef: 29 | namespace: crossplane-system 30 | connectionDetails: 31 | - name: type 32 | value: postgresql 33 | - name: provider 34 | value: gcp 35 | - name: database 36 | value: postgres 37 | - fromConnectionSecretKey: username 38 | - fromConnectionSecretKey: password 39 | - name: host 40 | fromConnectionSecretKey: endpoint 41 | - name: port 42 | type: FromValue 43 | value: "5432" 44 | name: cloudsqlinstance 45 | patches: 46 | - fromFieldPath: metadata.uid 47 | toFieldPath: spec.writeConnectionSecretToRef.name 48 | transforms: 49 | - string: 50 | fmt: '%s-postgresql' 51 | type: Format 52 | type: string 53 | type: FromCompositeFieldPath 54 | - fromFieldPath: spec.parameters.storageGB 55 | toFieldPath: spec.forProvider.settings.dataDiskSizeGb 56 | type: FromCompositeFieldPath -------------------------------------------------------------------------------- /config-templates/dataservices/gcp/cloudsql-postgres-rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: stk-secret-reader 5 | labels: 6 | servicebinding.io/controller: "true" 7 | rules: 8 | - apiGroups: 9 | - "" 10 | resources: 11 | - secrets 12 | verbs: 13 | - get 14 | - list 15 | - watch -------------------------------------------------------------------------------- /config-templates/dataservices/gcp/cloudsql-postgres-xrd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.crossplane.io/v1 2 | kind: CompositeResourceDefinition 3 | metadata: 4 | name: xpostgresqlinstances.bindable.database.example.org 5 | spec: 6 | claimNames: 7 | kind: PostgreSQLInstance 8 | plural: postgresqlinstances 9 | connectionSecretKeys: 10 | - type 11 | - provider 12 | - host 13 | - port 14 | - database 15 | - username 16 | - password 17 | group: bindable.database.example.org 18 | names: 19 | kind: XPostgreSQLInstance 20 | plural: xpostgresqlinstances 21 | versions: 22 | - name: v1alpha1 23 | referenceable: true 24 | schema: 25 | openAPIV3Schema: 26 | properties: 27 | spec: 28 | properties: 29 | parameters: 30 | properties: 31 | storageGB: 32 | type: integer 33 | required: 34 | - storageGB 35 | type: object 36 | required: 37 | - parameters 38 | type: object 39 | type: object 40 | served: true -------------------------------------------------------------------------------- /config-templates/dataservices/gcp/gcp-provider-config.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: gcp.crossplane.io/v1beta1 4 | kind: ProviderConfig 5 | metadata: 6 | name: default 7 | spec: 8 | projectID: #@ data.values.clouds.gcp.projectID 9 | credentials: 10 | source: Secret 11 | secretRef: 12 | namespace: crossplane-system 13 | name: gcp-secret 14 | key: creds -------------------------------------------------------------------------------- 
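
The class, composition, XRD and RBAC templates above (AWS RDS, Azure Flexible Server, GCP CloudSQL, plus the on-cluster RabbitMQ set further down) are consumed by application teams through class claims; the workload templates later in this repo reference claims named `postgres-claim` and `rabbitmq-claim`. A minimal sketch of creating such a claim by hand, assuming the rendered class has already been applied and using an illustrative namespace name:

```bash
# Claim the AWS RDS class defined in rds-postgres-class.yaml; "team-ns" stands in for one of the
# namespaces configured under apps_namespaces in demo-values.yaml.
kubectl apply -n team-ns -f - <<'EOF'
apiVersion: services.apps.tanzu.vmware.com/v1alpha1
kind: ClassClaim
metadata:
  name: postgres-claim
spec:
  classRef:
    name: postgres-rds-corp
EOF

# The tanzu services plugin should offer an equivalent shortcut, along the lines of:
#   tanzu service class-claim create postgres-claim --class postgres-rds-corp -n team-ns
```

The mood-sensors workload further down then binds to this claim through its `serviceClaims` entry.
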
/config-templates/dataservices/gcp/gcp-provider.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: pkg.crossplane.io/v1 2 | kind: Provider 3 | metadata: 4 | name: provider-gcp 5 | spec: 6 | package: xpkg.upbound.io/upbound/provider-gcp:v0.28.0 -------------------------------------------------------------------------------- /config-templates/dataservices/oncluster/corp-rabbitmq-class.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 2 | kind: ClusterInstanceClass 3 | metadata: 4 | name: rabbitmq-operator-corp 5 | spec: 6 | description: 7 | short: On-demand RabbitMQ clusters with corporate SLA 8 | provisioner: 9 | crossplane: 10 | compositeResourceDefinition: xrabbitmqclusters.messaging.bigcorp.org -------------------------------------------------------------------------------- /config-templates/dataservices/oncluster/corp-rabbitmq-composition.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.crossplane.io/v1 2 | kind: Composition 3 | metadata: 4 | name: xrabbitmqclusters.messaging.bigcorp.org 5 | spec: 6 | compositeTypeRef: 7 | apiVersion: messaging.bigcorp.org/v1alpha1 8 | kind: XRabbitmqCluster 9 | resources: 10 | - base: 11 | apiVersion: kubernetes.crossplane.io/v1alpha1 12 | kind: Object 13 | spec: 14 | forProvider: 15 | manifest: 16 | apiVersion: rabbitmq.com/v1beta1 17 | kind: RabbitmqCluster 18 | metadata: 19 | namespace: rmq-corp 20 | spec: 21 | terminationGracePeriodSeconds: 0 22 | replicas: 1 23 | persistence: 24 | storage: 1Gi 25 | resources: 26 | requests: 27 | cpu: 200m 28 | memory: 1Gi 29 | limits: 30 | cpu: 300m 31 | memory: 1Gi 32 | rabbitmq: 33 | envConfig: | 34 | RABBITMQ_LOGS="" 35 | additionalConfig: | 36 | log.console = true 37 | log.console.level = debug 38 | log.console.formatter = json 39 | log.console.formatter.json.field_map = verbosity:v time msg domain file line pid level:- 40 | log.console.formatter.json.verbosity_map = debug:7 info:6 notice:5 warning:4 error:3 critical:2 alert:1 emergency:0 41 | log.console.formatter.time_format = epoch_usecs 42 | connectionDetails: 43 | - apiVersion: v1 44 | kind: Secret 45 | namespace: rmq-corp 46 | fieldPath: data.provider 47 | toConnectionSecretKey: provider 48 | - apiVersion: v1 49 | kind: Secret 50 | namespace: rmq-corp 51 | fieldPath: data.type 52 | toConnectionSecretKey: type 53 | - apiVersion: v1 54 | kind: Secret 55 | namespace: rmq-corp 56 | fieldPath: data.host 57 | toConnectionSecretKey: host 58 | - apiVersion: v1 59 | kind: Secret 60 | namespace: rmq-corp 61 | fieldPath: data.port 62 | toConnectionSecretKey: port 63 | - apiVersion: v1 64 | kind: Secret 65 | namespace: rmq-corp 66 | fieldPath: data.username 67 | toConnectionSecretKey: username 68 | - apiVersion: v1 69 | kind: Secret 70 | namespace: rmq-corp 71 | fieldPath: data.password 72 | toConnectionSecretKey: password 73 | writeConnectionSecretToRef: 74 | namespace: rmq-corp 75 | connectionDetails: 76 | - fromConnectionSecretKey: provider 77 | - fromConnectionSecretKey: type 78 | - fromConnectionSecretKey: host 79 | - fromConnectionSecretKey: port 80 | - fromConnectionSecretKey: username 81 | - fromConnectionSecretKey: password 82 | patches: 83 | - fromFieldPath: metadata.name 84 | toFieldPath: spec.forProvider.manifest.metadata.name 85 | type: FromCompositeFieldPath 86 | - fromFieldPath: spec.replicas 87 | toFieldPath: 
spec.forProvider.manifest.spec.replicas 88 | type: FromCompositeFieldPath 89 | - fromFieldPath: spec.storageGB 90 | toFieldPath: spec.forProvider.manifest.spec.persistence.storage 91 | transforms: 92 | - string: 93 | fmt: '%dGi' 94 | type: Format 95 | type: string 96 | type: FromCompositeFieldPath 97 | - fromFieldPath: metadata.name 98 | toFieldPath: spec.writeConnectionSecretToRef.name 99 | transforms: 100 | - string: 101 | fmt: '%s-rmq' 102 | type: Format 103 | type: string 104 | type: FromCompositeFieldPath 105 | - fromFieldPath: metadata.name 106 | toFieldPath: spec.connectionDetails[0].name 107 | transforms: 108 | - string: 109 | fmt: '%s-default-user' 110 | type: Format 111 | type: string 112 | type: FromCompositeFieldPath 113 | - fromFieldPath: metadata.name 114 | toFieldPath: spec.connectionDetails[1].name 115 | transforms: 116 | - string: 117 | fmt: '%s-default-user' 118 | type: Format 119 | type: string 120 | type: FromCompositeFieldPath 121 | - fromFieldPath: metadata.name 122 | toFieldPath: spec.connectionDetails[2].name 123 | transforms: 124 | - string: 125 | fmt: '%s-default-user' 126 | type: Format 127 | type: string 128 | type: FromCompositeFieldPath 129 | - fromFieldPath: metadata.name 130 | toFieldPath: spec.connectionDetails[3].name 131 | transforms: 132 | - string: 133 | fmt: '%s-default-user' 134 | type: Format 135 | type: string 136 | type: FromCompositeFieldPath 137 | - fromFieldPath: metadata.name 138 | toFieldPath: spec.connectionDetails[4].name 139 | transforms: 140 | - string: 141 | fmt: '%s-default-user' 142 | type: Format 143 | type: string 144 | type: FromCompositeFieldPath 145 | - fromFieldPath: metadata.name 146 | toFieldPath: spec.connectionDetails[5].name 147 | transforms: 148 | - string: 149 | fmt: '%s-default-user' 150 | type: Format 151 | type: string 152 | type: FromCompositeFieldPath 153 | readinessChecks: 154 | - type: MatchString 155 | fieldPath: status.atProvider.manifest.status.conditions[1].status # ClusterAvailable 156 | matchString: "True" -------------------------------------------------------------------------------- /config-templates/dataservices/oncluster/corp-rabbitmq-rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: rmqcluster-read-writer 5 | labels: 6 | services.tanzu.vmware.com/aggregate-to-provider-kubernetes: "true" 7 | rules: 8 | - apiGroups: 9 | - rabbitmq.com 10 | resources: 11 | - rabbitmqclusters 12 | verbs: 13 | - "*" 14 | --- 15 | apiVersion: rbac.authorization.k8s.io/v1 16 | kind: ClusterRole 17 | metadata: 18 | name: app-operator-claim-class-bigcorp-rabbitmq 19 | labels: 20 | apps.tanzu.vmware.com/aggregate-to-app-operator-cluster-access: "true" 21 | rules: 22 | - apiGroups: 23 | - services.apps.tanzu.vmware.com 24 | resources: 25 | - clusterinstanceclasses 26 | resourceNames: 27 | - bigcorp-rabbitmq 28 | verbs: 29 | - claim -------------------------------------------------------------------------------- /config-templates/dataservices/oncluster/corp-rabbitmq-xrd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.crossplane.io/v1 2 | kind: CompositeResourceDefinition 3 | metadata: 4 | name: xrabbitmqclusters.messaging.bigcorp.org 5 | spec: 6 | connectionSecretKeys: 7 | - host 8 | - password 9 | - port 10 | - provider 11 | - type 12 | - username 13 | group: messaging.bigcorp.org 14 | names: 15 | kind: XRabbitmqCluster 16 | plural: 
xrabbitmqclusters 17 | versions: 18 | - name: v1alpha1 19 | referenceable: true 20 | schema: 21 | openAPIV3Schema: 22 | properties: 23 | spec: 24 | description: The OpenAPIV3Schema of this Composite Resource Definition. 25 | properties: 26 | replicas: 27 | description: The desired number of replicas forming the cluster 28 | type: integer 29 | storageGB: 30 | description: The desired storage capacity of a single replica, in GB. 31 | type: integer 32 | type: object 33 | type: object 34 | served: true -------------------------------------------------------------------------------- /config-templates/demo-values.yaml: -------------------------------------------------------------------------------- 1 | clusters: 2 | 3 | view: # TAP view profile installed 4 | name: 5 | provider: #accepted values: aks,eks,gke 6 | nodes: 4 7 | region: #accepted values for AKS provider: westus,ukwest,germanynorth,eastasia,australiasoutheast 8 | #accepted values for EKS provider: us-east-1,us-west-1,eu-west-2,eu-central-1,ap-southeast-1,ap-southeast-2 9 | #accepted values for GKE provider: us-central1,europe-west2,europe-west3,asia-southeast1,australia-southeast1 10 | 11 | #selected region is also used to provision cloud postgres (rabbitmq is always oncluster) 12 | 13 | dev: # TAP iterate profile installed 14 | name: 15 | provider: 16 | nodes: 4 17 | region: 18 | 19 | stage: #TAP build profile installed 20 | name: 21 | provider: 22 | nodes: 6 23 | region: 24 | 25 | prod1: # TAP run profile installed 26 | name: 27 | provider: 28 | nodes: 3 29 | region: 30 | 31 | prod2: # TAP run profile installed 32 | name: 33 | provider: 34 | nodes: 3 35 | region: 36 | 37 | brownfield: #for brownfield apis demo, SCGW and TSM, no TAP installed 38 | name: 39 | provider: 40 | nodes: 2 41 | region: 42 | 43 | clouds: 44 | azure: 45 | subscriptionID: 46 | resourceGroup: 47 | nodeType: Standard_DS3_v2 48 | aws: 49 | accountID: 50 | IAMuser: 51 | instanceType: t3.xlarge 52 | gcp: 53 | projectID: 54 | machineType: e2-standard-4 55 | 56 | private_registry: 57 | host: 58 | username: 59 | password: 60 | repo: 61 | 62 | tanzu_network: 63 | username: 64 | password: 65 | 66 | tap: 67 | 68 | tapVersion: 1.6.1 69 | carvelBundle: sha256:54e516b5d088198558d23cababb3f907cd8073892cacfb2496bb9d66886efe15 70 | scanTemplates: #accepted values: carbonblack-private-image-scan-template, snyk-private-image-scan-template, private-image-scan-template 71 | portal: snyk-private-image-scan-template 72 | sensors: carbonblack-private-image-scan-template 73 | doctor: snyk-private-image-scan-template 74 | predictor: private-image-scan-template 75 | painter: carbonblack-private-image-scan-template 76 | appsIngressIssuer: # used for workloads, accepted values: tap-ingress-selfsigned or your own cert (in which case verify if you need to update .config/secrets/cluster-issue-apps.yaml) 77 | isvImg: #BYO image name, must be pushed to private_registry/isvs repo 78 | 79 | snyk: 80 | token: 81 | 82 | openai: 83 | token: 84 | 85 | carbonblack: 86 | cbc_api_id: 87 | cbc_api_key: 88 | cbc_org_key: 89 | cbc_saas_url: 90 | 91 | apps_namespaces: 92 | dev1: #for mood-portal single dev workload 93 | dev2: #for mood-sensors single dev workload 94 | team: #for the DevXmood dev team (mood-portal, mood-sensors, mood-doctor and mood-predictor workloads) 95 | stageProd: #for promoting the DevXmood business application to stage and prod 96 | 97 | gitops: 98 | 99 | server: #e.g. https://github.com 100 | owner: 101 | token: #must have read/write permissions for all repos below repo 102 |
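
A note on how this values file is consumed: the ytt-templated files in this repo (the provider configs above and the secrets and TAP profiles below all start with `#@ load("@ytt:data", "data")`) are rendered against the filled-in copy of this template, and the demo scripts then read the rendered output from the `.config` folder (e.g. `.config/demo-values.yaml`, `.config/dataservices/...`). A minimal sketch of what that rendering amounts to, assuming plain ytt; the output path is illustrative:

```bash
# Render one template against the filled-in values file using the ytt binary bundled in scripts/carvel.
./scripts/carvel/ytt \
  -f config-templates/tap-profiles/tap-dev.yaml \
  --data-values-file .config/demo-values.yaml \
  > .config/tap-profiles/tap-dev.yaml
```
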
resources: # repo must be created prior to installing the demo 103 | repo: 104 | # subPath needs to be created and contain at least the following resources: 105 | # 1. scan-policy.yaml 106 | # 2. tekton-pipeline-golang.yaml 107 | # 3. tekton-pipeline-java.yaml 108 | # 4. tekton-pipeline-nodejs.yaml 109 | # 5. snyk-install.yaml !!make sure the snyk_values["metadataStore"] = {"url" value is updated 110 | # 6. carbonblack-install.yaml !!make sure the snyk_values["metadataStore"] = {"url" value is updated 111 | # 112 | # see example at https://github.com/dektlong/gitops-resources/tree/main/resources 113 | subPath: 114 | 115 | deliverables: # repos must be created prior to installing the demo and cloned to the same path as the demo install 116 | devRepo: # dev and team supplychains deliverables will be pushed to /config 117 | stageRepo: # stage supplychain deliverables will be pushed to /config 118 | 119 | 120 | 121 | 122 | dns: 123 | domain: 124 | sysSubDomain: 125 | devSubDomain: 126 | stageSubDomain: 127 | prod1SubDomain: 128 | prod2SubDomain: 129 | godaddyApiKey: 130 | godaddyApiSecret: 131 | 132 | tmc: 133 | apiToken: 134 | clusterGroup: 135 | -------------------------------------------------------------------------------- /config-templates/secrets/carbonblack-creds.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: v1 4 | kind: Secret 5 | metadata: 6 | name: carbonblack-config-secret 7 | stringData: 8 | cbc_api_id: #@ data.values.carbonblack.cbc_api_id 9 | cbc_api_key: #@ data.values.carbonblack.cbc_api_key 10 | cbc_org_key: #@ data.values.carbonblack.cbc_org_key 11 | cbc_saas_url: #@ data.values.carbonblack.cbc_saas_url 12 | -------------------------------------------------------------------------------- /config-templates/secrets/git-creds-sa-overlay.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: git-creds-sa-overlay 5 | namespace: tap-install 6 | annotations: 7 | kapp.k14s.io/change-rule: "delete after deleting tap" 8 | stringData: 9 | workload-git-auth-overlay.yaml: | 10 | #@ load("@ytt:overlay", "overlay") 11 | #@overlay/match by=overlay.subset({"apiVersion": "v1", "kind": "ServiceAccount","metadata":{"name":"default"}}), expects="0+" 12 | --- 13 | secrets: 14 | #@overlay/append 15 | - name: git-creds -------------------------------------------------------------------------------- /config-templates/secrets/git-creds.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: v1 4 | kind: Secret 5 | metadata: 6 | name: git-creds 7 | annotations: 8 | tekton.dev/git-0: #@ data.values.gitops.server 9 | type: kubernetes.io/basic-auth 10 | stringData: 11 | password: #@ data.values.gitops.token 12 | username: #@ data.values.gitops.owner -------------------------------------------------------------------------------- /config-templates/secrets/ingress-issuer-apps.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: cert-manager.io/v1 4 | kind: ClusterIssuer 5 | metadata: 6 | name: #@ data.values.tap.appsIngressIssuer 7 | spec: 8 | acme: 9 | email: #@ "cert-notification@" + data.values.dns.domain 10 | privateKeySecretRef: 11 | name: #@ data.values.tap.appsIngressIssuer 12 | server: https://acme-v02.api.letsencrypt.org/directory 13 | 
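
This ClusterIssuer template (its HTTP01 solver continues just below) points workload certificates at the Let's Encrypt production ACME endpoint, with the issuer name taken from `tap.appsIngressIssuer`. Once rendered and applied, its readiness can be checked before workloads start requesting certificates — a small sketch, where the issuer name is illustrative and should match whatever `appsIngressIssuer` is set to:

```bash
# Confirm cert-manager has registered the ACME account and marked the issuer Ready.
kubectl get clusterissuer my-apps-issuer \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

# Issued certificates can then be inspected across namespaces.
kubectl get certificates -A
```
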
solvers: 14 | - http01: 15 | ingress: 16 | class: contour -------------------------------------------------------------------------------- /config-templates/secrets/ingress-issuer-sys.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: v1 4 | kind: ServiceAccount 5 | metadata: 6 | name: tap-acme-http01-solver 7 | namespace: tap-gui 8 | imagePullSecrets: 9 | - name: acme-pull 10 | --- 11 | apiVersion: v1 12 | kind: ServiceAccount 13 | metadata: 14 | name: tap-acme-http01-solver 15 | namespace: metadata-store 16 | imagePullSecrets: 17 | - name: acme-pull 18 | --- 19 | apiVersion: v1 20 | kind: ServiceAccount 21 | metadata: 22 | name: tap-acme-http01-solver 23 | namespace: app-live-view 24 | imagePullSecrets: 25 | - name: acme-pull 26 | --- 27 | apiVersion: cert-manager.io/v1 28 | kind: ClusterIssuer 29 | metadata: 30 | name: #@ data.values.tap.sysIngressIssuer 31 | spec: 32 | acme: 33 | email: #@ "cert-notification@" + data.values.dns.domain 34 | privateKeySecretRef: 35 | name: #@ data.values.tap.sysIngressIssuer 36 | server: https://acme-v02.api.letsencrypt.org/directory 37 | solvers: 38 | - http01: 39 | ingress: 40 | class: contour 41 | podTemplate: 42 | spec: 43 | serviceAccountName: tap-acme-http01-solver 44 | 45 | -------------------------------------------------------------------------------- /config-templates/secrets/openai-creds.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: v1 4 | kind: Secret 5 | metadata: 6 | name: openai-binding 7 | type: servicebinding.io/openai 8 | stringData: 9 | type: openai 10 | provider: sample 11 | key: #@ data.values.openai.token 12 | --- 13 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 14 | kind: ResourceClaim 15 | metadata: 16 | name: openai-claim 17 | spec: 18 | ref: 19 | apiVersion: v1 20 | kind: Secret 21 | name: openai-binding 22 | -------------------------------------------------------------------------------- /config-templates/secrets/snyk-creds.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | apiVersion: v1 4 | kind: Secret 5 | metadata: 6 | name: snyk-token-secret 7 | data: 8 | snyk_token: #@ data.values.snyk.token 9 | -------------------------------------------------------------------------------- /config-templates/secrets/viewer-rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: tap-gui 5 | --- 6 | apiVersion: v1 7 | kind: ServiceAccount 8 | metadata: 9 | namespace: tap-gui 10 | name: tap-gui-viewer 11 | --- 12 | apiVersion: rbac.authorization.k8s.io/v1 13 | kind: ClusterRoleBinding 14 | metadata: 15 | name: tap-gui-read-k8s 16 | subjects: 17 | - kind: ServiceAccount 18 | namespace: tap-gui 19 | name: tap-gui-viewer 20 | roleRef: 21 | kind: ClusterRole 22 | name: k8s-reader 23 | apiGroup: rbac.authorization.k8s.io 24 | --- 25 | apiVersion: rbac.authorization.k8s.io/v1 26 | kind: ClusterRole 27 | metadata: 28 | name: k8s-reader 29 | rules: 30 | - apiGroups: [''] 31 | resources: ['pods', 'pods/log', 'services', 'configmaps', 'limitranges'] 32 | verbs: ['get', 'watch', 'list'] 33 | - apiGroups: ['metrics.k8s.io'] 34 | resources: ['pods'] 35 | verbs: ['get', 'watch', 'list'] 36 | - apiGroups: ['apps'] 37 | resources: ['deployments', 'replicasets', 'statefulsets', 
'daemonsets'] 38 | verbs: ['get', 'watch', 'list'] 39 | - apiGroups: ['autoscaling'] 40 | resources: ['horizontalpodautoscalers'] 41 | verbs: ['get', 'watch', 'list'] 42 | - apiGroups: ['networking.k8s.io'] 43 | resources: ['ingresses'] 44 | verbs: ['get', 'watch', 'list'] 45 | - apiGroups: ['networking.internal.knative.dev'] 46 | resources: ['serverlessservices'] 47 | verbs: ['get', 'watch', 'list'] 48 | - apiGroups: [ 'autoscaling.internal.knative.dev' ] 49 | resources: [ 'podautoscalers' ] 50 | verbs: [ 'get', 'watch', 'list' ] 51 | - apiGroups: ['serving.knative.dev'] 52 | resources: 53 | - configurations 54 | - revisions 55 | - routes 56 | - services 57 | verbs: ['get', 'watch', 'list'] 58 | - apiGroups: ['carto.run'] 59 | resources: 60 | - clusterconfigtemplates 61 | - clusterdeliveries 62 | - clusterdeploymenttemplates 63 | - clusterimagetemplates 64 | - clusterruntemplates 65 | - clustersourcetemplates 66 | - clustersupplychains 67 | - clustertemplates 68 | - deliverables 69 | - runnables 70 | - workloads 71 | verbs: ['get', 'watch', 'list'] 72 | - apiGroups: ['source.toolkit.fluxcd.io'] 73 | resources: 74 | - gitrepositories 75 | verbs: ['get', 'watch', 'list'] 76 | - apiGroups: ['source.apps.tanzu.vmware.com'] 77 | resources: 78 | - imagerepositories 79 | - mavenartifacts 80 | verbs: ['get', 'watch', 'list'] 81 | - apiGroups: ['conventions.apps.tanzu.vmware.com'] 82 | resources: 83 | - podintents 84 | verbs: ['get', 'watch', 'list'] 85 | - apiGroups: ['kpack.io'] 86 | resources: 87 | - images 88 | - builds 89 | verbs: ['get', 'watch', 'list'] 90 | - apiGroups: ['scanning.apps.tanzu.vmware.com'] 91 | resources: 92 | - sourcescans 93 | - imagescans 94 | - scanpolicies 95 | verbs: ['get', 'watch', 'list'] 96 | - apiGroups: ['tekton.dev'] 97 | resources: 98 | - taskruns 99 | - pipelineruns 100 | verbs: ['get', 'watch', 'list'] 101 | - apiGroups: ['kappctrl.k14s.io'] 102 | resources: 103 | - apps 104 | verbs: ['get', 'watch', 'list'] 105 | - apiGroups: [ 'batch' ] 106 | resources: [ 'jobs', 'cronjobs' ] 107 | verbs: [ 'get', 'watch', 'list' ] 108 | - apiGroups: ['conventions.carto.run'] 109 | resources: 110 | - podintents 111 | verbs: ['get', 'watch', 'list'] 112 | - apiGroups: ['appliveview.apps.tanzu.vmware.com'] 113 | resources: 114 | - resourceinspectiongrants 115 | verbs: ['get', 'watch', 'list', 'create'] -------------------------------------------------------------------------------- /config-templates/tap-profiles/tap-dev.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | profile: iterate 4 | shared: 5 | ingress_domain: #@ "{}.{}".format(data.values.dns.devSubDomain, data.values.dns.domain) 6 | ingress_issuer: #@ data.values.tap.appsIngressIssuer 7 | image_registry: 8 | project_path: #@ "{}/{}".format(data.values.private_registry.host, data.values.private_registry.repo) 9 | username: #@ data.values.private_registry.username 10 | password: #@ data.values.private_registry.password 11 | supply_chain: testing 12 | 13 | ootb_supply_chain_testing: 14 | gitops: 15 | server_address: #@ data.values.gitops.server 16 | repository_owner: #@ data.values.gitops.owner 17 | repository_name: #@ data.values.gitops.deliverables.devRepo 18 | commit_message: #@ "Generated by TAP on " + data.values.clusters.dev.name + " cluster" 19 | 20 | namespace_provisioner: 21 | controller: true 22 | additional_sources: 23 | - git: 24 | ref: origin/main 25 | subPath: #@ data.values.gitops.resources.subPath 26 | url: #@ 
"{}/{}/{}.git".format(data.values.gitops.server, data.values.gitops.owner, data.values.gitops.resources.repo) 27 | path: #@ "_ytt_lib/" + data.values.gitops.resources.subPath 28 | overlay_secrets: 29 | - name: git-creds-sa-overlay 30 | namespace: tap-install 31 | create_export: true 32 | cnrs: 33 | domain_template: '{{.Name}}.{{.Domain}}' 34 | contour: 35 | envoy: 36 | service: 37 | type: LoadBalancer 38 | appliveview_connector: 39 | activateSensitiveOperations: true 40 | backend: 41 | sslDeactivated: false 42 | host: #@ "{}.{}.{}".format("appliveview", data.values.dns.sysSubDomain, data.values.dns.domain) 43 | ingressEnabled: true 44 | caCertData: UPDATED_IN_RUNTIME 45 | api_auto_registration: 46 | tap_gui_url: #@ "{}.{}.{}".format("https://tap-gui", data.values.dns.sysSubDomain, data.values.dns.domain) 47 | excluded_packages: 48 | - learningcenter.tanzu.vmware.com 49 | - workshops.learningcenter.tanzu.vmware.com 50 | - tap-telemetry.tanzu.vmware.com 51 | ceip_policy_disclosed: true 52 | -------------------------------------------------------------------------------- /config-templates/tap-profiles/tap-prod1.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | profile: run 4 | shared: 5 | ingress_domain: #@ "{}.{}".format(data.values.dns.prod1SubDomain, data.values.dns.domain) 6 | ingress_issuer: #@ data.values.tap.appsIngressIssuer 7 | cnrs: 8 | domain_template: '{{.Name}}.{{.Domain}}' 9 | contour: 10 | envoy: 11 | service: 12 | type: LoadBalancer 13 | 14 | excluded_packages: 15 | - learningcenter.tanzu.vmware.com 16 | - workshops.learningcenter.tanzu.vmware.com 17 | - eventing.tanzu.vmware.com 18 | - tap-telemetry.tanzu.vmware.com 19 | - bitnami.services.tanzu.vmware.com 20 | ceip_policy_disclosed: true 21 | -------------------------------------------------------------------------------- /config-templates/tap-profiles/tap-prod2.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | profile: run 4 | shared: 5 | ingress_domain: #@ "{}.{}".format(data.values.dns.prod2SubDomain, data.values.dns.domain) 6 | ingress_issuer: #@ data.values.tap.appsIngressIssuer 7 | cnrs: 8 | domain_template: '{{.Name}}.{{.Domain}}' 9 | contour: 10 | envoy: 11 | service: 12 | type: LoadBalancer 13 | 14 | excluded_packages: 15 | - learningcenter.tanzu.vmware.com 16 | - workshops.learningcenter.tanzu.vmware.com 17 | - eventing.tanzu.vmware.com 18 | - tap-telemetry.tanzu.vmware.com 19 | - bitnami.services.tanzu.vmware.com 20 | ceip_policy_disclosed: true 21 | -------------------------------------------------------------------------------- /config-templates/tap-profiles/tap-stage.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | profile: build 4 | shared: 5 | ingress_domain: #@ "{}.{}".format(data.values.dns.stageSubDomain, data.values.dns.domain) 6 | ingress_issuer: #@ data.values.tap.appsIngressIssuer 7 | image_registry: 8 | project_path: #@ "{}/{}".format(data.values.private_registry.host, data.values.private_registry.repo) 9 | username: #@ data.values.private_registry.username 10 | password: #@ data.values.private_registry.password 11 | 12 | supply_chain: testing_scanning 13 | 14 | ootb_supply_chain_testing_scanning: 15 | gitops: 16 | server_address: #@ data.values.gitops.server 17 | repository_owner: #@ data.values.gitops.owner 18 | repository_name: #@ 
data.values.gitops.deliverables.stageRepo 19 | commit_message: #@ "Generated by TAP on " + data.values.clusters.stage.name + " cluster" 20 | 21 | namespace_provisioner: 22 | controller: true 23 | additional_sources: 24 | - git: 25 | ref: origin/main 26 | subPath: #@ data.values.gitops.resources.subPath 27 | url: #@ "{}/{}/{}.git".format(data.values.gitops.server, data.values.gitops.owner, data.values.gitops.resources.repo) 28 | path: #@ "_ytt_lib/" + data.values.gitops.resources.subPath 29 | overlay_secrets: 30 | - name: git-creds-sa-overlay 31 | namespace: tap-install 32 | create_export: true 33 | 34 | 35 | grype: 36 | namespace: "" 37 | metadataStore: 38 | url: #@ "{}.{}.{}".format("https://metadata-store", data.values.dns.sysSubDomain, data.values.dns.domain) 39 | caSecret: 40 | name: store-ca-cert 41 | importFromNamespace: metadata-store-secrets 42 | authSecret: 43 | name: store-auth-token 44 | importFromNamespace: metadata-store-secrets 45 | scanning: 46 | metadataStore: {} 47 | 48 | cnrs: 49 | domain_template: '{{.Name}}.{{.Domain}}' 50 | contour: 51 | envoy: 52 | service: 53 | type: LoadBalancer 54 | appliveview_connector: 55 | activateSensitiveOperations: true 56 | backend: 57 | sslDeactivated: false 58 | host: #@ "{}.{}.{}".format("appliveview", data.values.dns.sysSubDomain, data.values.dns.domain) 59 | ingressEnabled: true 60 | 61 | api_auto_registration: 62 | tap_gui_url: #@ "{}.{}.{}".format("https://tap-gui", data.values.dns.sysSubDomain, data.values.dns.domain) 63 | excluded_packages: 64 | - learningcenter.tanzu.vmware.com 65 | - workshops.learningcenter.tanzu.vmware.com 66 | - tap-telemetry.tanzu.vmware.com 67 | ceip_policy_disclosed: true 68 | -------------------------------------------------------------------------------- /config-templates/tap-profiles/tap-view.yaml: -------------------------------------------------------------------------------- 1 | #@ load("@ytt:data", "data") 2 | --- 3 | profile: view 4 | shared: 5 | ingress_domain: #@ "{}.{}".format(data.values.dns.sysSubDomain, data.values.dns.domain) 6 | ingress_issuer: #@ data.values.tap.sysIngressIssuer 7 | tap_gui: 8 | service_type: ClusterIP 9 | metadataStoreAutoconfiguration: true 10 | app_config: 11 | customize: 12 | custom_name: 'Dekt Developer Portal' 13 | organization: 14 | name: 'Dekt' 15 | auth: 16 | allowGuestAccess: true 17 | integrations: 18 | github: 19 | - host: github.com 20 | - host: gitlab.eng.vmware.com 21 | apiBaseUrl: https://gitlab.eng.vmware.com 22 | catalog: 23 | locations: 24 | - type: url 25 | target: https://github.com/tanzu-demo/tap-gui-catalogs/blob/main/blank/catalog-info.yaml 26 | - type: url 27 | target: https://github.com/dektlong/dekt-devx-demo/blob/main/backstage/catalog-info.yaml 28 | - type: url 29 | target: https://github.com/krisapplegate/tap-gui-catalogs/blob/main/tap-gui/api-test.yaml 30 | appLiveView: 31 | activateAppLiveViewSecureAccessControl: false 32 | backend: 33 | cors: 34 | origin: #@ "{}.{}.{}".format("https://tap-gui", data.values.dns.sysSubDomain, data.values.dns.domain) 35 | reading: 36 | allow: 37 | - host: petstore.swagger.io 38 | - host: #@ "{}.{}".format("*", data.values.dns.domain) 39 | kubernetes: 40 | serviceLocatorMethod: 41 | type: 'multiTenant' 42 | clusterLocatorMethods: 43 | - type: config 44 | clusters: 45 | - url: UPDATED_IN_RUNTIME 46 | name: UPDATED_IN_RUNTIME 47 | authProvider: serviceAccount 48 | serviceAccountToken: UPDATED_IN_RUNTIME 49 | skipTLSVerify: true 50 | - url: UPDATED_IN_RUNTIME 51 | name: UPDATED_IN_RUNTIME 52 | authProvider: 
serviceAccount 53 | serviceAccountToken: UPDATED_IN_RUNTIME 54 | skipTLSVerify: true 55 | envoy: 56 | service: 57 | type: LoadBalancer 58 | metadata_store: 59 | app_service_type: ClusterIP 60 | appliveview: 61 | ingressEnabled: true 62 | excluded_packages: 63 | - learningcenter.tanzu.vmware.com 64 | - workshops.learningcenter.tanzu.vmware.com 65 | - api-portal.tanzu.vmware.com 66 | - tap-telemetry.tanzu.vmware.com 67 | ceip_policy_disclosed: true 68 | -------------------------------------------------------------------------------- /config-templates/workloads/mood-doctor.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: carto.run/v1alpha1 2 | kind: Workload 3 | metadata: 4 | name: mood-doctor 5 | labels: 6 | apps.tanzu.vmware.com/workload-type: dekt-medical-worker 7 | app.kubernetes.io/part-of: mood-doctor 8 | apps.tanzu.vmware.com/has-tests: "true" 9 | spec: 10 | #source code 11 | source: 12 | git: 13 | url: https://github.com/dektlong/mood-doctor 14 | ref: 15 | branch: main 16 | 17 | serviceClaims: 18 | - name: reading 19 | ref: 20 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 21 | kind: ClassClaim 22 | name: rabbitmq-claim 23 | 24 | params: 25 | - name: testing_pipeline_matching_labels 26 | value: 27 | apps.tanzu.vmware.com/language: nodejs -------------------------------------------------------------------------------- /config-templates/workloads/mood-painter.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: carto.run/v1alpha1 2 | kind: Workload 3 | metadata: 4 | name: mood-painter 5 | labels: 6 | apps.tanzu.vmware.com/workload-type: server 7 | app.kubernetes.io/part-of: mood-painter 8 | spec: 9 | serviceClaims: 10 | - name: reading 11 | ref: 12 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 13 | kind: ClassClaim 14 | name: rabbitmq-claim -------------------------------------------------------------------------------- /config-templates/workloads/mood-portal.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: carto.run/v1alpha1 2 | kind: Workload 3 | metadata: 4 | name: mood-portal 5 | labels: 6 | apps.tanzu.vmware.com/workload-type: web 7 | app.kubernetes.io/part-of: mood-portal 8 | apps.tanzu.vmware.com/has-tests: "true" 9 | spec: 10 | source: 11 | git: 12 | url: https://github.com/dektlong/mood-portal 13 | ref: 14 | branch: main 15 | params: 16 | - name: testing_pipeline_matching_labels 17 | value: 18 | apps.tanzu.vmware.com/language: golang 19 | - name: annotations 20 | value: 21 | autoscaling.knative.dev/minScale: "1" 22 | 23 | 24 | #env var to connect to mood-sensors set in runtime -------------------------------------------------------------------------------- /config-templates/workloads/mood-predictor.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: carto.run/v1alpha1 2 | kind: Workload 3 | metadata: 4 | name: mood-predictor 5 | labels: 6 | apps.tanzu.vmware.com/workload-type: worker 7 | apps.tanzu.vmware.com/has-tests: "true" 8 | app.kubernetes.io/part-of: mood-predictor 9 | spec: 10 | source: 11 | git: 12 | url: https://github.com/dektlong/mood-predictor-openai 13 | ref: 14 | branch: main 15 | build: 16 | env: 17 | - name: BP_JVM_VERSION 18 | value: "17" 19 | params: 20 | - name: testing_pipeline_matching_labels 21 | value: 22 | apps.tanzu.vmware.com/language: java 23 | serviceClaims: 24 | - name: predictor-openai 25 | ref: 26 | apiVersion: 
services.apps.tanzu.vmware.com/v1alpha1 27 | kind: ResourceClaim 28 | name: openai-claim 29 | - name: reading 30 | ref: 31 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 32 | kind: ClassClaim 33 | name: rabbitmq-claim 34 | -------------------------------------------------------------------------------- /config-templates/workloads/mood-sensors.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: carto.run/v1alpha1 2 | kind: Workload 3 | metadata: 4 | name: mood-sensors 5 | labels: 6 | apps.tanzu.vmware.com/workload-type: web 7 | apps.tanzu.vmware.com/has-tests: "true" 8 | app.kubernetes.io/part-of: mood-sensors 9 | apps.tanzu.vmware.com/auto-configure-actuators: "true" 10 | apis.apps.tanzu.vmware.com/register-api: "true" 11 | spec: 12 | #source code 13 | source: 14 | git: 15 | url: https://github.com/dektlong/mood-sensors 16 | ref: 17 | branch: main 18 | 19 | serviceClaims: 20 | - name: inventory 21 | ref: 22 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 23 | kind: ClassClaim 24 | name: postgres-claim 25 | - name: reading 26 | ref: 27 | apiVersion: services.apps.tanzu.vmware.com/v1alpha1 28 | kind: ClassClaim 29 | name: rabbitmq-claim 30 | 31 | params: 32 | - name: testing_pipeline_matching_labels 33 | value: 34 | apps.tanzu.vmware.com/language: java 35 | - name: annotations 36 | value: 37 | autoscaling.knative.dev/minScale: "1" 38 | - name: api_descriptor 39 | value: 40 | type: openapi 41 | location: 42 | path: /api-docs 43 | owner: dekt-dev-team2 44 | system: devx-mood 45 | description: APIs to interact with the mood-sensors component -------------------------------------------------------------------------------- /devxmood_aria_hub.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dektlong/dekt-devx-demo/895007f962fe7c6789b90d3b951dea428aab4bf2/devxmood_aria_hub.png -------------------------------------------------------------------------------- /scripts/apigrid.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | PRIVATE_REPO=$(yq e .ootb_supply_chain_basic.registry.server ../.config/tap-values-full.yaml) 4 | PRIVATE_REPO_USER=$(yq e .buildservice.kp_default_repository_username ../.config/tap-values-full.yaml) 5 | PRIVATE_REPO_PASSWORD=$(yq e .buildservice.kp_default_repository_password ../.config/tap-values-full.yaml) 6 | DEMO_APP_GIT_REPO="https://github.com/dektlong/APIGridDemo" 7 | BUILDER_NAME="online-stores-builder" 8 | BACKEND_TBS_IMAGE="dekt4pets-backend" 9 | FRONTEND_TBS_IMAGE="dekt4pets-frontend" 10 | APPS_NAMESPACE=$(yq .tap.appNamespace ../.config/demo-values.yaml) 11 | DEV_SUB_DOMAIN=$(yq .cnrs.domain_name ../.config/tap-values-full.yaml | cut -d'.' -f 1) 12 | 13 | #init (assumes api-portal and api-gw are installed) 14 | init() { 15 | 16 | scripts/dektecho.sh err "!!! currently only working on AKS due to SCGW issue. Hit any key to continue..." 
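
For reference, the workload templates listed just above this script (mood-portal, mood-sensors, mood-doctor, mood-predictor, mood-painter) are applied to the namespaces configured under `apps_namespaces` during the demo. A single workload can also be applied and followed by hand with the apps plugin — a sketch with an illustrative namespace name:

```bash
# Apply one of the workload templates above; "team-ns" stands in for the apps_namespaces.team value.
tanzu apps workload apply -f config-templates/workloads/mood-sensors.yaml -n team-ns -y

# Follow the supply chain run, then inspect the resulting resources and URL.
tanzu apps workload tail mood-sensors -n team-ns --timestamp --since 10m
tanzu apps workload get mood-sensors -n team-ns
```
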
17 | read 18 | #dekt4pets images 19 | frontend_image_location=$PRIVATE_REPO/$PRIVATE_REGISTRY_APP_REPO/$FRONTEND_TBS_IMAGE:0.0.1 20 | backend_image_location=$PRIVATE_REPO/$PRIVATE_REGISTRY_APP_REPO/$BACKEND_TBS_IMAGE:0.0.1 21 | 22 | export REGISTRY_PASSWORD=$PRIVATE_REPO_PASSWORD 23 | kp secret create private-registry-creds \ 24 | --registry $PRIVATE_REPO \ 25 | --registry-user $PRIVATE_REPO_USER \ 26 | --namespace $APPS_NAMESPACE 27 | 28 | 29 | kp image create $BACKEND_TBS_IMAGE -n $APPS_NAMESPACE \ 30 | --tag $backend_image_location \ 31 | --git $DEMO_APP_GIT_REPO \ 32 | --sub-path ./workloads/dekt4pets/backend \ 33 | --git-revision main \ 34 | --wait 35 | 36 | 37 | kp image save $FRONTEND_TBS_IMAGE -n $APPS_NAMESPACE \ 38 | --tag $frontend_image_location \ 39 | --git $DEMO_APP_GIT_REPO \ 40 | --sub-path ./workloads/dekt4pets/frontend \ 41 | --git-revision main \ 42 | --wait 43 | 44 | #dekt4pets secrets 45 | kubectl create secret generic sso-secret --from-env-file=../.config/sso-creds.txt -n $APPS_NAMESPACE 46 | kubectl create secret generic jwk-secret --from-env-file=../.config/jwk-creds.txt -n $APPS_NAMESPACE 47 | kubectl create secret generic wavefront-secret --from-env-file=../.config/wavefront-creds.txt -n $APPS_NAMESPACE 48 | 49 | #dev gateway and apps 50 | kubectl apply -f workloads/dekt4pets/gateway/dekt4pets-gateway-dev.yaml -n $APPS_NAMESPACE 51 | create-backend 52 | create-frontend 53 | 54 | #ingress rules 55 | scripts/ingress-handler.sh add-scgw-ingress $DEV_SUB_DOMAIN 56 | 57 | } 58 | 59 | #create-backend 60 | create-backend() { 61 | 62 | scripts/dektecho.sh info "Deploy dekt4pets-backend (inner loop)" 63 | 64 | scripts/dektecho.sh info "1. Commit code changes to $DEMO_APP_GIT_REPO" 65 | 66 | touch dummy-commit.me 67 | 68 | git add . 69 | git commit -q -m "done backend inner-loop" dummy-commit.me 70 | git push 71 | 72 | scripts/dektecho.sh info "2. Apply development routes, mapping and micro-gateway" 73 | 74 | kubectl apply -f workloads/dekt4pets/backend/routes/dekt4pets-backend-mapping-dev.yaml -n $APPS_NAMESPACE 75 | kubectl apply -f workloads/dekt4pets/backend/routes/dekt4pets-backend-route../.config.yaml -n $APPS_NAMESPACE 76 | #dekt4pets-dev gateway instances created as part of demo build to save time 77 | 78 | scripts/dektecho.sh info "3. Create backend app via src-to-img supply-chain" 79 | 80 | #kp image patch $BACKEND_TBS_IMAGE -n $APPS_NAMESPACE 81 | 82 | #wait-for-tbs $BACKEND_TBS_IMAGE $APPS_NAMESPACE 83 | 84 | scripts/dektecho.sh info "Starting to tail build logs ..." 85 | 86 | kp build logs $BACKEND_TBS_IMAGE -n $APPS_NAMESPACE 87 | 88 | kubectl apply -f workloads/dekt4pets/backend/dekt4pets-backend.yaml -n $APPS_NAMESPACE 89 | } 90 | 91 | #create-frontend 92 | create-frontend() { 93 | 94 | scripts/dektecho.sh info "Create dekt4pets-frontend (inner loop)" 95 | echo " 1. Deploy app via src-to-img supply-chain" 96 | echo " 2. Apply development routes, mapping and micro-gateway" 97 | echo 98 | 99 | kp image patch $FRONTEND_TBS_IMAGE -n $APPS_NAMESPACE 100 | 101 | kustomize build workloads/dekt4pets/frontend | kubectl apply -f - 102 | 103 | } 104 | 105 | #patch-backend 106 | patch-backend() { 107 | 108 | scripts/dektecho.sh info "Commit code changes to $DEMO_APP_GIT_REPO" 109 | 110 | commit-adopter-check-api 111 | 112 | wait-for-tbs $BACKEND_TBS_IMAGE $APPS_NAMESPACE 113 | 114 | scripts/dektecho.sh info "Starting to tail build logs ..." 
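    # (sketch) recent builds can also be listed explicitly before tailing; 'kp build list' is assumed
    # to be available alongside the 'kp build logs' and 'kp image status' calls used in this script:
    #   kp build list $BACKEND_TBS_IMAGE -n $APPS_NAMESPACE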
115 | 116 | kp build logs $BACKEND_TBS_IMAGE -n $APPS_NAMESPACE 117 | 118 | scripts/dektecho.sh info "Apply changes to backend app, service and routes" 119 | 120 | kubectl delete -f workloads/dekt4pets/backend/dekt4pets-backend.yaml -n $APPS_NAMESPACE 121 | kubectl apply -f workloads/dekt4pets/backend/dekt4pets-backend.yaml -n $APPS_NAMESPACE 122 | kubectl apply -f workloads/dekt4pets/backend/routes/dekt4pets-backend-route../.config.yaml -n $APPS_NAMESPACE 123 | 124 | } 125 | 126 | #dekt4pets 127 | dekt4pets() { 128 | 129 | scripts/dektecho.sh info "Promote dekt4pets-backend to production (outer loop)" 130 | echo " 1. Deploy app via src-to-img supply-chain" 131 | echo " 2. Apply production routes, mapping and micro-gateway" 132 | echo 133 | kubectl apply -f workloads/dekt4pets/backend/routes/dekt4pets-backend-mapping.yaml -n $APPS_NAMESPACE 134 | 135 | scripts/dektecho.sh info "Promote dekt4pets-frontend to production (outer loop)" 136 | echo " 1. Deploy app via src-to-img supply-chain" 137 | echo " 2. Apply production routes, mapping and micro-gateway" 138 | echo 139 | kubectl apply -f workloads/dekt4pets/frontend/routes/dekt4pets-frontend-mapping.yaml -n $APPS_NAMESPACE 140 | 141 | scripts/dektecho.sh info "Create dekt4pets micro-gateway (w/ external traffic)" 142 | echo 143 | kubectl apply -f workloads/dekt4pets/gateway/dekt4pets-gateway.yaml -n $APPS_NAMESPACE 144 | 145 | } 146 | 147 | #adopter-check-workload 148 | adopter-check () { 149 | 150 | scripts/dektecho.sh info "Apply adopter-check TAP workload and deploy via src-to-url supply-chain" 151 | echo 152 | 153 | tanzu apps workload apply adopter-check -f adopter-check-workload.yaml -y -n $APPS_NAMESPACE 154 | 155 | #tanzu apps workload tail adopter-check --since 10m --timestamp -n dekt-apps 156 | 157 | tanzu apps workload get adopter-check -n dekt-apps 158 | 159 | } 160 | 161 | #commit-adopter-check-api 162 | commit-adopter-check-api () { 163 | 164 | git commit -m "add check-adpoter api route" workloads/dekt4pets/backend/routes/dekt4pets-backend-route../.config.yaml 165 | 166 | git commit -m "add check-adpoter function" workloads/dekt4pets/backend/src/main/java/io/spring/cloud/samples/animalrescue/backend/AnimalController.java 167 | 168 | git push 169 | } 170 | 171 | #cleanup 172 | cleanup() { 173 | 174 | kp secret delete private-registry-creds -n $APPS_NAMESPACE 175 | kubectl delete secret sso-secret -n $APPS_NAMESPACE 176 | kubectl delete secret jwk-secret -n $APPS_NAMESPACE 177 | kubectl delete secret wavefront-secret -n $APPS_NAMESPACE 178 | kp image delete $BACKEND_TBS_IMAGE -n $APPS_NAMESPACE 179 | kp image delete $FRONTEND_TBS_IMAGE -n $APPS_NAMESPACE 180 | kubectl delete -f workloads/dekt4pets/backend/dekt4pets-backend.yaml -n $APPS_NAMESPACE 181 | kubectl delete -f workloads/dekt4pets/backend/routes/dekt4pets-backend-mapping.yaml -n $APPS_NAMESPACE 182 | kubectl delete -f workloads/dekt4pets/frontend/routes/dekt4pets-frontend-mapping.yaml -n $APPS_NAMESPACE 183 | kubectl delete -f workloads/dekt4pets/gateway/dekt4pets-gateway.yaml -n $APPS_NAMESPACE 184 | kubectl delete -f workloads/dekt4pets/gateway/dekt4pets-gateway-dev.yaml -n $APPS_NAMESPACE 185 | kubectl delete -f workloads/dekt4pets/backend/routes/dekt4pets-backend-mapping-dev.yaml -n $APPS_NAMESPACE 186 | kubectl delete -f workloads/dekt4pets/backend/routes/dekt4pets-backend-route../.config.yaml -n $APPS_NAMESPACE 187 | kustomize build workloads/dekt4pets/frontend | kubectl delete -f - 188 | 189 | 190 | rm dummy-commit.me 191 | 192 | } 193 | 194 | #wait4tbs 195 | 
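# polls 'kp image status' once per second and returns as soon as a 'Building' line appears for the
# given image; invoked above as: wait-for-tbs $BACKEND_TBS_IMAGE $APPS_NAMESPACE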
wait-for-tbs() { 196 | image_name=$1 197 | namespace=$2 198 | 199 | status="" 200 | printf "Waiting for tanzu build service to start building $image_name image in namespace $namespace" 201 | while [ "$status" == "" ] 202 | do 203 | printf "." 204 | status="$(kp image status $image_name -n $namespace | grep 'Building')" 205 | sleep 1 206 | done 207 | echo 208 | } 209 | 210 | #usage 211 | usage() { 212 | 213 | scripts/dektecho.sh err "Incorrect usage. Please specify one of the following:" 214 | echo 215 | echo "${bold}init${normal} - deploy the dekt4pets api-grid core components and dekt4petsdev instances" 216 | echo 217 | echo "${bold}describe${normal} - describe the dekt4pets api-grid configs" 218 | echo 219 | echo "${bold}prod-deploy${normal} - run end-to-end dekt4pets deployment to production" 220 | echo 221 | echo "${bold}patch-backend${normal} - update the dekt4pets backend service and APIs" 222 | echo 223 | echo "${bold}adopter-check${normal} - deploy the adopter-check TAP workload using the default supply-chain" 224 | echo 225 | echo "${bold}cleanup${normal} - remove the dekt4pets api-grid core components, dekt4pets dev and prod instances" 226 | echo 227 | echo 228 | exit 229 | 230 | } 231 | 232 | #describe-apigrid 233 | describe-apigrid() { 234 | 235 | scripts/dektecho.sh info "Dekt4pets api-grid components" 236 | 237 | echo "${bold}Workload Images${normal}" 238 | echo 239 | kp images list -n $APPS_NAMESPACE 240 | echo "${bold}API Routes${normal}" 241 | echo 242 | kubectl get SpringCloudGatewayRouteConfig -n $APPS_NAMESPACE 243 | echo 244 | echo "${bold}API Mappings${normal}" 245 | echo 246 | kubectl get SpringCloudGatewayMapping -n $APPS_NAMESPACE 247 | echo 248 | echo "${bold}API Gateways${normal}" 249 | echo 250 | 251 | echo 252 | echo "${bold}Ingress rules${normal}" 253 | kubectl get ingress --field-selector metadata.name=dekt4pets-ingress -n $APPS_NAMESPACE 254 | echo 255 | } 256 | 257 | #################### main ####################### 258 | 259 | bold=$(tput bold) 260 | normal=$(tput sgr0) 261 | 262 | case $1 in 263 | 264 | init) 265 | init 266 | ;; 267 | prod-deploy) 268 | prod-deploy 269 | ;; 270 | patch-backend) 271 | patch-backend 272 | ;; 273 | adopter-check) 274 | adopter-check 275 | ;; 276 | describe) 277 | describe-apigrid 278 | ;; 279 | cleanup) 280 | cleanup 281 | ;; 282 | *) 283 | usage 284 | ;; 285 | esac 286 | -------------------------------------------------------------------------------- /scripts/carvel/imgpkg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dektlong/dekt-devx-demo/895007f962fe7c6789b90d3b951dea428aab4bf2/scripts/carvel/imgpkg -------------------------------------------------------------------------------- /scripts/carvel/install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | # Note script should be idempotent 6 | 7 | if [ "$(uname)" == "Darwin" ]; then 8 | if command -v xattr &>/dev/null; then 9 | xattr -d com.apple.quarantine imgpkg kapp kbld ytt 1>/dev/null 2>&1 || true 10 | fi 11 | fi 12 | 13 | ns_name=tanzu-cluster-essentials 14 | echo "## Creating namespace $ns_name" 15 | cat </dev/null; then 9 | xattr -d com.apple.quarantine imgpkg kapp kbld ytt 1>/dev/null 2>&1 || true 10 | fi 11 | fi 12 | 13 | ns_name=tanzu-cluster-essentials 14 | 15 | echo "## Deleting kapp-controller" 16 | ./kapp delete -a kapp-controller -n $ns_name "$@" 17 | 18 | echo "## Deleting secretgen-controller" 19 | 
./kapp delete -a secretgen-controller -n $ns_name "$@" 20 | 21 | echo "## Keeping namespace '${ns_name}'" 22 | -------------------------------------------------------------------------------- /scripts/carvel/ytt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/dektlong/dekt-devx-demo/895007f962fe7c6789b90d3b951dea428aab4bf2/scripts/carvel/ytt -------------------------------------------------------------------------------- /scripts/db-handler.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | 4 | #setup-rabbitmq-crossplane 5 | setup-rabbitmq-crossplane() { 6 | 7 | cluster_name=$1 8 | scripts/dektecho.sh status "Installing RabbitMQ operatror and setup crossplane in cluster $cluster_name" 9 | 10 | kapp -y deploy --app rmq-operator --file https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml 11 | 12 | kubectl apply -f .config/dataservices/oncluster/corp-rabbitmq-xrd.yaml 13 | 14 | kubectl apply -f .config/dataservices/oncluster/corp-rabbitmq-composition.yaml 15 | 16 | kubectl create ns rmq-corp 17 | 18 | kubectl apply -f .config/dataservices/oncluster/corp-rabbitmq-class.yaml 19 | 20 | kubectl apply -f .config/dataservices/oncluster/corp-rabbitmq-rbac.yaml 21 | 22 | 23 | } 24 | 25 | #setup-rds-crossplane 26 | setup-rds-crossplane () { 27 | 28 | cluster_name=$1 29 | export region=$2 30 | scripts/dektecho.sh status "Installing crossplane provider for AWS cluster $cluster_name in region $region and configure RDS Postgres access" 31 | 32 | kubectl apply -f .config/dataservices/aws/aws-provider.yaml 33 | kubectl wait "providers.pkg.crossplane.io/upbound-provider-aws-rds" --for=condition=Healthy --timeout=3m 34 | 35 | cat < output.yaml < $record_data" 57 | curl -s -X PUT \ 58 | -H "Authorization: sso-key $GODADDY_API_KEY:$GODADDY_API_SECRET" "https://api.godaddy.com/v1/domains/$DOMAIN/records/CNAME/*.$subDomain" \ 59 | -H "Content-Type: application/json" \ 60 | -d "[{\"data\": \"${record_data}\"}]" 61 | else 62 | record_data=$(kubectl get svc $ingress_service_name --namespace $ingress_namespace -o=jsonpath='{.status.loadBalancer.ingress[0].ip}') 63 | scripts/dektecho.sh status "updating this A record in GoDaddy: *.$subDomain.$DOMAIN --> $record_data" 64 | curl -s -X PUT \ 65 | -H "Authorization: sso-key $GODADDY_API_KEY:$GODADDY_API_SECRET" "https://api.godaddy.com/v1/domains/$DOMAIN/records/A/*.$subDomain" \ 66 | -H "Content-Type: application/json" \ 67 | -d "[{\"data\": \"${record_data}\"}]" 68 | fi 69 | 70 | } 71 | 72 | subDomain=$2 73 | cloudProvider=$3 74 | 75 | case $1 in 76 | update-tap-dns) 77 | update-dns-record "envoy" "tanzu-system-ingress" 78 | ;; 79 | add-brownfield-apis) 80 | create-ingress-rule "api-portal-ingress" "contour" "api-portal.$subDomain.$DOMAIN" "api-portal-server" "8080" "api-portal" 81 | create-ingress-rule "scg-openapi-ingress" "contour" "scg-openapi.$subDomain.$DOMAIN" "scg-operator" "80" "scgw-system" 82 | ;; 83 | add-scgw-ingress) 84 | scripts/install-nginx.sh 85 | update-dns-record "dekt-ingress-nginx-controller" "nginx-system" 86 | create-ingress-rule "dekt4pets-dev" "nginx" "dekt4pets-dev.$subDomain.$DOMAIN" "dekt4pets-gateway" "80" $DEMO_APPS_NS 87 | create-ingress-rule "dekt4pets" "nginx" "dekt4pets.$subDomain.$DOMAIN" "dekt4pets-gateway" "80" $DEMO_APPS_NS 88 | ;; 89 | gui-dev) 90 | create-ingress-rule "tap-gui-ingress" "contour" "tap-gui.$subDomain.$DOMAIN" "server" "7000" "tap-gui" 91 | ;; 92 
| acc) 93 | create-ingress-rule "acc-ingress" "contour" "acc.sys.dekt.io" "acc-server" "80" "accelerator-system" 94 | ;; 95 | *) 96 | scripts/dektecho.sh err "Incorrect usage. Please use 'update-tap-dns', 'add-brownfield-apis', 'add-scgw-ingress', 'gui-dev' or 'acc'" 97 | ;; 98 | esac 99 | 100 | 101 | 102 | 103 | 104 | -------------------------------------------------------------------------------- /scripts/k8s-handler.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | #azure configs 4 | AZURE_SUBSCRIPTION_ID=$(yq .clouds.azure.subscriptionID .config/demo-values.yaml) 5 | AZURE_RESOURCE_GROUP=$(yq .clouds.azure.resourceGroup .config/demo-values.yaml) 6 | AZURE_NODE_TYPE=$(yq .clouds.azure.nodeType .config/demo-values.yaml) 7 | #aws configs 8 | AWS_ACCOUNT_ID=$(yq .clouds.aws.accountID .config/demo-values.yaml) 9 | AWS_IAM_USER=$(yq .clouds.aws.IAMuser .config/demo-values.yaml) 10 | AWS_INSTANCE_TYPE=$(yq .clouds.aws.instanceType .config/demo-values.yaml) 11 | #gcp configs 12 | GCP_PROJECT_ID=$(yq .clouds.gcp.projectID .config/demo-values.yaml) 13 | GCP_MACHINE_TYPE=$(yq .clouds.gcp.machineType .config/demo-values.yaml) 14 | 15 | 16 | #create-aks-cluster 17 | create-aks-cluster() { 18 | 19 | cluster_name=$1 20 | location=$2 21 | number_of_nodes=$3 22 | 23 | scripts/dektecho.sh info "Creating AKS cluster $cluster_name in location $location with $number_of_nodes nodes" 24 | 25 | #make sure you run 'az login' and use WorkspaceOn SSO prior to running this 26 | 27 | az group create --name $AZURE_RESOURCE_GROUP --location $location 28 | 29 | az aks create --name $cluster_name \ 30 | --resource-group $AZURE_RESOURCE_GROUP \ 31 | --node-count $number_of_nodes \ 32 | --node-vm-size $AZURE_NODE_TYPE \ 33 | --generate-ssh-keys 34 | } 35 | 36 | #delete-aks-cluster 37 | delete-aks-cluster() { 38 | 39 | cluster_name=$1 40 | location=$2 41 | 42 | scripts/dektecho.sh status "Deleting resources of AKS cluster $cluster_name in location $location" 43 | 44 | az aks delete --name $cluster_name --resource-group $AZURE_RESOURCE_GROUP --yes 45 | } 46 | 47 | 48 | #create-eks-cluster 49 | create-eks-cluster () { 50 | 51 | #must run after setting access via 'aws configure' 52 | 53 | cluster_name=$1 54 | region=$2 55 | number_of_nodes=$3 56 | 57 | scripts/dektecho.sh info "Creating EKS cluster $cluster_name in region $region with $number_of_nodes nodes" 58 | 59 | eksctl create cluster \ 60 | --name $cluster_name \ 61 | --managed \ 62 | --region $region \ 63 | --instance-types $AWS_INSTANCE_TYPE \ 64 | --version 1.24 \ 65 | --with-oidc \ 66 | -N $number_of_nodes 67 | 68 | eksctl create iamserviceaccount \ 69 | --name ebs-csi-controller-sa \ 70 | --namespace kube-system \ 71 | --cluster $cluster_name \ 72 | --region $region \ 73 | --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \ 74 | --approve \ 75 | --role-only \ 76 | --role-name AmazonEKS_EBS_CSI_DriverRole-$cluster_name 77 | 78 | eksctl create addon \ 79 | --name aws-ebs-csi-driver \ 80 | --cluster $cluster_name \ 81 | --region $region \ 82 | --service-account-role-arn arn:aws:iam::$AWS_ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole-$cluster_name \ 83 | --force 84 | } 85 | 86 | #delete-eks-cluster 87 | delete-eks-cluster () { 88 | 89 | cluster_name=$1 90 | region=$2 91 | 92 | scripts/dektecho.sh status "Deleting resources of EKS cluster $cluster_name in region $region..."
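# Illustrative note (not part of the original script): before the forced delete below, one could
# optionally confirm the cluster exists and drop the matching local kubeconfig context, e.g.:
#
#   eksctl get cluster --name "$cluster_name" --region "$region" || return 0
#   kubectl config delete-context "$cluster_name" 2>/dev/null || true
#
# The eksctl command that follows removes the EKS control plane, its managed nodegroup and the
# CloudFormation stacks that eksctl created for this cluster.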
93 | 94 | eksctl delete cluster --name $cluster_name --region $region --force 95 | } 96 | 97 | #create-gke-cluster 98 | create-gke-cluster () { 99 | 100 | cluster_name=$1 101 | region=$2 102 | number_of_nodes=$3 103 | 104 | scripts/dektecho.sh info "Creating GKE cluster $cluster_name in region $region with $number_of_nodes nodes" 105 | 106 | gcloud container clusters create $cluster_name \ 107 | --region $region \ 108 | --project $GCP_PROJECT_ID \ 109 | --num-nodes $number_of_nodes \ 110 | --machine-type $GCP_MACHINE_TYPE 111 | 112 | gcloud container clusters get-credentials $cluster_name --region $region --project $GCP_PROJECT_ID 113 | 114 | gcloud components install gke-gcloud-auth-plugin 115 | } 116 | 117 | #delete-gke-cluster 118 | delete-gke-cluster () { 119 | 120 | cluster_name=$1 121 | region=$2 122 | 123 | scripts/dektecho.sh status "Deleting resources of GKE cluster $cluster_name in region $region" 124 | 125 | gcloud container clusters delete $cluster_name \ 126 | --region $region \ 127 | --project $GCP_PROJECT_ID \ 128 | --quiet 129 | 130 | kubectl config delete-context $cluster_name 131 | 132 | } 133 | 134 | 135 | #install-krew 136 | install-krew () { 137 | set -x; cd "$(mktemp -d)" && 138 | OS="$(uname | tr '[:upper:]' '[:lower:]')" && 139 | ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" && 140 | KREW="krew-${OS}_${ARCH}" && 141 | curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" && 142 | tar zxvf "${KREW}.tar.gz" && 143 | ./"${KREW}" install krew 144 | 145 | export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH" 146 | 147 | } 148 | 149 | 150 | #wait-for-all-running-pods 151 | wait-for-all-running-pods () { 152 | 153 | namespace=$1 154 | status="" 155 | printf "Waiting for all pods in namespace $namespace to be in 'running' state ." 156 | while [ "$status" == "" ] 157 | do 158 | printf "." 159 | status="$(kubectl get pods -n $namespace -o=json | grep 'running')" 160 | sleep 1 161 | done 162 | echo 163 | } 164 | 165 | #################### main ####################### 166 | 167 | #incorrect-usage 168 | incorrect-usage() { 169 | 170 | scripts/dektecho.sh err "Incorrect usage. 
Please specify:" 171 | echo " create [aks/eks/gke cluster-name region number-of-nodes]" 172 | echo " delete [aks/eks/gke cluster-name region]" 173 | echo " set-context [aks/eks/gke cluster-name region]" 174 | exit 175 | } 176 | 177 | operation=$1 178 | clusterProvider=$2 179 | clusterName=$3 180 | region=$4 181 | numOfNodes=$5 182 | 183 | case $operation in 184 | create) 185 | case $clusterProvider in 186 | aks) 187 | create-aks-cluster $clusterName $region $numOfNodes 188 | ;; 189 | eks) 190 | create-eks-cluster $clusterName $region $numOfNodes 191 | ;; 192 | gke) 193 | create-gke-cluster $clusterName $region $numOfNodes 194 | ;; 195 | *) 196 | incorrect-usage 197 | ;; 198 | esac 199 | ;; 200 | set-context) 201 | case $clusterProvider in 202 | aks) 203 | az aks get-credentials --overwrite-existing --resource-group $AZURE_RESOURCE_GROUP --name $clusterName 204 | ;; 205 | eks) 206 | kubectl config rename-context $AWS_IAM_USER@$clusterName.$region.eksctl.io $clusterName 207 | ;; 208 | gke) 209 | kubectl config rename-context gke_$GCP_PROJECT_ID"_"$region"_"$clusterName $clusterName 210 | ;; 211 | *) 212 | incorrect-usage 213 | ;; 214 | esac 215 | ;; 216 | delete) 217 | case $clusterProvider in 218 | aks) 219 | delete-aks-cluster $clusterName $region 220 | ;; 221 | eks) 222 | delete-eks-cluster $clusterName $region 223 | ;; 224 | gke) 225 | delete-gke-cluster $clusterName $region 226 | ;; 227 | *) 228 | incorrect-usage 229 | ;; 230 | esac 231 | ;; 232 | install-krew) 233 | install-krew 234 | ;; 235 | *) 236 | incorrect-usage 237 | ;; 238 | esac -------------------------------------------------------------------------------- /scripts/tanzu-handler.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | export IMGPKG_REGISTRY_HOSTNAME=$(yq .private_registry.host .config/demo-values.yaml) 4 | export IMGPKG_REGISTRY_USERNAME=$(yq .private_registry.username .config/demo-values.yaml) 5 | export IMGPKG_REGISTRY_PASSWORD=$(yq .private_registry.password .config/demo-values.yaml) 6 | PRIVATE_RGISTRY_REPO=$(yq .private_registry.repo .config/demo-values.yaml) 7 | CARVEL_BUNDLE=$(yq .tap.carvelBundle .config/demo-values.yaml) 8 | TANZU_NETWORK_REGISTRY="registry.tanzu.vmware.com" 9 | TANZU_NETWORK_USER=$(yq .tanzu_network.username .config/demo-values.yaml) 10 | TANZU_NETWORK_PASSWORD=$(yq .tanzu_network.password .config/demo-values.yaml) 11 | TAP_VERSION=$(yq .tap.tapVersion .config/demo-values.yaml) 12 | TDS_VERSION=$(yq .data_services.tdsVersion .config/demo-values.yaml) 13 | GW_INSTALL_DIR=$(yq .brownfield_apis.scgwInstallDirectory .config/demo-values.yaml) 14 | TMC_API_TOKEN_VALUE=$(yq .tmc.apiToken .config/demo-values.yaml) 15 | TMC_CLUSTER_GROUP=$(yq .tmc.clusterGroup .config/demo-values.yaml) 16 | 17 | #relocate-tap-images 18 | relocate-tap-images() { 19 | 20 | scripts/dektecho.sh status "relocating TAP $TAP_VERSION images to $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/tap-packages" 21 | 22 | imgpkg copy \ 23 | --bundle $TANZU_NETWORK_REGISTRY/tanzu-application-platform/tap-packages:$TAP_VERSION \ 24 | --to-tar .config/tap-packages-$TAP_VERSION.tar \ 25 | --include-non-distributable-layers 26 | 27 | imgpkg copy \ 28 | --tar .config/tap-packages-$TAP_VERSION.tar \ 29 | --to-repo $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/tap-packages \ 30 | --include-non-distributable-layers 31 | 32 | rm -f .config/tap-packages-$TAP_VERSION.tar 33 | 34 | } 35 | 36 | #relocate-carvel-bundle 37 | relocate-carvel-bundle() { 38 | 39 | 
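# Illustrative note (not part of the original script): the imgpkg copy below, like the other
# relocation helpers in this file, relies on the yq lookups at the top of the script. Those
# lookups assume .config/demo-values.yaml carries keys shaped roughly like the following
# (values are placeholders; the authoritative template is config-templates/demo-values.yaml):
#
#   private_registry:
#     host: CHANGE_ME            # e.g. a Harbor/ACR/ECR hostname
#     username: CHANGE_ME
#     password: CHANGE_ME
#     repo: CHANGE_ME
#   tanzu_network:
#     username: CHANGE_ME
#     password: CHANGE_ME
#   tap:
#     tapVersion: CHANGE_ME
#     carvelBundle: CHANGE_ME    # digest reference of the cluster-essentials bundle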
scripts/dektecho.sh status "relocating cluster-essentials to $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/cluster-essentials-bundle" 40 | 41 | imgpkg copy \ 42 | --bundle $TANZU_NETWORK_REGISTRY/tanzu-cluster-essentials/cluster-essentials-bundle@$CARVEL_BUNDLE \ 43 | --to-tar .config/carvel-bundle.tar \ 44 | --include-non-distributable-layers 45 | 46 | imgpkg copy \ 47 | --tar .config/carvel-bundle.tar \ 48 | --to-repo $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/cluster-essentials-bundle \ 49 | --include-non-distributable-layers 50 | 51 | rm -f .config/carvel-bundle.tar 52 | } 53 | 54 | #relocate-tbs-images 55 | relocate-tbs-images() { 56 | 57 | scripts/dektecho.sh status "relocating TBS full dependencies for TAP version $TAP_VERSION to $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/full-deps-package-repo" 58 | 59 | imgpkg copy \ 60 | --bundle $TANZU_NETWORK_REGISTRY/tanzu-application-platform/full-deps-package-repo:$TAP_VERSION \ 61 | --to-tar=.config/full-deps-package-repo.tar 62 | 63 | imgpkg copy \ 64 | --tar .config/full-deps-package-repo.tar \ 65 | --to-repo=$IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/full-deps-package-repo 66 | 67 | rm -f .config/full-deps-package-repo.tar 68 | 69 | } 70 | 71 | #relocate-gw-images 72 | relocate-scgw-images() { 73 | 74 | scripts/dektecho.sh status "relocating Spring Cloud Gateway images $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO" 75 | 76 | $GW_INSTALL_DIR/scripts/relocate-images.sh $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO 77 | } 78 | 79 | #relocate-tds-images 80 | relocate-tds-images() { 81 | 82 | scripts/dektecho.sh status "relocating Tanzu Data Services $TDS_VERSION to $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/tds-packages" 83 | 84 | imgpkg copy \ 85 | --bundle $TANZU_NETWORK_REGISTRY/packages-for-vmware-tanzu-data-services/tds-packages:$TDS_VERSION \ 86 | --to-repo $IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/tds-packages 87 | } 88 | #add-carvel 89 | add-carvel () { 90 | 91 | scripts/dektecho.sh status "Add Carvel tools to cluster $(kubectl config current-context)" 92 | 93 | pushd scripts/carvel 94 | 95 | INSTALL_BUNDLE=$IMGPKG_REGISTRY_HOSTNAME/$PRIVATE_RGISTRY_REPO/cluster-essentials-bundle@$CARVEL_BUNDLE \ 96 | INSTALL_REGISTRY_HOSTNAME=$IMGPKG_REGISTRY_HOSTNAME \ 97 | INSTALL_REGISTRY_USERNAME=$IMGPKG_REGISTRY_USERNAME \ 98 | INSTALL_REGISTRY_PASSWORD=$IMGPKG_REGISTRY_PASSWORD \ 99 | ./install.sh --yes 100 | 101 | pushd 102 | } 103 | 104 | 105 | #generate-config-yamls 106 | generate-config-yamls() { 107 | 108 | scripts/dektecho.sh status "Generating demo configuration yamls" 109 | 110 | #tap-profiles 111 | mkdir -p .config/tap-profiles 112 | ytt -f config-templates/tap-profiles/tap-view.yaml --data-values-file=.config/demo-values.yaml > .config/tap-profiles/tap-view.yaml 113 | ytt -f config-templates/tap-profiles/tap-dev.yaml --data-values-file=.config/demo-values.yaml > .config/tap-profiles/tap-dev.yaml 114 | ytt -f config-templates/tap-profiles/tap-stage.yaml --data-values-file=.config/demo-values.yaml > .config/tap-profiles/tap-stage.yaml 115 | ytt -f config-templates/tap-profiles/tap-prod1.yaml --data-values-file=.config/demo-values.yaml > .config/tap-profiles/tap-prod1.yaml 116 | ytt -f config-templates/tap-profiles/tap-prod2.yaml --data-values-file=.config/demo-values.yaml > .config/tap-profiles/tap-prod2.yaml 117 | 118 | #custom-supplychains 119 | mkdir -p .config/custom-supplychains 120 | ytt -f config-templates/custom-supplychains/dekt-medical.yaml --data-values-file=.config/demo-values.yaml > 
.config/custom-supplychains/dekt-medical.yaml 121 | ytt -f config-templates/custom-supplychains/dekt-medical-scan.yaml --data-values-file=.config/demo-values.yaml > .config/custom-supplychains/dekt-medical-scan.yaml 122 | 123 | #secrets 124 | mkdir -p .config/secrets 125 | cp -a config-templates/secrets/ .config/secrets 126 | ytt -f config-templates/secrets/carbonblack-creds.yaml --data-values-file=.config/demo-values.yaml > .config/secrets/carbonblack-creds.yaml 127 | ytt -f config-templates/secrets/snyk-creds.yaml --data-values-file=.config/demo-values.yaml > .config/secrets/snyk-creds.yaml 128 | ytt -f config-templates/secrets/openai-creds.yaml --data-values-file=.config/demo-values.yaml > .config/secrets/openai-creds.yaml 129 | ytt -f config-templates/secrets/git-creds.yaml --data-values-file=.config/demo-values.yaml > .config/secrets/git-creds.yaml 130 | ytt -f config-templates/secrets/ingress-issuer-sys.yaml --data-values-file=.config/demo-values.yaml > .config/secrets/ingress-issuer-sys.yaml 131 | ytt -f config-templates/secrets/ingress-issuer-apps.yaml --data-values-file=.config/demo-values.yaml > .config/secrets/ingress-issuer-apps.yaml 132 | 133 | #crossplane 134 | mkdir -p .config/crossplane 135 | cp -a config-templates/crossplane/ .config/crossplane 136 | ytt -f config-templates/crossplane/gcp/gcp-provider-config.yaml --data-values-file=.config/demo-values.yaml > .config/crossplane/gcp/gcp-provider-config.yaml 137 | 138 | #workloads 139 | mkdir -p .config/workloads 140 | cp -a config-templates/workloads/ .config/workloads/ 141 | } 142 | 143 | #install-tanzu-package 144 | install-tanzu-package() { 145 | 146 | package_full_name=$1 147 | package_display_name=$2 148 | value_file_path=$3 149 | 150 | tanzu package available list -n tap-install $package_full_name -o yaml | sed 's/- / /' > .config/package_info.yaml 151 | package_version=$(yq .version .config/package_info.yaml) 152 | rm .config/package_info.yaml 153 | 154 | scripts/dektecho.sh status "Installing tanzu package $package_display_name with discovered version $package_version" 155 | 156 | if [ "$value_file_path" == "" ] 157 | then 158 | tanzu package install $package_display_name \ 159 | --package $package_full_name \ 160 | --version $package_version \ 161 | --namespace tap-install 162 | else 163 | tanzu package install $package_display_name \ 164 | --package $package_full_name \ 165 | --version $package_version \ 166 | --namespace tap-install \ 167 | --values-file $value_file_path 168 | fi 169 | 170 | } 171 | #attach TMC cluster 172 | attach-tmc-cluster() { 173 | 174 | cluster_name=$1 175 | 176 | scripts/dektecho.sh status "Attaching $cluster_name cluster to TMC" 177 | 178 | export TMC_API_TOKEN=$TMC_API_TOKEN_VALUE 179 | tmc login -n devxdemo-tmc -c 180 | 181 | kubectl config use-context $cluster_name 182 | tmc cluster attach -n $cluster_name -g $TMC_CLUSTER_GROUP 183 | kubectl apply -f k8s-attach-manifest.yaml 184 | rm -f k8s-attach-manifest.yaml 185 | } 186 | 187 | #remove-tmc-cluster 188 | remove-tmc-cluster() { 189 | 190 | cluster_name=$1 191 | 192 | scripts/dektecho.sh status "Removing $cluster_name cluster from TMC" 193 | 194 | export TMC_API_TOKEN=$TMC_API_TOKEN_VALUE 195 | tmc login -n devxdemo-tmc -c 196 | 197 | tmc cluster delete $cluster_name -f -m attached -p attached 198 | 199 | } 200 | 201 | #################### main ####################### 202 | 203 | #incorrect-usage 204 | incorrect-usage() { 205 | scripts/dektecho.sh err "Incorrect usage. 
Use one of the following: " 206 | echo " relocate-tanzu-images tap|tbs|tds|scgw" 207 | echo 208 | echo " add-carvel-tools" 209 | echo 210 | echo " install-tanzu-package package-full-name,package-display-name,(optional)value-file-path" 211 | echo 212 | echo " tmc-cluster attach|remove" 213 | echo 214 | echo " generate-configs" 215 | echo 216 | } 217 | 218 | case $1 in 219 | relocate-tanzu-images) 220 | docker login $IMGPKG_REGISTRY_HOSTNAME -u $IMGPKG_REGISTRY_USERNAME -p $IMGPKG_REGISTRY_PASSWORD 221 | docker login registry.tanzu.vmware.com -u $TANZU_NETWORK_USER -p $TANZU_NETWORK_PASSWORD 222 | case $2 in 223 | tap) 224 | relocate-carvel-bundle 225 | relocate-tap-images 226 | ;; 227 | tbs) 228 | relocate-tbs-images 229 | ;; 230 | tds) 231 | relocate-tds-images 232 | ;; 233 | scgw) 234 | relocate-scgw-images 235 | ;; 236 | *) 237 | incorrect-usage 238 | ;; 239 | esac 240 | ;; 241 | add-carvel-tools) 242 | add-carvel 243 | ;; 244 | install-tanzu-package) 245 | install-tanzu-package $2 $3 $4 246 | ;; 247 | tmc-cluster) 248 | case $2 in 249 | attach) 250 | attach-tmc-cluster $3 251 | ;; 252 | remove) 253 | remove-tmc-cluster $3 254 | ;; 255 | *) 256 | incorrect-usage 257 | ;; 258 | esac 259 | ;; 260 | generate-configs) 261 | generate-config-yamls 262 | ;; 263 | *) 264 | incorrect-usage 265 | ;; 266 | esac -------------------------------------------------------------------------------- /tmp.yaml: -------------------------------------------------------------------------------- 1 | account TMC account management for the organization mission-control v0.3.3 v0.3.3 installed 2 | agentartifacts Image artifacts of all the agents to be made available on the cluster mission-control v0.1.19 v0.1.19 installed 3 | aks-cluster aks-cluster plugin is used to provision and manage the tmc aks clusters. mission-control v0.3.5 v0.3.5 installed 4 | apply Apply a resource file mission-control v0.4.1 v0.4.1 installed 5 | apply Apply a resource file operations v0.1.3 installed 6 | appsv2 Applications on Kubernetes for TAP (SaaS distribution) kubernetes v0.2.0-beta.3 installed 7 | audit Run an audit request on an organization mission-control v0.1.19 v0.1.19 installed 8 | build Generate app image from source code global v0.3.0-beta.16 installed 9 | builder Build Tanzu components global v1.3.0-alpha.3 installed 10 | cluster A TMC managed Kubernetes cluster mission-control v0.2.16 v0.2.16 installed 11 | cluster Plugin for operating clusters in the organization operations v0.2.0 installed 12 | clustergroup A group of Kubernetes clusters mission-control v0.1.19 v0.1.19 installed 13 | clustergroup A group of Kubernetes clusters operations v0.1.6 installed 14 | context Management for TMC contexts mission-control v0.1.14 v0.1.14 installed 15 | continuousdelivery Manage cluster configurations mission-control v0.2.0 v0.2.0 installed 16 | data-protection Backup, restore, or migrate cluster data. mission-control v0.1.19 v0.1.19 installed 17 | ekscluster ekscluster plugin is used to provision and manage the tmc eks clusters. mission-control v0.2.4 v0.2.4 installed 18 | ekscluster ekscluster plugin is used to provision and manage the tmc eks clusters. 
operations v0.1.1 installed 19 | events TMC events for any meaningful user activity or system state change mission-control v0.1.19 v0.1.19 installed 20 | helm Manage helm deployments mission-control v0.1.20 v0.1.20 installed 21 | iam IAM Policies for tmc resources mission-control v0.1.19 v0.1.19 installed 22 | iam IAM Policies for tap resources operations v0.1.6 installed 23 | inspection Run an inspection on a cluster mission-control v0.1.19 v0.1.19 installed 24 | integration Get available integrations and their information from registry. mission-control v0.1.19 v0.1.19 installed 25 | management-cluster A TMC registered Management cluster mission-control v0.2.21 v0.2.21 installed 26 | package tanzu package management kubernetes v0.34.4 installed 27 | policy Policy management for resources mission-control v0.1.19 v0.1.19 installed 28 | policy Policy management for resources operations v0.1.8 installed 29 | project View, list and use Tanzu Projects. kubernetes v0.2.0-beta.1 installed 30 | provider-aks-cluster provider-aks-cluster plugin is used to manage and unmanage tmc provider aks mission-control v0.1.21 v0.1.21 installed 31 | clusters 32 | provider-eks-cluster provider-eks-cluster plugin is used to manage and unmanage tmc provider eks mission-control v0.1.19 v0.1.19 installed 33 | clusters 34 | provider-eks-cluster provider-eks-cluster plugin is used to manage and unmanage tmc provider eks operations v0.1.1 installed 35 | clusters 36 | resource manage resources in a Kubernetes cluster kubernetes v0.0.10 installed 37 | secret Tanzu secret management kubernetes v0.32.1 installed 38 | secret Manage kubernetes secrets resource for a cluster mission-control v0.1.21 v0.1.21 installed 39 | services Commands for working with service instances, classes and claims kubernetes v0.10.0-rc.4 installed 40 | setting Manage default settings for some tmc features mission-control v0.2.17 v0.2.17 installed 41 | space Tanzu space, space profile, and space trait management kubernetes v0.2.0-beta.1 installed 42 | tanzupackage Install and manage Tanzu packages. Note: tanzu-standard package repository is mission-control v0.4.7 v0.4.7 installed 43 | unavailable for non-TKG clusters. 44 | telemetry configure cluster-wide settings for vmware tanzu telemetry global v1.1.0 installed 45 | test Test the CLI global v1.3.0-alpha.3 installed 46 | workload create, update, view and list Tanzu Workloads. kubernetes v0.1.0-beta.3 installed 47 | workspace A group of Kubernetes namespaces mission-control v0.1.21 v0.1.21 installed 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | --------------------------------------------------------------------------------
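Example usage (illustrative only, not part of the repo): once ```.config/demo-values.yaml``` is populated, the handler scripts above can be invoked directly; the cluster name and region below are placeholders.
```
# relocate TAP and Cluster Essentials packages to the private registry
./scripts/tanzu-handler.sh relocate-tanzu-images tap

# create a 3-node EKS cluster and point kubectl at it
./scripts/k8s-handler.sh create eks dekt-dev us-east-1 3
./scripts/k8s-handler.sh set-context eks dekt-dev us-east-1

# add the Carvel tools (kapp-controller, secretgen-controller) to the current cluster
./scripts/tanzu-handler.sh add-carvel-tools

# update the wildcard DNS record for the system sub-domain (AWS uses a CNAME, other clouds an A record)
./scripts/ingress-handler.sh update-tap-dns sys aws
```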