├── LICENSE ├── NOTICE ├── README.md ├── images ├── accesscontrol.png ├── alana.png ├── appcatalog-mysql1.png ├── appcatalog-mysql2.png ├── appcatalog-mysql3.png ├── contentlibrary.png ├── contentlibrary2.png ├── contentlibrary3.png ├── elasticsearch1.png ├── elasticsearch2.png ├── elasticsearch3.png ├── elasticsearch4.png ├── elasticsearch5.png ├── enable.png ├── enable2.png ├── enable3.png ├── enable4.png ├── enable5.png ├── enable6.png ├── enable7.png ├── enablesupervisor.png ├── harbor1.png ├── harbor2.png ├── harbor3.png ├── kibana1.png ├── kibana2.png ├── kibana3.png ├── kibana4.png ├── kibana5.png ├── kubeapps.png ├── kubeapps1.png ├── kubectl2.png ├── kubectl3.png ├── kubectl4.png ├── kubectl5.png ├── kubectl6.png ├── kubectl7.png ├── mysql1.png ├── mysql2.png ├── mysql3.png ├── namespace-access.png ├── namespace.png ├── namespace1.png ├── namespace2.png ├── namespace3.png ├── namespace4.png ├── nativepod.png ├── permission1.png ├── permission2.png ├── resource1.png ├── resource2.png ├── supervisor-cluster1.png ├── supervisor-cluster2.png ├── supervisor1.png ├── tkg.png ├── wordpress.png ├── wordpress1.png ├── wordpress3.png └── workloadcluster1.png ├── labmanual └── Project Pacific - Lab Manual_49b10bbaa9f6404b87bd67c3b553bb89-310320-1021-48.pdf ├── nestedinstall ├── README.md └── vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment │ ├── README.md │ ├── screenshots │ ├── diagram.png │ ├── screenshot-1.png │ ├── screenshot-10.png │ ├── screenshot-11.png │ ├── screenshot-12.png │ ├── screenshot-13.png │ ├── screenshot-14.png │ ├── screenshot-15.png │ ├── screenshot-16.png │ ├── screenshot-17.png │ ├── screenshot-18.png │ ├── screenshot-19.png │ ├── screenshot-2.png │ ├── screenshot-20.png │ ├── screenshot-3.png │ ├── screenshot-5.png │ ├── screenshot-6.png │ ├── screenshot-7.png │ ├── screenshot-8.png │ └── screenshot-9.png │ └── vghetto-vsphere-with-kubernetes-external-nsxt-lab-deployment.ps1 ├── onecloud └── Project Pacific on OneCloud.pdf ├── scripts ├── .DS_Store ├── Allow.yaml ├── CreateCluster-guest.yaml ├── DeployKuard-marketing-uat.yaml ├── DeployKuard.yaml ├── ExposeKuard-marketing-dev.yaml ├── ExposeKuard-marketing-uat.yaml ├── ExposeKuard.yaml ├── PVC.yaml ├── RunAsNoRoot.yaml ├── TestVirtualMachine.yaml ├── all-privileged-access.yaml ├── allow-all-networkpolicy.yaml ├── allow-runasnonroot-clusterrole.yaml ├── allow-runasnonroot-clusterrole.yaml.1 ├── create-service-cluster.yaml ├── create-tkc-cluster.yaml ├── create-workload-cluster.yaml ├── create-worklod-cluster ├── demo-busybox.yaml ├── demo-ghost.yaml ├── demo-harbor-busybox.yaml ├── demo-hellokubernetes.yaml ├── demo-hipstershop.yaml ├── demo-nginx.yaml ├── demo-sockshop.yaml ├── demo-unifi.yaml ├── minio │ ├── README.md │ ├── credentials-velero │ ├── minio-standalone-deployment.yaml │ ├── minio-standalone-pvc.yaml │ └── minio-standalone-service.yaml ├── mysql │ ├── mysql-configmap.yaml │ ├── mysql-services.yaml │ └── mysql-statefulset.yaml └── wordpress │ ├── README.md │ ├── deployment-app-catalog.yaml │ ├── deployment.yaml │ ├── kustomization.yaml │ ├── mysql-deployment.yaml │ └── wordpress-deployment.yaml ├── servicecluster ├── EK │ └── README.md ├── attachclustertotmc │ ├── README.md │ └── tmc-svc-cluster-attach.yaml ├── createservicecluster │ └── README.md ├── integrateapplicationcatalog │ └── README.md ├── logging │ ├── .DS_Store │ └── README.md └── wavefront │ ├── .DS_Store │ └── README.md ├── supervisorcluster ├── accesscluster │ └── README.md ├── accesscontrol │ └── README.md ├── 
enablecluster │ ├── README.md │ └── v7wk8s-enable-cluster.html ├── enableharbor │ └── README.md ├── namespace │ └── README.md └── nativepod │ └── README.md └── workloadcluster ├── attachclustertotmc └── README.md ├── createworkloadcluster ├── .DS_Store └── README.md ├── deployworkloads └── README.md ├── logging ├── .DS_Store └── README.md └── wavefront ├── .DS_Store └── README.md /LICENSE: -------------------------------------------------------------------------------- 1 | Redistribution and use in source and binary forms, with or without 2 | modification, are permitted provided that the following conditions are 3 | met: 4 | 5 | 1. Redistributions of source code must retain the above copyright 6 | notice, this list of conditions and the following disclaimer. 7 | 8 | 2. Redistributions in binary form must reproduce the above 9 | copyright notice, this list of conditions and the following 10 | disclaimer in the documentation and/or other materials provided 11 | with the distribution. 12 | 13 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 14 | "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 15 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 16 | A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 17 | HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 18 | SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 19 | LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 20 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 21 | THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 22 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 23 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | Copyright 2022 VMware, Inc. 2 | 3 | This product is licensed to you under the BSD 2 clause (the "License"). You may not use this product except in compliance with the License. 4 | 5 | This product may include a number of subcomponents with separate copyright notices and license terms. Your use of these subcomponents is subject to the terms and conditions of the subcomponent's license, as noted in the LICENSE file. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes on vSphere Lab 2 | 3 | ![](./images/supervisor-cluster1.png) 4 | 5 | # Business Context: 6 | The customer wants to build blogging software called WordPress. The company wants to make WordPress available to anyone who wants to create a website or blog. In developing WordPress and making it perpetually free, its creators hoped to “democratize publishing” by designing a site building program that could allow anyone to have a voice and presence online. With that idea in mind, the business owners approached the architecture and product teams to start working on it. After careful consideration, the product team decided that they need a Kubernetes environment that dev teams can use for self-service and that the VI Admin can govern. Their dev team should be able to create services such as MySQL and also be able to search their logs using a dashboard.
Their Kubernetes environment also needs monitoring, and the team is looking for the ability to create dashboards that monitor it in real time. With all these requirements, the VI Admin needs to provide all the resources to the product teams so that they can self-service. The team also wants isolation for some of their services. 7 | 8 | ### VI Admin Persona: 9 | 10 | 1. Create a namespace for the WordPress product team. 11 | 2. Add users to the namespace and provide them proper access. 12 | 3. Apply proper storage and quota policies. 13 | 4. Provide Federation/Role bindings for dev teams using TMC. 14 | 5. Set up an Application Catalog for developers to create services. 15 | 6. Deploy Wavefront to monitor the Kubernetes environment. 16 | 7. Deploy Elasticsearch and Kibana for dev teams to search their logs. 17 | 18 | 19 | ### Developer's Persona: 20 | 21 | 1. Create a MySQL service using the Application Catalog. 22 | 2. Create a workload cluster to deploy WordPress. 23 | 3. Deploy services as native pods for the workloads that need isolation. 24 | 4. Create dashboards in Wavefront to monitor their environment. 25 | 5. Search their logs using Kibana dashboards. 26 | 27 | ![](./images/supervisor-cluster2.png) 28 | 29 | 30 | # Steps: 31 | 32 | ### CLIs 33 | > 1. kubectl 34 | > 2. tmc 35 | > 3. helm 3 36 | 37 | ## 1. Installation 38 | 39 | > 1. [Nested Install](./nestedinstall) 40 | > 2. [Request OneCloud setup if needed](./onecloud) 41 | 42 | ## 2. Supervisor Cluster 43 | 44 | > 1. [Enable Supervisor Cluster](./supervisorcluster/enablecluster) 45 | > 2. [Enable Harbor](./supervisorcluster/enableharbor) 46 | > 3. [Create Namespace](./supervisorcluster/namespace) 47 | > 4. [Enable access control](./supervisorcluster/accesscontrol) 48 | > 5. [Access Supervisor Cluster](./supervisorcluster/accesscluster) 49 | > 6. [Deploy Nativepod](./supervisorcluster/nativepod) 50 | 51 | 52 | ## 3. Application Catalog Cluster 53 | 54 | > 1. [Create Service Cluster](./servicecluster/createservicecluster) 55 | > 2. [Attach to TMC](./servicecluster/attachclustertotmc) 56 | > 3. [Integrate Application Catalog](./servicecluster/integrateapplicationcatalog) 57 | > 4. [Install Elasticsearch and Kibana](./servicecluster/EK) 58 | > 5. [Install Wavefront](./servicecluster/wavefront) 59 | 60 | ## 4. Workload Cluster 61 | 62 | > 1. [Create Workload Cluster](./workloadcluster/createworkloadcluster) 63 | > 2. [Attach to TMC](./workloadcluster/attachclustertotmc) 64 | > 3. [Install Fluent Bit](./workloadcluster/logging) 65 | > 4. [Install Wavefront](./workloadcluster/wavefront) 66 | > 5.
[Deploy wordpress](./workloadcluster/deployworkloads) 67 | -------------------------------------------------------------------------------- /images/accesscontrol.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/accesscontrol.png -------------------------------------------------------------------------------- /images/alana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/alana.png -------------------------------------------------------------------------------- /images/appcatalog-mysql1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/appcatalog-mysql1.png -------------------------------------------------------------------------------- /images/appcatalog-mysql2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/appcatalog-mysql2.png -------------------------------------------------------------------------------- /images/appcatalog-mysql3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/appcatalog-mysql3.png -------------------------------------------------------------------------------- /images/contentlibrary.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/contentlibrary.png -------------------------------------------------------------------------------- /images/contentlibrary2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/contentlibrary2.png -------------------------------------------------------------------------------- /images/contentlibrary3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/contentlibrary3.png -------------------------------------------------------------------------------- /images/elasticsearch1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/elasticsearch1.png -------------------------------------------------------------------------------- /images/elasticsearch2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/elasticsearch2.png -------------------------------------------------------------------------------- /images/elasticsearch3.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/elasticsearch3.png -------------------------------------------------------------------------------- /images/elasticsearch4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/elasticsearch4.png -------------------------------------------------------------------------------- /images/elasticsearch5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/elasticsearch5.png -------------------------------------------------------------------------------- /images/enable.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/enable.png -------------------------------------------------------------------------------- /images/enable2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/enable2.png -------------------------------------------------------------------------------- /images/enable3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/enable3.png -------------------------------------------------------------------------------- /images/enable4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/enable4.png -------------------------------------------------------------------------------- /images/enable5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/enable5.png -------------------------------------------------------------------------------- /images/enable6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/enable6.png -------------------------------------------------------------------------------- /images/enable7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/enable7.png -------------------------------------------------------------------------------- /images/enablesupervisor.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/enablesupervisor.png -------------------------------------------------------------------------------- /images/harbor1.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/harbor1.png -------------------------------------------------------------------------------- /images/harbor2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/harbor2.png -------------------------------------------------------------------------------- /images/harbor3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/harbor3.png -------------------------------------------------------------------------------- /images/kibana1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kibana1.png -------------------------------------------------------------------------------- /images/kibana2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kibana2.png -------------------------------------------------------------------------------- /images/kibana3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kibana3.png -------------------------------------------------------------------------------- /images/kibana4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kibana4.png -------------------------------------------------------------------------------- /images/kibana5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kibana5.png -------------------------------------------------------------------------------- /images/kubeapps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kubeapps.png -------------------------------------------------------------------------------- /images/kubeapps1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kubeapps1.png -------------------------------------------------------------------------------- /images/kubectl2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kubectl2.png -------------------------------------------------------------------------------- /images/kubectl3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kubectl3.png 
-------------------------------------------------------------------------------- /images/kubectl4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kubectl4.png -------------------------------------------------------------------------------- /images/kubectl5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kubectl5.png -------------------------------------------------------------------------------- /images/kubectl6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kubectl6.png -------------------------------------------------------------------------------- /images/kubectl7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/kubectl7.png -------------------------------------------------------------------------------- /images/mysql1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/mysql1.png -------------------------------------------------------------------------------- /images/mysql2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/mysql2.png -------------------------------------------------------------------------------- /images/mysql3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/mysql3.png -------------------------------------------------------------------------------- /images/namespace-access.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/namespace-access.png -------------------------------------------------------------------------------- /images/namespace.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/namespace.png -------------------------------------------------------------------------------- /images/namespace1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/namespace1.png -------------------------------------------------------------------------------- /images/namespace2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/namespace2.png -------------------------------------------------------------------------------- /images/namespace3.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/namespace3.png -------------------------------------------------------------------------------- /images/namespace4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/namespace4.png -------------------------------------------------------------------------------- /images/nativepod.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/nativepod.png -------------------------------------------------------------------------------- /images/permission1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/permission1.png -------------------------------------------------------------------------------- /images/permission2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/permission2.png -------------------------------------------------------------------------------- /images/resource1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/resource1.png -------------------------------------------------------------------------------- /images/resource2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/resource2.png -------------------------------------------------------------------------------- /images/supervisor-cluster1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/supervisor-cluster1.png -------------------------------------------------------------------------------- /images/supervisor-cluster2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/supervisor-cluster2.png -------------------------------------------------------------------------------- /images/supervisor1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/supervisor1.png -------------------------------------------------------------------------------- /images/tkg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/tkg.png -------------------------------------------------------------------------------- /images/wordpress.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/wordpress.png -------------------------------------------------------------------------------- /images/wordpress1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/wordpress1.png -------------------------------------------------------------------------------- /images/wordpress3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/wordpress3.png -------------------------------------------------------------------------------- /images/workloadcluster1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/images/workloadcluster1.png -------------------------------------------------------------------------------- /labmanual/Project Pacific - Lab Manual_49b10bbaa9f6404b87bd67c3b553bb89-310320-1021-48.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/labmanual/Project Pacific - Lab Manual_49b10bbaa9f6404b87bd67c3b553bb89-310320-1021-48.pdf -------------------------------------------------------------------------------- /nestedinstall/README.md: -------------------------------------------------------------------------------- 1 | # Nested Install. 2 | -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/README.md: -------------------------------------------------------------------------------- 1 | # vGhetto Automated vSphere with Kubernetes Lab Deployment 2 | 3 | ## Table of Contents 4 | 5 | * [Description](#description) 6 | * [Changelog](#changelog) 7 | * [Requirements](#requirements) 8 | * [FAQ](#faq) 9 | * [Configuration](#configuration) 10 | * [Logging](#logging) 11 | * [Sample Execution](#sample-execution) 12 | * [Lab Deployment Script](#lab-deployment-script) 13 | * [Enable Workload Management](#enable-workload-management) 14 | * [Create Namespace](#create-namespace) 15 | * [Deploy Sample K8s Application](#deploy-sample-k8s-application) 16 | * [Deploy Tanzu Kubernetes Cluster](#deploy-tanzu-kubernetes-cluster) 17 | * [Network Topology](#network-topology) 18 | 19 | ## Description 20 | 21 | Similar to other "vGhetto Lab Deployment Scripts" (such as [here](https://www.virtuallyghetto.com/2016/11/vghetto-automated-vsphere-lab-deployment-for-vsphere-6-0u2-vsphere-6-5.html), [here](https://www.virtuallyghetto.com/2017/10/vghetto-automated-nsx-t-2-0-lab-deployment.html) and [here](https://www.virtuallyghetto.com/2018/06/vghetto-automated-pivotal-container-service-pks-lab-deployment.html)), this script makes it very easy for anyone with VMware Cloud Foundation 4 licensing to deploy vSphere with Kubernetes in a Nested Lab environment for learning and educational purposes. 
All required VMware components (ESXi, vCenter Server, NSX Unified Appliance and Edge) are automatically deployed and configured to allow enablement of vSphere with Kubernetes. For more details about vSphere with Kubernetes, please refer to the official VMware documentation [here](https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-21ABC792-0A23-40EF-8D37-0367B483585E.html). 22 | 23 | Below is a diagram of what is deployed as part of the solution; you simply need to have an existing vSphere environment running that is managed by vCenter Server and has enough resources (CPU, Memory and Storage) to deploy this "Nested" lab. For a complete end-to-end example including workload management enablement (post-deployment operation) and the deployment of a Tanzu Kubernetes Grid (TKG) Cluster, please have a look at the [Sample Execution](#sample-execution) section below. 24 | 25 | You are now ready to get your K8s on! 😁 26 | 27 | ![](screenshots/diagram.png) 28 | 29 | ## Changelog 30 | 31 | * **04/13/2020** 32 | * Initial Release 33 | 34 | ## Requirements 35 | * vCenter Server running at least vSphere 6.7 or later 36 | * If your physical storage is vSAN, please ensure you've applied the following setting as mentioned [here](https://www.virtuallyghetto.com/2013/11/how-to-run-nested-esxi-on-top-of-vsan.html) 37 | * Resource Requirements 38 | * Compute 39 | * Ability to provision VMs with up to 8 vCPU 40 | * Ability to provision up to 116-140 GB of memory 41 | * Network 42 | * Single Standard or Distributed Portgroup (Native VLAN) used to deploy all VMs 43 | * 6 x IP Addresses for VCSA, ESXi, NSX-T UA and Edge VM 44 | * 5 x Consecutive IP Addresses for Kubernetes Control Plane VMs 45 | * 1 x IP Address for T0 Static Route 46 | * 32 x IP Addresses (/27) for Egress CIDR range is the minimum (must not overlap with Ingress CIDR) 47 | * 32 x IP Addresses (/27) for Ingress CIDR range is the minimum (must not overlap with Egress CIDR) 48 | * All IP Addresses should be able to communicate with each other 49 | * Storage 50 | * Ability to provision up to 1TB of storage 51 | 52 | **Note:** For detailed requirements, please refer to the official document [here](https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-B1388E77-2EEC-41E2-8681-5AE549D50C77.html) 53 | 54 | * [VMware Cloud Foundation Licenses](https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-9A190942-BDB1-4A19-BA09-728820A716F2.html) 55 | * Desktop (Windows, Mac or Linux) with latest PowerShell Core and PowerCLI 12.0 Core installed.
See [ instructions here](https://blogs.vmware.com/PowerCLI/2018/03/installing-powercli-10-0-0-macos.html) for more details 56 | * vSphere 7 & NSX-T OVAs: 57 | * [vCenter Server Appliance 7.0 Build 15952498](https://my.vmware.com/web/vmware/details?downloadGroup=VC700&productId=974&rPId=45006) 58 | * [NSX-T Unified Appliance 3.0 OVA - Build 15946738](https://my.vmware.com/group/vmware/details?downloadGroup=NSX-T-300&productId=982&download=true&fileId=35e89f7fce7f0681258656c9b16ebc5d&secureParam=3e43a22a12477bcae462c37d09677dd2&uuId=724599c8-41c9-47ca-8995-eda318eda20e&downloadType=) 59 | * [NSX-T Edge 3.0 OVA - Build 15946738](https://my.vmware.com/group/vmware/details?downloadGroup=NSX-T-300&productId=982&download=true&fileId=7ca3ab9202b39d92b805d5414628215a&secureParam=833ac1e89d29baa2c62c274d5fc072f7&uuId=f79fbe81-d12e-4403-b0c5-e8f76aa5621e&downloadType=) 60 | * [Nested ESXi 7.0 OVA - Build 15344619](https://download3.vmware.com/software/vmw-tools/nested-esxi/Nested_ESXi7.0_Appliance_Template_v1.ova) 61 | 62 | ## FAQ 63 | 64 | 1) What if I do not have a VMware Cloud Foundation 4 License? 65 | * You can purchase a VMUG Advantage membership which gives you access to all the latest VMware solutions including VCF 4.0. There is also a special [VMUG Advantage Homelab Group Buy with an additional 15% discount](https://www.virtuallyghetto.com/2020/04/special-vmug-advantage-homelab-group-buy.html) that you can take advantage of right now! 66 | 67 | 2) Can I reduce the default CPU, Memory and Storage resources? 68 | 69 | * You can, but it is highly recommended to leave the current defaults for the best working experience. For non-vSphere with Kubernetes usage, you can certainly tune down the resources. For vSphere Pod usage, it is possible to deploy the NSX-T Edge with just 4 vCPU, however if you are going to deploy TKG Clusters, you will need 8 vCPUs on the NSX-T Edge for proper functionality. For memory resources, you can reduce the ESXi VM memory to 16GB but if you intend to deploy K8s application/workloads, you will want to keep the default. For NSX-T memory, I have seen cases where system will become unresponsive and although you can probably tune it down a bit more, I would strongly suggest you keep the defaults unless you plan to do exhaustive testing to ensure there is no negative impact. 70 | 71 | 3) Can I just deploy vSphere (VCSA, ESXi) and vSAN without NSX-T and vSphere with Kubernetes? 72 | 73 | * Yes, simply search for the following variables and change their values to `0` to not deploy NSX-T components or run through the configurations 74 | 75 | ``` 76 | $setupPacificStoragePolicy = 0 77 | $deployNSXManager = 0 78 | $deployNSXEdge = 0 79 | $postDeployNSXConfig = 0 80 | $setupPacific = 0 81 | ``` 82 | 83 | 4) Can I just deploy vSphere (VCSA, ESXi), vSAN and NSX-T but not configure it for vSphere with Kubernetes? 84 | 85 | * Yes, but some of the NSX-T automation will contain some configurations related to vSphere with Kubernetes. It does not affect the usage of NSX-T, so you can simply ignore or just delete those settings. Search for the following variables and change their values to `0` to not apply the vSphere with Kubernetes configurations 86 | 87 | ``` 88 | $setupPacific = 0 89 | ``` 90 | 91 | 5) Can the script deploy two NSX-T Edges? 92 | 93 | * Yes, simply append to the configuration to include the additional Edge which will be brought into the Edge Cluster during configuration. 
The script currently does not include the additional code that is required to make use of the 2nd edge. This may be an enhancement in the future, or you can manually configure the required settings. 94 | 95 | 6) How do I enable vSphere with Kubernetes after the script has completed? 96 | 97 | * Please refer to the official VMware documentation [here](https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-287138F0-1FFD-4774-BBB9-A1FAB932D1C4.html) for the instructions 98 | 99 | ## Configuration 100 | 101 | Before you can run the script, you will need to edit the script and update a number of variables to match your deployment environment. Details on each section are described below, including the actual values used in my home lab environment. 102 | 103 | This section describes the credentials for your physical vCenter Server, in which the Project Pacific lab environment will be deployed: 104 | ```console 105 | $VIServer = "mgmt-vcsa-01.cpbu.corp" 106 | $VIUsername = "administrator@vsphere.local" 107 | $VIPassword = "VMware1!" 108 | ``` 109 | 110 | 111 | This section describes the location of the files required for deployment. 112 | 113 | ```console 114 | $NestedESXiApplianceOVA = "C:\Users\william\Desktop\Project-Pacific\Nested_ESXi7.0_Appliance_Template_v1.ova" 115 | $VCSAInstallerPath = "C:\Users\william\Desktop\Project-Pacific\VMware-VCSA-all-7.0.0-15952498" 116 | $NSXTManagerOVA = "C:\Users\william\Desktop\Project-Pacific\nsx-unified-appliance-3.0.0.0.0.15946738.ova" 117 | $NSXTEdgeOVA = "C:\Users\william\Desktop\Project-Pacific\nsx-edge-3.0.0.0.0.15946738.ova" 118 | ``` 119 | **Note:** The path to the VCSA Installer must be the extracted contents of the ISO 120 | 121 | 122 | This section defines the number of Nested ESXi VMs to deploy along with their associated IP Address(es). The names are merely the display name of the VMs when deployed. At a minimum, you should deploy at least three hosts, but you can always add additional hosts and the script will automatically take care of provisioning them correctly. 123 | ```console 124 | $NestedESXiHostnameToIPs = @{ 125 | "pacific-esxi-7" = "172.17.31.113" 126 | "pacific-esxi-8" = "172.17.31.114" 127 | "pacific-esxi-9" = "172.17.31.115" 128 | } 129 | ``` 130 | 131 | This section describes the resources allocated to each of the Nested ESXi VM(s). Depending on your usage, you may need to increase the resources. For Memory and Disk configuration, the unit is in GB. 132 | ```console 133 | $NestedESXivCPU = "4" 134 | $NestedESXivMEM = "24" #GB 135 | $NestedESXiCachingvDisk = "8" #GB 136 | $NestedESXiCapacityvDisk = "100" #GB 137 | ``` 138 | 139 | This section describes the VCSA deployment configuration such as the VCSA deployment size, Networking & SSO configurations. If you have ever used the VCSA CLI Installer, these options should look familiar. 140 | ```console 141 | $VCSADeploymentSize = "tiny" 142 | $VCSADisplayName = "pacific-vcsa-3" 143 | $VCSAIPAddress = "172.17.31.112" 144 | $VCSAHostname = "pacific-vcsa-3.cpbu.corp" #Change to IP if you don't have valid DNS 145 | $VCSAPrefix = "24" 146 | $VCSASSODomainName = "vsphere.local" 147 | $VCSASSOPassword = "VMware1!" 148 | $VCSARootPassword = "VMware1!"
149 | $VCSASSHEnable = "true" 150 | ``` 151 | 152 | This section describes the location as well as the generic networking settings applied to the Nested ESXi, VCSA & NSX VMs 153 | ```console 154 | $VMDatacenter = "San Jose" 155 | $VMCluster = "Cluster-01" 156 | $VMNetwork = "SJC-CORP-MGMT" 157 | $VMDatastore = "vsanDatastore" 158 | $VMNetmask = "255.255.255.0" 159 | $VMGateway = "172.17.31.253" 160 | $VMDNS = "172.17.31.5" 161 | $VMNTP = "pool.ntp.org" 162 | $VMPassword = "VMware1!" 163 | $VMDomain = "cpbu.corp" 164 | $VMSyslog = "172.17.31.112" 165 | $VMFolder = "Project-Pacific" 166 | # Applicable to Nested ESXi only 167 | $VMSSH = "true" 168 | $VMVMFS = "false" 169 | ``` 170 | 171 | This section describes the configuration of the new vCenter Server from the deployed VCSA. **Default values are sufficient.** 172 | ```console 173 | $NewVCDatacenterName = "Pacific-Datacenter" 174 | $NewVCVSANClusterName = "Workload-Cluster" 175 | $NewVCVDSName = "Pacific-VDS" 176 | $NewVCDVPGName = "DVPG-Management Network" 177 | ``` 178 | 179 | This section describes the Project Pacific Configurations. **Default values are sufficient.** 180 | ```console 181 | # Pacific Configuration 182 | $StoragePolicyName = "pacific-gold-storage-policy" 183 | $StoragePolicyTagCategory = "pacific-demo-tag-category" 184 | $StoragePolicyTagName = "pacific-demo-storage" 185 | $DevOpsUsername = "devops" 186 | $DevOpsPassword = "VMware1!" 187 | ``` 188 | 189 | This section describes the NSX-T configurations. The default values are sufficient except for the following variables, which must be defined by users; the rest can be left as defaults: 190 | **$NSXLicenseKey**, **$NSXVTEPNetwork**, **$T0GatewayInterfaceAddress**, **$T0GatewayInterfaceStaticRouteAddress** and the **NSX-T Manager** and **Edge** Sections 191 | ```console 192 | # NSX-T Configuration 193 | $NSXLicenseKey = "NSX-LICENSE-KEY" 194 | $NSXRootPassword = "VMware1!VMware1!" 195 | $NSXAdminUsername = "admin" 196 | $NSXAdminPassword = "VMware1!VMware1!" 197 | $NSXAuditUsername = "audit" 198 | $NSXAuditPassword = "VMware1!VMware1!"
199 | $NSXSSHEnable = "true" 200 | $NSXEnableRootLogin = "true" 201 | $NSXVTEPNetwork = "Pacific-VTEP" # This portgroup needs be created before running script 202 | 203 | # Transport Node Profile 204 | $TransportNodeProfileName = "Pacific-Host-Transport-Node-Profile" 205 | 206 | # Transport Zones 207 | $TunnelEndpointName = "TEP-IP-Pool" 208 | $TunnelEndpointDescription = "Tunnel Endpoint for Transport Nodes" 209 | $TunnelEndpointIPRangeStart = "172.30.1.10" 210 | $TunnelEndpointIPRangeEnd = "172.30.1.20" 211 | $TunnelEndpointCIDR = "172.30.1.0/24" 212 | $TunnelEndpointGateway = "172.30.1.1" 213 | 214 | $OverlayTransportZoneName = "TZ-Overlay" 215 | $OverlayTransportZoneHostSwitchName = "nsxswitch" 216 | $VlanTransportZoneName = "TZ-VLAN" 217 | $VlanTransportZoneNameHostSwitchName = "edgeswitch" 218 | 219 | # Network Segment 220 | $NetworkSegmentName = "Pacific-Segment" 221 | $NetworkSegmentVlan = "0" 222 | 223 | # T0 Gateway 224 | $T0GatewayName = "Pacific-T0-Gateway" 225 | $T0GatewayInterfaceAddress = "172.17.31.119" # should be a routable address 226 | $T0GatewayInterfacePrefix = "24" 227 | $T0GatewayInterfaceStaticRouteName = "Pacific-Static-Route" 228 | $T0GatewayInterfaceStaticRouteNetwork = "0.0.0.0/0" 229 | $T0GatewayInterfaceStaticRouteAddress = "172.17.31.253" 230 | 231 | # Uplink Profiles 232 | $ESXiUplinkProfileName = "ESXi-Host-Uplink-Profile" 233 | $ESXiUplinkProfilePolicy = "FAILOVER_ORDER" 234 | $ESXiUplinkName = "uplink1" 235 | 236 | $EdgeUplinkProfileName = "Edge-Uplink-Profile" 237 | $EdgeUplinkProfilePolicy = "FAILOVER_ORDER" 238 | $EdgeOverlayUplinkName = "uplink1" 239 | $EdgeOverlayUplinkProfileActivepNIC = "fp-eth1" 240 | $EdgeUplinkName = "tep-uplink" 241 | $EdgeUplinkProfileActivepNIC = "fp-eth2" 242 | $EdgeUplinkProfileTransportVLAN = "0" 243 | $EdgeUplinkProfileMTU = "1600" 244 | 245 | # Edge Cluster 246 | $EdgeClusterName = "Edge-Cluster-01" 247 | 248 | # NSX-T Manager Configurations 249 | $NSXTMgrDeploymentSize = "small" 250 | $NSXTMgrvCPU = "6" #override default size 251 | $NSXTMgrvMEM = "24" #override default size 252 | $NSXTMgrDisplayName = "pacific-nsx-3" 253 | $NSXTMgrHostname = "pacific-nsx-3.cpbu.corp" 254 | $NSXTMgrIPAddress = "172.17.31.118" 255 | 256 | # NSX-T Edge Configuration 257 | $NSXTEdgeDeploymentSize = "medium" 258 | $NSXTEdgevCPU = "8" #override default size 259 | $NSXTEdgevMEM = "32" #override default size 260 | $NSXTEdgeHostnameToIPs = @{ 261 | "pacific-nsx-edge-3a" = "172.17.31.116" 262 | } 263 | ``` 264 | 265 | Once you have saved your changes, you can now run the PowerCLI script as you normally would. 266 | 267 | ## Logging 268 | 269 | There is additional verbose logging that outputs as a log file in your current working directory **pacific-nsxt-external-vghetto-lab-deployment.log** 270 | 271 | ## Sample Execution 272 | 273 | In this example below, I will be using a single /24 native VLAN (172.17.31.0/24) which all the VMs provisioned by the automation script will be connected to. It is expected that you will have a similar configuration which is the most basic configuration for POC and testing purposes. 
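A quick sizing note (added here; not part of the original write-up): the /27 minimums called out in the Requirements section come from simple subnet arithmetic, which you can use to sanity-check the sample Ingress and Egress ranges in the table below before kicking off the deployment.

```
/27 prefix: 32 - 27 = 5 host bits, so 2^5 = 32 addresses per CIDR range
```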
274 | 275 | | Hostname | IP Address | Function | 276 | |----------------------------|--------------------------------|------------------------------| 277 | | pacific-vcsa-3.cpbu.corp | 172.17.31.112 | vCenter Server | 278 | | pacific-esxi-7.cpbu.corp | 172.17.31.113 | ESXi | 279 | | pacific-esxi-8.cpbu.corp | 172.17.31.114 | ESXi | 280 | | pacific-esxi-9.cpbu.corp | 172.17.31.115 | ESXi | 281 | | pacific-nsx-edge.cpbu.corp | 172.17.31.116 | NSX-T Edge | 282 | | pacific-nsx-ua.cpbu.corp | 172.17.31.118 | NSX-T Unified Appliance | 283 | | n/a | 172.17.31.119 | T0 Static Route Address | 284 | | n/a | 172.17.31.120 to 172.17.31.125 | K8s Master Control Plane VMs | 285 | | n/a | 172.17.31.140/27 | Ingress CIDR Range | 286 | | n/a | 172.17.31.160/27 | Egress CIDR Range | 287 | 288 | **Note:** Make sure the Ingress/Egress CIDR ranges do NOT overlap and the IP Addresses within those blocks are not being used. This is important, as the Egress CIDR will consume at least 15 IP Addresses for the SNAT of each namespace within the Supervisor Cluster. 289 | 290 | ### Lab Deployment Script 291 | 292 | Here is a screenshot of running the script if all basic pre-reqs have been met and the confirmation message before starting the deployment: 293 | 294 | ![](screenshots/screenshot-1.png) 295 | 296 | Here is an example output of a complete deployment: 297 | 298 | ![](screenshots/screenshot-2.png) 299 | 300 | **Note:** Deployment time will vary based on underlying physical infrastructure resources. In my lab, this took ~40min to complete. 301 | 302 | Once completed, you will end up with your deployed vSphere with Kubernetes Lab, which is placed into a vApp 303 | 304 | ![](screenshots/screenshot-3.png) 305 | 306 | ### Enable Workload Management 307 | 308 | To consume the vSphere with Kubernetes capability in vSphere 7, you must enable workload management on a specific vSphere Cluster, which is currently not part of the automation script. The instructions below outline the steps and configuration values used in my example. For more details, please refer to the official VMware documentation [here](https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-21ABC792-0A23-40EF-8D37-0367B483585E.html). 309 | 310 | Step 1 - Log in to the vSphere UI, click on `Menu->Workload Management` and click on the `Enable` button 311 | 312 | ![](screenshots/screenshot-5.png) 313 | 314 | Step 2 - Select the `Workload Cluster` vSphere Cluster, which should automatically show up in the Compatible list. If it does not, then it means something has gone wrong with either the selected configuration or there was an error during deployment that you may have missed. 315 | 316 | ![](screenshots/screenshot-6.png) 317 | 318 | Step 3 - Select the Kubernetes Control Plane Size, for which you can use `Tiny` 319 | 320 | ![](screenshots/screenshot-7.png) 321 | 322 | Step 4 - Configure the Management Network by selecting the `DVPG-Management-Network` distributed portgroup which is automatically created for you as part of the automation. Fill out the rest of the network configuration based on your environment 323 | 324 | ![](screenshots/screenshot-8.png) 325 | 326 | Step 5 - Configure the Workload Network by selecting the `Pacific-VDS` distributed virtual switch which is automatically created for you as part of the automation. After selecting a valid VDS, the Edge Cluster option should automatically populate with our NSX-T Edge Cluster called `Edge-Cluster-01`.
Next, fill in your DNS server along with both the Ingress and Egress CIDR values (a /27 network is required minimally, or you can go larger) 327 | 328 | ![](screenshots/screenshot-9.png) 329 | 330 | Step 6 - Configure the Storage policies by selecting the `pacific-gold-storage-policy` VM Storage Policy, which is automatically created for you as part of the automation, or any other VM Storage Policy you wish to use. 331 | 332 | Step 7 - Finally, review the workload management configuration and click `Finish` to begin the deployment. 333 | 334 | ![](screenshots/screenshot-11.png) 335 | 336 | This will take some time depending on your environment, and you will see various errors on the screen; that is expected. In my example, it took ~26 minutes to complete. You will know it is completely done when you refresh the workload management UI and see a `Running` status along with an accessible Control Plane Node IP Address; in my case it is `172.17.31.129` 337 | 338 | ![](screenshots/screenshot-12.png) 339 | 340 | **Note:** In the future, I may look into automating this portion of the configuration to further accelerate the deployment. For now, it is recommended to get familiar with the concepts of vSphere with Kubernetes by going through the workflow manually so you understand what is happening. 341 | 342 | ### Create Namespace 343 | 344 | Before we can deploy a workload into the Supervisor Cluster using vSphere Pods, we first need to create a vSphere Namespace and assign a user and a VM Storage Policy. 345 | 346 | Step 1 - Under the `Namespaces` tab within the workload management UI, select the Supervisor Cluster (aka the vSphere Cluster enabled with workload management) and provide a name. 347 | 348 | ![](screenshots/screenshot-13.png) 349 | 350 | Step 2 - Click on `Add Permissions` to assign both `administrator@vsphere.local` and `devops@vsphere.local` (which was automatically created by the automation), or any other valid vSphere user who should be able to deploy workloads, and click on `Edit Storage` to assign the VM Storage Policy `pacific-gold-storage-policy` or any other valid VM Storage Policy. 351 | 352 | ![](screenshots/screenshot-14.png) 353 | 354 | Step 3 - Finally, click on the `Open` URL under the Namespace Status tile to download kubectl and the vSphere plugin and extract it onto your desktop. 355 | 356 | ![](screenshots/screenshot-15.png) 357 | 358 | ### Deploy Sample K8s Application 359 | 360 | Step 1 - Log in to the Control Plane IP Address: 361 | 362 | ```console 363 | ./kubectl vsphere login --server=172.17.31.129 -u administrator@vsphere.local --insecure-skip-tls-verify 364 | ``` 365 | 366 | Step 2 - Change context into our `yelb` namespace: 367 | 368 | ```console 369 | ./kubectl config use-context yelb 370 | 371 | Switched to context "yelb".
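# (Aside, not part of the original lab guide: you can list the contexts the
# vSphere plugin created at any time; the vSphere Namespace created earlier,
# "yelb" in this walkthrough, shows up as one of them.)
./kubectl config get-contexts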
372 | ``` 373 | 374 | Step 3 - Create a file called `enable-all-policy.yaml` with the following content: 375 | 376 | ```console 377 | apiVersion: networking.k8s.io/v1 378 | kind: NetworkPolicy 379 | metadata: 380 | name: allow-all 381 | spec: 382 | podSelector: {} 383 | ingress: 384 | - {} 385 | egress: 386 | - {} 387 | policyTypes: 388 | - Ingress 389 | - Egress 390 | ``` 391 | 392 | Apply the policy by running the following: 393 | 394 | ```console 395 | ./kubectl apply -f enable-all-policy.yaml 396 | 397 | networkpolicy.networking.k8s.io/allow-all created 398 | ``` 399 | 400 | Step 4 - Deploy our K8s Application called `Yelb`: 401 | 402 | ```console 403 | ./kubectl apply -f https://raw.githubusercontent.com/lamw/vmware-k8s-app-demo/master/yelb-lb.yaml 404 | 405 | service/redis-server created 406 | service/yelb-db created 407 | service/yelb-appserver created 408 | service/yelb-ui created 409 | deployment.apps/yelb-ui created 410 | deployment.apps/redis-server created 411 | deployment.apps/yelb-db created 412 | deployment.apps/yelb-appserver created 413 | ``` 414 | 415 | Step 5 - Access the Yelb UI by retrieving the External Load Balancer IP Address provisioned by NSX-T and then open a web browser to that IP Address 416 | 417 | ```console 418 | ./kubectl get service 419 | 420 | NAME             TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE 421 | redis-server     ClusterIP      10.96.0.69                    6379/TCP       43s 422 | yelb-appserver   ClusterIP      10.96.0.48                    4567/TCP       42s 423 | yelb-db          ClusterIP      10.96.0.181                   5432/TCP       43s 424 | yelb-ui          LoadBalancer   10.96.0.75    172.17.31.130   80:31924/TCP   42s 425 | ``` 426 | 427 | ![](screenshots/screenshot-17.png) 428 | 429 | ### Deploy Tanzu Kubernetes Cluster 430 | 431 | Step 1 - Create a new subscribed vSphere Content Library pointing to `https://wp-content.vmware.com/v2/latest/lib.json`, which contains the VMware Tanzu Kubernetes Grid (TKG) Images that must be sync'ed before you can deploy a TKG Cluster. 432 | 433 | ![](screenshots/screenshot-16.png) 434 | 435 | Step 2 - Navigate to the `Workload-Cluster` and under `Namespaces->General` click on `Add Library` to associate the vSphere Content Library we had just created in the previous step.
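As an optional check that is not in the original walkthrough: once the subscribed Content Library has finished syncing and has been added to the namespace, the TKG node images should be visible from the Supervisor Cluster context as `VirtualMachineImage` objects. The command below assumes you are still logged in with the kubectl vSphere plugin as in the previous section.

```console
./kubectl get virtualmachineimages
```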
436 | 437 | ![](screenshots/screenshot-18.png) 438 | 439 | Step 3 - Create a file called `tkg-cluster.yaml` with the following content: 440 | 441 | ```console 442 | apiVersion: run.tanzu.vmware.com/v1alpha1 443 | kind: TanzuKubernetesCluster 444 | metadata: 445 | name: tkg-cluster-1 446 | namespace: yelb 447 | spec: 448 | distribution: 449 | version: v1.16.8 450 | topology: 451 | controlPlane: 452 | class: best-effort-xsmall 453 | count: 1 454 | storageClass: pacific-gold-storage-policy 455 | workers: 456 | class: best-effort-xsmall 457 | count: 3 458 | storageClass: pacific-gold-storage-policy 459 | settings: 460 | network: 461 | cni: 462 | name: calico 463 | services: 464 | cidrBlocks: ["198.51.100.0/12"] 465 | pods: 466 | cidrBlocks: ["192.0.2.0/16"] 467 | ``` 468 | 469 | Step 4 - Create the TKG Cluster by running the following: 470 | 471 | ```console 472 | ./kubectl apply -f tkg-cluster.yaml 473 | 474 | tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1 created 475 | ``` 476 | 477 | Step 5 - Log in to the TKG Cluster by running the following: 478 | 479 | ```console 480 | ./kubectl vsphere login --server=172.17.31.129 -u administrator@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name tkg-cluster-1 --tanzu-kubernetes-cluster-namespace yelb 481 | ``` 482 | 483 | Step 6 - Verify the TKG Cluster is ready before use by running the following command: 484 | 485 | ```console 486 | ./kubectl get machine 487 | 488 | NAME                                           PROVIDERID                                       PHASE 489 | tkg-cluster-1-control-plane-2lnfb              vsphere://421465e7-bded-c92d-43ba-55e0a862b828   running 490 | tkg-cluster-1-workers-p98cj-644dd658fd-4vtjj   vsphere://4214d30f-5fd8-eae5-7b1e-f28b8576f38e   provisioned 491 | tkg-cluster-1-workers-p98cj-644dd658fd-bjmj5   vsphere://42141954-ecaf-dc15-544e-a7ef2b30b7e9   provisioned 492 | tkg-cluster-1-workers-p98cj-644dd658fd-g6zxh   vsphere://4214d101-4ed0-97d3-aebc-0d0c3a7843cb   provisioned 493 | ``` 494 | ![](screenshots/screenshot-19.png) 495 | 496 | Step 7 - Change context into `tkg-cluster-1` and you are now ready to deploy K8s apps into a TKG Cluster provisioned by vSphere with Kubernetes! 497 | 498 | ```console 499 | ./kubectl config use-context tkg-cluster-1 500 | ``` 501 | 502 | ### Network Topology 503 | 504 | Here is a view into what the networking looks like (Network Topology tab in the NSX-T UI) once this is fully configured and workloads are deployed. You can see where the T0 Static Route Address is being used to connect both vSphere Pods (icons on the left) and Tanzu Kubernetes Grid (TKG) Clusters (icons on the right).
505 | 506 | ![](screenshots/screenshot-20.png) 507 | 508 | -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/diagram.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/diagram.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-1.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-10.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-11.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-12.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-13.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-14.png 
-------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-15.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-15.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-16.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-17.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-17.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-18.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-18.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-19.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-19.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-2.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-20.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-20.png 
-------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-3.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-5.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-6.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-7.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-8.png -------------------------------------------------------------------------------- /nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/nestedinstall/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment/screenshots/screenshot-9.png -------------------------------------------------------------------------------- /onecloud/Project Pacific on OneCloud.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/onecloud/Project Pacific on OneCloud.pdf -------------------------------------------------------------------------------- /scripts/.DS_Store: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/scripts/.DS_Store -------------------------------------------------------------------------------- /scripts/Allow.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: allow-all 5 | spec: 6 | podSelector: {} 7 | ingress: 8 | - {} 9 | egress: 10 | - {} 11 | policyTypes: 12 | - Ingress 13 | - Egress 14 | -------------------------------------------------------------------------------- /scripts/CreateCluster-guest.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: gcm.vmware.com/v1alpha1 2 | kind: ManagedCluster 3 | metadata: 4 | name: uat-cluster 5 | namespace: projets 6 | spec: 7 | topology: 8 | controlPlane: 9 | count: 1 10 | class: guaranteed-xsmall 11 | storageClass: pacificproject-storage-policy 12 | workers: 13 | count: 2 14 | class: guaranteed-xsmall 15 | storageClass: pacificproject-storage-policy 16 | distribution: 17 | version: v1.15.5+vmware.1-tkg.1.85123d8 18 | settings: 19 | #imageName: ob-15344507-photon-3-k8s-v1.15.5+vmware.1.66-guest.1.37 20 | network: 21 | cni: 22 | name: calico 23 | services: 24 | cidrBlocks: ["10.43.0.0/16"] 25 | pods: 26 | cidrBlocks: ["10.44.0.0/16"] 27 | -------------------------------------------------------------------------------- /scripts/DeployKuard-marketing-uat.yaml: -------------------------------------------------------------------------------- 1 | #kubectl create deployment kuard-semgr --image=10.96.64.66/pcf-ns/kuard 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: kuard-marketing-uat 6 | labels: 7 | app: kuard 8 | deployment: kuard-marketing-uat 9 | spec: 10 | replicas: 1 11 | selector: 12 | matchLabels: 13 | app: kuard 14 | template: 15 | metadata: 16 | labels: 17 | app: kuard 18 | deployment: kuard-marketing-uat 19 | spec: 20 | containers: 21 | - name: kuard-marketing-uat 22 | image: gcr.io/kuar-demo/kuard-amd64:blue 23 | ports: 24 | - containerPort: 8080 25 | -------------------------------------------------------------------------------- /scripts/DeployKuard.yaml: -------------------------------------------------------------------------------- 1 | #kubectl create deployment kuard-semgr --image=10.96.64.66/pcf-ns/kuard 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: kuard-marketing 6 | labels: 7 | app: kuard 8 | deployment : kuard-marketing 9 | spec: 10 | replicas: 1 11 | selector: 12 | matchLabels: 13 | app: kuard 14 | template: 15 | metadata: 16 | labels: 17 | app: kuard 18 | deployment : kuard-marketing 19 | spec: 20 | containers: 21 | - name: kuard-marketing 22 | image: gcr.io/kuar-demo/kuard-amd64:blue 23 | ports: 24 | - containerPort: 8080 25 | -------------------------------------------------------------------------------- /scripts/ExposeKuard-marketing-dev.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: kuard-marketing-dev 5 | spec: 6 | selector: 7 | app: kuard 8 | deployment: kuard-marketing-dev 9 | type: LoadBalancer 10 | ports: 11 | - port: 8080 12 | targetPort: 8080 13 | -------------------------------------------------------------------------------- /scripts/ExposeKuard-marketing-uat.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | 
metadata: 4 | name: kuard-marketing-uat 5 | spec: 6 | selector: 7 | app: kuard 8 | deployment: kuard-marketing-uat 9 | type: LoadBalancer 10 | ports: 11 | - port: 8080 12 | targetPort: 8080 13 | -------------------------------------------------------------------------------- /scripts/ExposeKuard.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: kuard-marketing 5 | spec: 6 | selector: 7 | app: kuard 8 | deployment: kuard-marketing 9 | type: LoadBalancer 10 | ports: 11 | - port: 8080 12 | targetPort: 8080 13 | -------------------------------------------------------------------------------- /scripts/PVC.yaml: -------------------------------------------------------------------------------- 1 | 2 | apiVersion: v1 3 | kind: PersistentVolumeClaim 4 | metadata: 5 | name: pvc-content 6 | labels: 7 | Pour-Application: kuard 8 | spec: 9 | accessModes: 10 | - ReadWriteOnce 11 | resources: 12 | requests: 13 | storage: 1Gi 14 | storageClassName: pacificproject-storage-policy 15 | -------------------------------------------------------------------------------- /scripts/RunAsNoRoot.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: psp:privileged 5 | rules: 6 | - apiGroups: ['policy'] 7 | resources: ['podsecuritypolicies'] 8 | verbs: ['use'] 9 | resourceNames: 10 | - vmware-system-privileged 11 | --- 12 | apiVersion: rbac.authorization.k8s.io/v1 13 | kind: ClusterRoleBinding 14 | metadata: 15 | name: all:psp:privileged 16 | roleRef: 17 | kind: ClusterRole 18 | name: psp:privileged 19 | apiGroup: rbac.authorization.k8s.io 20 | subjects: 21 | - kind: Group 22 | name: system:serviceaccounts 23 | apiGroup: rbac.authorization.k8s.io 24 | 25 | -------------------------------------------------------------------------------- /scripts/TestVirtualMachine.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: vmoperator.vmware.com/v1alpha1 2 | kind: VirtualMachine 3 | metadata: 4 | name: i-am-a-vm 5 | spec: 6 | imageName: ob-15604142-photon-3-k8s-v1.15.5+vmware.1-tkg.1.85123d8 7 | className: guaranteed-xsmall 8 | powerState: poweredOn 9 | storageClass: pacificproject-storage-policy 10 | -------------------------------------------------------------------------------- /scripts/all-privileged-access.yaml: -------------------------------------------------------------------------------- 1 | kind: RoleBinding 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: rolebinding-default-privileged-sa-ns_default 5 | namespace: default 6 | roleRef: 7 | kind: ClusterRole 8 | name: psp:vmware-system-privileged 9 | apiGroup: rbac.authorization.k8s.io 10 | subjects: 11 | - kind: Group 12 | apiGroup: rbac.authorization.k8s.io 13 | name: system:serviceaccounts 14 | -------------------------------------------------------------------------------- /scripts/allow-all-networkpolicy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: allow-all 5 | spec: 6 | podSelector: {} 7 | ingress: 8 | - {} 9 | egress: 10 | - {} 11 | policyTypes: 12 | - Ingress 13 | - Egress -------------------------------------------------------------------------------- /scripts/allow-runasnonroot-clusterrole.yaml: 
-------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: psp:privileged 5 | rules: 6 | - apiGroups: ['policy'] 7 | resources: ['podsecuritypolicies'] 8 | verbs: ['use'] 9 | resourceNames: 10 | - vmware-system-privileged 11 | --- 12 | apiVersion: rbac.authorization.k8s.io/v1 13 | kind: ClusterRoleBinding 14 | metadata: 15 | name: all:psp:privileged 16 | roleRef: 17 | kind: ClusterRole 18 | name: psp:privileged 19 | apiGroup: rbac.authorization.k8s.io 20 | subjects: 21 | - kind: Group 22 | name: system:serviceaccounts 23 | apiGroup: rbac.authorization.k8s.io 24 | -------------------------------------------------------------------------------- /scripts/create-service-cluster.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: gcm.vmware.com/v1alpha1 2 | kind: ManagedCluster 3 | metadata: 4 | name: application-catalog 5 | namespace: vsphere-k8-hol 6 | spec: 7 | topology: 8 | controlPlane: 9 | count: 1 10 | class: guaranteed-medium # vmclass to be used for master(s) 11 | storageClass: pacific-gold-storage-policy 12 | workers: 13 | count: 2 14 | class: guaranteed-medium # vmclass to be used for workers(s) 15 | storageClass: pacific-gold-storage-policy 16 | distribution: 17 | version: v1.15.5+vmware.1.66-guest.1.37 18 | settings: 19 | network: 20 | cni: 21 | name: calico 22 | services: 23 | cidrBlocks: ["198.51.100.0/12"] 24 | pods: 25 | cidrBlocks: ["192.0.2.0/16"] 26 | -------------------------------------------------------------------------------- /scripts/create-tkc-cluster.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: run.tanzu.vmware.com/v1alpha1 #tkg api endpoint 2 | kind: TanzuKubernetesCluster #required parameter 3 | metadata: 4 | name: david-tkc-1 #cluster name, user defined 5 | namespace: david-ns-01 #supervisor namespace 6 | spec: 7 | distribution: 8 | version: v1.15 #resolved kubernetes version 9 | topology: 10 | controlPlane: 11 | count: 3 #number of master nodes 12 | class: guaranteed-small #vmclass for master nodes 13 | storageClass: pacific-workload-vms-1 #storageclass for master nodes 14 | workers: 15 | count: 3 #number of worker nodes 16 | class: guaranteed-small #vmclass for worker nodes 17 | storageClass: pacific-workload-vms-1 #storageclass for worker nodes -------------------------------------------------------------------------------- /scripts/create-workload-cluster.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: gcm.vmware.com/v1alpha1 2 | kind: ManagedCluster 3 | metadata: 4 | name: wordpress 5 | namespace: vsphere-k8-hol 6 | spec: 7 | topology: 8 | controlPlane: 9 | count: 1 10 | class: guaranteed-xsmall # vmclass to be used for master(s) 11 | storageClass: pacific-gold-storage-policy 12 | workers: 13 | count: 2 14 | class: guaranteed-xsmall # vmclass to be used for workers(s) 15 | storageClass: pacific-gold-storage-policy 16 | distribution: 17 | version: v1.15.5+vmware.1.66-guest.1.37 18 | settings: 19 | network: 20 | cni: 21 | name: calico 22 | services: 23 | cidrBlocks: ["198.51.100.0/12"] 24 | pods: 25 | cidrBlocks: ["192.0.2.0/16"] 26 | -------------------------------------------------------------------------------- /scripts/create-worklod-cluster: -------------------------------------------------------------------------------- 1 | apiVersion: gcm.vmware.com/v1alpha1 2 | kind: ManagedCluster 
3 | metadata: 4 | name: my-guestcluster 5 | namespace: my-namespace 6 | spec: 7 | topology: 8 | controlPlane: 9 | count: 1 10 | class: guaranteed-xsmall # vmclass to be used for master(s) 11 | storageClass: pacific-gold-storage-policy 12 | workers: 13 | count: 2 14 | class: guaranteed-xsmall # vmclass to be used for workers(s) 15 | storageClass: pacific-gold-storage-policy 16 | distribution: 17 | version: v1.15.5+vmware.1.66-guest.1.37 18 | settings: 19 | network: 20 | cni: 21 | name: calico 22 | services: 23 | cidrBlocks: ["198.51.100.0/12"] 24 | pods: 25 | cidrBlocks: ["192.0.2.0/16"] 26 | -------------------------------------------------------------------------------- /scripts/demo-busybox.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: busybox 5 | spec: 6 | containers: 7 | - name: busybox 8 | image: busybox:latest 9 | command: 10 | - sleep 11 | - "3600" 12 | imagePullPolicy: IfNotPresent 13 | restartPolicy: Always -------------------------------------------------------------------------------- /scripts/demo-ghost.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: ghost 5 | spec: 6 | type: LoadBalancer 7 | ports: 8 | - port: 80 9 | targetPort: 2368 10 | selector: 11 | app: ghost 12 | --- 13 | apiVersion: apps/v1beta1 14 | kind: Deployment 15 | metadata: 16 | name: ghost 17 | spec: 18 | replicas: 1 19 | selector: 20 | matchLabels: 21 | app: ghost 22 | template: 23 | metadata: 24 | labels: 25 | app: ghost 26 | spec: 27 | containers: 28 | - name: ghost 29 | image: ghost:latest 30 | imagePullPolicy: IfNotPresent 31 | ports: 32 | - containerPort: 2368 33 | env: 34 | - name: url 35 | value: http://ghost.vmware.demo 36 | volumeMounts: 37 | - mountPath: /var/lib/ghost/content 38 | name: content 39 | volumes: 40 | - name: content 41 | persistentVolumeClaim: 42 | claimName: ghost-content 43 | --- 44 | apiVersion: v1 45 | kind: PersistentVolumeClaim 46 | metadata: 47 | name: ghost-content 48 | spec: 49 | storageClassName: pacific-gold-storage-policy 50 | accessModes: 51 | - ReadWriteOnce 52 | resources: 53 | requests: 54 | storage: 2Gi 55 | -------------------------------------------------------------------------------- /scripts/demo-harbor-busybox.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: busybox 5 | namespace: my-namespace 6 | spec: 7 | containers: 8 | - name: busybox 9 | image: 10.197.62.66/my-namespace/busybox:latest 10 | command: 11 | - sleep 12 | - "3600" 13 | imagePullPolicy: IfNotPresent 14 | restartPolicy: Always 15 | -------------------------------------------------------------------------------- /scripts/demo-hellokubernetes.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: hello-kubernetes 5 | spec: 6 | type: LoadBalancer 7 | ports: 8 | - port: 80 9 | targetPort: 8080 10 | selector: 11 | app: hello-kubernetes 12 | --- 13 | apiVersion: apps/v1 14 | kind: Deployment 15 | metadata: 16 | name: hello-kubernetes 17 | spec: 18 | replicas: 3 19 | selector: 20 | matchLabels: 21 | app: hello-kubernetes 22 | template: 23 | metadata: 24 | labels: 25 | app: hello-kubernetes 26 | spec: 27 | containers: 28 | - name: hello-kubernetes 29 | image: paulbouwer/hello-kubernetes:1.5 30 | imagePullPolicy: IfNotPresent 31 | 
ports: 32 | - containerPort: 8080 -------------------------------------------------------------------------------- /scripts/demo-hipstershop.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2018 Google LLC 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # ---------------------------------------------------------- 16 | # WARNING: This file is autogenerated. Do not manually edit. 17 | # ---------------------------------------------------------- 18 | 19 | apiVersion: apps/v1 20 | kind: Deployment 21 | metadata: 22 | name: emailservice 23 | spec: 24 | selector: 25 | matchLabels: 26 | app: emailservice 27 | template: 28 | metadata: 29 | labels: 30 | app: emailservice 31 | spec: 32 | terminationGracePeriodSeconds: 5 33 | containers: 34 | - name: server 35 | image: gcr.io/google-samples/microservices-demo/emailservice:v0.1.2 36 | ports: 37 | - containerPort: 8080 38 | env: 39 | - name: PORT 40 | value: "8080" 41 | env: 42 | - name: ENABLE_PROFILER 43 | value: "0" 44 | resources: 45 | requests: 46 | cpu: 100m 47 | memory: 64Mi 48 | limits: 49 | cpu: 200m 50 | memory: 128Mi 51 | --- 52 | apiVersion: v1 53 | kind: Service 54 | metadata: 55 | name: emailservice 56 | spec: 57 | type: ClusterIP 58 | selector: 59 | app: emailservice 60 | ports: 61 | - name: grpc 62 | port: 5000 63 | targetPort: 8080 64 | --- 65 | apiVersion: apps/v1 66 | kind: Deployment 67 | metadata: 68 | name: checkoutservice 69 | spec: 70 | selector: 71 | matchLabels: 72 | app: checkoutservice 73 | template: 74 | metadata: 75 | labels: 76 | app: checkoutservice 77 | spec: 78 | containers: 79 | - name: server 80 | image: gcr.io/google-samples/microservices-demo/checkoutservice:v0.1.2 81 | ports: 82 | - containerPort: 5050 83 | env: 84 | - name: PORT 85 | value: "5050" 86 | - name: PRODUCT_CATALOG_SERVICE_ADDR 87 | value: "productcatalogservice:3550" 88 | - name: SHIPPING_SERVICE_ADDR 89 | value: "shippingservice:50051" 90 | - name: PAYMENT_SERVICE_ADDR 91 | value: "paymentservice:50051" 92 | - name: EMAIL_SERVICE_ADDR 93 | value: "emailservice:5000" 94 | - name: CURRENCY_SERVICE_ADDR 95 | value: "currencyservice:7000" 96 | - name: CART_SERVICE_ADDR 97 | value: "cartservice:7070" 98 | # - name: JAEGER_SERVICE_ADDR 99 | # value: "jaeger-collector:14268" 100 | resources: 101 | requests: 102 | cpu: 100m 103 | memory: 64Mi 104 | limits: 105 | cpu: 200m 106 | memory: 128Mi 107 | --- 108 | apiVersion: v1 109 | kind: Service 110 | metadata: 111 | name: checkoutservice 112 | spec: 113 | type: ClusterIP 114 | selector: 115 | app: checkoutservice 116 | ports: 117 | - name: grpc 118 | port: 5050 119 | targetPort: 5050 120 | --- 121 | apiVersion: apps/v1 122 | kind: Deployment 123 | metadata: 124 | name: recommendationservice 125 | spec: 126 | selector: 127 | matchLabels: 128 | app: recommendationservice 129 | template: 130 | metadata: 131 | labels: 132 | app: recommendationservice 133 | spec: 134 | terminationGracePeriodSeconds: 5 135 | 
containers: 136 | - name: server 137 | image: gcr.io/google-samples/microservices-demo/recommendationservice:v0.1.2 138 | ports: 139 | - containerPort: 8080 140 | env: 141 | - name: PORT 142 | value: "8080" 143 | - name: PRODUCT_CATALOG_SERVICE_ADDR 144 | value: "productcatalogservice:3550" 145 | - name: ENABLE_PROFILER 146 | value: "0" 147 | resources: 148 | requests: 149 | cpu: 100m 150 | memory: 220Mi 151 | limits: 152 | cpu: 200m 153 | memory: 450Mi 154 | --- 155 | apiVersion: v1 156 | kind: Service 157 | metadata: 158 | name: recommendationservice 159 | spec: 160 | type: ClusterIP 161 | selector: 162 | app: recommendationservice 163 | ports: 164 | - name: grpc 165 | port: 8080 166 | targetPort: 8080 167 | --- 168 | apiVersion: apps/v1 169 | kind: Deployment 170 | metadata: 171 | name: frontend 172 | spec: 173 | selector: 174 | matchLabels: 175 | app: frontend 176 | template: 177 | metadata: 178 | labels: 179 | app: frontend 180 | spec: 181 | containers: 182 | - name: server 183 | image: gcr.io/google-samples/microservices-demo/frontend:v0.1.2 184 | ports: 185 | - containerPort: 8080 186 | env: 187 | - name: PORT 188 | value: "8080" 189 | - name: PRODUCT_CATALOG_SERVICE_ADDR 190 | value: "productcatalogservice:3550" 191 | - name: CURRENCY_SERVICE_ADDR 192 | value: "currencyservice:7000" 193 | - name: CART_SERVICE_ADDR 194 | value: "cartservice:7070" 195 | - name: RECOMMENDATION_SERVICE_ADDR 196 | value: "recommendationservice:8080" 197 | - name: SHIPPING_SERVICE_ADDR 198 | value: "shippingservice:50051" 199 | - name: CHECKOUT_SERVICE_ADDR 200 | value: "checkoutservice:5050" 201 | - name: AD_SERVICE_ADDR 202 | value: "adservice:9555" 203 | # - name: JAEGER_SERVICE_ADDR 204 | # value: "jaeger-collector:14268" 205 | resources: 206 | requests: 207 | cpu: 100m 208 | memory: 64Mi 209 | limits: 210 | cpu: 200m 211 | memory: 128Mi 212 | --- 213 | apiVersion: v1 214 | kind: Service 215 | metadata: 216 | name: frontend 217 | spec: 218 | type: ClusterIP 219 | selector: 220 | app: frontend 221 | ports: 222 | - name: http 223 | port: 80 224 | targetPort: 8080 225 | --- 226 | apiVersion: v1 227 | kind: Service 228 | metadata: 229 | name: frontend-external 230 | spec: 231 | type: LoadBalancer 232 | selector: 233 | app: frontend 234 | ports: 235 | - name: http 236 | port: 80 237 | targetPort: 8080 238 | --- 239 | apiVersion: apps/v1 240 | kind: Deployment 241 | metadata: 242 | name: paymentservice 243 | spec: 244 | selector: 245 | matchLabels: 246 | app: paymentservice 247 | template: 248 | metadata: 249 | labels: 250 | app: paymentservice 251 | spec: 252 | terminationGracePeriodSeconds: 5 253 | containers: 254 | - name: server 255 | image: gcr.io/google-samples/microservices-demo/paymentservice:v0.1.2 256 | ports: 257 | - containerPort: 50051 258 | env: 259 | - name: PORT 260 | value: "50051" 261 | resources: 262 | requests: 263 | cpu: 100m 264 | memory: 64Mi 265 | limits: 266 | cpu: 200m 267 | memory: 128Mi 268 | --- 269 | apiVersion: v1 270 | kind: Service 271 | metadata: 272 | name: paymentservice 273 | spec: 274 | type: ClusterIP 275 | selector: 276 | app: paymentservice 277 | ports: 278 | - name: grpc 279 | port: 50051 280 | targetPort: 50051 281 | --- 282 | apiVersion: apps/v1 283 | kind: Deployment 284 | metadata: 285 | name: productcatalogservice 286 | spec: 287 | selector: 288 | matchLabels: 289 | app: productcatalogservice 290 | template: 291 | metadata: 292 | labels: 293 | app: productcatalogservice 294 | spec: 295 | terminationGracePeriodSeconds: 5 296 | containers: 297 | - name: server 
298 | image: gcr.io/google-samples/microservices-demo/productcatalogservice:v0.1.2 299 | ports: 300 | - containerPort: 3550 301 | env: 302 | - name: PORT 303 | value: "3550" 304 | # env: 305 | # - name: JAEGER_SERVICE_ADDR 306 | # value: "jaeger-collector:14268" 307 | resources: 308 | requests: 309 | cpu: 100m 310 | memory: 64Mi 311 | limits: 312 | cpu: 200m 313 | memory: 128Mi 314 | --- 315 | apiVersion: v1 316 | kind: Service 317 | metadata: 318 | name: productcatalogservice 319 | spec: 320 | type: ClusterIP 321 | selector: 322 | app: productcatalogservice 323 | ports: 324 | - name: grpc 325 | port: 3550 326 | targetPort: 3550 327 | --- 328 | apiVersion: apps/v1 329 | kind: Deployment 330 | metadata: 331 | name: cartservice 332 | spec: 333 | selector: 334 | matchLabels: 335 | app: cartservice 336 | template: 337 | metadata: 338 | labels: 339 | app: cartservice 340 | spec: 341 | terminationGracePeriodSeconds: 5 342 | containers: 343 | - name: server 344 | image: gcr.io/google-samples/microservices-demo/cartservice:v0.1.2 345 | ports: 346 | - containerPort: 7070 347 | env: 348 | - name: REDIS_ADDR 349 | value: "redis-cart:6379" 350 | - name: PORT 351 | value: "7070" 352 | - name: LISTEN_ADDR 353 | value: "0.0.0.0" 354 | resources: 355 | requests: 356 | cpu: 200m 357 | memory: 64Mi 358 | limits: 359 | cpu: 300m 360 | memory: 128Mi 361 | --- 362 | apiVersion: v1 363 | kind: Service 364 | metadata: 365 | name: cartservice 366 | spec: 367 | type: ClusterIP 368 | selector: 369 | app: cartservice 370 | ports: 371 | - name: grpc 372 | port: 7070 373 | targetPort: 7070 374 | --- 375 | apiVersion: apps/v1 376 | kind: Deployment 377 | metadata: 378 | name: loadgenerator 379 | spec: 380 | selector: 381 | matchLabels: 382 | app: loadgenerator 383 | replicas: 1 384 | template: 385 | metadata: 386 | labels: 387 | app: loadgenerator 388 | spec: 389 | terminationGracePeriodSeconds: 5 390 | restartPolicy: Always 391 | initContainers: 392 | - name: wait-frontend 393 | image: alpine:3.6 394 | command: ['sh', '-c', 'set -x; apk add --no-cache curl && 395 | until timeout -t 2 curl -f "http://${FRONTEND_ADDR}"; do 396 | echo "waiting for http://${FRONTEND_ADDR}"; 397 | sleep 2; 398 | done;'] 399 | env: 400 | - name: FRONTEND_ADDR 401 | value: "frontend:80" 402 | containers: 403 | - name: main 404 | image: gcr.io/google-samples/microservices-demo/loadgenerator:v0.1.2 405 | env: 406 | - name: FRONTEND_ADDR 407 | value: "frontend:80" 408 | - name: USERS 409 | value: "10" 410 | resources: 411 | requests: 412 | cpu: 300m 413 | memory: 256Mi 414 | limits: 415 | cpu: 500m 416 | memory: 512Mi 417 | --- 418 | apiVersion: apps/v1 419 | kind: Deployment 420 | metadata: 421 | name: currencyservice 422 | spec: 423 | selector: 424 | matchLabels: 425 | app: currencyservice 426 | template: 427 | metadata: 428 | labels: 429 | app: currencyservice 430 | spec: 431 | terminationGracePeriodSeconds: 5 432 | containers: 433 | - name: server 434 | image: gcr.io/google-samples/microservices-demo/currencyservice:v0.1.2 435 | ports: 436 | - name: grpc 437 | containerPort: 7000 438 | env: 439 | - name: PORT 440 | value: "7000" 441 | resources: 442 | requests: 443 | cpu: 100m 444 | memory: 64Mi 445 | limits: 446 | cpu: 200m 447 | memory: 128Mi 448 | --- 449 | apiVersion: v1 450 | kind: Service 451 | metadata: 452 | name: currencyservice 453 | spec: 454 | type: ClusterIP 455 | selector: 456 | app: currencyservice 457 | ports: 458 | - name: grpc 459 | port: 7000 460 | targetPort: 7000 461 | --- 462 | apiVersion: apps/v1 463 | kind: 
Deployment 464 | metadata: 465 | name: shippingservice 466 | spec: 467 | selector: 468 | matchLabels: 469 | app: shippingservice 470 | template: 471 | metadata: 472 | labels: 473 | app: shippingservice 474 | spec: 475 | containers: 476 | - name: server 477 | image: gcr.io/google-samples/microservices-demo/shippingservice:v0.1.2 478 | ports: 479 | - containerPort: 50051 480 | env: 481 | - name: PORT 482 | value: "50051" 483 | # env: 484 | # - name: JAEGER_SERVICE_ADDR 485 | # value: "jaeger-collector:14268" 486 | resources: 487 | requests: 488 | cpu: 100m 489 | memory: 64Mi 490 | limits: 491 | cpu: 200m 492 | memory: 128Mi 493 | --- 494 | apiVersion: v1 495 | kind: Service 496 | metadata: 497 | name: shippingservice 498 | spec: 499 | type: ClusterIP 500 | selector: 501 | app: shippingservice 502 | ports: 503 | - name: grpc 504 | port: 50051 505 | targetPort: 50051 506 | --- 507 | apiVersion: apps/v1 508 | kind: Deployment 509 | metadata: 510 | name: redis-cart 511 | spec: 512 | selector: 513 | matchLabels: 514 | app: redis-cart 515 | template: 516 | metadata: 517 | labels: 518 | app: redis-cart 519 | spec: 520 | containers: 521 | - name: redis 522 | image: redis:alpine 523 | ports: 524 | - containerPort: 6379 525 | volumeMounts: 526 | - mountPath: /data 527 | name: redis-data 528 | resources: 529 | limits: 530 | memory: 256Mi 531 | cpu: 125m 532 | requests: 533 | cpu: 70m 534 | memory: 200Mi 535 | volumes: 536 | - name: redis-data 537 | emptyDir: {} 538 | --- 539 | apiVersion: v1 540 | kind: Service 541 | metadata: 542 | name: redis-cart 543 | spec: 544 | type: ClusterIP 545 | selector: 546 | app: redis-cart 547 | ports: 548 | - name: redis 549 | port: 6379 550 | targetPort: 6379 551 | --- 552 | apiVersion: apps/v1 553 | kind: Deployment 554 | metadata: 555 | name: adservice 556 | spec: 557 | selector: 558 | matchLabels: 559 | app: adservice 560 | template: 561 | metadata: 562 | labels: 563 | app: adservice 564 | spec: 565 | terminationGracePeriodSeconds: 5 566 | containers: 567 | - name: server 568 | image: gcr.io/google-samples/microservices-demo/adservice:v0.1.2 569 | ports: 570 | - containerPort: 9555 571 | env: 572 | - name: PORT 573 | value: "9555" 574 | #- name: JAEGER_SERVICE_ADDR 575 | # value: "jaeger-collector:14268" 576 | resources: 577 | requests: 578 | cpu: 200m 579 | memory: 180Mi 580 | limits: 581 | cpu: 300m 582 | memory: 300Mi 583 | --- 584 | apiVersion: v1 585 | kind: Service 586 | metadata: 587 | name: adservice 588 | spec: 589 | type: ClusterIP 590 | selector: 591 | app: adservice 592 | ports: 593 | - name: grpc 594 | port: 9555 595 | targetPort: 9555 596 | --- -------------------------------------------------------------------------------- /scripts/demo-nginx.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | labels: 5 | name: nginx 6 | name: nginx 7 | spec: 8 | ports: 9 | - port: 80 10 | selector: 11 | app: nginx 12 | type: LoadBalancer 13 | --- 14 | apiVersion: extensions/v1beta1 15 | kind: Deployment 16 | metadata: 17 | name: nginx 18 | spec: 19 | replicas: 3 20 | template: 21 | metadata: 22 | labels: 23 | app: nginx 24 | spec: 25 | containers: 26 | - name: nginx 27 | image: nginx 28 | imagePullPolicy: IfNotPresent 29 | ports: 30 | - containerPort: 80 31 | -------------------------------------------------------------------------------- /scripts/demo-sockshop.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: 
extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: carts-db 5 | labels: 6 | name: carts-db 7 | namespace: default 8 | spec: 9 | replicas: 1 10 | template: 11 | metadata: 12 | labels: 13 | name: carts-db 14 | spec: 15 | containers: 16 | - name: carts-db 17 | image: mongo 18 | ports: 19 | - name: mongo 20 | containerPort: 27017 21 | volumeMounts: 22 | - mountPath: /tmp 23 | name: tmp-volume 24 | volumes: 25 | - name: tmp-volume 26 | emptyDir: 27 | medium: Memory 28 | --- 29 | apiVersion: v1 30 | kind: Service 31 | metadata: 32 | name: carts-db 33 | labels: 34 | name: carts-db 35 | namespace: default 36 | spec: 37 | ports: 38 | # the port that this service should serve on 39 | - port: 27017 40 | targetPort: 27017 41 | selector: 42 | name: carts-db 43 | --- 44 | apiVersion: extensions/v1beta1 45 | kind: Deployment 46 | metadata: 47 | name: carts 48 | labels: 49 | name: carts 50 | namespace: default 51 | spec: 52 | replicas: 1 53 | template: 54 | metadata: 55 | labels: 56 | name: carts 57 | spec: 58 | containers: 59 | - name: carts 60 | image: weaveworksdemos/carts:0.4.8 61 | ports: 62 | - containerPort: 80 63 | env: 64 | - name: ZIPKIN 65 | value: zipkin.jaeger.svc.cluster.local 66 | - name: JAVA_OPTS 67 | value: -Xms64m -Xmx128m -XX:PermSize=32m -XX:MaxPermSize=64m -XX:+UseG1GC -Djava.security.egd=file:/dev/urandom 68 | volumeMounts: 69 | - mountPath: /tmp 70 | name: tmp-volume 71 | volumes: 72 | - name: tmp-volume 73 | emptyDir: 74 | medium: Memory 75 | --- 76 | apiVersion: v1 77 | kind: Service 78 | metadata: 79 | name: carts 80 | labels: 81 | name: carts 82 | namespace: default 83 | spec: 84 | ports: 85 | # the port that this service should serve on 86 | - port: 80 87 | targetPort: 80 88 | selector: 89 | name: carts 90 | --- 91 | apiVersion: extensions/v1beta1 92 | kind: Deployment 93 | metadata: 94 | name: catalogue-db 95 | labels: 96 | name: catalogue-db 97 | namespace: default 98 | spec: 99 | replicas: 1 100 | template: 101 | metadata: 102 | labels: 103 | name: catalogue-db 104 | spec: 105 | containers: 106 | - name: catalogue-db 107 | image: weaveworksdemos/catalogue-db:0.3.0 108 | env: 109 | - name: MYSQL_ROOT_PASSWORD 110 | value: fake_password 111 | - name: MYSQL_DATABASE 112 | value: socksdb 113 | ports: 114 | - name: mysql 115 | containerPort: 3306 116 | --- 117 | apiVersion: v1 118 | kind: Service 119 | metadata: 120 | name: catalogue-db 121 | labels: 122 | name: catalogue-db 123 | namespace: default 124 | spec: 125 | ports: 126 | # the port that this service should serve on 127 | - port: 3306 128 | targetPort: 3306 129 | selector: 130 | name: catalogue-db 131 | --- 132 | apiVersion: extensions/v1beta1 133 | kind: Deployment 134 | metadata: 135 | name: catalogue 136 | labels: 137 | name: catalogue 138 | namespace: default 139 | spec: 140 | replicas: 1 141 | template: 142 | metadata: 143 | labels: 144 | name: catalogue 145 | spec: 146 | containers: 147 | - name: catalogue 148 | image: weaveworksdemos/catalogue:0.3.5 149 | ports: 150 | - containerPort: 80 151 | --- 152 | apiVersion: v1 153 | kind: Service 154 | metadata: 155 | name: catalogue 156 | labels: 157 | name: catalogue 158 | namespace: default 159 | spec: 160 | ports: 161 | # the port that this service should serve on 162 | - port: 80 163 | targetPort: 80 164 | selector: 165 | name: catalogue 166 | --- 167 | apiVersion: extensions/v1beta1 168 | kind: Deployment 169 | metadata: 170 | name: front-end 171 | namespace: default 172 | spec: 173 | replicas: 1 174 | template: 175 | metadata: 176 | labels: 177 | name: 
front-end 178 | spec: 179 | containers: 180 | - name: front-end 181 | image: weaveworksdemos/front-end:0.3.12 182 | resources: 183 | requests: 184 | cpu: 100m 185 | memory: 100Mi 186 | ports: 187 | - containerPort: 8079 188 | --- 189 | apiVersion: v1 190 | kind: Service 191 | metadata: 192 | name: front-end 193 | labels: 194 | name: front-end 195 | namespace: default 196 | spec: 197 | type: LoadBalancer 198 | ports: 199 | - port: 80 200 | targetPort: 8079 201 | selector: 202 | name: front-end 203 | --- 204 | apiVersion: extensions/v1beta1 205 | kind: Deployment 206 | metadata: 207 | name: orders-db 208 | labels: 209 | name: orders-db 210 | namespace: default 211 | spec: 212 | replicas: 1 213 | template: 214 | metadata: 215 | labels: 216 | name: orders-db 217 | spec: 218 | containers: 219 | - name: orders-db 220 | image: mongo 221 | ports: 222 | - name: mongo 223 | containerPort: 27017 224 | volumeMounts: 225 | - mountPath: /tmp 226 | name: tmp-volume 227 | volumes: 228 | - name: tmp-volume 229 | emptyDir: 230 | medium: Memory 231 | --- 232 | apiVersion: v1 233 | kind: Service 234 | metadata: 235 | name: orders-db 236 | labels: 237 | name: orders-db 238 | namespace: default 239 | spec: 240 | ports: 241 | # the port that this service should serve on 242 | - port: 27017 243 | targetPort: 27017 244 | selector: 245 | name: orders-db 246 | --- 247 | apiVersion: extensions/v1beta1 248 | kind: Deployment 249 | metadata: 250 | name: orders 251 | labels: 252 | name: orders 253 | namespace: default 254 | spec: 255 | replicas: 1 256 | template: 257 | metadata: 258 | labels: 259 | name: orders 260 | spec: 261 | containers: 262 | - name: orders 263 | image: weaveworksdemos/orders:0.4.7 264 | env: 265 | - name: ZIPKIN 266 | value: zipkin.jaeger.svc.cluster.local 267 | - name: JAVA_OPTS 268 | value: -Xms64m -Xmx128m -XX:PermSize=32m -XX:MaxPermSize=64m -XX:+UseG1GC -Djava.security.egd=file:/dev/urandom 269 | ports: 270 | - containerPort: 80 271 | volumeMounts: 272 | - mountPath: /tmp 273 | name: tmp-volume 274 | volumes: 275 | - name: tmp-volume 276 | emptyDir: 277 | medium: Memory 278 | --- 279 | apiVersion: v1 280 | kind: Service 281 | metadata: 282 | name: orders 283 | labels: 284 | name: orders 285 | namespace: default 286 | spec: 287 | ports: 288 | # the port that this service should serve on 289 | - port: 80 290 | targetPort: 80 291 | selector: 292 | name: orders 293 | --- 294 | apiVersion: extensions/v1beta1 295 | kind: Deployment 296 | metadata: 297 | name: payment 298 | labels: 299 | name: payment 300 | namespace: default 301 | spec: 302 | replicas: 1 303 | template: 304 | metadata: 305 | labels: 306 | name: payment 307 | spec: 308 | containers: 309 | - name: payment 310 | image: weaveworksdemos/payment:0.4.3 311 | ports: 312 | - containerPort: 80 313 | --- 314 | apiVersion: v1 315 | kind: Service 316 | metadata: 317 | name: payment 318 | labels: 319 | name: payment 320 | namespace: default 321 | spec: 322 | ports: 323 | # the port that this service should serve on 324 | - port: 80 325 | targetPort: 80 326 | selector: 327 | name: payment 328 | --- 329 | apiVersion: extensions/v1beta1 330 | kind: Deployment 331 | metadata: 332 | name: queue-master 333 | labels: 334 | name: queue-master 335 | namespace: default 336 | spec: 337 | replicas: 1 338 | template: 339 | metadata: 340 | labels: 341 | name: queue-master 342 | spec: 343 | containers: 344 | - name: queue-master 345 | image: weaveworksdemos/queue-master:0.3.1 346 | ports: 347 | - containerPort: 80 348 | --- 349 | apiVersion: v1 350 | kind: 
Service 351 | metadata: 352 | name: queue-master 353 | labels: 354 | name: queue-master 355 | annotations: 356 | prometheus.io/path: "/prometheus" 357 | namespace: default 358 | spec: 359 | ports: 360 | # the port that this service should serve on 361 | - port: 80 362 | targetPort: 80 363 | selector: 364 | name: queue-master 365 | --- 366 | apiVersion: extensions/v1beta1 367 | kind: Deployment 368 | metadata: 369 | name: rabbitmq 370 | labels: 371 | name: rabbitmq 372 | namespace: default 373 | spec: 374 | replicas: 1 375 | template: 376 | metadata: 377 | labels: 378 | name: rabbitmq 379 | spec: 380 | containers: 381 | - name: rabbitmq 382 | image: rabbitmq:3.6.8 383 | ports: 384 | - containerPort: 5672 385 | --- 386 | apiVersion: v1 387 | kind: Service 388 | metadata: 389 | name: rabbitmq 390 | labels: 391 | name: rabbitmq 392 | namespace: default 393 | spec: 394 | ports: 395 | # the port that this service should serve on 396 | - port: 5672 397 | targetPort: 5672 398 | selector: 399 | name: rabbitmq 400 | --- 401 | apiVersion: extensions/v1beta1 402 | kind: Deployment 403 | metadata: 404 | name: shipping 405 | labels: 406 | name: shipping 407 | namespace: default 408 | spec: 409 | replicas: 1 410 | template: 411 | metadata: 412 | labels: 413 | name: shipping 414 | spec: 415 | containers: 416 | - name: shipping 417 | image: weaveworksdemos/shipping:0.4.8 418 | env: 419 | - name: ZIPKIN 420 | value: zipkin.jaeger.svc.cluster.local 421 | - name: JAVA_OPTS 422 | value: -Xms64m -Xmx128m -XX:PermSize=32m -XX:MaxPermSize=64m -XX:+UseG1GC -Djava.security.egd=file:/dev/urandom 423 | ports: 424 | - containerPort: 80 425 | volumeMounts: 426 | - mountPath: /tmp 427 | name: tmp-volume 428 | volumes: 429 | - name: tmp-volume 430 | emptyDir: 431 | medium: Memory 432 | --- 433 | apiVersion: v1 434 | kind: Service 435 | metadata: 436 | name: shipping 437 | labels: 438 | name: shipping 439 | namespace: default 440 | spec: 441 | ports: 442 | # the port that this service should serve on 443 | - port: 80 444 | targetPort: 80 445 | selector: 446 | name: shipping 447 | --- 448 | apiVersion: extensions/v1beta1 449 | kind: Deployment 450 | metadata: 451 | name: user-db 452 | labels: 453 | name: user-db 454 | namespace: default 455 | spec: 456 | replicas: 1 457 | template: 458 | metadata: 459 | labels: 460 | name: user-db 461 | spec: 462 | containers: 463 | - name: user-db 464 | image: weaveworksdemos/user-db:0.4.0 465 | ports: 466 | - name: mongo 467 | containerPort: 27017 468 | volumeMounts: 469 | - mountPath: /tmp 470 | name: tmp-volume 471 | volumes: 472 | - name: tmp-volume 473 | emptyDir: 474 | medium: Memory 475 | --- 476 | apiVersion: v1 477 | kind: Service 478 | metadata: 479 | name: user-db 480 | labels: 481 | name: user-db 482 | namespace: default 483 | spec: 484 | ports: 485 | # the port that this service should serve on 486 | - port: 27017 487 | targetPort: 27017 488 | selector: 489 | name: user-db 490 | --- 491 | apiVersion: extensions/v1beta1 492 | kind: Deployment 493 | metadata: 494 | name: user 495 | labels: 496 | name: user 497 | namespace: default 498 | spec: 499 | replicas: 1 500 | template: 501 | metadata: 502 | labels: 503 | name: user 504 | spec: 505 | containers: 506 | - name: user 507 | image: weaveworksdemos/user:0.4.7 508 | ports: 509 | - containerPort: 80 510 | env: 511 | - name: MONGO_HOST 512 | value: user-db:27017 513 | --- 514 | apiVersion: v1 515 | kind: Service 516 | metadata: 517 | name: user 518 | labels: 519 | name: user 520 | namespace: default 521 | spec: 522 | ports: 523 | 
# the port that this service should serve on 524 | - port: 80 525 | targetPort: 80 526 | selector: 527 | name: user 528 | -------------------------------------------------------------------------------- /scripts/demo-unifi.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: unifi 5 | spec: 6 | type: LoadBalancer 7 | ports: 8 | - port: 8443 9 | targetPort: 8443 10 | selector: 11 | app: unifi 12 | --- 13 | apiVersion: apps/v1 14 | kind: Deployment 15 | metadata: 16 | name: unifi 17 | spec: 18 | replicas: 1 19 | selector: 20 | matchLabels: 21 | app: unifi 22 | template: 23 | metadata: 24 | labels: 25 | app: unifi 26 | spec: 27 | containers: 28 | - name: unifi 29 | image: linuxserver/unifi-controller 30 | ports: 31 | - containerPort: 8443 -------------------------------------------------------------------------------- /scripts/minio/README.md: -------------------------------------------------------------------------------- 1 | # minio deployment -------------------------------------------------------------------------------- /scripts/minio/credentials-velero: -------------------------------------------------------------------------------- 1 | [default] 2 | aws_access_key_id = minio 3 | aws_secret_access_key = minio123 4 | -------------------------------------------------------------------------------- /scripts/minio/minio-standalone-deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | # This name uniquely identifies the Deployment 5 | name: minio 6 | spec: 7 | selector: 8 | matchLabels: 9 | app: minio # has to match .spec.template.metadata.labels 10 | strategy: 11 | # Specifies the strategy used to replace old Pods by new ones 12 | # Refer: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy 13 | type: Recreate 14 | template: 15 | metadata: 16 | labels: 17 | # This label is used as a selector in Service definition 18 | app: minio 19 | spec: 20 | # Volumes used by this deployment 21 | volumes: 22 | - name: data 23 | # This volume is based on PVC 24 | persistentVolumeClaim: 25 | # Name of the PVC created earlier 26 | claimName: minio-pv-claim 27 | containers: 28 | - name: minio 29 | # Volume mounts for this container 30 | volumeMounts: 31 | # Volume 'data' is mounted to path '/data' 32 | - name: data 33 | mountPath: "/data" 34 | # Pulls the lastest Minio image from Docker Hub 35 | image: minio/minio:RELEASE.2019-10-12T01-39-57Z 36 | args: 37 | - server 38 | - /data 39 | env: 40 | # MinIO access key and secret key 41 | - name: MINIO_ACCESS_KEY 42 | value: "minio" 43 | - name: MINIO_SECRET_KEY 44 | value: "minio123" 45 | ports: 46 | - containerPort: 9000 47 | # Readiness probe detects situations when MinIO server instance 48 | # is not ready to accept traffic. Kubernetes doesn't forward 49 | # traffic to the pod while readiness checks fail. 50 | readinessProbe: 51 | httpGet: 52 | path: /minio/health/ready 53 | port: 9000 54 | initialDelaySeconds: 120 55 | periodSeconds: 20 56 | # Liveness probe detects situations where MinIO server instance 57 | # is not working properly and needs restart. Kubernetes automatically 58 | # restarts the pods if liveness checks fail. 
59 | livenessProbe: 60 | httpGet: 61 | path: /minio/health/live 62 | port: 9000 63 | initialDelaySeconds: 120 64 | periodSeconds: 20 65 | -------------------------------------------------------------------------------- /scripts/minio/minio-standalone-pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | # This name uniquely identifies the PVC. This is used in deployment. 5 | name: minio-pv-claim 6 | spec: 7 | # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes 8 | storageClassName: project-pacific-storage 9 | accessModes: 10 | # The volume is mounted as read-write by a single node 11 | - ReadWriteOnce 12 | resources: 13 | # This is the request for storage. Should be available in the cluster. 14 | requests: 15 | storage: 10Gi 16 | -------------------------------------------------------------------------------- /scripts/minio/minio-standalone-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | # This name uniquely identifies the service 5 | name: minio-service 6 | spec: 7 | type: LoadBalancer 8 | ports: 9 | - port: 9000 10 | targetPort: 9000 11 | protocol: TCP 12 | selector: 13 | # Looks for labels `app:minio` in the namespace and applies the spec 14 | app: minio 15 | -------------------------------------------------------------------------------- /scripts/mysql/mysql-configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: mysql 5 | labels: 6 | app: mysql 7 | data: 8 | master.cnf: | 9 | # Apply this config only on the master. 10 | [mysqld] 11 | log-bin 12 | slave.cnf: | 13 | # Apply this config only on slaves. 14 | [mysqld] 15 | super-read-only 16 | 17 | -------------------------------------------------------------------------------- /scripts/mysql/mysql-services.yaml: -------------------------------------------------------------------------------- 1 | # Headless service for stable DNS entries of StatefulSet members. 2 | apiVersion: v1 3 | kind: Service 4 | metadata: 5 | name: mysql 6 | labels: 7 | app: mysql 8 | spec: 9 | ports: 10 | - name: mysql 11 | port: 3306 12 | clusterIP: None 13 | selector: 14 | app: mysql 15 | --- 16 | # Client service for connecting to any MySQL instance for reads. 17 | # For writes, you must instead connect to the master: mysql-0.mysql. 18 | apiVersion: v1 19 | kind: Service 20 | metadata: 21 | name: mysql-read 22 | labels: 23 | app: mysql 24 | spec: 25 | ports: 26 | - name: mysql 27 | port: 3306 28 | selector: 29 | app: mysql 30 | 31 | -------------------------------------------------------------------------------- /scripts/mysql/mysql-statefulset.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: StatefulSet 3 | metadata: 4 | name: mysql 5 | spec: 6 | selector: 7 | matchLabels: 8 | app: mysql 9 | serviceName: mysql 10 | replicas: 3 11 | template: 12 | metadata: 13 | labels: 14 | app: mysql 15 | spec: 16 | initContainers: 17 | - name: init-mysql 18 | image: mysql:5.7 19 | command: 20 | - bash 21 | - "-c" 22 | - | 23 | set -ex 24 | # Generate mysql server-id from pod ordinal index. 
25 | [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 26 | ordinal=${BASH_REMATCH[1]} 27 | echo [mysqld] > /mnt/conf.d/server-id.cnf 28 | # Add an offset to avoid reserved server-id=0 value. 29 | echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf 30 | # Copy appropriate conf.d files from config-map to emptyDir. 31 | if [[ $ordinal -eq 0 ]]; then 32 | cp /mnt/config-map/master.cnf /mnt/conf.d/ 33 | else 34 | cp /mnt/config-map/slave.cnf /mnt/conf.d/ 35 | fi 36 | volumeMounts: 37 | - name: conf 38 | mountPath: /mnt/conf.d 39 | - name: config-map 40 | mountPath: /mnt/config-map 41 | - name: clone-mysql 42 | image: gcr.io/google-samples/xtrabackup:1.0 43 | command: 44 | - bash 45 | - "-c" 46 | - | 47 | set -ex 48 | # Skip the clone if data already exists. 49 | [[ -d /var/lib/mysql/mysql ]] && exit 0 50 | # Skip the clone on master (ordinal index 0). 51 | [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 52 | ordinal=${BASH_REMATCH[1]} 53 | [[ $ordinal -eq 0 ]] && exit 0 54 | # Clone data from previous peer. 55 | ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql 56 | # Prepare the backup. 57 | xtrabackup --prepare --target-dir=/var/lib/mysql 58 | volumeMounts: 59 | - name: data 60 | mountPath: /var/lib/mysql 61 | subPath: mysql 62 | - name: conf 63 | mountPath: /etc/mysql/conf.d 64 | containers: 65 | - name: mysql 66 | image: mysql:5.7 67 | env: 68 | - name: MYSQL_ALLOW_EMPTY_PASSWORD 69 | value: "1" 70 | ports: 71 | - name: mysql 72 | containerPort: 3306 73 | volumeMounts: 74 | - name: data 75 | mountPath: /var/lib/mysql 76 | subPath: mysql 77 | - name: conf 78 | mountPath: /etc/mysql/conf.d 79 | resources: 80 | requests: 81 | cpu: 500m 82 | memory: 1Gi 83 | livenessProbe: 84 | exec: 85 | command: ["mysqladmin", "ping"] 86 | initialDelaySeconds: 30 87 | periodSeconds: 10 88 | timeoutSeconds: 5 89 | readinessProbe: 90 | exec: 91 | # Check we can execute queries over TCP (skip-networking is off). 92 | command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"] 93 | initialDelaySeconds: 5 94 | periodSeconds: 2 95 | timeoutSeconds: 1 96 | - name: xtrabackup 97 | image: gcr.io/google-samples/xtrabackup:1.0 98 | ports: 99 | - name: xtrabackup 100 | containerPort: 3307 101 | command: 102 | - bash 103 | - "-c" 104 | - | 105 | set -ex 106 | cd /var/lib/mysql 107 | 108 | # Determine binlog position of cloned data, if any. 109 | if [[ -f xtrabackup_slave_info && "x$( change_master_to.sql.in 113 | # Ignore xtrabackup_binlog_info in this case (it's useless). 114 | rm -f xtrabackup_slave_info xtrabackup_binlog_info 115 | elif [[ -f xtrabackup_binlog_info ]]; then 116 | # We're cloning directly from master. Parse binlog position. 117 | [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1 118 | rm -f xtrabackup_binlog_info xtrabackup_slave_info 119 | echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\ 120 | MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in 121 | fi 122 | 123 | # Check if we need to complete a clone by starting replication. 124 | if [[ -f change_master_to.sql.in ]]; then 125 | echo "Waiting for mysqld to be ready (accepting connections)" 126 | until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done 127 | 128 | echo "Initializing replication from clone position" 129 | mysql -h 127.0.0.1 \ 130 | -e "$( 15 | ``` 16 | 17 | Make sure to use the correct Storage Class that you created and Namespace in the manifest file. 18 | **[Note:** In the GA version of vSphere 7 the apiVersion and kind have changed. 
Use the following for GA versions in your create-service-cluster.yaml] 19 | ``` 20 | apiVersion: run.tanzu.vmware.com/v1alpha1 21 | kind: TanzuKubernetesCluster 22 | ``` 23 | 24 | ![](../.././images/workloadcluster1.png) 25 | 26 | ```bash 27 | kubectl apply -f scripts/create-service-cluster.yaml 28 | ``` 29 | 30 | SOMETIMES THE GUEST CLUSTER DOES NOT SHOW UP IN THE VSPHERE UI. 31 | YOU MAY HAVE TO DELETE AND DO IT AGAIN. 32 | 33 | ``` 34 | kubectl config use-context application-catalog 35 | ``` 36 | -------------------------------------------------------------------------------- /servicecluster/integrateapplicationcatalog/README.md: -------------------------------------------------------------------------------- 1 | # Application Catalog 2 | 3 | ```bash 4 | kubectl create namespace kubeapps 5 | helm repo add bitnami https://charts.bitnami.com/bitnami 6 | helm repo update 7 | helm install kubeapps bitnami/kubeapps -n kubeapps --set useHelm3=true --set frontend.service.type=LoadBalancer 8 | kubectl -n kubeapps get svc kubeapps 9 | ``` 10 | 11 | ![](../.././images/kubeapps.png) 12 | Kubeapps Access Control can be configured according to your needs, but we will not dive into this topic here and will simply configure admin access. 13 | 14 | ```bash 15 | kubectl create serviceaccount kubeapps-operator 16 | kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator 17 | kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{range .secrets[*]}{.name}{"\n"}{end}' | grep kubeapps-operator-token) -o jsonpath='{.data.token}' -o go-template='{{.data.token | base64decode}}' && echo 18 | ``` 19 | 20 | Get the Service IP, paste it into your browser, and use the token from the last command to log in to Kubeapps as the admin user. 21 | 22 | ![](../.././images/kubeapps1.png) -------------------------------------------------------------------------------- /servicecluster/logging/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/servicecluster/logging/.DS_Store -------------------------------------------------------------------------------- /servicecluster/logging/README.md: -------------------------------------------------------------------------------- 1 | # Fluent Bit 2 | 3 | ## Deploy fluent-bit 4 | 5 | Log in as admin 6 | 7 | ```bash 8 | kubectl vsphere login --insecure-skip-tls-verify --managed-cluster-name=application-catalog --server wcp.haas-435.pez.pivotal.io -u alana@vsphere.local 9 | ``` 10 | 11 | 12 | ```bash 13 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/00-fluent-bit-namespace.yaml 14 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/01-fluent-bit-service-account.yaml 15 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/02-fluent-bit-role.yaml 16 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/03-fluent-bit-role-binding.yaml 17 | # Change the following placeholders in 04-fluent-bit-configmap.yaml before applying it: 18 | 19 | # - the TKG cluster name placeholder: set it to the name of the TKG cluster. 20 | 21 | # - the TKG instance name placeholder: set it to the name of the TKG instance. This name should be the same for the management cluster and all the workload clusters that form a part of one TKG deployment. 22 | 23 | # - the Elasticsearch host placeholder: set it to the service name of Elasticsearch within your cluster. 24 | 25 | # - the Elasticsearch port placeholder: set it to the port the Elasticsearch server is listening on.
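# A minimal example of filling in those values with sed before applying the manifest.
# The placeholder token names below are assumptions and the values are lab examples --
# open 04-fluent-bit-configmap.yaml in your copy of tkg-extensions and confirm the exact
# tokens and the right values for your environment before running these commands:
sed -i 's/<TKG_CLUSTER_NAME>/application-catalog/g; s/<TKG_INSTANCE_NAME>/vsphere-k8-hol/g' \
  tkg-extensions/logging/fluent-bit/vsphere/output/elasticsearch/04-fluent-bit-configmap.yaml
sed -i 's/<FLUENT_ELASTICSEARCH_HOST>/elasticsearch/g; s/<FLUENT_ELASTICSEARCH_PORT>/9200/g' \
  tkg-extensions/logging/fluent-bit/vsphere/output/elasticsearch/04-fluent-bit-configmap.yaml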
26 | 27 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/output/elasticsearch/04-fluent-bit-configmap.yaml 28 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/output/elasticsearch/05-fluent-bit-ds.yaml 29 | ``` 30 | All fluent-bit pods should be running: 31 | 32 | ```bash 33 | kubectl get po -n tanzu-system-logging 34 | ``` 35 | -------------------------------------------------------------------------------- /servicecluster/wavefront/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/servicecluster/wavefront/.DS_Store -------------------------------------------------------------------------------- /servicecluster/wavefront/README.md: -------------------------------------------------------------------------------- 1 | # Deploy WaveFront 2 | 3 | Log in to Pivotal Okta to get your API_KEY from Wavefront. 4 | 5 | ```bash 6 | export API_KEY=API_KEY 7 | export VMWARE_ID=YOUR_VMWARE_ID 8 | 9 | kubectl create namespace wavefront 10 | helm install wavefront wavefront/wavefront \ 11 | --set wavefront.url=https://surf.wavefront.com \ 12 | --set wavefront.token=$API_KEY \ 13 | --set clusterName=$VMWARE_ID-svc-cluster \ 14 | --namespace wavefront 15 | ``` 16 | 17 | ## Access wavefront 18 | 19 | Open https://surf.wavefront.com and filter the cluster list to your $VMWARE_ID-svc-cluster. -------------------------------------------------------------------------------- /supervisorcluster/accesscluster/README.md: -------------------------------------------------------------------------------- 1 | # Access SupervisorCluster 2 | 3 | ![](../.././images/supervisor1.png) 4 | 5 | If you are using a jumpbox, ssh into the jumpbox to access the clusters: 6 | ssh ubuntu@ip 7 | 8 | Install kubectl into your Linux/Mac path. 9 | 10 | ![](../.././images/kubectl3.png) 11 | 12 | ![](../.././images/kubectl4.png) 13 | 14 | ```bash 15 | curl -kLO <vsphere-plugin.zip URL from the vSphere UI>   # run this from your Ubuntu jumpbox 16 | # Install unzip to extract the file 17 | sudo apt install unzip 18 | ``` 19 | 20 | 21 | ![](../.././images/kubectl5.png) 22 | ```bash 23 | # Unzip the vsphere-plugin file 24 | unzip vsphere-plugin.zip 25 | ``` 26 | 27 | ![](../.././images/kubectl6.png) 28 | 29 | Put the contents of the .zip file in your OS's executable search path. 30 | Note: If you are NOT using the Ubuntu jumpbox and are working on your Mac directly, you may run into issues with the export command below: if kubectl is already in your PATH, that copy will be picked up instead of the one from the downloaded ZIP, which is the one needed for this exercise. In that case, place the ZIP contents at the beginning of your PATH rather than at the end to avoid conflicts. 31 | 32 | ```bash 33 | export PATH=/some/new/path:$PATH 34 | # Background reading: "Adding a Path to the Linux PATH Variable" 35 | # Background reading: "Everything You Need to Know About $PATH in Bash" 36 | 37 | export PATH=$PATH:/home/ubuntu/bin 38 | which kubectl 39 | ``` 40 | 41 | ![](../.././images/kubectl7.png) 42 | 43 | Using the Alana user: 44 | 45 | ```bash 46 | # Log in to the server. Use the wcp IP or hostname that you have configured. 47 | # You can also log in as administrator@vsphere.local with your admin credentials. 48 | kubectl vsphere login --insecure-skip-tls-verify --server your-wcp-server -u alana@vsphere.local 49 | 50 | # Username: alana@vsphere.local 51 | # PW: VMware1! 52 | ``` 53 | 54 | We have given the Alana user the 'Can Edit' role. One thing to notice here is that Alana can only access her own namespaces: if you run the kubectl get ns command as Alana it will return an access denied error, while the admin user can see all namespaces.
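To see this scoping from the command line (a quick check; the exact error text varies by version):

```bash
# Logged in as alana@vsphere.local:
kubectl config get-contexts   # only the Supervisor namespaces Alana has been granted show up as contexts
kubectl get ns                # expected to fail with an access denied / Forbidden error
# The same command run as administrator@vsphere.local lists every namespace.
```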
55 | 56 | ![](../../images/alana.png) 57 | 58 | 59 | 60 | -------------------------------------------------------------------------------- /supervisorcluster/accesscontrol/README.md: -------------------------------------------------------------------------------- 1 | # Access Control 2 | 3 | Menu > Administration 4 | 5 | ``Add two users (Alana and Cody) in the vsphere.local domain. We will give edit access to Alana and view access to Cody.`` 6 | 7 | ![](../../images/accesscontrol.png) 8 | 9 | -------------------------------------------------------------------------------- /supervisorcluster/enablecluster/README.md: -------------------------------------------------------------------------------- 1 | # Enable Supervisor Cluster 2 | 3 | ![](../../././images/enablesupervisor.png) 4 | 5 | Menu > Workload Management > Enable 6 | 7 | ![](../.././images/enable.png) 8 | 9 | ![](../.././images/enable2.png) 10 | 11 | ![](../.././images/enable3.png) 12 | 13 | ``The configurations below should match your environment`` 14 | 15 | ``Network: Management Network`` 16 | 17 | ``Subnet Mask: 255.255.255.0`` 18 | 19 | ``Starting IP: 10.###.###.45`` 20 | 21 | ``Gateway: 10.###.###.1`` 22 | 23 | ``DNS Server: 10.192.2.10`` 24 | 25 | ``NTP: 10.192.2.5`` 26 | 27 | ``DNS Search Domain: pplab.io`` 28 | 29 | ``vSphere Distributed Switch: choose default (UUID)`` 30 | 31 | ``Edge Cluster: Edge-Cluster-01`` 32 | 33 | ``Ingress CIDR: 10.###.###.64/26`` 34 | 35 | ``Egress CIDR: 10.###.###.128/26`` 36 | 37 | 38 | ![](../.././images/enable4.png) 39 | 40 | ![](../.././images/enable5.png) 41 | 42 | ![](../.././images/enable6.png) 43 | 44 | ![](../.././images/enable7.png) 45 | 46 | THIS STEP TAKES ABOUT 20-30min. IT DOES FAIL AT TIMES AND YOU MAY NEED TO REMOVE WORKLOAD MANAGEMENT, AND DO THIS STEP OVER. 47 | 48 | 49 | ## To store the Guest Cluster images we will need to create a Content Library. 50 | 51 | 1. Navigate to Menu -> Content Libraries -> Create 52 | 2. Set the name as Kubernetes (case sensitive) 53 | 3. Choose Subscribed Content Library, enter https://wp-content.vmware.com/v2/beta/lib.json for the URL, and choose to download content when needed 54 | ![](../.././images/contentlibrary.png) 55 | 56 | 4. Click Yes to accept the certificate thumbprint. 57 | ![](../.././images/contentlibrary2.png) 58 | 59 | ![](../.././images/contentlibrary3.png) 60 | 61 | 5. Choose a datastore to store your images. 62 | 6. Click Finish 63 | -------------------------------------------------------------------------------- /supervisorcluster/enableharbor/README.md: -------------------------------------------------------------------------------- 1 | # Enable Harbor 2 | 3 | Menu > Host and Clusters > Cluster > Configuration > Image Registry 4 | 5 | 6 | ![](../.././images/harbor1.png) 7 | 8 | ![](../.././images/harbor2.png) 9 | 10 | ![](../.././images/harbor3.png) 11 | 12 | Copy the Harbor link and try to access it with the user/password that you set up during install. 13 | -------------------------------------------------------------------------------- /supervisorcluster/namespace/README.md: -------------------------------------------------------------------------------- 1 | # Create Namespace 2 | 3 | ![](../.././images/namespace.png) 4 | 5 | Once the Supervisor Cluster is enabled, we need to create a namespace.
VI Admins are going to create this namespace per product team. The VI Admin can provide dev teams access to this namespace and also apply storage and quota policies. Name your namespace vsphere-k8-hol. 6 | 7 | 8 | ![](../.././images/namespace1.png) 9 | 10 | ![](../.././images/namespace2.png) 11 | 12 | ## Storage 13 | 14 | `Choose your storage policy.` 15 | 16 | ![](../.././images/namespace3.png) 17 | 18 | ![](../.././images/namespace4.png) 19 | 20 | ## Manage Permission 21 | 22 | Assign the 'can edit' role to Alana and the 'can view' role to Cody. 23 | 24 | ![](../.././images/permission1.png) 25 | 26 | ![](../.././images/permission2.png) 27 | 28 | ## Resource Limits 29 | 30 | You can also assign resource limits. 31 | 32 | ![](../.././images/resource1.png) 33 | 34 | ![](../.././images/resource2.png) -------------------------------------------------------------------------------- /supervisorcluster/nativepod/README.md: -------------------------------------------------------------------------------- 1 | # Native Pod 2 | 3 | ![](../.././images/nativepod.png) 4 | 5 | ## Replace or add your storage class in the yaml files that create PVCs before you execute any scripts. 6 | 7 | Switch your context to the namespace you have created. 8 | 9 | `kubectl config use-context my-namespace` 10 | 11 | ## Deploy MySql 12 | 13 | ```bash 14 | kubectl apply -f scripts/mysql 15 | 16 | kubectl get pods -l app=mysql 17 | ``` 18 | 19 | ![](../.././images/mysql1.png) 20 | 21 | `kubectl get pvc` 22 | 23 | ![](../.././images/mysql1.png) 24 | 25 | ```bash 26 | kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- mysql -h mysql-0.mysql < 18 | ` 19 | 20 | Make sure to use the correct Storage Class that you created and Namespace in the manifest file. 21 | **[Note:** In the GA version of vSphere 7 the apiVersion and kind have changed. Use the following for GA versions in your create-workload-cluster.yaml] 22 | ``` 23 | apiVersion: run.tanzu.vmware.com/v1alpha1 24 | kind: TanzuKubernetesCluster 25 | ``` 26 | 27 | ![](../../images/workloadcluster1.png) 28 | 29 | ``` 30 | kubectl apply -f scripts/create-workload-cluster.yaml 31 | ``` 32 | 33 | SOMETIMES THE GUEST CLUSTER DOES NOT SHOW UP IN THE VSPHERE UI. 34 | YOU MAY HAVE TO DELETE AND DO IT AGAIN. 35 | 36 | `kubectl config use-context guestcluster` 37 | -------------------------------------------------------------------------------- /workloadcluster/deployworkloads/README.md: -------------------------------------------------------------------------------- 1 | # Deploy Wordpress App 2 | 3 | WordPress needs a MySQL database. We will create the MySQL database using the Tanzu Application Catalog; MySQL will be deployed in the Application Catalog cluster. Once deployed, we can get its external IP and use it in the WordPress deployment. 4 | 5 | ##### Log in to KubeApps. The dev team should have view rights on the Application Catalog cluster. 6 | 7 | # Deploy Mysql 8 | 9 | ```bash 10 | kubectl vsphere login --insecure-skip-tls-verify --managed-cluster-name=application-catalog --server wcp.haas-435.pez.pivotal.io -u cody@vsphere.local 11 | ``` 12 | 13 | Access Kubeapps, search for MySQL in the catalog, and click Deploy. 14 | 15 | #### Change the storage class according to your env (kubectl get storageclasses) 16 | #### Change the service type to LoadBalancer 17 | #### Change the root MySQL password: cat scripts/wordpress/kustomization.yaml and copy the MySQL password (a quick way to do this is shown below).
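For example, a quick way to pull that password out of the file (a sketch — it assumes the kustomization keeps the password in a secretGenerator literal, as in the upstream WordPress sample; adjust the key name if your copy differs):

```bash
grep -A 3 secretGenerator scripts/wordpress/kustomization.yaml | grep password
```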
18 | 19 | ![](../.././images/appcatalog-mysql1.png) 20 | 21 | ![](../.././images/appcatalog-mysql2.png) 22 | 23 | ![](../.././images/appcatalog-mysql3.png) 24 | 25 | Change the namespace from the top menu and select mysql. 26 | 27 | Click Submit. 28 | 29 | Log in as Cody on the Application Catalog cluster: 30 | 31 | ```bash 32 | kubectl vsphere login --insecure-skip-tls-verify --managed-cluster-name=application-catalog --server wcp.haas-435.pez.pivotal.io -u cody@vsphere.local 33 | kubectl config use-context application-catalog 34 | kubectl get svc -n mysql 35 | 36 | # Log in as Cody on the WordPress cluster: 37 | 38 | kubectl vsphere login --insecure-skip-tls-verify --managed-cluster-name=wordpress --server wcp.haas-435.pez.pivotal.io -u cody@vsphere.local 39 | ``` 40 | 41 | # Deploy Wordpress 42 | 43 | ##### Get the MySQL external IP. You can get it from the Application Catalog UI or by running kubectl get svc -n mysql on the Application Catalog cluster. Replace the WORDPRESS_DB_HOST value in deployment-app-catalog.yaml. 44 | 45 | ![](../.././images/wordpress3.png) 46 | 47 | ``` 48 | kubectl apply -f scripts/wordpress/deployment-app-catalog.yaml 49 | 50 | ``` 51 | 52 | -------------------------------------------------------------------------------- /workloadcluster/logging/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/workloadcluster/logging/.DS_Store -------------------------------------------------------------------------------- /workloadcluster/logging/README.md: -------------------------------------------------------------------------------- 1 | # Fluent bit 2 | 3 | ## Deploy fluent-bit 4 | 5 | Log in as admin 6 | 7 | ```bash 8 | kubectl vsphere login --insecure-skip-tls-verify --managed-cluster-name=wordpress --server wcp.haas-435.pez.pivotal.io -u alana@vsphere.local 9 | ``` 10 | 11 | ```bash 12 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/00-fluent-bit-namespace.yaml 13 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/01-fluent-bit-service-account.yaml 14 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/02-fluent-bit-role.yaml 15 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/03-fluent-bit-role-binding.yaml 16 | # Change the following parameters in 04-fluent-bit-configmap.yaml: 17 | 18 | # <TKG_CLUSTER_NAME> to the name of the tkg cluster. 19 | 20 | # <TKG_INSTANCE_NAME> to the name of the tkg instance. This name should be the same for the management cluster and all the workload clusters that form part of one TKG deployment. 21 | 22 | # <FLUENT_ELASTICSEARCH_HOST> to the service name of Elasticsearch within your cluster. 23 | 24 | # <FLUENT_ELASTICSEARCH_PORT> to the port the Elasticsearch server is listening on.
25 | 26 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/output/elasticsearch/04-fluent-bit-configmap.yaml 27 | kubectl apply -f tkg-extensions/logging/fluent-bit/vsphere/output/elasticsearch/05-fluent-bit-ds.yaml 28 | ``` 29 | All fluent-bit pods should be running: 30 | 31 | ```bash 32 | kubectl get po -n tanzu-system-logging 33 | ``` 34 | -------------------------------------------------------------------------------- /workloadcluster/wavefront/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/vmware-archive/vSphere7-k8-lab/f6b2e250504ba72d5af9468d321e0ec28a82941c/workloadcluster/wavefront/.DS_Store -------------------------------------------------------------------------------- /workloadcluster/wavefront/README.md: -------------------------------------------------------------------------------- 1 | # Deploy WaveFront 2 | 3 | Log in to Pivotal Okta to get your API_KEY from Wavefront. 4 | 5 | ```bash 6 | export API_KEY=API_KEY 7 | export VMWARE_ID=YOUR_VMWARE_ID 8 | 9 | kubectl create namespace wavefront 10 | helm install wavefront wavefront/wavefront \ 11 | --set wavefront.url=https://surf.wavefront.com \ 12 | --set wavefront.token=$API_KEY \ 13 | --set clusterName=$VMWARE_ID-workload-cluster \ 14 | --namespace wavefront 15 | ``` 16 | 17 | ## Access wavefront 18 | 19 | Open https://surf.wavefront.com and filter the cluster list to your $VMWARE_ID-workload-cluster. --------------------------------------------------------------------------------
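If nothing shows up in Wavefront for your cluster, a quick sanity check (a minimal sketch; exact resource names depend on the chart version and release name) is to confirm the collector and proxy came up:

```bash
kubectl get pods -n wavefront             # collector and proxy pods should be Running
kubectl get deploy,daemonset -n wavefront
```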