├── .gitignore ├── .travis.yml ├── BOM.md ├── CONTRIBUTING.md ├── LICENSE_MIT.txt ├── README.md ├── bootstrap ├── Dockerfile ├── Dockerfile.packer ├── Dockerfile.testing ├── README.md ├── bootstrap-spec.json ├── bootstrap-vm.sh ├── docker-env ├── entrypoint.sh ├── patch ├── prep.sh ├── provision │ ├── concourse.yml │ ├── download.yml │ ├── external_roles.yml │ ├── fileshare.yml │ ├── group_vars │ │ └── bootstrap │ │ │ └── vars.yml │ ├── inventory │ ├── inventory.yml │ ├── refly │ │ ├── group_vars │ │ │ └── concourse-web │ │ │ │ └── vars.yml │ │ ├── inventory │ │ ├── pipeline.yml │ │ ├── pipeline_parallelizer.yml │ │ ├── site.yml │ │ └── templates │ │ │ └── extra_vars.yml.j2 │ ├── site.yml │ ├── templates │ │ └── mc │ │ │ └── config.json │ └── utilities.yml ├── set-env.sh ├── user-data.yml └── vars ├── downloads ├── README.md ├── config.json └── download.sh ├── one-cloud-nsxt-param.yaml └── pks-params.sample.yml /.gitignore: -------------------------------------------------------------------------------- 1 | bootstrap.ova 2 | bootstrap-spec.edited.json 3 | user-data.edited.yml 4 | vault-pass 5 | downloads/config.json 6 | downloads/files.json 7 | downloads/index.json 8 | downloads/product.json 9 | parse-ova-linux 10 | solution-name 11 | *.retry 12 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | stages: 2 | - lint 3 | - name: test-bootstrap-provision 4 | if: type IN (push, cron) 5 | - name: build-bootstrap 6 | if: type IN (push, cron) 7 | - name: build-prep 8 | 9 | language: python 10 | 11 | sudo: required 12 | 13 | services: 14 | - docker 15 | 16 | addons: 17 | apt: 18 | update: true 19 | 20 | before_install: 21 | - docker info 22 | - cd bootstrap 23 | 24 | jobs: 25 | include: 26 | - stage: test-bootstrap-provision 27 | script: 28 | - sudo ./prep.sh 29 | - stage: test-bootstrap-provision 30 | script: 31 | - docker build -t bootstrap -f Dockerfile.testing . 32 | - docker run -v $PWD/..:/deployroot/pks-deploy bootstrap 33 | - stage: build-bootstrap 34 | script: 35 | - docker build -t vmware/pks-deploy:bootstrap . 36 | -------------------------------------------------------------------------------- /BOM.md: -------------------------------------------------------------------------------- 1 | # Bill of materials 2 | 3 | These are all the things we must have to be able to do an NSX-T/PKS automated deployment: 4 | 5 | * VMware binaries 6 | * nsx-edge-2.1.0.0.0.7395502.ova 7 | * nsx-unified-appliance-2.1.0.0.0.7395503.ova 8 | * nsx-controller-2.1.0.0.0.7395493.ova 9 | * VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle 10 | * Pivotal pivnet binaries 11 | * pcf-vsphere-2.1-build.318.ova 12 | * pks-linux-amd64-1.0.4-build.1 13 | * Container images in s3 14 | * ??? 15 | * Services 16 | * S3 support on the concourse host (Minio) 17 | * Concourse 18 | * tools 19 | * fly CLI 20 | * pivnet CLI 21 | * ovftool 22 | * govc 23 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Contributing to vmware-pks-deploy 4 | 5 | The vmware-pks-deploy project team welcomes contributions from the community. Before you start working with vmware-pks-deploy, please read our [Developer Certificate of Origin](https://cla.vmware.com/dco). All contributions to this repository must be signed as described on that page. 
Your signature certifies that you wrote the patch or have the right to pass it on as an open-source patch. 6 | 7 | ## Community 8 | 9 | ## Getting Started 10 | 11 | ## Contribution Flow 12 | 13 | This is a rough outline of what a contributor's workflow looks like: 14 | 15 | - Create a topic branch from where you want to base your work 16 | - Make commits of logical units 17 | - Make sure your commit messages are in the proper format (see below) 18 | - Push your changes to a topic branch in your fork of the repository 19 | - Submit a pull request 20 | 21 | Example: 22 | 23 | ``` shell 24 | git remote add upstream https://github.com/vmware/vmware-pks-deploy.git 25 | git checkout -b my-new-feature master 26 | git commit -a 27 | git push origin my-new-feature 28 | ``` 29 | 30 | ### Staying In Sync With Upstream 31 | 32 | When your branch gets out of sync with the vmware/master branch, use the following to update: 33 | 34 | ``` shell 35 | git checkout my-new-feature 36 | git fetch -a 37 | git pull --rebase upstream master 38 | git push --force-with-lease origin my-new-feature 39 | ``` 40 | 41 | ### Updating pull requests 42 | 43 | If your PR fails to pass CI or needs changes based on code review, you'll most likely want to squash these changes into 44 | existing commits. 45 | 46 | If your pull request contains a single commit or your changes are related to the most recent commit, you can simply 47 | amend the commit. 48 | 49 | ``` shell 50 | git add . 51 | git commit --amend 52 | git push --force-with-lease origin my-new-feature 53 | ``` 54 | 55 | If you need to squash changes into an earlier commit, you can use: 56 | 57 | ``` shell 58 | git add . 59 | git commit --fixup 60 | git rebase -i --autosquash master 61 | git push --force-with-lease origin my-new-feature 62 | ``` 63 | 64 | Be sure to add a comment to the PR indicating your new changes are ready to review, as GitHub does not generate a 65 | notification when you git push. 66 | 67 | ### Code Style 68 | 69 | ### Formatting Commit Messages 70 | 71 | We follow the conventions on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/). 72 | 73 | Be sure to include any related GitHub issue references in the commit message. See 74 | [GFM syntax](https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown) for referencing issues 75 | and commits. 76 | 77 | ## Reporting Bugs and Creating Issues 78 | 79 | When opening a new issue, try to roughly follow the commit message format conventions above. 80 | 81 | ## Repository Structure 82 | -------------------------------------------------------------------------------- /LICENSE_MIT.txt: -------------------------------------------------------------------------------- 1 | pks-deploy-meta 2 | 3 | Copyright (c) 2018 VMware, Inc. All Rights Reserved. 4 | 5 | The MIT license (the License) set forth below applies to all parts of the ansible-role-govc project. 
You may not use this file except in compliance with the License.
 6 | 
 7 | MIT License
 8 | 
 9 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
10 | so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
13 | 
14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
15 | 
16 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # vmware-pks-deploy
 2 | 
 3 | [![Build Status](https://travis-ci.org/vmware/vmware-pks-deploy.svg?branch=master)](https://travis-ci.org/vmware/vmware-pks-deploy)
 4 | 
 5 | This is a project intended to document and automate the process required for a PKS + NSX-T deployment on vSphere.
 6 | 
 7 | ## Extra documents
 8 | 
 9 | These documents are not public yet. They are linked here for VMware internal users, but should be converted over time into publicly consumable documents.
10 | 
11 | * Read
12 |   [Niran's step-by-step NSX-T deploy](https://onevmw-my.sharepoint.com/:w:/r/personal/nevenchen_vmware_com/_layouts/15/Doc.aspx?sourcedoc=%7B06F3406E-D0A2-42AE-9F5C-F35583D92EDF%7D&file=Deploy%20NSX-T%20with%20Concourse%20V1%2004-27-2018.docx&action=default&mobileredirect=true)
13 |   for a step-by-step manual deploy using some automation
14 | * Use the PoC [Planning Reference](https://vault.vmware.com/group/vault-main-library/document-preview?fileId=38127906) document to vet the planned deployment
15 | * Use the [PKS Configuration Worksheet](https://vault.vmware.com/group/vault-main-library/document-preview?fileId=38127882) to identify and track needed configuration that must be captured as YAML for the pipelines to work properly
16 | 
17 | ## High level end-to-end PKS deployment
18 | 
19 | The overall process for a PKS and NSX-T deployment is:
20 | 
21 | * Start with a new vCenter, or a new cluster in an existing vCenter
22 | * Deploy a PKS deployment server. This has Pivotal Concourse for running pipelines, and all the needed binaries and tools to do an automated deploy of PKS and NSX-T
23 | * Use a default configuration YAML or create a new one for NSX-T and another for PKS. These describe what the final deployment will look like
24 | * Apply the pipelines to your configuration
25 | * Connect to the Concourse UI
26 | * Trigger pipelines to:
27 |   * deploy NSX-T
28 |   * deploy PKS
29 | 
30 | To get the initial OVA, you must bootstrap. That process looks like this:
31 | 
32 | * Start with a machine with access to a vCenter
33 | * Download this code as described below
34 | * Create a container with tools needed to operate on vCenter
35 | * Deploy an Ubuntu 16.04 cloud image into vCenter
36 | * Boot the stock VM using cloud-init to set usernames/passwords/ssh keys
37 | * Run ansible playbooks against the VM to provision everything needed to make a deploy server, including:
38 |   * install concourse
39 |   * download and host needed binaries
40 |   * host container images needed by concourse
41 | 
42 | 
43 | At this point, you have two choices:
44 | 
45 | * export the VM as an OVA for a future deployment
46 | * use the running VM to perform a deploy now
47 | 
48 | Assuming you want to do these things, continue into the details of this process below:
49 | 
50 | ## Get the code
51 | 
52 | ***Do not clone this repository.***
53 | Instead, [install Google Repo](https://source.android.com/source/downloading#installing-repo).
54 | 
55 | Here's a quick Google Repo install for the impatient.
56 | 
57 | ```bash
58 | # Validate python
59 | python2.7 -c "print 'Python OK'" || echo 'Need python 2.7!'
60 | python --version | grep "Python 2" || echo 'Warning: python 3 is default!'
61 | mkdir ~/bin
62 | PATH=~/bin:$PATH
63 | curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
64 | chmod a+x ~/bin/repo
65 | # If you get a warning about python 3, you might run this:
66 | # After repo is installed:
67 | sed -ri "1s:/usr/bin/env python:/usr/bin/python2.7:" ~/bin/repo
68 | ```
69 | 
70 | Once you've installed Google Repo, you will use it to download and assemble all the component git repositories.
71 | 
72 | This process is as follows:
73 | 
74 | ``` bash
75 | mkdir pks-deploy-testing
76 | cd pks-deploy-testing
77 | repo init -u https://github.com/vmware/vmware-pks-deploy-meta.git
78 | # or, with ssh: (you will have first had to register an SSH key with GitHub)
79 | repo init -u git@github.com:vmware/vmware-pks-deploy-meta.git
80 | # Then sync, which pulls down the code.
81 | repo sync
82 | ```
83 | 
84 | After pulling down all the code as described above, go into `pks-deploy-testing`
85 | and you'll see there are several directories. These are each a git repository.
86 | 
87 | We'll focus on the `pks-deploy` repository.
88 | 
89 | ## Bootstrapping
90 | 
91 | Go into `pks-deploy/bootstrap`.
92 | This directory contains code that will create a VM in vCenter and install Concourse, Ansible, and other tools into that VM.
93 | 
94 | You can use an existing OVA captured after doing this process once, or you can go into the [bootstrap directory](bootstrap/)
95 | and follow the readme there to create the VM directly in vCenter.
96 | 
97 | This should take about 15 minutes.
98 | 
99 | ## SSH into the jumpbox
100 | 
101 | Get the IP of the VM created in the bootstrap step above.
102 | If you set up ssh keys, you can ssh in right away; otherwise use:
103 | 
104 | * Username: `vmware`
105 | * Password: `VMware1!`
106 | 
107 | On the jumpbox, there is also a copy of the source you used to bootstrap at `/home/vmware/deployroot`.
108 | 
109 | ### Download VMware bits
110 | 
111 | If you passed the following variables into the bootstrap process above,
112 | the required binaries will be downloaded as part of the automation: `PIVNET_API_TOKEN`, `MY_VMWARE_USER`, and `MY_VMWARE_PASSWORD`.
113 | If you did not pass those in, then you'll need to run this step manually as described below.
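Those three values are plain environment variables consumed by the bootstrap container; a minimal sketch of how they would have been supplied (placeholder values only, matching the commented-out entries in `bootstrap/docker-env`) is:

``` bash
# bootstrap/docker-env -- placeholder credentials, substitute your own
MY_VMWARE_USER=some_user@vmware.com
MY_VMWARE_PASSWORD=super_secret
PIVNET_API_TOKEN=super_secret
```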
114 | 
115 | Go into the jumpbox directory `/home/vmware/deployroot/pks-deploy/downloads`,
116 | and follow the readme there to pull needed bits from http://my.vmware.com and pivnet.
117 | You can see an online version in [downloads](downloads).
118 | 
119 | The downloaded files will be hosted via the S3 protocol by Minio and
120 | can be accessed at `http://bootstrap-box-ip:9091`.
121 | 
122 | ### Apply various pipelines
123 | 
124 | On the jumpbox, the pipelines exist at `/home/vmware/deployroot`, and Concourse is running at `http://jumpbox-ip:8080` with the same credentials used for ssh.
125 | You can use fly from the jumpbox to apply the pipelines. To log in, try `fly --target main login -c http://localhost:8080`, then list pipelines with `fly pipelines --target main`.
126 | 
127 | #### Install NSX-T
128 | 
129 | cd into `/home/vmware/deployroot/nsx-t-gen` and follow the guide from [sparameswaran/nsx-t-gen](https://github.com/sparameswaran/nsx-t-gen).
130 | 
131 | Another good guide is [from Sabha](http://allthingsmdw.blogspot.com/2018/05/introducing-nsx-t-gen-automating-nsx-t.html).
132 | 
133 | A sample config file is at `/home/vmware/deployroot/deploy-params/one-cloud-param.yaml` on the jumpbox, or [live here](https://github.com/NiranEC77/NSX-T-Concourse-Pipeline-Onecloud-param/blob/master/one-cloud-param.yaml).
134 | 
135 | There is also good coverage of the config file needed in Niran's guide from above, starting in section 4.b.
136 | 
137 | Once you have the config file correct:
138 | 
139 | ``` bash
140 | cd /home/vmware/deployroot/nsx-t-gen
141 | fly --target main login -c http://localhost:8080 -u vmware -p 'VMware1!'
142 | fly -t main set-pipeline -p deploy-nsx -c pipelines/nsx-t-install.yml -l ../pks-deploy/one-cloud-nsxt-param.yaml
143 | fly -t main unpause-pipeline -p deploy-nsx
144 | ```
145 | 
146 | #### Install PAS and/or PKS
147 | 
148 | cd into `/home/vmware/deployroot/nsx-t-ci-pipeline` and follow the guide from [sparameswaran/nsx-t-ci-pipeline](https://github.com/sparameswaran/nsx-t-ci-pipeline).
149 | 
150 | In particular, [this is the pipeline](https://github.com/sparameswaran/nsx-t-ci-pipeline/blob/master/pipelines/install-pks-pipeline.yml) and here is [a sample param file](https://github.com/sparameswaran/nsx-t-ci-pipeline/blob/master/pipelines/pks-params.sample.yml).
151 | 
152 | ``` bash
153 | cd /home/vmware/deployroot/nsx-t-ci-pipeline
154 | fly --target main login -c http://localhost:8080 -u vmware -p 'VMware1!'
155 | fly -t main set-pipeline -p deploy-pks -c pipelines/install-pks-pipeline.yml -l ../pks-deploy/pks-params.sample.yml
156 | fly -t main unpause-pipeline -p deploy-pks
157 | ```
158 | 
159 | ## Contributing
160 | 
161 | The vmware-pks-deploy project team welcomes contributions from the community. Before you start working with vmware-pks-deploy, please read our [Developer Certificate of Origin](https://cla.vmware.com/dco). All contributions to this repository must be signed as described on that page. Your signature certifies that you wrote the patch or have the right to pass it on as an open-source patch. For more detailed information, refer to [CONTRIBUTING.md](CONTRIBUTING.md).
162 | 
163 | ### Development
164 | 
165 | For development, you will clone this repository and submit PRs back to upstream.
166 | This is intended to be used as a sub project pulled together by a meta-project called [vmware-pks-deploy-meta](https://github.com/vmware/vmware-pks-deploy-meta).
167 | You can get the full set of repositories by following the prep section above.
168 | 
169 | ## License
170 | 
171 | Copyright © 2018 VMware, Inc.
All Rights Reserved. 172 | 173 | SPDX-License-Identifier: MIT 174 | -------------------------------------------------------------------------------- /bootstrap/Dockerfile: -------------------------------------------------------------------------------- 1 | #Copyright (c) 2018 VMware, Inc. All Rights Reserved. 2 | # 3 | #SPDX-License-Identifier: MIT 4 | FROM alpine:3.7 5 | 6 | ENV GOVC=https://github.com/vmware/govmomi/releases/download/v0.18.0/govc_linux_amd64.gz 7 | ENV OVA=https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64.ova 8 | 9 | # use this instead of curling it below to always update on new base images. 10 | # use the curl below for faster iterations on building while testing. 11 | # ADD $OVA /bootstrap.ova 12 | 13 | RUN apk update && apk add --virtual build-dependencies curl git bash jq openssh rsync py-pip python python-dev libffi-dev openssl-dev build-base && \ 14 | pip install --upgrade pip && \ 15 | pip install 'ansible<2.6' && \ 16 | curl -L $OVA -o /bootstrap.ova && \ 17 | curl -L $GOVC | gunzip > /usr/bin/govc && chmod +x /usr/bin/govc 18 | 19 | COPY . / 20 | 21 | ENTRYPOINT [ "/entrypoint.sh" ] 22 | -------------------------------------------------------------------------------- /bootstrap/Dockerfile.packer: -------------------------------------------------------------------------------- 1 | #Copyright (c) 2018 VMware, Inc. All Rights Reserved. 2 | # 3 | #SPDX-License-Identifier: MIT 4 | FROM hashicorp/packer:light 5 | 6 | ENV GOVC=https://github.com/vmware/govmomi/releases/download/v0.18.0/govc_linux_amd64.gz 7 | ENV VSPHERE_CLONE=https://github.com/pivotal-cf/packer-builder-vsphere/releases/download/v2.0-beta4-pcf.2/packer-builder-vsphere-clone.linux 8 | 9 | # use this instead of curling it below to always update on new base images. 10 | # use the curl below for faster iterations on building while testing. 11 | # ADD $OVA /bootstrap.ova 12 | 13 | RUN apk update && apk add --virtual build-dependencies curl git bash jq openssh rsync py-pip python python-dev libffi-dev openssl-dev build-base && \ 14 | pip install --upgrade pip && \ 15 | pip install 'ansible<2.6' 16 | 17 | RUN curl -L -o /bin/packer-builder-vsphere-clone ${VSPHERE_CLONE} && chmod +x /bin/packer-builder-vsphere-clone && \ 18 | curl -L $GOVC | gunzip > /usr/bin/govc && chmod +x /usr/bin/govc 19 | 20 | WORKDIR /deployroot/packer-ova-concourse 21 | CMD [ "./loop.sh" ] 22 | 23 | ENTRYPOINT [ "/bin/bash" ] 24 | -------------------------------------------------------------------------------- /bootstrap/Dockerfile.testing: -------------------------------------------------------------------------------- 1 | #Copyright (c) 2018 VMware, Inc. All Rights Reserved. 2 | # 3 | #SPDX-License-Identifier: MIT 4 | FROM ubuntu:16.04 5 | 6 | ENV GOVC=https://github.com/vmware/govmomi/releases/download/v0.18.0/govc_linux_amd64.gz 7 | ENV OVA=https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64.ova 8 | ENV ANSIBLE_HOST_KEY_CHECKING=False 9 | 10 | RUN apt-get -qq update && apt-get install -y \ 11 | curl \ 12 | jq \ 13 | git \ 14 | openssh-server \ 15 | python-pip \ 16 | sudo && \ 17 | curl -L $GOVC | gunzip > /usr/bin/govc && chmod +x /usr/bin/govc && \ 18 | useradd -ms /bin/bash vmware && \ 19 | pip install 'ansible<2.6' && \ 20 | echo 'vmware ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/vmware 21 | 22 | COPY . 
/home/vmware
23 | RUN chown -R vmware:vmware /home/vmware
24 | 
25 | # This isn't really needed for purely testing the automation
26 | # ADD $OVA /bootstrap.ova
27 | 
28 | RUN mkdir -p /deployroot
29 | WORKDIR /home/vmware/provision
30 | USER vmware
31 | 
32 | RUN ansible-galaxy install -r external_roles.yml && \
33 |     ssh-keygen -f ~/.ssh/id_rsa.test -t rsa -N '' && \
34 |     echo "---" > ~/.ansible/roles/concourse/tasks/kernel_update.yml
35 | 
36 | CMD ["-vv", "-e", "do_download=false", "-e", "ansible_ssh_user=root", \
37 |     "-e", "ansible_ssh_private_key_file=~/.ssh/id_rsa.test", \
38 |     "-e", "bootstrap_box_ip=localhost", \
39 |     "-e", "deploy_user=vmware", \
40 |     "site.yml", "-i", "inventory"]
41 | ENTRYPOINT [ "ansible-playbook" ]
42 | 
--------------------------------------------------------------------------------
/bootstrap/README.md:
--------------------------------------------------------------------------------
 1 | # Bootstrap a control host
 2 | 
 3 | We want to stand up a simple VM on which to
 4 | 
 5 | * run ansible playbooks
 6 | * install and run Concourse
 7 | * download and host binaries to install into vSphere and other VMs
 8 | 
 9 | We'll enable standing this VM up and capturing it as an OVF for a pre-packaged VM.
10 | 
11 | ## Requirements
12 | 
13 | * A machine that you control that has:
14 |   * bash
15 |   * docker
16 | * a vCenter to which this machine can connect
17 | * ovftool (optional: only needed if you need to capture an OVF after bootstrapping)
18 | * DHCP for the network the bootstrap VM will be created on
19 | * a default resource pool into which we will install the bootstrap VM
20 | 
21 | ## Preparation
22 | 
23 | You may be able to run the `prep.sh` script to get ready to build the
24 | bootstrap docker container.
25 | 
26 | ``` bash
27 | ./prep.sh
28 | ```
29 | 
30 | ### user-data (optional)
31 | 
32 | Edit [user-data.yml](./user-data.yml) to add your ssh key if desired. There is a default
33 | user/password set in case you don't do this step.
34 | 
35 | Your local ssh key should already be registered with your github account. Grab your ssh key from:
36 | 
37 | ``` bash
38 | cat ~/.ssh/id_rsa.pub
39 | ```
40 | 
41 | Put that key text in [user-data.yml](./user-data.yml) under the `ssh-authorized-keys` with a leading `- ` just like the other line in that section.
42 | 
43 | ### Create a container image with needed tools
44 | 
45 | This step lets you avoid installing a bunch of tools locally to get going; all you need is Docker.
46 | 
47 | ``` bash
48 | docker build -t bootstrap .
49 | ```
50 | 
51 | ## Deploy the Bootstrap host
52 | 
53 | In the bootstrap image, there exists a script that will create a VM in vCenter and run some provisioning in the VM.
54 | We will later use this VM to run the full PKS and NSX-T deploy. The command below will result in this VM configured and running in vCenter.
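For orientation, the flow that script automates (a simplified sketch of `bootstrap-vm.sh`, which appears later in this repository, not a verbatim excerpt) looks roughly like:

``` bash
# Simplified sketch of the bootstrap flow driven by entrypoint.sh / bootstrap-vm.sh
govc import.ova -options=bootstrap-spec.edited.json -name=$VM_NAME bootstrap.ova
govc vm.power -on "$VM_NAME"
ip=$(govc vm.ip "$VM_NAME")                    # wait for DHCP to hand the guest an address
rsync -ra /deployroot/* vmware@$ip:deployroot  # copy this source tree onto the VM
ansible-playbook -i inventory site.yml         # provision Concourse, Minio, and the downloads
```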
55 | 
56 | There are several parameters you can pass in to this provisioning step:
57 | 
58 | * `NAME`: the base name of the deployed thing (default is 'pks', or the contents of a file named `solution-name` in the current directory)
59 | * `VM_NAME`: this is what the bootstrap VM will be named (default is '$NAME-bootstrapper')
60 | * `GOVC_NETWORK`: what network the VM should be placed on (default is 'VM Network')
61 | * `GOVC_PASSWORD`: administrator password
62 | * `GOVC_INSECURE`: allow skipping SSL verification (default is '1' for true)
63 | * `GOVC_URL`: the URL used to connect to vCenter
64 | * `GOVC_DATACENTER`: datacenter in which to place the VM (default is 'Goddard')
65 | * `GOVC_DATASTORE`: datastore in which to place the VM (default is 'vms')
66 | * `GOVC_USERNAME`: administrator username (default is 'administrator@vsphere.local')
67 | * `GOVC_RESOURCE_POOL`: resource pool in which to place the VM (default is none)
68 | * `MY_VMWARE_USER`: Username used to log into my.vmware.com when downloading binaries, no default
69 | * `MY_VMWARE_PASSWORD`: Password used to log into my.vmware.com when downloading binaries, no default
70 | * `PIVNET_API_TOKEN`: API token for network.pivotal.io for downloading binaries, no default
71 | 
72 | Setting `PIVNET_API_TOKEN`, `MY_VMWARE_USER`, and `MY_VMWARE_PASSWORD` will enable automatic downloading of binaries
73 | from my.vmware.com and pivnet.
74 | 
75 | You can edit [docker-env](./docker-env) in this directory to reflect your environment. This is passed in to the provisioning process in the following command:
76 | 
77 | ``` bash
78 | # for debugging purposes, this may be better:
79 | docker run -it --env-file docker-env -v $PWD/../..:/deployroot --entrypoint /bin/bash bootstrap
80 | bash-4.4# ./entrypoint.sh
81 | # and if failures happen... you can re-run
82 | bash-4.4# ./entrypoint.sh
83 | 
84 | # For a one-shot run, try:
85 | docker run -it --env-file docker-env -v $PWD/../..:/deployroot bootstrap
86 | ```
87 | 
88 | After this completes, you should have a VM in the vCenter named after your `$VM_NAME`.
89 | 
90 | ``` bash
91 | # Debugging note: you can pass additional arguments to the entrypoint.sh to
92 | # target an existing VM for more interactive debugging.
93 | # -a <address> for existing VM, -u <user>, -d <deployroot>
94 | # E.g.
95 | ./entrypoint.sh -a my-bootstrapper -u gardnerj -d ../../my_deployroot
96 | ```
97 | 
98 | ## Capture an OVF
99 | 
100 | You can capture an OVF from the bootstrapped VM for future deploys without waiting for the bootstrap process to download and configure everything.
101 | 
102 | ``` bash
103 | govc vm.power -off $NAME-bootstrapper
104 | govc export.ovf -vm $NAME-bootstrapper .
105 | ovftool $NAME-bootstrapper/$NAME-bootstrapper.ovf baked-$NAME-deploy.ova
106 | ```
107 | 
108 | ## Destroy the Bootstrap host
109 | 
110 | This will power off and remove the VM from vCenter.
111 | 
112 | `govc vm.destroy $NAME-bootstrapper`
113 | 
--------------------------------------------------------------------------------
/bootstrap/bootstrap-spec.json:
--------------------------------------------------------------------------------
 1 | {
 2 |   "DiskProvisioning": "thin",
 3 |   "IPAllocationPolicy": "dhcpPolicy",
 4 |   "IPProtocol": "IPv4",
 5 |   "PropertyMapping": [
 6 |     {
 7 |       "Key": "instance-id",
 8 |       "Value": "id-ovf"
 9 |     },
10 |     {
11 |       "Key": "hostname",
12 |       "Value": "ubuntuguest"
13 |     },
14 |     {
15 |       "Key": "seedfrom",
16 |       "Value": ""
17 |     },
18 |     {
19 |       "Key": "public-keys",
20 |       "Value": ""
21 |     },
22 |     {
23 |       "Key": "user-data",
24 |       "Value": $userdata
25 |     },
26 |     {
27 |       "Key": "password",
28 |       "Value": ""
29 |     }
30 |   ],
31 |   "NetworkMapping": [
32 |     {
33 |       "Name": "VM Network",
34 |       "Network": $network
35 |     }
36 |   ],
37 |   "PowerOn": false,
38 |   "InjectOvfEnv": false,
39 |   "WaitForIP": false,
40 |   "Name": null
41 | }
42 | 
--------------------------------------------------------------------------------
/bootstrap/bootstrap-vm.sh:
--------------------------------------------------------------------------------
 1 | #!/bin/bash -e
 2 | # Copyright 2017 VMware, Inc. All Rights Reserved.
 3 | #
 4 | #
 5 | # Create (or reuse) a bootstrap ubuntu VM
 6 | # Requires ESX to be configured with:
 7 | #   govc host.esxcli system settings advanced set -o /Net/GuestIPHack -i 1
 8 | 
 9 | set -o pipefail
10 | 
11 | vm="$VM_NAME"
12 | network="$GOVC_NETWORK"
13 | bootstrap_name=${NAME:='concourse'}
14 | varsfile="../packer-ova-${bootstrap_name}/vars/vsphere-template.json"
15 | destroy=false
16 | verbose=true
17 | ova=xenial-server-cloudimg-amd64.ova
18 | user=vmware
19 | deployroot=/deployroot
20 | use_packer=0
21 | 
22 | # bootstrap vm traits
23 | cpus="1"
24 | memory="4096"
25 | disk="40G"
26 | 
27 | function usage {
28 |   echo "usage: $0 [-qh] [-d deployroot] [-a address] [-u user]"
29 |   echo "  -a address    address or hostname of bootstrap box (VM won't be created)"
30 |   echo "  -d deployroot base directory to copy to the bootstrap box"
31 |   echo "  -n network    network to which to connect the bootstrap box"
32 |   echo "  -u user       user to connect to the bootstrap box"
33 |   echo "  -v vmname     name to assign to the VM"
34 |   echo "  -c            copy files only, no provision (for refreshing pipeline)"
35 |   echo "  -h            display help"
36 |   echo "  -q            quiet mode, less output and no prompting"
37 |   exit 1
38 | }
39 | 
40 | while getopts :d:v:n:a:u:pqch flag
41 | do
42 |   case $flag in
43 |     c)
44 |       # Only copy files, no provision.
45 |       copyonly=true
46 |       ;;
47 |     d)
48 |       # root directory of files to be copied into the bootstrap box
49 |       deployroot=$OPTARG
50 |       ;;
51 |     v)
52 |       # vm name to use for the bootstrap box in vcenter
53 |       vm=$OPTARG
54 |       unset destroy
55 |       ;;
56 |     n)
57 |       # network to connect the bootstrap box to
58 |       network=$OPTARG
59 |       unset destroy
60 |       ;;
61 |     a)
62 |       # address of an existing machine.
63 | # Setting this indicates that we should not create a new vm in vcenter. 64 | # This machine will only work if it is based on Ubuntu 16.04. 65 | ip=$OPTARG 66 | ;; 67 | p) 68 | # use packer for provisioning 69 | use_packer=1 70 | ;; 71 | u) 72 | # user name used to ssh into an existing machine 73 | # Setting this indicates that we should not create a new vm in vcenter. 74 | user=$OPTARG 75 | ;; 76 | q) 77 | # disable verbose output, don't prompt for input 78 | unset verbose 79 | ;; 80 | h) 81 | usage 82 | exit 0 83 | ;; 84 | \? ) 85 | echo "Unexpected argument." 86 | usage 87 | exit 1 88 | ;; 89 | esac 90 | done 91 | 92 | if [ -z "${use_packer}" -o ${use_packer} -ne 0 ]; then 93 | go build -o ../../packer-ova-concourse/software/parse-ova-linux ../../parse-ova-vm-setting/main.go 94 | if [ $? -ne 0 ]; then 95 | echo "ERROR: failed to build parse-ova-linux" 96 | exit 1 97 | fi 98 | docker run --env-file docker-env -v $PWD/../..:/deployroot bootstrap 99 | else 100 | # connectivity options 101 | keyfile=~/.ssh/id_rsa.${vm} 102 | opts=(-i $keyfile -o "UserKnownHostsFile /dev/null" -o "StrictHostKeyChecking no" -o "LogLevel=ERROR") 103 | rsyncopts=(-ra --delete -e 'ssh -i '$keyfile' -o "UserKnownHostsFile /dev/null" -o "StrictHostKeyChecking no" -o "LogLevel=ERROR"') 104 | 105 | # ansible extra vars 106 | export ANSIBLE_HOST_KEY_CHECKING=False 107 | exvars=(-e pivnet_api_token=$PIVNET_API_TOKEN -e my_vmware_user=$MY_VMWARE_USER -e "my_vmware_password='$MY_VMWARE_PASSWORD'") 108 | exvars+=(-e ansible_ssh_private_key_file=$keyfile -e ansible_ssh_user=$user -e do_download=true) 109 | 110 | if [ ! -f $keyfile ]; then 111 | echo "Optional $keyfile not found." 112 | echo "Generate and use a temporary key for ssh..." 113 | ssh-keygen -f $keyfile -t rsa -N '' 114 | fi 115 | 116 | function createvm 117 | { 118 | echo "Creating a VM \"$vm\" to serve as bootstrap box." 119 | 120 | if [ "$verbose" == "true" ] ; then 121 | echo "Ok to continue (ctl-c to quit)?" 122 | read answer 123 | fi 124 | 125 | key=`cat ${keyfile}.pub` 126 | sed -e "s:%%GENERATED_KEY%%:$key\n:" user-data.yml > user-data.edited.yml 127 | userdata=`base64 user-data.edited.yml` 128 | 129 | if [ ! -e bootstrap.ova ] ; then 130 | echo "Downloading ${ova}..." 131 | curl https://cloud-images.ubuntu.com/xenial/current/$ova -o bootstrap.ova 132 | 133 | # use this to get the json spec for the ova, if it changes 134 | # govc import.spec $ova > ${ova}.json 135 | else 136 | echo "${ova} already exists, skipping download" 137 | fi 138 | 139 | vm_path="$(govc find / -type m -name "$vm")" 140 | 141 | if [ -z "$vm_path" ] ; then 142 | echo "Creating VM ${vm}..." 143 | jq -n --arg userdata "$userdata" --arg network "$network" -f bootstrap-spec.json > bootstrap-spec.edited.json 144 | govc import.ova -options=bootstrap-spec.edited.json -name=$vm bootstrap.ova 145 | govc vm.change -vm $vm -nested-hv-enabled=true 146 | govc vm.change -vm $vm -m=${memory} 147 | govc vm.change -vm $vm -c=${cpus} 148 | govc vm.disk.change -vm $vm -size ${disk} 149 | vm_path="$(govc find / -type m -name "$vm")" 150 | else 151 | echo "VM ${vm} already exists" 152 | fi 153 | 154 | if [ -z "$vm_path" ] ; then 155 | echo "Failed to find the vm." 156 | exit 1 157 | fi 158 | 159 | state=$(govc object.collect -s "$vm_path" runtime.powerState) 160 | if [ "$state" != "poweredOn" ]; then 161 | echo "Power on the VM" 162 | govc vm.power -on "$vm" 163 | fi 164 | 165 | echo -n "Waiting for ${vm} ip..." 
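# govc vm.ip blocks until the guest reports an IP address, which is why the
# requirements call for DHCP on the network the bootstrap VM is attached to.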
166 | ip=$(govc vm.ip "$vm") 167 | 168 | echo "Got ip $ip" 169 | 170 | echo -n "Wait until cloud-init finishes" 171 | until ssh "${opts[@]}" vmware@$ip "grep -q 'runcmd done' /etc/cmds"; do 172 | echo -n "." 173 | sleep 5 174 | done 175 | echo "Done" 176 | } 177 | 178 | if [ -z "$ip" ]; then 179 | createvm 180 | fi 181 | 182 | if [ -z "$ip" ]; then 183 | echo "Unable to get the vm's IP. Unknown error." 184 | fi 185 | 186 | echo -n "Copy code to the bootstrap box..." 187 | rsync "${rsyncopts[@]}" ${deployroot}/* ${user}@${ip}:deployroot 188 | echo "Done" 189 | 190 | # Allow additional customizations, anthing extra*.sh in this directory 191 | for extra in extra*.sh; do 192 | [ -e "$extra" ] || continue 193 | echo -n "Performing extra host prep $extra..." 194 | source "./${extra}" 195 | echo "Done" 196 | done 197 | 198 | if [ $copyonly ]; then 199 | [ -z ${verbose+x} ] && echo "Skipping provision steps, all done." 200 | exit 0 201 | fi 202 | 203 | exvars+=(-e bootstrap_box_ip=${ip} -e solution_name=${bootstrap_name}) 204 | exvars+=(-e deploy_user=${user} -e minio_group=${user}) 205 | exvars+=(-e deployroot=${deployroot}) 206 | 207 | echo "Provisioning bootstrap box $vm at $ip." 208 | cd provision 209 | ansible-galaxy install -r external_roles.yml 210 | if [ -f "additional_roles.yml" ]; then 211 | ansible-galaxy install -r additional_roles.yml 212 | fi 213 | 214 | retry=3 215 | until [ $retry -le 0 ]; do 216 | vees="" 217 | [ $verbose ] && vees="-vv" 218 | [ $verbose ] && echo ansible-playbook -i inventory ${exvars[@]} site.yml $vees 219 | ansible-playbook -i inventory ${exvars[@]} site.yml $vees && break 220 | retry=$(( retry - 1 )) 221 | done 222 | fi 223 | 224 | echo "Completed configuration of $vm with IP $ip" 225 | -------------------------------------------------------------------------------- /bootstrap/docker-env: -------------------------------------------------------------------------------- 1 | GOVC_NETWORK=VM Network 2 | GOVC_PASSWORD=VMware1! 
3 | GOVC_INSECURE=1 4 | GOVC_URL=vcsa-01a.corp.local 5 | GOVC_DATACENTER=RegionA01 6 | GOVC_DATASTORE=RegionA01-ISCSI01-COMP01 7 | GOVC_USERNAME=administrator@vsphere.local 8 | GOVC_RESOURCE_POOL=resource_pool 9 | #MY_VMWARE_USER=some_user@vmware.com 10 | #MY_VMWARE_PASSWORD=super_secret 11 | #PIVNET_API_TOKEN=super_secret 12 | VM_NAME=pks-bootstrapper -------------------------------------------------------------------------------- /bootstrap/entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh -e 2 | 3 | NAME=${NAME:='pks'} 4 | if [ -f solution-name ]; then 5 | NAME=$(cat solution-name) 6 | fi 7 | export NAME 8 | export VM_NAME=${VM_NAME:="${NAME}-bootstrapper"} 9 | export GOVC_NETWORK=${GOVC_NETWORK:='VM Network'} 10 | export GOVC_PASSWORD=${GOVC_PASSWORD:='VMware1!'} 11 | export GOVC_INSECURE=${GOVC_INSECURE:='1'} 12 | export GOVC_URL=${GOVC_URL:='vcenter.home.local'} 13 | export GOVC_DATACENTER=${GOVC_DATACENTER:='Goddard'} 14 | export GOVC_DATASTORE=${GOVC_DATASTORE:='vms'} 15 | export GOVC_USERNAME=${GOVC_USERNAME:='administrator@vsphere.local'} 16 | export GOVC_RESOURCE_POOL=${GOVC_RESOURCE_POOL:=''} 17 | 18 | exec "./bootstrap-vm.sh" "-q" $@ 19 | -------------------------------------------------------------------------------- /bootstrap/patch: -------------------------------------------------------------------------------- 1 | diff --git a/bootstrap/README.md b/bootstrap/README.md 2 | index 4053670..d8eb884 100644 3 | --- a/bootstrap/README.md 4 | +++ b/bootstrap/README.md 5 | @@ -55,7 +55,7 @@ We will later use this VM to run the full PKS and NSX-T deploy. The command bel 6 | 7 | There are several parameters you can pass in to this provisioning step: 8 | 9 | -* `VM_NAME`: this is what the bootstrap VM will be named (default is 'concourse-bootstrapper') 10 | +* `VM_NAME`: this is what the bootstrap VM will be named (default is 'pks-bootstrapper') 11 | * `GOVC_NETWORK`: What network the VM should be placed on (default is 'VM Network') 12 | * `GOVC_PASSWORD`: administrator password 13 | * `GOVC_INSECURE`: allow skipping SSL verification (default is '1' for true) 14 | @@ -83,26 +83,18 @@ docker run -it --env-file docker-env -v $PWD/../..:/deployroot bootstrap 15 | 16 | After this completes, you should have a VM in the vCenter named after your `$VM_NAME`. 17 | 18 | -``` bash 19 | -# Debugging note: you can pass additional arguments to the entrypoint.sh to 20 | -# target an existing VM for more interactive debugging. 21 | -# -a
for existing VM, -u , -d 22 | -# E.g. 23 | -./entrypoint.sh -a concourse-bootstrapper -u gardnerj -d ../../my_deployroot 24 | -``` 25 | - 26 | ## Capture an ovf 27 | 28 | You can capture an ovf from the bootstrapped VM for future deploys without waiting for the bootstrap process to download and configure everything. 29 | 30 | ``` bash 31 | -govc vm.power -off concourse-bootstrapper 32 | -govc export.ovf -vm concourse-bootstrapper . 33 | -ovftool concourse-bootstrapper/concourse-bootstrapper.ovf baked-concourse-deploy.ova 34 | +govc vm.power -off pks-bootstrapper 35 | +govc export.ovf -vm pks-bootstrapper . 36 | +ovftool pks-bootstrapper/pks-bootstrapper.ovf baked-pks-deploy.ova 37 | ``` 38 | 39 | ## Destroy the Bootstrap host 40 | 41 | This will power off and remove the VM from vCenter. 42 | 43 | -`govc vm.destroy concourse-bootstrapper` 44 | +`govc vm.destroy pks-bootstrapper` 45 | -------------------------------------------------------------------------------- /bootstrap/prep.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Pull in vars to setup various configurations 4 | . vars 5 | 6 | UPTODATE= 7 | if [[ $EUID -ne 0 ]]; then 8 | echo "This script must be run as root" 9 | exit 1 10 | fi 11 | 12 | if [ -f /etc/os-release ]; then 13 | # freedesktop.org and systemd 14 | . /etc/os-release 15 | OS=$NAME 16 | VER=$VERSION_ID 17 | else 18 | # Fall back to uname, e.g. "Linux ", also works for BSD, etc. 19 | OS=$(uname -s) 20 | VER=$(uname -r) 21 | fi 22 | 23 | if [ "$OS" = "Ubuntu" ] && [ "$VER" = "16.04" ]; then 24 | # don't reinstall docker 25 | docker version >/dev/null 2>&1 26 | if [ $? -ne 0 ]; then 27 | apt-get update -qq 28 | apt-get install docker.io git golang-go 29 | fi 30 | UPTODATE=1 31 | fi 32 | 33 | # setup variables for packer-ova-concourse only if it's available 34 | if [ -d ../../packer-ova-concourse ]; then 35 | 36 | cat >../../packer-ova-concourse/govc_import_concourse_ova.json <../../packer-ova-concourse/variables.json <../../packer-ova-concourse/govc_import_concourse_ova.json <- 62 | cd $HOME/refly && 63 | ansible-playbook -i inventory site.yml 64 | -e "@extra_vars.yml" 65 | >> refly.log 2>&1 && savelog -q -p -c 30 refly.log 66 | user: "{{ deploy_user }}" 67 | -------------------------------------------------------------------------------- /bootstrap/provision/download.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Download various binaries 3 | hosts: bootstrap 4 | become: True 5 | tasks: 6 | - name: Create the downloads bucket 7 | file: 8 | state: directory 9 | path: "{{ download_dir }}/{{ download_bucket }}" 10 | owner: "root" 11 | group: "{{ minio_group }}" 12 | mode: 0775 13 | - name: Run download script 14 | script: >- 15 | {{ deployroot }}/{{ solution_name }}-deploy/downloads/download.sh 16 | {{ download_dir }}/{{ download_bucket }} 17 | environment: 18 | PIVNET_API_TOKEN: "{{ pivnet_api_token }}" 19 | MY_VMWARE_USER: "{{ my_vmware_user }}" 20 | MY_VMWARE_PASSWORD: "{{ my_vmware_password }}" 21 | when: do_download is defined and do_download|bool 22 | -------------------------------------------------------------------------------- /bootstrap/provision/external_roles.yml: -------------------------------------------------------------------------------- 1 | # get these roles using ansible-galaxy 2 | 3 | - name: govc 4 | src: vmware.govc 5 | - name: ovftool 6 | src: vmware.ovftool 7 | src: https://github.com/vmware/ansible-role-ovftool 8 | version: eula 
9 | - name: concourse 10 | src: ahelal.concourse 11 | version: v3.2.2 12 | - name: postgresql 13 | src: ANXS.postgresql 14 | version: v1.10.1 15 | - name: kubectl 16 | src: andrewrothstein.kubectl 17 | - name: minio 18 | src: https://github.com/atosatto/ansible-minio 19 | version: 36eaf8b1c208752f60f18c1e4003e3818bc20622 20 | 21 | -------------------------------------------------------------------------------- /bootstrap/provision/fileshare.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Deploy a webserver for hosting binaries 3 | hosts: bootstrap 4 | become: True 5 | pre_tasks: 6 | - name: create the downloads directory 7 | file: 8 | state: directory 9 | path: "{{ download_dir }}" 10 | owner: "root" 11 | group: "{{ minio_group }}" 12 | mode: 0775 13 | roles: 14 | - minio 15 | -------------------------------------------------------------------------------- /bootstrap/provision/group_vars/bootstrap/vars.yml: -------------------------------------------------------------------------------- 1 | ovf_zip_url: "https://www.dropbox.com/s/n5pepfatetp55q2/VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle?dl=1" 2 | ovf_zip: "VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle" 3 | ovf_zip_checksum: "sha256:d327c8c7ebaac7432a589b1207410889d00c1ffd3fe18fa751b14459644de980" 4 | 5 | # This enables or disables automated downloading 6 | # of binaries from pivnet and my.vmware.com 7 | do_download: false 8 | download_dir: "/downloads" 9 | download_bucket: "pks" 10 | deploy_user: "vmware" 11 | deployroot: "/deployroot" 12 | solution_name: "pks" 13 | 14 | minio_server_datadirs: [ "{{ download_dir }}" ] 15 | minio_access_key: "vmware" 16 | minio_secret_key: "VMware1!" 17 | minio_group: vmware 18 | 19 | postgresql_databases: 20 | - name: concourse 21 | owner: concourseci 22 | postgresql_users: 23 | - name: concourseci 24 | pass: conpass 25 | encrypted: no 26 | concourseci_manage_pipelines: True 27 | concourse_web_options: 28 | CONCOURSE_BASIC_AUTH_USERNAME : "vmware" 29 | # Set your own password and save it securely in vault 30 | CONCOURSE_BASIC_AUTH_PASSWORD : "VMware1!" 31 | # Set your own password and save it securely in vault 32 | CONCOURSE_POSTGRES_DATABASE : "concourse" 33 | CONCOURSE_POSTGRES_HOST : "127.0.0.1" 34 | CONCOURSE_POSTGRES_PASSWORD : "conpass" 35 | CONCOURSE_POSTGRES_SSLMODE : "disable" 36 | CONCOURSE_POSTGRES_USER : "concourseci" 37 | # ********************* Example Keys (YOU MUST OVERRIDE THEM) ********************* 38 | # This keys are demo keys. generate your own and store them safely i.e. ansible-vault 39 | # Check the key section on how to auto generate keys. 
40 | # ********************************************************************************** 41 | concourseci_key_session_public : ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6tKHmRtRp0a5SAeqbVy3pJSuoWQfmTIk106Md1bGjELPDkj0A8Z4a5rJZrAR7WqrRmHr2dTL9eKroymtIqxgJdu1RO+SM3uZVV5UFfYrBV0rmp5fP2g/+Wom2RB+zCzPT1TjDnKph8xPqj19P/0FY9rKbU8h6EzEp6Z5DjwKZKvxAF8p9r6wJde4nY+oneIuG1qpxYmLpNvdM3G44vgNeMg20jVywjJVwYDNe8ourqPu8rBauLbSiQI8Uxx6dlJSTsVawrKwHQUPEI9B5LPwUzZ9t/d7k2uJnCig6aJwM8dcyr8tqxlfdfmQiHRlZozail8UzIv65MbVngji5sqoB 42 | concourseci_key_session_private : | 43 | -----BEGIN RSA PRIVATE KEY----- 44 | MIIEowIBAAKCAQEAurSh5kbUadGuUgHqm1ct6SUrqFkH5kyJNdOjHdWxoxCzw5I9 45 | APGeGuayWawEe1qq0Zh69nUy/Xiq6MprSKsYCXbtUTvkjN7mVVeVBX2KwVdK5qeX 46 | z9oP/lqJtkQfswsz09U4w5yqYfMT6o9fT/9BWPaym1PIehMxKemeQ48CmSr8QBfK 47 | fa+sCXXuJ2PqJ3iLhtaqcWJi6Tb3TNxuOL4DXjINtI1csIyVcGAzXvKLq6j7vKwW 48 | ri20okCPFMcenZSUk7FWsKysB0FDxCPQeSz8FM2fbf3e5NriZwooOmicDPHXMq/L 49 | asZX3X5kIh0ZWaM2opfFMyL+uTG1Z4I4ubKqAQIDAQABAoIBAFWUZoF/Be5bRmQg 50 | rMD3fPvZJeHMrWpKuroJgEM0qG/uP/ftGDlOhwIdrLKdvpAsRxA7rGE751t37B84 51 | aWStyB7OfIk3wtMveLS1qIETwn5M3PBM8bE8awhTx7vcDgurnt4CZjqDnTW4jfB+ 52 | N1obzoBQ1B2Okd4i3e4wP3MIIlDCMoTPPd79DfQ6Hz2vd0eFlQcwb2S66oAGTgxi 53 | oG0X0A+o+/GXGGhcuoRfXCR/oaeMtCTAML8UVNT8qktYr+Lfo4JoQR6VroQMStOm 54 | 7DvS3yJe7ZZDrQBdNDHVAsIG9/QXEWmiKNv3p1gHm216FQeJV6rzSXGjeE22tE9S 55 | JzmBKAECgYEA6CiFBIMECPzEnBooyrh8tehb5m0F6TeSeYwdgu+WuuBDRMh5Kruu 56 | 9ydHE3tYHE1uR2Lng6suoU4Mnzmjv4E6THPTmTlolDQEqv7V24a0e8nWxV/+K7lN 57 | XHrq4BFE5Xa8lkLAHw4tF8Ix6162ooHkaLhhmUWzkGVxAUhL/tbVc/ECgYEAzeEn 58 | cR2NMDsNMR/anJzkjDilhiM5pORtN5O+eBIzpbFUEDZL4LIT7gqzic0kKnMJczr7 59 | 0WYUp2U762yrA4U2BqiGyTO2yhcMM5kuDTG+1VTdw3G6rZ0L80jUugW9131VC3tB 60 | zcinIUs8N2hWsbuaRNhTCmlEzfe5UsikRjHgZxECgYEAze1DMCFWrvInI6BAlrDW 61 | TjTxb489MwVMM+yJMN98f/71LEn20GTyaeC5NxqtqU01iLS+TxjEn+gvYf0qtm/W 62 | WoJTKxK1JOCPU24AHF18MmFy1Fi1h+syJ9oQBPjMeA2+cjp7WBCnBvAGf5Tfw34c 63 | MJd8WwxsnqScfFq4ri+53sECgYBGobw6Xn0V0uyPsfH6UQlH4hdHkcYw//1IV/O8 64 | leIKMnA4r6gQioez3xABctO5jIXtdor2KCNl2qFX/4wcRRNn7WFwncFUS9vvx9m4 65 | xRxHbDo410fIUFzNNmtk9ptO1rzal4rX4sMT9Q/Poog7qbUfcWfr5nmogBiggh15 66 | x5rJQQKBgE4khLJKEpLi8ozo/h02R/H4XT1YGNuadyuIULZDyuXVzcUP8R7Xx43n 67 | ITU3tVCvmKizZmC3LZvVPkfDskhI9Yl3X7weBMUDeXxgDeUJNJZXXuDf1CC//Uo9 68 | N1EQdIhtxo4mgHXjF/8L32SqinAJb5ErNXQQwT5k9G22mZkHZY7Y 69 | -----END RSA PRIVATE KEY----- 70 | concourseci_key_tsa_public : ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCjddjviqF3BjVnxrledNsKM0wm7bwJwRgnUomLVrwHXjfArEz5yFa2C87IT9CYpIxkZMgmd0Bdtwj3kiNPP0qYpcj/uTqQTE5xLzTiJIUFsgSQwrMt/zd5x44g71qiHF/1KtHdcZq1dW3+5IwBog692HjcytbAxpUEGGpocHs/aoJ5/xn2tx61QOhkr5+PP1Ft7eHu719/pb1czhH8tZwCwNJQs4vzf79Mlgt0ikjJ84o9kOiUGP+Fc0+EjapBg9M2GE6/l86IJzcx/t/uQYCFOdKbg5ukck9NztldaOUeAPkUttPtf2vdjZU+EwSYc3XvhyQlN/QQmZ8tvG3gV9wv 71 | concourseci_key_tsa_private : | 72 | -----BEGIN RSA PRIVATE KEY----- 73 | MIIEogIBAAKCAQEAo3XY74qhdwY1Z8a5XnTbCjNMJu28CcEYJ1KJi1a8B143wKxM 74 | +chWtgvOyE/QmKSMZGTIJndAXbcI95IjTz9KmKXI/7k6kExOcS804iSFBbIEkMKz 75 | Lf83eceOIO9aohxf9SrR3XGatXVt/uSMAaIOvdh43MrWwMaVBBhqaHB7P2qCef8Z 76 | 9rcetUDoZK+fjz9Rbe3h7u9ff6W9XM4R/LWcAsDSULOL83+/TJYLdIpIyfOKPZDo 77 | lBj/hXNPhI2qQYPTNhhOv5fOiCc3Mf7f7kGAhTnSm4ObpHJPTc7ZXWjlHgD5FLbT 78 | 7X9r3Y2VPhMEmHN174ckJTf0EJmfLbxt4FfcLwIDAQABAoIBAFzux2OJIbuV4A8c 79 | QI+fSFlISOdpChtRmPXiSyjZKxXVT0VPsIPijsn5dJsWJbZi9x6s3c5gxkuBoKuA 80 | fmqzxSl8OAaLvOwFNiPLfvmDYc2XJFlZGJ3yGAw4lGnNK243S6cLrT2FNTwtg1gD 81 | gEX9aPwucqi0+duoC1jEuNqf+LJYZykDicw3yHixgas/pKe2yDvsUhyQy2m/g9SW 82 | rpKjppxas7aKQr1GEI4Gz4JY6L78ksdLLFCiXD/pg/DLbyfOoMid8eCUnGbh1rhB 83 | 
PsKNyk3r/CSWsSlUlrujEqFdc/H8Ej07wVmVduTZddvjE4LcVtFlBzcEZbEofnyx 84 | H8wLv8ECgYEA0F/jBIVcSWLTB00R/Fix7Bo9ICtZ1sXL+hLPm/zVlL/gD+MlAAVB 85 | FimJKqMZa25B1ZUrYWV+Zddtel61ZxTrb86KKqtb0yuIVtPBc2ssVsO9hKL7NJ9i 86 | g6tpR0hOhD46WJxOI9Srjv61f9tP7izlwbKXo6TrdYxM8YdjXlUyMCcCgYEAyNIB 87 | IayYqg+pFoNdqKi3/n7/yGGWvlO0kW9aXkzrwTxT/k3NCHwarGgeSqU+/XVhnAHB 88 | pvsORLAnf++gQNfoxU10nrdhkj6YIdg8OK5rO4n7iNysa4bZi2DrwJt9/mFpNkvY 89 | lD956Lof/J1gPKmcNAwnsxijJE7w3I3rJ5UucLkCgYB5PMEGWV2XqTMlVVc4npZu 90 | y9lyxSZRSuZiSt2WYaYXFQiV1dAqUeRLs8EGGL1qf004qsEBux6uvIgLId2j600M 91 | 0XwcVXVoyTRbaHtu3xV+Kgczi+xi8rVL7MilW9GrKdWixtbEDDIBUftiN8Uqy96m 92 | M3X9FbCVxRrjkKVlNmasEwKBgBCMxg0ZZUd2nO+/CcPxi6BMpRXFfR/YVCQ8Mg1d 93 | d3xoVV+616/gUm5s8joinitTNiUeO/Bf9lAQ2GCBxgoyAPvpozfFUyQzRmRbprLh 94 | JPM2LuWbkhYWee0zopov9lU1f+86lvG4vXpBhItUCO9W5wmfCtKGsEM4wj7a70tG 95 | zxn5AoGARTzJJb6nxhqQCAWTq2GOQ1dL3uJvHihDs8+Brtn686Xqkajaw+1+OX2i 96 | ehm8iE8k8Mdv7BqPMXQxlLb954l/ieTYkmwTnTG5Ot2+bx0Q15TGJqKFCWQZRveV 97 | uPTcE+vQzvMV3lJo0CHTlNMo1JgHOO5UsFZ1cBxO7MZXCzChGE8= 98 | -----END RSA PRIVATE KEY----- 99 | concourseci_worker_position : "{{ groups[concourseci_worker_group].index(inventory_hostname)| default(0) }}" 100 | concourseci_key_worker_public : "{{ concourseci_worker_keys[concourseci_worker_position | int ].public}}" 101 | concourseci_key_worker_private : "{{ concourseci_worker_keys[concourseci_worker_position | int ].private}}" 102 | concourseci_worker_keys : 103 | - public : ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKW31QIWcCR2Gh8i1fodDPHqviQV5eAW7Zv37Hzs1SKISvYeJ32EQ1mx2UXV8omzJiVojlXIsqkTXIBK6awvXQcRt8HFXwB9LjBfbYOUm+vU6L46HG3p2rBFAynh3NOeXvV1IBMNeuJ/w7v4CNNIkfKfQ34iwpirnX9fwoRV1pIt7c7MnKwZVrq/BwFpGh/GfOKrXLRXUsJAxDA+Mm0q2rvfpcsviINM7V41Lzemany1KVfjMLVe86CKWT0j2WERYejVlhxTXLlz7lHAowyU87dXh4QVHmDgMMSRIWgbMS0/1uAwfpdLMkzBEUhWRgKXDe/NWRk2I+Q77IJa1fnunJ 104 | private : | 105 | -----BEGIN RSA PRIVATE KEY----- 106 | MIIEpQIBAAKCAQEAylt9UCFnAkdhofItX6HQzx6r4kFeXgFu2b9+x87NUiiEr2Hi 107 | d9hENZsdlF1fKJsyYlaI5VyLKpE1yASumsL10HEbfBxV8AfS4wX22DlJvr1Oi+Oh 108 | xt6dqwRQMp4dzTnl71dSATDXrif8O7+AjTSJHyn0N+IsKYq51/X8KEVdaSLe3OzJ 109 | ysGVa6vwcBaRofxnziq1y0V1LCQMQwPjJtKtq736XLL4iDTO1eNS83pmp8tSlX4z 110 | C1XvOgilk9I9lhEWHo1ZYcU1y5c+5RwKMMlPO3V4eEFR5g4DDEkSFoGzEtP9bgMH 111 | 6XSzJMwRFIVkYClw3vzVkZNiPkO+yCWtX57pyQIDAQABAoIBAQCP6rWbEcaDFmVX 112 | mjeu9hTd2YCBb+A/l2FROCJg1LGuJucHHOTGO2d3gJRu+mE9LfONgOHnzgOkCJZp 113 | ZPsRUmslDexwPm7YQZg4oftHGKdcIqMEVqauG5GjGXQ4K8AiP3VK3Z2S/zvFvuZj 114 | T/WLd7u2EE6CmDa0bNdzwpzNv1eJ92DGTm7bz71tGbjexuXuIzJVmUq1UVhj6lle 115 | dklzM9RIp0wAaCrKVifNhEdZ4cy6YG0vBaAVbUZfxO9Qnec9V5Ycor9HZ9bsPhub 116 | 7H3i5j7eGFH6f01bm2o3bSVwsvSosIiG6uXbNw83RGZhsIIFK1bJ2W4CtP86C1fG 117 | +L2GaZtpAoGBAO9Anc8hsLAZEJ9gYm+abTFbTkNv4f/TPQxSngNbPx/OaDsBtHK0 118 | pQ0piG21wx6eKER0Bsb3p44Qav1G/3NVMwYAPWkoujai6OGt0bAjNCBZe5jzoYHO 119 | cN/PTSNuhfri5Hpp6EqF8m3H6gJT/rMVgEfflorXnfj7WvNwVIh50CynAoGBANiF 120 | t5pHWmvIWJs3feLiJm0o0Jp7IlpwS7vn62qfnoqv9Yze/0vNVscczkCzCbUuayf4 121 | TVgtfOe+AHs+N8u38BHrLzcYf/uRAj6fi9rf8Lhxbjv+jFOhPNttGdP5m+GDjlsW 122 | 5D14cNjD/8jKIgecmYSgRTIQmdevfZseQQKhPtQPAoGBAMVVAFQlL3wvUDyD3Oy7 123 | 7C/3ZRfOIhNFAWc2hUmzat8q+WEhyNmLEU9H4FTMxABu5jt/j09wWGyeMgBxHKTd 124 | stXSQNSJWP1TZM0u9nJWttmvtHe1CpLr2MFgU/lTYYJKvbQRwhwlWo0dhG8jJEJF 125 | C6c8TQh7SrpfZua+0Zo3DnKlAoGAPYpL8/Kh1Y6c+IjeI9VJPK9kEvQ6gF/4dpDl 126 | TWnOwvZeIUrkXuQe7PrX+HWqpa9qz3J4cT6EiM1tD5pQe3ttJXql8c/p2FOPwsLQ 127 | GkaaAaJjxXOE6OQkCu3IcII6du9QT72C46HO2R1kHuqsn2M4EwUGhcNIJpB/b846 128 | hgfUdqsCgYEAn3EGdd1DNC+ykrCk2P6nlFXvjxdlxCKNzCPWHjwPGE3t2DRIaEWI 129 | 
0XBHuBy2hbiZPIZpAK3DtPONUjjrcv3gPmhraoz5K7saY5vxyEJFYNCu2nKCJUkp 130 | ZNJ69MjK2HDIBIpqFJ7jnp32Dp8wviHXQ5e1PJQxoaXNyubfOs1Cpa0= 131 | -----END RSA PRIVATE KEY----- 132 | - public : ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXQFU/VlngUtaW9i6HgkdEfcB6Ak+Ogk/EN96lB6lm6NHvMWL0ggtrxzPcyQ+K6Rri1Vh2zDKenF+ZutqfxfNEmmDUNuHW96djUXEzwLuTYYdoobHNFtV9s2pEix2QFaMxMnvWSIfvKqDvI2Z+zwfzNFKjDweiVsCPw3vAF9vIL6W12zDb3hGN4uJqpz4GCj0K3DR/dxMZVEcE4VQ5ITOusqRKeZTt3QMJI9ZdJF8xg+Bdg/NSDvH7GOcmN5eLEheIx3lWCmhtQvh1iwa+JlDVWQFmxbVqPTzI/8phjOIfEqimg+nBVq157UIfDf77Xj2YyVQSv2inVc4RLZSMQw3p 133 | private : | 134 | -----BEGIN RSA PRIVATE KEY----- 135 | MIIEowIBAAKCAQEA10BVP1ZZ4FLWlvYuh4JHRH3AegJPjoJPxDfepQepZujR7zFi 136 | 9IILa8cz3MkPiuka4tVYdswynpxfmbran8XzRJpg1Dbh1venY1FxM8C7k2GHaKGx 137 | zRbVfbNqRIsdkBWjMTJ71kiH7yqg7yNmfs8H8zRSow8HolbAj8N7wBfbyC+ltdsw 138 | 294RjeLiaqc+Bgo9Ctw0f3cTGVRHBOFUOSEzrrKkSnmU7d0DCSPWXSRfMYPgXYPz 139 | Ug7x+xjnJjeXixIXiMd5VgpobUL4dYsGviZQ1VkBZsW1aj08yP/KYYziHxKopoPp 140 | wVatee1CHw3++149mMlUEr9op1XOES2UjEMN6QIDAQABAoIBAH42vsWwwGqEqEdE 141 | euwCO/+xLNdd24BYcKVBjU9/OpmZEuAKOVfdmQzNdV+UlYSCQr2XE5Q1D8lpL7VY 142 | lzDwRUCItRY6SBpghMn7y0DpVhOJMHjttu/m37AhL8KZP/Bof5QtYee4B9z5Rfxy 143 | 6XqZsrOsjngGLBfIfojNuxZb5wdttX/u7Qp9otnESxifTbn9PfUM5UwhXRncWbsT 144 | MJ0p+aP36aNxwWDKht6bxiBRryvwNbRZX2iu18oxUUWg50uK0M/lo0KK4Svvc5lN 145 | YfBFvum78KckgDX7zVenEOmU9bQfXWgB79oP8IpRP5OyPF2AJjgiKOfR7X+JA7Nq 146 | pfXj48ECgYEA8MjIKz4ILS0ahsaxOIPsc1UyOK6F9v1PrU7ooi8WfLdY4ouPvIl5 147 | BI6zCFL9IdNOlc6Rh+UpfPYndaJz/1cWJyC0diChVdLAR+j6fuEqIKPSiv/Xn+hM 148 | sbsiNn23MoA6C2Jvv1FLez+Shvlj6fF4G1t8MfHwoXZ0yVhuSkJH9o0CgYEA5Np/ 149 | k4fA9w/OsbtJu0KtGN0AwhCVmFE/3doE4BVSsmWGznzHcC864S924CHsgznrM3OW 150 | HX7C+PFgbsbtXwqxiaMAaxrh1wBnx28c4wMsNkXCFUds4DkjqDW6IhNH7W9TtuDL 151 | qNoniBH/o18aj0xGF6HJFt6tU7f9iTxJ+tmY280CgYB2HMe0DpXMM1fTzRuZ8XzH 152 | hn9ANrwYUGIJTa/n/tk1DGtZlcRIY9ctWSKRbsQlF5Zw/gd9dfhICCeLGMl18642 153 | O2DKoW8CvoL7w1k9bA5SPIpHDQEku7sDZByARmLbLvNKKltOqf4w0xp5g1RzqbOV 154 | F+dwSJIVYhofunU/kAvk8QKBgGMPcUma6ZwH66BjQXcdVW/9ueZG53oXMV4GkTWu 155 | BS3TZJbczDdzOjlfIkXCaW4kE/shfUknJZ48XVGWKgmJx2+cbwHtkPRP6JwbLJXX 156 | ObwEVg5/7FDiatzU5Mz7K5dLKSFwDLf6NkJgCBffgs+kZHK2RSTxHnWunsBYqG08 157 | 4z3BAoGBAItHnGHbnl7cVHJFHd8teLzS+ki+gv+mKwPwSmeimV6zALAKZvj4RIg8 158 | 4g6kUWry+NNuiaH6fsDA0FWnT3Kyc59/M/EuKNCR7ci1Gnkunc0IUn78aWtNcxX5 159 | RsCKJUM8l63P0jyUufpTbG6nAP8fMdWCdtDBidFLV2JMPYnWb4aP 160 | -----END RSA PRIVATE KEY----- 161 | -------------------------------------------------------------------------------- /bootstrap/provision/inventory: -------------------------------------------------------------------------------- 1 | localhost ansible_connection=local -------------------------------------------------------------------------------- /bootstrap/provision/inventory.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | tasks: 4 | - name: add bootstrap box to inventory 5 | add_host: 6 | name: "{{ bootstrap_box_ip }}" 7 | groups: 8 | - bootstrap 9 | - concourse-web 10 | - concourse-worker 11 | -------------------------------------------------------------------------------- /bootstrap/provision/refly/group_vars/concourse-web/vars.yml: -------------------------------------------------------------------------------- 1 | concourse_host: localhost 2 | councourse_web_user: vmware 3 | councourse_web_password: PASSED_IN_AS_EXVARS 4 | -------------------------------------------------------------------------------- /bootstrap/provision/refly/inventory: 
-------------------------------------------------------------------------------- 1 | [concourse-web] 2 | localhost ansible_connection=local 3 | -------------------------------------------------------------------------------- /bootstrap/provision/refly/pipeline.yml: -------------------------------------------------------------------------------- 1 | # Examine and load or reload a potential pipeline 2 | --- 3 | - name: debug 4 | debug: 5 | msg: "Checking pipeline {{ pipeline_stat.path | basename }} for freshness" 6 | 7 | - name: Assume invalid pipeline name 8 | set_fact: 9 | invalid_pipeline: true 10 | when: 11 | - pipeline_stat.path is defined 12 | 13 | - name: Validate pipeline name 14 | set_fact: 15 | invalid_pipeline: false 16 | when: 17 | - pipeline_stat.path is match(".*.ya?ml(?:.j2)?$") 18 | 19 | - name: Check for existing pipeline checksum 20 | stat: 21 | path: "/tmp/{{ pipeline_stat.path | basename }}.checksum" 22 | register: pipelines_checksum_stat 23 | when: not invalid_pipeline 24 | 25 | # handle an existing pipeline checksum 26 | - block: 27 | - name: Read pipeline checksum 28 | command: cat "/tmp/{{ pipeline_stat.path | basename }}.checksum" 29 | register: previous_checksum 30 | 31 | - name: Set pipelines checksum 32 | set_fact: previous_checksum="{{ previous_checksum.stdout }}" 33 | 34 | when: 35 | - not invalid_pipeline 36 | - pipelines_checksum_stat.stat.exists 37 | 38 | - name: Write to pipeline checksum 39 | copy: 40 | content: "{{ pipeline_stat.checksum }}" 41 | dest: "/tmp/{{ pipeline_stat.path | basename }}.checksum" 42 | when: 43 | - not invalid_pipeline 44 | - not pipelines_checksum_stat.stat.exists or 45 | (pipelines_checksum_stat.stat.exists and 46 | previous_checksum != pipeline_stat.checksum) 47 | register: updated_file 48 | 49 | - name: Super simple check for whether the file is a pipeline definition 50 | shell: cat "{{ pipeline_stat.path }}" | grep -e "^jobs:$" 51 | when: 52 | - not invalid_pipeline 53 | - updated_file.changed 54 | register: have_pipeline 55 | failed_when: have_pipeline.rc > 1 or have_pipeline.rc < 0 56 | 57 | # If we don't have a pipeline, expect that this is a params file. 58 | # The convention is {pipeline}-params.yml 59 | - block: 60 | - name: Set expected pipeline name from params file 61 | set_fact: 62 | expected_pipeline: >- 63 | {{ pipeline_stat.path | 64 | regex_replace('^(.*)-params.yml$', '\1.yml') }} 65 | - name: Check for expected pipeline by computed name 66 | stat: 67 | path: "{{ expected_pipeline }}" 68 | register: now_have_pipeline 69 | - name: Now we have the pipeline we need 70 | set_fact: 71 | pipeline_def: "{{ expected_pipeline }}" 72 | when: now_have_pipeline is defined and now_have_pipeline.stat.exists 73 | - name: No pipeline yet, check for template 74 | stat: 75 | path: "{{ expected_pipeline }}.j2" 76 | register: now_have_pipeline_template 77 | - name: Now we have the (template) pipeline we need 78 | set_fact: 79 | pipeline_def: "{{ expected_pipeline }}.j2" 80 | when: 81 | - now_have_pipeline_template is defined 82 | - now_have_pipeline_template.stat.exists 83 | - name: Now we have the params we need 84 | set_fact: 85 | pipeline_params: "{{ pipeline_stat.path }}" 86 | when: 87 | - not invalid_pipeline 88 | - have_pipeline.changed and have_pipeline.rc > 0 and updated_file.changed 89 | 90 | # If we do have a pipeline, look for the matching params file. 
91 | # The convention is {pipeline}-params.yml 92 | - block: 93 | - name: Set the pipeline name 94 | set_fact: 95 | pipeline_def: "{{ pipeline_stat.path }}" 96 | - name: Set expected params file from pipeline name 97 | set_fact: 98 | expected_params: >- 99 | {{ pipeline_stat.path | 100 | regex_replace('^(.*).ya?ml.*$', '\1-params.yml') }} 101 | - name: Check for expected params by computed name 102 | stat: 103 | path: "{{ expected_params }}" 104 | register: now_have_params 105 | - name: Now we have the params we need 106 | set_fact: 107 | pipeline_params: "{{ expected_params }}" 108 | when: now_have_params is defined and now_have_params.stat.exists 109 | - name: Now we have the pipeline we need 110 | set_fact: 111 | pipeline_def: "{{ pipeline_stat.path }}" 112 | when: have_pipeline.changed and have_pipeline.rc == 0 and updated_file.changed 113 | 114 | # This is a workaround for the fact that sequences are still evaluated even 115 | # when they should be skipped by a "when" clause. There's a loop in the template 116 | # processing below that counts down from remainder, so remainder must 117 | # _always_ be initialized to a numeric value for the sequence to be valid. 118 | - name: Set a default to avoid a dumb issue with decrementing sequence 119 | set_fact: 120 | remainder: 1 121 | 122 | # This block allows pipelines to be defined as Jinja2 files (if the pipeline ends 123 | # in .yml.j2). If a pipeline_scale_items variable is set, we load the 124 | # pipeline params and look for a host var with the name that is contained in 125 | # pipeline_scale_items, and count those things to determine pipeline scale. 126 | # Pipeline scale is then used to pass in distinct ranges into the pipeline 127 | # template, e.g. 1..10, 11..20, etc. This allows the pipeline to break up 128 | # a large number of identical deployments into a visually distinct set of 129 | # parallel deployments.
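# As a purely illustrative example of that splitting: with pipeline_scale = 23
# and the default max_ranges = 10, the tasks in pipeline_parallelizer.yml end up
# building something like
#   ranges:
#     - { min: 1,  max: 3 }
#     - { min: 4,  max: 6 }
#     - { min: 7,  max: 9 }
#     - { min: 10, max: 11 }
#     ...
#     - { min: 22, max: 23 }
# i.e. chunks of floor(23/10) = 2 deployments each, with the remainder of 3
# added one per chunk starting from the front.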
130 | - name: Check for template pipeline 131 | block: 132 | - name: Load pipeline variables to determine ranges 133 | include_vars: 134 | file: "{{ pipeline_params }}" 135 | - set_fact: 136 | pipeline_location: "{{ pipeline_def | dirname }}" 137 | pipeline_template: "{{ pipeline_def | basename }}" 138 | pipeline_scale: >- 139 | {{ hostvars[inventory_hostname][pipeline_scale_items] |length |int }} 140 | - import_tasks: pipeline_parallelizer.yml 141 | when: 142 | - not invalid_pipeline 143 | - have_pipeline.changed 144 | - pipeline_def is defined 145 | - pipeline_params is defined 146 | - pipeline_scale_items is defined 147 | - pipeline_def is match(".*.ya?ml.j2$") 148 | 149 | - name: Rerun fly command 150 | block: 151 | - debug: 152 | msg: "RELOAD PIPELINE {{ pipeline_def }} with params {{ pipeline_params }}" 153 | - name: Login to concourse 154 | command: > 155 | fly --target main login -c http://{{ concourse_host }}:8080 156 | -u {{ concourse_web_user }} -p '{{ concourse_web_password }}' 157 | - name: Upload pipeline definition 158 | command: > 159 | fly -t main set-pipeline -n 160 | -p {{ (pipeline_def | basename | splitext)[0] }} 161 | -c {{ pipeline_def }} 162 | -l {{ pipeline_params }} 163 | - name: Unpause pipeline 164 | command: > 165 | fly -t main unpause-pipeline 166 | -p {{ (pipeline_def | basename | splitext)[0] }} 167 | when: 168 | - not invalid_pipeline 169 | - have_pipeline.changed 170 | - pipeline_def is defined 171 | - pipeline_params is defined 172 | tags: 173 | - reload 174 | -------------------------------------------------------------------------------- /bootstrap/provision/refly/pipeline_parallelizer.yml: -------------------------------------------------------------------------------- 1 | # Copyright © 2018 VMware, Inc. All Rights Reserved. 2 | # SPDX-License-Identifier: Apache-2.0 3 | --- 4 | # These tasks slice up a Concourse pipeline into parallel chunks. 5 | 6 | - name: Extract pipeline parts 7 | set_fact: 8 | pipeline_location: "{{ pipeline_def | dirname }}" 9 | pipeline_template: "{{ pipeline_def | basename }}" 10 | 11 | - name: Initialize variables 12 | set_fact: 13 | ranges: [] 14 | range_min: 1 15 | range_index: 0 16 | range_extra: 0 17 | remainder: 1 18 | 19 | - name: Initialize max number of ranges 20 | set_fact: 21 | max_ranges: 10 22 | when: max_ranges is not defined 23 | 24 | # Note that we're taking the min of max_ranges and pipeline_scale, even when 25 | # the "when" has already guaranteed pipeline_scale is the min, because ansible 26 | # evaluates the sequence even when this is skipped, so it creates a super long 27 | # loop in the case where a large number is passed for pipeline_scale. 
28 | - name: Explicitly use single items as ranges when small scale 29 | set_fact: 30 | ranges: "{{ ranges }} + [{ 'min': {{ item }}, 'max': {{ item }} }]" 31 | with_sequence: start=1 end={{ [max_ranges|int, pipeline_scale|int]|min }} 32 | when: "pipeline_scale|int < max_ranges" 33 | 34 | - name: Use larger block ranges when more than max_ranges 35 | block: 36 | - name: Set range size 37 | set_fact: 38 | range_size: >- 39 | {{ (pipeline_scale|int / max_ranges|int) | round(0,'floor') }} 40 | - name: Set remainder 41 | set_fact: 42 | remainder: >- 43 | {{ pipeline_scale|int - (max_ranges|int * range_size|int) }} 44 | - debug: 45 | msg: "Range size is {{ range_size }}" 46 | - set_fact: 47 | range_extras: >- 48 | {{ range_extras | default([]) + [1 | int] }} 49 | with_sequence: start="{{ remainder|default(0) }}" end=0 stride=-1 50 | 51 | - set_fact: 52 | ranges: >- 53 | {{ ranges }} + [ { 54 | 'min': {{ range_min|int }}, 55 | 'max': {{ range_min|int + range_size|int - 1 + range_extras[item|int]|default(0)|int }} 56 | } ] 57 | range_min: >- 58 | {{ range_min|int + range_size|int + 59 | range_extras[item|int]|default(0)|int }} 60 | remainder: "{{ remainder|int - 1 }}" 61 | with_sequence: start=1 end={{ max_ranges }} 62 | when: pipeline_scale|int >= max_ranges 63 | 64 | - debug: 65 | var: ranges 66 | 67 | - debug: 68 | var: pipeline_params 69 | 70 | - name: Set config version 71 | block: 72 | - command: date "+%Y%m%d%H%M%S" 73 | register: datestamp 74 | - set_fact: 75 | config_version: "-{{ datestamp.stdout }}" 76 | when: config_version is undefined 77 | 78 | - debug: 79 | var: config_version 80 | when: config_version is defined 81 | 82 | - name: Transform pipeline for ranges of the total 83 | template: 84 | src: "{{ pipeline_def }}" 85 | dest: "/tmp/{{ (pipeline_template | splitext)[0] }}" 86 | when: not ansible_check_mode 87 | 88 | - name: Set pipeline name to processed file. 
89 | set_fact: 90 | pipeline_def: >- 91 | /tmp/{{ (pipeline_template | splitext)[0] }} 92 | -------------------------------------------------------------------------------- /bootstrap/provision/refly/site.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Continuously refresh the concourse pipelines 3 | hosts: concourse-web 4 | become: False 5 | tasks: 6 | - name: Set default pipelines directory 7 | set_fact: pipelines_home="{{ ansible_env.HOME }}/deployroot/pipelines" 8 | 9 | - name: Check for pipelines 10 | stat: path="{{ pipelines_home }}" 11 | register: pipelines_stat 12 | 13 | - name: Get all pipelines 14 | stat: 15 | path: "{{ item }}" 16 | follow: True 17 | register: all_pipelines_stats 18 | with_fileglob: "{{ pipelines_home }}/*.yml*" 19 | when: pipelines_stat.stat.exists 20 | 21 | - name: Iterate over pipelines 22 | include_tasks: pipeline.yml 23 | loop: "{{ all_pipelines_stats.results|map(attribute='stat')|list }}" 24 | loop_control: 25 | loop_var: pipeline_stat 26 | when: all_pipelines_stats.results is defined 27 | -------------------------------------------------------------------------------- /bootstrap/provision/refly/templates/extra_vars.yml.j2: -------------------------------------------------------------------------------- 1 | concourse_host: localhost 2 | concourse_web_user: {{ concourse_web_options.CONCOURSE_BASIC_AUTH_USERNAME }} 3 | concourse_web_password: {{ concourse_web_options.CONCOURSE_BASIC_AUTH_PASSWORD }} 4 | -------------------------------------------------------------------------------- /bootstrap/provision/site.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - import_playbook: inventory.yml 3 | 4 | - name: Deploy tools for the bootstrap host 5 | hosts: bootstrap 6 | become: True 7 | roles: 8 | - govc 9 | - ovftool 10 | 11 | - import_playbook: concourse.yml 12 | - import_playbook: fileshare.yml 13 | - import_playbook: utilities.yml 14 | - import_playbook: download.yml 15 | -------------------------------------------------------------------------------- /bootstrap/provision/templates/mc/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "9", 3 | "hosts": { 4 | "local": { 5 | "url": "http://localhost:9091", 6 | "accessKey": "{{ minio_access_key }}", 7 | "secretKey": "{{ minio_secret_key }}", 8 | "api": "S3v4", 9 | "lookup": "auto" 10 | } 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /bootstrap/provision/utilities.yml: -------------------------------------------------------------------------------- 1 | - name: Add some useful command line utilities 2 | hosts: bootstrap 3 | become: True 4 | vars: 5 | roles: 6 | - { name: "kubectl", tags: "kubectl" } 7 | tasks: 8 | - name: download pivnet CLI 9 | get_url: 10 | url: "https://github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.51/pivnet-linux-amd64-0.0.51" 11 | dest: /usr/bin/pivnet-cli 12 | register: download_pivnet_cli 13 | retries: 5 14 | delay: 5 15 | until: not download_pivnet_cli.failed 16 | - name: perms for pivnet cli 17 | file: 18 | path: /usr/bin/pivnet-cli 19 | mode: "+x" 20 | when: 21 | - download_pivnet_cli.changed 22 | - not download_pivnet_cli.failed 23 | - name: login to pivnet 24 | become: True 25 | command: /usr/bin/pivnet-cli login --api-token {{ pivnet_api_token }} 26 | when: pivnet_api_token is defined and pivnet_api_token != "" 27 | - name: login to pivnet as 
non-privileged user 28 | become: False 29 | command: /usr/bin/pivnet-cli login --api-token {{ pivnet_api_token }} 30 | when: pivnet_api_token is defined and pivnet_api_token != "" 31 | -------------------------------------------------------------------------------- /bootstrap/set-env.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | #Copyright (c) 2018 VMware, Inc. All Rights Reserved. 4 | # 5 | #SPDX-License-Identifier: MIT 6 | # 7 | 8 | export GOVC_NETWORK='VM Network' 9 | export GOVC_PASSWORD='VMware1!' 10 | export GOVC_INSECURE='1' 11 | export GOVC_URL='vcsa-01a.corp.local' 12 | export GOVC_DATACENTER='RegionA01' 13 | export GOVC_DATASTORE='RegionA01-ISCSI01-COMP01' 14 | export GOVC_USERNAME='administrator@vsphere.local' 15 | export GOVC_RESOURCE_POOL='resource_pool' 16 | -------------------------------------------------------------------------------- /bootstrap/user-data.yml: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | # 3 | #Copyright (c) 2018 VMware, Inc. All Rights Reserved. 4 | # 5 | #SPDX-License-Identifier: MIT 6 | # 7 | debug: 8 | verbose: true 9 | bootcmd: 10 | - echo 'boot cmd execute' >> /etc/cmds 11 | runcmd: 12 | - echo "moooo!" 13 | - usermod -a -G docker vmware 14 | - easy_install pip 15 | - pip install --no-clean ansible 16 | - echo 'runcmd done' >> /etc/cmds 17 | chpasswd: { expire: False } 18 | chpasswd: 19 | list: | 20 | root:VMware1! 21 | expire: False 22 | ssh_pwauth: yes 23 | users: 24 | - name: vmware 25 | passwd: $6$rounds=8888$y1zBWcO8MT$J2a599jjEaJDV5Vq3t4yEKz7CVOmJqidRmv4h1fXlkzHArHEcBNUVwbnSGrIaJ2aHUwxRMS193SyuH9fKQJD20 26 | sudo: ALL=(ALL) NOPASSWD:ALL 27 | groups: users,wheel 28 | lock_passwd: false 29 | ssh-authorized-keys: 30 | - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsDYOXiGgQR1bdrvUP9/U5kchBMh0N8tCdOFfF0C7aGO3bwoztq7F46ojIOMz9r9yryrHaVgjrbxNEEjhGz7tfzUo3ix7oHGDgpkkNt+4Nuu7c4tJjsAUMgDj7cPslUt13IjKPHbcJVTXuyVn1fVdiQgU1B7SPVUSwwE8W1VLt2ogCCrWtll3rEU+XyzKFi/csTbzSUx7K05w7yfBLnEO5Ba2gFNdeu0AYuLABDn3DYAbbIJrE5vRNyA/1QuqWkxGuB1VRzpeF6X5+Z0NE9rQwFyCY80QrH+9lVinY6SdsvUOKK4bOxQ1H6JhcyeQ8phLFbr8REigrev11Nrrh31Tx tscanlan@tscanlan-M01.vmware.com 31 | - %%GENERATED_KEY%% 32 | ssh_import_id: 33 | - gh:tompscanlan 34 | shell: /bin/bash 35 | hostname: ubuntu 36 | final_message: 'The system is finally up after $UPTIME seconds using $DATASOURCE and $VERSION' 37 | ntp: 38 | enabled: true 39 | packages: 40 | - curl 41 | - jq 42 | - docker.io 43 | - open-vm-tools 44 | - python-setuptools 45 | - python2.7 46 | - python2.7-dev 47 | - unzip 48 | # - xubuntu-desktop 49 | apt_update: true 50 | apt_upgrade: true 51 | apt_reboot_if_required: true 52 | manage_resolv_conf: true 53 | resolv_conf: 54 | nameservers: ['8.8.4.4', '8.8.8.8'] 55 | growpart: 56 | mode: auto 57 | devices: ['/'] 58 | -------------------------------------------------------------------------------- /bootstrap/vars: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | #Copyright (c) 2018 VMware, Inc. All Rights Reserved. 
4 | # 5 | #SPDX-License-Identifier: MIT 6 | # 7 | 8 | bosh_stemcell_username="EDIT_ME" 9 | bosh_stemcell_password="EDIT_ME" 10 | vm_username="EDIT_ME" 11 | vm_password="EDIT_ME" 12 | vsphere_vm_name='EDIT_ME' 13 | vsphere_ip_address='EDIT_ME' 14 | vsphere_network='EDIT_ME' 15 | vsphere_netmask='EDIT_ME' 16 | vsphere_gateway='EDIT_ME' 17 | vsphere_ntp_servers='EDIT_ME' 18 | vsphere_nameserver='EDIT_ME' 19 | vsphere_vcenter_server='EDIT_ME' 20 | vsphere_username='EDIT_ME' 21 | vsphere_password='EDIT_ME' 22 | vsphere_datacenter='EDIT_ME' 23 | vsphere_cluster='EDIT_ME' 24 | vsphere_datastore='EDIT_ME' 25 | vsphere_resource_pool='EDIT_ME' 26 | vsphere_folder='EDIT_ME' 27 | vsphere_insecure='EDIT_ME' 28 | -------------------------------------------------------------------------------- /downloads/README.md: -------------------------------------------------------------------------------- 1 | # Downloading bits 2 | 3 | The script in this directory is intended to download 4 | 5 | ## Setup config 6 | 7 | Edit the file named `config.json` in the local directory to match your credentials on my.vmware.com 8 | 9 | ``` json 10 | { 11 | "username": "some@vmware.com", 12 | "password": "goodpassword" 13 | } 14 | ``` 15 | 16 | Then run `download.sh` -------------------------------------------------------------------------------- /downloads/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "username": "asd@vmware.com", 3 | "password": "supersecure" 4 | } 5 | -------------------------------------------------------------------------------- /downloads/download.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # first arg is the directory to put files 4 | cd $1 5 | 6 | # Where is the default minio config-folder 7 | MINIO_CONFIG="~minio/.minio" 8 | 9 | if [[ -z "$PIVNET_API_TOKEN" ]]; then 10 | echo "Must provide a Pivotal Network API_TOKEN in environment" 1>&2 11 | exit 1 12 | fi 13 | 14 | # This file is required to enable automatic download for vmware binaries 15 | if [ -n "$MY_VMWARE_USER" ] && [ -n "$MY_VMWARE_PASSWORD" ]; then 16 | echo "Setup downloader config" 17 | echo '{ "username": "'$MY_VMWARE_USER'", "password": "'$MY_VMWARE_PASSWORD'"}' > config.json 18 | fi 19 | 20 | curl -L -o VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle "https://www.dropbox.com/s/n5pepfatetp55q2/VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle?dl=1" 21 | 22 | docker run -v ${PWD}:/vmwfiles apnex/myvmw 23 | docker run -v ${PWD}:/vmwfiles apnex/myvmw "VMware Pivotal Container Service" 24 | docker run -v ${PWD}:/vmwfiles apnex/myvmw get nsx-unified-appliance-2.1.0.0.0.7395503.ova 25 | docker run -v ${PWD}:/vmwfiles apnex/myvmw get nsx-controller-2.1.0.0.0.7395493.ova 26 | docker run -v ${PWD}:/vmwfiles apnex/myvmw get nsx-edge-2.1.0.0.0.7395502.ova 27 | 28 | pivnet-cli login --api-token $PIVNET_API_TOKEN 29 | pivnet-cli download-product-files -p ops-manager -r 2.1.5 -g '*vsphere*' 30 | pivnet-cli download-product-files -p pivotal-container-service -r 1.0.4 -g '*pks-linux*' 31 | 32 | # Set download permisions on the download directory 33 | # This gyration is needed to to overcome testing problems with minio 34 | # not getting set with the right credentials. 
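# In short (assuming minio is already serving this directory as a bucket on
# localhost:9091): the block below copies minio's own access/secret keys out of
# ~minio/.minio/config.json, registers the server with the mc client under a
# throwaway config folder, and marks the bucket as anonymously downloadable,
# which boils down to roughly:
#   mc config host add local 'http://localhost:9091' "${minio_accessKey}" "${minio_secretKey}" S3v4
#   mc policy download local/${PWD##*/}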
35 | tmpdir=/tmp/$(mktemp -u XXXXXXXX) 36 | mkdir -p ${tmpdir} 37 | chmod 0755 ${tmpdir} 38 | sudo su -c "cat ~minio/.minio/config.json >${tmpdir}/config.json.tmp" 39 | sudo su -c "chmod 644 ${tmpdir}/config.json.tmp" 40 | minio_accessKey=$(sudo su -c "jq -r .credential.accessKey ${tmpdir}/config.json.tmp") 41 | minio_secretKey=$(sudo su -c "jq -r .credential.secretKey ${tmpdir}/config.json.tmp") 42 | set | grep minio 43 | mc --config-folder ${tmpdir} config host add local 'http://localhost:9091' "${minio_accessKey}" "${minio_secretKey}" S3v4 44 | sudo su -c "chmod 0644 ${tmpdir}/config.json" 45 | sudo su -c "chmod 0755 ${tmpdir}/certs" 46 | mc --config-folder ${tmpdir} policy download local/${PWD##*/} 47 | rm -rf ${tmpdir} 48 | -------------------------------------------------------------------------------- /one-cloud-nsxt-param.yaml: -------------------------------------------------------------------------------- 1 | enable_ansible_debug: false # set value to true for verbose output from ansible 2 | nsx_t_installer: nsx-t-gen # Set to name of installer or env or any value so resources can be identified 3 | 4 | # vCenter details to deploy the Mgmt OVAs (Mgr, Edge, Controller) 5 | vcenter_host: vcsa-01a.corp.local # EDIT - this is for deployment of the ovas for the mgmt plane 6 | vcenter_usr: administrator@corp.local # EDIT - this is for deployment of the ovas for the mgmt plane 7 | vcenter_pwd: EDIT_ME # EDIT - this is for deployment of the ovas for the mgmt plane 8 | vcenter_datacenter: RegionA01 # EDIT 9 | vcenter_datastore: RegionA01-ISCSI01-COMP01 # EDIT 10 | vcenter_cluster: RegionA01-MGMT # EDIT 11 | vcenter_manager: vcsa-01a.corp.local # EDIT 12 | vcenter_rp: PKS_pool # EDIT resource pool where mgmt VMs would be deployed 13 | 14 | 15 | # OVA general network settings 16 | ntpservers: time.vmware.com # EDIT 17 | mgmt_portgroup: 'ESXi-RegionA01-vDS-COMP' # EDIT 18 | dnsserver: 192.168.110.10 # EDIT 19 | dnsdomain: corp.local # EDIT 20 | defaultgateway: 192.168.110.1 # EDIT 21 | netmask: 255.255.255.0 # EDIT 22 | 23 | # Host a Webserver to serve the ova images and ovftool 24 | # Please edit the endpoint and the file names 25 | nsx_image_webserver: http://minio.corp.local:9091/pks 26 | 27 | # Download NSX-T 2.1 bits from 28 | # https://my.vmware.com/group/vmware/details?downloadGroup=NSX-T-210&productId=673 29 | nsx_mgr_ova: nsx-unified-appliance-2.1.0.0.0.7395503.ova 30 | nsx_controller_ova: nsx-controller-2.1.0.0.0.7395493.ova 31 | nsx_edge_ova: nsx-edge-2.1.0.0.0.7395502.ova 32 | # Ensure ovftool is minimally v4.2 33 | # https://my.vmware.com/group/vmware/details?productId=614&downloadGroup=OVFTOOL420# 34 | ovftool_image: VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle 35 | 36 | # The Esxi Hosts can be added to transport nodes in two ways: 37 | # a) specify the esxi hosts individually - first checked for 38 | # b) or use compute_vcenter_manager to add hosts under a specific vcenter as transport nodes 39 | 40 | # Specify passwd of the esxi hosts that should be used for nsx-t 41 | esxi_hosts_root_pwd: EDIT_ME # EDIT - Root password for the esxi hosts 42 | 43 | # Specify FQDN, ip (and passwd if they are different) for each of the esxi hosts that should be used for nsx-t 44 | esxi_hosts_config: 45 | # esxi_hosts: 46 | # - name: esxi-host1.corp.local.io 47 | # ip: 10.13.12.10 48 | # root_pwd: rootPasswd 49 | # - name: esxi-host2.corp.local.io 50 | # ip: 10.13.12.11 51 | # root_pwd: rootPasswd 52 | 53 | # Using a separate compute manager to specify esxi hosts 54 | # If filled, then change 
the esxi_hosts_config section to be empty 55 | # Sample empty config: 56 | # esxi_hosts_config: # Provide space after colon 57 | # compute_vcenter_manager: # EDIT - Name for vCenter; If filled, then comment off the esxi_hosts_config 58 | # compute_vcenter_host: # EDIT - Use Compute vCenter Esxi hosts as transport node 59 | # compute_vcenter_usr: # EDIT - Use Compute vCenter Esxi hosts as transport node 60 | # compute_vcenter_pwd: # EDIT - Use Compute vCenter Esxi hosts as transport node 61 | # compute_vcenter_cluster: # EDIT - Use Compute vCenter Esxi hosts as transport node 62 | 63 | # Handle multiple vCenters and compute clusters 64 | compute_manager_configs: | 65 | compute_managers: 66 | - vcenter_name: vcsa-01a 67 | vcenter_host: vcsa-01a.corp.local 68 | vcenter_usr: administrator@vsphere.local 69 | vcenter_pwd: VMWare1! 70 | clusters: 71 | # Multiple clusters under same vcenter can be specified 72 | - vcenter_cluster: RegionA01-K8s 73 | overlay_profile_mtu: 1600 # Min 1600 74 | overlay_profile_vlan: 0 # VLAN ID for the TEP/Overlay network 75 | # Specify an unused vmnic on esxi host to be used for nsx-t 76 | # can be multiple vmnics separated by comma 77 | uplink_vmnics: vmnic1 # vmnic1,vmnic2... 78 | # - vcenter_cluster: Cluster2 79 | # overlay_profile_mtu: 1600 # Min 1600 80 | # overlay_profile_vlan: EDIT_ME # VLAN ID for the TEP/Overlay network 81 | # # Specify an unused vmnic on esxi host to be used for nsx-t 82 | # # can be multiple vmnics separated by comma 83 | # uplink_vmnics: vmnic1 # vmnic1,vmnic2... 84 | # Multiple vcenters can be specified 85 | # - vcenter_name: vcenter-02 86 | # vcenter_host: vcenter-02.corp.local 87 | # vcenter_usr: administrator@vsphere.local 88 | # vcenter_pwd: VMWare2! 89 | # clusters: 90 | # - vcenter_cluster: Cluster4 91 | # overlay_profile_mtu: 1600 # Min 1600 92 | # overlay_profile_vlan: EDIT_ME # VLAN ID for the TEP/Overlay network 93 | # # Specify an unused vmnic on esxi host to be used for nsx-t 94 | # # can be multiple vmnics separated by comma 95 | # uplink_vmnics: vmnic1 # vmnic1,vmnic2...
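# Note that compute_manager_configs is a literal block (YAML embedded in a
# string), so indentation mistakes inside it are easy to miss. One quick,
# optional sanity check (assuming Python with PyYAML is available on the
# bootstrap box) is to round-trip just this key, for example:
#   python -c 'import yaml; d = yaml.safe_load(open("one-cloud-nsxt-param.yaml")); yaml.safe_load(d["compute_manager_configs"]); print("ok")'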
96 | 97 | # Using a separate vCenter to add Edges 98 | # Edge Specific vCenter settings 99 | # If Edges are going to use the same vCenter as Mgr, then dont set any of the following properties 100 | edge_vcenter_host: # EDIT - If filled, then Edges would use this separate vcenter 101 | edge_vcenter_usr: # EDIT - Use Edge specific vCenter 102 | edge_vcenter_pwd: # EDIT - Use Edge specific vCenter 103 | edge_vcenter_datacenter: # EDIT - Use Edge specific vCenter 104 | edge_vcenter_datastore: # EDIT - Use Edge specific vCenter 105 | edge_vcenter_cluster: # EDIT - Use Edge specific vCenter 106 | edge_vcenter_rp: # EDIT - Use Edge specific vCenter 107 | edge_ntpservers: # EDIT - Use Edge specific vCenter 108 | edge_mgmt_portgroup: # EDIT - Use Edge specific vCenter 109 | edge_dnsserver: # EDIT - Use Edge specific vCenter 110 | edge_dnsdomain: # EDIT - Use Edge specific vCenter 111 | edge_defaultgateway: # EDIT - Use Edge specific vCenter 112 | edge_netmask: # EDIT - Use Edge specific vCenter 113 | 114 | # Edit following parameters 115 | nsx_t_manager_host_name: nsx-mgr.corp.local # Set as FQDN, will be used also as certificate common name 116 | nsx_t_manager_vm_name: 'NSX-T Mgr' # Can have spaces 117 | nsx_t_manager_ip: 192.168.110.34 118 | nsx_t_manager_admin_user: admin 119 | nsx_t_manager_admin_pwd: EDIT_ME # Min 8 chars, upper, lower, number, special digit 120 | nsx_t_manager_root_pwd: EDIT_ME # Min 8 chars, upper, lower, number, special digit 121 | 122 | # Following properties can be used for deploying controller to same cluster/rp 123 | nsx_t_controller_host_prefix: nsx-t-ctl # Without spaces, Generated controller would be nsx-t-ctrl-1.corp.local.io,... 124 | nsx_t_controller_vm_name_prefix: 'NSX-T Controller' # Generated edge host name would be "NSX-T Controller-1" 125 | nsx_t_controller_ips: 192.168.110.35,192.168.110.36,192.168.110.37 # Should be 1 or 3 ips to maintain quorum for Controller Cluster 126 | nsx_t_controller_root_pwd: EDIT_ME # Min 8 chars, upper, lower, number, special digit 127 | nsx_t_controller_cluster_pwd: EDIT_ME # Min 8 chars, upper, lower, number, special digit 128 | 129 | # Controllers would be deployed to the same Mgmt Cluster 130 | # as specified by the vcenter_host & vcenter_cluster 131 | # NOT RECOMMENDED To use separate cluster for each controller 132 | nsx_t_controllers_config: 133 | # CAUTION: Using different clusters for controllers require complex anti-affinity rules setup 134 | # If really want to spread controllers across different controllers, 135 | # remove previous line (the empty nsx_t_controllers_config) and 136 | # uncomment following nsx_t_controllers_config section, edit it to allow different clusters and resource pool per controller 137 | # nsx_t_controllers_config: | 138 | # controllers: 139 | # host_prefix: nsx-t-ctl # EDIT_ME 140 | # vm_name_prefix: NSX-T-Controller # EDIT_ME 141 | # root_pwd: EDIT_ME # EDIT_ME 142 | # cluster_pwd: EDIT_ME # EDIT_ME 143 | # # Should be 1 or 3 members 144 | # members: 145 | # # EDIT ME to specify the correct cluster, 146 | # # resource pool, datastore per controller member 147 | # - ip: 10.13.12.51 # EDIT_ME 148 | # cluster: Cluster1 # EDIT_ME 149 | # resource_pool: nsx_rp1 # EDIT_ME 150 | # datastore: store1 # EDIT_ME 151 | # - ip: 10.13.12.52 # EDIT_ME 152 | # cluster: Cluster2 # EDIT_ME 153 | # resource_pool: nsx_rp2 # EDIT_ME 154 | # datastore: store2 # EDIT_ME 155 | # - ip: 10.13.12.53 # EDIT_ME 156 | # cluster: Cluster3 # EDIT_ME 157 | # resource_pool: nsx_rp3 # EDIT_ME 158 | # datastore: store3 # 
EDIT_ME 159 | 160 | nsx_t_edge_host_prefix: nsx-t-edge # Without spaces, generated edge would be nsx-t-edge-1.corp.local.io,... 161 | nsx_t_edge_vm_name_prefix: 'NSX-T Edge' # Generated edge host name would be "NSX-T Edge-1" 162 | nsx_t_edge_ips: 192.168.110.38,192.168.110.39 # comma separated ips, requires min 2 for HA 163 | nsx_t_edge_root_pwd: EDIT_ME 164 | nsx_t_edge_portgroup_ext: ESXi-RegionA01-vDS-COMP # For external routing 165 | nsx_t_edge_portgroup_transport: VM-RegionA01-vDS-COMP # For TEP/overlay 166 | 167 | # If ova deployment succeeded but controller membership failed or edges didnt get to join for any reason 168 | # enable rerun of the configure controllers 169 | rerun_configure_controllers: false # set it to true if you want to rerun the configure controllers 170 | # (as part of base ova install job) even as ova deployment succeeded 171 | 172 | # Edge network interfaces 173 | # Network1 and Network4 are for mgmt and not used for uplink 174 | # Network2 is for external uplink 175 | # Network3 is for overlay 176 | # Change only if necessary 177 | nsx_t_edge_overlay_interface: fp-eth1 # Wired to Network3 178 | nsx_t_edge_uplink_interface: fp-eth0 # Wired to Network2 179 | 180 | # Tunnel endpoint network ip pool - change pool_end based on # of members in the tep pool 181 | nsx_t_tep_pool_name: tep-ip-pool 182 | nsx_t_tep_pool_cidr: 192.168.213.0/24 183 | nsx_t_tep_pool_gateway: 192.168.213.1 184 | nsx_t_tep_pool_start: 192.168.213.10 185 | nsx_t_tep_pool_end: 192.168.213.200 186 | #nsx_t_tep_pool_nameserver: 192.168.110.10 # Not required 187 | 188 | # Memory reservation is turned ON by default with the NSX-T OVAs. 189 | # This would mean a deployment of an edge or a mgr would reserve full memory 190 | # leading to memory constraints 191 | # if nsx_t_keep_reservation to true - would keep reservation ON, recommended for production setups. 192 | # if nsx_t_keep_reservation to false - would turn reservation OFF, recommended for POCs, smaller setups. 193 | nsx_t_keep_reservation: false # for Prod setup 194 | 195 | # Default Sizing of NSX-T VMs 196 | # Manager Sizing 197 | # small : 2 vCPU, 8GB RAM 198 | # medium : 4 vCPU, 16GB RAM - default with OVA 199 | # large : 8 vCPU, 32GB RAM 200 | # Controller Sizing 201 | # default: 4 vCPU, 16GB RAM 202 | # Edge Instance Sizing 203 | # small : 2 vCPU, 4GB RAM - Too small - dont use 204 | # medium : 4 vCPU, 8GB RAM - default with OVA 205 | # large : 8 vCPU, 16GB RAM 206 | 207 | # Controller is fixed to the 4 vCPU and 16 GB RAM 208 | # Change default size for deployment for edge and mgr 209 | # Size of the edge determines size of loadbalancers and # of lbrs. 
210 | # So, choose the size of edge based on requirements 211 | nsx_t_mgr_deploy_size: small # Recommended for real barebones demo, smallest setup 212 | #nsx_t_edge_deploy_size: medium # Recommended for POCs, smaller setup (# of lbrs very limited) 213 | nsx_t_edge_deploy_size: large # Recommended when 4 small lbrs are required 214 | 215 | # More control over memory and vcpu to use 216 | # Not exposed and support removed 217 | # nsx_t_sizing_spec: | 218 | # nsx_t_sizing: 219 | # mgr: 220 | # cpu: 2 221 | # memory: 8192 # in MB, needs to be multiples of 1024 222 | # controller: 223 | # # Controller would be resized from 4vCPU/16GB to 2vCPU/8GB 224 | # cpu: 2 225 | # memory: 8192 # in MB, needs to be multiples of 1024 226 | # edge: 227 | # cpu: 2 228 | # memory: 8192 # in MB, needs to be multiples of 1024 229 | 230 | 231 | nsx_t_overlay_hostswitch: hostswitch2 232 | nsx_t_vlan_hostswitch: hostswitch1 233 | 234 | # For Edge External uplink 235 | # Check with network admin if its tagged or untagged 236 | nsx_t_transport_vlan: 0 237 | 238 | nsx_t_vlan_transport_zone: vlan-tz 239 | nsx_t_overlay_transport_zone: overlay-tz 240 | 241 | nsx_t_pas_ncp_cluster_tag: pks1 242 | 243 | nsx_t_edge_cluster: 'Edge Cluster' 244 | 245 | # For outbound uplink connection used by Edge 246 | nsx_t_single_uplink_profile_name: "single-uplink-profile" 247 | nsx_t_single_uplink_profile_mtu: 1600 # Min 1600 248 | nsx_t_single_uplink_profile_vlan: 0 # Default 249 | 250 | # For internal overlay connection used by Esxi hosts 251 | nsx_t_overlay_profile_name: "host-overlay-profile" 252 | nsx_t_overlay_profile_mtu: 1600 # Min 1600 253 | nsx_t_overlay_profile_vlan: 0 # VLAN ID for the TEP/Overlay network 254 | 255 | # Specify an unused vmnic on esxi host to be used for nsx-t 256 | # can be multiple vmnics separated by comma 257 | nsx_t_esxi_vmnics: vmnic1 # vmnic1,vmnic2... 258 | 259 | # Configs for T0Router (only one per run), T1Routers, Logical switches and tags... 260 | # Make sure the ncp/cluster tag matches the one defined at the top level. 261 | # Expects to use atleast 2 edge instances for HA that need to be installed 262 | nsx_t_t0router_spec: | 263 | t0_router: 264 | name: DefaultT0Router 265 | ha_mode: 'ACTIVE_STANDBY' 266 | # Specify the edges to be used for hosting the T0Router instance 267 | edge_indexes: 268 | # Index starts from 1 -> denoting nsx-t-edge-01 269 | primary: 1 # Index for primary edge to be used 270 | secondary: 2 # Index for secondary edge to be used 271 | vip: 192.168.110.33/24 272 | ip1: 192.168.110.31/24 273 | ip2: 192.168.110.32/24 274 | vlan_uplink: 0 275 | static_route: 276 | next_hop: 192.168.110.1 277 | network: 0.0.0.0/0 278 | admin_distance: 1 279 | tags: 280 | ncp/cluster: 'pks1' # Should match the top level ncp/cluster tag value 281 | ncp/shared_resource: 'true' # required for PKS 282 | testtag: 'testpks1' 283 | 284 | # T1 Logical Router with associated logical switches 285 | # Add additional or comment off unnecessary t1 routers and switches as needed 286 | # Can have 3 different setups: 287 | # 1: One shared mgmt T1 Router and infra logical switch for both PKS & PAS 288 | # 2: One mgmt T1 Router and infra logical switch for either PKS or PAS.. 289 | # Comment off the T1 router not required 290 | # 3: Separate mgmt T1 Router and infra logical switch for each PKS and PAS.. 
291 | # Add additional T1Router-Mgmt2 as needed with its infra logical switch 292 | # Name the routers and logical switches and cidrs differently to avoid conflict 293 | nsx_t_t1router_logical_switches_spec: | 294 | t1_routers: 295 | # Add additional T1 Routers or collapse switches into same T1 Router as needed 296 | # Sample for PKS - Ops Mgr, Bosh Director 297 | - name: T1-Router-PKS-Infra 298 | switches: 299 | - name: PKS-Infra 300 | logical_switch_gw: 172.23.1.1 # Last octet should be 1 rather than 0 301 | subnet_mask: 24 302 | 303 | # Hosts the PKS Controller & Harbor 304 | - name: T1Router-PKS-Services 305 | switches: 306 | - name: PKS-Services 307 | logical_switch_gw: 172.23.2.1 # Last octet should be 1 rather than 0 308 | subnet_mask: 24 309 | 310 | # - name: T1Router-PKS-K8s-Clusters 311 | # switches: 312 | # - name: PKS-K8s-Clusters 313 | # logical_switch_gw: 192.168.40.1 # Last octet should be 1 rather than 0 314 | # subnet_mask: 24 315 | 316 | 317 | # Make sure the ncp/cluster tag matches the one defined on the T0 Router 318 | # Additional the ncp/ha tag should be set for HA Spoof guard profile 319 | nsx_t_ha_switching_profile_spec: | 320 | ha_switching_profiles: 321 | - name: HASwitchingProfile 322 | tags: 323 | ncp/cluster: 'pks1' # Should match the top level ncp/cluster tag value 324 | ncp/ha: 'true' # Required for HA 325 | testtag: 'testpks1' 326 | 327 | 328 | # Make sure the ncp/cluster tag matches the one defined on the T0 Router 329 | # Add additional container ip blocks as needed 330 | nsx_t_container_ip_block_spec: | 331 | container_ip_blocks: 332 | # For PKS clusters 333 | - name: node-container-ip-block-pks-with-tag 334 | cidr: 172.24.0.0/14 335 | ncp/shared_resource: 'true' 336 | # No tags for this block 337 | - name: pod-container-ip-block-pks-with-tag 338 | cidr: 172.28.0.0/14 339 | ncp/shared_resource: 'true' 340 | # No tags for this block 341 | 342 | 343 | 344 | # Make sure the ncp/cluster tag matches the one defined on the T0 Router for PAS 345 | # Make sure the ncp/shared tag is set to true for PKS 346 | # Additional the ncp/external tag should be set for external facing ip pool 347 | # Add additional exernal ip pools as needed 348 | # Change the cidr, gateway, nameserver, dns_domain as needed 349 | # The cidr, gateway, start/end ips should be reachable via static or bgp routing through T0 router 350 | nsx_t_external_ip_pool_spec: | 351 | external_ip_pools: 352 | - name: snat-vip-pool-for-pks 353 | cidr: 192.168.100.0/24 # Should be a 0/24 or some valid cidr, matching the external exposed uplink 354 | gateway: 192.168.100.1 355 | start: 192.168.100.32 # Should not include gateway 356 | end: 192.168.100.63 # Should not include gateway 357 | nameserver: 192.168.110.10 358 | dns_domain: corp.local 359 | tags: 360 | ncp/external: 'true' # Required for external facing ips 361 | ncp/shared_resource: 'true' # Required for PKS 362 | 363 | # Specify NAT rules 364 | # Provide matching dnat and snat rule for specific vms by using ips for destination_network and translated_network that need to be exposed like Ops Mgr 365 | # Provide snat rules for outbound from either container or a vm by specifying the source_network (cidr) and translated network ip 366 | # Details of NAT: 367 | # Ingress into Ops Mgr: External IP of Ops Mgr -> DNAT -> translated into internal ip of Ops Mgr 368 | # Egress from Ops Mgr: internal ip of Ops Mgr -> SNAT -> translated into external IP of Ops Mgr 369 | # Ingress into PKS API Controller: External IP of controller -> DNAT -> translated into 
internal ip of controller 370 | # Egress from PKS API Controller: internal ip of controller -> SNAT -> translated into external IP of controller 371 | # Egress from PKS-Infra: cidr of pks infra -> SNAT -> translated into some external IP 372 | # Egress from PKS-Clusters: cidr of PKS-Clusters -> SNAT -> translated into some external IP 373 | nsx_t_nat_rules_spec: | 374 | nat_rules: 375 | # Sample entry for PKS PKS-Infra network - outbound/egress 376 | - t0_router: DefaultT0Router 377 | nat_type: snat 378 | source_network: 172.23.1.0/24 # PKS Infra network cidr 379 | translated_network: 192.168.110.41 # SNAT External Address for PKS networks 380 | rule_priority: 8001 # Lower priority 381 | 382 | # Sample entry for PKS PKS-Clusters network - outbound/egress 383 | - t0_router: DefaultT0Router 384 | nat_type: snat 385 | source_network: 172.23.2.0/24 # PKS Clusters network cidr 386 | translated_network: 192.168.110.41 # SNAT External Address for PKS networks 387 | rule_priority: 8001 # Lower priority 388 | 389 | # Sample entry for allowing inbound to PKS Ops manager - ingress 390 | - t0_router: DefaultT0Router 391 | nat_type: dnat 392 | destination_network: 192.168.110.40 # External IP address for PKS opsmanager 393 | translated_network: 172.23.1.5 # Internal IP of PKS Ops manager 394 | rule_priority: 1024 # Higher priority 395 | # Sample entry for allowing outbound from PKS Ops Mgr to external - egress 396 | - t0_router: DefaultT0Router 397 | nat_type: snat 398 | source_network: 172.23.1.5 # Internal IP of PAS opsmanager 399 | translated_network: 192.168.110.40 # External IP address for PAS opsmanager 400 | rule_priority: 1024 # Higher priority 401 | 402 | # Sample entry for allowing inbound to PKS Controller - ingress 403 | # Not required if using nsx-t-ci-pipeline that would automatically figure out PKS controller ip 404 | # and add new nat rule mapping external pks api controller ip to its internal ip 405 | - t0_router: DefaultT0Router 406 | nat_type: dnat 407 | destination_network: 192.168.110.30 # External IP address for PKS opsmanager 408 | translated_network: 172.23.2.4 # Internal IP of PKS Ops Controller 409 | rule_priority: 1024 # Higher priority 410 | # Sample entry for allowing outbound from PKS Controller to external - egress 411 | - t0_router: DefaultT0Router 412 | nat_type: snat 413 | source_network: 172.23.2.4 # Internal IP of PKS controller 414 | destination_network: 192.168.110.30 # External IP address for PKS opsmanager 415 | rule_priority: 1024 # Higher priority 416 | 417 | nsx_t_csr_request_spec: | 418 | csr_request: 419 | #common_name not required - would use nsx_t_manager_host_name 420 | org_name: Company # EDIT 421 | org_unit: net-integ # EDIT 422 | country: US # EDIT 423 | state: CA # EDIT 424 | city: SF # EDIT 425 | key_size: 2048 # Valid values: 2048 or 3072 426 | algorithm: RSA # Valid values: RSA or DSA 427 | 428 | # LBR definition 429 | # By default, all the server pools would use Layer 4 - TCP pass through 430 | # No ssl termination or handling, uses default tcp active monitor & auto map for virtual server 431 | # Auto Map mode uses LB interface IP and ephemeral port. 432 | # In scenarios where both Clients and Pool Members are attached to the same Logical Router, 433 | # SNAT (Auto Map or IP List) must be used. 434 | 435 | # LBR Sizing guide 436 | # LB Size | Virtual Servers | Pool Members 437 | # small | 10 | 30 438 | # medium | 100 | 300 439 | # large | 1000 | 3000 440 | 441 | # No. 
of LBs per edge based on size of edge 442 | # Edge Size | Small LBs| Medium LBs| Large LBs 443 | # 444 | # small | 0 | 0 | 0 445 | # medium | 1 | 0 | 0 # Recommended for running only PAS or PKS 446 | # large | 4 | 1 | 1 # Recommended for running PAS + PKS 447 | # Bare metal - not handled here 448 | 449 | # No need for any default Loadbalancers 450 | # Would be created on cluster creation automatically 451 | nsx_t_lbr_spec: 452 | -------------------------------------------------------------------------------- /pks-params.sample.yml: -------------------------------------------------------------------------------- 1 | # Pipeline resource configuration 2 | pivnet_token: Xsd...23KS # Pivnet token for downloading resources from Pivnet. Find this token at https://network.pivotal.io/users/dashboard/edit-profile 3 | opsman_major_minor_version: ^2\.1\.2$ # PCF Ops Manager minor version to track 4 | pks_major_minor_version: ^1\..*$ # Can pick 1.0 (or v1.1 once it becomes GA and available on pivnet) 5 | 6 | # vCenter configuration 7 | vcenter_host: 10.1.1.10 # vCenter host or IP 8 | vcenter_usr: administrator@vsphere.local # vCenter username. If user is tied to a domain, then escape the \, example `domain\\user` 9 | vcenter_pwd: "k32...la!" # vCenter password 10 | vcenter_data_center: Datacenter # vCenter datacenter 11 | vcenter_insecure: 1 # vCenter skip TLS cert validation; enter `1` to disable cert verification, `0` to enable verification 12 | vcenter_ca_cert: # vCenter CA cert at the API endpoint; enter a value if `vcenter_insecure: 0` 13 | 14 | # Ops Manager VM configuration 15 | om_vm_host: 192.168.10.5 # Optional - vCenter host to deploy Ops Manager in 16 | om_data_store: vsan2 # vCenter datastore name to deploy Ops Manager in 17 | opsman_domain_or_ip_address: opsmgr.test-cf-app.com # FQDN to access Ops Manager without protocol (will use https), ex: opsmgr.example.com 18 | opsman_admin_username: admin # Username for Ops Manager admin account 19 | opsman_admin_password: password123 # Password for Ops Manager admin account 20 | om_ssh_pwd: pas..d123 # SSH password for Ops Manager (ssh user is ubuntu) 21 | om_decryption_pwd: pas...rd123 # Decryption password for Ops Manager exported settings 22 | om_ntp_servers: us.pool.ntp.org # Comma-separated list of NTP Servers 23 | om_dns_servers: 10.1.1.2 # Comma-separated list of DNS Servers 24 | om_gateway: 192.168.10.1 # Gateway for Ops Manager network 25 | om_netmask: 255.255.255.192 # Netmask for Ops Manager network 26 | om_ip: 192.168.10.5 # IP to assign to Ops Manager VM 27 | om_vm_network: infra # vCenter network name to use to deploy Ops Manager in 28 | om_vm_name: opsmgr-nsx # Name to use for Ops Manager VM 29 | opsman_disk_type: "thin" # Disk type for Ops Manager VM (thick|thin) 30 | om_vm_power_state: true # Whether to power on Ops Manager VM after creation 31 | 32 | # vCenter Cluster or Resource Pool to use to deploy Ops Manager. 
33 | # Possible formats: 34 | # cluster: //host/ 35 | # resource pool: //host//Resources/ 36 | om_resource_pool: rpcluster-1 37 | 38 | ephemeral_storage_names: vsan # Ephemeral Storage names in vCenter for use by PCF 39 | persistent_storage_names: vsan # Persistent Storage names in vCenter for use by PCF 40 | 41 | bosh_vm_folder: "nsx_vms" # vSphere datacenter folder (such as pcf_vms) where VMs will be placed 42 | bosh_template_folder: "nsx_templates" # vSphere datacenter folder (such as pcf_templates) where templates will be placed 43 | bosh_disk_path: "nsx_disk" # vSphere datastore folder (such as pcf_disk) where attached disk images will be created 44 | 45 | trusted_certificates: # Trusted certificates to be deployed along with all VM's provisioned by BOSH 46 | vm_disk_type: "thick" # Disk type for BOSH provisioned VM. (thick|thin) 47 | 48 | # AZ configuration for Ops Director 49 | az_1_name: PKS # Logical name of availability zone. No spaces or special characters. 50 | az_1_cluster_name: Cluster1 # Name of cluster in vCenter for PKS 51 | az_1_rp_name: pks-rp # Resource pool name in vCenter for PKS 52 | 53 | ntp_servers: us.pool.ntp.org # Comma-separated list of NTP servers to use for VMs deployed by BOSH 54 | enable_vm_resurrector: true # Whether to enable BOSH VM resurrector 55 | max_threads: 30 # Max threads count for deploying VMs 56 | 57 | # Network configuration for Ops Director 58 | icmp_checks_enabled: false # Enable or disable ICMP checks 59 | 60 | infra_network_name: "INFRASTRUCTURE" 61 | infra_vsphere_network: infra # vCenter Infrastructure network name 62 | infra_nw_cidr: 192.168.20.0/24 # Infrastructure network CIDR, ex: 10.0.0.0/22 63 | infra_excluded_range: 192.168.20.0-192.168.20.9,192.168.20.25-192.168.20.64 # Infrastructure network exclusion range 64 | infra_nw_dns: 10.1.1.2 # Infrastructure network DNS 65 | infra_nw_gateway: 192.168.20.1 # Infrastructure network Gateway 66 | infra_nw_azs: PKS # Comma separated list of AZ's to be associated with this network 67 | nsx_networking_enabled: false # (true|false) to use nsx networking feature 68 | 69 | # Careful with the indent and the | for multi-line input 70 | nsx_ca_certificate: | # cert for nsx 71 | -----BEGIN CERTIFICATE----- 72 | MIIDjDCCAnSasfasdfsfd324242342UAMIG3232GMS4wLAYDVQQD 73 | asfsafsafasf 74 | ............. 75 | -----END CERTIFICATE----- 76 | 77 | # Additional network for PKS Clusters - set to empty "" string for the vsphere network if its not required 78 | pks_network_name: "PKS-k8s-clusters" # Logical switch for PKS-k8s-clusters 79 | pks_vsphere_network: "PKS-k8s-clusters" # vCenter PKS-k8s-clusters network name for PKS - if empty quotes, then this network would be skipped 80 | pks_nw_cidr: 192.168.40.0/24 # PKS-k8s-clusters network CIDR, ex: 10.0.0.0/22 81 | pks_excluded_range: 192.168.40.1 # PKS-k8s-clusters network exclusion range 82 | pks_nw_dns: 10.1.1.2 # PKS-k8s-clusters network DNS 83 | pks_nw_gateway: 192.168.40.1 # PKS-k8s-clusters network Gateway 84 | pks_nw_azs: az1 # Comma separated list of AZ's to be associated with this network 85 | is_service_network: true # Required select service network option in Ops man true or false 86 | 87 | ## NSX Mgr section 88 | nsx_address: nsx-t-mgr.test-domain.com # address of nsx-t mgr 89 | nsx_username: admin # username for nsx-t access 90 | nsx_password: 23..d! 
# password for nsx-t access 91 | 92 | ## PKS Tile section 93 | 94 | # Create DNS entry in the loadbalancer and DNAT/SNAT entries for following 95 | pks_tile_system_domain: pks.test.corp.com 96 | # Just provide the prefix like uaa or api for domain_prefix. 97 | # The prefix together with system domain would be used like api.pks.test.corp.com or uaa.pks.test.corp.com 98 | pks_tile_uaa_domain_prefix: api # Would be used for UAA as ${prefix}.${pks_system_domain} 99 | pks_tile_cli_username: pksadmin 100 | pks_tile_cli_password: pksadmin123 101 | pks_tile_cli_useremail: pksadmin@corp.com 102 | pks_tile_cert_pem: # Would be generated or provide 103 | #cert_pem: | 104 | # -----BEGIN CERTIFICATE----- 105 | # MIIDjDCCAnSasfasdfsfd324242342UAMIG3232GMS4wLAYDVQQD 106 | # asfsafsafasf 107 | # ............. 108 | # -----END CERTIFICATE----- 109 | 110 | pks_tile_private_key_pem: # Would be generated 111 | 112 | pks_tile_vcenter_host: EDIT_ME 113 | pks_tile_vcenter_usr: EDIT_ME 114 | pks_tile_vcenter_pwd: EDIT_ME 115 | pks_tile_vcenter_data_center: EDIT_ME 116 | pks_tile_vcenter_cluster: EDIT_ME 117 | pks_tile_vcenter_datastore: EDIT_ME 118 | pks_tile_vm_folder: EDIT_ME 119 | 120 | pks_tile_singleton_job_az: az1 # Check 121 | pks_tile_nw_azs: az1 122 | pks_tile_deployment_network_name: INFRASTRUCTURE # Can be any network created already, default: INFRASTRUCTURE 123 | pks_tile_cluster_service_network_name: PKS-k8s-clusters # Should match the PKS-k8s-clusters network created already 124 | 125 | # ALERT - The underlying Edge instances should be large, 8 vcpus. 126 | # Otherwise the pks-nsx-t-precheck errrand would fail. 127 | pks_tile_nsx_skip_ssl_verification: true # can be true or false 128 | pks_tile_t0_router_id: EDIT_ME # UUID of T0 Router to be used for PKS 129 | pks_tile_ip_block_id: EDIT_ME # UUID of Container IP Block in NSX Mgr to be used for PKS 130 | pks_tile_floating_ip_pool_id: EDIT_ME # UUID of External Floating IP Pool in NSX Mgr to be used for PKS 131 | pks_tile_nodes_ip_block_id: EDIT_ME # UUID of Node IP BLock in NSX Mgr to be used for PKS nodes in v1.1 132 | 133 | # Syslog Flags 134 | pks_tile_syslog_migration_enabled: disabled # can be set to 'enabled', if 'disabled' all syslog properties ignored 135 | pks_tile_syslog_address: #101.10.10.10 136 | pks_tile_syslog_port: # 0 137 | pks_tile_syslog_transport_protocol: #tcp 138 | pks_tile_syslog_tls_enabled: # true 139 | pks_tile_syslog_peer: # *.test.corp.com 140 | pks_tile_syslog_ca_cert: 141 | 142 | # Allow public ip 143 | pks_tile_allow_public_ip: true # leave it to empty or false to turn off public ip 144 | 145 | # vRealize Log Insight (vrli) flags 146 | pks_tile_vrli_enabled: false # Change to true and fill following fields for using vrli 147 | pks_tile_vrli_host: 148 | pks_tile_vrli_use_ssl: true 149 | pks_tile_vrli_skip_cert_verify: true 150 | pks_tile_vrli_ca_cert: # cert contents 151 | pks_tile_vrli_rate_limit: 0 152 | 153 | # PKS use Proxy 154 | pks_tile_enable_http_proxy: false # Change to true and fill following fields for using proxy 155 | pks_tile_http_proxy_url: 156 | pks_tile_http_proxy_user: 157 | pks_tile_http_proxy_password: 158 | pks_tile_https_proxy_url: 159 | pks_tile_https_proxy_user: 160 | pks_tile_https_proxy_password: 161 | pks_tile_no_proxy: 162 | 163 | # Use LDAP for PKS UAA 164 | # Default is to use internal uaa 165 | pks_tile_uaa_use_ldap: false # Change to true and fill following fields for using LDAP 166 | pks_tile_ldap_url: 167 | pks_tile_ldap_user: 168 | pks_tile_ldap_password: 169 | 
pks_tile_ldap_search_base: 170 | pks_tile_ldap_group_search_base: 171 | pks_tile_ldap_server_ssl_cert: 172 | pks_tile_ldap_server_ssl_cert_alias: 173 | pks_tile_ldap_email_domains: 174 | pks_tile_ldap_first_name_attribute: 175 | pks_tile_ldap_last_name_attribute: 176 | 177 | # PKS Wavefront integration 178 | pks_tile_wavefront_api_url: # wavefront api url, empty is disabled 179 | pks_tile_wavefront_token: # wavefront access token 180 | pks_tile_wavefront_alert_targets: # email ids to sent alerts 181 | 182 | pks_tile_telemetry: disabled # can be enabled or disabled 183 | 184 | pks_tile_plan_details: 185 | - plan_detail: 186 | name: "small" # the name that appears for end users to choose 187 | plan_selector: plan1_selector # Dont change the value of selector - Needs to be planN_selector 188 | is_active: true 189 | description: "Small plan" 190 | # AZ can be only a single value for PKS v1.0 191 | # There can be a comma separated list for PKS v1.1 which supports multiple AZs 192 | az_placement: az1 # Specify the AZ in which the cluster will be created, single for PKS v1.0 193 | authorization_mode: rbac # rbac - Default Cluster Authorization Mode 194 | master_vm_type: medium 195 | worker_vm_type: medium 196 | persistent_disk_type: "10240" # Used in PKS 1.0 197 | master_persistent_disk_type: "10240" # Used in PKS 1.1+ 198 | worker_persistent_disk_type: "10240" # Used in PKS 1.1+ 199 | worker_instances: 3 # The number of K8s worker instances 200 | errand_vm_type: micro 201 | addons_spec: "" # Kubernetes yml that contains specifications of addons to run on every cluster. This is an experimental feature. Please consider carefully before applying this to your plan 202 | allow_privileged_containers: false # Privileged containers run with host-like permissions. Allowing your users to deploy privileged containers in clusters using this plan can create security vulnerabilities and may impact other tiles. Use with caution." 203 | - plan_detail: 204 | name: "medium" # the name that appears for end users to choose 205 | is_active: false 206 | plan_selector: plan2_selector # Dont change the value of selector - Needs to be planN_selector 207 | description: "Medium plan" 208 | # AZ can be only a single value for PKS v1.0 209 | # There can be a comma separated list for PKS v1.1 which supports multiple AZs 210 | az_placement: az1 # Specify the AZ in which the cluster will be created, single for PKS v1.0 211 | authorization_mode: rbac # rbac or abac - Default Cluster Authorization Mode 212 | master_vm_type: medium 213 | worker_vm_type: medium 214 | persistent_disk_type: "10240" # Used in PKS 1.0 215 | master_persistent_disk_type: "10240" # Used in PKS 1.1+ 216 | worker_persistent_disk_type: "10240" # Used in PKS 1.1+ 217 | worker_instances: 5 # The number of K8s worker instances 218 | errand_vm_type: micro 219 | addons_spec: "" # Kubernetes yml that contains specifications of addons to run on every cluster. This is an experimental feature. Please consider carefully before applying this to your plan 220 | allow_privileged_containers: false # Privileged containers run with host-like permissions. Allowing your users to deploy privileged containers in clusters using this plan can create security vulnerabilities and may impact other tiles. Use with caution." 
221 | - plan_detail: 222 | name: "large" # the name that appears for end users to choose 223 | is_active: false 224 | plan_selector: plan3_selector # Dont change the value of selector - Needs to be planN_selector 225 | # other fields not considered if its not active 226 | 227 | # Use this for downloading PKS from a s3 bucket 228 | # s3_bucket: pks-tile-s3 # Required for S3. ID of the AWS S3 bucket to download Pivotal releases from 229 | # s3_creds_access_key_id: AAa23....asdfZA # Required for S3. Access key of the AWS S3 bucket 230 | # s3_creds_secret_access_key: s6A234...asfsadUL # Required for S3. Secret access key of the AWS S3 bucket 231 | # s3_region: us-west-2 # The region the bucket is in. Leave it blank if not applicable (e.g. for Minio) 232 | # s3_pks_tile_path: pivotal-container-service-(.*).pivotal # file path name to the tile in the s3 bucket 233 | 234 | # Use this for downloading PKS tile from a web server (till it becomes available on Pivnet) 235 | # pks_tile_webserver: http://101.101.101.101:8080 # EDIT ME 236 | # pksv11_tile: pivotal-container-service-1.1.0.pivotal # EDIT ME 237 | --------------------------------------------------------------------------------