├── .gitignore
├── LICENSE
├── README.md
├── main.tf
├── output.tf
├── terraform.tfvars
└── variables.tf

/.gitignore:
--------------------------------------------------------------------------------
# Local .terraform directories
.terraform/

# Compiled .tfstate files
*.tfstate
*.tfstate.*

# Logs
*.log

# Ignore Mac .DS_Store files
.DS_Store

# Ignore vscode files
.vscode/

# Ignore any .tfvars files that are generated automatically for each Terraform run. Most
# .tfvars files are managed as part of configuration and so should be included in
# version control.
#
# example.tfvars
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2020 Romain Decker

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Deploy one or multiple VMware VM(s) with Terraform

## Overview

I created a [GitHub repository](https://github.com/cloudmaniac/ansible-deploy-vmware-vm) 18 months ago to share how Ansible can be used to deploy multiple vSphere virtual machines from a template.

Ansible was my first choice to clone virtual machines and build/configure vSphere infrastructure, as I was already used to it. However, I switched to Terraform in 2019 for various reasons, such as immutable infrastructure, state management, the ability to destroy resources easily, and much more.

This repository provides an example (and a memo for myself) of **how to clone a vSphere template into one or multiple virtual machines using Terraform**.

## Terraform

[HashiCorp Terraform](https://www.terraform.io/) allows infrastructure to be expressed as code in a simple, human-readable language called HCL (HashiCorp Configuration Language). Terraform uses this language to produce an execution plan of changes, which can be reviewed for safety and then applied.

Almost any infrastructure type can be represented as a **resource** in Terraform.
While resources are the primary construct in the Terraform language, the _behaviors_ of resources rely on their associated resource types, and these types are defined by _providers_.

[Providers](https://www.terraform.io/docs/providers/index.html) are responsible for understanding API interactions and exposing resources to the outside world. Extensible providers allow Terraform to manage a broad range of resources, including hardware, IaaS, PaaS, and SaaS services.

In the example below, the `vsphere_virtual_machine` resource from the [VMware vSphere provider](https://www.terraform.io/docs/providers/vsphere/index.html) is leveraged to clone and configure multiple vSphere virtual machines.

## Requirements

* [Terraform](https://www.terraform.io/downloads.html) 0.13+

## Configuration

The set of files used to describe infrastructure in Terraform is simply known as a Terraform _configuration_:

    ├── main.tf
    ├── output.tf
    ├── terraform.tfvars
    └── variables.tf

1. The `main.tf` file contains my provider definition as well as the **logic**: while _data sources_ allow data to be fetched or computed for use elsewhere in the configuration (e.g., vSphere cluster, datastore, portgroup, and so on), the _resource_ blocks describe the virtual machines to create.
2. The `variables.tf` file contains the variable definitions for the configuration (but not the values of those variables, which are set in `terraform.tfvars`).
3. Terraform automatically loads any file named `terraform.tfvars` or matching `*.auto.tfvars` in the current directory to populate variables. **This file has to be updated to match your infrastructure settings**.
4. (optional) The `output.tf` file provides useful information for troubleshooting purposes.

> **Note:** Although .tfvars files are *usually* not distributed for security reasons, I included mine here for demonstration purposes.

## Resources

Two `vsphere_virtual_machine` resource blocks are defined:

- `kubernetes_master` clones a Linux vSphere template into one or more master virtual machines (one per entry in `master_ips`) and customizes the guests.
- `kubernetes_workers` clones a Linux vSphere template into multiple new virtual machines and customizes the guests; `count.index` is used to loop over the resources, but other mechanisms such as `for_each` could be used instead (see the sketch below).
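As a quick illustration of that alternative, here is a minimal, untested sketch (not part of `main.tf`) of the workers resource driven by `for_each` over the existing `worker_ips` map; every argument not shown would stay identical to the `count`-based version:

    # Sketch only: iterate over the worker_ips map ({"0" = "10.67.11.21", ...}) with for_each
    resource "vsphere_virtual_machine" "kubernetes_workers" {
      for_each = var.worker_ips

      name             = "${var.guest_name_prefix}-worker0${tonumber(each.key) + 1}"
      resource_pool_id = data.vsphere_compute_cluster.target_cluster.resource_pool_id
      datastore_id     = data.vsphere_datastore.target_datastore.id

      # ... same hardware, network_interface, and disk arguments as in main.tf ...

      clone {
        template_uuid = data.vsphere_virtual_machine.source_template.id

        customize {
          network_interface {
            ipv4_address = each.value            # the IP is now the map value
            ipv4_netmask = var.guest_ipv4_netmask
          }
        }
      }
    }
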
## Execution

### Init

The first command to run for a new configuration is `terraform init`, which initializes various local settings and data that will be used by subsequent commands. This command also automatically downloads and installs any provider defined in the configuration.

    ❯ terraform init

    Initializing the backend...
    Initializing provider plugins...

    The following providers do not have any version constraints in configuration,
    so the latest version was installed.

    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.

    * provider.vsphere: version = "~> 1.14"

    Terraform has been successfully initialized!

    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.

    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.

### Plan

The `terraform plan` command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.

This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state.

### Apply

The `terraform apply` command is used to **apply the changes required to reach the desired state of the configuration**.

    ❯ terraform apply
    data.vsphere_datacenter.target_dc: Refreshing state...
    data.vsphere_network.target_network: Refreshing state...
    data.vsphere_compute_cluster.target_cluster: Refreshing state...
    data.vsphere_virtual_machine.source_template: Refreshing state...
    data.vsphere_datastore.target_datastore: Refreshing state...
    vsphere_virtual_machine.kubernetes_workers[2]: Refreshing state... [id=422fd79c-755b-a2d4-bb09-c9d6476217f5]
    vsphere_virtual_machine.kubernetes_master: Refreshing state... [id=422faaf3-2f12-b3ee-f0ed-8d602bfa4b11]
    vsphere_virtual_machine.kubernetes_workers[1]: Refreshing state... [id=422ff5fb-7c96-1494-a865-6969a6fdd52f]
    vsphere_virtual_machine.kubernetes_workers[0]: Refreshing state... [id=422f4d66-fc03-14ed-767b-62ade0142d19]

    An execution plan has been generated and is shown below.
    Resource actions are indicated with the following symbols:
      + create
      ~ update in-place

    Terraform will perform the following actions:

      # vsphere_virtual_machine.kubernetes_master will be updated in-place
      ~ resource "vsphere_virtual_machine" "kubernetes_master" {
          - annotation = "Ubuntu 18.04.3 LTS (Bionic Beaver) - 2020-01-10" -> null

    [...]

    Plan: 1 to add, 3 to change, 0 to destroy.

    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.

      Enter a value: yes

Once the resources are provisioned, the state is stored by default in a local file named `terraform.tfstate`; it can also be stored remotely, which works better in a team environment.

### Destroy

If you're using Terraform to spin up multiple environments such as lab, dev, or test, being able to tear them down again is a useful capability.

Resources can be destroyed using the `terraform destroy` command, which is similar to `terraform apply`, but it behaves as if all of the resources have been removed from the configuration.
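For example, the removal of this lab can be previewed and then executed with the following illustrative invocations (add `-auto-approve` only if you are sure):

    ❯ terraform plan -destroy
    ❯ terraform destroy

A single machine can also be targeted, e.g. `terraform destroy -target='vsphere_virtual_machine.kubernetes_workers[3]'`.
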
**Enjoy!** :)
--------------------------------------------------------------------------------
/main.tf:
--------------------------------------------------------------------------------
##### Terraform Initialization
terraform {
  required_version = ">= 0.13"

  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "1.24.3"
    }
  }
}

##### Provider
provider "vsphere" {
  user           = var.provider_vsphere_user
  password       = var.provider_vsphere_password
  vsphere_server = var.provider_vsphere_host

  # Required if vCenter uses a self-signed certificate
  allow_unverified_ssl = true
}

##### Data sources
data "vsphere_datacenter" "target_dc" {
  name = var.deploy_vsphere_datacenter
}

data "vsphere_datastore" "target_datastore" {
  name          = var.deploy_vsphere_datastore
  datacenter_id = data.vsphere_datacenter.target_dc.id
}

data "vsphere_compute_cluster" "target_cluster" {
  name          = var.deploy_vsphere_cluster
  datacenter_id = data.vsphere_datacenter.target_dc.id
}

data "vsphere_network" "target_network" {
  name          = var.deploy_vsphere_network
  datacenter_id = data.vsphere_datacenter.target_dc.id
}

data "vsphere_virtual_machine" "source_template" {
  name          = var.guest_template
  datacenter_id = data.vsphere_datacenter.target_dc.id
}

##### Resources
# Clones the Kubernetes master VM(s) from a template (one VM per entry in var.master_ips)
resource "vsphere_virtual_machine" "kubernetes_master" {
  count            = length(var.master_ips)
  name             = "${var.guest_name_prefix}-master0${count.index + 1}"
  resource_pool_id = data.vsphere_compute_cluster.target_cluster.resource_pool_id
  datastore_id     = data.vsphere_datastore.target_datastore.id
  folder           = var.deploy_vsphere_folder
  firmware         = var.guest_firmware

  num_cpus = var.guest_vcpu
  memory   = var.guest_memory
  guest_id = data.vsphere_virtual_machine.source_template.guest_id

  scsi_type = data.vsphere_virtual_machine.source_template.scsi_type

  network_interface {
    network_id   = data.vsphere_network.target_network.id
    adapter_type = data.vsphere_virtual_machine.source_template.network_interface_types[0]
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.source_template.disks[0].size
    eagerly_scrub    = data.vsphere_virtual_machine.source_template.disks[0].eagerly_scrub
    thin_provisioned = data.vsphere_virtual_machine.source_template.disks[0].thin_provisioned
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.source_template.id

    customize {
      linux_options {
        host_name = "${var.guest_name_prefix}-master0${count.index + 1}"
        domain    = var.guest_domain
      }

      network_interface {
        ipv4_address = lookup(var.master_ips, count.index)
        ipv4_netmask = var.guest_ipv4_netmask
      }

      ipv4_gateway    = var.guest_ipv4_gateway
      dns_server_list = [var.guest_dns_servers]
      dns_suffix_list = [var.guest_dns_suffix]
    }
  }

  boot_delay = 10000

  # Remove any existing SSH known_hosts entry, as the remote identification (host key) changes between deployments.
  provisioner "local-exec" {
    command = "ssh-keygen -R ${self.guest_ip_addresses[0]}"
  }

  # Ansible requires Python to be installed on the remote machines (as well as on the local machine).
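  # Note: the "python" package below is the Python 2 interpreter shipped with Ubuntu 18.04 templates;
  # on newer templates (Ubuntu 20.04 and later) only "python3" is available, so the inline command
  # may need to be adjusted.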
  provisioner "remote-exec" {
    inline = ["sudo apt-get update && sudo apt-get -qq install python -y"]

    connection {
      type     = "ssh"
      user     = var.guest_ssh_user
      password = var.guest_ssh_password
      host     = self.guest_ip_addresses[0]
    }
  }

  # Disable SSH host authenticity checking (StrictHostKeyChecking=no) to avoid being asked to
  # accept the RSA key fingerprint of a host when you access it for the first time.
  provisioner "local-exec" {
    command = "sshpass -p ${var.guest_ssh_password} ssh-copy-id -i ${var.guest_ssh_key_public} -o StrictHostKeyChecking=no ${var.guest_ssh_user}@${self.guest_ip_addresses[0]}"
  }

  # Prepare operating system for Kubernetes using Ansible
  #provisioner "local-exec" {
  #  command = "ansible-playbook -i '${self.guest_ip_addresses[0]},' --private-key ${var.guest_ssh_key_private} ../ansible/k8s-preparation.yml"
  #}

  lifecycle {
    ignore_changes = [annotation]
  }
}

# Clones the Kubernetes worker VMs from a template (one VM per entry in var.worker_ips)
resource "vsphere_virtual_machine" "kubernetes_workers" {
  count            = length(var.worker_ips)
  name             = "${var.guest_name_prefix}-worker0${count.index + 1}"
  resource_pool_id = data.vsphere_compute_cluster.target_cluster.resource_pool_id
  datastore_id     = data.vsphere_datastore.target_datastore.id
  folder           = var.deploy_vsphere_folder
  firmware         = var.guest_firmware

  num_cpus = var.guest_vcpu
  memory   = var.guest_memory
  guest_id = data.vsphere_virtual_machine.source_template.guest_id

  scsi_type = data.vsphere_virtual_machine.source_template.scsi_type

  network_interface {
    network_id   = data.vsphere_network.target_network.id
    adapter_type = data.vsphere_virtual_machine.source_template.network_interface_types[0]
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.source_template.disks[0].size
    eagerly_scrub    = data.vsphere_virtual_machine.source_template.disks[0].eagerly_scrub
    thin_provisioned = data.vsphere_virtual_machine.source_template.disks[0].thin_provisioned
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.source_template.id

    customize {
      linux_options {
        host_name = "${var.guest_name_prefix}-worker0${count.index + 1}"
        domain    = var.guest_domain
      }

      network_interface {
        ipv4_address = lookup(var.worker_ips, count.index)
        ipv4_netmask = var.guest_ipv4_netmask
      }

      ipv4_gateway    = var.guest_ipv4_gateway
      dns_server_list = [var.guest_dns_servers]
      dns_suffix_list = [var.guest_dns_suffix]
    }
  }

  boot_delay = 10000

  # Remove any existing SSH known_hosts entry, as the remote identification (host key) changes between deployments.
  provisioner "local-exec" {
    command = "ssh-keygen -R ${self.guest_ip_addresses[0]}"
  }

  # Ansible requires Python to be installed on the remote machines (as well as on the local machine).
  provisioner "remote-exec" {
    inline = ["sudo apt-get update && sudo apt-get -qq install python -y"]

    connection {
      type     = "ssh"
      user     = var.guest_ssh_user
      password = var.guest_ssh_password
      host     = self.guest_ip_addresses[0]
    }
  }

  # Disable SSH host authenticity checking (StrictHostKeyChecking=no) to avoid being asked to
  # accept the RSA key fingerprint of a host when you access it for the first time.
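  # Assumes the "sshpass" utility is installed on the machine running Terraform (it is available
  # in most Linux package managers); without it, this provisioner will fail.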
  provisioner "local-exec" {
    command = "sshpass -p ${var.guest_ssh_password} ssh-copy-id -i ${var.guest_ssh_key_public} -o StrictHostKeyChecking=no ${var.guest_ssh_user}@${self.guest_ip_addresses[0]}"
  }

  # Prepare operating system for Kubernetes using Ansible
  #provisioner "local-exec" {
  #  command = "ansible-playbook -i '${self.guest_ip_addresses[0]},' --private-key ${var.guest_ssh_key_private} ../ansible/k8s-preparation.yml"
  #}

  lifecycle {
    ignore_changes = [annotation]
  }
}
--------------------------------------------------------------------------------
/output.tf:
--------------------------------------------------------------------------------
output "datacenter_id" {
  value = data.vsphere_datacenter.target_dc.id
}

output "cluster_id" {
  value = data.vsphere_compute_cluster.target_cluster.id
}

output "datastore_id" {
  value = data.vsphere_datastore.target_datastore.id
}

output "portgroup_id" {
  value = data.vsphere_network.target_network.id
}
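
# Illustrative additions (not in the original file): expose the addresses reported by VMware Tools
# once the machines are up, so they can be consumed by scripts or an Ansible inventory.
output "kubernetes_master_ips" {
  value = vsphere_virtual_machine.kubernetes_master[*].default_ip_address
}

output "kubernetes_worker_ips" {
  value = vsphere_virtual_machine.kubernetes_workers[*].default_ip_address
}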
--------------------------------------------------------------------------------
/terraform.tfvars:
--------------------------------------------------------------------------------
# Provider
provider_vsphere_host     = "vcsa01-z67.sddc.lab"
provider_vsphere_user     = "administrator@vsphere.local"
provider_vsphere_password = "VMware1!"

# Infrastructure
deploy_vsphere_datacenter = "MGMT-Z67"
deploy_vsphere_cluster    = "CL01-Z67"
deploy_vsphere_datastore  = "vsanDatastore"
deploy_vsphere_folder     = "/kubernetes"
deploy_vsphere_network    = "PG-10.67.11.0"

# Guest
guest_name_prefix     = "k8s-prod"
guest_template        = "packer-ubuntu-18.04"
guest_vcpu            = "1"
guest_memory          = "1024"
guest_ipv4_netmask    = "24"
guest_ipv4_gateway    = "10.67.11.254"
guest_dns_servers     = "10.67.10.5"
guest_dns_suffix      = "sddc.lab"
guest_domain          = "sddc.lab"
guest_ssh_user        = "packer"
guest_ssh_password    = "VMware1!"
guest_ssh_key_private = "~/.ssh/id_ed25519"
guest_ssh_key_public  = "~/.ssh/id_ed25519.pub"
guest_firmware        = "efi"

# Master(s)
master_ips = {
  "0" = "10.67.11.11"
  "1" = "10.67.11.12"
  "2" = "10.67.11.13"
}

# Worker(s)
worker_ips = {
  "0" = "10.67.11.21"
  "1" = "10.67.11.22"
  "2" = "10.67.11.23"
  "3" = "10.67.11.24"
}
--------------------------------------------------------------------------------
/variables.tf:
--------------------------------------------------------------------------------
##### Provider
# - Arguments to configure the VMware vSphere Provider

variable "provider_vsphere_host" {
  description = "vCenter server FQDN or IP - Example: vcsa01-z67.sddc.lab"
}

variable "provider_vsphere_user" {
  description = "vSphere username to use to connect to the environment - Default: administrator@vsphere.local"
  default     = "administrator@vsphere.local"
}

variable "provider_vsphere_password" {
  description = "vSphere password"
}

##### Infrastructure
# - Defines the vCenter / vSphere environment

variable "deploy_vsphere_datacenter" {
  description = "vSphere datacenter in which the virtual machine will be deployed."
}

variable "deploy_vsphere_cluster" {
  description = "vSphere cluster in which the virtual machine will be deployed."
}

variable "deploy_vsphere_datastore" {
  description = "Datastore in which the virtual machine will be deployed."
}

variable "deploy_vsphere_folder" {
  description = "The path to the folder to put this virtual machine in, relative to the datacenter that the resource pool is in."
}

variable "deploy_vsphere_network" {
  description = "Portgroup to which the virtual machine will be connected."
}

##### Guest
# - Describes virtual machine / guest options

variable "guest_name_prefix" {
  description = "VM / hostname prefix for the Kubernetes cluster."
}

variable "guest_template" {
  description = "The source virtual machine or template to clone from."
}

variable "guest_vcpu" {
  description = "The number of virtual processors to assign to this virtual machine. Default: 1."
  default     = "1"
}

variable "guest_memory" {
  description = "The size of the virtual machine's memory, in MB. Default: 1024 (1 GB)."
  default     = "1024"
}

variable "guest_ipv4_netmask" {
  description = "The IPv4 subnet mask, in bits (example: 24 for 255.255.255.0)."
}

variable "guest_ipv4_gateway" {
  description = "The IPv4 default gateway."
}

variable "guest_dns_servers" {
  description = "The list of DNS servers to configure on the virtual machine."
}

variable "guest_dns_suffix" {
  description = "A list of DNS search domains to add to the DNS configuration on the virtual machine."
}

variable "guest_domain" {
  description = "The domain name for this machine."
}

variable "guest_ssh_user" {
  description = "SSH username to connect to the guest VM."
}

variable "guest_ssh_password" {
  description = "SSH password to connect to the guest VM."
}

variable "guest_ssh_key_private" {
  description = "Path to the SSH private key (e.g., ~/.ssh/id_rsa)."
}

variable "guest_ssh_key_public" {
  description = "Path to the SSH public key (e.g., ~/.ssh/id_rsa.pub)."
}

variable "guest_firmware" {
  description = "Boot mode for the cloned template (efi or bios). Default: bios."
  default     = "bios"
}

##### Master(s)
# - Describes master node(s) options

variable "master_ips" {
  type        = map(any)
  description = "Map of IP addresses used for the Kubernetes master nodes: one entry for a single master, or three for a multi-master configuration."
}

##### Worker(s)
# - Describes worker node(s) (a.k.a. minions) options

variable "worker_ips" {
  type        = map(any)
  description = "Map of IP addresses used for the Kubernetes worker nodes."
}
--------------------------------------------------------------------------------