├── .gitignore ├── README.md ├── example_ha ├── README.md ├── aws │ ├── Makefile │ ├── database │ │ ├── Makefile │ │ ├── database.tfvars │ │ ├── main.tf │ │ ├── outputs.tf │ │ └── variables.tf │ ├── kubernetes-env │ │ ├── Makefile │ │ ├── files │ │ │ └── userdata.template │ │ ├── kubernetes-env.tfvars │ │ ├── main.tf │ │ └── variables.tf │ ├── main-vars.tfvars │ ├── management-cluster │ │ ├── Makefile │ │ ├── files │ │ │ └── userdata.template │ │ ├── main.tf │ │ ├── management-cluster.tfvars │ │ ├── outputs.tf │ │ └── variables.tf │ └── network │ │ ├── Makefile │ │ ├── main.tf │ │ ├── network.tfvars │ │ ├── outputs.tf │ │ └── variables.tf ├── database │ └── README.md ├── gce │ ├── database │ │ ├── main.tf │ │ └── variables.tf │ └── management-cluster │ │ └── main.tf └── network │ └── README.md ├── example_standalone └── do │ └── main.tf ├── modules ├── aws │ ├── compute │ │ ├── asg │ │ │ ├── asg.tf │ │ │ └── variables.tf │ │ ├── bastion │ │ │ ├── iam │ │ │ │ └── main.tf │ │ │ ├── main.tf │ │ │ └── variables.tf │ │ └── ha-mgmt │ │ │ ├── ha-mgmt.tf │ │ │ └── variables.tf │ ├── data │ │ └── rds │ │ │ ├── db │ │ │ ├── db.tf │ │ │ ├── outputs.tf │ │ │ └── variables.tf │ │ │ ├── rds.tf │ │ │ └── variables.tf │ └── network │ │ ├── components │ │ ├── elb │ │ │ └── elb.tf │ │ ├── igw │ │ │ └── igw.tf │ │ ├── nat │ │ │ └── nat.tf │ │ ├── private_subnet │ │ │ └── private_subnet.tf │ │ ├── public_subnet │ │ │ └── public_subnet.tf │ │ └── vpc │ │ │ └── vpc.tf │ │ ├── networks │ │ └── full-vpc │ │ │ ├── full-vpc.tf │ │ │ └── variables.tf │ │ └── security_groups │ │ ├── bastion │ │ └── main.tf │ │ ├── env-ipsec │ │ └── main.tf │ │ ├── ipsec │ │ └── ipsec.tf │ │ ├── mgmt │ │ ├── ha │ │ │ └── ha.tf │ │ └── ui │ │ │ └── ui.tf │ │ ├── primary_web_elb │ │ └── primary_web_elb.tf │ │ ├── sg_db │ │ └── sg_db.tf │ │ ├── vpc_all_internal │ │ └── vpc_all_internal.tf │ │ ├── vpc_private_subnets │ │ └── vpc_private_subnets.tf │ │ ├── vpc_public_subnets │ │ └── vpc_public_subnets.tf │ │ └── vpc_sgs │ │ └── vpc_sgs.tf ├── do │ └── compute │ │ ├── main.tf │ │ ├── user-data-ubuntu.tpl │ │ └── vars.tf └── gce │ ├── compute │ ├── files │ │ └── userdata.template │ ├── main.tf │ └── variables.tf │ └── database │ ├── main.tf │ └── variables.tf └── scripts └── plan /.gitignore: -------------------------------------------------------------------------------- 1 | *tfstate* 2 | .terraform 3 | *.swp 4 | *.plan 5 | *-env/* 6 | *.key 7 | *.crt 8 | .DS_Store 9 | terraform.tfvars 10 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Rancher Terraform Modules 2 | --- 3 | 4 | These are opinionated [Terraform](https://www.terraform.io/) modules that provision Rancher HA environments on AWS & GCE. We are working on support for Azure as well. Additionally there is a Digital Ocean module for non-HA setups. Currently these make use of [RancherOS](https://github.com/rancher/os), though work is being done to include additional OSes. 5 | 6 | See the `example_ha` folder for a possible layout that breaks up the network, DB and management plane into separate components. You should be able to deploy into existing environments leveraging the components that you need. 7 | 8 | The HA example is divided up into two sections, one for AWS and one for GCE that will create: 9 | 10 | **AWS**: 11 | * VPC - Across one or more availability zones. It has everything needed for external communications. 
12 | * RDS MySQL Database 13 | * ELB pointing to an ASG of 3 nodes. 14 | 15 | **GCE**: 16 | * RancherOS Compute Image 17 | * VM Instance Group - Allows you to scale dynamically by changing the quantity. 18 | * Forwarding Rule - Balances traffic between members of the instance group. 19 | * CloudSQL Instance - MySQL-compatible persistence service. Connections are proxied over the [GCE Cloud SQL Proxy](https://cloud.google.com/sql/docs/mysql/sql-proxy). 20 | 21 | **DO**: 22 | * Ubuntu 16.04 Compute Image (official DO image) 23 | * Single-node Rancher server with an embedded database 24 | 25 | #### Getting Help / Feedback 26 | We appreciate feedback and, of course, pull requests to help improve these modules. Filing GitHub issues is also a good way to share feedback. You can communicate with the Rancher community at: 27 | 28 | - [Rancher Users Slack](https://slack.rancher.io/) 29 | - [Rancher Forums](https://forums.rancher.com/) 30 | 31 | ## License 32 | Copyright (c) 2014-2017 [Rancher Labs, Inc.](http://rancher.com) 33 | 34 | Licensed under the Apache License, Version 2.0 (the "License"); 35 | you may not use this file except in compliance with the License. 36 | You may obtain a copy of the License at 37 | 38 | [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0) 39 | 40 | Unless required by applicable law or agreed to in writing, software 41 | distributed under the License is distributed on an "AS IS" BASIS, 42 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 43 | See the License for the specific language governing permissions and 44 | limitations under the License. 45 | -------------------------------------------------------------------------------- /example_ha/README.md: -------------------------------------------------------------------------------- 1 | # AWS Reference Modules 2 | 3 | --- 4 | 5 | The configurations here are reference examples for using the AWS modules. They model an HA setup and the creation of a Rancher Kubernetes HA environment. They are meant to be a starting point that you extend to meet your own needs. 6 | 7 | ## Building Blocks 8 | 9 | These configurations are broken out so that each Terraform state file stays specific to a single scope. This layout allows separate teams to manage each concern, if that is how your organization is run, and it makes the inputs and outputs required by each layer of the environment explicit. 10 | 11 | The Terraform components are laid out to handle the base-level network, creating the VPC, subnets, IGW and NATs so that you can deploy your Rancher HA management stack. The outputs of the networking module are consumed by the database and management-cluster units. 12 | 13 | The database layer is delivered through RDS in this case. The username, database name and connection strings are all exported. 14 | 15 | The management layer is then deployed using RancherOS in an HA setup behind an ELB. 16 | 17 | There is a common Makefile that provides a simple interface for working with Terraform. 18 | 19 | ## Getting Started 20 | 21 | To get started, the first thing to do is decide which components are going to be used. 22 | 23 | The `main-vars.tfvars` file is meant to define all of the provider-level variables and secrets: for instance AWS keys, DNS provider keys, Rancher API keys, and so on. It is not meant to define all of the variables for each of the major subsystems. Subsystem variables that need to be shared are best handled via remote state. 
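For example, the database and management-cluster configurations read the network layer's outputs through a `terraform_remote_state` data source. The reference configurations use the `local` backend, as in this excerpt from `aws/database/main.tf`; a shared backend such as S3 works the same way:

```
data "terraform_remote_state" "network" {
  backend = "local"

  config {
    path = "${path.module}/../network/terraform.tfstate"
  }
}

# outputs exported by the network stack can then be referenced, for example:
# vpc_id = "${data.terraform_remote_state.network.vpc_id}"
```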
24 | 25 | Once the main variables have been set, the next step is to create the [network]() 26 | 27 | The [database]() level should be created. 28 | 29 | The [mangement cluster]() should be the last item built. -------------------------------------------------------------------------------- /example_ha/aws/Makefile: -------------------------------------------------------------------------------- 1 | MODULE := $(shell basename $$PWD) 2 | TIMESTAMP := $(shell date +%Y-%m-%d-%H%M%S) 3 | 4 | .PHONY: get plan plan-destroy plan-output apply 5 | 6 | state-pull: 7 | @terraform remote pull 8 | 9 | get: 10 | @terraform init 11 | 12 | plan: get 13 | @terraform plan -var-file ../main-vars.tfvars -var-file ./$(MODULE).tfvars 14 | 15 | plan-output: get 16 | @terraform plan -var-file ../main-vars.tfvars -var-file ./$(MODULE).tfvars -out $(MODULE)-$(TIMESTAMP).plan 17 | 18 | plan-destroy: get 19 | @terraform plan -var-file ../main-vars.tfvars -var-file ./$(MODULE).tfvars -destroy 20 | 21 | apply: get 22 | @terraform apply -var-file ../main-vars.tfvars -var-file ./$(MODULE).tfvars 23 | 24 | apply-plan: 25 | @terraform apply $(PLAN) 26 | 27 | clean: 28 | @rm *.plan 29 | -------------------------------------------------------------------------------- /example_ha/aws/database/Makefile: -------------------------------------------------------------------------------- 1 | ../Makefile -------------------------------------------------------------------------------- /example_ha/aws/database/database.tfvars: -------------------------------------------------------------------------------- 1 | database_password="" 2 | 3 | aws_rds_instance_class="db.m3.medium" 4 | 5 | aws_env_name = "rancher-database" 6 | -------------------------------------------------------------------------------- /example_ha/aws/database/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "${var.aws_region}" 3 | access_key = "${var.aws_access_key}" 4 | secret_key = "${var.aws_secret_key}" 5 | } 6 | 7 | data "terraform_remote_state" "network" { 8 | backend = "local" 9 | 10 | config { 11 | path = "${path.module}/../network/terraform.tfstate" 12 | } 13 | } 14 | 15 | module "database" { 16 | source = "../../modules/aws/data/rds" 17 | 18 | rds_instance_name = "${var.aws_env_name}" 19 | database_password = "${var.database_password}" 20 | vpc_id = "${data.terraform_remote_state.network.vpc_id}" 21 | source_cidr_blocks = "${concat(split(",", data.terraform_remote_state.network.aws_public_subnet_cidrs),split(",",data.terraform_remote_state.network.aws_private_subnet_cidrs))}" 22 | rds_instance_class = "${var.aws_rds_instance_class}" 23 | db_subnet_ids = "${concat(split(",", data.terraform_remote_state.network.aws_private_subnet_ids))}" 24 | } 25 | -------------------------------------------------------------------------------- /example_ha/aws/database/outputs.tf: -------------------------------------------------------------------------------- 1 | output "database" { 2 | value = "${module.database.database_name}" 3 | } 4 | 5 | output "password" { 6 | value = "${var.database_password}" 7 | } 8 | 9 | output "username" { 10 | value = "${module.database.username}" 11 | } 12 | 13 | output "endpoint" { 14 | value = "${module.database.endpoint}" 15 | } 16 | -------------------------------------------------------------------------------- /example_ha/aws/database/variables.tf: -------------------------------------------------------------------------------- 1 | variable "aws_secret_key" {} 2 | 3 
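# These values are supplied at plan/apply time by the shared Makefile: the provider credentials and region come from ../main-vars.tfvars, while database_password, aws_rds_instance_class and aws_env_name come from ./database.tfvars.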
| variable "aws_access_key" {} 4 | 5 | variable "aws_region" {} 6 | 7 | variable "aws_env_name" {} 8 | 9 | variable "database_password" {} 10 | 11 | variable "aws_rds_instance_class" {} 12 | -------------------------------------------------------------------------------- /example_ha/aws/kubernetes-env/Makefile: -------------------------------------------------------------------------------- 1 | ../Makefile -------------------------------------------------------------------------------- /example_ha/aws/kubernetes-env/files/userdata.template: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | write_files: 3 | - path: /etc/rc.local 4 | permissions: "0755" 5 | owner: root 6 | content: | 7 | #!/bin/bash 8 | wait-for-docker 9 | IP="$(wget -qO - http://169.254.169.254/2016-06-30/meta-data/${ip-addr})" 10 | mkdir -p /var/lib/rancher/etc 11 | echo "export CATTLE_AGENT_IP=$${IP}" >> /var/lib/rancher/etc/agent.conf 12 | ${rancher_registration_command} 13 | rancher: 14 | docker: 15 | engine: docker-1.12.6 16 | log_driver: "json-file" 17 | log_opts: 18 | max-file: "3" 19 | max-size: "100m" 20 | labels: "prodservices,production" 21 | services_include: 22 | kernel-headers: "true" 23 | services: 24 | giddyup-health: 25 | image: cloudnautique/giddyup:v0.14.0 26 | ports: 27 | - 1620:1620 28 | command: /giddyup health 29 | -------------------------------------------------------------------------------- /example_ha/aws/kubernetes-env/kubernetes-env.tfvars: -------------------------------------------------------------------------------- 1 | name = "k8s-ha-demo" 2 | 3 | vpc_id = "" 4 | 5 | ipsec_node_cidrs = "cidr_for_aws_nodes" 6 | 7 | ssh_key_name = "" 8 | 9 | subnet_ids = "subnet-" 10 | 11 | subnet_cidrs = "" 12 | 13 | aws_instance_type = "t2.medium" 14 | 15 | cattle_agent_ip = "local-ipv4" 16 | 17 | project_template_id = "1pt6" 18 | 19 | rancher_api_url = "" 20 | -------------------------------------------------------------------------------- /example_ha/aws/kubernetes-env/main.tf: -------------------------------------------------------------------------------- 1 | provider "rancher" { 2 | api_url = "${var.rancher_api_url}" 3 | access_key = "${var.rancher_access_key}" 4 | secret_key = "${var.rancher_secret_key}" 5 | } 6 | 7 | provider "aws" { 8 | access_key = "${var.aws_access_key}" 9 | secret_key = "${var.aws_secret_key}" 10 | region = "${var.aws_region}" 11 | } 12 | 13 | resource "rancher_environment" "ha_k8s" { 14 | name = "${var.name}" 15 | project_template_id = "${var.project_template_id}" 16 | } 17 | 18 | resource "rancher_registration_token" "etcd_nodes" { 19 | name = "etcd_nodes" 20 | environment_id = "${rancher_environment.ha_k8s.id}" 21 | 22 | host_labels { 23 | etcd = "true" 24 | } 25 | } 26 | 27 | resource "rancher_registration_token" "orchestration_nodes" { 28 | name = "etcd_nodes" 29 | environment_id = "${rancher_environment.ha_k8s.id}" 30 | 31 | host_labels { 32 | orchestration = "true" 33 | } 34 | } 35 | 36 | resource "rancher_registration_token" "compute_nodes" { 37 | name = "etcd_nodes" 38 | environment_id = "${rancher_environment.ha_k8s.id}" 39 | 40 | host_labels { 41 | compute = "true" 42 | } 43 | } 44 | 45 | data "template_file" "etcd_userdata" { 46 | template = "${file("${path.module}/files/userdata.template")}" 47 | 48 | vars { 49 | rancher_registration_command = "${rancher_registration_token.etcd_nodes.command}" 50 | ip-addr = "${var.cattle_agent_ip}" 51 | } 52 | } 53 | 54 | data "template_file" "orchestration_userdata" { 55 | 
template = "${file("${path.module}/files/userdata.template")}" 56 | 57 | vars { 58 | rancher_registration_command = "${rancher_registration_token.orchestration_nodes.command}" 59 | ip-addr = "${var.cattle_agent_ip}" 60 | } 61 | } 62 | 63 | data "template_file" "compute_userdata" { 64 | template = "${file("${path.module}/files/userdata.template")}" 65 | 66 | vars { 67 | rancher_registration_command = "${rancher_registration_token.compute_nodes.command}" 68 | ip-addr = "${var.cattle_agent_ip}" 69 | } 70 | } 71 | 72 | module "ipsec_sg" { 73 | source = "../../modules/aws/network/security_groups/env-ipsec" 74 | 75 | vpc_id = "${var.vpc_id}" 76 | name = "${var.name}-ipsec-sg" 77 | environment_cidrs = "${var.ipsec_node_cidrs}" 78 | } 79 | 80 | module "etcd_asg" { 81 | source = "../../modules/aws/compute/asg" 82 | 83 | name = "${var.name}-etcd" 84 | userdata = "${data.template_file.etcd_userdata.rendered}" 85 | health_check_type = "EC2" 86 | ssh_key_name = "${var.ssh_key_name}" 87 | security_groups = "${module.ipsec_sg.ipsec_id}" 88 | lb_ids = "" 89 | health_check_target = "HTTP:1620/ping" 90 | ami_id = "ami-5c5a3f4a" 91 | subnet_ids = "${var.subnet_ids}" 92 | subnet_cidrs = "${var.subnet_cidrs}" 93 | vpc_id = "${var.vpc_id}" 94 | instance_type = "${var.aws_instance_type}" 95 | } 96 | 97 | module "orchestration_asg" { 98 | source = "../../modules/aws/compute/asg" 99 | 100 | name = "${var.name}-orchestration" 101 | userdata = "${data.template_file.orchestration_userdata.rendered}" 102 | health_check_type = "EC2" 103 | ssh_key_name = "${var.ssh_key_name}" 104 | security_groups = "${module.ipsec_sg.ipsec_id}" 105 | lb_ids = "" 106 | health_check_target = "HTTP:1620/ping" 107 | ami_id = "ami-5c5a3f4a" 108 | subnet_ids = "${var.subnet_ids}" 109 | subnet_cidrs = "${var.subnet_cidrs}" 110 | vpc_id = "${var.vpc_id}" 111 | instance_type = "${var.aws_instance_type}" 112 | } 113 | 114 | module "compute_asg" { 115 | source = "../../modules/aws/compute/asg" 116 | 117 | name = "${var.name}-compute" 118 | userdata = "${data.template_file.compute_userdata.rendered}" 119 | health_check_type = "EC2" 120 | ssh_key_name = "${var.ssh_key_name}" 121 | security_groups = "${module.ipsec_sg.ipsec_id}" 122 | lb_ids = "" 123 | health_check_target = "HTTP:1620/ping" 124 | ami_id = "ami-5c5a3f4a" 125 | subnet_ids = "${var.subnet_ids}" 126 | subnet_cidrs = "${var.subnet_cidrs}" 127 | vpc_id = "${var.vpc_id}" 128 | instance_type = "${var.aws_instance_type}" 129 | } 130 | -------------------------------------------------------------------------------- /example_ha/aws/kubernetes-env/variables.tf: -------------------------------------------------------------------------------- 1 | variable "name" {} 2 | 3 | variable "project_template_id" {} 4 | 5 | variable "rancher_access_key" {} 6 | 7 | variable "rancher_secret_key" {} 8 | 9 | variable "aws_access_key" {} 10 | 11 | variable "aws_secret_key" {} 12 | 13 | variable "aws_region" {} 14 | 15 | variable "vpc_id" {} 16 | 17 | variable "ipsec_node_cidrs" {} 18 | 19 | variable "ssh_key_name" {} 20 | 21 | variable "subnet_ids" {} 22 | 23 | variable "subnet_cidrs" {} 24 | 25 | variable "aws_instance_type" {} 26 | 27 | variable "cattle_agent_ip" {} 28 | 29 | variable "rancher_api_url" {} 30 | -------------------------------------------------------------------------------- /example_ha/aws/main-vars.tfvars: -------------------------------------------------------------------------------- 1 | ## 2 | # Best to keep your private access keys here as they will 3 | # be used by multiple components 4 
| ## 5 | aws_access_key = "" 6 | 7 | aws_secret_key = "" 8 | 9 | ws_region = "us-west-1" 10 | -------------------------------------------------------------------------------- /example_ha/aws/management-cluster/Makefile: -------------------------------------------------------------------------------- 1 | ../Makefile -------------------------------------------------------------------------------- /example_ha/aws/management-cluster/files/userdata.template: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | write_files: 3 | - path: /etc/rc.local 4 | permissions: "0755" 5 | owner: root 6 | content: | 7 | #!/bin/bash 8 | wait-for-docker 9 | docker run -d -p 8080:8080 -p 9345:9345 --name rancher-server --restart=always -e CATTLE_USE_LOCAL_ARTIFACTS=false -e DEFAULT_CATTLE_API_UI_INDEX=https://releases.rancher.com/ui/${api_ui_version} -e JAVA_OPTS="-Xmx8g" ${rancher_version} --advertise-address $(wget -qO - http://169.254.169.254/2016-06-30/meta-data/${ip-addr}) --db-host ${database_endpoint} --db-pass ${database_password} --db-user ${database_user} 10 | - path: /var/lib/rancher/etc/ssl/README.txt 11 | content: "CA crt will be pulled into this directory" 12 | - path: /etc/nginx/conf.d/redirect.conf 13 | permissions: "0644" 14 | content: | 15 | server { 16 | listen 80 default_server; 17 | listen [::]:80 default_server; 18 | server_name _; 19 | return 301 https://$host$request_uri; 20 | } 21 | rancher: 22 | docker: 23 | engine: docker-1.10.3 24 | log_driver: "json-file" 25 | log_opts: 26 | max-file: "3" 27 | max-size: "100m" 28 | labels: "production" 29 | services_include: 30 | kernel-headers: true 31 | services: 32 | nginx-redirect: 33 | image: nginx:1.10.2-alpine 34 | net: host 35 | volumes: 36 | - /etc/nginx/conf.d:/etc/nginx/conf.d 37 | 38 | -------------------------------------------------------------------------------- /example_ha/aws/management-cluster/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | access_key = "${var.aws_access_key}" 3 | secret_key = "${var.aws_secret_key}" 4 | region = "${var.aws_region}" 5 | } 6 | 7 | provider "cloudflare" { 8 | email = "${var.cloudflare_email}" 9 | token = "${var.cloudflare_token}" 10 | } 11 | 12 | data "terraform_remote_state" "network" { 13 | backend = "local" 14 | 15 | config { 16 | path = "${path.module}/../network/terraform.tfstate" 17 | } 18 | } 19 | 20 | data "terraform_remote_state" "database" { 21 | backend = "local" 22 | 23 | config { 24 | path = "${path.module}/../database/terraform.tfstate" 25 | } 26 | } 27 | 28 | data "template_file" "userdata" { 29 | template = "${file("${path.module}/files/userdata.template")}" 30 | 31 | vars { 32 | database_endpoint = "${element(split(":", data.terraform_remote_state.database.endpoint),0)}" 33 | ip-addr = "local-ipv4" 34 | database_name = "${data.terraform_remote_state.database.database}" 35 | database_user = "${data.terraform_remote_state.database.username}" 36 | database_password = "${data.terraform_remote_state.database.password}" 37 | rancher_version = "${var.rancher_version}" 38 | sysdig_key = "${var.sysdig_key}" 39 | api_ui_version = "${var.api_ui_version}" 40 | } 41 | } 42 | 43 | module "management_elb" { 44 | source = "../../../modules/aws/network/components/elb" 45 | 46 | name = "${var.aws_env_name}-api-mgmt" 47 | security_groups = "${module.management_sgs.elb_sg_id}" 48 | public_subnets = "${data.terraform_remote_state.network.aws_public_subnet_ids}" 49 | instance_ssl_port = 
"8080" 50 | proxy_proto_port_string = "80,8080" 51 | instance_http_port = "80" 52 | 53 | health_check_target = "HTTP:8080/v1/scripts/api.crt" 54 | 55 | ssl_certificate_arn = "${data.terraform_remote_state.network.rancher_com_arn}" 56 | } 57 | 58 | module "management_sgs" { 59 | source = "../../../modules/aws/network/security_groups/mgmt/ha" 60 | 61 | name = "${var.aws_env_name}" 62 | vpc_id = "${data.terraform_remote_state.network.vpc_id}" 63 | private_subnet_cidrs = "${data.terraform_remote_state.network.aws_public_subnet_cidrs}" 64 | } 65 | 66 | module "compute" { 67 | source = "../../../modules/aws/compute/ha-mgmt" 68 | 69 | vpc_id = "${data.terraform_remote_state.network.vpc_id}" 70 | name = "${var.aws_env_name}-management" 71 | ami_id = "${var.aws_ami_id}" 72 | instance_type = "${var.aws_instance_type}" 73 | ssh_key_name = "${var.aws_key_pair}" 74 | security_groups = "${join(",", list(module.management_sgs.management_node_sgs))}" 75 | lb_ids = "${join(",", list(module.management_elb.elb_id))}" 76 | spot_enabled = "${var.spot_enabled}" 77 | 78 | subnet_ids = "${data.terraform_remote_state.network.aws_public_subnet_ids}" 79 | subnet_cidrs = "${data.terraform_remote_state.network.aws_public_subnet_cidrs}" 80 | externally_defined_userdata = "${data.template_file.userdata.rendered}" 81 | health_check_type = "${var.health_check_type}" 82 | } 83 | -------------------------------------------------------------------------------- /example_ha/aws/management-cluster/management-cluster.tfvars: -------------------------------------------------------------------------------- 1 | rancher_version = "rancher/server:latest" 2 | 3 | aws_ami_id = "ami-10381d70" 4 | 5 | aws_instance_type = "t2.medium" 6 | 7 | spot_enabled = "true" 8 | 9 | aws_env_name = "production" 10 | 11 | api_ui_version = "1.5.8" 12 | 13 | rancher_hostname = "myrancher" 14 | 15 | domain_name = "example.com" 16 | -------------------------------------------------------------------------------- /example_ha/aws/management-cluster/outputs.tf: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /example_ha/aws/management-cluster/variables.tf: -------------------------------------------------------------------------------- 1 | variable "rancher_hostname" {} 2 | 3 | variable "domain_name" {} 4 | 5 | variable "aws_access_key" {} 6 | 7 | variable "aws_secret_key" {} 8 | 9 | variable "aws_region" {} 10 | 11 | variable "aws_ami_id" {} 12 | 13 | variable "aws_env_name" {} 14 | 15 | variable "aws_instance_type" {} 16 | 17 | variable "rancher_version" {} 18 | 19 | variable "api_ui_version" {} 20 | 21 | variable "spot_enabled" {} 22 | 23 | variable "health_check_type" { 24 | default = "EC2" 25 | } 26 | 27 | variable "sysdig_key" {} 28 | 29 | variable "cloudflare_token" { 30 | default = "" 31 | } 32 | 33 | variable "cloudflare_email" { 34 | default = "" 35 | } 36 | 37 | variable "aws_key_pair" { 38 | type = "string" 39 | default = "value" 40 | } 41 | -------------------------------------------------------------------------------- /example_ha/aws/network/Makefile: -------------------------------------------------------------------------------- 1 | ../Makefile -------------------------------------------------------------------------------- /example_ha/aws/network/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | access_key = "${var.aws_access_key}" 3 | secret_key = 
"${var.aws_secret_key}" 4 | region = "${var.aws_region}" 5 | } 6 | 7 | module "vpc_network" { 8 | source = "../../modules/aws/network/networks/full-vpc" 9 | 10 | vpc_name = "${var.aws_env_name}" 11 | vpc_cidr = "${var.aws_vpc_cidr}" 12 | region = "${var.aws_region}" 13 | public_subnet_cidrs = "${var.aws_public_subnet_cidrs}" 14 | private_subnet_cidrs = "${var.aws_private_subnet_cidrs}" 15 | azs = "${var.aws_subnet_azs}" 16 | } 17 | 18 | resource "aws_iam_server_certificate" "site_certificate" { 19 | name_prefix = "${var.aws_env_name}-certificate" 20 | certificate_body = "${file("${var.server_cert_path}")}" 21 | private_key = "${file("${var.server_key_path}")}" 22 | certificate_chain = "${file("${var.ca_chain_path}")}" 23 | 24 | lifecycle { 25 | create_before_destroy = true 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /example_ha/aws/network/network.tfvars: -------------------------------------------------------------------------------- 1 | aws_env_name = "production" 2 | 3 | aws_vpc_cidr = "10.0.0.0/16" 4 | 5 | aws_private_subnet_cidrs = "10.0.0.0/24,10.0.64.0/24" 6 | 7 | aws_public_subnet_cidrs = "10.0.32.0/24,10.0.96.0/24" 8 | 9 | aws_subnet_azs = "us-west-1a,us-west-1b" 10 | 11 | server_cert_path = "certfile" 12 | 13 | server_key_path = "Keyfile" 14 | 15 | ca_chain_path = "bundlefile" 16 | -------------------------------------------------------------------------------- /example_ha/aws/network/outputs.tf: -------------------------------------------------------------------------------- 1 | output "vpc_id" { 2 | value = "${module.vpc_network.vpc_id}" 3 | } 4 | 5 | output "aws_public_subnet_cidrs" { 6 | value = "${var.aws_public_subnet_cidrs}" 7 | } 8 | 9 | output "aws_private_subnet_cidrs" { 10 | value = "${var.aws_private_subnet_cidrs}" 11 | } 12 | 13 | output "aws_private_subnet_ids" { 14 | value = "${module.vpc_network.private_subnet_ids}" 15 | } 16 | 17 | output "aws_public_subnet_ids" { 18 | value = "${module.vpc_network.public_subnet_ids}" 19 | } 20 | 21 | output "rancher_com_arn" { 22 | value = "${aws_iam_server_certificate.rancher_com.arn}" 23 | } 24 | -------------------------------------------------------------------------------- /example_ha/aws/network/variables.tf: -------------------------------------------------------------------------------- 1 | variable "aws_access_key" {} 2 | 3 | variable "aws_secret_key" {} 4 | 5 | variable "aws_env_name" {} 6 | 7 | variable "aws_vpc_cidr" {} 8 | 9 | variable "aws_region" {} 10 | 11 | variable "aws_public_subnet_cidrs" {} 12 | 13 | variable "aws_private_subnet_cidrs" {} 14 | 15 | variable "aws_subnet_azs" {} 16 | 17 | variable "server_cert_path" {} 18 | 19 | variable "server_key_path" {} 20 | 21 | variable "ca_chain_path" {} 22 | -------------------------------------------------------------------------------- /example_ha/database/README.md: -------------------------------------------------------------------------------- 1 | # Database 2 | 3 | --- 4 | The database module will create the following resources: 5 | 6 | * RDS - MySQL database instance 7 | * Security group allowing access to the DB instances. 8 | 9 | ## Getting Started 10 | 11 | You should have the `main-vars.tfvars` file populated in the main directory before configuring the database. 12 | 13 | You will need to either use a remote state from the network module, or you will need to edit the main.tf in this directory to remove the data source and the references. 
14 | 15 | You will need to populate the following variables in this section: 16 | 17 | ``` 18 | database_password="password" 19 | aws_rds_instance_class="db.m3.medium" 20 | aws_env_name = "rancher-database" 21 | aws_region = "us-west-1" 22 | ``` 23 | 24 | once updated, the command: 25 | 26 | `make plan` 27 | 28 | Will show what terraform would plan to do. 29 | 30 | `make plan-output` 31 | 32 | ## Outputs 33 | 34 | The following will be outputs that can be imported by other components via remote state. 35 | * database - The name of the database created. 36 | * password - The password that should be used to connect to the database. Keep in mind this should be guarded closely. 37 | * username - The username for connecting to the database. 38 | * endpoint - The RDS endpoint to connect to. 39 | * address - RDS internal DNS/host name without port -------------------------------------------------------------------------------- /example_ha/gce/database/main.tf: -------------------------------------------------------------------------------- 1 | // Configure the Google Cloud provider 2 | provider "google" { 3 | credentials = "${file("~/.gce/credentials")}" 4 | project = "rancher-dev" 5 | region = "us-central1" 6 | } 7 | 8 | resource "random_id" "database" { 9 | byte_length = 4 10 | } 11 | 12 | module "gce_database" { 13 | source = "../../../modules/gce/database" 14 | 15 | name = "rancher-${random_id.database.hex}" 16 | region = "us-central" 17 | db_tier = "db-n1-standard-1" 18 | disk_size = 20 19 | disk_type = "PD_SSD" 20 | db_user = "${var.db_user}" 21 | db_pass = "${var.db_pass}" 22 | } 23 | 24 | output "name" { 25 | value = "${module.gce_database.name}" 26 | } 27 | -------------------------------------------------------------------------------- /example_ha/gce/database/variables.tf: -------------------------------------------------------------------------------- 1 | variable "db_user" {} 2 | variable "db_pass" {} 3 | -------------------------------------------------------------------------------- /example_ha/gce/management-cluster/main.tf: -------------------------------------------------------------------------------- 1 | // Expected to be set at runtime by user 2 | variable "gce_project" {} 3 | 4 | variable "gce_region" {} 5 | variable "database_endpoint" {} 6 | variable "database_user" {} 7 | variable "database_password" {} 8 | variable "ssh_pub_key" {} 9 | 10 | // Configure the Google Cloud provider 11 | provider "google" { 12 | credentials = "${file("~/.gce/credentials")}" 13 | project = "${var.gce_project}" 14 | region = "${var.gce_region}" 15 | } 16 | 17 | // Get local state from our adjacent terraform module 18 | data "terraform_remote_state" "database" { 19 | backend = "local" 20 | 21 | config { 22 | path = "${path.module}/../database/terraform.tfstate" 23 | } 24 | } 25 | 26 | resource "random_id" "server" { 27 | byte_length = 4 28 | } 29 | 30 | module "gce_compute" { 31 | source = "../../../modules/gce/compute" 32 | 33 | name = "rancher-server-${random_id.server.hex}" 34 | gce_project = "${var.gce_project}" 35 | machine_type = "n1-standard-2" 36 | zone = "us-central1-f" 37 | server_count = "1" 38 | service_account_scopes = [] 39 | database_endpoint = "${var.database_endpoint}" 40 | database_user = "${var.database_user}" 41 | database_password = "${var.database_password}" 42 | gce-cloud-sql-instance-connection-name = "${var.gce_project}:${var.gce_region}:${data.terraform_remote_state.database.name}" 43 | rancher_version = "stable" 44 | ssh_pub_key = "${var.ssh_pub_key}" 45 | } 46 | 
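// Example invocation (illustrative placeholder values): the variables declared at
// the top of this file as "expected to be set at runtime" can be supplied on the
// command line, for example:
//
//   terraform init
//   terraform apply \
//     -var 'gce_project=<your-project-id>' \
//     -var 'gce_region=<region>' \
//     -var 'database_endpoint=<cloud-sql-address>' \
//     -var 'database_user=<db-user>' \
//     -var 'database_password=<db-password>' \
//     -var 'ssh_pub_key=<contents of your public key file>'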
-------------------------------------------------------------------------------- /example_ha/network/README.md: -------------------------------------------------------------------------------- 1 | # Network 2 | 3 | --- 4 | 5 | The network will create the following resources: 6 | 7 | * VPC 8 | * IGW 9 | * NAT 10 | * Public Subnets 11 | * Private Subnets 12 | * Security Groups 13 | * AWS IAM Server Certificate (Used on the ELB) 14 | 15 | ## Getting Started 16 | 17 | You should have the main-vars.tfvars file populated in the main directory before configuring this section. 18 | 19 | For the network you will need to define the following variables in the `network.tfvars` 20 | 21 | ``` 22 | aws_env_name = "production" 23 | aws_vpc_cidr = "10.0.0.0/16" # CIDR block for the whole VPC 24 | aws_private_subnet_cidrs = "10.0.0.0/24,10.0.64.0/24" # Should be one per Availability Zone 25 | aws_public_subnet_cidrs = "10.0.32.0/24,10.0.96.0/24" # '' '' 26 | aws_subnet_azs = "us-west-1a,us-west-1b" # Comma separated list of AZs 27 | server_cert_path = "certfile" 28 | server_key_path = "Keyfile" 29 | ca_chain_path = "bundlefile 30 | ``` 31 | 32 | This module will create an SSL Certificate ARN for Terminating SSL on an ELB if you don't want this, you can remove the `aws_iam_server_certificate` resource in main.tf. 33 | 34 | Once updated, the command: 35 | 36 | `make plan` 37 | 38 | will show what terraform would plan to do. 39 | 40 | `make plan-output` 41 | 42 | will make a plan file. To execute the plan file: 43 | 44 | `make apply-plan PLAN=` 45 | 46 | ## Outputs 47 | 48 | The following will be outputs that can be imported by other components via remote state. 49 | 50 | ``` 51 | "vpc_id" 52 | "aws_public_subnet_cidrs" 53 | "aws_private_subnet_cidrs" 54 | "aws_private_subnet_ids" 55 | "aws_public_subnet_ids" 56 | "rancher_com_arn" 57 | ``` 58 | -------------------------------------------------------------------------------- /example_standalone/do/main.tf: -------------------------------------------------------------------------------- 1 | variable "digitalocean_token" {} 2 | 3 | variable "rancher_version_tag" { 4 | default = "stable" 5 | } 6 | 7 | variable "ssh_keys" { 8 | type = "list" 9 | } 10 | 11 | module "digital-ocean" { 12 | source = "../../modules/do/compute" 13 | node_count = 1 14 | digitalocean_token = "${var.digitalocean_token}" 15 | docker_cmd = "docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:${var.rancher_version_tag}" 16 | ssh_keys = "${var.ssh_keys}" 17 | instance_type = "server" 18 | } 19 | 20 | output "server-ip" { 21 | value = "${module.digital-ocean.server-ip}" 22 | } 23 | -------------------------------------------------------------------------------- /modules/aws/compute/asg/asg.tf: -------------------------------------------------------------------------------- 1 | resource "aws_launch_configuration" "config" { 2 | name_prefix = "Launch-Config-${var.name}" 3 | image_id = "${var.ami_id}" 4 | 5 | security_groups = ["${split(",", var.security_groups)}"] 6 | 7 | instance_type = "${var.instance_type}" 8 | key_name = "${var.ssh_key_name}" 9 | associate_public_ip_address = false 10 | ebs_optimized = false 11 | user_data = "${var.userdata}" 12 | iam_instance_profile = "${var.iam_instance_profile}" 13 | 14 | root_block_device { 15 | volume_type = "${var.root_volume_type}" 16 | volume_size = "${var.root_volume_size}" 17 | } 18 | 19 | lifecycle { 20 | create_before_destroy = true 21 | } 22 | } 23 | 24 | resource "aws_autoscaling_group" "elb" { 25 | name = 
"${var.name}-compute-asg" 26 | count = "${var.use_elb}" 27 | 28 | min_size = "${var.scale_min_size}" 29 | max_size = "${var.scale_max_size}" 30 | desired_capacity = "${var.scale_desired_size}" 31 | 32 | vpc_zone_identifier = ["${split(",", var.subnet_ids)}"] 33 | launch_configuration = "${aws_launch_configuration.config.name}" 34 | 35 | health_check_grace_period = 900 36 | health_check_type = "${var.health_check_type}" 37 | force_delete = false 38 | load_balancers = ["${split(",", var.lb_ids)}"] 39 | 40 | tags = [ 41 | { 42 | key = "Name" 43 | value = "${var.name}" 44 | propagate_at_launch = true 45 | }, 46 | { 47 | key = "spot-enabled" 48 | value = "${var.spot_enabled}" 49 | propagate_at_launch = true 50 | }, 51 | "${var.tags}", 52 | ] 53 | 54 | lifecycle { 55 | create_before_destroy = true 56 | } 57 | } 58 | 59 | resource "aws_autoscaling_group" "alb" { 60 | name = "${var.name}-asg" 61 | count = "${1 - var.use_elb}" 62 | 63 | min_size = "${var.scale_min_size}" 64 | max_size = "${var.scale_max_size}" 65 | desired_capacity = "${var.scale_desired_size}" 66 | 67 | health_check_grace_period = 900 68 | health_check_type = "${var.health_check_type}" 69 | force_delete = false 70 | target_group_arns = ["${split(",", var.target_group_arn)}"] 71 | 72 | vpc_zone_identifier = ["${split(",", var.subnet_ids)}"] 73 | launch_configuration = "${aws_launch_configuration.config.name}" 74 | 75 | tags = [ 76 | { 77 | key = "Name" 78 | value = "${var.name}" 79 | propagate_at_launch = true 80 | }, 81 | { 82 | key = "spot-enabled" 83 | value = "${var.spot_enabled}" 84 | propagate_at_launch = true 85 | }, 86 | "${var.tags}", 87 | ] 88 | 89 | lifecycle { 90 | create_before_destroy = true 91 | } 92 | } 93 | 94 | // Changes from PR: https://github.com/rancher/terraform-modules/pull/20 95 | output "name" { 96 | value = "${element(concat(aws_autoscaling_group.alb.*.name, aws_autoscaling_group.elb.*.name), var.use_elb)}" 97 | } 98 | 99 | output "id" { 100 | value = "${element(concat(aws_autoscaling_group.alb.*.id, aws_autoscaling_group.elb.*.id), var.use_elb)}" 101 | } 102 | -------------------------------------------------------------------------------- /modules/aws/compute/asg/variables.tf: -------------------------------------------------------------------------------- 1 | variable "name" {} 2 | 3 | variable "vpc_id" {} 4 | 5 | variable "subnet_ids" {} 6 | 7 | variable "ssh_key_name" {} 8 | 9 | variable "lb_ids" {} 10 | 11 | variable "use_elb" { 12 | description = "To use ELB pass 1, to use ALB pass 0" 13 | default = 1 14 | } 15 | 16 | variable "health_check_type" { 17 | default = "ELB" 18 | } 19 | 20 | variable "spot_enabled" { 21 | default = "false" 22 | } 23 | 24 | variable "scale_min_size" { 25 | default = "1" 26 | } 27 | 28 | variable "scale_desired_size" { 29 | default = "1" 30 | } 31 | 32 | variable "scale_max_size" { 33 | default = "1" 34 | } 35 | 36 | variable "ami_id" {} 37 | 38 | variable "security_groups" { 39 | default = "" 40 | } 41 | 42 | variable "instance_type" { 43 | default = "t2.micro" 44 | } 45 | 46 | variable "root_volume_size" { 47 | default = "8" 48 | } 49 | 50 | variable "root_volume_type" { 51 | default = "standard" 52 | } 53 | 54 | variable "target_group_arn" { 55 | description = "Required for ALB" 56 | default = "" 57 | } 58 | 59 | variable "userdata" {} 60 | 61 | variable "health_check_target" {} 62 | 63 | variable "iam_instance_profile" { 64 | default = "" 65 | } 66 | 67 | // Ref: https://github.com/hashicorp/terraform-aws-consul/blob/master/modules/consul-cluster/variables.tf 68 | 
variable "tags" { 69 | description = "List fo extra tag blocks added to the autoscaling group configuration. Each element in the list is a map containing keys 'key', 'value', and 'propagate_at_launch' mapped to the respective values." 70 | type = "list" 71 | default = [] 72 | } 73 | -------------------------------------------------------------------------------- /modules/aws/compute/bastion/iam/main.tf: -------------------------------------------------------------------------------- 1 | variable "name" {} 2 | 3 | resource "aws_iam_instance_profile" "eip_assignment" { 4 | name = "${var.name}-eip-assignment-profile" 5 | role = "${aws_iam_role.eip_assignment.name}" 6 | } 7 | 8 | resource "aws_iam_role" "eip_assignment" { 9 | name = "${var.name}-eip-assignment-role" 10 | 11 | assume_role_policy = <> /etc/hosts 10 | 11 | # Setup Docker + Rancher 12 | apt-get update && apt -y install docker.io=1.12.6-0ubuntu1~16.04.1 ntp 13 | echo "attempting to run: " 14 | echo "${docker_cmd}" 15 | ${docker_cmd} 16 | -------------------------------------------------------------------------------- /modules/do/compute/vars.tf: -------------------------------------------------------------------------------- 1 | # Expects this variable to be set as environment variable TF_VAR_digitalocean_token or through CLI 2 | # see https://www.terraform.io/docs/configuration/variables.html 3 | variable "digitalocean_token" {} 4 | 5 | variable "instance_type" { 6 | default = "node" 7 | } 8 | 9 | variable "docker_cmd" {} 10 | 11 | variable "ssh_keys" { 12 | type = "list" 13 | } 14 | 15 | variable "node_count" { 16 | default = 1 17 | } 18 | 19 | variable "do_region" { 20 | default = "sfo1" 21 | } 22 | 23 | variable "do_droplet_size" { 24 | default = "2gb" 25 | } 26 | -------------------------------------------------------------------------------- /modules/gce/compute/files/userdata.template: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | ssh_authorized_keys: 3 | - ssh-rsa ${ssh_pub_key} 4 | write_files: 5 | - path: /etc/rc.local 6 | permissions: "0755" 7 | owner: root 8 | content: | 9 | #!/bin/bash 10 | wait-for-docker 11 | docker run -d -p 8080:8080 -p 9345:9345 --net=host --name rancher-server --restart=always -e CATTLE_USE_LOCAL_ARTIFACTS=false -e JAVA_OPTS="-Xmx8g" rancher/server:${rancher_version} --advertise-address $(wget -qO - --header="Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/network-interfaces/0/ip) --db-host 127.0.0.1 --db-pass ${database_password} --db-user ${database_user} 12 | 13 | docker run -d -v /cloudsql:/cloudsql -p 127.0.0.1:3306:3306 --name gce-cloud-sql-proxy --restart=on-failure:10 gcr.io/cloudsql-docker/gce-proxy:1.10 /cloud_sql_proxy -instances=${gce-cloud-sql-instance-connection-name}=tcp:0.0.0.0:3306 14 | rancher: 15 | docker: 16 | engine: ${docker_version} 17 | log_driver: "json-file" 18 | log_opts: 19 | max-file: "3" 20 | max-size: "100m" 21 | labels: "production" 22 | services_include: 23 | kernel-headers: true 24 | -------------------------------------------------------------------------------- /modules/gce/compute/main.tf: -------------------------------------------------------------------------------- 1 | // RancherOS Image 2 | resource "google_compute_image" "rancheros" { 3 | name = "rancheros" 4 | 5 | raw_disk { 6 | source = "https://storage.googleapis.com/releases.rancher.com/os/v1.0.3/rancheros-v1.0.3.tar.gz" 7 | sha1 = "e151a5fab00a7ee83c9f9589a42a3fbb833043c1" 8 | } 9 | } 10 | 11 | // Rancher Server Node 12 | 
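// The resources below make up the server tier: an instance group manager creates
// instances from the template defined further down, a target pool with an HTTP
// health check on /v1/scripts/api.crt (port 8080) tracks node health, and a TCP
// forwarding rule plus a firewall rule expose ports 80-8080 (and SSH on 22).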
resource "google_compute_instance_group_manager" "rancher-servers" { 13 | name = "${var.name}-rancher-servers" 14 | description = "Rancher Servers Instance Group Manager" 15 | 16 | base_instance_name = "${var.name}-server" 17 | instance_template = "${google_compute_instance_template.rancher-servers.self_link}" 18 | update_strategy = "NONE" 19 | zone = "${var.instance_zone}" 20 | 21 | target_pools = ["${google_compute_target_pool.rancher-servers.self_link}"] 22 | target_size = "${var.server_count}" 23 | 24 | named_port { 25 | name = "rancher-api" 26 | port = 8080 27 | } 28 | } 29 | 30 | resource "google_compute_http_health_check" "rancher-servers" { 31 | name = "rancher-server-health-check" 32 | description = "Health check for Rancher Server instances" 33 | request_path = "/v1/scripts/api.crt" 34 | port = "8080" 35 | 36 | timeout_sec = 2 37 | check_interval_sec = 30 38 | unhealthy_threshold = 2 39 | } 40 | 41 | resource "google_compute_target_pool" "rancher-servers" { 42 | name = "rancher-servers-target" 43 | description = "Target pool for Rancher Servers" 44 | depends_on = ["google_compute_http_health_check.rancher-servers"] 45 | 46 | health_checks = [ 47 | "${google_compute_http_health_check.rancher-servers.name}", 48 | ] 49 | 50 | // Options are "NONE" (no affinity). "CLIENT_IP" (hash of the source/dest addresses / ports), and "CLIENT_IP_PROTO" also includes the protocol (default "NONE"). 51 | session_affinity = "NONE" 52 | } 53 | 54 | data "template_file" "userdata" { 55 | template = "${file("${path.module}/files/userdata.template")}" 56 | 57 | vars { 58 | database_endpoint = "${var.database_endpoint}" 59 | database_user = "${var.database_user}" 60 | database_password = "${var.database_password}" 61 | rancher_version = "${var.rancher_version}" 62 | docker_version = "${var.docker_version}" 63 | gce-cloud-sql-instance-connection-name = "${var.gce-cloud-sql-instance-connection-name}" 64 | ssh_pub_key = "${var.ssh_pub_key}" 65 | } 66 | } 67 | 68 | resource "google_compute_instance_template" "rancher-servers" { 69 | name = "${var.name}-server" 70 | description = "Template for Rancher Servers" 71 | 72 | tags = ["${var.instance_tags}", "rancher-servers", "created-by-terraform"] 73 | 74 | machine_type = "${var.machine_type}" 75 | instance_description = "Instance running Rancher Server" 76 | can_ip_forward = false 77 | 78 | scheduling { 79 | automatic_restart = true 80 | on_host_maintenance = "MIGRATE" 81 | } 82 | 83 | disk { 84 | source_image = "${google_compute_image.rancheros.self_link}" 85 | auto_delete = true 86 | boot = true 87 | } 88 | 89 | network_interface { 90 | network = "default" 91 | access_config = {} 92 | } 93 | 94 | service_account { 95 | scopes = ["compute-ro", "storage-ro", "cloud-platform"] 96 | } 97 | 98 | metadata = "${var.instance_metadata}" 99 | 100 | metadata_startup_script = "${data.template_file.userdata.rendered}" 101 | } 102 | 103 | resource "google_compute_forwarding_rule" "rancher-servers" { 104 | name = "rancher-servers-forwarder" 105 | description = "Externally facing forwarder for Rancher servers" 106 | target = "${google_compute_target_pool.rancher-servers.self_link}" 107 | ip_protocol = "TCP" 108 | port_range = "80-8080" 109 | load_balancing_scheme = "EXTERNAL" 110 | } 111 | 112 | resource "google_compute_firewall" "default" { 113 | name = "tf-rancher-servers-firewall" 114 | network = "default" 115 | 116 | allow { 117 | protocol = "tcp" 118 | ports = ["22"] 119 | } 120 | 121 | allow { 122 | protocol = "tcp" 123 | ports = ["80"] 124 | } 125 | 126 | allow { 127 | 
protocol = "tcp" 128 | ports = ["8080"] 129 | } 130 | 131 | source_ranges = ["0.0.0.0/0"] 132 | target_tags = ["rancher-servers"] 133 | } 134 | -------------------------------------------------------------------------------- /modules/gce/compute/variables.tf: -------------------------------------------------------------------------------- 1 | variable "name" {} 2 | variable "gce_project" {} 3 | variable "machine_type" {} 4 | variable "zone" {} 5 | 6 | variable "server_count" { 7 | default = "1" 8 | } 9 | 10 | variable "service_account_scopes" { 11 | type = "list" 12 | } 13 | 14 | variable "instance_metadata" { 15 | type = "map" 16 | default = {} 17 | } 18 | 19 | variable "instance_zone" { 20 | default = "us-central1-a" 21 | } 22 | 23 | variable "instance_tags" { 24 | type = "list" 25 | default = [] 26 | } 27 | 28 | variable "database_endpoint" { 29 | default = "" 30 | } 31 | 32 | variable "database_user" { 33 | default = "" 34 | } 35 | 36 | variable "database_password" { 37 | default = "" 38 | } 39 | 40 | variable "docker_version" { 41 | default = "docker-1.12.6" 42 | } 43 | 44 | variable "rancher_version" { 45 | default = "stable" 46 | } 47 | 48 | variable "gce-cloud-sql-instance-connection-name" {} 49 | 50 | variable "ssh_pub_key" {} 51 | -------------------------------------------------------------------------------- /modules/gce/database/main.tf: -------------------------------------------------------------------------------- 1 | // GCE Cloud SQL is a MySQL compatible persistence service 2 | resource "google_sql_database_instance" "master" { 3 | name = "${var.name}" 4 | region = "${var.region}" 5 | database_version = "MYSQL_5_6" 6 | 7 | settings { 8 | tier = "${var.db_tier}" 9 | disk_size = "${var.disk_size}" 10 | disk_type = "${var.disk_type}" 11 | 12 | ip_configuration { 13 | ipv4_enabled = true 14 | } 15 | } 16 | } 17 | 18 | resource "google_sql_database" "master" { 19 | name = "cattle" 20 | instance = "${google_sql_database_instance.master.name}" 21 | } 22 | 23 | resource "google_sql_user" "rancher" { 24 | name = "${var.db_user}" 25 | instance = "${google_sql_database_instance.master.name}" 26 | host = "%" 27 | password = "${var.db_pass}" 28 | } 29 | 30 | output "name" { 31 | value = "${google_sql_database_instance.master.name}" 32 | } 33 | -------------------------------------------------------------------------------- /modules/gce/database/variables.tf: -------------------------------------------------------------------------------- 1 | variable "name" {} 2 | variable "region" {} 3 | variable "db_tier" {} 4 | variable "disk_size" {} 5 | variable "disk_type" {} 6 | variable "db_user" {} 7 | variable "db_pass" {} 8 | -------------------------------------------------------------------------------- /scripts/plan: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo $PWD 4 | --------------------------------------------------------------------------------