├── .gitignore
├── Readme.md
├── aws-eks
│   ├── Readme.md
│   └── main.tf
├── azure-aks
│   ├── Readme.md
│   └── main.tf
├── digitalocean-doks
│   ├── Readme.md
│   └── main.tf
├── docs
│   └── vscode.png
├── gcloud-gke
│   ├── Readme.md
│   └── main.tf
├── ibmcloud-k8s
│   ├── Readme.md
│   └── main.tf
├── linode-lke
│   ├── Readme.md
│   └── main.tf
├── ovhcloud-k8s
│   ├── Readme.md
│   └── main.tf
└── scaleway-kapsule
    ├── Readme.md
    └── main.tf
/.gitignore:
--------------------------------------------------------------------------------
1 | .vscode/
2 | *.DS_Store
--------------------------------------------------------------------------------
/Readme.md:
--------------------------------------------------------------------------------
1 | # Coder OSS with Terraform
2 |
3 | The purpose of this repo is to demonstrate how remote development environments work using [Coder's OSS product](https://github.com/coder/coder). This repo should not be used for production; it is simply a proof of concept for what coding-in-a-browser feels like using Coder.
4 |
5 |
6 |
7 | ## Currently supported platforms
8 |
9 | Each subfolder in this repo is for a different platform.
10 |
11 | * Google GKE
12 | * Azure AKS
13 | * AWS EKS
14 | * Linode LKE
15 | * DigitalOcean DOKS
16 | * IBMCloud K8s
17 | * OVHCloud K8s
18 | * Scaleway K8s Kapsule
19 |
20 |
21 | ## Important Caveat
22 |
23 | In order to make this demo "1-click apply", I am using an anti-pattern where I create the k8s cluster and deploy onto it in the same Terraform configuration. This is a [known](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#stacking-with-managed-kubernetes-cluster-resources) anti-pattern, and the consequence is that you can get authentication errors while trying to update the namespace or Helm charts. For the most part, things have "just worked" for me. You can fix this by file-mounting a kubeconfig (ovhcloud-k8s shows how to do this).
24 |
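If you want to see what the kubeconfig approach looks like in provider terms, here is a minimal sketch, assuming your CI system writes or mounts a kubeconfig at a path you control (the path below is only an example):

```hcl
# Configure the providers from a kubeconfig file instead of from attributes of
# the cluster resource, so plans no longer depend on reading the live cluster.
provider "kubernetes" {
  config_path = "/mnt/workspace/kubeconfig" # example path; adjust for your CI
}

provider "helm" {
  kubernetes {
    config_path = "/mnt/workspace/kubeconfig"
  }
}
```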
--------------------------------------------------------------------------------
/aws-eks/Readme.md:
--------------------------------------------------------------------------------
1 | ## Getting Coder Installed
2 |
3 | 1. Create an [AWS Account](https://portal.aws.amazon.com/billing/signup#/start/email).
4 | 2. Create an IAM User with the Administrator policy. Generate access keys and grant it console access. See bottom for notes.
5 | 3. Fork this repo and set it up with [spacelift.io](https://spacelift.io/) or equivalent.
6 | 4. Set [AWS_ACCESS_KEY_ID](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) and [AWS_SECRET_ACCESS_KEY](https://registry.terraform.io/providers/hashicorp/aws/latest/docs)
7 | 5. Make sure to set the directory to aws-eks/
8 | 6. Run and apply the Terraform (takes 20-30 minutes).
9 |
10 | Note: EKS is unstable, and I had more problems with EKS than the rest combined. I've seen weird, unreproducible bugs that required deleting and trying again.
11 |
12 | ## Coder setup Instructions
13 |
14 | To get Coder set up initially, we need to give it an admin user and create a kubernetes template for our workspace.
15 |
16 | 1. Navigate to the DNS of the load balancer (AWS / EC2 / Load balancers).
17 | 2. Create the initial username and password.
18 | 3. Go to Templates, click Develop in Kubernetes, and click use template
19 | 4. Click create template (it will refresh and prompt for 3 more template inputs)
20 | 5. Set var.use_kubeconfig to false
21 | 6. Set var.namespace to coder (these two template inputs are sketched below)
22 | 7. Click create template
23 |
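The two inputs from steps 5 and 6 correspond to variables declared inside the Coder starter template itself, not in this repo's Terraform; roughly, as a sketch rather than the exact upstream template:

```hcl
# Sketch of the starter template's inputs; names match what the UI prompts for.
variable "use_kubeconfig" {
  type        = bool
  description = "Authenticate with a kubeconfig on the Coder host (true) or the in-cluster service account (false)."
  default     = false
}

variable "namespace" {
  type        = string
  description = "Kubernetes namespace in which to create workspace pods."
  default     = "coder"
}
```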
24 | With the admin user created and the template imported, we are ready to launch a workspace based on that template.
25 |
26 | 1. Click create workspace from the kubernetes template (templates/kubernetes/workspace)
27 | 2. Give it a name and click create
28 | 3. Within three minutes, the workspace should launch.
29 |
30 | From there, you can click the Terminal button to get an interactive session in the k8s container, or you can click code-server to open up a VSCode window and start coding!
31 |
32 | ## Why grant the Terraform user Console Access?
33 | Most of the kubernetes resources can only be managed if the user is granted permissions via a kubernetes cluster role binding. The [AWS docs](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) can step you through how to do this. It cannot be granted via IAM alone, except for the IAM user that originally created the EKS cluster. For this reason, it's easiest to grant the Terraform user console access so you can view the properties of the cluster. In production, you'd want to do this differently.
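If you'd rather manage that mapping in code, the EKS module used here (~> 19.0) can write the aws-auth ConfigMap for you. A hedged sketch, with a placeholder account ID and user name:

```hcl
module "eks" {
  # ...existing cluster configuration from main.tf...

  # Placeholder ARN/username; substitute the IAM user running Terraform.
  manage_aws_auth_configmap = true
  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::111122223333:user/terraform"
      username = "terraform"
      groups   = ["system:masters"]
    }
  ]
}
```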
34 |
35 | ## Why did I fork coder's helm chart?
36 | Due to a limitation in AWS, I needed to make a [change](https://github.com/coder/coder/pull/5448) to the Helm chart. Until this is fully released, I've forked the helm chart locally.
37 |
38 | ## The VPC creation failed
39 | There's a rare AWS bug where the VPC creation reports a failure but the VPC gets created anyway. It's annoying, but you have to delete the VPC and try again.
40 |
41 | ## The Postgres pod is failing to get its Persistent Volume created
42 | If you've tried to apply multiple times, there could be a duplicate volume. There's a weird bug in the way Terraform destroys the Helm chart that can leave the Persistent Volume around. Check whether the volume already exists and manually delete it.
43 |
44 | ## Authentication to Kubernetes is failing
45 | I've found AWS to be a bit unstable with its auth. I had to use different auth techniques depending on whether I was using Spacelift or Terraform Cloud. I've documented both techniques in main.tf in the comments.
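For reference, the token-based variant looks roughly like this (adapted from the commented-out lines in main.tf):

```hcl
data "aws_eks_cluster_auth" "cluster_auth" {
  name = "coder"
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.cluster_auth.token
}
```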
46 |
--------------------------------------------------------------------------------
/aws-eks/main.tf:
--------------------------------------------------------------------------------
1 | terraform {
2 | required_providers {
3 | aws = {
4 | source = "hashicorp/aws"
5 | version = "~> 4.0"
6 | }
7 | }
8 | }
9 |
10 | variable "coder_version" {
11 | default = "0.13.6"
12 | }
13 |
14 | # Change this password away from the default if you are doing
15 | # anything more than a testing stack.
16 | variable "db_password" {
17 | default = "coder"
18 | }
19 |
20 | ###############################################################
21 | # VPC configuration
22 | ###############################################################
23 | # Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
24 | provider "aws" {
25 | region = "us-east-1"
26 | }
27 | module "vpc" {
28 | source = "terraform-aws-modules/vpc/aws"
29 | name = "coder"
30 |
31 | enable_nat_gateway = true
32 | enable_dns_hostnames = true
33 |
34 | cidr = "10.0.0.0/16"
35 | azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
36 | private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
37 | public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
38 |
39 | private_subnet_tags = {
40 | "kubernetes.io/cluster/coder" : "shared"
41 | }
42 |
43 | public_subnet_tags = {
44 | "kubernetes.io/role/elb" : 1
45 | }
46 | }
47 |
48 | ###############################################################
49 | # EKS configuration
50 | ###############################################################
51 | module "eks" {
52 | source = "terraform-aws-modules/eks/aws"
53 | version = "~> 19.0"
54 |
55 | vpc_id = module.vpc.vpc_id
56 | subnet_ids = module.vpc.private_subnets
57 |
58 | cluster_name = "coder"
59 | cluster_version = "1.24"
60 | cluster_endpoint_public_access = true
61 | cluster_addons = {
62 | coredns = {
63 | most_recent = true
64 | }
65 | kube-proxy = {
66 | most_recent = true
67 | }
68 | vpc-cni = {
69 | most_recent = true
70 | }
71 | aws-ebs-csi-driver = {
72 | most_recent = true
73 | }
74 | }
75 |
76 | eks_managed_node_groups = {
77 | default = {
78 | min_size = 1
79 | max_size = 1
80 | desired_size = 1
81 |
82 | instance_types = ["t3.xlarge"]
83 | capacity_type = "ON_DEMAND"
84 |
85 | # Needed by the aws-ebs-csi-driver
86 | iam_role_additional_policies = {
87 | AmazonEBSCSIDriverPolicy = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
88 | }
89 | }
90 | }
91 | }
92 |
93 | ###############################################################
94 | # K8s configuration
95 | ###############################################################
96 | # If you are having trouble with the exec command, you can try the token technique.
97 | # Put token = data.aws_eks_cluster_auth.cluster_auth.token in place of exec
98 | # data "aws_eks_cluster_auth" "cluster_auth" {
99 | # name = "coder"
100 | # }
101 |
102 | provider "kubernetes" {
103 | host = module.eks.cluster_endpoint
104 | cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
105 | #token = data.aws_eks_cluster_auth.cluster_auth.token
106 | exec {
107 | api_version = "client.authentication.k8s.io/v1beta1"
108 | args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
109 | command = "aws"
110 | }
111 | }
112 |
113 | resource "kubernetes_namespace" "coder_namespace" {
114 | metadata {
115 | name = "coder"
116 | }
117 | }
118 |
119 | ###############################################################
120 | # Helm configuration
121 | ###############################################################
122 | provider "helm" {
123 | kubernetes {
124 | host = module.eks.cluster_endpoint
125 | cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
126 | exec {
127 | api_version = "client.authentication.k8s.io/v1beta1"
128 | args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
129 | command = "aws"
130 | }
131 | }
132 | }
133 |
134 | resource "helm_release" "pg_cluster" {
135 | name = "postgresql"
136 | namespace = kubernetes_namespace.coder_namespace.metadata.0.name
137 |
138 | repository = "https://charts.bitnami.com/bitnami"
139 | chart = "postgresql"
140 |
141 | set {
142 | name = "auth.username"
143 | value = "coder"
144 | }
145 |
146 | set {
147 | name = "auth.password"
148 |     value = var.db_password
149 | }
150 |
151 | set {
152 | name = "auth.database"
153 | value = "coder"
154 | }
155 |
156 | set {
157 | name = "persistence.size"
158 | value = "10Gi"
159 | }
160 | }
161 |
162 | resource "helm_release" "coder" {
163 | name = "coder"
164 | namespace = kubernetes_namespace.coder_namespace.metadata.0.name
165 |
166 | chart = "https://github.com/coder/coder/releases/download/v${var.coder_version}/coder_helm_${var.coder_version}.tgz"
167 |
168 | values = [
169 |       <<EOT
--------------------------------------------------------------------------------
/ibmcloud-k8s/Readme.md:
--------------------------------------------------------------------------------
10 | 1. ... --> coder --> Kubernetes Dashboard (upper right)
11 | 2. Change the namespace to coder (upper left)
12 | 3. Go to Services and the public IP should be on the right.
13 | 4. Create the initial username and password.
14 | 5. Go to Templates, click Develop in Kubernetes, and click use template
15 | 6. Click create template (it will refresh and prompt for 3 more template inputs)
16 | 7. Set var.use_kubeconfig to false
17 | 8. Set var.namespace to coder
18 | 9. Click create template
19 |
20 | With the admin user created and the template imported, we are ready to launch a workspace based on that template.
21 |
22 | 1. Click create workspace from the kubernetes template (templates/kubernetes/workspace)
23 | 2. Give it a name and click create
24 | 3. Within three minutes, the workspace should launch.
25 |
26 | From there, you can click the Terminal button to get an interactive session in the k8s container, or you can click code-server to open up a VSCode window and start coding!
27 |
28 | ## In Terraform: dial tcp 127.0.0.1:80: connect: connection refused
29 |
30 | This is a really obnoxious Terraform bug. The only way to fix it is to manually delete things in the console/state until you get back to a good state, or to fork this repo and change how the providers authenticate.
31 |
32 | Terraform does an "optimization" where it will skip calculating a data resource if it depends on an object that needs to be recreated. For this repo, that data resource is the one that uses your IBMCloud credentials to pull the k8s config for deploying the helm charts. If you modify the k8s cluster, Terraform can skip recalculating the data source. When the provider tries to run, it doesn't find the config, and defaults to 127.0.0.1.
33 |
34 | There are many GitHub issues filed on this topic, and they all come down to the same solution: install IBM Cloud's CLI and Kubernetes Service plugin in your CI system, then shell out to the CLI to generate the kubeconfig. Unfortunately, the IBM Cloud CLI is not installed by default on most Terraform CI systems, so you'd have to run on a custom Docker image, and the rabbit hole grows.
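If you do go down that road, the providers stop reading the data source and instead point at the file the CLI wrote. A rough sketch (the exact path depends on your CI image):

```hcl
# Kubeconfig generated earlier in the pipeline with:
#   ibmcloud ks cluster config --cluster coder
provider "kubernetes" {
  config_path = "~/.kube/config" # example location; match your CI setup
}
```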
35 |
36 | ## Workspace launches but code-server/VSCode doesn't work?
37 |
38 | IBM does weird stuff with its volumes. Use the Terminal to `chown` your home directory, and it should work.
39 |
--------------------------------------------------------------------------------
/ibmcloud-k8s/main.tf:
--------------------------------------------------------------------------------
1 | terraform {
2 | required_providers {
3 | ibm = {
4 | source = "IBM-Cloud/ibm"
5 | version = ">= 1.12.0"
6 | }
7 | }
8 | }
9 |
10 | variable "coder_version" {
11 | default = "0.13.6"
12 | }
13 |
14 | # Change this password away from the default if you are doing
15 | # anything more than a testing stack.
16 | variable "db_password" {
17 | default = "coder"
18 | }
19 |
20 | ###############################################################
21 | # Networking configuration
22 | ###############################################################
23 | # Set IC_API_KEY
24 | provider "ibm" {
25 | }
26 |
27 | resource "ibm_is_vpc" "coder" {
28 | name = "codervpc"
29 | }
30 |
31 | resource "ibm_is_subnet" "coder" {
32 | name = "codersubnet"
33 | vpc = ibm_is_vpc.coder.id
34 | zone = "us-south-1"
35 | total_ipv4_address_count = 256
36 | }
37 |
38 | resource "ibm_is_public_gateway" "coder" {
39 | name = "coder-gateway"
40 | vpc = ibm_is_vpc.coder.id
41 | zone = "us-south-1"
42 | }
43 |
44 | resource "ibm_is_subnet_public_gateway_attachment" "coder" {
45 | subnet = ibm_is_subnet.coder.id
46 | public_gateway = ibm_is_public_gateway.coder.id
47 | }
48 | ###############################################################
49 | # K8s configuration
50 | ###############################################################
51 | # If the node gets stuck on waiting for the VPE Gateway, then
52 | # open the cloud shell and run:
53 | # ~$ ibmcloud ks cluster master refresh -c coder
54 | resource "ibm_container_vpc_cluster" "coder" {
55 | name = "coder"
56 | vpc_id = ibm_is_vpc.coder.id
57 | flavor = "bx2.4x16" # ibmcloud ks flavors --zone us-south-1
58 | worker_count = 2
59 |
60 | zones {
61 | subnet_id = ibm_is_subnet.coder.id
62 | name = ibm_is_subnet.coder.zone
63 | }
64 |
65 | depends_on = [
66 | ibm_is_subnet_public_gateway_attachment.coder
67 | ]
68 | }
69 |
70 | data "ibm_container_cluster_config" "coder" {
71 | cluster_name_id = ibm_container_vpc_cluster.coder.name
72 | admin = true
73 | }
74 |
75 | # To authenticate with kubectl, use IBM Cloud Shell
76 | # ~$ ibmcloud ks cluster config --cluster coder
77 | # ~$ kubectl get pods -n coder
78 | provider "kubernetes" {
79 | host = data.ibm_container_cluster_config.coder.host
80 | client_certificate = data.ibm_container_cluster_config.coder.admin_certificate
81 | client_key = data.ibm_container_cluster_config.coder.admin_key
82 | cluster_ca_certificate = data.ibm_container_cluster_config.coder.ca_certificate
83 | }
84 |
85 | resource "kubernetes_namespace" "coder_namespace" {
86 | metadata {
87 | name = "coder"
88 | }
89 | }
90 |
91 | ###############################################################
92 | # Coder configuration
93 | ###############################################################
94 | provider "helm" {
95 | kubernetes {
96 | host = data.ibm_container_cluster_config.coder.host
97 | client_certificate = data.ibm_container_cluster_config.coder.admin_certificate
98 | client_key = data.ibm_container_cluster_config.coder.admin_key
99 | cluster_ca_certificate = data.ibm_container_cluster_config.coder.ca_certificate
100 | }
101 | }
102 |
103 | # kubectl logs postgresql-0 -n coder
104 | resource "helm_release" "pg_cluster" {
105 | name = "postgresql"
106 | namespace = kubernetes_namespace.coder_namespace.metadata.0.name
107 |
108 | repository = "https://charts.bitnami.com/bitnami"
109 | chart = "postgresql"
110 |
111 | # The default IBM storage class mounts the directory in as
112 | # owned by nobody, causes Postgres to fail. Simplest fix is
113 | # to use a different storage type.
114 | # https://github.com/bitnami/charts/issues/4737
115 | set {
116 | name = "primary.persistence.storageClass"
117 | value = "ibmc-vpc-block-custom"
118 | }
119 |
120 | set {
121 | name = "auth.username"
122 | value = "coder"
123 | }
124 |
125 | set {
126 | name = "auth.password"
127 |     value = var.db_password
128 | }
129 |
130 | set {
131 | name = "auth.database"
132 | value = "coder"
133 | }
134 |
135 | set {
136 | name = "persistence.size"
137 | value = "10Gi"
138 | }
139 | }
140 |
141 | resource "helm_release" "coder" {
142 | name = "coder"
143 | namespace = kubernetes_namespace.coder_namespace.metadata.0.name
144 |
145 | chart = "https://github.com/coder/coder/releases/download/v${var.coder_version}/coder_helm_${var.coder_version}.tgz"
146 |
147 | values = [
148 |       <<EOT
--------------------------------------------------------------------------------
/ovhcloud-k8s/Readme.md:
--------------------------------------------------------------------------------
15 | 1. ... --> Load Balancer and copy the IP address on the far right.
16 | 2. Create the initial username and password.
17 | 3. Go to Templates, click Develop in Kubernetes, and click use template
18 | 4. Click create template (it will refresh and prompt for 3 more template inputs)
19 | 5. Set var.use_kubeconfig to false
20 | 6. Set var.namespace to coder
21 | 7. Click create template
22 |
23 | With the admin user created and the template imported, we are ready to launch a workspace based on that template.
24 |
25 | 1. Click create workspace from the kubernetes template (templates/kubernetes/workspace)
26 | 2. Give it a name and click create
27 | 3. Within three minutes, the workspace should launch.
28 |
29 | From there, you can click the Terminal button to get an interactive session in the k8s container, or you can click code-server to open up a VSCode window and start coding!
--------------------------------------------------------------------------------
/ovhcloud-k8s/main.tf:
--------------------------------------------------------------------------------
1 | terraform {
2 | required_providers {
3 | ovh = {
4 | source = "ovh/ovh"
5 | }
6 | }
7 | }
8 |
9 | variable "coder_version" {
10 | default = "0.13.6"
11 | }
12 |
13 | # Change this password away from the default if you are doing
14 | # anything more than a testing stack.
15 | variable "db_password" {
16 | default = "coder"
17 | }
18 |
19 | ###############################################################
20 | # K8s configuration
21 | ###############################################################
22 | # Set OVH_APPLICATION_KEY, OVH_APPLICATION_SECRET, OVH_CONSUMER_KEY
23 | # https://api.us.ovhcloud.com/createToken/?GET=/*&POST=/*&PUT=/*&DELETE=/*
24 | # Set OVH_CLOUD_PROJECT_SERVICE to your Project ID
25 | provider "ovh" {
26 | endpoint = "ovh-us"
27 | }
28 |
29 | resource "ovh_cloud_project_kube" "coder" {
30 | name = "coder_cluster"
31 | region = "US-EAST-VA-1"
32 | }
33 |
34 |
35 | resource "ovh_cloud_project_kube_nodepool" "coder" {
36 | kube_id = ovh_cloud_project_kube.coder.id
37 | name = "coder-pool" //Warning: "_" char is not allowed!
38 | flavor_name = "d2-8"
39 | desired_nodes = 2
40 | max_nodes = 2
41 | min_nodes = 2
42 | }
43 |
44 | provider "kubernetes" {
45 | host = yamldecode(ovh_cloud_project_kube.coder.kubeconfig).clusters[0].cluster.server
46 | token = yamldecode(ovh_cloud_project_kube.coder.kubeconfig).users[0].user.token
47 | cluster_ca_certificate = base64decode(yamldecode(ovh_cloud_project_kube.coder.kubeconfig).clusters[0].cluster.certificate-authority-data)
48 |
49 | }
50 |
51 | resource "kubernetes_namespace" "coder_namespace" {
52 | metadata {
53 | name = "coder"
54 | }
55 |
56 | depends_on = [
57 | ovh_cloud_project_kube_nodepool.coder
58 | ]
59 | }
60 |
61 | ###############################################################
62 | # Coder configuration
63 | ###############################################################
64 | provider "helm" {
65 | kubernetes {
66 | host = yamldecode(ovh_cloud_project_kube.coder.kubeconfig).clusters[0].cluster.server
67 | token = yamldecode(ovh_cloud_project_kube.coder.kubeconfig).users[0].user.token
68 | cluster_ca_certificate = base64decode(yamldecode(ovh_cloud_project_kube.coder.kubeconfig).clusters[0].cluster.certificate-authority-data)
69 | }
70 | }
71 |
72 | # kubectl logs postgresql-0 -n coder
73 | resource "helm_release" "pg_cluster" {
74 | name = "postgresql"
75 | namespace = kubernetes_namespace.coder_namespace.metadata.0.name
76 |
77 | repository = "https://charts.bitnami.com/bitnami"
78 | chart = "postgresql"
79 |
80 | set {
81 | name = "auth.username"
82 | value = "coder"
83 | }
84 |
85 | set {
86 | name = "auth.password"
87 | value = var.db_password
88 | }
89 |
90 | set {
91 | name = "auth.database"
92 | value = "coder"
93 | }
94 |
95 | set {
96 | name = "persistence.size"
97 | value = "10Gi"
98 | }
99 | }
100 |
101 | resource "helm_release" "coder" {
102 | name = "coder"
103 | namespace = kubernetes_namespace.coder_namespace.metadata.0.name
104 |
105 | chart = "https://github.com/coder/coder/releases/download/v${var.coder_version}/coder_helm_${var.coder_version}.tgz"
106 |
107 | values = [
108 | <