├── .gitignore
├── README.md
├── eks
│   ├── cluster.tf
│   ├── cluster_role.tf
│   ├── cluster_sg.tf
│   ├── node_group.tf
│   ├── node_group_role.tf
│   ├── node_sg.tf
│   └── vars.tf
├── main.tf
├── provider.tf
├── raw-manifests
│   ├── aws-auth.yaml
│   ├── pod.yaml
│   └── service.yaml
├── variables.tf
└── vpc
    ├── control_plane_sg.tf
    ├── data_plane_sg.tf
    ├── nat_gw.tf
    ├── output.tf
    ├── public_sg.tf
    ├── vars.tf
    └── vpc.tf

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.DS_Store
.idea
*.iml
**/.terragrunt-cache/*

# Local .terraform directories and files
**/.terraform/*
.terraform*

# .tfstate files
*.tfstate
*.tfstate.*
.idea/*

# .tfvars files
sensitive.tfvars

# Crash log files
crash.log

# Ignore override files as they are usually used to override resources locally
# and so are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using a negated pattern
#
# !example_override.tf

# Ignore tfplan files produced by: terraform plan -out=tfplan
# example: *tfplan*

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Amazon EKS Cluster
This repository contains the source code to provision an EKS cluster in AWS using Terraform.

## Prerequisites
* AWS account
* AWS CLI profile configured on your local machine
* [Terraform](https://www.terraform.io/)
* [kubectl](https://kubernetes.io/docs/tasks/tools/)

## Project Structure

```
├── README.md
├── eks
|   ├── cluster.tf
|   ├── cluster_role.tf
|   ├── cluster_sg.tf
|   ├── node_group.tf
|   ├── node_group_role.tf
|   ├── node_sg.tf
|   └── vars.tf
├── main.tf
├── provider.tf
├── raw-manifests
|   ├── aws-auth.yaml
|   ├── pod.yaml
|   └── service.yaml
├── variables.tf
└── vpc
    ├── control_plane_sg.tf
    ├── data_plane_sg.tf
    ├── nat_gw.tf
    ├── output.tf
    ├── public_sg.tf
    ├── vars.tf
    └── vpc.tf
```

## Remote Backend State Configuration
To configure remote backend state for your infrastructure, create an S3 bucket and a DynamoDB table before running *terraform init*. If you prefer local state persistence instead, update *provider.tf* accordingly and skip creating the S3 bucket and DynamoDB table.

### Create S3 Bucket for State Backend
```aws s3api create-bucket --bucket <bucket-name> --region <region> --create-bucket-configuration LocationConstraint=<region>```

### Create DynamoDB Table for State Locking
```aws dynamodb create-table --table-name <table-name> --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1```
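The backend itself is configured in *provider.tf*, which is not reproduced in this listing. As a minimal sketch — assuming the bucket, table, and region placeholders are replaced with the values you used above, and the state `key` is only an example path — an S3 backend block looks like this:

```
terraform {
  backend "s3" {
    bucket         = "<bucket-name>"                  # S3 bucket created above
    key            = "eks-cluster/terraform.tfstate"  # example state object key
    region         = "<region>"
    dynamodb_table = "<table-name>"                   # DynamoDB table created above for state locking
    encrypt        = true
  }
}
```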
## Provision Infrastructure
Review *main.tf* and update the node group size configuration (the desired, maximum, and minimum number of nodes). When you're ready, run the following commands:
1. `terraform init` - Initialize the project, set up the state persistence (local or remote), and download the required provider plugins.
2. `terraform plan` - Print the execution plan for reaching the desired state without changing any infrastructure.
3. `terraform apply` - Print the planned infrastructure changes and, after confirmation, execute the plan and provision the resources.
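Note that *variables.tf* declares `profile` and `cluster_name` without defaults, so you must supply values for them, for example through a variables file such as the git-ignored *sensitive.tfvars*. The values below are placeholders:

```
# sensitive.tfvars (example values - replace with your own)
profile      = "my-aws-profile"
cluster_name = "my-eks-cluster"
region       = "eu-west-1"
```

You can then pass the file explicitly, e.g. `terraform plan -var-file=sensitive.tfvars` and `terraform apply -var-file=sensitive.tfvars`.

## Connect To Cluster
Using the same AWS account profile that provisioned the infrastructure, you can connect to your cluster by updating your local kube config with the following command:

`aws eks --region <region> update-kubeconfig --name <cluster-name>`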
## Map IAM Users & Roles to EKS Cluster
If you want to map additional IAM users or roles to your Kubernetes cluster, you have to update the `aws-auth` *ConfigMap* by adding the respective ARN and a Kubernetes username value to the `mapRoles` or `mapUsers` property as an array item.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<node-instance-role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::<account-id>:role/ops-role
      username: ops-role
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/developer-user
      username: developer-user
```

When you are done with your modifications to the `aws-auth` ConfigMap, apply it with `kubectl apply -f aws-auth.yaml`. An example of this manifest file exists in the *raw-manifests* directory.

For a more in-depth explanation, you can read [this post](https://medium.com/swlh/secure-an-amazon-eks-cluster-with-iam-rbac-b78be0cd95c9).

## Deploy Application
To deploy a simple application to your cluster, change into the *raw-manifests* directory and apply the *pod.yaml* and *service.yaml* manifest files to create a Pod and expose the application with a LoadBalancer Service.
1. `kubectl apply -f service.yaml`
2. `kubectl apply -f pod.yaml`
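Once both manifests are applied, `kubectl get pod express-test` should show the Pod in a `Running` state, and `kubectl get service express-test-svc` should show the hostname of the load balancer that AWS provisions under `EXTERNAL-IP`. When the load balancer is ready, the application should respond on port 8080 of that hostname.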
--------------------------------------------------------------------------------
/eks/cluster.tf:
--------------------------------------------------------------------------------
resource "aws_eks_cluster" "main" {
  name     = var.eks_cluster_name
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    security_group_ids      = [aws_security_group.eks_cluster.id, aws_security_group.eks_nodes.id]
    endpoint_private_access = var.endpoint_private_access
    endpoint_public_access  = var.endpoint_public_access
    subnet_ids              = var.eks_cluster_subnet_ids
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
  # Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
  depends_on = [
    aws_iam_role_policy_attachment.aws_eks_cluster_policy
  ]
}

### External CLI: kubergrunt fetches the root CA thumbprint of the cluster's OIDC issuer.
### The kubergrunt binary must be installed on the machine running Terraform.
data "external" "thumb" {
  program = ["kubergrunt", "eks", "oidc-thumbprint", "--issuer-url", aws_eks_cluster.main.identity.0.oidc.0.issuer]
}

resource "aws_iam_openid_connect_provider" "main" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.external.thumb.result.thumbprint]
  url             = aws_eks_cluster.main.identity.0.oidc.0.issuer
}
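If you would rather not depend on the kubergrunt binary, a common alternative (not used in this repository) is the hashicorp/tls provider's `tls_certificate` data source, which retrieves the same thumbprint without shelling out. A sketch of a drop-in replacement for the two blocks above:

```
data "tls_certificate" "oidc" {
  url = aws_eks_cluster.main.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "main" {
  client_id_list  = ["sts.amazonaws.com"]
  # SHA-1 fingerprint of the issuer's certificate, as expected by IAM
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.main.identity[0].oidc[0].issuer
}
```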
--------------------------------------------------------------------------------
/eks/cluster_role.tf:
--------------------------------------------------------------------------------
# https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html

resource "aws_iam_role" "eks_cluster" {
  name = "${var.eks_cluster_name}-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "aws_eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

--------------------------------------------------------------------------------
/raw-manifests/aws-auth.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<cluster-name>-worker
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/<user-name>
      username: <user-name>
      groups:
        - system:masters

--------------------------------------------------------------------------------
/raw-manifests/pod.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: express-test
  labels:
    app: express-test
spec:
  containers:
    - name: express-test
      image: lukondefmwila/express-test:latest
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
      ports:
        - containerPort: 8080

--------------------------------------------------------------------------------
/raw-manifests/service.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: express-test-svc
spec:
  selector:
    app: express-test
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

--------------------------------------------------------------------------------
/variables.tf:
--------------------------------------------------------------------------------
variable "profile" {
  description = "AWS profile"
  type        = string
}

variable "region" {
  description = "AWS region to deploy to"
  type        = string
  default     = "eu-west-1"
}

variable "cluster_name" {
  description = "EKS cluster name"
  type        = string
}
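The root *main.tf* and the *eks* module's *vars.tf* are not reproduced in this listing. As a rough, assumption-laden sketch of how the root module might wire the two modules together — the node group sizing argument names below are guesses, so adjust them to whatever *eks/vars.tf* actually declares:

```
# Hypothetical root main.tf wiring (argument names not confirmed by this listing)
module "vpc" {
  source           = "./vpc"
  eks_cluster_name = var.cluster_name
  vpc_tag_name     = "eks-cluster-vpc"
  region           = var.region
}

module "eks" {
  source                  = "./eks"
  eks_cluster_name        = var.cluster_name
  eks_cluster_subnet_ids  = flatten([module.vpc.private_subnet_ids, module.vpc.public_subnet_ids])
  endpoint_private_access = true
  endpoint_public_access  = true
  # The eks module presumably also takes the VPC ID for its security groups (variable name unknown).

  # Node group sizing referred to in the README (argument names assumed)
  desired_size = 2
  max_size     = 3
  min_size     = 1
}
```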
--------------------------------------------------------------------------------
/vpc/control_plane_sg.tf:
--------------------------------------------------------------------------------
# Security group for the control plane
resource "aws_security_group" "control_plane_sg" {
  name   = "k8s-control-plane-sg"
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    Name = "k8s-control-plane-sg"
  }
}

# Security group traffic rules
## Ingress rule
resource "aws_security_group_rule" "control_plane_inbound" {
  security_group_id = aws_security_group.control_plane_sg.id
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = flatten([var.private_subnet_cidr_blocks, var.public_subnet_cidr_blocks])
}

## Egress rule
resource "aws_security_group_rule" "control_plane_outbound" {
  security_group_id = aws_security_group.control_plane_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 0 # must be 0 when protocol is "-1" (all traffic)
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

--------------------------------------------------------------------------------
/vpc/data_plane_sg.tf:
--------------------------------------------------------------------------------
# Security group for the data plane
resource "aws_security_group" "data_plane_sg" {
  name   = "k8s-data-plane-sg"
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    Name = "k8s-data-plane-sg"
  }
}

# Security group traffic rules
## Ingress rules
resource "aws_security_group_rule" "nodes" {
  description       = "Allow nodes to communicate with each other"
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "ingress"
  from_port         = 0
  to_port           = 0 # must be 0 when protocol is "-1" (all traffic)
  protocol          = "-1"
  cidr_blocks       = flatten([var.private_subnet_cidr_blocks, var.public_subnet_cidr_blocks])
}

resource "aws_security_group_rule" "nodes_inbound" {
  description       = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "ingress"
  from_port         = 1025
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = flatten([var.private_subnet_cidr_blocks])
}

## Egress rule
resource "aws_security_group_rule" "node_outbound" {
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

--------------------------------------------------------------------------------
/vpc/nat_gw.tf:
--------------------------------------------------------------------------------
# Create Elastic IP
resource "aws_eip" "main" {
  vpc = true
}

# Create NAT Gateway
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.main.id
  subnet_id     = aws_subnet.public_subnet[0].id

  tags = {
    Name = "NAT Gateway for Custom Kubernetes Cluster"
  }
}

# Add a default route through the NAT Gateway to the VPC's main route table (used by the private subnets)
resource "aws_route" "main" {
  route_table_id         = aws_vpc.custom_vpc.default_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main.id
}

--------------------------------------------------------------------------------
/vpc/output.tf:
--------------------------------------------------------------------------------
output "vpc_arn" {
  value = aws_vpc.custom_vpc.arn
}

output "vpc_id" {
  value = aws_vpc.custom_vpc.id
}

output "private_subnet_ids" {
  value = aws_subnet.private_subnet.*.id
}

output "public_subnet_ids" {
  value = aws_subnet.public_subnet.*.id
}

output "control_plane_sg_security_group_id" {
  value = aws_security_group.control_plane_sg.id
}

output "data_plane_sg_security_group_id" {
  value = aws_security_group.data_plane_sg.id
}

output "public_subnet_security_group_id" {
  value = aws_security_group.public_sg.id
}

--------------------------------------------------------------------------------
/vpc/public_sg.tf:
--------------------------------------------------------------------------------
# Security group for public subnet resources
resource "aws_security_group" "public_sg" {
  name   = "public-sg"
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    Name = "public-sg"
  }
}

# Security group traffic rules
## Ingress rules
resource "aws_security_group_rule" "sg_ingress_public_443" {
  security_group_id = aws_security_group.public_sg.id
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "sg_ingress_public_80" {
  security_group_id = aws_security_group.public_sg.id
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

## Egress rule
resource "aws_security_group_rule" "sg_egress_public" {
  security_group_id = aws_security_group.public_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

--------------------------------------------------------------------------------
/vpc/vars.tf:
--------------------------------------------------------------------------------
variable "eks_cluster_name" {
  description = "The name of the EKS cluster"
  type        = string
}

variable "vpc_tag_name" {
  type        = string
  description = "Name tag for the VPC"
}

variable "route_table_tag_name" {
  type        = string
  default     = "main"
  description = "Name tag for the route table"
}

variable "vpc_cidr_block" {
  type        = string
  default     = "10.0.0.0/16"
  description = "CIDR block range for the VPC"
}

variable "private_subnet_cidr_blocks" {
  type        = list(string)
  default     = ["10.0.0.0/24", "10.0.1.0/24"]
  description = "CIDR block ranges for the private subnets"
}

variable "public_subnet_cidr_blocks" {
  type        = list(string)
  default     = ["10.0.2.0/24", "10.0.3.0/24"]
  description = "CIDR block ranges for the public subnets"
}

variable "private_subnet_tag_name" {
  type        = string
  default     = "Custom Kubernetes cluster private subnet"
  description = "Name tag for the private subnets"
}

variable "public_subnet_tag_name" {
  type        = string
  default     = "Custom Kubernetes cluster public subnet"
  description = "Name tag for the public subnets"
}

variable "availability_zones" {
  type        = list(string)
  default     = ["eu-west-1a", "eu-west-1b"]
  description = "List of availability zones for the selected region"
}

variable "region" {
  description = "AWS region to deploy to"
  type        = string
}

--------------------------------------------------------------------------------
/vpc/vpc.tf:
--------------------------------------------------------------------------------
### VPC Network Setup
resource "aws_vpc" "custom_vpc" {
  # Your VPC must have DNS hostname and DNS resolution support.
  # Otherwise, your worker nodes cannot register with your cluster.

  cidr_block           = var.vpc_cidr_block
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name                                            = var.vpc_tag_name
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "shared"
  }
}

# Create the private subnets
resource "aws_subnet" "private_subnet" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.custom_vpc.id
  cidr_block        = element(var.private_subnet_cidr_blocks, count.index)
  availability_zone = element(var.availability_zones, count.index)

  tags = {
    Name                                            = var.private_subnet_tag_name
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"               = 1
  }
}

# Create the public subnets
resource "aws_subnet" "public_subnet" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.custom_vpc.id
  cidr_block        = element(var.public_subnet_cidr_blocks, count.index)
  availability_zone = element(var.availability_zones, count.index)

  tags = {
    Name                                            = var.public_subnet_tag_name
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "shared"
    "kubernetes.io/role/elb"                        = 1
  }

  map_public_ip_on_launch = true
}

# Create IGW for the public subnets
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    Name = var.vpc_tag_name
  }
}

# Route the public subnet traffic through the IGW
resource "aws_route_table" "main" {
  vpc_id = aws_vpc.custom_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = var.route_table_tag_name
  }
}

# Route table and subnet associations
resource "aws_route_table_association" "internet_access" {
  count          = length(var.availability_zones)
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.main.id
}

--------------------------------------------------------------------------------