├── .terraform-version ├── CONTRIBUTING.md ├── examples ├── irsa │ ├── variables.tf │ ├── outputs.tf │ ├── locals.tf │ ├── cluster-autoscaler-chart-values.yaml │ ├── irsa.tf │ ├── main.tf │ └── README.md ├── spot_instances │ ├── variables.tf │ ├── outputs.tf │ └── main.tf ├── README.md ├── basic │ ├── outputs.tf │ ├── variables.tf │ └── main.tf └── managed_node_groups │ ├── outputs.tf │ ├── variables.tf │ └── main.tf ├── modules ├── node_groups │ ├── versions.tf │ ├── data.tf │ ├── outputs.tf │ ├── random.tf │ ├── locals.tf │ ├── variables.tf │ ├── node_groups.tf │ └── README.md ├── aws_auth │ ├── outputs.tf │ ├── versions.tf │ ├── templates │ │ └── worker-role.tpl │ ├── aws_auth.tf │ ├── variables.tf │ └── README.md ├── control_plane │ ├── versions.tf │ ├── kubectl.tf │ ├── locals.tf │ ├── templates │ │ └── kubeconfig.tpl │ ├── irsa.tf │ ├── data.tf │ ├── outputs.tf │ ├── cluster.tf │ ├── README.md │ └── variables.tf └── worker_groups │ ├── versions.tf │ ├── templates │ ├── userdata.sh.tpl │ └── userdata_windows.tpl │ ├── random.tf │ ├── data.tf │ ├── outputs.tf │ ├── variables.tf │ ├── README.md │ ├── locals.tf │ └── worker_groups.tf ├── versions.tf ├── .pre-commit-config.yaml ├── Makefile ├── .editorconfig ├── .github └── ISSUE_TEMPLATE │ ├── feature_request.md │ ├── bug_report.md │ └── user_story.md ├── .chglog ├── config.yml └── CHANGELOG.tpl.md ├── .gitignore ├── outputs.tf ├── main.tf ├── CHANGELOG.md ├── variables.tf └── README.md /.terraform-version: -------------------------------------------------------------------------------- 1 | 0.12.20 2 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | -------------------------------------------------------------------------------- /examples/irsa/variables.tf: -------------------------------------------------------------------------------- 1 | variable "region" { 
2 | default = "us-west-2" 3 | } 4 | -------------------------------------------------------------------------------- /examples/spot_instances/variables.tf: -------------------------------------------------------------------------------- 1 | variable "region" { 2 | default = "us-west-2" 3 | } 4 | 5 | -------------------------------------------------------------------------------- /examples/irsa/outputs.tf: -------------------------------------------------------------------------------- 1 | output "aws_account_id" { 2 | value = data.aws_caller_identity.current.account_id 3 | } 4 | -------------------------------------------------------------------------------- /modules/node_groups/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 0.12.9" 3 | 4 | required_providers { 5 | aws = ">= 2.52.0" 6 | random = ">= 2.1" 7 | } 8 | } 9 | -------------------------------------------------------------------------------- /modules/aws_auth/outputs.tf: -------------------------------------------------------------------------------- 1 | output "config_map_aws_auth" { 2 | description = "A kubernetes configuration to authenticate to this EKS cluster." 
3 | value = kubernetes_config_map.aws_auth.* 4 | } 5 | -------------------------------------------------------------------------------- /examples/irsa/locals.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | cluster_name = "test-eks-irsa" 3 | k8s_service_account_namespace = "kube-system" 4 | k8s_service_account_name = "cluster-autoscaler-aws-cluster-autoscaler" 5 | } 6 | -------------------------------------------------------------------------------- /modules/control_plane/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 0.12.9" 3 | 4 | required_providers { 5 | aws = ">= 2.52.0" 6 | local = ">= 1.2" 7 | template = ">= 2.1" 8 | } 9 | } 10 | -------------------------------------------------------------------------------- /modules/worker_groups/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 0.12.9" 3 | 4 | required_providers { 5 | aws = ">= 2.52.0" 6 | template = ">= 2.1" 7 | random = ">= 2.1" 8 | } 9 | } 10 | -------------------------------------------------------------------------------- /modules/aws_auth/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 0.12.9" 3 | 4 | required_providers { 5 | aws = ">= 2.52.0" 6 | null = ">= 2.1" 7 | template = ">= 2.1" 8 | kubernetes = ">= 1.6.2" 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /modules/aws_auth/templates/worker-role.tpl: -------------------------------------------------------------------------------- 1 | - rolearn: ${instance_role_arn} 2 | username: system:node:{{EC2PrivateDNSName}} 3 | groups: 4 | - system:bootstrappers 5 | - system:nodes 6 | %{~ if platform == "windows" ~} 7 | - eks:kube-proxy-windows 8 | %{~ endif ~} 9 | 
-------------------------------------------------------------------------------- /versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 0.12.9" 3 | 4 | required_providers { 5 | aws = ">= 2.52.0" 6 | local = ">= 1.2" 7 | null = ">= 2.1" 8 | template = ">= 2.1" 9 | random = ">= 2.1" 10 | kubernetes = ">= 1.6.2" 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /examples/irsa/cluster-autoscaler-chart-values.yaml: -------------------------------------------------------------------------------- 1 | awsRegion: us-west-2 2 | 3 | rbac: 4 | create: true 5 | serviceAccountAnnotations: 6 | eks.amazonaws.com/role-arn: "arn:aws:iam:::role/cluster-autoscaler" 7 | 8 | autoDiscovery: 9 | clusterName: test-eks-irsa 10 | enabled: true 11 | -------------------------------------------------------------------------------- /modules/node_groups/data.tf: -------------------------------------------------------------------------------- 1 | data "aws_iam_policy_document" "node_groups_assume_role_policy" { 2 | statement { 3 | sid = "EKSNodeGroupAssumeRole" 4 | 5 | actions = [ 6 | "sts:AssumeRole", 7 | ] 8 | 9 | principals { 10 | type = "Service" 11 | identifiers = ["ec2.amazonaws.com"] 12 | } 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | exclude: vendor 2 | repos: 3 | - repo: https://github.com/antonbabenko/pre-commit-terraform 4 | rev: v1.25.0 5 | hooks: 6 | - id: terraform_fmt 7 | - id: terraform_docs 8 | - id: terraform_tflint 9 | - repo: https://github.com/pre-commit/pre-commit-hooks 10 | rev: v2.4.0 11 | hooks: 12 | - id: check-merge-conflict -------------------------------------------------------------------------------- /Makefile: 
-------------------------------------------------------------------------------- 1 | .PHONY: release 2 | 3 | TYPE := patch 4 | VERSION := $(shell semtag final -s $(TYPE) -o) 5 | 6 | release: 7 | git checkout master 8 | git pull origin master 9 | @echo $(VERSION) | grep "ERROR" && exit 1 || true 10 | git-chglog -o CHANGELOG.md --next-tag $(VERSION) 11 | git add CHANGELOG.md 12 | git commit -m "chore(release): Update changelog for $(VERSION)" 13 | git tag $(VERSION) 14 | git push origin master --tags -------------------------------------------------------------------------------- /modules/worker_groups/templates/userdata.sh.tpl: -------------------------------------------------------------------------------- 1 | #!/bin/bash -xe 2 | 3 | # Allow user supplied pre userdata code 4 | ${pre_userdata} 5 | 6 | # Bootstrap and join the cluster 7 | /etc/eks/bootstrap.sh --b64-cluster-ca '${cluster_auth_base64}' --apiserver-endpoint '${endpoint}' ${bootstrap_extra_args} --kubelet-extra-args "${kubelet_extra_args}" '${cluster_name}' 8 | 9 | # Allow user supplied userdata code 10 | ${additional_userdata} 11 | -------------------------------------------------------------------------------- /modules/control_plane/kubectl.tf: -------------------------------------------------------------------------------- 1 | 2 | resource "local_file" "kubeconfig" { 3 | count = var.write_kubeconfig && var.create_eks ? 1 : 0 4 | content = data.template_file.kubeconfig[0].rendered 5 | filename = substr(var.config_output_path, -1, 1) == "/" ? 
"${var.config_output_path}kubeconfig_${var.cluster_name}" : var.config_output_path 6 | directory_permission = "0750" 7 | file_permission = "0600" 8 | } 9 | -------------------------------------------------------------------------------- /modules/worker_groups/templates/userdata_windows.tpl: -------------------------------------------------------------------------------- 1 | 2 | ${pre_userdata} 3 | 4 | [string]$EKSBinDir = "$env:ProgramFiles\Amazon\EKS" 5 | [string]$EKSBootstrapScriptName = 'Start-EKSBootstrap.ps1' 6 | [string]$EKSBootstrapScriptFile = "$EKSBinDir\$EKSBootstrapScriptName" 7 | & $EKSBootstrapScriptFile -EKSClusterName ${cluster_name} -KubeletExtraArgs '${kubelet_extra_args}' 3>&1 4>&1 5>&1 6>&1 8 | $LastError = if ($?) { 0 } else { $Error[0].Exception.HResult } 9 | 10 | ${additional_userdata} 11 | 12 | -------------------------------------------------------------------------------- /examples/README.md: -------------------------------------------------------------------------------- 1 | # Examples 2 | 3 | These serve a few purposes: 4 | 5 | 1. Show developers how to use the module in a straightforward way, integrated with other community-supported Terraform modules. 6 | 2. Serve as the test infrastructure for CI on the project. 7 | 3. Provide a simple way to play with the Kubernetes cluster you create. 8 | 9 | ## IAM Permissions 10 | 11 | You can see the minimum IAM permissions required [here](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/iam-permissions.md). 12 | -------------------------------------------------------------------------------- /modules/control_plane/locals.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | cluster_security_group_id = var.cluster_create_security_group ? aws_security_group.cluster.0.id : var.cluster_security_group_id 3 | cluster_iam_role_name = var.manage_cluster_iam_resources ?
aws_iam_role.cluster.0.name : var.cluster_iam_role_name 4 | cluster_iam_role_arn = var.manage_cluster_iam_resources ? aws_iam_role.cluster.0.arn : data.aws_iam_role.custom_cluster_iam_role.0.arn 5 | kubeconfig_name = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name 6 | } 7 | -------------------------------------------------------------------------------- /modules/node_groups/outputs.tf: -------------------------------------------------------------------------------- 1 | output "node_groups" { 2 | description = "Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values" 3 | value = aws_eks_node_group.node_groups 4 | } 5 | 6 | output "aws_auth_roles" { 7 | description = "Roles for use in aws-auth ConfigMap" 8 | value = [ 9 | for k, v in local.node_groups_expanded : { 10 | instance_role_arn = lookup(v, "iam_role_arn", aws_iam_role.node_groups[0].arn) 11 | platform = "linux" 12 | } 13 | ] 14 | } 15 | -------------------------------------------------------------------------------- /.editorconfig: -------------------------------------------------------------------------------- 1 | # EditorConfig is awesome: http://EditorConfig.org 2 | # Uses editorconfig to maintain consistent coding styles 3 | 4 | # top-most EditorConfig file 5 | root = true 6 | 7 | # Unix-style newlines with a newline ending every file 8 | [*] 9 | charset = utf-8 10 | end_of_line = lf 11 | indent_size = 2 12 | indent_style = space 13 | insert_final_newline = true 14 | max_line_length = 80 15 | trim_trailing_whitespace = true 16 | 17 | [*.{tf,tfvars,hcl}] 18 | indent_size = 2 19 | indent_style = space 20 | 21 | [*.md] 22 | max_line_length = 0 23 | trim_trailing_whitespace = false 24 | 25 | [Makefile] 26 | tab_width = 2 27 | indent_style = tab 28 | 29 | [COMMIT_EDITMSG] 30 | max_line_length = 0 31 | -------------------------------------------------------------------------------- 
/.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature Request 3 | about: Suggest an idea for new functionality or a shiny new feature 4 | labels: enhancement, needs triage 5 | --- 6 | 7 | **Is your feature request related to a problem? Please describe.** 8 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] 9 | 10 | **Describe the solution you'd like** 11 | A clear and concise description of what you want to happen. 12 | 13 | **Describe alternatives you've considered** 14 | A clear and concise description of any alternative solutions or features you've considered. 15 | 16 | **Additional context** 17 | Add any other context or screenshots about the feature request here. 18 | -------------------------------------------------------------------------------- /modules/node_groups/random.tf: -------------------------------------------------------------------------------- 1 | resource "random_pet" "node_groups" { 2 | for_each = local.node_groups_expanded 3 | 4 | separator = "-" 5 | length = 2 6 | 7 | keepers = { 8 | ami_type = lookup(each.value, "ami_type", null) 9 | disk_size = lookup(each.value, "disk_size", null) 10 | instance_type = each.value["instance_type"] 11 | iam_role_arn = each.value["iam_role_arn"] 12 | 13 | key_name = each.value["key_name"] 14 | 15 | source_security_group_ids = join("|", compact( 16 | lookup(each.value, "source_security_group_ids", []) 17 | )) 18 | subnet_ids = join("|", each.value["subnets"]) 19 | node_group_name = join("-", [var.cluster_name, each.key]) 20 | } 21 | } 22 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug Report 3 | about: Create a report to help us improve a current feature 4 | labels: bug, needs triage 5 | --- 6 | 7 
| **Describe the bug** 8 | A clear and concise description of what the bug is. 9 | 10 | **To Reproduce** 11 | Steps to reproduce the behavior: 12 | 1. Go to '...' 13 | 2. Click on '....' 14 | 3. Scroll down to '....' 15 | 4. See error 16 | 17 | **Expected behavior** 18 | A clear and concise description of what you expected to happen. 19 | 20 | **Screenshots** 21 | If applicable, add screenshots to help explain your problem. 22 | 23 | **Additional context** 24 | Add any other context about the problem here such as: tool versions, OS, links to 25 | source code or resources. 26 | -------------------------------------------------------------------------------- /modules/control_plane/templates/kubeconfig.tpl: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | preferences: {} 3 | kind: Config 4 | 5 | clusters: 6 | - cluster: 7 | server: ${endpoint} 8 | certificate-authority-data: ${cluster_auth_base64} 9 | name: ${kubeconfig_name} 10 | 11 | contexts: 12 | - context: 13 | cluster: ${kubeconfig_name} 14 | user: ${kubeconfig_name} 15 | name: ${kubeconfig_name} 16 | 17 | current-context: ${kubeconfig_name} 18 | 19 | users: 20 | - name: ${kubeconfig_name} 21 | user: 22 | exec: 23 | apiVersion: client.authentication.k8s.io/v1alpha1 24 | command: ${aws_authenticator_command} 25 | args: 26 | ${aws_authenticator_command_args} 27 | ${aws_authenticator_additional_args} 28 | ${aws_authenticator_env_variables} 29 | -------------------------------------------------------------------------------- /examples/basic/outputs.tf: -------------------------------------------------------------------------------- 1 | output "cluster_endpoint" { 2 | description = "Endpoint for EKS control plane." 3 | value = module.eks.cluster_endpoint 4 | } 5 | 6 | output "cluster_security_group_id" { 7 | description = "Security group ids attached to the cluster control plane." 
8 | value = module.eks.cluster_security_group_id 9 | } 10 | 11 | output "kubectl_config" { 12 | description = "kubectl config as generated by the module." 13 | value = module.eks.kubeconfig 14 | } 15 | 16 | output "config_map_aws_auth" { 17 | description = "A kubernetes configuration to authenticate to this EKS cluster." 18 | value = module.eks.config_map_aws_auth 19 | } 20 | 21 | output "region" { 22 | description = "AWS region." 23 | value = var.region 24 | } 25 | 26 | -------------------------------------------------------------------------------- /examples/spot_instances/outputs.tf: -------------------------------------------------------------------------------- 1 | output "cluster_endpoint" { 2 | description = "Endpoint for EKS control plane." 3 | value = module.eks.cluster_endpoint 4 | } 5 | 6 | output "cluster_security_group_id" { 7 | description = "Security group ids attached to the cluster control plane." 8 | value = module.eks.cluster_security_group_id 9 | } 10 | 11 | output "kubectl_config" { 12 | description = "kubectl config as generated by the module." 13 | value = module.eks.kubeconfig 14 | } 15 | 16 | output "config_map_aws_auth" { 17 | description = "A kubernetes configuration to authenticate to this EKS cluster." 18 | value = module.eks.config_map_aws_auth 19 | } 20 | 21 | output "region" { 22 | description = "AWS region." 
23 | value = var.region 24 | } 25 | 26 | -------------------------------------------------------------------------------- /.chglog/config.yml: -------------------------------------------------------------------------------- 1 | style: github 2 | template: CHANGELOG.tpl.md 3 | info: 4 | title: CHANGELOG 5 | repository_url: https://github.com/devopsmakers/terraform-aws-eks 6 | options: 7 | commit_groups: 8 | title_maps: 9 | build: 🏭 Build 10 | chore: 🔧 Maintenance 11 | ci: 💜 Continuous Integration 12 | docs: 📘 Documentation 13 | feat: ✨ Features 14 | fix: 🐛 Bug Fixes 15 | perf: 🚀 Performance Improvements 16 | refactor: 💎 Code Refactoring 17 | revert: ◀️ Revert Change 18 | security: 🛡 Security Fix 19 | style: 🎶 Code Style 20 | test: 💚 Testing 21 | header: 22 | pattern: '^(\w*)(?:\(([\w\$\.\-\*\s]*)\))?\:\s(.*)$' 23 | pattern_maps: 24 | - Type 25 | - Scope 26 | - Subject 27 | notes: 28 | keywords: 29 | - BREAKING CHANGE 30 | -------------------------------------------------------------------------------- /modules/control_plane/irsa.tf: -------------------------------------------------------------------------------- 1 | # Enable IAM Roles for EKS Service-Accounts (IRSA). 2 | 3 | # The Root CA Thumbprint for an OpenID Connect Identity Provider is currently 4 | # Being passed as a default value which is the same for all regions and 5 | # Is valid until (Jun 28 17:39:16 2034 GMT). 6 | # https://crt.sh/?q=9E99A48A9960B14926BB7F3B02E22DA2B0AB7280 7 | # https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc_verify-thumbprint.html 8 | # https://github.com/terraform-providers/terraform-provider-aws/issues/10104 9 | 10 | resource "aws_iam_openid_connect_provider" "oidc_provider" { 11 | count = var.create_eks && var.enable_irsa ? 
1 : 0 12 | client_id_list = ["sts.amazonaws.com"] 13 | thumbprint_list = [var.eks_oidc_root_ca_thumbprint] 14 | url = flatten(concat(aws_eks_cluster.this[*].identity[*].oidc.0.issuer, [""]))[0] 15 | } 16 | -------------------------------------------------------------------------------- /examples/managed_node_groups/outputs.tf: -------------------------------------------------------------------------------- 1 | output "cluster_endpoint" { 2 | description = "Endpoint for EKS control plane." 3 | value = module.eks.cluster_endpoint 4 | } 5 | 6 | output "cluster_security_group_id" { 7 | description = "Security group ids attached to the cluster control plane." 8 | value = module.eks.cluster_security_group_id 9 | } 10 | 11 | output "kubectl_config" { 12 | description = "kubectl config as generated by the module." 13 | value = module.eks.kubeconfig 14 | } 15 | 16 | output "config_map_aws_auth" { 17 | description = "A kubernetes configuration to authenticate to this EKS cluster." 18 | value = module.eks.config_map_aws_auth 19 | } 20 | 21 | output "region" { 22 | description = "AWS region." 23 | value = var.region 24 | } 25 | 26 | output "node_groups" { 27 | description = "Outputs from node groups" 28 | value = module.eks.node_groups 29 | } 30 | -------------------------------------------------------------------------------- /modules/worker_groups/random.tf: -------------------------------------------------------------------------------- 1 | resource "random_pet" "worker_groups" { 2 | for_each = local.worker_groups_expanded 3 | 4 | separator = "-" 5 | length = 2 6 | 7 | keepers = { 8 | ami_id = coalesce(each.value["ami_id"], each.value["platform"] == "windows" ? 
local.default_ami_id_windows : local.default_ami_id_linux) 9 | root_volume_size = lookup(each.value, "root_volume_size", null) 10 | instance_type = each.value["instance_type"] 11 | 12 | override_instance_types = join("|", compact( 13 | lookup(each.value, "override_instance_types", []) 14 | )) 15 | 16 | iam_role_id = each.value["iam_role_id"] 17 | key_name = each.value["key_name"] 18 | 19 | source_security_group_ids = join("|", compact( 20 | lookup(each.value, "source_security_group_ids", []) 21 | )) 22 | 23 | subnet_ids = join("|", each.value["subnets"]) 24 | worker_group_name = join("-", [var.cluster_name, each.key]) 25 | } 26 | } 27 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # IDE Files 2 | .idea 3 | *.iml 4 | *.swp 5 | .DS_Store 6 | 7 | 8 | # Local .terraform and .terragrunt-cache directories 9 | **/.terraform/* 10 | **/.terragrunt-cache/* 11 | 12 | # .tfstate files 13 | *.tfstate 14 | *.tfstate.* 15 | 16 | # Crash log files 17 | crash.log 18 | 19 | # Ignore any .tfvars files that are generated automatically for each Terraform run. Most 20 | # .tfvars files are managed as part of configuration and so should be included in 21 | # version control. 
22 | # 23 | # example.tfvars 24 | 25 | # Ignore override files as they are usually used to override resources locally and so 26 | # are not checked in 27 | override.tf 28 | override.tf.json 29 | *_override.tf 30 | *_override.tf.json 31 | 32 | # Include override files you do wish to add to version control using negated pattern 33 | # 34 | # !example_override.tf 35 | 36 | # Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan 37 | # example: *tfplan* 38 | 39 | kubeconfig_* 40 | -------------------------------------------------------------------------------- /modules/node_groups/locals.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | node_group_defaults = { 3 | iam_role_arn = concat(aws_iam_role.node_groups.*.arn, [""])[0] 4 | instance_type = "m4.large" # Size of the node group instances. 5 | desired_capacity = "1" # Desired node group capacity in the autoscaling group. Note: Ignored on change. Hint: Use the Cluster Autoscaler. 6 | max_capacity = "3" # Maximum node group capacity in the autoscaling group. 7 | min_capacity = "1" # Minimum node group capacity in the autoscaling group. Note: Should be <= desired_capacity 8 | key_name = "" # The key name that should be used for the instances in the autoscaling group 9 | subnets = var.subnets # A list of subnets to place the nodes in. e.g. ["subnet-123", "subnet-456", "subnet-789"] 10 | } 11 | 12 | # Merge defaults and per-group values to make code cleaner 13 | node_groups_expanded = { for k, v in var.node_groups : k => merge( 14 | local.node_group_defaults, 15 | var.node_groups_defaults, 16 | v, 17 | ) if var.create_eks } 18 | } 19 | -------------------------------------------------------------------------------- /modules/aws_auth/aws_auth.tf: -------------------------------------------------------------------------------- 1 | data "template_file" "map_instances" { 2 | count = var.create_eks ?
length(var.map_instances) : 0 3 | template = file("${path.module}/templates/worker-role.tpl") 4 | 5 | vars = var.map_instances[count.index] 6 | } 7 | 8 | data "aws_eks_cluster" "this" { 9 | name = var.cluster_name 10 | } 11 | 12 | resource "null_resource" "wait_for_cluster" { 13 | count = var.create_eks && var.manage_aws_auth ? 1 : 0 14 | 15 | provisioner "local-exec" { 16 | environment = { 17 | ENDPOINT = data.aws_eks_cluster.this.endpoint 18 | } 19 | 20 | command = var.wait_for_cluster_cmd 21 | } 22 | } 23 | 24 | resource "kubernetes_config_map" "aws_auth" { 25 | count = var.create_eks && var.manage_aws_auth ? 1 : 0 26 | 27 | depends_on = [ 28 | null_resource.wait_for_cluster[0] 29 | ] 30 | 31 | metadata { 32 | name = "aws-auth" 33 | namespace = "kube-system" 34 | } 35 | 36 | data = { 37 | mapRoles = < 3 | ## [Unreleased] 4 | {{ if .Unreleased.CommitGroups -}} 5 | {{ range .Unreleased.CommitGroups -}} 6 | ### {{ .Title }} 7 | {{ range .Commits -}} 8 | - {{ if .Scope }}**{{ .Scope }}:** {{ end }}[{{ .Hash.Short }}]({{ $.Info.RepositoryURL }}/commit/{{ .Hash.Short }}) {{ .Subject }} 9 | {{ end -}} 10 | {{ end }} 11 | {{ end -}} 12 | {{ range .Versions }} 13 | 14 | ## {{ if .Tag.Previous }}[{{ .Tag.Name }}]{{ else }}{{ .Tag.Name }}{{ end }} - {{ datetime "2006-01-02" .Tag.Date }} 15 | {{ if .CommitGroups -}} 16 | {{ range .CommitGroups -}} 17 | ### {{ .Title }} 18 | {{ range .Commits -}} 19 | - {{ if .Scope }}**{{ .Scope }}:** {{ end }}[{{ .Hash.Short }}]({{ $.Info.RepositoryURL }}/commit/{{ .Hash.Short }}) {{ .Subject }} 20 | {{ end }} 21 | {{ end -}} 22 | {{ end -}} 23 | 24 | {{- if .NoteGroups }} 25 | {{ range .NoteGroups -}} 26 | ### {{ .Title }} 27 | {{ range .Notes -}} 28 | {{ .Body }} 29 | {{ end }} 30 | {{ end -}} 31 | {{ end -}} 32 | {{ end -}} 33 | 34 | {{- if .Versions }} 35 | [Unreleased]: {{ .Info.RepositoryURL }}/compare/{{ $latest := index .Versions 0 }}{{ $latest.Tag.Name }}...HEAD 36 | {{ range .Versions -}} 37 | {{ if .Tag.Previous -}} 38 | [{{ 
.Tag.Name }}]: {{ $.Info.RepositoryURL }}/compare/{{ .Tag.Previous.Name }}...{{ .Tag.Name }} 39 | {{ end -}} 40 | {{ end -}} 41 | {{ end -}} 42 | -------------------------------------------------------------------------------- /modules/aws_auth/variables.tf: -------------------------------------------------------------------------------- 1 | variable "create_eks" { 2 | description = "Controls if EKS resources should be created (it affects almost all resources)." 3 | type = bool 4 | default = true 5 | } 6 | 7 | variable "cluster_name" { 8 | description = "Name of the EKS cluster." 9 | type = string 10 | } 11 | 12 | variable "manage_aws_auth" { 13 | description = "Whether to apply the aws-auth configmap file." 14 | default = true 15 | } 16 | 17 | variable "map_accounts" { 18 | description = "Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format." 19 | type = list(string) 20 | default = [] 21 | } 22 | 23 | variable "map_instances" { 24 | description = "IAM instance roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format." 25 | type = list(object({ 26 | instance_role_arn = string 27 | platform = string 28 | })) 29 | default = [] 30 | } 31 | 32 | variable "map_roles" { 33 | description = "Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format." 34 | type = list(object({ 35 | rolearn = string 36 | username = string 37 | groups = list(string) 38 | })) 39 | default = [] 40 | } 41 | 42 | variable "map_users" { 43 | description = "Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format." 44 | type = list(object({ 45 | userarn = string 46 | username = string 47 | groups = list(string) 48 | })) 49 | default = [] 50 | } 51 | 52 | variable "wait_for_cluster_cmd" { 53 | description = "Custom local-exec command to execute for determining if the eks cluster is healthy. 
Cluster endpoint will be available as an environment variable called ENDPOINT" 54 | type = string 55 | default = "until wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null; do sleep 4; done" 56 | } 57 | -------------------------------------------------------------------------------- /examples/irsa/irsa.tf: -------------------------------------------------------------------------------- 1 | module "iam_assumable_role_admin" { 2 | source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc" 3 | version = "~> v2.6.0" 4 | create_role = true 5 | role_name = "cluster-autoscaler" 6 | provider_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "") 7 | role_policy_arns = [aws_iam_policy.cluster_autoscaler.arn] 8 | oidc_fully_qualified_subjects = ["system:serviceaccount:${local.k8s_service_account_namespace}:${local.k8s_service_account_name}"] 9 | } 10 | 11 | resource "aws_iam_policy" "cluster_autoscaler" { 12 | name_prefix = "cluster-autoscaler" 13 | description = "EKS cluster-autoscaler policy for cluster ${module.eks.cluster_id}" 14 | policy = data.aws_iam_policy_document.cluster_autoscaler.json 15 | } 16 | 17 | data "aws_iam_policy_document" "cluster_autoscaler" { 18 | statement { 19 | sid = "clusterAutoscalerAll" 20 | effect = "Allow" 21 | 22 | actions = [ 23 | "autoscaling:DescribeAutoScalingGroups", 24 | "autoscaling:DescribeAutoScalingInstances", 25 | "autoscaling:DescribeLaunchConfigurations", 26 | "autoscaling:DescribeTags", 27 | "ec2:DescribeLaunchTemplateVersions", 28 | ] 29 | 30 | resources = ["*"] 31 | } 32 | 33 | statement { 34 | sid = "clusterAutoscalerOwn" 35 | effect = "Allow" 36 | 37 | actions = [ 38 | "autoscaling:SetDesiredCapacity", 39 | "autoscaling:TerminateInstanceInAutoScalingGroup", 40 | "autoscaling:UpdateAutoScalingGroup", 41 | ] 42 | 43 | resources = ["*"] 44 | 45 | condition { 46 | test = "StringEquals" 47 | variable = "autoscaling:ResourceTag/kubernetes.io/cluster/${module.eks.cluster_id}" 48 | values 
= ["owned"] 49 | } 50 | 51 | condition { 52 | test = "StringEquals" 53 | variable = "autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled" 54 | values = ["true"] 55 | } 56 | } 57 | } 58 | -------------------------------------------------------------------------------- /modules/aws_auth/README.md: -------------------------------------------------------------------------------- 1 | # eks `aws_auth` submodule 2 | 3 | 4 | ## Providers 5 | 6 | | Name | Version | 7 | |------|---------| 8 | | aws | >= 2.52.0 | 9 | | kubernetes | >= 1.6.2 | 10 | | null | >= 2.1 | 11 | | template | >= 2.1 | 12 | 13 | ## Inputs 14 | 15 | | Name | Description | Type | Default | Required | 16 | |------|-------------|------|---------|:-----:| 17 | | cluster\_name | Name of the EKS cluster. | `string` | n/a | yes | 18 | | create\_eks | Controls if EKS resources should be created (it affects almost all resources). | `bool` | `true` | no | 19 | | manage\_aws\_auth | Whether to apply the aws-auth configmap file. | `bool` | `true` | no | 20 | | map\_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | `list(string)` | `[]` | no | 21 | | map\_instances | IAM instance roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. |
`list(object({ instance_role_arn = string, platform = string }))` | `[]` | no | 22 | | map\_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | `list(object({ rolearn = string, username = string, groups = list(string) }))` | `[]` | no | 23 | | map\_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | `list(object({ userarn = string, username = string, groups = list(string) }))`
| `[]` | no | 24 | | wait\_for\_cluster\_cmd | Custom local-exec command to execute for determining if the eks cluster is healthy. Cluster endpoint will be available as an environment variable called ENDPOINT | `string` | `"until wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null; do sleep 4; done"` | no | 25 | 26 | ## Outputs 27 | 28 | | Name | Description | 29 | |------|-------------| 30 | | config\_map\_aws\_auth | A kubernetes configuration to authenticate to this EKS cluster. | 31 | 32 | 33 | -------------------------------------------------------------------------------- /modules/control_plane/data.tf: -------------------------------------------------------------------------------- 1 | data "aws_iam_policy_document" "cluster_assume_role_policy" { 2 | statement { 3 | sid = "EKSClusterAssumeRole" 4 | 5 | actions = [ 6 | "sts:AssumeRole", 7 | ] 8 | 9 | principals { 10 | type = "Service" 11 | identifiers = ["eks.amazonaws.com"] 12 | } 13 | } 14 | } 15 | 16 | data "template_file" "kubeconfig" { 17 | count = var.create_eks ? 1 : 0 18 | template = file("${path.module}/templates/kubeconfig.tpl") 19 | 20 | vars = { 21 | kubeconfig_name = local.kubeconfig_name 22 | endpoint = aws_eks_cluster.this[0].endpoint 23 | cluster_auth_base64 = aws_eks_cluster.this[0].certificate_authority[0].data 24 | aws_authenticator_command = var.kubeconfig_aws_authenticator_command 25 | aws_authenticator_command_args = length(var.kubeconfig_aws_authenticator_command_args) > 0 ? " - ${join( 26 | "\n - ", 27 | var.kubeconfig_aws_authenticator_command_args, 28 | )}" : " - ${join( 29 | "\n - ", 30 | formatlist("\"%s\"", ["token", "-i", aws_eks_cluster.this[0].name]), 31 | )}" 32 | aws_authenticator_additional_args = length(var.kubeconfig_aws_authenticator_additional_args) > 0 ? " - ${join( 33 | "\n - ", 34 | var.kubeconfig_aws_authenticator_additional_args, 35 | )}" : "" 36 | aws_authenticator_env_variables = length(var.kubeconfig_aws_authenticator_env_variables) > 0 ?
" env:\n${join( 37 | "\n", 38 | data.template_file.aws_authenticator_env_variables.*.rendered, 39 | )}" : "" 40 | } 41 | } 42 | 43 | data "template_file" "aws_authenticator_env_variables" { 44 | count = length(var.kubeconfig_aws_authenticator_env_variables) 45 | 46 | template = <= 1.14 ? data.aws_eks_cluster.this.version : 1.14}-*" 7 | ) 8 | } 9 | 10 | data "aws_eks_cluster" "this" { 11 | name = var.cluster_name 12 | } 13 | 14 | data "aws_iam_policy_document" "workers_assume_role_policy" { 15 | statement { 16 | sid = "EKSWorkerAssumeRole" 17 | 18 | actions = [ 19 | "sts:AssumeRole", 20 | ] 21 | 22 | principals { 23 | type = "Service" 24 | identifiers = ["ec2.amazonaws.com"] 25 | } 26 | } 27 | } 28 | 29 | data "aws_ami" "eks_worker" { 30 | filter { 31 | name = "name" 32 | values = [local.worker_ami_name_filter] 33 | } 34 | 35 | most_recent = true 36 | 37 | owners = [var.worker_ami_owner_id] 38 | } 39 | 40 | data "aws_ami" "eks_worker_windows" { 41 | filter { 42 | name = "name" 43 | values = [local.worker_ami_name_filter_windows] 44 | } 45 | 46 | filter { 47 | name = "platform" 48 | values = ["windows"] 49 | } 50 | 51 | most_recent = true 52 | 53 | owners = [var.worker_ami_owner_id_windows] 54 | } 55 | 56 | data "template_file" "launch_template_userdata" { 57 | for_each = local.worker_groups_expanded 58 | 59 | template = coalesce( 60 | each.value["userdata_template_file"], 61 | file( 62 | each.value["platform"] == "windows" 63 | ? 
"${path.module}/templates/userdata_windows.tpl" 64 | : "${path.module}/templates/userdata.sh.tpl" 65 | ) 66 | ) 67 | 68 | vars = merge({ 69 | platform = each.value["platform"] 70 | cluster_name = var.cluster_name 71 | endpoint = data.aws_eks_cluster.this.endpoint 72 | cluster_auth_base64 = data.aws_eks_cluster.this.certificate_authority.0.data 73 | pre_userdata = each.value["pre_userdata"] 74 | additional_userdata = each.value["additional_userdata"] 75 | bootstrap_extra_args = each.value["bootstrap_extra_args"] 76 | kubelet_extra_args = each.value["kubelet_extra_args"] 77 | }, 78 | each.value["userdata_template_extra_args"] 79 | ) 80 | } 81 | 82 | data "aws_iam_instance_profile" "custom_worker_group_launch_template_iam_instance_profile" { 83 | for_each = var.manage_worker_iam_resources ? {} : local.worker_groups_expanded 84 | 85 | name = each.value["iam_instance_profile_name"] 86 | } 87 | 88 | data "aws_region" "current" {} 89 | -------------------------------------------------------------------------------- /modules/worker_groups/outputs.tf: -------------------------------------------------------------------------------- 1 | output "aws_auth_roles" { 2 | description = "Roles for use in aws-auth ConfigMap" 3 | value = [ 4 | for k, v in local.worker_groups_expanded : { 5 | instance_role_arn = lookup(v, "iam_role_arn", try(aws_iam_role.worker_groups[0].arn, "")) 6 | platform = v["platform"] 7 | } 8 | ] 9 | } 10 | 11 | output "workers_asg_arns" { 12 | description = "ARNs of the autoscaling groups containing workers." 13 | value = values(aws_autoscaling_group.worker_groups).*.arn 14 | } 15 | 16 | output "workers_asg_names" { 17 | description = "Names of the autoscaling groups containing workers."
18 | value = values(aws_autoscaling_group.worker_groups).*.id 19 | } 20 | 21 | output "workers_user_data" { 22 | description = "User data of worker groups" 23 | value = values(data.template_file.launch_template_userdata).*.rendered 24 | } 25 | 26 | output "workers_default_ami_id" { 27 | description = "ID of the default worker group AMI" 28 | value = data.aws_ami.eks_worker.id 29 | } 30 | 31 | output "workers_launch_template_ids" { 32 | description = "IDs of the worker launch templates." 33 | value = values(aws_launch_template.worker_groups).*.id 34 | } 35 | 36 | output "workers_launch_template_arns" { 37 | description = "ARNs of the worker launch templates." 38 | value = values(aws_launch_template.worker_groups).*.arn 39 | } 40 | 41 | output "workers_launch_template_latest_versions" { 42 | description = "Latest versions of the worker launch templates." 43 | value = values(aws_launch_template.worker_groups).*.latest_version 44 | } 45 | 46 | output "worker_security_group_id" { 47 | description = "Security group ID attached to the EKS workers." 
48 | value = local.worker_security_group_id 49 | } 50 | 51 | output "worker_iam_instance_profile_arns" { 52 | description = "default IAM instance profile ARN for EKS worker groups" 53 | value = values(aws_iam_instance_profile.worker_groups).*.arn 54 | } 55 | 56 | output "worker_iam_instance_profile_names" { 57 | description = "default IAM instance profile name for EKS worker groups" 58 | value = values(aws_iam_instance_profile.worker_groups).*.name 59 | } 60 | 61 | output "worker_iam_role_name" { 62 | description = "default IAM role name for EKS worker groups" 63 | value = coalescelist( 64 | aws_iam_role.worker_groups.*.name, 65 | values(data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile).*.role_name, 66 | [""] 67 | )[0] 68 | } 69 | 70 | output "worker_iam_role_arn" { 71 | description = "default IAM role ARN for EKS worker groups" 72 | value = coalescelist( 73 | aws_iam_role.worker_groups.*.arn, 74 | values(data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile).*.role_arn, 75 | [""] 76 | )[0] 77 | } 78 | -------------------------------------------------------------------------------- /examples/irsa/README.md: -------------------------------------------------------------------------------- 1 | # IAM Roles for Service Accounts 2 | 3 | This example shows how to create an IAM role to be used for a Kubernetes `ServiceAccount`. It will create a policy and role to be used by the [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) using the [public Helm chart](https://github.com/helm/charts/tree/master/stable/cluster-autoscaler). 
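The chart values file used below binds the IAM role to the autoscaler's `ServiceAccount` via an annotation. As a minimal sketch of how that IRSA binding works, the equivalent Terraform resource would look like the following (this example manages the annotation through the Helm chart values, not through Terraform; the IAM module output name is an assumption based on its v2.x interface):

```hcl
# Sketch only: shown to illustrate how IRSA binds an IAM role to a
# Kubernetes ServiceAccount. In this example the annotation is set by the
# Helm chart values file instead.
resource "kubernetes_service_account" "cluster_autoscaler" {
  metadata {
    name      = "cluster-autoscaler"
    namespace = "kube-system"

    annotations = {
      # EKS's pod identity webhook injects AWS_ROLE_ARN and
      # AWS_WEB_IDENTITY_TOKEN_FILE into pods using this ServiceAccount.
      "eks.amazonaws.com/role-arn" = module.iam_assumable_role_admin.this_iam_role_arn
    }
  }
}
```

Pods that run under an annotated `ServiceAccount` can then exchange the projected web identity token for temporary credentials scoped to that role.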
4 | 5 | The AWS documentation for IRSA is here: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html 6 | 7 | ## Setup 8 | 9 | Run Terraform: 10 | 11 | ``` 12 | terraform init 13 | terraform apply 14 | ``` 15 | 16 | Set the kubectl context to the new cluster: `export KUBECONFIG=kubeconfig_test-eks-irsa` 17 | 18 | Check that there is a node that is `Ready`: 19 | 20 | ``` 21 | $ kubectl get nodes 22 | NAME STATUS ROLES AGE VERSION 23 | ip-10-0-2-190.us-west-2.compute.internal Ready <none> 6m39s v1.14.8-eks-b8860f 24 | ``` 25 | 26 | Replace the account ID placeholder with your AWS account ID in `cluster-autoscaler-chart-values.yaml`; the Terraform output includes this value. 27 | 28 | Install the chart using the provided values file: 29 | 30 | ``` 31 | helm install --name cluster-autoscaler --namespace kube-system stable/cluster-autoscaler --values=cluster-autoscaler-chart-values.yaml 32 | ``` 33 | 34 | ## Verify 35 | 36 | Ensure the cluster-autoscaler pod is running: 37 | 38 | ``` 39 | $ kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=aws-cluster-autoscaler" 40 | NAME READY STATUS RESTARTS AGE 41 | cluster-autoscaler-aws-cluster-autoscaler-5545d4b97-9ztpm 1/1 Running 0 3m 42 | ``` 43 | 44 | Observe the `AWS_*` environment variables that EKS added to the pod automatically: 45 | 46 | ``` 47 | kubectl --namespace=kube-system get pods -l "app.kubernetes.io/name=aws-cluster-autoscaler" -o yaml | grep -A3 AWS_ROLE_ARN 48 | 49 | - name: AWS_ROLE_ARN 50 | value: arn:aws:iam::xxxxxxxxx:role/cluster-autoscaler 51 | - name: AWS_WEB_IDENTITY_TOKEN_FILE 52 | value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token 53 | ``` 54 | 55 | Verify it is working by checking the logs; you should see that it has discovered the autoscaling group successfully: 56 | 57 | ``` 58 | kubectl --namespace=kube-system logs -l "app.kubernetes.io/name=aws-cluster-autoscaler" 59 | 60 | I0128 14:59:00.901513 1 auto_scaling_groups.go:354] Regenerating instance to ASG map for
ASGs: [test-eks-irsa-worker-group-12020012814125354700000000e] 61 | I0128 14:59:00.969875 1 auto_scaling_groups.go:138] Registering ASG test-eks-irsa-worker-group-12020012814125354700000000e 62 | I0128 14:59:00.969906 1 aws_manager.go:263] Refreshed ASG list, next refresh after 2020-01-28 15:00:00.969901767 +0000 UTC m=+61.310501783 63 | ``` 64 | -------------------------------------------------------------------------------- /examples/managed_node_groups/main.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 0.12.6" 3 | } 4 | 5 | provider "aws" { 6 | version = ">= 2.28.1" 7 | region = var.region 8 | } 9 | 10 | provider "random" { 11 | version = "~> 2.1" 12 | } 13 | 14 | provider "local" { 15 | version = "~> 1.2" 16 | } 17 | 18 | provider "null" { 19 | version = "~> 2.1" 20 | } 21 | 22 | provider "template" { 23 | version = "~> 2.1" 24 | } 25 | 26 | data "aws_eks_cluster" "cluster" { 27 | name = module.eks.cluster_id 28 | } 29 | 30 | data "aws_eks_cluster_auth" "cluster" { 31 | name = module.eks.cluster_id 32 | } 33 | 34 | provider "kubernetes" { 35 | host = data.aws_eks_cluster.cluster.endpoint 36 | cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) 37 | token = data.aws_eks_cluster_auth.cluster.token 38 | load_config_file = false 39 | version = "1.10" 40 | } 41 | 42 | data "aws_availability_zones" "available" { 43 | } 44 | 45 | locals { 46 | cluster_name = "test-eks-${random_string.suffix.result}" 47 | } 48 | 49 | resource "random_string" "suffix" { 50 | length = 8 51 | special = false 52 | } 53 | 54 | module "vpc" { 55 | source = "terraform-aws-modules/vpc/aws" 56 | version = "~> 2.6" 57 | 58 | name = "test-vpc" 59 | cidr = "172.16.0.0/16" 60 | azs = data.aws_availability_zones.available.names 61 | private_subnets = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"] 62 | public_subnets = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"] 
63 | enable_nat_gateway = true 64 | single_nat_gateway = true 65 | enable_dns_hostnames = true 66 | 67 | tags = { 68 | "kubernetes.io/cluster/${local.cluster_name}" = "shared" 69 | } 70 | 71 | public_subnet_tags = { 72 | "kubernetes.io/cluster/${local.cluster_name}" = "shared" 73 | "kubernetes.io/role/elb" = "1" 74 | } 75 | 76 | private_subnet_tags = { 77 | "kubernetes.io/cluster/${local.cluster_name}" = "shared" 78 | "kubernetes.io/role/internal-elb" = "1" 79 | } 80 | } 81 | 82 | module "eks" { 83 | source = "../.." 84 | cluster_name = local.cluster_name 85 | subnets = module.vpc.private_subnets 86 | 87 | tags = { 88 | Environment = "test" 89 | GithubRepo = "terraform-aws-eks" 90 | GithubOrg = "terraform-aws-modules" 91 | } 92 | 93 | vpc_id = module.vpc.vpc_id 94 | 95 | node_groups_defaults = { 96 | ami_type = "AL2_x86_64" 97 | disk_size = 50 98 | } 99 | 100 | node_groups = { 101 | example = { 102 | desired_capacity = 1 103 | max_capacity = 10 104 | min_capacity = 1 105 | 106 | instance_type = "m5.large" 107 | k8s_labels = { 108 | Environment = "test" 109 | GithubRepo = "terraform-aws-eks" 110 | GithubOrg = "terraform-aws-modules" 111 | } 112 | additional_tags = { 113 | ExtraTag = "example" 114 | } 115 | } 116 | } 117 | 118 | map_roles = var.map_roles 119 | map_users = var.map_users 120 | map_accounts = var.map_accounts 121 | } 122 | -------------------------------------------------------------------------------- /modules/control_plane/cluster.tf: -------------------------------------------------------------------------------- 1 | resource "aws_cloudwatch_log_group" "this" { 2 | count = length(var.cluster_enabled_log_types) > 0 && var.create_eks ? 1 : 0 3 | name = "/aws/eks/${var.cluster_name}/cluster" 4 | retention_in_days = var.cluster_log_retention_in_days 5 | kms_key_id = var.cluster_log_kms_key_id 6 | tags = var.tags 7 | } 8 | 9 | resource "aws_eks_cluster" "this" { 10 | count = var.create_eks ? 
1 : 0 11 | name = var.cluster_name 12 | enabled_cluster_log_types = var.cluster_enabled_log_types 13 | role_arn = local.cluster_iam_role_arn 14 | version = var.cluster_version 15 | tags = var.tags 16 | 17 | vpc_config { 18 | security_group_ids = [local.cluster_security_group_id] 19 | subnet_ids = var.subnets 20 | endpoint_private_access = var.cluster_endpoint_private_access 21 | endpoint_public_access = var.cluster_endpoint_public_access 22 | public_access_cidrs = var.cluster_endpoint_public_access_cidrs 23 | } 24 | 25 | dynamic encryption_config { 26 | for_each = toset(var.cluster_encryption_key_arn != "" ? ["encryption_enabled"] : []) 27 | 28 | content { 29 | provider { 30 | key_arn = var.cluster_encryption_key_arn 31 | } 32 | resources = var.cluster_encryption_resources 33 | } 34 | } 35 | 36 | timeouts { 37 | create = var.cluster_create_timeout 38 | delete = var.cluster_delete_timeout 39 | } 40 | 41 | depends_on = [ 42 | aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy, 43 | aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy, 44 | aws_cloudwatch_log_group.this 45 | ] 46 | } 47 | 48 | resource "aws_security_group" "cluster" { 49 | count = var.cluster_create_security_group && var.create_eks ? 1 : 0 50 | name_prefix = var.cluster_name 51 | description = "EKS cluster security group." 52 | vpc_id = var.vpc_id 53 | tags = merge( 54 | var.tags, 55 | { 56 | "Name" = "${var.cluster_name}-eks_cluster_sg" 57 | }, 58 | ) 59 | } 60 | 61 | resource "aws_security_group_rule" "cluster_egress_internet" { 62 | count = var.cluster_create_security_group && var.create_eks ? 1 : 0 63 | description = "Allow cluster egress access to the Internet." 64 | protocol = "-1" 65 | security_group_id = local.cluster_security_group_id 66 | cidr_blocks = ["0.0.0.0/0"] 67 | from_port = 0 68 | to_port = 0 69 | type = "egress" 70 | } 71 | 72 | resource "aws_iam_role" "cluster" { 73 | count = var.manage_cluster_iam_resources && var.create_eks ? 
1 : 0 74 | name_prefix = var.cluster_name 75 | assume_role_policy = data.aws_iam_policy_document.cluster_assume_role_policy.json 76 | permissions_boundary = var.permissions_boundary 77 | path = var.iam_path 78 | force_detach_policies = true 79 | tags = var.tags 80 | } 81 | 82 | resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" { 83 | count = var.manage_cluster_iam_resources && var.create_eks ? 1 : 0 84 | policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy" 85 | role = local.cluster_iam_role_name 86 | } 87 | 88 | resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSServicePolicy" { 89 | count = var.manage_cluster_iam_resources && var.create_eks ? 1 : 0 90 | policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy" 91 | role = local.cluster_iam_role_name 92 | } 93 | -------------------------------------------------------------------------------- /modules/node_groups/node_groups.tf: -------------------------------------------------------------------------------- 1 | resource "aws_eks_node_group" "node_groups" { 2 | for_each = local.node_groups_expanded 3 | 4 | node_group_name = lookup(each.value, "name", join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])) 5 | 6 | cluster_name = var.cluster_name 7 | node_role_arn = each.value["iam_role_arn"] 8 | subnet_ids = each.value["subnets"] 9 | 10 | scaling_config { 11 | desired_size = each.value["desired_capacity"] 12 | max_size = each.value["max_capacity"] 13 | min_size = each.value["min_capacity"] 14 | } 15 | 16 | ami_type = lookup(each.value, "ami_type", null) 17 | disk_size = lookup(each.value, "disk_size", null) 18 | instance_types = [each.value["instance_type"]] 19 | release_version = lookup(each.value, "ami_release_version", null) 20 | 21 | dynamic "remote_access" { 22 | for_each = each.value["key_name"] != "" ? 
[{ 23 | ec2_ssh_key = each.value["key_name"] 24 | source_security_group_ids = lookup(each.value, "source_security_group_ids", []) 25 | }] : [] 26 | 27 | content { 28 | ec2_ssh_key = remote_access.value["ec2_ssh_key"] 29 | source_security_group_ids = remote_access.value["source_security_group_ids"] 30 | } 31 | } 32 | 33 | version = lookup(each.value, "version", null) 34 | 35 | labels = merge( 36 | lookup(var.node_groups_defaults, "k8s_labels", {}), 37 | lookup(var.node_groups[each.key], "k8s_labels", {}) 38 | ) 39 | 40 | tags = merge( 41 | var.tags, 42 | lookup(var.node_groups_defaults, "additional_tags", {}), 43 | lookup(var.node_groups[each.key], "additional_tags", {}), 44 | ) 45 | 46 | lifecycle { 47 | create_before_destroy = true 48 | ignore_changes = [scaling_config.0.desired_size] 49 | } 50 | 51 | depends_on = [ 52 | aws_iam_role_policy_attachment.nodes_AmazonEKSWorkerNodePolicy, 53 | aws_iam_role_policy_attachment.nodes_AmazonEKS_CNI_Policy, 54 | aws_iam_role_policy_attachment.nodes_AmazonEC2ContainerRegistryReadOnly, 55 | ] 56 | } 57 | 58 | resource "aws_iam_role" "node_groups" { 59 | count = var.manage_node_iam_resources && var.create_eks ? 1 : 0 60 | name_prefix = var.node_groups_role_name != "" ? null : var.cluster_name 61 | name = var.node_groups_role_name != "" ? var.node_groups_role_name : null 62 | assume_role_policy = data.aws_iam_policy_document.node_groups_assume_role_policy.json 63 | permissions_boundary = var.permissions_boundary 64 | path = var.iam_path 65 | force_detach_policies = true 66 | tags = var.tags 67 | } 68 | 69 | resource "aws_iam_role_policy_attachment" "nodes_AmazonEKSWorkerNodePolicy" { 70 | count = var.manage_node_iam_resources && var.create_eks ? 
1 : 0 71 | policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy" 72 | role = aws_iam_role.node_groups[0].name 73 | } 74 | 75 | resource "aws_iam_role_policy_attachment" "nodes_AmazonEKS_CNI_Policy" { 76 | count = var.manage_node_iam_resources && var.attach_node_cni_policy && var.create_eks ? 1 : 0 77 | policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy" 78 | role = aws_iam_role.node_groups[0].name 79 | } 80 | 81 | resource "aws_iam_role_policy_attachment" "nodes_AmazonEC2ContainerRegistryReadOnly" { 82 | count = var.manage_node_iam_resources && var.create_eks ? 1 : 0 83 | policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly" 84 | role = aws_iam_role.node_groups[0].name 85 | } 86 | 87 | resource "aws_iam_role_policy_attachment" "nodes_additional_policies" { 88 | count = var.manage_node_iam_resources && var.create_eks ? length(var.node_groups_additional_policies) : 0 89 | role = aws_iam_role.node_groups[0].name 90 | policy_arn = var.node_groups_additional_policies[count.index] 91 | } 92 | -------------------------------------------------------------------------------- /modules/node_groups/README.md: -------------------------------------------------------------------------------- 1 | # eks `node_groups` submodule 2 | 3 | This submodule is designed for use by both the parent `eks` module and by the user. 4 | 5 | 6 | `node_groups` is a map of maps. Key of first level will be used as unique value for `for_each` resources and in the `aws_eks_node_group` name. Inner map can take the below values. 7 | 8 | | Name | Description | Type | If unset | 9 | |------|-------------|:----:|:-----:| 10 | | additional\_tags | Additional tags to apply to node group | `map(string)` | Only `var.tags` applied | 11 | | ami\_release\_version | AMI version of workers | `string` | Provider default behavior | 12 | | ami\_type | AMI Type. 
See Terraform or AWS docs | `string` | Provider default behavior | 13 | | desired\_capacity | Desired number of workers | `number` | `1` | 14 | | disk\_size | Workers' disk size | `number` | Provider default behavior | 15 | | iam\_role\_arn | IAM role ARN for workers | `string` | `aws_iam_role.node_groups[0].arn` | 16 | | instance\_type | Workers' instance type | `string` | `m4.large` | 17 | | k8s\_labels | Kubernetes labels | `map(string)` | No labels applied | 18 | | key\_name | Key name for workers. Set to empty string to disable remote access | `string` | `""` | 19 | | max\_capacity | Max number of workers | `number` | `3` | 20 | | min\_capacity | Min number of workers | `number` | `1` | 21 | | name | Name of the node group | string | Auto generated | 22 | | source\_security\_group\_ids | Source security groups for remote access to workers | `list(string)` | If key\_name is specified: THE REMOTE ACCESS WILL BE OPENED TO THE WORLD | 23 | | subnets | Subnets to contain workers | `list(string)` | `var.subnets` | 24 | | version | Kubernetes version | `string` | Provider default behavior | 25 | 26 | 27 | ## Providers 28 | 29 | | Name | Version | 30 | |------|---------| 31 | | aws | >= 2.52.0 | 32 | | random | >= 2.1 | 33 | 34 | ## Inputs 35 | 36 | | Name | Description | Type | Default | Required | 37 | |------|-------------|------|---------|:-----:| 38 | | attach\_node\_cni\_policy | Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default node groups IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or nodes will not be able to join the cluster. | `bool` | `true` | no | 39 | | cluster\_name | Name of parent cluster. | `string` | n/a | yes | 40 | | create\_eks | Controls if EKS resources should be created (it affects almost all resources). | `bool` | `true` | no | 41 | | iam\_path | If provided, all IAM roles will be created on this path. 
| `string` | `"/"` | no | 42 | | manage\_node\_iam\_resources | Whether to let the module manage node group IAM resources. If set to false, iam\_role\_arn must be specified for nodes. | `bool` | `true` | no | 43 | | node\_groups | Map of maps of node groups to create. See documentation above for more details. | `any` | `{}` | no | 44 | | node\_groups\_additional\_policies | Additional policies to be added to node groups. | `list(string)` | `[]` | no | 45 | | node\_groups\_defaults | Map of values to be applied to all node groups. See documentation above for more details. | `any` | `{}` | no | 46 | | node\_groups\_role\_name | User defined node groups role name. | `string` | `""` | no | 47 | | permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | `string` | n/a | yes | 48 | | subnets | A list of subnets to place the EKS cluster and nodes within. | `list(string)` | n/a | yes | 49 | | tags | A map of tags to add to all resources. | `map(string)` | n/a | yes | 50 | 51 | ## Outputs 52 | 53 | | Name | Description | 54 | |------|-------------| 55 | | aws\_auth\_roles | Roles for use in aws-auth ConfigMap | 56 | | node\_groups | Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys.
See `aws_eks_node_group` Terraform documentation for values | 57 | 58 | 59 | -------------------------------------------------------------------------------- /examples/basic/main.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 0.12.0" 3 | } 4 | 5 | provider "aws" { 6 | version = ">= 2.28.1" 7 | region = var.region 8 | } 9 | 10 | provider "random" { 11 | version = "~> 2.1" 12 | } 13 | 14 | provider "local" { 15 | version = "~> 1.2" 16 | } 17 | 18 | provider "null" { 19 | version = "~> 2.1" 20 | } 21 | 22 | provider "template" { 23 | version = "~> 2.1" 24 | } 25 | 26 | data "aws_eks_cluster" "cluster" { 27 | name = module.eks.cluster_id 28 | } 29 | 30 | data "aws_eks_cluster_auth" "cluster" { 31 | name = module.eks.cluster_id 32 | } 33 | 34 | provider "kubernetes" { 35 | host = data.aws_eks_cluster.cluster.endpoint 36 | cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) 37 | token = data.aws_eks_cluster_auth.cluster.token 38 | load_config_file = false 39 | version = "1.10" 40 | } 41 | 42 | data "aws_availability_zones" "available" { 43 | } 44 | 45 | locals { 46 | cluster_name = "test-eks-${random_string.suffix.result}" 47 | } 48 | 49 | resource "random_string" "suffix" { 50 | length = 8 51 | special = false 52 | } 53 | 54 | resource "aws_security_group" "worker_group_mgmt_one" { 55 | name_prefix = "worker_group_mgmt_one" 56 | vpc_id = module.vpc.vpc_id 57 | 58 | ingress { 59 | from_port = 22 60 | to_port = 22 61 | protocol = "tcp" 62 | 63 | cidr_blocks = [ 64 | "10.0.0.0/8", 65 | ] 66 | } 67 | } 68 | 69 | resource "aws_security_group" "worker_group_mgmt_two" { 70 | name_prefix = "worker_group_mgmt_two" 71 | vpc_id = module.vpc.vpc_id 72 | 73 | ingress { 74 | from_port = 22 75 | to_port = 22 76 | protocol = "tcp" 77 | 78 | cidr_blocks = [ 79 | "192.168.0.0/16", 80 | ] 81 | } 82 | } 83 | 84 | resource "aws_security_group" "all_worker_mgmt" { 85 
| name_prefix = "all_worker_management" 86 | vpc_id = module.vpc.vpc_id 87 | 88 | ingress { 89 | from_port = 22 90 | to_port = 22 91 | protocol = "tcp" 92 | 93 | cidr_blocks = [ 94 | "10.0.0.0/8", 95 | "172.16.0.0/12", 96 | "192.168.0.0/16", 97 | ] 98 | } 99 | } 100 | 101 | module "vpc" { 102 | source = "terraform-aws-modules/vpc/aws" 103 | version = "2.6.0" 104 | 105 | name = "test-vpc" 106 | cidr = "10.0.0.0/16" 107 | azs = data.aws_availability_zones.available.names 108 | private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"] 109 | public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"] 110 | enable_nat_gateway = true 111 | single_nat_gateway = true 112 | enable_dns_hostnames = true 113 | 114 | tags = { 115 | "kubernetes.io/cluster/${local.cluster_name}" = "shared" 116 | } 117 | 118 | public_subnet_tags = { 119 | "kubernetes.io/cluster/${local.cluster_name}" = "shared" 120 | "kubernetes.io/role/elb" = "1" 121 | } 122 | 123 | private_subnet_tags = { 124 | "kubernetes.io/cluster/${local.cluster_name}" = "shared" 125 | "kubernetes.io/role/internal-elb" = "1" 126 | } 127 | } 128 | 129 | module "eks" { 130 | source = "../.." 
131 | cluster_name = local.cluster_name 132 | subnets = module.vpc.private_subnets 133 | 134 | tags = { 135 | Environment = "test" 136 | GithubRepo = "terraform-aws-eks" 137 | GithubOrg = "terraform-aws-modules" 138 | } 139 | 140 | vpc_id = module.vpc.vpc_id 141 | 142 | worker_groups = [ 143 | { 144 | name = "worker-group-1" 145 | instance_type = "t2.small" 146 | additional_userdata = "echo foo bar" 147 | desired_capacity = 2 148 | additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id] 149 | }, 150 | { 151 | name = "worker-group-2" 152 | instance_type = "t2.medium" 153 | additional_userdata = "echo foo bar" 154 | additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id] 155 | desired_capacity = 1 156 | }, 157 | ] 158 | 159 | worker_additional_security_group_ids = [aws_security_group.all_worker_mgmt.id] 160 | map_roles = var.map_roles 161 | map_users = var.map_users 162 | map_accounts = var.map_accounts 163 | } 164 | -------------------------------------------------------------------------------- /outputs.tf: -------------------------------------------------------------------------------- 1 | output "cluster_id" { 2 | description = "The name/id of the EKS cluster." 3 | value = module.control_plane.cluster_id 4 | } 5 | 6 | output "cluster_arn" { 7 | description = "The Amazon Resource Name (ARN) of the cluster." 8 | value = module.control_plane.cluster_arn 9 | } 10 | 11 | output "cluster_certificate_authority_data" { 12 | description = "Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster." 13 | value = module.control_plane.cluster_certificate_authority_data 14 | } 15 | 16 | output "cluster_endpoint" { 17 | description = "The endpoint for your EKS Kubernetes API." 
18 | value = module.control_plane.cluster_endpoint 19 | } 20 | 21 | output "cluster_version" { 22 | description = "The Kubernetes server version for the EKS cluster." 23 | value = module.control_plane.cluster_version 24 | } 25 | 26 | output "cluster_security_group_id" { 27 | description = "Security group ID attached to the EKS cluster." 28 | value = module.control_plane.cluster_security_group_id 29 | } 30 | 31 | output "config_map_aws_auth" { 32 | description = "A kubernetes configuration to authenticate to this EKS cluster." 33 | value = module.aws_auth.config_map_aws_auth 34 | } 35 | 36 | output "cluster_iam_role_arn" { 37 | description = "IAM role ARN of the EKS cluster." 38 | value = module.control_plane.cluster_iam_role_arn 39 | } 40 | 41 | output "cluster_oidc_issuer_url" { 42 | description = "The URL of the EKS cluster OIDC issuer." 43 | value = module.control_plane.cluster_oidc_issuer_url 44 | } 45 | 46 | output "cloudwatch_log_group_name" { 47 | description = "Name of the CloudWatch log group created." 48 | value = module.control_plane.cloudwatch_log_group_name 49 | } 50 | 51 | output "kubeconfig" { 52 | description = "kubectl config file contents for this EKS cluster." 53 | value = module.control_plane.kubeconfig 54 | } 55 | 56 | output "kubeconfig_filename" { 57 | description = "The filename of the generated kubectl config." 58 | value = module.control_plane.kubeconfig_filename 59 | } 60 | 61 | output "oidc_provider_arn" { 62 | description = "The ARN of the OIDC Provider if `enable_irsa = true`." 63 | value = module.control_plane.oidc_provider_arn 64 | } 65 | 66 | output "workers_asg_arns" { 67 | description = "ARNs of the autoscaling groups containing workers." 68 | value = module.worker_groups.workers_asg_arns 69 | } 70 | 71 | output "workers_asg_names" { 72 | description = "Names of the autoscaling groups containing workers."
73 | value = module.worker_groups.workers_asg_names 74 | } 75 | 76 | output "workers_user_data" { 77 | description = "User data of worker groups" 78 | value = module.worker_groups.workers_user_data 79 | } 80 | 81 | output "workers_default_ami_id" { 82 | description = "ID of the default worker group AMI" 83 | value = module.worker_groups.workers_default_ami_id 84 | } 85 | 86 | output "workers_launch_template_ids" { 87 | description = "IDs of the worker launch templates." 88 | value = module.worker_groups.workers_launch_template_ids 89 | } 90 | 91 | output "workers_launch_template_arns" { 92 | description = "ARNs of the worker launch templates." 93 | value = module.worker_groups.workers_launch_template_arns 94 | } 95 | 96 | output "workers_launch_template_latest_versions" { 97 | description = "Latest versions of the worker launch templates." 98 | value = module.worker_groups.workers_launch_template_latest_versions 99 | } 100 | 101 | output "worker_security_group_id" { 102 | description = "Security group ID attached to the EKS workers." 103 | value = module.worker_groups.worker_security_group_id 104 | } 105 | 106 | output "worker_iam_instance_profile_arns" { 107 | description = "default IAM instance profile ARN for EKS worker groups" 108 | value = module.worker_groups.worker_iam_instance_profile_arns 109 | } 110 | 111 | output "worker_iam_instance_profile_names" { 112 | description = "default IAM instance profile name for EKS worker groups" 113 | value = module.worker_groups.worker_iam_instance_profile_names 114 | } 115 | 116 | output "worker_iam_role_name" { 117 | description = "default IAM role name for EKS worker groups" 118 | value = module.worker_groups.worker_iam_role_name 119 | } 120 | 121 | output "worker_iam_role_arn" { 122 | description = "default IAM role ARN for EKS worker groups" 123 | value = module.worker_groups.worker_iam_role_arn 124 | } 125 | 126 | output "node_groups" { 127 | description = "Outputs from EKS node groups. 
Map of maps, keyed by var.node_groups keys" 128 | value = module.node_groups.node_groups 129 | } 130 | -------------------------------------------------------------------------------- /main.tf: -------------------------------------------------------------------------------- 1 | module "control_plane" { 2 | source = "./modules/control_plane" 3 | 4 | cluster_create_security_group = var.cluster_create_security_group 5 | cluster_create_timeout = var.cluster_create_timeout 6 | cluster_delete_timeout = var.cluster_delete_timeout 7 | cluster_enabled_log_types = var.cluster_enabled_log_types 8 | cluster_encryption_key_arn = var.cluster_encryption_key_arn 9 | cluster_encryption_resources = var.cluster_encryption_resources 10 | cluster_endpoint_private_access = var.cluster_endpoint_private_access 11 | cluster_endpoint_public_access = var.cluster_endpoint_public_access 12 | cluster_endpoint_public_access_cidrs = var.cluster_endpoint_public_access_cidrs 13 | cluster_iam_role_name = var.cluster_iam_role_name 14 | cluster_log_kms_key_id = var.cluster_log_kms_key_id 15 | cluster_log_retention_in_days = var.cluster_log_retention_in_days 16 | cluster_name = var.cluster_name 17 | cluster_security_group_id = var.cluster_security_group_id 18 | cluster_version = var.cluster_version 19 | config_output_path = var.config_output_path 20 | create_eks = var.create_eks 21 | eks_oidc_root_ca_thumbprint = var.eks_oidc_root_ca_thumbprint 22 | enable_irsa = var.enable_irsa 23 | iam_path = var.iam_path 24 | kubeconfig_aws_authenticator_additional_args = var.kubeconfig_aws_authenticator_additional_args 25 | kubeconfig_aws_authenticator_command = var.kubeconfig_aws_authenticator_command 26 | kubeconfig_aws_authenticator_command_args = var.kubeconfig_aws_authenticator_command_args 27 | kubeconfig_aws_authenticator_env_variables = var.kubeconfig_aws_authenticator_env_variables 28 | kubeconfig_name = var.kubeconfig_name 29 | manage_cluster_iam_resources = var.manage_cluster_iam_resources 30 | 
permissions_boundary = var.permissions_boundary 31 | subnets = var.subnets 32 | tags = var.tags 33 | vpc_id = var.vpc_id 34 | write_kubeconfig = var.write_kubeconfig 35 | } 36 | 37 | module "worker_groups" { 38 | source = "./modules/worker_groups" 39 | 40 | cluster_name = module.control_plane.cluster_id 41 | cluster_security_group_id = module.control_plane.cluster_security_group_id 42 | 43 | attach_worker_cni_policy = var.attach_worker_cni_policy 44 | create_eks = var.create_eks 45 | iam_path = var.iam_path 46 | manage_worker_iam_resources = var.manage_worker_iam_resources 47 | permissions_boundary = var.permissions_boundary 48 | subnets = var.subnets 49 | tags = var.tags 50 | vpc_id = var.vpc_id 51 | worker_additional_security_group_ids = var.worker_additional_security_group_ids 52 | worker_ami_name_filter = var.worker_ami_name_filter 53 | worker_ami_name_filter_windows = var.worker_ami_name_filter_windows 54 | worker_ami_owner_id = var.worker_ami_owner_id 55 | worker_ami_owner_id_windows = var.worker_ami_owner_id_windows 56 | worker_create_initial_lifecycle_hooks = var.worker_create_initial_lifecycle_hooks 57 | worker_create_security_group = var.worker_create_security_group 58 | worker_groups = var.worker_groups 59 | worker_groups_additional_policies = var.worker_groups_additional_policies 60 | worker_groups_defaults = var.worker_groups_defaults 61 | worker_groups_role_name = var.worker_groups_role_name 62 | worker_security_group_id = var.worker_security_group_id 63 | worker_sg_ingress_from_port = var.worker_sg_ingress_from_port 64 | } 65 | 66 | module "node_groups" { 67 | source = "./modules/node_groups" 68 | 69 | cluster_name = module.control_plane.cluster_id 70 | 71 | attach_node_cni_policy = var.attach_node_cni_policy 72 | create_eks = var.create_eks 73 | iam_path = var.iam_path 74 | manage_node_iam_resources = var.manage_node_iam_resources 75 | node_groups = var.node_groups 76 | node_groups_additional_policies = var.node_groups_additional_policies 77 | 
node_groups_defaults = var.node_groups_defaults 78 | node_groups_role_name = var.node_groups_role_name 79 | permissions_boundary = var.permissions_boundary 80 | subnets = var.subnets 81 | tags = var.tags 82 | } 83 | 84 | module "aws_auth" { 85 | source = "./modules/aws_auth" 86 | 87 | cluster_name = module.control_plane.cluster_id 88 | map_instances = concat(module.worker_groups.aws_auth_roles, module.node_groups.aws_auth_roles) 89 | 90 | create_eks = var.create_eks 91 | manage_aws_auth = var.manage_aws_auth 92 | map_accounts = var.map_accounts 93 | map_roles = var.map_roles 94 | map_users = var.map_users 95 | wait_for_cluster_cmd = var.wait_for_cluster_cmd 96 | } 97 | -------------------------------------------------------------------------------- /modules/worker_groups/variables.tf: -------------------------------------------------------------------------------- 1 | variable "create_eks" { 2 | description = "Controls if EKS resources should be created (it affects almost all resources)." 3 | type = bool 4 | default = true 5 | } 6 | 7 | variable "cluster_name" { 8 | description = "Name of the parent EKS cluster." 9 | type = string 10 | } 11 | 12 | variable "tags" { 13 | description = "A map of tags to add to all resources." 14 | type = map(string) 15 | } 16 | 17 | variable "worker_groups_defaults" { 18 | description = "Map of values to be applied to all worker groups. See documentation above for more details." 19 | type = any 20 | default = {} 21 | } 22 | 23 | variable "worker_groups" { 24 | description = "Map of maps of worker groups to create. See documentation above for more details." 25 | type = any 26 | default = {} 27 | } 28 | 29 | variable "subnets" { 30 | description = "A list of subnets to place the EKS cluster and workers within." 31 | type = list(string) 32 | } 33 | 34 | variable "worker_groups_role_name" { 35 | description = "User defined worker groups role name. 
36 | type = string 37 | default = "" 38 | } 39 | 40 | variable "permissions_boundary" { 41 | description = "If provided, all IAM roles will be created with this permissions boundary attached." 42 | type = string 43 | default = null 44 | } 45 | 46 | variable "iam_path" { 47 | description = "If provided, all IAM roles will be created on this path." 48 | type = string 49 | default = "/" 50 | } 51 | 52 | variable "attach_worker_cni_policy" { 53 | description = "Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default worker groups IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or workers will not be able to join the cluster." 54 | type = bool 55 | default = true 56 | } 57 | 58 | variable "worker_groups_additional_policies" { 59 | description = "Additional policies to be added to worker groups." 60 | type = list(string) 61 | default = [] 62 | } 63 | 64 | variable "workers_role_name" { 65 | description = "User defined workers role name." 66 | type = string 67 | default = "" 68 | } 69 | 70 | variable "worker_ami_name_filter" { 71 | description = "Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used." 72 | type = string 73 | default = "" 74 | } 75 | 76 | variable "worker_ami_name_filter_windows" { 77 | description = "Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used." 78 | type = string 79 | default = "" 80 | } 81 | 82 | variable "worker_ami_owner_id" { 83 | description = "The ID of the owner for the AMI to use for the AWS EKS workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft')." 84 | type = string 85 | default = "602401143452" // The ID of the owner of the official AWS EKS AMIs. 
86 | } 87 | 88 | variable "worker_ami_owner_id_windows" { 89 | description = "The ID of the owner for the AMI to use for the AWS EKS Windows workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft')." 90 | type = string 91 | default = "801119661308" // The ID of the owner of the official AWS EKS Windows AMIs. 92 | } 93 | 94 | variable "manage_worker_iam_resources" { 95 | description = "Whether to let the module manage worker IAM resources. If set to false, iam_instance_profile_name must be specified for workers." 96 | type = bool 97 | default = true 98 | } 99 | 100 | variable "worker_create_security_group" { 101 | description = "Whether to create a security group for the workers or attach the workers to `worker_security_group_id`." 102 | type = bool 103 | default = true 104 | } 105 | 106 | variable "worker_create_initial_lifecycle_hooks" { 107 | description = "Whether to create initial lifecycle hooks provided in worker groups." 108 | type = bool 109 | default = false 110 | } 111 | 112 | variable "workers_additional_policies" { 113 | description = "Additional policies to be added to workers" 114 | type = list(string) 115 | default = [] 116 | } 117 | 118 | variable "vpc_id" { 119 | description = "VPC where the cluster and workers will be deployed." 120 | type = string 121 | } 122 | 123 | variable "worker_additional_security_group_ids" { 124 | description = "A list of additional security group ids to attach to worker instances" 125 | type = list(string) 126 | default = [] 127 | } 128 | 129 | variable "cluster_security_group_id" { 130 | description = "If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers" 131 | type = string 132 | } 133 | 134 | variable "worker_sg_ingress_from_port" { 135 | description = "Minimum port number from which pods will accept communication. 
Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443)." 136 | type = number 137 | default = 1025 138 | } 139 | 140 | variable "worker_security_group_id" { 141 | description = "If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster." 142 | type = string 143 | default = "" 144 | } 145 | -------------------------------------------------------------------------------- /modules/worker_groups/README.md: -------------------------------------------------------------------------------- 1 | # eks `worker_groups` submodule 2 | 3 | This submodule is designed for use by both the parent `eks` module and by the user. 4 | 5 | > :warning: **Launch Configuration driven worker groups have been superseded by Launch Template driven worker groups** 6 | 7 | `worker_groups` is a map of maps. The first-level keys are used as unique values for the `for_each` resources and in the `aws_autoscaling_group` and `aws_launch_template` names. Each inner map accepts the values listed below. 8 | 9 | 10 | ## Providers 11 | 12 | | Name | Version | 13 | |------|---------| 14 | | aws | >= 2.52.0 | 15 | | random | >= 2.1 | 16 | | template | >= 2.1 | 17 | 18 | ## Inputs 19 | 20 | | Name | Description | Type | Default | Required | 21 | |------|-------------|------|---------|:-----:| 22 | | attach\_worker\_cni\_policy | Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default worker groups IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or workers will not be able to join the cluster. | `bool` | `true` | no | 23 | | cluster\_name | Name of the parent EKS cluster. | `string` | n/a | yes | 24 | | cluster\_security\_group\_id | If provided, the EKS cluster will be attached to this security group. 
If not given, a security group will be created with necessary ingress/egress to work with the workers | `string` | n/a | yes | 25 | | create\_eks | Controls if EKS resources should be created (it affects almost all resources). | `bool` | `true` | no | 26 | | iam\_path | If provided, all IAM roles will be created on this path. | `string` | `"/"` | no | 27 | | manage\_worker\_iam\_resources | Whether to let the module manage worker IAM resources. If set to false, iam\_instance\_profile\_name must be specified for workers. | `bool` | `true` | no | 28 | | permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | `string` | n/a | yes | 29 | | subnets | A list of subnets to place the EKS cluster and workers within. | `list(string)` | n/a | yes | 30 | | tags | A map of tags to add to all resources. | `map(string)` | n/a | yes | 31 | | vpc\_id | VPC where the cluster and workers will be deployed. | `string` | n/a | yes | 32 | | worker\_additional\_security\_group\_ids | A list of additional security group ids to attach to worker instances | `list(string)` | `[]` | no | 33 | | worker\_ami\_name\_filter | Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster\_version' is used. | `string` | `""` | no | 34 | | worker\_ami\_name\_filter\_windows | Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster\_version' is used. | `string` | `""` | no | 35 | | worker\_ami\_owner\_id | The ID of the owner for the AMI to use for the AWS EKS workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft'). | `string` | `"602401143452"` | no | 36 | | worker\_ami\_owner\_id\_windows | The ID of the owner for the AMI to use for the AWS EKS Windows workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 
'amazon', 'aws-marketplace', 'microsoft'). | `string` | `"801119661308"` | no | 37 | | worker\_create\_initial\_lifecycle\_hooks | Whether to create initial lifecycle hooks provided in worker groups. | `bool` | `false` | no | 38 | | worker\_create\_security\_group | Whether to create a security group for the workers or attach the workers to `worker_security_group_id`. | `bool` | `true` | no | 39 | | worker\_groups | Map of maps of worker groups to create. See documentation above for more details. | `any` | `{}` | no | 40 | | worker\_groups\_additional\_policies | Additional policies to be added to worker groups. | `list(string)` | `[]` | no | 41 | | worker\_groups\_defaults | Map of values to be applied to all worker groups. See documentation above for more details. | `any` | `{}` | no | 42 | | worker\_groups\_role\_name | User defined worker groups role name. | `string` | `""` | no | 43 | | worker\_security\_group\_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | `string` | `""` | no | 44 | | worker\_sg\_ingress\_from\_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | `number` | `1025` | no | 45 | | workers\_additional\_policies | Additional policies to be added to workers | `list(string)` | `[]` | no | 46 | | workers\_role\_name | User defined workers role name. 
| `string` | `""` | no | 47 | 48 | ## Outputs 49 | 50 | | Name | Description | 51 | |------|-------------| 52 | | aws\_auth\_roles | Roles for use in aws-auth ConfigMap | 53 | | worker\_iam\_instance\_profile\_arns | default IAM instance profile ARN for EKS worker groups | 54 | | worker\_iam\_instance\_profile\_names | default IAM instance profile name for EKS worker groups | 55 | | worker\_iam\_role\_arn | default IAM role ARN for EKS worker groups | 56 | | worker\_iam\_role\_name | default IAM role name for EKS worker groups | 57 | | worker\_security\_group\_id | Security group ID attached to the EKS workers. | 58 | | workers\_asg\_arns | ARNs of the autoscaling groups containing workers. | 59 | | workers\_asg\_names | Names of the autoscaling groups containing workers. | 60 | | workers\_default\_ami\_id | ID of the default worker group AMI | 61 | | workers\_launch\_template\_arns | ARNs of the worker launch templates. | 62 | | workers\_launch\_template\_ids | IDs of the worker launch templates. | 63 | | workers\_launch\_template\_latest\_versions | Latest versions of the worker launch templates. 
| 64 | | workers\_user\_data | User data of worker groups | 65 | 66 | 67 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ## [Unreleased] 4 | 5 | 6 | ## [v10.2.3] - 2020-05-18 7 | ### 🐛 Bug Fixes 8 | - [377e8a5](https://github.com/devopsmakers/terraform-aws-eks/commit/377e8a5) issue with placement groups ([#5](https://github.com/devopsmakers/terraform-aws-eks/issues/5)) 9 | 10 | 11 | 12 | ## [v10.2.2] - 2020-05-05 13 | ### 🐛 Bug Fixes 14 | - [5000dda](https://github.com/devopsmakers/terraform-aws-eks/commit/5000dda) iam_instance_profile_name empty when manage_worker_iam_resources=false 15 | - **worker_groups:** [c1ae5c1](https://github.com/devopsmakers/terraform-aws-eks/commit/c1ae5c1) index error when create_eks is false ([#3](https://github.com/devopsmakers/terraform-aws-eks/issues/3)) 16 | 17 | ### 🔧 Maintenance 18 | - **release:** [a5be1d5](https://github.com/devopsmakers/terraform-aws-eks/commit/a5be1d5) Update changelog for v10.2.2 19 | 20 | 21 | 22 | ## [v10.2.1] - 2020-03-12 23 | ### 🐛 Bug Fixes 24 | - [3f81efb](https://github.com/devopsmakers/terraform-aws-eks/commit/3f81efb) Add missing security group rule to allow workers to communicate with the cluster API 25 | 26 | ### 🔧 Maintenance 27 | - **release:** [a0fcf8a](https://github.com/devopsmakers/terraform-aws-eks/commit/a0fcf8a) Update changelog for v10.2.1 28 | 29 | 30 | 31 | ## [v10.2.0] - 2020-03-12 32 | ### 🐛 Bug Fixes 33 | - [7491eb9](https://github.com/devopsmakers/terraform-aws-eks/commit/7491eb9) worker group security group id output 34 | 35 | ### 🔧 Maintenance 36 | - **release:** [919c4a0](https://github.com/devopsmakers/terraform-aws-eks/commit/919c4a0) Update changelog for v10.2.0 37 | 38 | 39 | 40 | ## [v10.1.1] - 2020-03-11 41 | ### 🐛 Bug Fixes 42 | - [19f40b1](https://github.com/devopsmakers/terraform-aws-eks/commit/19f40b1) Use cluster_name rather 
than cluster_endpoint for consistency 43 | 44 | ### 🔧 Maintenance 45 | - **release:** [a697d89](https://github.com/devopsmakers/terraform-aws-eks/commit/a697d89) Update changelog for v10.1.1 46 | 47 | 48 | 49 | ## [v10.1.0] - 2020-03-11 50 | ### ✨ Features 51 | - [7b2d414](https://github.com/devopsmakers/terraform-aws-eks/commit/7b2d414) Add encryption_config capabilities, default to EKS v1.15 52 | 53 | ### 🔧 Maintenance 54 | - **release:** [fff4f90](https://github.com/devopsmakers/terraform-aws-eks/commit/fff4f90) Update changelog for v10.1.0 55 | 56 | 57 | 58 | ## [v10.0.3] - 2020-03-11 59 | ### 🐛 Bug Fixes 60 | - [ae728e4](https://github.com/devopsmakers/terraform-aws-eks/commit/ae728e4) Pass in cluster_endpoint 61 | 62 | ### 🔧 Maintenance 63 | - **release:** [5a8de60](https://github.com/devopsmakers/terraform-aws-eks/commit/5a8de60) Update changelog for v10.0.3 64 | 65 | 66 | 67 | ## [v10.0.2] - 2020-03-11 68 | ### 🐛 Bug Fixes 69 | - [addaae3](https://github.com/devopsmakers/terraform-aws-eks/commit/addaae3) Pass wait_for_cluster_cmd from parent module 70 | 71 | ### 🔧 Maintenance 72 | - **release:** [4a6bfcf](https://github.com/devopsmakers/terraform-aws-eks/commit/4a6bfcf) Update changelog for v10.0.2 73 | 74 | 75 | 76 | ## [v10.0.1] - 2020-03-11 77 | ### 🐛 Bug Fixes 78 | - [d856d03](https://github.com/devopsmakers/terraform-aws-eks/commit/d856d03) Add wait_for_cluster to aws_auth module 79 | 80 | ### 🔧 Maintenance 81 | - **release:** [83a62c8](https://github.com/devopsmakers/terraform-aws-eks/commit/83a62c8) Update changelog for v10.0.1 82 | 83 | 84 | 85 | ## [v10.0.0] - 2020-03-10 86 | ### ✨ Features 87 | - [1db1d2a](https://github.com/devopsmakers/terraform-aws-eks/commit/1db1d2a) Add worker groups submodules 88 | - [392ecfd](https://github.com/devopsmakers/terraform-aws-eks/commit/392ecfd) Enable management of the aws-auth ConfigMap as a module 89 | - **control_plane:** [f1ca63c](https://github.com/devopsmakers/terraform-aws-eks/commit/f1ca63c) Move 
control plane related resources to their own submodule 90 | 91 | ### 🐛 Bug Fixes 92 | - [bb9874b](https://github.com/devopsmakers/terraform-aws-eks/commit/bb9874b) Take ami id into account for random_pet keeper 93 | - **kubeconfig:** [2838035](https://github.com/devopsmakers/terraform-aws-eks/commit/2838035) Set sensible file and directory permissions on kubeconfig file 94 | 95 | ### 🔧 Maintenance 96 | - [f2a4434](https://github.com/devopsmakers/terraform-aws-eks/commit/f2a4434) Add initial submodule files 97 | - [05b7d54](https://github.com/devopsmakers/terraform-aws-eks/commit/05b7d54) Update README with intention 98 | - **release:** [ef0cbc4](https://github.com/devopsmakers/terraform-aws-eks/commit/ef0cbc4) Update changelog for v10.0.0 99 | - **release:** [df29db6](https://github.com/devopsmakers/terraform-aws-eks/commit/df29db6) Update changelog for v10.0.0 100 | - **release:** [565ab90](https://github.com/devopsmakers/terraform-aws-eks/commit/565ab90) Update changelog for v9.0.2 101 | 102 | 103 | 104 | ## v9.1.0 - 2020-03-06 105 | ### 🔧 Maintenance 106 | - [6338f6d](https://github.com/devopsmakers/terraform-aws-eks/commit/6338f6d) Initial commit 107 | 108 | 109 | [Unreleased]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.2.3...HEAD 110 | [v10.2.3]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.2.2...v10.2.3 111 | [v10.2.2]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.2.1...v10.2.2 112 | [v10.2.1]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.2.0...v10.2.1 113 | [v10.2.0]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.1.1...v10.2.0 114 | [v10.1.1]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.1.0...v10.1.1 115 | [v10.1.0]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.0.3...v10.1.0 116 | [v10.0.3]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.0.2...v10.0.3 117 | [v10.0.2]: 
https://github.com/devopsmakers/terraform-aws-eks/compare/v10.0.1...v10.0.2 118 | [v10.0.1]: https://github.com/devopsmakers/terraform-aws-eks/compare/v10.0.0...v10.0.1 119 | [v10.0.0]: https://github.com/devopsmakers/terraform-aws-eks/compare/v9.1.0...v10.0.0 120 | -------------------------------------------------------------------------------- /modules/control_plane/README.md: -------------------------------------------------------------------------------- 1 | # eks `control_plane` submodule 2 | 3 | This submodule is designed for use by both the parent `eks` module and by the user. 4 | 5 | 6 | ## Providers 7 | 8 | | Name | Version | 9 | |------|---------| 10 | | aws | >= 2.52.0 | 11 | | local | >= 1.2 | 12 | | template | >= 2.1 | 13 | 14 | ## Inputs 15 | 16 | | Name | Description | Type | Default | Required | 17 | |------|-------------|------|---------|:-----:| 18 | | cluster\_create\_security\_group | Whether to create a security group for the cluster or attach the cluster to `cluster_security_group_id`. | `bool` | `true` | no | 19 | | cluster\_create\_timeout | Timeout value when creating the EKS cluster. | `string` | `"30m"` | no | 20 | | cluster\_delete\_timeout | Timeout value when deleting the EKS cluster. | `string` | `"15m"` | no | 21 | | cluster\_enabled\_log\_types | A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | `list(string)` | `[]` | no | 22 | | cluster\_encryption\_key\_arn | KMS Key ARN to encrypt EKS resources with. | `string` | `""` | no | 23 | | cluster\_encryption\_resources | A list of the EKS resources to encrypt. | `list(string)` |
["secrets"]
| no | 24 | | cluster\_endpoint\_private\_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. | `bool` | `false` | no | 25 | | cluster\_endpoint\_public\_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. | `bool` | `true` | no | 26 | | cluster\_endpoint\_public\_access\_cidrs | List of CIDR blocks which can access the Amazon EKS public API server endpoint. | `list(string)` |
["0.0.0.0/0"]
| no | 27 | | cluster\_iam\_role\_name | IAM role name for the cluster. Only applicable if manage\_cluster\_iam\_resources is set to false. | `string` | `""` | no | 28 | | cluster\_log\_kms\_key\_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | `string` | `""` | no | 29 | | cluster\_log\_retention\_in\_days | Number of days to retain log events. Default retention - 90 days. | `number` | `90` | no | 30 | | cluster\_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | `string` | n/a | yes | 31 | | cluster\_security\_group\_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers | `string` | `""` | no | 32 | | cluster\_version | Kubernetes version to use for the EKS cluster. | `string` | `"1.15"` | no | 33 | | config\_output\_path | Where to save the Kubectl config file (if `write_kubeconfig = true`). Assumed to be a directory if the value ends with a forward slash `/`. | `string` | `"./"` | no | 34 | | create\_eks | Controls if EKS resources should be created (it affects almost all resources) | `bool` | `true` | no | 35 | | eks\_oidc\_root\_ca\_thumbprint | Thumbprint of Root CA for EKS OIDC, Valid until 2037 | `string` | `"9e99a48a9960b14926bb7f3b02e22da2b0ab7280"` | no | 36 | | enable\_irsa | Whether to create OpenID Connect Provider for EKS to enable IRSA | `bool` | `false` | no | 37 | | iam\_path | If provided, all IAM roles will be created on this path. | `string` | `"/"` | no | 38 | | kubeconfig\_aws\_authenticator\_additional\_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. 
| `list(string)` | `[]` | no | 39 | | kubeconfig\_aws\_authenticator\_command | Command to use to fetch AWS EKS credentials. | `string` | `"aws-iam-authenticator"` | no | 40 | | kubeconfig\_aws\_authenticator\_command\_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster\_name]. | `list(string)` | `[]` | no | 41 | | kubeconfig\_aws\_authenticator\_env\_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS\_PROFILE = "eks"}. | `map(string)` | `{}` | no | 42 | | kubeconfig\_name | Override the default name used for items kubeconfig. | `string` | `""` | no | 43 | | manage\_cluster\_iam\_resources | Whether to let the module manage cluster IAM resources. If set to false, cluster\_iam\_role\_name must be specified. | `bool` | `true` | no | 44 | | permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | `string` | n/a | yes | 45 | | subnets | A list of subnets to place the EKS cluster and workers within. | `list(string)` | n/a | yes | 46 | | tags | A map of tags to add to all resources. | `map(string)` | `{}` | no | 47 | | vpc\_id | VPC where the cluster and workers will be deployed. | `string` | n/a | yes | 48 | | write\_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`. | `bool` | `true` | no | 49 | 50 | ## Outputs 51 | 52 | | Name | Description | 53 | |------|-------------| 54 | | cloudwatch\_log\_group\_name | Name of cloudwatch log group created | 55 | | cluster\_arn | The Amazon Resource Name (ARN) of the cluster. | 56 | | cluster\_certificate\_authority\_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. | 57 | | cluster\_endpoint | The endpoint for your EKS Kubernetes API. | 58 | | cluster\_iam\_role\_arn | IAM role ARN of the EKS cluster. 
| 59 | | cluster\_id | The name/id of the EKS cluster. | 60 | | cluster\_oidc\_issuer\_url | The URL on the EKS cluster OIDC Issuer | 61 | | cluster\_security\_group\_id | Security group ID attached to the EKS cluster. | 62 | | cluster\_version | The Kubernetes server version for the EKS cluster. | 63 | | kubeconfig | kubectl config file contents for this EKS cluster. | 64 | | kubeconfig\_filename | The filename of the generated kubectl config. | 65 | | oidc\_provider\_arn | The ARN of the OIDC Provider if `enable_irsa = true`. | 66 | 67 | 68 | -------------------------------------------------------------------------------- /modules/control_plane/variables.tf: -------------------------------------------------------------------------------- 1 | variable "create_eks" { 2 | description = "Controls if EKS resources should be created (it affects almost all resources)" 3 | type = bool 4 | default = true 5 | } 6 | 7 | variable "cluster_version" { 8 | description = "Kubernetes version to use for the EKS cluster." 9 | type = string 10 | default = "1.15" 11 | } 12 | 13 | variable "cluster_enabled_log_types" { 14 | default = [] 15 | description = "A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)" 16 | type = list(string) 17 | } 18 | variable "cluster_log_kms_key_id" { 19 | default = "" 20 | description = "If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html)" 21 | type = string 22 | } 23 | variable "cluster_log_retention_in_days" { 24 | default = 90 25 | description = "Number of days to retain log events. Default retention - 90 days." 26 | type = number 27 | } 28 | 29 | variable "cluster_name" { 30 | description = "Name of the EKS cluster. 
Also used as a prefix in names of related resources." 31 | type = string 32 | } 33 | 34 | variable "cluster_security_group_id" { 35 | description = "If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers" 36 | type = string 37 | default = "" 38 | } 39 | 40 | variable "cluster_create_timeout" { 41 | description = "Timeout value when creating the EKS cluster." 42 | type = string 43 | default = "30m" 44 | } 45 | 46 | variable "cluster_delete_timeout" { 47 | description = "Timeout value when deleting the EKS cluster." 48 | type = string 49 | default = "15m" 50 | } 51 | 52 | variable "cluster_create_security_group" { 53 | description = "Whether to create a security group for the cluster or attach the cluster to `cluster_security_group_id`." 54 | type = bool 55 | default = true 56 | } 57 | 58 | variable "iam_path" { 59 | description = "If provided, all IAM roles will be created on this path." 60 | type = string 61 | default = "/" 62 | } 63 | 64 | variable "cluster_endpoint_private_access" { 65 | description = "Indicates whether or not the Amazon EKS private API server endpoint is enabled." 66 | type = bool 67 | default = false 68 | } 69 | 70 | variable "cluster_endpoint_public_access" { 71 | description = "Indicates whether or not the Amazon EKS public API server endpoint is enabled." 72 | type = bool 73 | default = true 74 | } 75 | 76 | variable "cluster_endpoint_public_access_cidrs" { 77 | description = "List of CIDR blocks which can access the Amazon EKS public API server endpoint." 78 | type = list(string) 79 | default = ["0.0.0.0/0"] 80 | } 81 | 82 | variable "manage_cluster_iam_resources" { 83 | description = "Whether to let the module manage cluster IAM resources. If set to false, cluster_iam_role_name must be specified." 
84 | type = bool 85 | default = true 86 | } 87 | 88 | variable "cluster_iam_role_name" { 89 | description = "IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false." 90 | type = string 91 | default = "" 92 | } 93 | 94 | variable "permissions_boundary" { 95 | description = "If provided, all IAM roles will be created with this permissions boundary attached." 96 | type = string 97 | default = null 98 | } 99 | 100 | variable "enable_irsa" { 101 | description = "Whether to create OpenID Connect Provider for EKS to enable IRSA" 102 | type = bool 103 | default = false 104 | } 105 | 106 | variable "subnets" { 107 | description = "A list of subnets to place the EKS cluster and workers within." 108 | type = list(string) 109 | } 110 | 111 | variable "tags" { 112 | description = "A map of tags to add to all resources." 113 | type = map(string) 114 | default = {} 115 | } 116 | 117 | variable "vpc_id" { 118 | description = "VPC where the cluster and workers will be deployed." 119 | type = string 120 | } 121 | 122 | variable "config_output_path" { 123 | description = "Where to save the Kubectl config file (if `write_kubeconfig = true`). Assumed to be a directory if the value ends with a forward slash `/`." 124 | type = string 125 | default = "./" 126 | } 127 | 128 | variable "write_kubeconfig" { 129 | description = "Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`." 130 | type = bool 131 | default = true 132 | } 133 | 134 | variable "kubeconfig_aws_authenticator_command" { 135 | description = "Command to use to fetch AWS EKS credentials." 136 | type = string 137 | default = "aws-iam-authenticator" 138 | } 139 | 140 | variable "kubeconfig_aws_authenticator_command_args" { 141 | description = "Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name]." 
142 | type = list(string) 143 | default = [] 144 | } 145 | 146 | variable "kubeconfig_aws_authenticator_additional_args" { 147 | description = "Any additional arguments to pass to the authenticator such as the role to assume. e.g. [\"-r\", \"MyEksRole\"]." 148 | type = list(string) 149 | default = [] 150 | } 151 | 152 | variable "kubeconfig_aws_authenticator_env_variables" { 153 | description = "Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = \"eks\"}." 154 | type = map(string) 155 | default = {} 156 | } 157 | 158 | variable "kubeconfig_name" { 159 | description = "Override the default name used for the generated kubeconfig file." 160 | type = string 161 | default = "" 162 | } 163 | 164 | variable "eks_oidc_root_ca_thumbprint" { 165 | type = string 166 | description = "Thumbprint of Root CA for EKS OIDC. Valid until 2037." 167 | default = "9e99a48a9960b14926bb7f3b02e22da2b0ab7280" 168 | } 169 | 170 | variable "cluster_encryption_key_arn" { 171 | type = string 172 | description = "KMS Key ARN to encrypt EKS resources with." 173 | default = "" 174 | } 175 | 176 | variable "cluster_encryption_resources" { 177 | type = list(string) 178 | description = "A list of the EKS resources to encrypt." 
179 | default = ["secrets"] 180 | } 181 | -------------------------------------------------------------------------------- /modules/worker_groups/locals.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | asg_tags = [ 3 | for item in keys(var.tags) : 4 | map( 5 | "key", item, 6 | "value", element(values(var.tags), index(keys(var.tags), item)), 7 | "propagate_at_launch", "true" 8 | ) 9 | ] 10 | 11 | default_iam_role_id = concat(aws_iam_role.worker_groups.*.id, [""])[0] 12 | default_ami_id_linux = data.aws_ami.eks_worker.id 13 | default_ami_id_windows = data.aws_ami.eks_worker_windows.id 14 | 15 | worker_create_security_group = var.worker_create_security_group && var.create_eks 16 | 17 | worker_groups_defaults = { 18 | name = "" # Name of the worker group. Literal count.index will never be used but if name is not set, the count.index interpolation will be used. 19 | tags = [] # A list of maps defining extra tags to be applied to the worker group autoscaling group. 20 | ami_id = "" # AMI ID for the eks workers. If none is provided, Terraform will search for the latest version of their EKS optimized worker AMI based on platform. 21 | desired_capacity = "1" # Desired worker capacity in the autoscaling group. Changing this value will not affect the autoscaling group's desired capacity, because the cluster-autoscaler manages scaling the nodes up and down: it adds nodes when pods are pending and removes nodes when they are no longer required, by modifying the desired_capacity of the autoscaling group. Note that a known issue exists where changing min_size also modifies desired_capacity. 22 | max_size = "3" # Maximum worker capacity in the autoscaling group. 23 | min_size = "1" # Minimum worker capacity in the autoscaling group. 
NOTE: A change in this parameter will affect desired_capacity: changing its value to 2 will set desired_capacity to 2, but bringing it back to 1 will not affect desired_capacity. 24 | force_delete = false # Enable forced deletion for the autoscaling group. 25 | initial_lifecycle_hooks = [] # Initial lifecycle hooks for the autoscaling group. 26 | recreate_on_change = false # Recreate the autoscaling group when the Launch Template or Launch Configuration change. 27 | default_cooldown = null # The amount of time, in seconds, after a scaling activity completes before another scaling activity can start. 28 | health_check_grace_period = null # Time in seconds after instance comes into service before checking health. 29 | instance_type = "m4.large" # Size of the worker instances. 30 | spot_price = "" # Maximum bid price for spot instances. 31 | placement_tenancy = "" # The tenancy of the instance. Valid values are "default" or "dedicated". 32 | root_volume_size = "100" # Root volume size of worker instances. 33 | root_volume_type = "gp2" # Root volume type of worker instances; can be 'standard', 'gp2', or 'io1'. 34 | root_iops = "0" # The amount of provisioned IOPS. This must be set with a volume_type of "io1". 35 | key_name = "" # The key name that should be used for the instances in the autoscaling group. 36 | pre_userdata = "" # Userdata to prepend to the default userdata. 37 | userdata_template_file = "" # Alternate template to use for userdata. 38 | userdata_template_extra_args = {} # Additional arguments to use when expanding the userdata template file. 39 | bootstrap_extra_args = "" # Extra arguments passed to the bootstrap.sh script from the EKS AMI (Amazon Machine Image). 40 | additional_userdata = "" # Userdata to append to the default userdata. 41 | ebs_optimized = true # Sets whether to use EBS optimization on supported instance types. 42 | enable_monitoring = true # Enables/disables detailed monitoring. 
43 | public_ip = false # Associate a public IP address with a worker. 44 | kubelet_extra_args = "" # This string is passed directly to kubelet if set. Useful for adding labels or taints. 45 | subnets = var.subnets # A list of subnets to place the worker nodes in. i.e. ["subnet-123", "subnet-456", "subnet-789"] 46 | additional_security_group_ids = [] # A list of additional security group ids to include in worker launch config 47 | protect_from_scale_in = false # Prevent AWS from scaling in, so that cluster-autoscaler is solely responsible. 48 | iam_instance_profile_name = "" # A custom IAM instance profile name. Used when manage_worker_iam_resources is set to false. Incompatible with iam_role_id. 49 | iam_role_id = local.default_iam_role_id # A custom IAM role id. Incompatible with iam_instance_profile_name. Literal local.default_iam_role_id will never be used but if iam_role_id is not set, the local.default_iam_role_id interpolation will be used. 50 | suspended_processes = ["AZRebalance"] # A list of processes to suspend. i.e. ["AZRebalance", "HealthCheck", "ReplaceUnhealthy"] 51 | target_group_arns = null # A list of Application LoadBalancer (ALB) target group ARNs to be associated to the autoscaling group 52 | enabled_metrics = [] # A list of metrics to be collected i.e. ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity"] 53 | placement_group = null # The name of the placement group into which to launch the instances, if any. 54 | service_linked_role_arn = "" # ARN of a custom service-linked role that the Auto Scaling group will use. Useful when you have encrypted EBS volumes. 55 | termination_policies = [] # A list of policies to decide how the instances in the Auto Scaling group should be terminated. 56 | platform = "linux" # Platform of workers; either "linux" or "windows". 57 | max_instance_lifetime = 0 # Maximum number of seconds instances can run in the ASG. 0 is unlimited. 
58 | # Settings for launch templates 59 | root_block_device_name = data.aws_ami.eks_worker.root_device_name # Root device name for workers. If none is provided, the default AMI's root device name is assumed. 60 | root_kms_key_id = "" # The KMS key to use when encrypting the root storage device 61 | launch_template_version = "$Latest" # The version of the launch template to use in the autoscaling group 62 | launch_template_placement_tenancy = "default" # The placement tenancy for instances 63 | launch_template_placement_group = null # The name of the placement group into which to launch the instances, if any. 64 | root_encrypted = "" # Whether the volume should be encrypted or not 65 | eni_delete = true # Delete the Elastic Network Interface (ENI) on termination (if set to false you will have to manually delete before destroying) 66 | cpu_credits = "standard" # T2/T3 unlimited mode, can be 'standard' or 'unlimited'. 'standard' is used as the default to avoid paying higher costs 67 | market_type = null 68 | # Settings for launch templates with mixed instances policy 69 | override_instance_types = [] # A list of override instance types for mixed instances policy 70 | on_demand_allocation_strategy = null # Strategy to use when launching on-demand instances. Valid values: prioritized. 71 | on_demand_base_capacity = "0" # Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances 72 | on_demand_percentage_above_base_capacity = "0" # Percentage split between on-demand and Spot instances above the base on-demand capacity 73 | spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity. 
74 | spot_instance_pools = 10 # Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify. 75 | spot_max_price = "" # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price 76 | } 77 | 78 | # Merge defaults and per-group values to make code cleaner 79 | worker_groups_expanded = { for k, v in var.worker_groups : k => merge( 80 | local.worker_groups_defaults, 81 | var.worker_groups_defaults, 82 | v, 83 | ) if var.create_eks } 84 | 85 | worker_security_group_id = local.worker_create_security_group ? aws_security_group.worker_groups.0.id : var.worker_security_group_id 86 | 87 | policy_arn_prefix = contains(["cn-northwest-1", "cn-north-1"], data.aws_region.current.name) ? "arn:aws-cn:iam::aws:policy" : "arn:aws:iam::aws:policy" 88 | 89 | ebs_optimized_not_supported = [ 90 | "c1.medium", 91 | "c3.8xlarge", 92 | "c3.large", 93 | "c5d.12xlarge", 94 | "c5d.24xlarge", 95 | "c5d.metal", 96 | "cc2.8xlarge", 97 | "cr1.8xlarge", 98 | "g2.8xlarge", 99 | "g4dn.metal", 100 | "hs1.8xlarge", 101 | "i2.8xlarge", 102 | "m1.medium", 103 | "m1.small", 104 | "m2.xlarge", 105 | "m3.large", 106 | "m3.medium", 107 | "m5ad.16xlarge", 108 | "m5ad.8xlarge", 109 | "m5dn.metal", 110 | "m5n.metal", 111 | "r3.8xlarge", 112 | "r3.large", 113 | "r5ad.16xlarge", 114 | "r5ad.8xlarge", 115 | "r5dn.metal", 116 | "r5n.metal", 117 | "t1.micro", 118 | "t2.2xlarge", 119 | "t2.large", 120 | "t2.medium", 121 | "t2.micro", 122 | "t2.nano", 123 | "t2.small", 124 | "t2.xlarge" 125 | ] 126 | } 127 | -------------------------------------------------------------------------------- /variables.tf: -------------------------------------------------------------------------------- 1 | variable "cluster_enabled_log_types" { 2 | default = [] 3 | description = "A list of the desired control plane logging to enable. 
For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)" 4 | type = list(string) 5 | } 6 | variable "cluster_log_kms_key_id" { 7 | default = "" 8 | description = "If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html)" 9 | type = string 10 | } 11 | variable "cluster_log_retention_in_days" { 12 | default = 90 13 | description = "Number of days to retain log events. Default retention - 90 days." 14 | type = number 15 | } 16 | 17 | variable "cluster_name" { 18 | description = "Name of the EKS cluster. Also used as a prefix in names of related resources." 19 | type = string 20 | } 21 | 22 | variable "cluster_security_group_id" { 23 | description = "If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers" 24 | type = string 25 | default = "" 26 | } 27 | 28 | variable "cluster_version" { 29 | description = "Kubernetes version to use for the EKS cluster." 30 | type = string 31 | default = "1.15" 32 | } 33 | 34 | variable "config_output_path" { 35 | description = "Where to save the Kubectl config file (if `write_kubeconfig = true`). Assumed to be a directory if the value ends with a forward slash `/`." 36 | type = string 37 | default = "./" 38 | } 39 | 40 | variable "write_kubeconfig" { 41 | description = "Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`." 42 | type = bool 43 | default = true 44 | } 45 | 46 | variable "manage_aws_auth" { 47 | description = "Whether to apply the aws-auth configmap file." 
48 | default = true 49 | } 50 | 51 | variable "map_accounts" { 52 | description = "Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format." 53 | type = list(string) 54 | default = [] 55 | } 56 | 57 | variable "map_roles" { 58 | description = "Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format." 59 | type = list(object({ 60 | rolearn = string 61 | username = string 62 | groups = list(string) 63 | })) 64 | default = [] 65 | } 66 | 67 | variable "map_users" { 68 | description = "Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format." 69 | type = list(object({ 70 | userarn = string 71 | username = string 72 | groups = list(string) 73 | })) 74 | default = [] 75 | } 76 | 77 | variable "subnets" { 78 | description = "A list of subnets to place the EKS cluster and workers within." 79 | type = list(string) 80 | } 81 | 82 | variable "tags" { 83 | description = "A map of tags to add to all resources." 84 | type = map(string) 85 | default = {} 86 | } 87 | 88 | variable "vpc_id" { 89 | description = "VPC where the cluster and workers will be deployed." 90 | type = string 91 | } 92 | 93 | variable "worker_groups" { 94 | description = "A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See worker_groups_defaults for valid keys." 95 | type = any 96 | default = [] 97 | } 98 | 99 | variable "worker_groups_defaults" { 100 | description = "Override default values for the worker groups. See worker_groups_defaults in locals.tf for valid keys." 101 | type = any 102 | default = {} 103 | } 104 | 105 | variable "worker_groups_launch_template" { 106 | description = "A list of maps defining worker group configurations to be defined using AWS Launch Templates. See worker_groups_defaults for valid keys." 
107 | type = any 108 | default = [] 109 | } 110 | 111 | variable "worker_security_group_id" { 112 | description = "If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster." 113 | type = string 114 | default = "" 115 | } 116 | 117 | variable "worker_ami_name_filter" { 118 | description = "Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used." 119 | type = string 120 | default = "" 121 | } 122 | 123 | variable "worker_ami_name_filter_windows" { 124 | description = "Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used." 125 | type = string 126 | default = "" 127 | } 128 | 129 | variable "worker_ami_owner_id" { 130 | description = "The ID of the owner for the AMI to use for the AWS EKS workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft')." 131 | type = string 132 | default = "602401143452" // The ID of the owner of the official AWS EKS AMIs. 133 | } 134 | 135 | variable "worker_ami_owner_id_windows" { 136 | description = "The ID of the owner for the AMI to use for the AWS EKS Windows workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft')." 137 | type = string 138 | default = "801119661308" // The ID of the owner of the official AWS EKS Windows AMIs. 139 | } 140 | 141 | variable "worker_additional_security_group_ids" { 142 | description = "A list of additional security group ids to attach to worker instances" 143 | type = list(string) 144 | default = [] 145 | } 146 | 147 | variable "worker_sg_ingress_from_port" { 148 | description = "Minimum port number from which pods will accept communication. 
Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443)." 149 | type = number 150 | default = 1025 151 | } 152 | 153 | variable "worker_groups_additional_policies" { 154 | description = "Additional policies to be added to workers" 155 | type = list(string) 156 | default = [] 157 | } 158 | 159 | variable "node_groups_additional_policies" { 160 | description = "Additional policies to be added to node groups" 161 | type = list(string) 162 | default = [] 163 | } 164 | 165 | variable "kubeconfig_aws_authenticator_command" { 166 | description = "Command to use to fetch AWS EKS credentials." 167 | type = string 168 | default = "aws-iam-authenticator" 169 | } 170 | 171 | variable "kubeconfig_aws_authenticator_command_args" { 172 | description = "Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name]." 173 | type = list(string) 174 | default = [] 175 | } 176 | 177 | variable "kubeconfig_aws_authenticator_additional_args" { 178 | description = "Any additional arguments to pass to the authenticator such as the role to assume. e.g. [\"-r\", \"MyEksRole\"]." 179 | type = list(string) 180 | default = [] 181 | } 182 | 183 | variable "kubeconfig_aws_authenticator_env_variables" { 184 | description = "Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = \"eks\"}." 185 | type = map(string) 186 | default = {} 187 | } 188 | 189 | variable "kubeconfig_name" { 190 | description = "Override the default name used for the generated kubeconfig file." 191 | type = string 192 | default = "" 193 | } 194 | 195 | variable "cluster_create_timeout" { 196 | description = "Timeout value when creating the EKS cluster." 197 | type = string 198 | default = "30m" 199 | } 200 | 201 | variable "cluster_delete_timeout" { 202 | description = "Timeout value when deleting the EKS cluster." 
203 | type = string 204 | default = "15m" 205 | } 206 | 207 | variable "wait_for_cluster_cmd" { 208 | description = "Custom local-exec command to execute for determining if the eks cluster is healthy. Cluster endpoint will be available as an environment variable called ENDPOINT" 209 | type = string 210 | default = "until wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null; do sleep 4; done" 211 | } 212 | 213 | variable "cluster_create_security_group" { 214 | description = "Whether to create a security group for the cluster or attach the cluster to `cluster_security_group_id`." 215 | type = bool 216 | default = true 217 | } 218 | 219 | variable "worker_create_security_group" { 220 | description = "Whether to create a security group for the workers or attach the workers to `worker_security_group_id`." 221 | type = bool 222 | default = true 223 | } 224 | 225 | variable "worker_create_initial_lifecycle_hooks" { 226 | description = "Whether to create initial lifecycle hooks provided in worker groups." 227 | type = bool 228 | default = false 229 | } 230 | 231 | variable "permissions_boundary" { 232 | description = "If provided, all IAM roles will be created with this permissions boundary attached." 233 | type = string 234 | default = null 235 | } 236 | 237 | variable "iam_path" { 238 | description = "If provided, all IAM roles will be created on this path." 239 | type = string 240 | default = "/" 241 | } 242 | 243 | variable "cluster_endpoint_private_access" { 244 | description = "Indicates whether or not the Amazon EKS private API server endpoint is enabled." 245 | type = bool 246 | default = false 247 | } 248 | 249 | variable "cluster_endpoint_public_access" { 250 | description = "Indicates whether or not the Amazon EKS public API server endpoint is enabled." 
251 | type = bool 252 | default = true 253 | } 254 | 255 | variable "cluster_endpoint_public_access_cidrs" { 256 | description = "List of CIDR blocks which can access the Amazon EKS public API server endpoint." 257 | type = list(string) 258 | default = ["0.0.0.0/0"] 259 | } 260 | 261 | variable "manage_cluster_iam_resources" { 262 | description = "Whether to let the module manage cluster IAM resources. If set to false, cluster_iam_role_name must be specified." 263 | type = bool 264 | default = true 265 | } 266 | 267 | variable "cluster_iam_role_name" { 268 | description = "IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false." 269 | type = string 270 | default = "" 271 | } 272 | 273 | variable "manage_worker_iam_resources" { 274 | description = "Whether to let the module manage worker IAM resources. If set to false, iam_instance_profile_name must be specified for workers." 275 | type = bool 276 | default = true 277 | } 278 | 279 | variable "manage_node_iam_resources" { 280 | description = "Whether to let the module manage node group IAM resources. If set to false, iam_instance_profile_name must be specified for node groups." 281 | type = bool 282 | default = true 283 | } 284 | 285 | variable "worker_groups_role_name" { 286 | description = "User defined workers role name." 287 | type = string 288 | default = "" 289 | } 290 | 291 | variable "node_groups_role_name" { 292 | description = "User defined node groups role name." 293 | type = string 294 | default = "" 295 | } 296 | 297 | variable "attach_worker_cni_policy" { 298 | description = "Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default worker IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or nodes will not be able to join the cluster." 
299 | type = bool 300 | default = true 301 | } 302 | 303 | variable "attach_node_cni_policy" { 304 | description = "Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default node group IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or nodes will not be able to join the cluster." 305 | type = bool 306 | default = true 307 | } 308 | 309 | variable "create_eks" { 310 | description = "Controls if EKS resources should be created (it affects almost all resources)" 311 | type = bool 312 | default = true 313 | } 314 | 315 | variable "node_groups_defaults" { 316 | description = "Map of values to be applied to all node groups. See `node_groups` module's documentation for more details" 317 | type = any 318 | default = {} 319 | } 320 | 321 | variable "node_groups" { 322 | description = "Map of maps of node groups to create. See `node_groups` module's documentation for more details" 323 | type = any 324 | default = {} 325 | } 326 | 327 | variable "enable_irsa" { 328 | description = "Whether to create OpenID Connect Provider for EKS to enable IRSA" 329 | type = bool 330 | default = false 331 | } 332 | 333 | variable "eks_oidc_root_ca_thumbprint" { 334 | type = string 335 | description = "Thumbprint of Root CA for EKS OIDC. Valid until 2037." 336 | default = "9e99a48a9960b14926bb7f3b02e22da2b0ab7280" 337 | } 338 | 339 | variable "cluster_encryption_key_arn" { 340 | type = string 341 | description = "KMS Key ARN to encrypt EKS secrets with." 342 | default = "" 343 | } 344 | 345 | variable "cluster_encryption_resources" { 346 | type = list(string) 347 | description = "A list of the EKS resources to encrypt." 
348 | default = ["secrets"] 349 | } 350 | -------------------------------------------------------------------------------- /modules/worker_groups/worker_groups.tf: -------------------------------------------------------------------------------- 1 | # Worker Groups using Launch Templates 2 | 3 | resource "aws_autoscaling_group" "worker_groups" { 4 | for_each = local.worker_groups_expanded 5 | 6 | name_prefix = join( 7 | "-", 8 | compact( 9 | [ 10 | var.cluster_name, 11 | coalesce(each.value["name"], each.key), 12 | each.value["recreate_on_change"] ? random_pet.worker_groups[each.key].id : "" 13 | ] 14 | ) 15 | ) 16 | 17 | desired_capacity = each.value["desired_capacity"] 18 | max_size = each.value["max_size"] 19 | min_size = each.value["min_size"] 20 | force_delete = each.value["force_delete"] 21 | target_group_arns = each.value["target_group_arns"] 22 | service_linked_role_arn = each.value["service_linked_role_arn"] 23 | vpc_zone_identifier = each.value["subnets"] 24 | protect_from_scale_in = each.value["protect_from_scale_in"] 25 | suspended_processes = each.value["suspended_processes"] 26 | enabled_metrics = each.value["enabled_metrics"] 27 | placement_group = each.value["placement_group"] 28 | termination_policies = each.value["termination_policies"] 29 | max_instance_lifetime = each.value["max_instance_lifetime"] 30 | default_cooldown = each.value["default_cooldown"] 31 | health_check_grace_period = each.value["health_check_grace_period"] 32 | 33 | dynamic "mixed_instances_policy" { 34 | iterator = item 35 | for_each = (lookup(each.value, "override_instance_types", null) != null) || (lookup(each.value, "on_demand_allocation_strategy", null) != null) ? 
list(each.value) : [] 36 | 37 | content { 38 | instances_distribution { 39 | 40 | on_demand_base_capacity = item.value["on_demand_base_capacity"] 41 | on_demand_percentage_above_base_capacity = item.value["on_demand_percentage_above_base_capacity"] 42 | on_demand_allocation_strategy = lookup(item.value, "on_demand_allocation_strategy", "prioritized") 43 | 44 | spot_allocation_strategy = item.value["spot_allocation_strategy"] 45 | spot_instance_pools = item.value["spot_instance_pools"] 46 | spot_max_price = item.value["spot_max_price"] 47 | } 48 | 49 | launch_template { 50 | 51 | launch_template_specification { 52 | launch_template_id = aws_launch_template.worker_groups[each.key].id 53 | version = item.value["launch_template_version"] 54 | } 55 | 56 | dynamic "override" { 57 | for_each = item.value["override_instance_types"] 58 | 59 | content { 60 | instance_type = override.value 61 | } 62 | } 63 | } 64 | } 65 | } 66 | 67 | dynamic "launch_template" { 68 | iterator = item 69 | for_each = (lookup(each.value, "override_instance_types", null) != null) || (lookup(each.value, "on_demand_allocation_strategy", null) != null) ? [] : list(each.value) 70 | 71 | content { 72 | id = aws_launch_template.worker_groups[each.key].id 73 | version = item.value["launch_template_version"] 74 | } 75 | } 76 | 77 | dynamic "initial_lifecycle_hook" { 78 | for_each = var.worker_create_initial_lifecycle_hooks ? 
each.value["initial_lifecycle_hooks"] : [] 79 | content { 80 | name = initial_lifecycle_hook.value["name"] 81 | lifecycle_transition = initial_lifecycle_hook.value["lifecycle_transition"] 82 | notification_metadata = lookup(initial_lifecycle_hook.value, "notification_metadata", null) 83 | heartbeat_timeout = lookup(initial_lifecycle_hook.value, "heartbeat_timeout", null) 84 | notification_target_arn = lookup(initial_lifecycle_hook.value, "notification_target_arn", null) 85 | role_arn = lookup(initial_lifecycle_hook.value, "role_arn", null) 86 | default_result = lookup(initial_lifecycle_hook.value, "default_result", null) 87 | } 88 | } 89 | 90 | tags = concat( 91 | [ 92 | { 93 | "key" = "Name" 94 | "value" = "${var.cluster_name}-${coalesce(each.value["name"], each.key)}-eks_asg" 95 | "propagate_at_launch" = true 96 | }, 97 | { 98 | "key" = "kubernetes.io/cluster/${var.cluster_name}" 99 | "value" = "owned" 100 | "propagate_at_launch" = true 101 | }, 102 | ], 103 | local.asg_tags, 104 | each.value["tags"] 105 | ) 106 | 107 | lifecycle { 108 | create_before_destroy = true 109 | ignore_changes = [desired_capacity] 110 | } 111 | 112 | depends_on = [ 113 | aws_iam_role_policy_attachment.workers_AmazonEKSWorkerNodePolicy, 114 | aws_iam_role_policy_attachment.workers_AmazonEKS_CNI_Policy, 115 | aws_iam_role_policy_attachment.workers_AmazonEC2ContainerRegistryReadOnly, 116 | ] 117 | } 118 | 119 | resource "aws_launch_template" "worker_groups" { 120 | for_each = local.worker_groups_expanded 121 | 122 | name_prefix = "${var.cluster_name}-${coalesce(each.value["name"], each.key)}" 123 | 124 | network_interfaces { 125 | associate_public_ip_address = each.value["public_ip"] 126 | delete_on_termination = each.value["eni_delete"] 127 | security_groups = flatten([ 128 | local.worker_security_group_id, 129 | var.worker_additional_security_group_ids, 130 | each.value["additional_security_group_ids"], 131 | ]) 132 | } 133 | 134 | iam_instance_profile { 135 | name = coalescelist( 136 | 
var.manage_worker_iam_resources ? [aws_iam_instance_profile.worker_groups[each.key].name] : [], 137 | var.manage_worker_iam_resources ? [] : [data.aws_iam_instance_profile.custom_worker_group_launch_template_iam_instance_profile[each.key].name], 138 | [""] 139 | )[0] 140 | } 141 | 142 | image_id = coalesce(each.value["ami_id"], each.value["platform"] == "windows" ? local.default_ami_id_windows : local.default_ami_id_linux) 143 | 144 | instance_type = each.value["instance_type"] 145 | key_name = each.value["key_name"] 146 | 147 | user_data = base64encode( 148 | data.template_file.launch_template_userdata[each.key].rendered, 149 | ) 150 | 151 | ebs_optimized = contains( 152 | local.ebs_optimized_not_supported, 153 | each.value["instance_type"] 154 | ) ? false : each.value["ebs_optimized"] 155 | 156 | credit_specification { 157 | cpu_credits = each.value["cpu_credits"] 158 | } 159 | 160 | monitoring { 161 | enabled = each.value["enable_monitoring"] 162 | } 163 | 164 | dynamic "placement" { 165 | for_each = each.value["launch_template_placement_group"] != null ? [each.value["launch_template_placement_group"]] : [] 166 | 167 | content { 168 | tenancy = each.value["launch_template_placement_tenancy"] 169 | group_name = placement.value 170 | } 171 | } 172 | 173 | dynamic "instance_market_options" { 174 | for_each = lookup(each.value, "market_type", null) == null ? 
[] : list(lookup(each.value, "market_type", null)) 175 | content { 176 | market_type = instance_market_options.value 177 | } 178 | } 179 | 180 | block_device_mappings { 181 | device_name = each.value["root_block_device_name"] 182 | 183 | ebs { 184 | volume_size = each.value["root_volume_size"] 185 | volume_type = each.value["root_volume_type"] 186 | iops = each.value["root_iops"] 187 | encrypted = each.value["root_encrypted"] 188 | kms_key_id = each.value["root_kms_key_id"] 189 | delete_on_termination = true 190 | } 191 | } 192 | 193 | dynamic "tag_specifications" { 194 | for_each = ["volume", "instance"] 195 | 196 | content { 197 | resource_type = tag_specifications.value 198 | 199 | tags = merge( 200 | { 201 | "Name" = "${var.cluster_name}-${coalesce(each.value["name"], each.key)}-eks_asg", 202 | "kubernetes.io/cluster/${var.cluster_name}" = "owned", 203 | }, 204 | var.tags, 205 | ) 206 | } 207 | } 208 | 209 | tags = var.tags 210 | 211 | lifecycle { 212 | create_before_destroy = true 213 | } 214 | } 215 | 216 | resource "aws_iam_instance_profile" "worker_groups" { 217 | for_each = var.manage_worker_iam_resources ? local.worker_groups_expanded : {} 218 | 219 | name_prefix = "${var.cluster_name}-${coalesce(each.value["name"], each.key)}" 220 | role = each.value["iam_role_id"] 221 | path = var.iam_path 222 | } 223 | 224 | resource "aws_security_group" "worker_groups" { 225 | count = local.worker_create_security_group ? 1 : 0 226 | 227 | name_prefix = var.cluster_name 228 | description = "Security group for all workers in the cluster." 229 | vpc_id = var.vpc_id 230 | tags = merge( 231 | var.tags, 232 | { 233 | "Name" = "${var.cluster_name}-eks_workers_sg" 234 | "kubernetes.io/cluster/${var.cluster_name}" = "owned" 235 | }, 236 | ) 237 | } 238 | 239 | resource "aws_security_group_rule" "workers_egress_internet" { 240 | count = local.worker_create_security_group ? 1 : 0 241 | description = "Allow nodes all egress to the Internet." 
242 | protocol = "-1" 243 | security_group_id = local.worker_security_group_id 244 | cidr_blocks = ["0.0.0.0/0"] 245 | from_port = 0 246 | to_port = 0 247 | type = "egress" 248 | } 249 | 250 | resource "aws_security_group_rule" "workers_ingress_self" { 251 | count = local.worker_create_security_group ? 1 : 0 252 | description = "Allow nodes to communicate with each other." 253 | protocol = "-1" 254 | security_group_id = local.worker_security_group_id 255 | source_security_group_id = local.worker_security_group_id 256 | from_port = 0 257 | to_port = 65535 258 | type = "ingress" 259 | } 260 | 261 | resource "aws_security_group_rule" "workers_ingress_cluster" { 262 | count = local.worker_create_security_group ? 1 : 0 263 | description = "Allow worker pods to receive communication from the cluster control plane." 264 | protocol = "tcp" 265 | security_group_id = local.worker_security_group_id 266 | source_security_group_id = var.cluster_security_group_id 267 | from_port = var.worker_sg_ingress_from_port 268 | to_port = 65535 269 | type = "ingress" 270 | } 271 | 272 | resource "aws_security_group_rule" "workers_ingress_cluster_kubelet" { 273 | count = local.worker_create_security_group ? (var.worker_sg_ingress_from_port > 10250 ? 1 : 0) : 0 274 | description = "Allow worker kubelets to receive communication from the cluster control plane." 275 | protocol = "tcp" 276 | security_group_id = local.worker_security_group_id 277 | source_security_group_id = var.cluster_security_group_id 278 | from_port = 10250 279 | to_port = 10250 280 | type = "ingress" 281 | } 282 | 283 | resource "aws_security_group_rule" "workers_ingress_cluster_https" { 284 | count = local.worker_create_security_group ? 1 : 0 285 | description = "Allow pods running extension API servers on port 443 to receive communication from the cluster control plane." 
286 | protocol = "tcp" 287 | security_group_id = local.worker_security_group_id 288 | source_security_group_id = var.cluster_security_group_id 289 | from_port = 443 290 | to_port = 443 291 | type = "ingress" 292 | } 293 | 294 | resource "aws_security_group_rule" "cluster_https_workers_ingress" { 295 | count = local.worker_create_security_group ? 1 : 0 296 | description = "Allow pods to communicate with the EKS cluster API." 297 | protocol = "tcp" 298 | security_group_id = var.cluster_security_group_id 299 | source_security_group_id = local.worker_security_group_id 300 | from_port = 443 301 | to_port = 443 302 | type = "ingress" 303 | } 304 | 305 | resource "aws_iam_role" "worker_groups" { 306 | count = var.manage_worker_iam_resources && var.create_eks ? 1 : 0 307 | name_prefix = var.workers_role_name != "" ? null : var.cluster_name 308 | name = var.workers_role_name != "" ? var.workers_role_name : null 309 | assume_role_policy = data.aws_iam_policy_document.workers_assume_role_policy.json 310 | permissions_boundary = var.permissions_boundary 311 | path = var.iam_path 312 | force_detach_policies = true 313 | tags = var.tags 314 | } 315 | 316 | resource "aws_iam_role_policy_attachment" "workers_AmazonEKSWorkerNodePolicy" { 317 | count = var.manage_worker_iam_resources && var.create_eks ? 1 : 0 318 | policy_arn = "${local.policy_arn_prefix}/AmazonEKSWorkerNodePolicy" 319 | role = aws_iam_role.worker_groups[0].name 320 | } 321 | 322 | resource "aws_iam_role_policy_attachment" "workers_AmazonEKS_CNI_Policy" { 323 | count = var.manage_worker_iam_resources && var.attach_worker_cni_policy && var.create_eks ? 1 : 0 324 | policy_arn = "${local.policy_arn_prefix}/AmazonEKS_CNI_Policy" 325 | role = aws_iam_role.worker_groups[0].name 326 | } 327 | 328 | resource "aws_iam_role_policy_attachment" "workers_AmazonEC2ContainerRegistryReadOnly" { 329 | count = var.manage_worker_iam_resources && var.create_eks ? 
1 : 0 330 | policy_arn = "${local.policy_arn_prefix}/AmazonEC2ContainerRegistryReadOnly" 331 | role = aws_iam_role.worker_groups[0].name 332 | } 333 | 334 | resource "aws_iam_role_policy_attachment" "workers_additional_policies" { 335 | count = var.manage_worker_iam_resources && var.create_eks ? length(var.workers_additional_policies) : 0 336 | role = aws_iam_role.worker_groups[0].name 337 | policy_arn = var.workers_additional_policies[count.index] 338 | } 339 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # terraform-aws-eks 2 | 3 | [![Conventional Commits](https://img.shields.io/badge/Conventional%20Commits-1.0.0-green.svg)](https://conventionalcommits.org) 4 | 5 | This is a complete rework of the upstream community EKS module: https://github.com/terraform-aws-modules/terraform-aws-eks 6 | 7 | > :warning: **Only `Terraform >= 0.12` will be supported. Based on `v9.0.x` of the upstream module.** 8 | 9 | The interface to the module is ~~the same~~ similar, but it attempts to be more flexible 10 | by allowing users to create and use components separately by splitting out 11 | sub-modules for: 12 | - EKS Control Plane 13 | - EKS Worker Groups 14 | - EKS Managed Node Groups 15 | - `aws-auth` Configuration 16 | 17 | The sub-modules are designed to be used as individual modules so that the user can 18 | perform actions in between creating the control plane and creating the workers and nodes 19 | (e.g. custom CNI configuration). 20 | 21 | By breaking out separate sub-modules we create a clearer separation of concerns and 22 | reduce tight coupling of the control plane and worker nodes, whilst maintaining the same 23 | interface for seamless migration to this module. The root module has become an example 24 | implementation of the sub-modules. 
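As a rough sketch of that workflow (the `source` paths come from this repository's tree, but the input names and values here are illustrative assumptions rather than the definitive interface):
```
# Sketch only: check each sub-module's README for the real inputs.
module "control_plane" {
  source = "./modules/control_plane"

  cluster_name = "example"
  vpc_id       = var.vpc_id
  subnets      = var.subnets
}

# Steps such as custom CNI configuration can run here, after the
# control plane exists but before any workers are created.

module "worker_groups" {
  source = "./modules/worker_groups"

  cluster_name = module.control_plane.cluster_id
  vpc_id       = var.vpc_id
  subnets      = var.subnets

  worker_groups = {
    workers = {
      instance_type = "m5.large"
      asg_max_size  = 3
    }
  }
}
```
Because each sub-module is its own instance, the control plane and each group of workers keep separate state entries, which is what enables the staged workflow described above.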
25 | 26 | ## :rotating_light: Major Changes 27 | There are some core implementation changes from the original `eks` module: 28 | 29 | 1. Launch Configuration support removed in favour of the Launch Template driven 30 | `worker_groups` sub-module. They do the same job, and (to my current knowledge) there is 31 | no benefit to supporting both LCs and LTs. `worker_groups_launch_template` has been dropped 32 | and `worker_groups` now creates LTs. 33 | 34 | 2. Simplified code through merging defaults. This is a pattern I saw in the `node_groups` 35 | sub-module and really liked: merging the defaults local, the defaults variable 36 | and the per-group map values: 37 | ``` 38 | # Merge defaults and per-group values to make code cleaner 39 | worker_groups_expanded = { for k, v in var.worker_groups : k => merge( 40 | local.worker_groups_defaults, 41 | var.worker_groups_defaults, 42 | v, 43 | ) if var.create_eks } 44 | ``` 45 | 46 | It means that code moves from this: 47 | ``` 48 | enabled_metrics = lookup( 49 | var.worker_groups[count.index], 50 | "enabled_metrics", 51 | local.workers_group_defaults["enabled_metrics"] 52 | ) 53 | ``` 54 | 55 | To this: 56 | ``` 57 | enabled_metrics = each.value["enabled_metrics"] 58 | ``` 59 | 3. Enabling a map of maps for `worker_groups`. By passing in a map of maps we can 60 | add and remove `worker_groups` without affecting the existing resources. 61 | A list of maps still works, but retains the usual issues when removing objects from the list. 62 | 63 | With the sub-module approach, there's nothing stopping a user from using a module 64 | instance per worker group, further isolating the data structures in the state file. 65 | 66 | 67 | ## Providers 68 | 69 | No providers. 70 | 71 | ## Inputs 72 | 73 | | Name | Description | Type | Default | Required | 74 | |------|-------------|------|---------|:-----:| 75 | | attach\_node\_cni\_policy | Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default worker IAM role. 
WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or nodes will not be able to join the cluster. | `bool` | `true` | no | 76 | | attach\_worker\_cni\_policy | Whether to attach the Amazon managed `AmazonEKS_CNI_Policy` IAM policy to the default worker IAM role. WARNING: If set `false` the permissions must be assigned to the `aws-node` DaemonSet pods via another method or nodes will not be able to join the cluster. | `bool` | `true` | no | 77 | | cluster\_create\_security\_group | Whether to create a security group for the cluster or attach the cluster to `cluster_security_group_id`. | `bool` | `true` | no | 78 | | cluster\_create\_timeout | Timeout value when creating the EKS cluster. | `string` | `"30m"` | no | 79 | | cluster\_delete\_timeout | Timeout value when deleting the EKS cluster. | `string` | `"15m"` | no | 80 | | cluster\_enabled\_log\_types | A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | `list(string)` | `[]` | no | 81 | | cluster\_encryption\_key\_arn | KMS Key ARN to encrypt EKS secrets with. | `string` | `""` | no | 82 | | cluster\_encryption\_resources | A list of the EKS resources to encrypt. | `list(string)` |
[
"secrets"
]
| no | 83 | | cluster\_endpoint\_private\_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled. | `bool` | `false` | no | 84 | | cluster\_endpoint\_public\_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled. | `bool` | `true` | no | 85 | | cluster\_endpoint\_public\_access\_cidrs | List of CIDR blocks which can access the Amazon EKS public API server endpoint. | `list(string)` |
[
"0.0.0.0/0"
]
| no | 86 | | cluster\_iam\_role\_name | IAM role name for the cluster. Only applicable if manage\_cluster\_iam\_resources is set to false. | `string` | `""` | no | 87 | | cluster\_log\_kms\_key\_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | `string` | `""` | no | 88 | | cluster\_log\_retention\_in\_days | Number of days to retain log events. Default retention - 90 days. | `number` | `90` | no | 89 | | cluster\_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | `string` | n/a | yes | 90 | | cluster\_security\_group\_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers | `string` | `""` | no | 91 | | cluster\_version | Kubernetes version to use for the EKS cluster. | `string` | `"1.15"` | no | 92 | | config\_output\_path | Where to save the Kubectl config file (if `write_kubeconfig = true`). Assumed to be a directory if the value ends with a forward slash `/`. | `string` | `"./"` | no | 93 | | create\_eks | Controls if EKS resources should be created (it affects almost all resources) | `bool` | `true` | no | 94 | | eks\_oidc\_root\_ca\_thumbprint | Thumbprint of Root CA for EKS OIDC, Valid until 2037 | `string` | `"9e99a48a9960b14926bb7f3b02e22da2b0ab7280"` | no | 95 | | enable\_irsa | Whether to create OpenID Connect Provider for EKS to enable IRSA | `bool` | `false` | no | 96 | | iam\_path | If provided, all IAM roles will be created on this path. | `string` | `"/"` | no | 97 | | kubeconfig\_aws\_authenticator\_additional\_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"]. 
| `list(string)` | `[]` | no | 98 | | kubeconfig\_aws\_authenticator\_command | Command to use to fetch AWS EKS credentials. | `string` | `"aws-iam-authenticator"` | no | 99 | | kubeconfig\_aws\_authenticator\_command\_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster\_name]. | `list(string)` | `[]` | no | 100 | | kubeconfig\_aws\_authenticator\_env\_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS\_PROFILE = "eks"}. | `map(string)` | `{}` | no | 101 | | kubeconfig\_name | Override the default name used for items kubeconfig. | `string` | `""` | no | 102 | | manage\_aws\_auth | Whether to apply the aws-auth configmap file. | `bool` | `true` | no | 103 | | manage\_cluster\_iam\_resources | Whether to let the module manage cluster IAM resources. If set to false, cluster\_iam\_role\_name must be specified. | `bool` | `true` | no | 104 | | manage\_node\_iam\_resources | Whether to let the module manage worker IAM resources. If set to false, iam\_instance\_profile\_name must be specified for workers. | `bool` | `true` | no | 105 | | manage\_worker\_iam\_resources | Whether to let the module manage worker IAM resources. If set to false, iam\_instance\_profile\_name must be specified for workers. | `bool` | `true` | no | 106 | | map\_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | `list(string)` | `[]` | no | 107 | | map\_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. |
list(object({
rolearn = string
username = string
groups = list(string)
}))
| `[]` | no | 108 | | map\_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. |
list(object({
userarn = string
username = string
groups = list(string)
}))
| `[]` | no | 109 | node\_groups | Map of maps of node groups to create. See `node_groups` module's documentation for more details | `any` | `{}` | no | 110 | node\_groups\_additional\_policies | Additional policies to be added to workers | `list(string)` | `[]` | no | 111 | node\_groups\_defaults | Map of values to be applied to all node groups. See `node_groups` module's documentation for more details | `any` | `{}` | no | 112 | node\_groups\_role\_name | User defined workers role name. | `string` | `""` | no | 113 | permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | `string` | n/a | yes | 114 | subnets | A list of subnets to place the EKS cluster and workers within. | `list(string)` | n/a | yes | 115 | tags | A map of tags to add to all resources. | `map(string)` | `{}` | no | 116 | vpc\_id | VPC where the cluster and workers will be deployed. | `string` | n/a | yes | 117 | wait\_for\_cluster\_cmd | Custom local-exec command to execute for determining if the EKS cluster is healthy. Cluster endpoint will be available as an environment variable called ENDPOINT | `string` | `"until wget --no-check-certificate -O - -q $ENDPOINT/healthz >/dev/null; do sleep 4; done"` | no | 118 | worker\_additional\_security\_group\_ids | A list of additional security group IDs to attach to worker instances | `list(string)` | `[]` | no | 119 | worker\_ami\_name\_filter | Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster\_version' is used. | `string` | `""` | no | 120 | worker\_ami\_name\_filter\_windows | Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster\_version' is used. | `string` | `""` | no | 121 | worker\_ami\_owner\_id | The ID of the owner for the AMI to use for the AWS EKS workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 
'amazon', 'aws-marketplace', 'microsoft'). | `string` | `"602401143452"` | no | 122 | | worker\_ami\_owner\_id\_windows | The ID of the owner for the AMI to use for the AWS EKS Windows workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft'). | `string` | `"801119661308"` | no | 123 | | worker\_create\_initial\_lifecycle\_hooks | Whether to create initial lifecycle hooks provided in worker groups. | `bool` | `false` | no | 124 | | worker\_create\_security\_group | Whether to create a security group for the workers or attach the workers to `worker_security_group_id`. | `bool` | `true` | no | 125 | | worker\_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers\_group\_defaults for valid keys. | `any` | `[]` | no | 126 | | worker\_groups\_additional\_policies | Additional policies to be added to workers | `list(string)` | `[]` | no | 127 | | worker\_groups\_defaults | Override default values for target groups. See worker\_group\_defaults in local.tf for valid keys. | `any` | `{}` | no | 128 | | worker\_groups\_launch\_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers\_group\_defaults for valid keys. | `any` | `[]` | no | 129 | | worker\_groups\_role\_name | User defined workers role name. | `string` | `""` | no | 130 | | worker\_security\_group\_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster. | `string` | `""` | no | 131 | | worker\_sg\_ingress\_from\_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). 
| `number` | `1025` | no | 132 | | write\_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to `config_output_path`. | `bool` | `true` | no | 133 | 134 | ## Outputs 135 | 136 | | Name | Description | 137 | |------|-------------| 138 | | cloudwatch\_log\_group\_name | Name of cloudwatch log group created | 139 | | cluster\_arn | The Amazon Resource Name (ARN) of the cluster. | 140 | | cluster\_certificate\_authority\_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. | 141 | | cluster\_endpoint | The endpoint for your EKS Kubernetes API. | 142 | | cluster\_iam\_role\_arn | IAM role ARN of the EKS cluster. | 143 | | cluster\_id | The name/id of the EKS cluster. | 144 | | cluster\_oidc\_issuer\_url | The URL on the EKS cluster OIDC Issuer | 145 | | cluster\_security\_group\_id | Security group ID attached to the EKS cluster. | 146 | | cluster\_version | The Kubernetes server version for the EKS cluster. | 147 | | config\_map\_aws\_auth | A kubernetes configuration to authenticate to this EKS cluster. | 148 | | kubeconfig | kubectl config file contents for this EKS cluster. | 149 | | kubeconfig\_filename | The filename of the generated kubectl config. | 150 | | node\_groups | Outputs from EKS node groups. Map of maps, keyed by var.node\_groups keys | 151 | | oidc\_provider\_arn | The ARN of the OIDC Provider if `enable_irsa = true`. | 152 | | worker\_iam\_instance\_profile\_arns | default IAM instance profile ARN for EKS worker groups | 153 | | worker\_iam\_instance\_profile\_names | default IAM instance profile name for EKS worker groups | 154 | | worker\_iam\_role\_arn | default IAM role ARN for EKS worker groups | 155 | | worker\_iam\_role\_name | default IAM role name for EKS worker groups | 156 | | worker\_security\_group\_id | Security group ID attached to the EKS workers. 
| 157 | workers\_asg\_arns | ARNs of the autoscaling groups containing workers. | 158 | workers\_asg\_names | Names of the autoscaling groups containing workers. | 159 | workers\_default\_ami\_id | ID of the default worker group AMI | 160 | workers\_launch\_template\_arns | ARNs of the worker launch templates. | 161 | workers\_launch\_template\_ids | IDs of the worker launch templates. | 162 | workers\_launch\_template\_latest\_versions | Latest versions of the worker launch templates. | 163 | workers\_user\_data | User data of worker groups | 164 | 165 | 166 | --------------------------------------------------------------------------------