├── examples ├── complete │ ├── outputs.tf │ ├── variables.tf │ ├── versions.tf │ └── main.tf ├── simple │ ├── outputs.tf │ ├── variables.tf │ ├── README.md │ ├── versions.tf │ └── main.tf ├── 1master-1nodegroup │ ├── outputs.tf │ ├── variables.tf │ ├── README.md │ ├── versions.tf │ └── main.tf ├── 1master-2nodegroup │ ├── variables.tf │ ├── README.md │ ├── outputs.tf │ ├── versions.tf │ └── main.tf └── 1master-1nodegroup-enable-logging │ ├── outputs.tf │ ├── variables.tf │ ├── README.md │ ├── versions.tf │ └── main.tf ├── data.tf ├── CHANGELOG.md ├── versions.tf ├── .github ├── dependabot.yml ├── workflows │ └── pipeline.yml └── CODE_OF_CONDUCT.md ├── .pre-commit-config.yaml ├── .editorconfig ├── locals.tf ├── .gitignore ├── outputs.tf ├── main.tf ├── node_groups.tf ├── variables.tf ├── LICENSE └── README.md /examples/complete/outputs.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /examples/simple/outputs.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /examples/simple/variables.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /examples/complete/variables.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup/outputs.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup/variables.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /examples/1master-2nodegroup/variables.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /data.tf: -------------------------------------------------------------------------------- 1 | data "yandex_client_config" "client" {} 2 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup-enable-logging/outputs.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup-enable-logging/variables.tf: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /examples/simple/README.md: -------------------------------------------------------------------------------- 1 | ```shell 2 | export YC_FOLDER_ID='xxxx' 3 | terraform init 4 | terraform plan 5 | terraform apply 6 | ``` 7 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup/README.md: -------------------------------------------------------------------------------- 1 | ```shell 2 | export YC_FOLDER_ID='xxx' 3 | terraform init 4 | terraform plan 5 | terraform apply 6 | ``` 7 | -------------------------------------------------------------------------------- 
/examples/1master-2nodegroup/README.md: -------------------------------------------------------------------------------- 1 | ```shell 2 | export YC_FOLDER_ID='xxx' 3 | terraform init 4 | terraform plan 5 | terraform apply 6 | ``` 7 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup-enable-logging/README.md: -------------------------------------------------------------------------------- 1 | ```shell 2 | export YC_FOLDER_ID='xxxx' 3 | terraform init 4 | terraform plan 5 | terraform apply 6 | ``` 7 | -------------------------------------------------------------------------------- /examples/1master-2nodegroup/outputs.tf: -------------------------------------------------------------------------------- 1 | output "get_credentials_command" { 2 | description = "Command to get kubeconfig for the cluster" 3 | value = module.kube.get_credentials_command 4 | } 5 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | ## v1.46.0 2 | ## v1.45.0 3 | ## v1.44.0 4 | ## v1.43.0 5 | ## v1.42.0 6 | ## v1.41.0 7 | ## v1.40.0 8 | ## v1.39.0 9 | ## v1.38.0 10 | ## v1.37.0 11 | ## v1.36.0 12 | ## v1.35.0 13 | ## v1.34.0 14 | ## v1.33.0 15 | ## v1.32.0 16 | ## v1.31.0 17 | ## v1.30.0 18 | ## v1.29.0 19 | -------------------------------------------------------------------------------- /versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | yandex = { 4 | source = "yandex-cloud/yandex" 5 | version = ">= 0.72.0" 6 | } 7 | tls = { 8 | source = "hashicorp/tls" 9 | version = ">= 3.1.0" 10 | } 11 | } 12 | required_version = ">= 1.3" 13 | } 14 | -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | updates: 3 | - package-ecosystem: "github-actions" 4 | directory: "/" 5 | schedule: 6 | interval: "weekly" 7 | - package-ecosystem: "terraform" 8 | directory: "/" 9 | schedule: 10 | interval: "weekly" 11 | open-pull-requests-limit: 3 12 | -------------------------------------------------------------------------------- /examples/simple/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | yandex = { 4 | source = "yandex-cloud/yandex" 5 | version = ">= 0.72.0" 6 | } 7 | tls = { 8 | source = "hashicorp/tls" 9 | version = ">= 3.1.0" 10 | } 11 | } 12 | required_version = ">= 1.3" 13 | } 14 | -------------------------------------------------------------------------------- /examples/complete/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | yandex = { 4 | source = "yandex-cloud/yandex" 5 | version = ">= 0.72.0" 6 | } 7 | tls = { 8 | source = "hashicorp/tls" 9 | version = ">= 3.1.0" 10 | } 11 | } 12 | required_version = ">= 1.3" 13 | } 14 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | yandex = { 4 | source = "yandex-cloud/yandex" 5 | version = ">= 0.72.0" 6 | } 7 | tls = { 8 | source = "hashicorp/tls" 9 | version = ">= 3.1.0" 10 | } 11 | } 12 | 
required_version = ">= 1.3" 13 | } 14 | -------------------------------------------------------------------------------- /examples/1master-2nodegroup/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | yandex = { 4 | source = "yandex-cloud/yandex" 5 | version = ">= 0.72.0" 6 | } 7 | tls = { 8 | source = "hashicorp/tls" 9 | version = ">= 3.1.0" 10 | } 11 | } 12 | required_version = ">= 1.3" 13 | } 14 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup-enable-logging/versions.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | yandex = { 4 | source = "yandex-cloud/yandex" 5 | version = ">= 0.72.0" 6 | } 7 | tls = { 8 | source = "hashicorp/tls" 9 | version = ">= 3.1.0" 10 | } 11 | } 12 | required_version = ">= 1.3" 13 | } 14 | -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | repos: 2 | - repo: https://github.com/antonbabenko/pre-commit-terraform 3 | rev: v1.101.1 4 | hooks: 5 | - id: terraform_fmt 6 | args: 7 | - --args=-recursive 8 | - id: terraform_validate 9 | - id: terraform_docs 10 | args: 11 | - --args=--lockfile=false 12 | - id: terraform_tflint 13 | 14 | - repo: https://github.com/pre-commit/pre-commit-hooks 15 | rev: v6.0.0 16 | hooks: 17 | - id: check-merge-conflict 18 | - id: end-of-file-fixer 19 | - id: trailing-whitespace 20 | -------------------------------------------------------------------------------- /.editorconfig: -------------------------------------------------------------------------------- 1 | # EditorConfig is awesome: http://EditorConfig.org 2 | # Uses editorconfig to maintain consistent coding styles 3 | 4 | # top-most EditorConfig file 5 | root = true 6 | 7 | # Unix-style newlines with a newline ending every file 8 | [*] 9 | charset = utf-8 10 | end_of_line = lf 11 | indent_size = 2 12 | indent_style = space 13 | insert_final_newline = true 14 | max_line_length = 80 15 | trim_trailing_whitespace = true 16 | 17 | [*.{tf,tfvars}] 18 | indent_size = 2 19 | indent_style = space 20 | 21 | [*.md] 22 | max_line_length = 0 23 | trim_trailing_whitespace = false 24 | 25 | [Makefile] 26 | tab_width = 2 27 | indent_style = tab 28 | 29 | [COMMIT_EDITMSG] 30 | max_line_length = 0 31 | -------------------------------------------------------------------------------- /locals.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | master_regions = length(var.master_locations) > 1 ? [ 3 | { 4 | region = var.master_region 5 | locations = var.master_locations 6 | } 7 | ] : [] 8 | 9 | master_locations = length(var.master_locations) > 1 ? [] : var.master_locations 10 | 11 | generated_ssh_key = var.generate_default_ssh_key ? [ 12 | "${var.nodes_default_ssh_user}:${tls_private_key.default_ssh_key[0].public_key_openssh}" 13 | ] : [] 14 | 15 | node_groups_ssh_keys_metadata = length(var.node_groups_ssh_keys) > 0 ? { 16 | ssh-keys = join("\n", concat(flatten([ 17 | for username, ssh_keys in var.node_groups_ssh_keys : [ 18 | for ssh_key in ssh_keys 19 | : "${username}:${ssh_key}" 20 | ] 21 | ], [local.generated_ssh_key]) 22 | )) 23 | } : {} 24 | 25 | node_groups_locations = var.node_groups_locations != null ? 
var.node_groups_locations : var.master_locations 26 | } 27 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Local .terraform directories 2 | **/.terraform/* 3 | 4 | # .tfstate files 5 | *.tfstate 6 | *.tfstate.* 7 | 8 | # terraform lockfile 9 | .terraform.lock.hcl 10 | 11 | # Crash log files 12 | crash.log 13 | 14 | # Exclude all .tfvars files, which are likely to contain sentitive data, such as 15 | # password, private keys, and other secrets. These should not be part of version 16 | # control as they are data points which are potentially sensitive and subject 17 | # to change depending on the environment. 18 | # 19 | *.tfvars 20 | 21 | # Ignore override files as they are usually used to override resources locally and so 22 | # are not checked in 23 | override.tf 24 | override.tf.json 25 | *_override.tf 26 | *_override.tf.json 27 | 28 | # Include override files you do wish to add to version control using negated pattern 29 | # 30 | # !example_override.tf 31 | 32 | # Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan 33 | # example: *tfplan* 34 | 35 | # Ignore CLI configuration files 36 | .terraformrc 37 | terraform.rc 38 | 39 | # Ignore IDE files 40 | .idea/* 41 | -------------------------------------------------------------------------------- /examples/simple/main.tf: -------------------------------------------------------------------------------- 1 | data "yandex_client_config" "client" {} 2 | 3 | module "iam_accounts" { 4 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-iam.git//modules/iam-account?ref=v1.0.0" 5 | 6 | name = "test-iam" 7 | folder_roles = [ 8 | "container-registry.images.puller", 9 | "k8s.clusters.agent", 10 | "k8s.tunnelClusters.agent", 11 | "load-balancer.admin", 12 | "logging.writer", 13 | "vpc.privateAdmin", 14 | "vpc.publicAdmin", 15 | "vpc.user", 16 | ] 17 | cloud_roles = [] 18 | enable_static_access_key = false 19 | enable_api_key = false 20 | enable_account_key = false 21 | 22 | } 23 | 24 | module "network" { 25 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-vpc.git?ref=v1.0.0" 26 | 27 | folder_id = data.yandex_client_config.client.folder_id 28 | 29 | blank_name = "redis-vpc-nat-gateway" 30 | labels = { 31 | repo = "terraform-yacloud-modules/terraform-yandex-vpc" 32 | } 33 | 34 | azs = ["ru-central1-a", "ru-central1-b", "ru-central1-d"] 35 | 36 | private_subnets = [["10.10.0.0/24"], ["10.11.0.0/24"], ["10.12.0.0/24"]] 37 | 38 | create_vpc = true 39 | create_nat_gateway = true 40 | } 41 | 42 | module "kube" { 43 | source = "../../" 44 | 45 | network_id = module.network.vpc_id 46 | 47 | name = "test-kubernetes" 48 | 49 | service_account_id = module.iam_accounts.id 50 | node_service_account_id = module.iam_accounts.id 51 | 52 | master_locations = [ 53 | { 54 | zone = "ru-central1-a" 55 | subnet_id = module.network.private_subnets_ids[0] 56 | } 57 | ] 58 | 59 | depends_on = [ 60 | module.iam_accounts 61 | ] 62 | } 63 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup-enable-logging/main.tf: -------------------------------------------------------------------------------- 1 | data "yandex_client_config" "client" {} 2 | 3 | module "network" { 4 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-vpc.git?ref=v1.0.0" 5 | 6 | folder_id = 
data.yandex_client_config.client.folder_id 7 | 8 | blank_name = "vpc-nat-gateway" 9 | labels = { 10 | repo = "terraform-yacloud-modules/terraform-yandex-vpc" 11 | } 12 | 13 | azs = ["ru-central1-a"] 14 | 15 | private_subnets = [["10.4.0.0/24"]] 16 | 17 | create_vpc = true 18 | create_nat_gateway = true 19 | } 20 | 21 | module "iam_accounts" { 22 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-iam.git//modules/iam-account?ref=v1.0.0" 23 | 24 | name = "iam" 25 | folder_roles = [ 26 | "container-registry.images.puller", 27 | "k8s.clusters.agent", 28 | "k8s.tunnelClusters.agent", 29 | "load-balancer.admin", 30 | "logging.writer", 31 | "vpc.privateAdmin", 32 | "vpc.publicAdmin", 33 | "vpc.user", 34 | ] 35 | cloud_roles = [] 36 | enable_static_access_key = false 37 | enable_api_key = false 38 | enable_account_key = false 39 | 40 | } 41 | 42 | module "kube" { 43 | source = "../../" 44 | 45 | network_id = module.network.vpc_id 46 | 47 | name = "k8s-test2" 48 | 49 | service_account_id = module.iam_accounts.id 50 | node_service_account_id = module.iam_accounts.id 51 | 52 | master_locations = [ 53 | { 54 | zone = "ru-central1-a" 55 | subnet_id = module.network.private_subnets_ids[0] 56 | } 57 | ] 58 | 59 | master_logging = { 60 | enabled = true 61 | } 62 | 63 | node_groups = { 64 | "default" = { 65 | nat = true 66 | cores = 2 67 | memory = 8 68 | subnet_ids = [module.network.private_subnets_ids[0]] 69 | fixed_scale = { 70 | size = 3 71 | } 72 | } 73 | } 74 | 75 | depends_on = [module.iam_accounts] 76 | 77 | } 78 | -------------------------------------------------------------------------------- /outputs.tf: -------------------------------------------------------------------------------- 1 | output "external_v4_endpoint" { 2 | description = "An IPv4 external network address that is assigned to the master" 3 | value = yandex_kubernetes_cluster.main.master[0].external_v4_endpoint 4 | } 5 | 6 | output "internal_v4_endpoint" { 7 | description = "An IPv4 internal network address that is assigned to the master" 8 | value = yandex_kubernetes_cluster.main.master[0].internal_v4_endpoint 9 | } 10 | 11 | output "cluster_ca_certificate" { 12 | description = "PEM-encoded public certificate that is the root of trust for the K8S cluster" 13 | value = yandex_kubernetes_cluster.main.master[0].cluster_ca_certificate 14 | } 15 | 16 | output "cluster_id" { 17 | description = "ID of a new K8S cluster" 18 | value = yandex_kubernetes_cluster.main.id 19 | } 20 | 21 | output "node_groups" { 22 | description = "Attributes of yandex_node_group resources created in cluster" 23 | value = yandex_kubernetes_node_group.node_groups 24 | } 25 | 26 | output "default_ssh_key_pub" { 27 | description = "Default node groups that is attached to all node groups" 28 | value = var.generate_default_ssh_key ? tls_private_key.default_ssh_key[0].public_key_openssh : null 29 | } 30 | 31 | output "default_ssh_key_prv" { 32 | description = "Default node groups that is attached to all node groups" 33 | value = var.generate_default_ssh_key ? tls_private_key.default_ssh_key[0].private_key_openssh : null 34 | } 35 | 36 | output "get_credentials_command" { 37 | description = "Command to get kubeconfig for the cluster" 38 | value = "yc managed-kubernetes cluster get-credentials --id ${yandex_kubernetes_cluster.main.id} --external" 39 | } 40 | 41 | output "log_group_id" { 42 | description = "ID of the Yandex Cloud Logging group" 43 | value = var.master_logging["create_log_group"] ? 
yandex_logging_group.main[0].id : null 44 | } 45 | 46 | output "log_group_name" { 47 | description = "Name of the Yandex Cloud Logging group" 48 | value = var.master_logging["create_log_group"] ? yandex_logging_group.main[0].name : null 49 | } 50 | -------------------------------------------------------------------------------- /examples/1master-2nodegroup/main.tf: -------------------------------------------------------------------------------- 1 | data "yandex_client_config" "client" {} 2 | 3 | module "iam_accounts" { 4 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-iam.git//modules/iam-account?ref=v1.0.0" 5 | 6 | name = "iam" 7 | folder_roles = [ 8 | "container-registry.images.puller", 9 | "k8s.clusters.agent", 10 | "k8s.tunnelClusters.agent", 11 | "load-balancer.admin", 12 | "logging.writer", 13 | "vpc.privateAdmin", 14 | "vpc.publicAdmin", 15 | "vpc.user", 16 | ] 17 | cloud_roles = [] 18 | enable_static_access_key = false 19 | enable_api_key = false 20 | enable_account_key = false 21 | 22 | } 23 | 24 | module "network" { 25 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-vpc.git?ref=v1.0.0" 26 | 27 | folder_id = data.yandex_client_config.client.folder_id 28 | 29 | blank_name = "vpc-nat-gateway" 30 | labels = { 31 | repo = "terraform-yacloud-modules/terraform-yandex-vpc" 32 | } 33 | 34 | azs = ["ru-central1-a"] 35 | 36 | private_subnets = [["10.4.0.0/24"]] 37 | 38 | create_vpc = true 39 | create_nat_gateway = true 40 | } 41 | 42 | module "kube" { 43 | source = "../../" 44 | 45 | network_id = module.network.vpc_id 46 | 47 | name = "k8s-test3" 48 | enable_oslogin = true 49 | 50 | service_account_id = module.iam_accounts.id 51 | node_service_account_id = module.iam_accounts.id 52 | 53 | master_locations = [ 54 | { 55 | zone = "ru-central1-a" 56 | subnet_id = module.network.private_subnets_ids[0] 57 | } 58 | ] 59 | 60 | node_groups = { 61 | "fixed-scale" = { 62 | subnet_ids = [module.network.private_subnets_ids[0]] 63 | nat = true 64 | cores = 2 65 | memory = 4 66 | fixed_scale = { 67 | size = 1 68 | } 69 | } 70 | 71 | "auto-scale" = { 72 | subnet_ids = [module.network.private_subnets_ids[0]] 73 | nat = true 74 | cores = 2 75 | memory = 8 76 | auto_scale = { 77 | min = 1 78 | max = 5 79 | initial = 1 80 | } 81 | } 82 | } 83 | 84 | depends_on = [module.iam_accounts] 85 | 86 | } 87 | -------------------------------------------------------------------------------- /.github/workflows/pipeline.yml: -------------------------------------------------------------------------------- 1 | name: "Terraform" 2 | on: 3 | push: 4 | branches: 5 | - main 6 | pull_request: 7 | 8 | env: 9 | tf_version: 1.3.9 10 | tflint_version: v0.45.0 11 | 12 | concurrency: 13 | group: ci-pipeline-${{ github.workflow }}-${{ github.event.pull_request.number || github.event.pull_request.head.ref || github.ref }} 14 | 15 | jobs: 16 | linters: 17 | name: "Terraform Linters" 18 | runs-on: ubuntu-24.04 19 | defaults: 20 | run: 21 | shell: bash 22 | steps: 23 | - name: Check out code 24 | uses: actions/checkout@v6 25 | with: 26 | fetch-depth: 0 27 | - name: Setup Terraform 28 | uses: hashicorp/setup-terraform@v3 29 | with: 30 | terraform_version: ${{ env.tf_version }} 31 | terraform_wrapper: false 32 | - name: Setup TFLint 33 | uses: terraform-linters/setup-tflint@v6 34 | with: 35 | tflint_version: ${{ env.tflint_version }} 36 | - name: Setup TFLint cache plugin dir 37 | uses: actions/cache@v5.0.1 38 | with: 39 | path: ~/.tflint.d/plugins 40 | key: tflint-${{ 
hashFiles('.tflint.hcl') }} 41 | - name: Test code with terraform fmt 42 | run: terraform fmt --recursive -check=true --diff 43 | continue-on-error: true 44 | - name: Test code with TFLint 45 | continue-on-error: true 46 | run: | 47 | tflint --init 48 | tflint -f compact 49 | - name: Test code with TFSec 50 | continue-on-error: true 51 | uses: aquasecurity/tfsec-action@v1.0.3 52 | with: 53 | soft_fail: true 54 | - name: Test code with Checkov 55 | uses: bridgecrewio/checkov-action@v12 56 | with: 57 | directory: / 58 | framework: terraform 59 | soft_fail: true 60 | quiet: true 61 | download_external_modules: false 62 | semver: 63 | name: "Set code version tag" 64 | runs-on: ubuntu-24.04 65 | permissions: 66 | contents: write 67 | needs: 68 | - linters 69 | defaults: 70 | run: 71 | shell: bash 72 | steps: 73 | - name: Check out code 74 | uses: actions/checkout@v6 75 | if: github.event_name == 'pull_request' 76 | with: 77 | fetch-depth: 0 78 | ref: ${{ github.event.pull_request.head.ref }} 79 | - name: Check out code 80 | uses: actions/checkout@v6 81 | if: github.event_name == 'push' 82 | with: 83 | fetch-depth: 0 84 | - name: Set application version 85 | id: set_version 86 | uses: kvendingoldo/git-flow-action@v2.2.0 87 | with: 88 | enable_github_release: true 89 | auto_release_branches: main 90 | tag_prefix_release: "v" 91 | github_token: "${{ secrets.GITHUB_TOKEN }}" 92 | -------------------------------------------------------------------------------- /examples/1master-1nodegroup/main.tf: -------------------------------------------------------------------------------- 1 | data "yandex_client_config" "client" {} 2 | 3 | module "network" { 4 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-vpc.git?ref=v1.0.0" 5 | 6 | folder_id = data.yandex_client_config.client.folder_id 7 | 8 | blank_name = "vpc-nat-gateway" 9 | labels = { 10 | repo = "terraform-yacloud-modules/terraform-yandex-vpc" 11 | } 12 | 13 | azs = ["ru-central1-a"] 14 | 15 | private_subnets = [["10.4.0.0/24"]] 16 | 17 | create_vpc = true 18 | create_nat_gateway = true 19 | } 20 | 21 | module "iam_accounts" { 22 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-iam.git//modules/iam-account?ref=v1.0.0" 23 | 24 | name = "iam" 25 | folder_roles = [ 26 | "container-registry.images.puller", 27 | "k8s.clusters.agent", 28 | "k8s.tunnelClusters.agent", 29 | "load-balancer.admin", 30 | "logging.writer", 31 | "vpc.privateAdmin", 32 | "vpc.publicAdmin", 33 | "vpc.user", 34 | ] 35 | cloud_roles = [] 36 | enable_static_access_key = false 37 | enable_api_key = false 38 | enable_account_key = false 39 | 40 | } 41 | 42 | module "kube" { 43 | source = "../../" 44 | 45 | network_id = module.network.vpc_id 46 | 47 | name = "k8s-test" 48 | description = "Test Kubernetes cluster" 49 | labels = { 50 | environment = "test" 51 | project = "terraform-yacloud-modules" 52 | } 53 | 54 | cluster_ipv4_range = "10.112.0.0/16" 55 | service_ipv4_range = "10.113.0.0/16" 56 | node_ipv4_cidr_mask_size = 24 57 | 58 | service_account_id = module.iam_accounts.id 59 | node_service_account_id = module.iam_accounts.id 60 | 61 | release_channel = "STABLE" 62 | master_version = "1.30" 63 | 64 | master_public_ip = true 65 | master_auto_upgrade = false 66 | 67 | cni_type = "calico" 68 | 69 | workload_identity_federation = { 70 | enabled = false 71 | } 72 | 73 | master_locations = [ 74 | { 75 | zone = "ru-central1-a" 76 | subnet_id = module.network.private_subnets_ids[0] 77 | } 78 | ] 79 | 80 | master_maintenance_windows = 
[ 81 | { 82 | start_time = "23:00" 83 | duration = "3h" 84 | } 85 | ] 86 | 87 | master_logging = { 88 | enabled = false 89 | create_log_group = true 90 | log_group_retention_period = "168h" 91 | audit_enabled = true 92 | kube_apiserver_enabled = true 93 | cluster_autoscaler_enabled = true 94 | events_enabled = true 95 | } 96 | 97 | node_groups = { 98 | "default" = { 99 | description = "Default node group" 100 | subnet_ids = [module.network.private_subnets_ids[0]] 101 | nat = true 102 | cores = 2 103 | memory = 8 104 | core_fraction = 100 105 | boot_disk_type = "network-hdd" 106 | boot_disk_size = 100 107 | preemptible = false 108 | fixed_scale = { 109 | size = 3 110 | } 111 | auto_repair = true 112 | auto_upgrade = true 113 | node_labels = { 114 | node-type = "default" 115 | } 116 | } 117 | } 118 | 119 | generate_default_ssh_key = true 120 | nodes_default_ssh_user = "ubuntu" 121 | 122 | depends_on = [module.iam_accounts] 123 | 124 | } 125 | -------------------------------------------------------------------------------- /.github/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, sex characteristics, gender identity and expression, 9 | level of experience, education, socio-economic status, nationality, personal 10 | appearance, race, religion, or sexual identity and orientation. 11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | - Using welcoming and inclusive language 18 | - Being respectful of differing viewpoints and experiences 19 | - Gracefully accepting constructive criticism 20 | - Focusing on what is best for the community 21 | - Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | - The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | - Trolling, insulting/derogatory comments, and personal or political attacks 28 | - Public or private harassment 29 | - Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | - Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies both within project spaces and in public spaces 49 | when an individual is representing the project or its community. 
Examples of 50 | representing a project or community include using an official project e-mail 51 | address, posting via an official social media account, or acting as an appointed 52 | representative at an online or offline event. Representation of a project may be 53 | further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the project team at kvendingoldo@gmail.com. All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 72 | 73 | [homepage]: https://www.contributor-covenant.org 74 | 75 | For answers to common questions about this code of conduct, see 76 | https://www.contributor-covenant.org/faq 77 | -------------------------------------------------------------------------------- /main.tf: -------------------------------------------------------------------------------- 1 | resource "yandex_logging_group" "main" { 2 | folder_id = data.yandex_client_config.client.folder_id 3 | count = var.master_logging["create_log_group"] ? 1 : 0 4 | 5 | name = var.name 6 | labels = var.labels 7 | 8 | retention_period = var.master_logging["log_group_retention_period"] 9 | } 10 | 11 | resource "yandex_kubernetes_cluster" "main" { 12 | folder_id = data.yandex_client_config.client.folder_id 13 | 14 | name = var.name 15 | description = var.description 16 | labels = var.labels 17 | 18 | network_id = var.network_id 19 | cluster_ipv4_range = var.cluster_ipv4_range 20 | cluster_ipv6_range = var.cluster_ipv6_range 21 | node_ipv4_cidr_mask_size = var.node_ipv4_cidr_mask_size 22 | service_ipv4_range = var.service_ipv4_range 23 | service_ipv6_range = var.service_ipv6_range 24 | 25 | dynamic "network_implementation" { 26 | for_each = var.cni_type == "cilium" ? [1] : [] 27 | content { 28 | cilium {} 29 | } 30 | } 31 | 32 | service_account_id = var.service_account_id 33 | node_service_account_id = var.node_service_account_id 34 | 35 | release_channel = var.release_channel 36 | network_policy_provider = var.cni_type == "calico" ? "CALICO" : null 37 | 38 | dynamic "kms_provider" { 39 | for_each = var.kms_provider_key_id != null ? [var.kms_provider_key_id] : [] 40 | content { 41 | key_id = kms_provider.value 42 | } 43 | } 44 | 45 | dynamic "workload_identity_federation" { 46 | for_each = var.workload_identity_federation["enabled"] ? 
[1] : [] 47 | content { 48 | enabled = true 49 | } 50 | } 51 | 52 | master { 53 | version = var.master_version 54 | public_ip = var.master_public_ip 55 | security_group_ids = var.master_security_group_ids 56 | 57 | maintenance_policy { 58 | auto_upgrade = var.master_auto_upgrade 59 | 60 | dynamic "maintenance_window" { 61 | for_each = var.master_maintenance_windows 62 | 63 | content { 64 | day = lookup(maintenance_window.value, "day", null) 65 | start_time = maintenance_window.value["start_time"] 66 | duration = maintenance_window.value["duration"] 67 | } 68 | } 69 | } 70 | 71 | dynamic "zonal" { 72 | for_each = local.master_locations 73 | 74 | content { 75 | zone = zonal.value["zone"] 76 | subnet_id = zonal.value["subnet_id"] 77 | } 78 | } 79 | 80 | dynamic "regional" { 81 | for_each = local.master_regions 82 | 83 | content { 84 | region = regional.value["region"] 85 | 86 | dynamic "location" { 87 | for_each = regional.value["locations"] 88 | 89 | content { 90 | zone = location.value["zone"] 91 | subnet_id = location.value["subnet_id"] 92 | } 93 | } 94 | } 95 | } 96 | 97 | master_logging { 98 | enabled = var.master_logging["enabled"] 99 | log_group_id = var.master_logging["log_group_id"] != "" ? var.master_logging["log_group_id"] : (var.master_logging["create_log_group"] ? yandex_logging_group.main[0].id : null) 100 | audit_enabled = var.master_logging["enabled"] ? var.master_logging["audit_enabled"] : null 101 | kube_apiserver_enabled = var.master_logging["enabled"] ? var.master_logging["kube_apiserver_enabled"] : null 102 | cluster_autoscaler_enabled = var.master_logging["enabled"] ? var.master_logging["cluster_autoscaler_enabled"] : null 103 | events_enabled = var.master_logging["enabled"] ? var.master_logging["events_enabled"] : null 104 | } 105 | } 106 | } 107 | -------------------------------------------------------------------------------- /examples/complete/main.tf: -------------------------------------------------------------------------------- 1 | data "yandex_client_config" "client" {} 2 | 3 | module "network" { 4 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-vpc.git?ref=v1.0.0" 5 | 6 | folder_id = data.yandex_client_config.client.folder_id 7 | 8 | blank_name = "vpc-nat-gateway" 9 | labels = { 10 | repo = "terraform-yacloud-modules/terraform-yandex-vpc" 11 | } 12 | 13 | azs = ["ru-central1-a", "ru-central1-b", "ru-central1-d"] 14 | 15 | private_subnets = [["10.4.0.0/24"], ["10.5.0.0/24"], ["10.6.0.0/24"]] 16 | 17 | create_vpc = true 18 | create_nat_gateway = true 19 | } 20 | 21 | module "iam_accounts" { 22 | source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-iam.git//modules/iam-account?ref=v1.0.0" 23 | 24 | name = "iam" 25 | folder_roles = [ 26 | "container-registry.images.puller", 27 | "k8s.clusters.agent", 28 | "k8s.tunnelClusters.agent", 29 | "load-balancer.admin", 30 | "logging.writer", 31 | "vpc.privateAdmin", 32 | "vpc.publicAdmin", 33 | "vpc.user", 34 | ] 35 | cloud_roles = [] 36 | enable_static_access_key = false 37 | enable_api_key = false 38 | enable_account_key = false 39 | 40 | } 41 | 42 | module "kube" { 43 | source = "../../" 44 | 45 | network_id = module.network.vpc_id 46 | 47 | name = "k8s-test4" 48 | description = "Complete Kubernetes cluster with all features" 49 | labels = { 50 | environment = "test" 51 | project = "terraform-yacloud-modules" 52 | type = "complete" 53 | } 54 | 55 | cluster_ipv4_range = "10.112.0.0/16" 56 | service_ipv4_range = "10.113.0.0/16" 57 | node_ipv4_cidr_mask_size = 24 58 | 
59 | service_account_id = module.iam_accounts.id 60 | node_service_account_id = module.iam_accounts.id 61 | 62 | release_channel = "STABLE" 63 | master_version = "1.30" 64 | 65 | master_public_ip = true 66 | master_auto_upgrade = false 67 | 68 | cni_type = "calico" 69 | 70 | workload_identity_federation = { 71 | enabled = false 72 | } 73 | 74 | master_region = "ru-central1" 75 | 76 | master_locations = [ 77 | { 78 | zone = "ru-central1-a" 79 | subnet_id = module.network.private_subnets_ids[0] 80 | }, 81 | { 82 | zone = "ru-central1-b" 83 | subnet_id = module.network.private_subnets_ids[1] 84 | }, 85 | { 86 | zone = "ru-central1-d" 87 | subnet_id = module.network.private_subnets_ids[2] 88 | } 89 | ] 90 | 91 | master_maintenance_windows = [ 92 | { 93 | start_time = "23:00" 94 | duration = "3h" 95 | } 96 | ] 97 | 98 | master_logging = { 99 | enabled = false 100 | create_log_group = true 101 | log_group_retention_period = "168h" 102 | audit_enabled = true 103 | kube_apiserver_enabled = true 104 | cluster_autoscaler_enabled = true 105 | events_enabled = true 106 | } 107 | 108 | node_name_prefix = "ng" 109 | 110 | node_groups = { 111 | "fixed-scale" = { 112 | description = "Fixed scale node group across all zones" 113 | nat = true 114 | cores = 2 115 | memory = 4 116 | core_fraction = 100 117 | boot_disk_type = "network-hdd" 118 | boot_disk_size = 100 119 | preemptible = false 120 | fixed_scale = { 121 | size = 1 122 | } 123 | zones = ["ru-central1-a", "ru-central1-b", "ru-central1-d"] 124 | subnet_ids = [module.network.private_subnets_ids[0], module.network.private_subnets_ids[1], module.network.private_subnets_ids[2]] 125 | auto_repair = true 126 | auto_upgrade = true 127 | node_labels = { 128 | node-type = "fixed" 129 | } 130 | } 131 | 132 | "auto-scale-a" = { 133 | description = "Auto scale node group in zone A" 134 | nat = true 135 | cores = 2 136 | memory = 8 137 | core_fraction = 100 138 | boot_disk_type = "network-ssd" 139 | boot_disk_size = 120 140 | preemptible = false 141 | auto_scale = { 142 | min = 1 143 | max = 5 144 | initial = 1 145 | } 146 | zones = ["ru-central1-a"] 147 | subnet_ids = [module.network.private_subnets_ids[0]] 148 | auto_repair = true 149 | auto_upgrade = true 150 | node_labels = { 151 | node-type = "auto" 152 | zone = "a" 153 | } 154 | } 155 | 156 | "auto-scale-b" = { 157 | description = "Auto scale node group in zone B" 158 | nat = true 159 | cores = 2 160 | memory = 8 161 | core_fraction = 100 162 | boot_disk_type = "network-ssd" 163 | boot_disk_size = 120 164 | preemptible = false 165 | auto_scale = { 166 | min = 1 167 | max = 5 168 | initial = 1 169 | } 170 | zones = ["ru-central1-b"] 171 | subnet_ids = [module.network.private_subnets_ids[1]] 172 | auto_repair = true 173 | auto_upgrade = true 174 | node_labels = { 175 | node-type = "auto" 176 | zone = "b" 177 | } 178 | } 179 | } 180 | 181 | generate_default_ssh_key = true 182 | nodes_default_ssh_user = "ubuntu" 183 | enable_oslogin = true 184 | 185 | depends_on = [module.iam_accounts] 186 | 187 | } 188 | -------------------------------------------------------------------------------- /node_groups.tf: -------------------------------------------------------------------------------- 1 | resource "tls_private_key" "default_ssh_key" { 2 | count = var.generate_default_ssh_key ? 1 : 0 3 | 4 | algorithm = "RSA" 5 | } 6 | 7 | resource "yandex_kubernetes_node_group" "node_groups" { 8 | for_each = var.node_groups 9 | 10 | cluster_id = yandex_kubernetes_cluster.main.id 11 | name = var.node_name_prefix != "" ? 
format("%s-%s", var.node_name_prefix, each.key) : each.key 12 | 13 | description = each.value["description"] 14 | labels = lookup(each.value, "labels", var.labels) 15 | 16 | version = lookup(each.value, "version", var.master_version) 17 | 18 | instance_template { 19 | name = each.value["instance_name_template"] 20 | platform_id = each.value["platform_id"] 21 | metadata = merge( 22 | local.node_groups_ssh_keys_metadata, 23 | each.value["metadata"], 24 | var.enable_oslogin ? { "enable-oslogin" = "true" } : {} 25 | ) 26 | 27 | resources { 28 | memory = each.value["memory"] 29 | cores = each.value["cores"] 30 | core_fraction = each.value["core_fraction"] 31 | gpus = each.value["gpus"] 32 | } 33 | 34 | boot_disk { 35 | type = each.value["boot_disk_type"] 36 | size = each.value["boot_disk_size"] 37 | } 38 | 39 | scheduling_policy { 40 | preemptible = each.value["preemptible"] 41 | } 42 | 43 | dynamic "placement_policy" { 44 | for_each = compact([each.value["placement_group_id"]]) 45 | 46 | content { 47 | placement_group_id = placement_policy.value 48 | } 49 | } 50 | 51 | dynamic "gpu_settings" { 52 | for_each = each.value["gpu_settings"] != null ? [each.value["gpu_settings"]] : [] 53 | content { 54 | gpu_cluster_id = lookup(gpu_settings.value, "gpu_cluster_id", null) 55 | gpu_environment = lookup(gpu_settings.value, "gpu_environment", null) 56 | } 57 | } 58 | 59 | dynamic "container_network" { 60 | for_each = each.value["container_network_mtu"] != null ? [each.value["container_network_mtu"]] : [] 61 | content { 62 | pod_mtu = container_network.value 63 | } 64 | } 65 | 66 | network_interface { 67 | # 68 | # The logic is the following: 69 | # try subnet_ids in each node group and then if "node_groups" object contains "zones" key, take all "subnet_ids" 70 | # variables in a list format based on "zones" from "node_groups_locations" variable. 71 | # 72 | # otherwise, take the first one list of objects from "node_groups_locations" 73 | # 74 | subnet_ids = try(each.value["subnet_ids"], each.value["zones"] != null ? [ 75 | for zone in each.value["zones"] : lookup( 76 | { for item in local.node_groups_locations : item.zone => item.subnet_id }, 77 | zone, 78 | null 79 | ) 80 | if lookup({ for item in local.node_groups_locations : item.zone => item.subnet_id }, zone, null) != null 81 | ] : [ 82 | for location in [local.node_groups_locations[0]] : location.subnet_id 83 | ]) 84 | 85 | ipv4 = true 86 | ipv6 = false 87 | nat = each.value["nat"] 88 | security_group_ids = each.value.security_group_ids != null ? each.value.security_group_ids : var.node_groups_default_security_groups_ids 89 | 90 | dynamic "ipv4_dns_records" { 91 | for_each = each.value["ipv4_dns_records"] != null ? each.value["ipv4_dns_records"] : [] 92 | content { 93 | fqdn = ipv4_dns_records.value["fqdn"] 94 | dns_zone_id = lookup(ipv4_dns_records.value, "dns_zone_id", null) 95 | ptr = lookup(ipv4_dns_records.value, "ptr", false) 96 | ttl = lookup(ipv4_dns_records.value, "ttl", null) 97 | } 98 | } 99 | 100 | dynamic "ipv6_dns_records" { 101 | for_each = each.value["ipv6_dns_records"] != null ? 
each.value["ipv6_dns_records"] : [] 102 | content { 103 | fqdn = ipv6_dns_records.value["fqdn"] 104 | dns_zone_id = lookup(ipv6_dns_records.value, "dns_zone_id", null) 105 | ptr = lookup(ipv6_dns_records.value, "ptr", false) 106 | ttl = lookup(ipv6_dns_records.value, "ttl", null) 107 | } 108 | } 109 | } 110 | 111 | network_acceleration_type = each.value["network_acceleration_type"] 112 | 113 | dynamic "container_runtime" { 114 | for_each = compact([each.value["container_runtime_type"]]) 115 | 116 | content { 117 | type = container_runtime.value 118 | } 119 | } 120 | } 121 | 122 | scale_policy { 123 | dynamic "fixed_scale" { 124 | for_each = each.value["fixed_scale"] != null && each.value["auto_scale"] == null ? [1] : [] 125 | 126 | content { 127 | size = each.value["fixed_scale"]["size"] 128 | } 129 | } 130 | 131 | dynamic "auto_scale" { 132 | for_each = each.value["fixed_scale"] == null && each.value["auto_scale"] != null ? [1] : [] 133 | 134 | content { 135 | min = each.value["auto_scale"]["min"] 136 | max = each.value["auto_scale"]["max"] 137 | initial = each.value["auto_scale"]["initial"] 138 | } 139 | } 140 | } 141 | 142 | allocation_policy { 143 | dynamic "location" { 144 | for_each = each.value["zones"] != null ? each.value["zones"] : [ 145 | for location in [local.node_groups_locations[0]] : location.zone 146 | ] 147 | 148 | content { 149 | zone = location.value 150 | } 151 | } 152 | } 153 | 154 | maintenance_policy { 155 | auto_repair = each.value["auto_repair"] 156 | auto_upgrade = each.value["auto_upgrade"] 157 | 158 | dynamic "maintenance_window" { 159 | for_each = lookup(each.value, "maintenance_windows", null) != null ? each.value["maintenance_windows"] : [] 160 | 161 | content { 162 | day = lookup(maintenance_window.value, "day", null) 163 | start_time = maintenance_window.value["start_time"] 164 | duration = maintenance_window.value["duration"] 165 | } 166 | } 167 | } 168 | 169 | node_labels = each.value["node_labels"] 170 | node_taints = each.value["node_taints"] 171 | allowed_unsafe_sysctls = each.value["allowed_unsafe_sysctls"] 172 | 173 | dynamic "deploy_policy" { 174 | for_each = each.value["max_expansion"] != null || each.value["max_unavailable"] != null ? [1] : [] 175 | 176 | content { 177 | max_expansion = each.value["max_expansion"] 178 | max_unavailable = each.value["max_unavailable"] 179 | } 180 | } 181 | } 182 | -------------------------------------------------------------------------------- /variables.tf: -------------------------------------------------------------------------------- 1 | # 2 | # yandex cloud coordinates 3 | # 4 | 5 | # 6 | # naming 7 | # 8 | variable "name" { 9 | description = "K8S cluster name" 10 | type = string 11 | 12 | validation { 13 | condition = length(var.name) > 0 && can(regex("^[a-zA-Z][a-zA-Z0-9-]*$", var.name)) 14 | error_message = "Cluster name must be non-empty and can only contain alphanumeric characters and hyphens" 15 | } 16 | } 17 | 18 | variable "description" { 19 | description = "K8S cluster description" 20 | type = string 21 | default = "" 22 | } 23 | 24 | variable "labels" { 25 | description = "A set of labels to assign to the K8S cluster" 26 | type = map(string) 27 | default = {} 28 | } 29 | 30 | # 31 | # K8S сluster network 32 | # 33 | variable "network_id" { 34 | description = "The ID of the cluster network" 35 | type = string 36 | default = null 37 | } 38 | 39 | variable "cluster_ipv4_range" { 40 | description = <<-EOF 41 | CIDR block. IP range for allocating pod addresses. 
It should not overlap with 42 | any subnet in the network the K8S cluster located in. Static routes will 43 | be set up for this CIDR blocks in node subnets 44 | EOF 45 | type = string 46 | default = null 47 | 48 | validation { 49 | condition = var.cluster_ipv4_range == null ? true : can(regex("^([0-9]{1,3}\\.){3}[0-9]{1,3}/[0-9]{1,2}$", var.cluster_ipv4_range)) && cidrsubnet(var.cluster_ipv4_range, 0, 0) != null 50 | error_message = "cluster_ipv4_range must be a valid CIDR format (e.g., 10.112.0.0/16) and valid subnet" 51 | } 52 | 53 | } 54 | 55 | variable "cluster_ipv6_range" { 56 | description = "Identical to cluster_ipv4_range but for IPv6 protocol" 57 | type = string 58 | default = null 59 | } 60 | 61 | variable "node_ipv4_cidr_mask_size" { 62 | description = <<-EOF 63 | Size of the masks that are assigned to each node in the cluster. Effectively 64 | limits maximum number of pods for each node 65 | EOF 66 | type = number 67 | default = null 68 | 69 | validation { 70 | condition = var.node_ipv4_cidr_mask_size == null ? true : contains([0, 24, 25, 26, 27, 28], var.node_ipv4_cidr_mask_size) 71 | error_message = "node_ipv4_cidr_mask_size must be one of: 0, 24, 25, 26, 27, 28" 72 | } 73 | 74 | } 75 | 76 | variable "service_ipv4_range" { 77 | description = <<-EOF 78 | CIDR block. IP range K8S service K8S cluster IP addresses 79 | will be allocated from. It should not overlap with any subnet in the network 80 | the K8S cluster located in 81 | EOF 82 | type = string 83 | default = null 84 | 85 | validation { 86 | condition = var.service_ipv4_range == null ? true : can(regex("^([0-9]{1,3}\\.){3}[0-9]{1,3}/[0-9]{1,2}$", var.service_ipv4_range)) 87 | error_message = "service_ipv4_range must be a valid CIDR format (e.g., 10.113.0.0/16)" 88 | } 89 | } 90 | 91 | variable "service_ipv6_range" { 92 | description = "Identical to service_ipv4_range but for IPv6 protocol" 93 | type = string 94 | default = null 95 | } 96 | 97 | variable "cni_type" { 98 | description = "Type of K8S CNI which will be used for the cluster" 99 | type = string 100 | default = "calico" 101 | } 102 | 103 | # 104 | # Cluster IAM 105 | # 106 | variable "service_account_id" { 107 | description = <<-EOF 108 | ID of existing service account to be used for provisioning Compute Cloud 109 | and VPC resources for K8S cluster. Selected service account should have 110 | edit role on the folder where the K8S cluster will be located and on the 111 | folder where selected network resides 112 | EOF 113 | type = string 114 | default = null 115 | } 116 | 117 | variable "node_service_account_id" { 118 | description = <<-EOF 119 | ID of service account to be used by the worker nodes of the K8S 120 | cluster to access Container Registry or to push node logs and metrics. 121 | 122 | If omitted or equal to `service_account_id`, service account will be used 123 | as node service account. 
124 | EOF 125 | type = string 126 | default = null 127 | } 128 | 129 | # 130 | # Cluster options 131 | # 132 | variable "release_channel" { 133 | description = "K8S cluster release channel" 134 | type = string 135 | default = "STABLE" 136 | 137 | validation { 138 | condition = contains(["RAPID", "REGULAR", "STABLE", "RELEASE_CHANNEL_UNSPECIFIED"], var.release_channel) 139 | error_message = "release_channel must be one of: RAPID, REGULAR, STABLE, RELEASE_CHANNEL_UNSPECIFIED" 140 | } 141 | 142 | } 143 | 144 | variable "kms_provider_key_id" { 145 | description = "K8S cluster KMS key ID" 146 | type = string 147 | default = null 148 | } 149 | 150 | # 151 | # Cluster advanced options 152 | # 153 | variable "workload_identity_federation" { 154 | description = "Workload Identity Federation configuration" 155 | type = object({ 156 | enabled = optional(bool, false) 157 | }) 158 | default = { 159 | enabled = false 160 | } 161 | } 162 | 163 | # 164 | # Master options 165 | # 166 | variable "master_version" { 167 | description = "Version of K8S that will be used for master" 168 | type = string 169 | default = "1.30" 170 | } 171 | 172 | variable "master_public_ip" { 173 | description = "Boolean flag. When true, K8S master will have visible ipv4 address" 174 | type = bool 175 | default = true 176 | } 177 | 178 | variable "master_security_group_ids" { 179 | description = "List of security group IDs to which the K8S cluster belongs" 180 | type = set(string) 181 | default = null 182 | } 183 | 184 | variable "master_region" { 185 | description = <<-EOF 186 | Name of region where cluster will be created. Required for regional cluster, 187 | not used for zonal cluster 188 | EOF 189 | type = string 190 | default = null 191 | } 192 | 193 | variable "master_locations" { 194 | description = <<-EOF 195 | List of locations where cluster will be created. If list contains only one 196 | location, will be created zonal cluster, if more than one -- regional 197 | EOF 198 | type = list(object({ 199 | subnet_id = string 200 | zone = string 201 | })) 202 | } 203 | 204 | variable "master_auto_upgrade" { 205 | description = "Boolean flag that specifies if master can be upgraded automatically" 206 | type = bool 207 | default = false 208 | } 209 | 210 | variable "master_maintenance_windows" { 211 | description = < 14 | ## Requirements 15 | 16 | | Name | Version | 17 | |------|---------| 18 | | [terraform](#requirement\_terraform) | >= 1.3 | 19 | | [tls](#requirement\_tls) | >= 3.1.0 | 20 | | [yandex](#requirement\_yandex) | >= 0.72.0 | 21 | 22 | ## Providers 23 | 24 | | Name | Version | 25 | |------|---------| 26 | | [tls](#provider\_tls) | >= 3.1.0 | 27 | | [yandex](#provider\_yandex) | >= 0.72.0 | 28 | 29 | ## Modules 30 | 31 | No modules. 
32 |
33 | ## Resources
34 |
35 | | Name | Type |
36 | |------|------|
37 | | [tls_private_key.default_ssh_key](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) | resource |
38 | | [yandex_kubernetes_cluster.main](https://registry.terraform.io/providers/yandex-cloud/yandex/latest/docs/resources/kubernetes_cluster) | resource |
39 | | [yandex_kubernetes_node_group.node_groups](https://registry.terraform.io/providers/yandex-cloud/yandex/latest/docs/resources/kubernetes_node_group) | resource |
40 | | [yandex_logging_group.main](https://registry.terraform.io/providers/yandex-cloud/yandex/latest/docs/resources/logging_group) | resource |
41 | | [yandex_client_config.client](https://registry.terraform.io/providers/yandex-cloud/yandex/latest/docs/data-sources/client_config) | data source |
42 |
43 | ## Inputs
44 |
45 | | Name | Description | Type | Default | Required |
46 | |------|-------------|------|---------|:--------:|
47 | | [cluster\_ipv4\_range](#input\_cluster\_ipv4\_range) | CIDR block. IP range for allocating pod addresses. It should not overlap with<br/>any subnet in the network the K8S cluster located in. Static routes will<br/>be set up for this CIDR blocks in node subnets | `string` | `null` | no |
48 | | [cluster\_ipv6\_range](#input\_cluster\_ipv6\_range) | Identical to cluster\_ipv4\_range but for IPv6 protocol | `string` | `null` | no |
49 | | [cni\_type](#input\_cni\_type) | Type of K8S CNI which will be used for the cluster | `string` | `"calico"` | no |
50 | | [description](#input\_description) | K8S cluster description | `string` | `""` | no |
51 | | [enable\_oslogin](#input\_enable\_oslogin) | Enable OS Login for node groups | `bool` | `false` | no |
52 | | [generate\_default\_ssh\_key](#input\_generate\_default\_ssh\_key) | If true, SSH key for node groups will be generated | `bool` | `true` | no |
53 | | [kms\_provider\_key\_id](#input\_kms\_provider\_key\_id) | K8S cluster KMS key ID | `string` | `null` | no |
54 | | [labels](#input\_labels) | A set of labels to assign to the K8S cluster | `map(string)` | `{}` | no |
55 | | [master\_auto\_upgrade](#input\_master\_auto\_upgrade) | Boolean flag that specifies if master can be upgraded automatically | `bool` | `false` | no |
56 | | [master\_locations](#input\_master\_locations) | List of locations where cluster will be created. If list contains only one<br/>location, will be created zonal cluster, if more than one -- regional | <pre>list(object({<br/>  subnet_id = string<br/>  zone = string<br/>}))</pre> | n/a | yes |
57 | | [master\_logging](#input\_master\_logging) | Master logging | <pre>object({<br/>  enabled = bool<br/>  create_log_group = optional(bool, true)<br/>  log_group_retention_period = optional(string, "168h")<br/>  log_group_id = optional(string, "")<br/>  audit_enabled = optional(bool, true)<br/>  kube_apiserver_enabled = optional(bool, true)<br/>  cluster_autoscaler_enabled = optional(bool, true)<br/>  events_enabled = optional(bool, true)<br/>})</pre> | <pre>{<br/>  "enabled": false<br/>}</pre> | no |
58 | | [master\_maintenance\_windows](#input\_master\_maintenance\_windows) | List of structures that specifies maintenance windows,<br/>when auto update for master is allowed<br/><br/>E.g:<br/>master_maintenance_windows = [<br/>  {<br/>    start_time = "10:00"<br/>    duration = "5h"<br/>  }<br/>] | `list(map(string))` | <pre>[<br/>  {<br/>    "duration": "3h",<br/>    "start_time": "23:00"<br/>  }<br/>]</pre> | no |
59 | | [master\_public\_ip](#input\_master\_public\_ip) | Boolean flag. When true, K8S master will have visible ipv4 address | `bool` | `true` | no |
60 | | [master\_region](#input\_master\_region) | Name of region where cluster will be created. Required for regional cluster,<br/>not used for zonal cluster | `string` | `null` | no |
61 | | [master\_security\_group\_ids](#input\_master\_security\_group\_ids) | List of security group IDs to which the K8S cluster belongs | `set(string)` | `null` | no |
62 | | [master\_version](#input\_master\_version) | Version of K8S that will be used for master | `string` | `"1.30"` | no |
63 | | [name](#input\_name) | K8S cluster name | `string` | n/a | yes |
64 | | [network\_id](#input\_network\_id) | The ID of the cluster network | `string` | `null` | no |
65 | | [node\_groups](#input\_node\_groups) | K8S node groups | <pre>map(object({<br/>  description = optional(string, null)<br/>  labels = optional(map(string), null)<br/>  version = optional(string, null)<br/>  metadata = optional(map(string), {})<br/>  platform_id = optional(string, null)<br/>  memory = optional(number, 2)<br/>  cores = optional(number, 2)<br/>  core_fraction = optional(number, 100)<br/>  gpus = optional(number, null)<br/>  boot_disk_type = optional(string, "network-hdd")<br/>  boot_disk_size = optional(number, 100)<br/>  preemptible = optional(bool, false)<br/>  placement_group_id = optional(string, null)<br/>  nat = optional(bool, false)<br/>  security_group_ids = optional(list(string))<br/>  network_acceleration_type = optional(string, "standard")<br/>  container_runtime_type = optional(string, "containerd")<br/>  fixed_scale = optional(map(string), null)<br/>  auto_scale = optional(map(string), null)<br/>  auto_repair = optional(bool, true)<br/>  auto_upgrade = optional(bool, true)<br/>  maintenance_windows = optional(list(any))<br/>  node_labels = optional(map(string), null)<br/>  node_taints = optional(list(string), null)<br/>  allowed_unsafe_sysctls = optional(list(string), [])<br/>  max_expansion = optional(number, null)<br/>  max_unavailable = optional(number, null)<br/>  zones = optional(list(string), null)<br/>  subnet_ids = optional(list(string), null)<br/>  gpu_settings = optional(map(string), null)<br/>  container_network_mtu = optional(number, null)<br/>  instance_name_template = optional(string, null)<br/>  placement_policy = optional(map(string), null)<br/>  ipv4_dns_records = optional(list(map(string)), [])<br/>  ipv6_dns_records = optional(list(map(string)), [])<br/>}))</pre> | `{}` | no |
66 | | [node\_groups\_default\_security\_groups\_ids](#input\_node\_groups\_default\_security\_groups\_ids) | A list of default IDs for node groups. Will be used if node\_groups[].security\_group\_ids is empty | `list(string)` | `[]` | no |
67 | | [node\_groups\_locations](#input\_node\_groups\_locations) | Locations of K8S node groups. If omitted, master\_locations will be used | <pre>list(object({<br/>  subnet_id = string<br/>  zone = string<br/>}))</pre> | `null` | no |
68 | | [node\_groups\_ssh\_keys](#input\_node\_groups\_ssh\_keys) | Map containing SSH keys to install on all K8S node servers by default | `map(list(string))` | `{}` | no |
69 | | [node\_ipv4\_cidr\_mask\_size](#input\_node\_ipv4\_cidr\_mask\_size) | Size of the masks that are assigned to each node in the cluster. Effectively<br/>limits maximum number of pods for each node | `number` | `null` | no |
70 | | [node\_name\_prefix](#input\_node\_name\_prefix) | The prefix for node group name | `string` | `""` | no |
71 | | [node\_service\_account\_id](#input\_node\_service\_account\_id) | ID of service account to be used by the worker nodes of the K8S<br/>cluster to access Container Registry or to push node logs and metrics.<br/><br/>If omitted or equal to `service_account_id`, service account will be used<br/>as node service account. | `string` | `null` | no |
72 | | [nodes\_default\_ssh\_user](#input\_nodes\_default\_ssh\_user) | Default SSH user for node groups. Used only if generate\_default\_ssh\_key == true | `string` | `"ubuntu"` | no |
73 | | [release\_channel](#input\_release\_channel) | K8S cluster release channel | `string` | `"STABLE"` | no |
74 | | [service\_account\_id](#input\_service\_account\_id) | ID of existing service account to be used for provisioning Compute Cloud<br/>and VPC resources for K8S cluster. Selected service account should have<br/>edit role on the folder where the K8S cluster will be located and on the<br/>folder where selected network resides | `string` | `null` | no |
75 | | [service\_ipv4\_range](#input\_service\_ipv4\_range) | CIDR block. IP range K8S service K8S cluster IP addresses<br/>will be allocated from. It should not overlap with any subnet in the network<br/>the K8S cluster located in | `string` | `null` | no |
76 | | [service\_ipv6\_range](#input\_service\_ipv6\_range) | Identical to service\_ipv4\_range but for IPv6 protocol | `string` | `null` | no |
77 | | [workload\_identity\_federation](#input\_workload\_identity\_federation) | Workload Identity Federation configuration | <pre>object({<br/>  enabled = optional(bool, false)<br/>})</pre> | <pre>{<br/>  "enabled": false<br/>}</pre> | no |
78 |
79 | ## Outputs
80 |
81 | | Name | Description |
82 | |------|-------------|
83 | | [cluster\_ca\_certificate](#output\_cluster\_ca\_certificate) | PEM-encoded public certificate that is the root of trust for the K8S cluster |
84 | | [cluster\_id](#output\_cluster\_id) | ID of a new K8S cluster |
85 | | [default\_ssh\_key\_prv](#output\_default\_ssh\_key\_prv) | Default node groups that is attached to all node groups |
86 | | [default\_ssh\_key\_pub](#output\_default\_ssh\_key\_pub) | Default node groups that is attached to all node groups |
87 | | [external\_v4\_endpoint](#output\_external\_v4\_endpoint) | An IPv4 external network address that is assigned to the master |
88 | | [get\_credentials\_command](#output\_get\_credentials\_command) | Command to get kubeconfig for the cluster |
89 | | [internal\_v4\_endpoint](#output\_internal\_v4\_endpoint) | An IPv4 internal network address that is assigned to the master |
90 | | [log\_group\_id](#output\_log\_group\_id) | ID of the Yandex Cloud Logging group |
91 | | [log\_group\_name](#output\_log\_group\_name) | Name of the Yandex Cloud Logging group |
92 | | [node\_groups](#output\_node\_groups) | Attributes of yandex\_node\_group resources created in cluster |
93 |
94 |
95 | ## License
96 |
97 | Apache-2.0 Licensed.
98 | See [LICENSE](https://github.com/terraform-yacloud-modules/terraform-yandex-kubernetes/blob/main/LICENSE).
99 | --------------------------------------------------------------------------------
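A minimal invocation sketch, distilled from `examples/simple/main.tf` above. The module `source` ref, network ID, subnet ID, and service account ID are placeholders, not values from this repository; the bundled examples obtain them from the `terraform-yandex-vpc` and `terraform-yandex-iam` modules instead.

```hcl
# Minimal zonal cluster: a single master location and no node groups.
# All "<...>" values are assumed placeholders -- substitute your own IDs.
module "kube" {
  source = "git::https://github.com/terraform-yacloud-modules/terraform-yandex-kubernetes.git?ref=<release tag>"

  name       = "my-cluster"
  network_id = "<vpc network id>"

  # The same service account may serve both the control plane and the nodes.
  service_account_id      = "<service account id>"
  node_service_account_id = "<service account id>"

  # One location creates a zonal cluster; several locations create a regional one.
  master_locations = [
    {
      zone      = "ru-central1-a"
      subnet_id = "<subnet id>"
    }
  ]
}
```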