├── .gitignore ├── 01-benefits-iac ├── .terraform.lock.hcl └── vpc.tf ├── 02-hcl └── hcl.tf ├── 03-first-tf-project ├── .terraform.lock.hcl ├── providers.tf └── s3.tf ├── 04-backends ├── .terraform.lock.hcl ├── dev.s3.tfbackend ├── prod.s3.tfbackend ├── providers.tf └── s3.tf ├── 05-providers ├── .terraform.lock.hcl └── providers.tf ├── 06-resources ├── .terraform.lock.hcl ├── README.md ├── compute.tf ├── networking.tf └── provider.tf ├── 07-data-sources ├── .terraform.lock.hcl ├── compute.tf └── provider.tf ├── 08-input-vars-locals-outputs ├── .terraform.lock.hcl ├── compute.tf ├── outputs.tf ├── override.tfvars ├── provider.tf ├── s3.tf ├── shared-locals.tf ├── terraform.tfvars └── variables.tf ├── 09-expressions ├── for-lists.tf ├── for-maps.tf ├── lists-maps.tf ├── operators.tf ├── provider.tf ├── splat.tf ├── terraform.tfvars └── variables.tf ├── 10-functions ├── functions-examples.tf ├── provider.tf └── users.yaml ├── 11-multiple-resources ├── .terraform.lock.hcl ├── compute.tf ├── data.tf ├── networking.tf ├── provider.tf ├── terraform.tfvars └── variables.tf ├── 12-public-modules ├── .terraform.lock.hcl ├── compute.tf ├── networking.tf ├── provider.tf └── shared-data.tf ├── 13-local-modules ├── .terraform.lock.hcl ├── README.md ├── compute.tf ├── modules │ └── networking │ │ ├── LICENSE │ │ ├── README.md │ │ ├── examples │ │ └── complete │ │ │ └── main.tf │ │ ├── outputs.tf │ │ ├── providers.tf │ │ ├── variables.tf │ │ └── vpc.tf ├── networking.tf ├── outputs.tf └── providers.tf ├── 14-use-own-module ├── .terraform.lock.hcl ├── networking.tf └── providers.tf ├── 15-object-validation ├── .terraform.lock.hcl ├── compute.tf ├── networking.tf ├── provider.tf └── variables.tf ├── 16-state-manipulation ├── .terraform.lock.hcl ├── import.tf ├── modules │ └── compute │ │ ├── compute.tf │ │ ├── provider.tf │ │ └── variables.tf ├── move-state.tf ├── provider.tf ├── remove.tf └── taint.tf ├── 17-workspaces ├── .terraform.lock.hcl ├── dev.tfvars ├── int.tfvars ├── 
prod.tfvars ├── provider.tf ├── s3.tf ├── staging.tfvars └── variables.tf ├── 18-terraform-cloud ├── .terraform.lock.hcl ├── compute.tf ├── provider.tf ├── random.tf ├── s3.tf └── variables.tf ├── README.md ├── exercises ├── exercise01-benefits_iac_aws_console.md ├── exercise02-terraform.md ├── exercise03-terraform_stages.md ├── exercise04-hcl.md ├── exercise05-first_tf_project.md ├── exercise06-terraform_cli.md ├── exercise07-s3_backend.md ├── exercise08-partial_backend_config.md ├── exercise09-providers.md ├── exercise10-data_source_ami_information.md ├── exercise11-data_source_aws_caller_identity_region.md ├── exercise12-data_source_vpc_information.md ├── exercise13-data_source_iam_policy.md ├── exercise14-aws_region_variables.md ├── exercise15-ec2_instance_size_volume.md ├── exercise16-objects_volume_configuration.md ├── exercise17-tfvars.md ├── exercise18-auto_tfvars.md ├── exercise19-locals.md ├── exercise20-outputs.md ├── exercise21-sensitive_values.md ├── exercise22-operators.md ├── exercise23-for_lists.md ├── exercise24-for_maps.md ├── exercise25-lists_maps.md ├── exercise26-functions.md ├── exercise27-creating_count.md ├── exercise28-referencing_count.md ├── exercise29-multiple_ec2_list_input.md ├── exercise30-allow_nginx_image.md ├── exercise31-validation_list_input.md ├── exercise32-multiple_ec2_map_input.md ├── exercise33-validation_map_input.md ├── exercise34-provide_subnet_information.md ├── exercise35-using_aws_vpc_module.md ├── exercise36-using_aws_ec2_module.md ├── exercise37-creating_own_vpc_module.md ├── exercise38-using_object_inputs.md ├── exercise39-receiving_subnet_configuration.md ├── exercise40-validating_azs.md ├── exercise41-public_private_subnets.md ├── exercise42-defining_outputs.md ├── exercise43-testing_module.md ├── exercise44-preconditions.md ├── exercise45-postconditions.md ├── exercise46-check_blocks.md ├── exercise47-moved_blocks.md ├── exercise48-import_block.md ├── exercise49-removed_block.md ├── exercise50-taints.md ├── 
exercise51-workspaces_cli.md ├── exercise52-multiple_workspaces.md ├── exercise53-tfvars_workspace_configuration.md ├── exercise54-tf_cloud_workspace.md ├── exercise55-tf_cloud_resources.md ├── exercise56-tf_cloud_authenticating_aws.md ├── exercise57-tf_cloud_workspace_variables.md ├── exercise58-tf_cloud_vcs_integration.md ├── exercise59-tf_cloud_speculative_plans.md ├── exercise60-tf_cloud_private_registries.md └── exercise61-tf_cloud_cleanup.md ├── proj01-s3-static-website ├── .terraform.lock.hcl ├── build │ ├── error.html │ └── index.html ├── outputs.tf ├── provider.tf └── s3.tf ├── proj02-iam-users ├── .terraform.lock.hcl ├── provider.tf ├── roles.tf ├── user-roles.yaml └── users.tf ├── proj03-import-lambda ├── .terraform.lock.hcl ├── build │ └── index.mjs ├── cloudwatch.tf ├── iam.tf ├── lambda.tf ├── lambda.zip ├── outputs.tf └── provider.tf ├── proj04-rds-module ├── .terraform.lock.hcl ├── README.md ├── modules │ └── rds │ │ ├── networking-validation.tf │ │ ├── outputs.tf │ │ ├── provider.tf │ │ ├── rds.tf │ │ └── variables.tf ├── networking.tf ├── outputs.tf ├── provider.tf └── rds.tf ├── proj05-tf-cloud-oidc ├── .terraform.lock.hcl ├── oidc.tf ├── provider.tf ├── s3.tf ├── terraform.tfvars └── variables.tf └── projects ├── assets ├── proj00-vpc-ec2-nginx.png └── proj02-iam-users.png ├── proj00-vpc-ec2.md ├── proj01-s3-static-website.md ├── proj02-iam-users.md ├── proj03-import-lambda.md ├── proj04-rds-module.md └── proj05-tf-cloud-oidc.md /.gitignore: -------------------------------------------------------------------------------- 1 | .env 2 | .terraform/ 3 | **.tfstate* 4 | .prettierrc -------------------------------------------------------------------------------- /01-benefits-iac/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.37.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:jy1tY8vUGirfcC9GwSS2Uf01GXcxnNnotvIK/WjI2WI=", 9 | "zh:00f40a3d9593476693a7a72d993fd289f7be374fe3f2799776c6296eb6ff890a", 10 | "zh:1010a9fbf55852a8da3473de4ec0f1fcf29efa85d66f61cbe2b086dbbd7747ae", 11 | "zh:103a5674d1eb1cff05fe35e9baa9875afd18d740868b63f9c0c25eadb5eb4eb7", 12 | "zh:270ac1b7a1327c1456a43df44c0b5cc3e26ed6d8861a709adeea1da684a563f5", 13 | "zh:424362c02c8917c0586f3dd49aca27b7e0c21f5a23374b7045e9be3b5646c028", 14 | "zh:549fa2ea187964ab9a0c354310947ead30e09b3199db1ff377c21d7547d78299", 15 | "zh:6492d2ccc7f7d60e83cd8b7244adc53f30efc17d84b1ffc1b8fd6c385f8255fd", 16 | "zh:66fb7b3b8a357071d26c5996c16d426edf07502a05ac86f4a6f73646ee7d1bbb", 17 | "zh:6ecc05fb466d06ea8945564d2cdb8c2a8827d8cfca1550e9fb7eac0e95920196", 18 | "zh:7932360b627b211dad937d278a8692a6c52bd6c0a71e4ec9e94ccbe825053822", 19 | "zh:97ed1b4a18842c4d56a735329e87b4ef91a47e820e5a5c3c2dd64e293408bfc8", 20 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 21 | "zh:d5e022052011e1984b9c2f8bc5a6b05c909e3b5bf40c3baddf191bf90e3169c2", 22 | "zh:d7e9488b2ce5904efb91c8577b3fe9b0cd599c4cd508f1f163f292930f54fdf0", 23 | "zh:e57cd93d5cd81dd0f446076af6e47a53ce83df2947ec64ed39a1090d4bdf8f0b", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /01-benefits-iac/vpc.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "~> 5.0" 6 | } 7 | } 8 | } 9 | 10 | provider "aws" { 11 | region = "eu-west-1" 12 | } 13 | 14 | resource "aws_vpc" "demo_vpc" { 15 | cidr_block = "10.0.0.0/16" 16 | 17 | tags = { 18 | Name = "Terraform VPC" 19 | } 20 | } 21 | 22 | resource "aws_subnet" "public_subnet" { 23 | vpc_id = aws_vpc.demo_vpc.id 24 | cidr_block = "10.0.0.0/24" 25 | } 26 | 27 | resource "aws_subnet" 
"private_subnet" { 28 | vpc_id = aws_vpc.demo_vpc.id 29 | cidr_block = "10.0.1.0/24" 30 | } 31 | 32 | resource "aws_internet_gateway" "igw" { 33 | vpc_id = aws_vpc.demo_vpc.id 34 | } 35 | 36 | resource "aws_route_table" "public_rtb" { 37 | vpc_id = aws_vpc.demo_vpc.id 38 | 39 | route { 40 | cidr_block = "0.0.0.0/0" 41 | gateway_id = aws_internet_gateway.igw.id 42 | } 43 | } 44 | 45 | resource "aws_route_table_association" "public_subnet" { 46 | subnet_id = aws_subnet.public_subnet.id 47 | route_table_id = aws_route_table.public_rtb.id 48 | } 49 | -------------------------------------------------------------------------------- /02-hcl/hcl.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "5.37.0" 6 | } 7 | } 8 | } 9 | 10 | # Actively managed by us, by our Terraform project 11 | resource "aws_s3_bucket" "my_bucket" { 12 | bucket = var.bucket_name 13 | } 14 | 15 | # Managed somewhere else, we just want to use it in our project 16 | data "aws_s3_bucket" "my_external_bucket" { 17 | bucket = "not-managed-by-us" 18 | } 19 | 20 | variable "bucket_name" { 21 | type = string 22 | description = "My variable used to set the bucket name" 23 | default = "my-default-bucket-name" # S3 bucket names may not contain underscores 24 | } 25 | 26 | output "bucket_id" { 27 | value = aws_s3_bucket.my_bucket.id 28 | } 29 | 30 | locals { 31 | local_example = "This is a local variable" 32 | } 33 | 34 | module "my_module" { 35 | source = "./module-example" 36 | } 37 | -------------------------------------------------------------------------------- /03-first-tf-project/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.37.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:jy1tY8vUGirfcC9GwSS2Uf01GXcxnNnotvIK/WjI2WI=", 9 | "zh:00f40a3d9593476693a7a72d993fd289f7be374fe3f2799776c6296eb6ff890a", 10 | "zh:1010a9fbf55852a8da3473de4ec0f1fcf29efa85d66f61cbe2b086dbbd7747ae", 11 | "zh:103a5674d1eb1cff05fe35e9baa9875afd18d740868b63f9c0c25eadb5eb4eb7", 12 | "zh:270ac1b7a1327c1456a43df44c0b5cc3e26ed6d8861a709adeea1da684a563f5", 13 | "zh:424362c02c8917c0586f3dd49aca27b7e0c21f5a23374b7045e9be3b5646c028", 14 | "zh:549fa2ea187964ab9a0c354310947ead30e09b3199db1ff377c21d7547d78299", 15 | "zh:6492d2ccc7f7d60e83cd8b7244adc53f30efc17d84b1ffc1b8fd6c385f8255fd", 16 | "zh:66fb7b3b8a357071d26c5996c16d426edf07502a05ac86f4a6f73646ee7d1bbb", 17 | "zh:6ecc05fb466d06ea8945564d2cdb8c2a8827d8cfca1550e9fb7eac0e95920196", 18 | "zh:7932360b627b211dad937d278a8692a6c52bd6c0a71e4ec9e94ccbe825053822", 19 | "zh:97ed1b4a18842c4d56a735329e87b4ef91a47e820e5a5c3c2dd64e293408bfc8", 20 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 21 | "zh:d5e022052011e1984b9c2f8bc5a6b05c909e3b5bf40c3baddf191bf90e3169c2", 22 | "zh:d7e9488b2ce5904efb91c8577b3fe9b0cd599c4cd508f1f163f292930f54fdf0", 23 | "zh:e57cd93d5cd81dd0f446076af6e47a53ce83df2947ec64ed39a1090d4bdf8f0b", 24 | ] 25 | } 26 | 27 | provider "registry.terraform.io/hashicorp/random" { 28 | version = "3.6.0" 29 | constraints = "~> 3.0" 30 | hashes = [ 31 | "h1:I8MBeauYA8J8yheLJ8oSMWqB0kovn16dF/wKZ1QTdkk=", 32 | "zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d", 33 | "zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211", 34 | "zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829", 35 | "zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d", 36 | "zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055", 37 | "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", 38 | 
"zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17", 39 | "zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21", 40 | "zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839", 41 | "zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0", 42 | "zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c", 43 | "zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e", 44 | ] 45 | } 46 | -------------------------------------------------------------------------------- /03-first-tf-project/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | random = { 9 | source = "hashicorp/random" 10 | version = "~> 3.0" 11 | } 12 | } 13 | } 14 | 15 | provider "aws" { 16 | region = "eu-west-1" 17 | } 18 | -------------------------------------------------------------------------------- /03-first-tf-project/s3.tf: -------------------------------------------------------------------------------- 1 | resource "random_id" "bucket_suffix" { 2 | byte_length = 6 3 | } 4 | 5 | resource "aws_s3_bucket" "example_bucket" { 6 | bucket = "example-bucket-${random_id.bucket_suffix.hex}" 7 | } 8 | 9 | output "bucket_name" { 10 | value = aws_s3_bucket.example_bucket.bucket 11 | } -------------------------------------------------------------------------------- /04-backends/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.37.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:jy1tY8vUGirfcC9GwSS2Uf01GXcxnNnotvIK/WjI2WI=", 9 | "zh:00f40a3d9593476693a7a72d993fd289f7be374fe3f2799776c6296eb6ff890a", 10 | "zh:1010a9fbf55852a8da3473de4ec0f1fcf29efa85d66f61cbe2b086dbbd7747ae", 11 | "zh:103a5674d1eb1cff05fe35e9baa9875afd18d740868b63f9c0c25eadb5eb4eb7", 12 | "zh:270ac1b7a1327c1456a43df44c0b5cc3e26ed6d8861a709adeea1da684a563f5", 13 | "zh:424362c02c8917c0586f3dd49aca27b7e0c21f5a23374b7045e9be3b5646c028", 14 | "zh:549fa2ea187964ab9a0c354310947ead30e09b3199db1ff377c21d7547d78299", 15 | "zh:6492d2ccc7f7d60e83cd8b7244adc53f30efc17d84b1ffc1b8fd6c385f8255fd", 16 | "zh:66fb7b3b8a357071d26c5996c16d426edf07502a05ac86f4a6f73646ee7d1bbb", 17 | "zh:6ecc05fb466d06ea8945564d2cdb8c2a8827d8cfca1550e9fb7eac0e95920196", 18 | "zh:7932360b627b211dad937d278a8692a6c52bd6c0a71e4ec9e94ccbe825053822", 19 | "zh:97ed1b4a18842c4d56a735329e87b4ef91a47e820e5a5c3c2dd64e293408bfc8", 20 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 21 | "zh:d5e022052011e1984b9c2f8bc5a6b05c909e3b5bf40c3baddf191bf90e3169c2", 22 | "zh:d7e9488b2ce5904efb91c8577b3fe9b0cd599c4cd508f1f163f292930f54fdf0", 23 | "zh:e57cd93d5cd81dd0f446076af6e47a53ce83df2947ec64ed39a1090d4bdf8f0b", 24 | ] 25 | } 26 | 27 | provider "registry.terraform.io/hashicorp/random" { 28 | version = "3.6.0" 29 | constraints = "~> 3.0" 30 | hashes = [ 31 | "h1:I8MBeauYA8J8yheLJ8oSMWqB0kovn16dF/wKZ1QTdkk=", 32 | "zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d", 33 | "zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211", 34 | "zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829", 35 | "zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d", 36 | "zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055", 37 | "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", 38 | 
"zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17", 39 | "zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21", 40 | "zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839", 41 | "zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0", 42 | "zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c", 43 | "zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e", 44 | ] 45 | } 46 | -------------------------------------------------------------------------------- /04-backends/dev.s3.tfbackend: -------------------------------------------------------------------------------- 1 | # For illustration purposes, not actively used. 2 | 3 | bucket = "terraform-course-lauromueller-remote-backend" 4 | key = "04-backends/dev/state.tfstate" 5 | region = "eu-west-1" -------------------------------------------------------------------------------- /04-backends/prod.s3.tfbackend: -------------------------------------------------------------------------------- 1 | # For illustration purposes, not actively used. 
2 | 3 | bucket = "terraform-course-lauromueller-remote-backend" 4 | key = "04-backends/prod/state.tfstate" 5 | region = "eu-west-1" -------------------------------------------------------------------------------- /04-backends/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | random = { 9 | source = "hashicorp/random" 10 | version = "~> 3.0" 11 | } 12 | } 13 | 14 | backend "s3" { 15 | bucket = "terraform-course-lauromueller-remote-backend" 16 | key = "04-backends/state.tfstate" 17 | region = "eu-west-1" 18 | } 19 | } 20 | 21 | provider "aws" { 22 | region = "eu-west-1" 23 | } 24 | -------------------------------------------------------------------------------- /04-backends/s3.tf: -------------------------------------------------------------------------------- 1 | resource "random_id" "bucket_suffix" { 2 | byte_length = 6 3 | } 4 | 5 | resource "aws_s3_bucket" "example_bucket" { 6 | bucket = "example-bucket-${random_id.bucket_suffix.hex}" 7 | } 8 | 9 | output "bucket_name" { 10 | value = aws_s3_bucket.example_bucket.bucket 11 | } -------------------------------------------------------------------------------- /05-providers/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.38.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:axFddT4mkdtZREgkDXwXdzZGm1qxheF0fLN7S7bJJX4=", 9 | "zh:0d58264440fd28b6729990b48d8fd61e732f5570689d17bbbc0c5f2324d3dd00", 10 | "zh:175e24a3d399495fc91da359cc30a9fe06b7eeb98804816abcf1493859f6d28e", 11 | "zh:244a1f56d6710cc1a643f602a185b46d3cd064f6df60330006f92ab32f3ff60c", 12 | "zh:30dd99413867b1be808b656551a2f0452e4e37787f963780c51f1f85bf406441", 13 | "zh:3629d4e212c8ffd8e74c4ab9e9d22ca7fff803052366d011c014591fa65beb48", 14 | "zh:521badb184bbdde5dddb1228f7a241997db52ea51c9f8039ed5a626362952cf4", 15 | "zh:5580a937e1f5fa59c16c4b9802079aa45a16c7c69e5b7d4e97aebf2c0fb4bd00", 16 | "zh:87b801057d492ff0adc82ce6251871d87bdf5890749fe5753f447ec6fe4710ff", 17 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 18 | "zh:9c44e0c143f1d021440e9c448a9bc595f51a95e6cc382fcffe9db6d3b17f24c2", 19 | "zh:b7e6b7b182932a3dbb6ca5f8ebb8d37befe1456f3dffaafb37cee07dc0473696", 20 | "zh:d43fcf4f59cf79b1be3bec164d95fe9edc3fe39195a83226b911918a6538c8b3", 21 | "zh:ec3e383ce1e414f0bd7d3fe73409ff7d2777a5da27248b70fd5df1df323d920b", 22 | "zh:f729b443179bb115bbcbb0369fe46640de1c6dbd627b52694e9b3b8a41ec7881", 23 | "zh:fd532b707746145d3c6d3507bca2b8d44cc618b3d5006db99426221b71db7da7", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /05-providers/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } 14 | 15 | provider "aws" { 16 | region = "us-east-1" 17 | alias = "us-east" 18 | } 19 | 20 | resource "aws_s3_bucket" "eu_west_1" { 21 | bucket = "some-random-bucket-name-aosdhfoadhfu" 22 | } 23 | 24 | resource "aws_s3_bucket" "us_east_1" { 25 | bucket = 
"some-random-bucket-name-18736481364" 26 | provider = aws.us-east 27 | } -------------------------------------------------------------------------------- /06-resources/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.38.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:axFddT4mkdtZREgkDXwXdzZGm1qxheF0fLN7S7bJJX4=", 9 | "zh:0d58264440fd28b6729990b48d8fd61e732f5570689d17bbbc0c5f2324d3dd00", 10 | "zh:175e24a3d399495fc91da359cc30a9fe06b7eeb98804816abcf1493859f6d28e", 11 | "zh:244a1f56d6710cc1a643f602a185b46d3cd064f6df60330006f92ab32f3ff60c", 12 | "zh:30dd99413867b1be808b656551a2f0452e4e37787f963780c51f1f85bf406441", 13 | "zh:3629d4e212c8ffd8e74c4ab9e9d22ca7fff803052366d011c014591fa65beb48", 14 | "zh:521badb184bbdde5dddb1228f7a241997db52ea51c9f8039ed5a626362952cf4", 15 | "zh:5580a937e1f5fa59c16c4b9802079aa45a16c7c69e5b7d4e97aebf2c0fb4bd00", 16 | "zh:87b801057d492ff0adc82ce6251871d87bdf5890749fe5753f447ec6fe4710ff", 17 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 18 | "zh:9c44e0c143f1d021440e9c448a9bc595f51a95e6cc382fcffe9db6d3b17f24c2", 19 | "zh:b7e6b7b182932a3dbb6ca5f8ebb8d37befe1456f3dffaafb37cee07dc0473696", 20 | "zh:d43fcf4f59cf79b1be3bec164d95fe9edc3fe39195a83226b911918a6538c8b3", 21 | "zh:ec3e383ce1e414f0bd7d3fe73409ff7d2777a5da27248b70fd5df1df323d920b", 22 | "zh:f729b443179bb115bbcbb0369fe46640de1c6dbd627b52694e9b3b8a41ec7881", 23 | "zh:fd532b707746145d3c6d3507bca2b8d44cc618b3d5006db99426221b71db7da7", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /06-resources/README.md: -------------------------------------------------------------------------------- 1 | Steps for implementation: 2 | 3 | 1. [DONE] Deploy a VPC and a subnet 4 | 2. 
[DONE] Deploy an internet gateway and associate it with the VPC 5 | 3. [DONE] Set up a route table with a route to the IGW and associate it with the subnet 6 | 4. [DONE] Deploy an EC2 instance inside the created subnet and associate a public IP 7 | 5. [DONE] Associate a security group that allows public ingress 8 | 6. [DONE] Change the EC2 instance to use a publicly available NGINX AMI 9 | 7. [DONE] Destroy everything 10 | -------------------------------------------------------------------------------- /06-resources/compute.tf: -------------------------------------------------------------------------------- 1 | resource "aws_instance" "web" { 2 | # AMI ID NGINX = ami-0dfee6e7eb44d480b 3 | # AMI ID Ubuntu = ami-0652a081025ec9fee 4 | ami = "ami-0dfee6e7eb44d480b" 5 | associate_public_ip_address = true 6 | instance_type = "t2.micro" 7 | subnet_id = aws_subnet.public.id 8 | vpc_security_group_ids = [aws_security_group.public_http_traffic.id] 9 | root_block_device { 10 | delete_on_termination = true 11 | volume_size = 10 12 | volume_type = "gp3" 13 | } 14 | 15 | tags = merge(local.common_tags, { 16 | Name = "06-resources-ec2" 17 | }) 18 | 19 | lifecycle { 20 | create_before_destroy = true 21 | } 22 | } 23 | 24 | resource "aws_security_group" "public_http_traffic" { 25 | description = "Security group allowing traffic on ports 443 and 80" 26 | name = "public-http-traffic" 27 | vpc_id = aws_vpc.main.id 28 | 29 | tags = merge(local.common_tags, { 30 | Name = "06-resources-sg" 31 | }) 32 | } 33 | 34 | resource "aws_vpc_security_group_ingress_rule" "http" { 35 | security_group_id = aws_security_group.public_http_traffic.id 36 | cidr_ipv4 = "0.0.0.0/0" 37 | from_port = 80 38 | to_port = 80 39 | ip_protocol = "tcp" 40 | } 41 | 42 | resource "aws_vpc_security_group_ingress_rule" "https" { 43 | security_group_id = aws_security_group.public_http_traffic.id 44 | cidr_ipv4 = "0.0.0.0/0" 45 | from_port = 443 46 | to_port = 443 47 | ip_protocol = "tcp" 48 | } 
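One detail worth noting about `06-resources/compute.tf` above: it defines ingress rules only. When the Terraform AWS provider creates an `aws_security_group`, it revokes the allow-all egress rule that AWS would otherwise add by default, so this instance cannot initiate outbound connections. If outbound traffic were needed (e.g. for package updates), an egress rule would have to be declared explicitly. A minimal sketch, not part of the repository, reusing the security group defined above:

```hcl
# Hypothetical addition: allow all outbound IPv4 traffic from the
# public_http_traffic security group. ip_protocol = "-1" means all
# protocols and ports.
resource "aws_vpc_security_group_egress_rule" "all_outbound" {
  security_group_id = aws_security_group.public_http_traffic.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1"
}
```

The standalone `aws_vpc_security_group_egress_rule` resource is used here (rather than an inline `egress` block) to stay consistent with the standalone ingress-rule resources the file already uses; mixing inline and standalone rules on the same security group causes rule conflicts.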
-------------------------------------------------------------------------------- /06-resources/networking.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | common_tags = { 3 | ManagedBy = "Terraform" 4 | Project = "06-resources" 5 | CostCenter = "1234" 6 | } 7 | } 8 | 9 | resource "aws_vpc" "main" { 10 | cidr_block = "10.0.0.0/16" 11 | 12 | tags = merge(local.common_tags, { 13 | Name = "06-resources" 14 | }) 15 | } 16 | 17 | resource "aws_subnet" "public" { 18 | vpc_id = aws_vpc.main.id 19 | cidr_block = "10.0.0.0/24" 20 | 21 | tags = merge(local.common_tags, { 22 | Name = "06-resources-public" 23 | }) 24 | } 25 | 26 | resource "aws_internet_gateway" "main" { 27 | vpc_id = aws_vpc.main.id 28 | 29 | tags = merge(local.common_tags, { 30 | Name = "06-resources-main" 31 | }) 32 | } 33 | 34 | resource "aws_route_table" "public" { 35 | vpc_id = aws_vpc.main.id 36 | route { 37 | cidr_block = "0.0.0.0/0" 38 | gateway_id = aws_internet_gateway.main.id 39 | } 40 | 41 | tags = merge(local.common_tags, { 42 | Name = "06-resources-main" 43 | }) 44 | } 45 | 46 | resource "aws_route_table_association" "public" { 47 | subnet_id = aws_subnet.public.id 48 | route_table_id = aws_route_table.public.id 49 | } -------------------------------------------------------------------------------- /06-resources/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } -------------------------------------------------------------------------------- /07-data-sources/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.38.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:axFddT4mkdtZREgkDXwXdzZGm1qxheF0fLN7S7bJJX4=", 9 | "zh:0d58264440fd28b6729990b48d8fd61e732f5570689d17bbbc0c5f2324d3dd00", 10 | "zh:175e24a3d399495fc91da359cc30a9fe06b7eeb98804816abcf1493859f6d28e", 11 | "zh:244a1f56d6710cc1a643f602a185b46d3cd064f6df60330006f92ab32f3ff60c", 12 | "zh:30dd99413867b1be808b656551a2f0452e4e37787f963780c51f1f85bf406441", 13 | "zh:3629d4e212c8ffd8e74c4ab9e9d22ca7fff803052366d011c014591fa65beb48", 14 | "zh:521badb184bbdde5dddb1228f7a241997db52ea51c9f8039ed5a626362952cf4", 15 | "zh:5580a937e1f5fa59c16c4b9802079aa45a16c7c69e5b7d4e97aebf2c0fb4bd00", 16 | "zh:87b801057d492ff0adc82ce6251871d87bdf5890749fe5753f447ec6fe4710ff", 17 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 18 | "zh:9c44e0c143f1d021440e9c448a9bc595f51a95e6cc382fcffe9db6d3b17f24c2", 19 | "zh:b7e6b7b182932a3dbb6ca5f8ebb8d37befe1456f3dffaafb37cee07dc0473696", 20 | "zh:d43fcf4f59cf79b1be3bec164d95fe9edc3fe39195a83226b911918a6538c8b3", 21 | "zh:ec3e383ce1e414f0bd7d3fe73409ff7d2777a5da27248b70fd5df1df323d920b", 22 | "zh:f729b443179bb115bbcbb0369fe46640de1c6dbd627b52694e9b3b8a41ec7881", 23 | "zh:fd532b707746145d3c6d3507bca2b8d44cc618b3d5006db99426221b71db7da7", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /07-data-sources/compute.tf: -------------------------------------------------------------------------------- 1 | data "aws_ami" "ubuntu" { 2 | most_recent = true 3 | owners = ["099720109477"] # Owner is Canonical 4 | 5 | filter { 6 | name = "name" 7 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 8 | } 9 | 10 | filter { 11 | name = "virtualization-type" 12 | values = ["hvm"] 13 | } 14 | } 15 | 16 | data "aws_caller_identity" "current" {} 17 | 18 | data "aws_region" "current" {} 19 | 20 | data "aws_vpc" "prod_vpc" { 21 | tags = { 22 | Env = "Prod" 23 | } 24 
| } 25 | 26 | data "aws_availability_zones" "available" { 27 | state = "available" 28 | } 29 | 30 | data "aws_iam_policy_document" "static_website" { 31 | statement { 32 | sid = "PublicReadGetObject" 33 | 34 | principals { 35 | type = "*" 36 | identifiers = ["*"] 37 | } 38 | 39 | actions = ["s3:GetObject"] 40 | 41 | resources = ["arn:aws:s3:::*/*"] 42 | } 43 | } 44 | 45 | resource "aws_s3_bucket" "public_read_bucket" { 46 | bucket = "my-public-read-bucket" 47 | } 48 | 49 | output "iam_policy" { 50 | value = data.aws_iam_policy_document.static_website.json 51 | } 52 | 53 | output "azs" { 54 | value = data.aws_availability_zones.available 55 | } 56 | 57 | output "prod_vpc_id" { 58 | value = data.aws_vpc.prod_vpc.id 59 | } 60 | 61 | output "ubuntu_ami_data" { 62 | value = data.aws_ami.ubuntu.id 63 | } 64 | 65 | output "aws_caller_identity" { 66 | value = data.aws_caller_identity.current 67 | } 68 | 69 | output "aws_region" { 70 | value = data.aws_region.current 71 | } 72 | 73 | resource "aws_instance" "web" { 74 | # AMI ID NGINX = ami-0dfee6e7eb44d480b 75 | ami = data.aws_ami.ubuntu.id 76 | associate_public_ip_address = true 77 | instance_type = "t2.micro" 78 | 79 | root_block_device { 80 | delete_on_termination = true 81 | volume_size = 10 82 | volume_type = "gp3" 83 | } 84 | } -------------------------------------------------------------------------------- /07-data-sources/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } 14 | -------------------------------------------------------------------------------- /08-input-vars-locals-outputs/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 
2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.39.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:5u4QIR/mnFh1bcDpKE5TulO38AaVV5omcDBbz90X+5o=", 9 | "zh:01e405306470ed784bc9d38dbaeff394bd2c0f7d58e5592c5d0165c87d84e4b0", 10 | "zh:0328fbd42a91e50601318d2c364a80ebd3b4e5755c85df6fafd2fed80bc54598", 11 | "zh:042420e08d1ef1e2ed51c394539a6db27e031bedb9eaa19db9ccaf9fa93b9b36", 12 | "zh:47dee460b2c06676e16f5070fff71e5e9eb24b74bcd7b4f7b53eea8ebd3c972f", 13 | "zh:49c34f21a05bf5150e3a0a87c997fcd831ae81e3f2df86191e2fdd231525b585", 14 | "zh:4d5a98726216e260296bdc13e562179a743ef4738b65154da697c2d9e9eb5c9b", 15 | "zh:7fb08af13c868d8d20519b373eeb6707a0cea078495123e28e71727993474f53", 16 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 17 | "zh:a57e7e44f79991e55be02f2b147a433b5a2690188be0510409b79e9b00e37009", 18 | "zh:aeeb120cd6245f67ae3e52e13005c2dc1f091787564dd6b39ee6c05b2c239c53", 19 | "zh:b5d39c6e55d0355f9fd93a802355166bd27edaf9acb753b190562474447674b2", 20 | "zh:cfcc1922b380db5f90fd5ffc1e49fb9f316e6c41de4603f65b05d20ace72e15e", 21 | "zh:d221d8cea09229743d2647944c0420a8e9e5d7e3ff088b30cbce85a8b051dcab", 22 | "zh:dccca0239df6d39686b0f0947040b3c0e1270a4bab268c402b742cf5a7759296", 23 | "zh:ef0564fb70e8210db9bf472f662e093a1a1f204a1c1cb46467231be1a78882f3", 24 | ] 25 | } 26 | 27 | provider "registry.terraform.io/hashicorp/random" { 28 | version = "3.6.0" 29 | constraints = "~> 3.0" 30 | hashes = [ 31 | "h1:I8MBeauYA8J8yheLJ8oSMWqB0kovn16dF/wKZ1QTdkk=", 32 | "zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d", 33 | "zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211", 34 | "zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829", 35 | "zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d", 36 | "zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055", 37 | 
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", 38 | "zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17", 39 | "zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21", 40 | "zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839", 41 | "zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0", 42 | "zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c", 43 | "zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e", 44 | ] 45 | } 46 | -------------------------------------------------------------------------------- /08-input-vars-locals-outputs/compute.tf: -------------------------------------------------------------------------------- 1 | data "aws_ami" "ubuntu" { 2 | most_recent = true 3 | owners = ["099720109477"] # Owner is Canonical 4 | 5 | filter { 6 | name = "name" 7 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 8 | } 9 | 10 | filter { 11 | name = "virtualization-type" 12 | values = ["hvm"] 13 | } 14 | } 15 | 16 | # resource "aws_instance" "compute" { 17 | # ami = data.aws_ami.ubuntu.id 18 | # instance_type = var.ec2_instance_type 19 | 20 | # root_block_device { 21 | # delete_on_termination = true 22 | # volume_size = var.ec2_volume_config.size 23 | # volume_type = var.ec2_volume_config.type 24 | # } 25 | 26 | # tags = merge(local.common_tags, var.additional_tags) 27 | # } -------------------------------------------------------------------------------- /08-input-vars-locals-outputs/outputs.tf: -------------------------------------------------------------------------------- 1 | output "s3_bucket_name" { 2 | value = aws_s3_bucket.project_bucket.bucket 3 | sensitive = true 4 | description = "The name of the S3 bucket" 5 | } 6 | 7 | output "sensitive_var" { 8 | sensitive = true 9 | value = var.my_sensitive_value 10 | } -------------------------------------------------------------------------------- 
/08-input-vars-locals-outputs/override.tfvars: -------------------------------------------------------------------------------- 1 | additional_tags = { 2 | ValuesFrom = "override.tfvars" 3 | } -------------------------------------------------------------------------------- /08-input-vars-locals-outputs/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | 9 | random = { 10 | source = "hashicorp/random" 11 | version = "~> 3.0" 12 | } 13 | } 14 | } 15 | 16 | provider "aws" { 17 | region = "eu-west-1" 18 | } -------------------------------------------------------------------------------- /08-input-vars-locals-outputs/s3.tf: -------------------------------------------------------------------------------- 1 | resource "random_id" "project_bucket_suffix" { 2 | byte_length = 4 3 | } 4 | 5 | resource "aws_s3_bucket" "project_bucket" { 6 | bucket = "${local.project}-${random_id.project_bucket_suffix.hex}" 7 | 8 | tags = merge(local.common_tags, var.additional_tags) 9 | } -------------------------------------------------------------------------------- /08-input-vars-locals-outputs/shared-locals.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | project = "08-input-vars-locals-outputs" 3 | project_owner = "terraform-course" 4 | cost_center = "1234" 5 | managed_by = "Terraform" 6 | } 7 | 8 | locals { 9 | common_tags = { 10 | project = local.project 11 | project_owner = local.project_owner 12 | cost_center = local.cost_center 13 | managed_by = local.managed_by 14 | sensitive_tag = var.my_sensitive_value 15 | } 16 | } -------------------------------------------------------------------------------- /08-input-vars-locals-outputs/terraform.tfvars: -------------------------------------------------------------------------------- 1 | ec2_instance_type 
= "t2.micro" 2 | 3 | ec2_volume_config = { 4 | size = 10 5 | type = "gp2" 6 | } 7 | 8 | additional_tags = { 9 | ValuesFrom = "terraform.tfvars" 10 | } -------------------------------------------------------------------------------- /08-input-vars-locals-outputs/variables.tf: -------------------------------------------------------------------------------- 1 | variable "ec2_instance_type" { 2 | type = string 3 | description = "The type of the managed EC2 instances." 4 | 5 | validation { 6 | condition = contains(["t2.micro", "t3.micro"], var.ec2_instance_type) 7 | error_message = "Only t2.micro and t3.micro instances are supported." 8 | } 9 | } 10 | 11 | variable "ec2_volume_config" { 12 | type = object({ 13 | size = number 14 | type = string 15 | }) 16 | description = "The size and type of the root block volume for EC2 instances." 17 | 18 | default = { 19 | size = 10 20 | type = "gp3" 21 | } 22 | } 23 | 24 | variable "additional_tags" { 25 | type = map(string) 26 | default = {} 27 | } 28 | 29 | variable "my_sensitive_value" { 30 | type = string 31 | sensitive = true 32 | } -------------------------------------------------------------------------------- /09-expressions/for-lists.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | double_numbers = [for num in var.numbers_list : num * 2] 3 | even_numbers = [for num in var.numbers_list : num if num % 2 == 0] 4 | firstnames = [for person in var.objects_list : person.firstname] 5 | fullnames = [ 6 | for person in var.objects_list : "${person.firstname} ${person.lastname}" 7 | ] 8 | } 9 | 10 | output "double_numbers" { 11 | value = local.double_numbers 12 | } 13 | 14 | output "even_numbers" { 15 | value = local.even_numbers 16 | } 17 | 18 | output "firstnames" { 19 | value = local.firstnames 20 | } 21 | 22 | output "fullnames" { 23 | value = local.fullnames 24 | } -------------------------------------------------------------------------------- /09-expressions/for-maps.tf:
-------------------------------------------------------------------------------- 1 | locals { 2 | doubles_map = { for key, value in var.numbers_map : key => value * 2 } 3 | even_map = { for key, value in var.numbers_map : key => 4 | value * 2 if value % 2 == 0 5 | } 6 | } 7 | 8 | output "doubles_map" { 9 | value = local.doubles_map 10 | } 11 | 12 | output "even_map" { 13 | value = local.even_map 14 | } -------------------------------------------------------------------------------- /09-expressions/lists-maps.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | users_map = { 3 | for user_info in var.users : user_info.username => user_info.role... 4 | } 5 | users_map2 = { 6 | for username, roles in local.users_map : username => { 7 | roles = roles 8 | } 9 | } 10 | usernames_from_map = [for username, roles in local.users_map : username] 11 | } 12 | 13 | output "users_map" { 14 | value = local.users_map 15 | } 16 | 17 | output "users_map2" { 18 | value = local.users_map2 19 | } 20 | 21 | output "usernames_from_map" { 22 | value = local.usernames_from_map 23 | } 24 | 25 | output "user_to_output_roles" { 26 | value = local.users_map2[var.user_to_output].roles 27 | } -------------------------------------------------------------------------------- /09-expressions/operators.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | math = 2 * 2 # Math operators: *, /, %, +, - (binary and unary -) 3 | equality = 2 != 2 # Equality operators: ==, != 4 | comparison = 2 < 1 # Comparison operators: <, <=, >, >= 5 | logical = true || false # Logical operators: ||, &&, ! 6 | } 7 | 8 | output "operators" { 9 | value = { 10 | math = local.math 11 | equality = local.equality 12 | comparison = local.comparison 13 | logical = local.logical 14 | } 15 | } -------------------------------------------------------------------------------- /09-expressions/provider.tf: -------------------------------------------------------------------------------- 1 |
terraform { 2 | required_version = "~> 1.7" 3 | } -------------------------------------------------------------------------------- /09-expressions/splat.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | firstnames_from_splat = var.objects_list[*].firstname 3 | roles_from_splat = [for username, user_props in local.users_map2 : user_props.roles] 4 | roles_from_splat_with_values = values(local.users_map2)[*].roles 5 | } 6 | 7 | output "firstnames_from_splat" { 8 | value = local.firstnames_from_splat 9 | } 10 | 11 | output "roles_from_splat" { 12 | value = local.roles_from_splat 13 | } -------------------------------------------------------------------------------- /09-expressions/terraform.tfvars: -------------------------------------------------------------------------------- 1 | numbers_list = [1, 2, 3, 4, 5] 2 | 3 | numbers_map = { 4 | one = 1 5 | two = 2 6 | three = 3 7 | four = 4 8 | five = 5 9 | } 10 | 11 | objects_list = [ 12 | { 13 | firstname = "John" 14 | lastname = "Smith" 15 | }, 16 | { 17 | firstname = "Jane" 18 | lastname = "Smith" 19 | }, 20 | { 21 | firstname = "Lauro" 22 | lastname = "Müller" 23 | } 24 | ] 25 | 26 | users = [ 27 | { 28 | role = "admin" 29 | username = "john" 30 | }, 31 | { 32 | role = "developer" 33 | username = "jane" 34 | }, 35 | { 36 | role = "auditor" 37 | username = "jane" 38 | }, 39 | { 40 | role = "auditor" 41 | username = "lauro" 42 | } 43 | ] 44 | 45 | user_to_output = "jane" -------------------------------------------------------------------------------- /09-expressions/variables.tf: -------------------------------------------------------------------------------- 1 | variable "numbers_list" { 2 | type = list(number) 3 | } 4 | 5 | variable "numbers_map" { 6 | type = map(number) 7 | } 8 | 9 | variable "objects_list" { 10 | type = list(object({ 11 | firstname = string 12 | lastname = string 13 | })) 14 | } 15 | 16 | variable "users" { 17 | type = list(object({ 18 | username = 
string 19 | role = string 20 | })) 21 | } 22 | 23 | variable "user_to_output" { 24 | type = string 25 | } -------------------------------------------------------------------------------- /10-functions/functions-examples.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | name = "Lauro Müller" 3 | age = -15 4 | my_object = { 5 | key1 = 10 6 | key2 = "my_value" 7 | } 8 | } 9 | 10 | output "example1" { 11 | value = startswith(lower(local.name), "john") 12 | } 13 | 14 | output "example2" { 15 | value = pow(local.age, 2) 16 | } 17 | 18 | output "example3" { 19 | value = yamldecode(file("${path.module}/users.yaml")).users[*].name 20 | } 21 | 22 | output "example4" { 23 | value = jsonencode(local.my_object) 24 | } -------------------------------------------------------------------------------- /10-functions/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | } -------------------------------------------------------------------------------- /10-functions/users.yaml: -------------------------------------------------------------------------------- 1 | users: 2 | - name: Lauro 3 | group: developers 4 | - name: John 5 | group: auditors -------------------------------------------------------------------------------- /11-multiple-resources/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.39.1" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:V29aKJqUWugn9F2pBIaWLwHLTmgFpirKEv1aNRX1kV4=", 9 | "zh:05c50a5d8edb3ba4ebc4eb6e0d0b5e319142f5983b27821710ed7d475d335bdc", 10 | "zh:082986a5784dd21957e632371b289e549f051a4ea21d5c78c6d744c3537f03c5", 11 | "zh:192ae622ba562eacc4921ed549a794506179233d724fdd15a4f147f3400724a0", 12 | "zh:19a1d4637a62de90b0da174c0bf01000cd900488f7e8f709d8a37f082c59756b", 13 | "zh:1d7689a8583515f1705972d7ce57ccfab96215b19905530d2c78c02dcfaff583", 14 | "zh:22c446a21209a52ab74b4ba1ede0b220531e97ce479430047e493a2c45e1d8cb", 15 | "zh:4154de82290ab4e9f81bac1ea62342de8b3b7a608f99258c190d4dd1c6663e47", 16 | "zh:6bc4859ccdc54f28af9286b2fa090a31dcb345138d68c471510b737f6a052011", 17 | "zh:73c69e000e0b321e78a4a12fef60d37285f2afec0ea7be9e06163d985101cb59", 18 | "zh:890a3422f5e445b49bae30facf448d0ec9cd647e9155d0b685b5b39e9d331a94", 19 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 20 | "zh:9cd88bec0f5205df9032e3126d4e57edd1c5cc8d45cda25626882dafc485a3b0", 21 | "zh:a3a8e3276d0fbf051bbafa192a2998b05745f2cf285ac8c36a9ad167a75c037f", 22 | "zh:d47e4dcf4c0ad71b9a7c720be4f3a89f6786a82e77bbe8d950794562792a1da5", 23 | "zh:f74e5b2af508c7de80a6ae5198df54a795eeba5058a0cd247828943f0c54f6e0", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /11-multiple-resources/compute.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | ami_ids = { 3 | ubuntu = data.aws_ami.ubuntu.id 4 | nginx = data.aws_ami.nginx.id 5 | } 6 | } 7 | 8 | data "aws_ami" "ubuntu" { 9 | most_recent = true 10 | owners = ["099720109477"] # Owner is Canonical 11 | 12 | filter { 13 | name = "name" 14 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 15 | } 16 | 17 | filter { 18 | name = "virtualization-type" 19 | values = ["hvm"] 20 | } 21 | } 22 | 23 | data "aws_ami" "nginx" { 24 | 
most_recent = true 25 | 26 | filter { 27 | name = "name" 28 | values = ["bitnami-nginx-1.25.4-*-linux-debian-12-x86_64-hvm-ebs-*"] 29 | } 30 | 31 | filter { 32 | name = "virtualization-type" 33 | values = ["hvm"] 34 | } 35 | } 36 | 37 | resource "aws_instance" "from_list" { 38 | count = length(var.ec2_instance_config_list) 39 | ami = local.ami_ids[var.ec2_instance_config_list[count.index].ami] 40 | instance_type = var.ec2_instance_config_list[count.index].instance_type 41 | subnet_id = aws_subnet.main[ 42 | var.ec2_instance_config_list[count.index].subnet_name 43 | ].id 44 | 45 | tags = { 46 | Name = "${local.project}-${count.index}" 47 | Project = local.project 48 | } 49 | } 50 | 51 | resource "aws_instance" "from_map" { 52 | # each.key => holds the key of each key-value pair in the map 53 | # each.value => holds the value of each key-value pair in the map 54 | for_each = var.ec2_instance_config_map 55 | ami = local.ami_ids[each.value.ami] 56 | instance_type = each.value.instance_type 57 | subnet_id = aws_subnet.main[each.value.subnet_name].id 58 | 59 | tags = { 60 | Name = "${local.project}-${each.key}" 61 | Project = local.project 62 | } 63 | } -------------------------------------------------------------------------------- /11-multiple-resources/data.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | project = "11-multiple-resources" 3 | } 4 | -------------------------------------------------------------------------------- /11-multiple-resources/networking.tf: -------------------------------------------------------------------------------- 1 | resource "aws_vpc" "main" { 2 | cidr_block = "10.0.0.0/16" 3 | 4 | tags = { 5 | Project = local.project 6 | Name = local.project 7 | } 8 | } 9 | 10 | resource "aws_subnet" "main" { 11 | for_each = var.subnet_config 12 | vpc_id = aws_vpc.main.id 13 | cidr_block = each.value.cidr_block 14 | 15 | tags = { 16 | Project = local.project 17 | Name = 
"${local.project}-${each.key}" 18 | } 19 | } 20 | -------------------------------------------------------------------------------- /11-multiple-resources/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } -------------------------------------------------------------------------------- /11-multiple-resources/terraform.tfvars: -------------------------------------------------------------------------------- 1 | subnet_config = { 2 | default = { 3 | cidr_block = "10.0.0.0/24" 4 | } 5 | 6 | subnet_1 = { 7 | cidr_block = "10.0.1.0/24" 8 | } 9 | } 10 | 11 | ec2_instance_config_map = { 12 | ubuntu_1 = { 13 | instance_type = "t2.micro", 14 | ami = "ubuntu" 15 | } 16 | 17 | nginx_1 = { 18 | instance_type = "t2.micro", 19 | ami = "nginx" 20 | subnet_name = "subnet_1" 21 | } 22 | } -------------------------------------------------------------------------------- /11-multiple-resources/variables.tf: -------------------------------------------------------------------------------- 1 | variable "subnet_config" { 2 | type = map(object({ 3 | cidr_block = string 4 | })) 5 | 6 | # Ensure that all provided CIDR blocks are valid. 7 | validation { 8 | condition = alltrue([ 9 | for config in values(var.subnet_config) : can(cidrnetmask(config.cidr_block)) 10 | ]) 11 | error_message = "At least one of the provided CIDR blocks is not valid." 
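# How this check behaves (can be verified in `terraform console`):
#   can(cidrnetmask("10.0.0.0/24"))  # true  - "255.255.255.0" is a valid mask
#   can(cidrnetmask("10.0.0.0/33"))  # false - invalid prefix length
# cidrnetmask() raises an error for malformed IPv4 CIDR blocks, so wrapping
# it in can() turns that error into a false condition that fails validation.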
12 | } 13 | } 14 | 15 | variable "ec2_instance_config_list" { 16 | type = list(object({ 17 | instance_type = string 18 | ami = string 19 | subnet_name = optional(string, "default") 20 | })) 21 | 22 | default = [] 23 | 24 | # Ensure that only t2.micro is used 25 | validation { 26 | condition = alltrue([ 27 | for config in var.ec2_instance_config_list : contains(["t2.micro"], config.instance_type) 28 | ]) 29 | error_message = "Only t2.micro instances are allowed." 30 | } 31 | 32 | # Ensure that only ubuntu and nginx images are used. 33 | validation { 34 | condition = alltrue([ 35 | for config in var.ec2_instance_config_list : contains(["nginx", "ubuntu"], config.ami) 36 | ]) 37 | error_message = "At least one of the provided \"ami\" values is not supported.\nSupported \"ami\" values: \"ubuntu\", \"nginx\"." 38 | } 39 | } 40 | 41 | variable "ec2_instance_config_map" { 42 | type = map(object({ 43 | instance_type = string 44 | ami = string 45 | subnet_name = optional(string, "default") 46 | })) 47 | 48 | # Ensure that only t2.micro is used 49 | validation { 50 | condition = alltrue([ 51 | for config in values(var.ec2_instance_config_map) : contains(["t2.micro"], config.instance_type) 52 | ]) 53 | error_message = "Only t2.micro instances are allowed." 54 | } 55 | 56 | # Ensure that only ubuntu and nginx images are used. 57 | validation { 58 | condition = alltrue([ 59 | for config in values(var.ec2_instance_config_map) : contains(["nginx", "ubuntu"], config.ami) 60 | ]) 61 | error_message = "At least one of the provided \"ami\" values is not supported.\nSupported \"ami\" values: \"ubuntu\", \"nginx\"." 62 | } 63 | } -------------------------------------------------------------------------------- /12-public-modules/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.40.0" 6 | constraints = ">= 5.20.0" 7 | hashes = [ 8 | "h1:KEqMoJwLw6Z9bTO4K8nPVvQQa6YiM+bvz89Sw7tNFJw=", 9 | "zh:11f177a2385703740bd26d0652d3dba08575101d7639f386ce5637bdb0e29a13", 10 | "zh:203fc43e69634f1bd487a9dc24b01944dfd568beac78e491f26677d103d343ed", 11 | "zh:3697ebad4929da30ea98276a85d4ce5ebfc48508f4dd149e17e1dcdc7f306c6e", 12 | "zh:421e0799756587e728f75a9024b8d4e38707cd6d65cf0710cb8d189062c85a58", 13 | "zh:4be2adcd4c32a66159c532908f0d425d793c814b3686832e9af549b1515ae032", 14 | "zh:55778b32470212ce6bbfd402529c88e7ea6ba34b0882f85d6ea001ff5c6255a5", 15 | "zh:689a4c1fd1e1d5dab7b169759389c76f25e366f19a470971674321d6fca09791", 16 | "zh:68a23eda608573a053e8738894457bd0c11766bc243e68826c78ab6b5a144710", 17 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 18 | "zh:a1580115c22564e5752e569dc40482503de6cced44da3e9431885cd9d4bf18ea", 19 | "zh:b127756d7ee513691e76c211570580c10eaa2f7a7e4fd27c3566a48ec214991c", 20 | "zh:b7ccea7a759940c8dcf8726272eed6653eed0b31f7223f71e829a344627afd39", 21 | "zh:bb130fc50494fd45406e04b44d242da9a8f138a4a43feb65cf9e86d13aa13629", 22 | "zh:cf1c972c90d5f22c9705274a33792275e284a0a3fcac12ce4083b5a4480463f4", 23 | "zh:ebe60d3887b23703ca6a4c65b15c6d7b8d93ba27a028d996d17882fe6e98d5c0", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /12-public-modules/compute.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | instance_type = "t2.micro" 3 | } 4 | 5 | data "aws_ami" "ubuntu" { 6 | most_recent = true 7 | owners = ["099720109477"] # Owner is Canonical 8 | 9 | filter { 10 | name = "name" 11 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 12 | } 13 | 14 | filter { 15 | name = "virtualization-type" 16 | values = ["hvm"] 17 | } 18 | } 19 | 20 | module "ec2" { 21 | source = "terraform-aws-modules/ec2-instance/aws" 22 | version = "5.6.1" 23 | 
24 | name = local.project_name 25 | ami = data.aws_ami.ubuntu.id 26 | instance_type = local.instance_type 27 | vpc_security_group_ids = [module.vpc.default_security_group_id] 28 | subnet_id = module.vpc.public_subnets[0] 29 | 30 | tags = local.common_tags 31 | } -------------------------------------------------------------------------------- /12-public-modules/networking.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | vpc_cidr = "10.0.0.0/16" 3 | private_subnet_cidrs = ["10.0.0.0/24"] 4 | public_subnet_cidrs = ["10.0.128.0/24"] 5 | } 6 | 7 | data "aws_availability_zones" "azs" { 8 | state = "available" 9 | } 10 | 11 | module "vpc" { 12 | source = "terraform-aws-modules/vpc/aws" 13 | version = "5.5.3" 14 | 15 | cidr = local.vpc_cidr 16 | name = local.project_name 17 | azs = data.aws_availability_zones.azs.names 18 | private_subnets = local.private_subnet_cidrs 19 | public_subnets = local.public_subnet_cidrs 20 | 21 | tags = local.common_tags 22 | } -------------------------------------------------------------------------------- /12-public-modules/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } -------------------------------------------------------------------------------- /12-public-modules/shared-data.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | project_name = "12-public-modules" 3 | common_tags = { 4 | Project = local.project_name 5 | ManagedBy = "Terraform" 6 | } 7 | } -------------------------------------------------------------------------------- /13-local-modules/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained 
automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.41.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:DiX7N35G2NUQRyRGy90+gyePnhP4w77f8LrJUronotE=", 9 | "zh:0553331a6287c146353b6daf6f71987d8c000f407b5e29d6e004ea88faec2e67", 10 | "zh:1a11118984bb2950e8ee7ef17b0f91fc9eb4a42c8e7a9cafd7eb4aca771d06e4", 11 | "zh:236fedd266d152a8233a7fe27ffdd99ca27d9e66a9618a988a4c3da1ac24a33f", 12 | "zh:34bc482ea04cf30d4d216afa55eecf66854e1acf93892cb28a6b5af91d43c9b7", 13 | "zh:39d7eb15832fe339bf46e3bab9852280762a1817bf1afc459eecd430e20e3ad5", 14 | "zh:39fb07429c51556b05170ec2b6bd55e2487adfe1606761eaf1f2a43c4bb20e47", 15 | "zh:71d7cd3013e2f3fa0f65194af29ee6f5fa905e0df2b72b723761dc953f4512ea", 16 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 17 | "zh:9b271ae12394e7e2ce6da568b42226a146e90fd705e02a670fcb93618c4aa19f", 18 | "zh:a884dd978859d001709681f9513ba0fbb0753d1d459a7f3434ecc5f1b8699c49", 19 | "zh:b8c3c7dc10ae4f6143168042dcf8dee63527b103cc37abc238ea06150af38b6e", 20 | "zh:ba94ffe0893ad60c0b70c402e163b4df2cf417e93474a9cc1a37535bba18f22d", 21 | "zh:d5ba851d971ff8d796afd9a100acf55eaac0c197c6ab779787797ce66f419f0e", 22 | "zh:e8c090d0c4f730c4a610dc4f0c22b177a0376d6f78679fc3f1d557b469e656f4", 23 | "zh:ed7623acde26834672969dcb5befdb62900d9f216d32e7478a095d2b040a0ea7", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /13-local-modules/README.md: -------------------------------------------------------------------------------- 1 | A networking module that should: 2 | 1. [DONE] Create a VPC with a given CIDR block 3 | 2. 
Allow the user to provide the configuration for multiple subnets: 4 | 2.1 [DONE] The user should be able to provide CIDR blocks 5 | 2.2 [DONE] The user should be able to provide AWS AZ 6 | 2.3 The user should be able to mark a subnet as public or private 7 | 2.3.1 If at least one subnet is public, we need to deploy an IGW 8 | 2.3.2 We need to associate the public subnets with a public RTB -------------------------------------------------------------------------------- /13-local-modules/compute.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | project_name = "13-local-modules" 3 | } 4 | 5 | data "aws_ami" "ubuntu" { 6 | most_recent = true 7 | owners = ["099720109477"] # Owner is Canonical 8 | 9 | filter { 10 | name = "name" 11 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 12 | } 13 | 14 | filter { 15 | name = "virtualization-type" 16 | values = ["hvm"] 17 | } 18 | } 19 | 20 | resource "aws_instance" "this" { 21 | ami = data.aws_ami.ubuntu.id 22 | instance_type = "t2.micro" 23 | subnet_id = module.vpc.private_subnets["subnet_1"].subnet_id 24 | 25 | tags = { 26 | Name = local.project_name 27 | Project = local.project_name 28 | } 29 | } -------------------------------------------------------------------------------- /13-local-modules/modules/networking/LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 - Lauro Fialho Müller 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and 
this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /13-local-modules/modules/networking/README.md: -------------------------------------------------------------------------------- 1 | # Networking Module 2 | 3 | This module manages the creation of VPCs and Subnets, allowing for the creation of both private and public subnets. 4 | 5 | Example usage: 6 | ``` 7 | module "vpc" { 8 | source = "./modules/networking" 9 | 10 | vpc_config = { 11 | cidr_block = "10.0.0.0/16" 12 | name = "your_vpc" 13 | } 14 | 15 | subnet_config = { 16 | subnet_1 = { 17 | cidr_block = "10.0.0.0/24" 18 | az = "eu-west-1a" 19 | } 20 | subnet_2 = { 21 | cidr_block = "10.0.1.0/24" 22 | public = true 23 | az = "eu-west-1b" 24 | } 25 | } 26 | } 27 | ``` -------------------------------------------------------------------------------- /13-local-modules/modules/networking/examples/complete/main.tf: -------------------------------------------------------------------------------- 1 | module "vpc" { 2 | source = "./modules/networking" 3 | 4 | vpc_config = { 5 | cidr_block = "10.0.0.0/16" 6 | name = "13-local-modules" 7 | } 8 | 9 | subnet_config = { 10 | subnet_1 = { 11 | cidr_block = "10.0.0.0/24" 12 | az = "eu-west-1a" 13 | } 14 | subnet_2 = { 15 | cidr_block = "10.0.1.0/24" 16 | # Public subnets are indicated by setting the "public" option to true. 
17 | public = true 18 | az = "eu-west-1b" 19 | } 20 | } 21 | } -------------------------------------------------------------------------------- /13-local-modules/modules/networking/outputs.tf: -------------------------------------------------------------------------------- 1 | # 1. VPC ID 2 | # 2. Public subnets - subnet_key => { subnet_id, availability_zone } 3 | # 3. Private subnets - subnet_key => { subnet_id, availability_zone } 4 | 5 | locals { 6 | output_public_subnets = { 7 | for key in keys(local.public_subnets) : key => { 8 | subnet_id = aws_subnet.this[key].id 9 | availability_zone = aws_subnet.this[key].availability_zone 10 | } 11 | } 12 | 13 | output_private_subnets = { 14 | for key in keys(local.private_subnets) : key => { 15 | subnet_id = aws_subnet.this[key].id 16 | availability_zone = aws_subnet.this[key].availability_zone 17 | } 18 | } 19 | } 20 | 21 | output "vpc_id" { 22 | description = "The AWS ID from the created VPC" 23 | value = aws_vpc.this.id 24 | } 25 | 26 | output "public_subnets" { 27 | description = "The ID and the availability zone of public subnets." 28 | value = local.output_public_subnets 29 | } 30 | 31 | output "private_subnets" { 32 | description = "The ID and the availability zone of private subnets." 
33 | value = local.output_private_subnets 34 | } -------------------------------------------------------------------------------- /13-local-modules/modules/networking/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "~> 5.0" 6 | } 7 | } 8 | } -------------------------------------------------------------------------------- /13-local-modules/modules/networking/variables.tf: -------------------------------------------------------------------------------- 1 | variable "vpc_config" { 2 | type = object({ 3 | cidr_block = string 4 | name = string 5 | }) 6 | 7 | validation { 8 | condition = can(cidrnetmask(var.vpc_config.cidr_block)) 9 | error_message = "The cidr_block config option must contain a valid CIDR block." 10 | } 11 | } 12 | 13 | variable "subnet_config" { 14 | type = map(object({ 15 | cidr_block = string 16 | public = optional(bool, false) 17 | az = string 18 | })) 19 | 20 | validation { 21 | condition = alltrue([ 22 | for config in values(var.subnet_config) : can(cidrnetmask(config.cidr_block)) 23 | ]) 24 | error_message = "The cidr_block config option must contain a valid CIDR block." 
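# How the alltrue() pattern above behaves (can be verified in `terraform console`):
#   alltrue([can(cidrnetmask("10.0.0.0/24")), can(cidrnetmask("not-a-cidr"))])
# evaluates to false, because alltrue() returns true only if every element is
# true - so a single invalid cidr_block rejects the whole subnet_config input.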
25 | } 26 | } -------------------------------------------------------------------------------- /13-local-modules/modules/networking/vpc.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | public_subnets = { 3 | for key, config in var.subnet_config : key => config if config.public 4 | } 5 | 6 | private_subnets = { 7 | for key, config in var.subnet_config : key => config if !config.public 8 | } 9 | } 10 | 11 | data "aws_availability_zones" "available" { 12 | state = "available" 13 | } 14 | 15 | resource "aws_vpc" "this" { 16 | cidr_block = var.vpc_config.cidr_block 17 | 18 | tags = { 19 | Name = var.vpc_config.name 20 | } 21 | } 22 | 23 | resource "aws_subnet" "this" { 24 | for_each = var.subnet_config 25 | vpc_id = aws_vpc.this.id 26 | availability_zone = each.value.az 27 | cidr_block = each.value.cidr_block 28 | 29 | tags = { 30 | Name = each.key 31 | Access = each.value.public ? "Public" : "Private" 32 | } 33 | 34 | lifecycle { 35 | precondition { 36 | condition = contains(data.aws_availability_zones.available.names, each.value.az) 37 | error_message = <<-EOT 38 | The AZ "${each.value.az}" provided for the subnet "${each.key}" is invalid. 39 | 40 | The applied AWS region "${data.aws_availability_zones.available.id}" supports the following AZs: 41 | [${join(", ", data.aws_availability_zones.available.names)}] 42 | EOT 43 | } 44 | } 45 | } 46 | 47 | resource "aws_internet_gateway" "this" { 48 | count = length(local.public_subnets) > 0 ? 1 : 0 49 | vpc_id = aws_vpc.this.id 50 | } 51 | 52 | resource "aws_route_table" "public_rtb" { 53 | count = length(local.public_subnets) > 0 ? 
1 : 0 54 | vpc_id = aws_vpc.this.id 55 | 56 | route { 57 | cidr_block = "0.0.0.0/0" 58 | gateway_id = aws_internet_gateway.this[0].id 59 | } 60 | } 61 | 62 | resource "aws_route_table_association" "public" { 63 | for_each = local.public_subnets 64 | 65 | subnet_id = aws_subnet.this[each.key].id 66 | route_table_id = aws_route_table.public_rtb[0].id 67 | } -------------------------------------------------------------------------------- /13-local-modules/networking.tf: -------------------------------------------------------------------------------- 1 | module "vpc" { 2 | source = "./modules/networking" 3 | 4 | vpc_config = { 5 | cidr_block = "10.0.0.0/16" 6 | name = "13-local-modules" 7 | } 8 | 9 | subnet_config = { 10 | subnet_1 = { 11 | cidr_block = "10.0.0.0/24" 12 | az = "eu-west-1a" 13 | } 14 | subnet_2 = { 15 | cidr_block = "10.0.1.0/24" 16 | public = true 17 | az = "eu-west-1b" 18 | } 19 | subnet_3 = { 20 | cidr_block = "10.0.2.0/24" 21 | public = true 22 | az = "eu-west-1c" 23 | } 24 | subnet_4 = { 25 | cidr_block = "10.0.3.0/24" 26 | az = "eu-west-1a" 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /13-local-modules/outputs.tf: -------------------------------------------------------------------------------- 1 | output "module_vpc_id" { 2 | value = module.vpc.vpc_id 3 | } 4 | 5 | output "module_public_subnets" { 6 | value = module.vpc.public_subnets 7 | } 8 | 9 | output "module_private_subnets" { 10 | value = module.vpc.private_subnets 11 | } -------------------------------------------------------------------------------- /13-local-modules/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "~> 5.0" 6 | } 7 | } 8 | } 9 | 10 | provider "aws" { 11 | region = "eu-west-1" 12 | } -------------------------------------------------------------------------------- 
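A note on how the next section relates to the local module above: `14-use-own-module` consumes this same networking module from the Terraform Registry instead of a local path. A minimal sketch contrasting the two call styles, using the exact `source` strings that appear in this repository (everything else about the module call stays the same):

```hcl
# Local module: source is a relative path; no version pinning is possible.
module "vpc" {
  source = "./modules/networking"
  # ... vpc_config and subnet_config as in 13-local-modules/networking.tf
}

# Registry module: source follows <NAMESPACE>/<NAME>/<PROVIDER>
# and supports a version constraint.
module "networking-tf-course" {
  source  = "udemy-lauromueller/networking-tf-course/aws"
  version = "0.1.1"
  # ... same input variables as above
}
```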
/14-use-own-module/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.41.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:DiX7N35G2NUQRyRGy90+gyePnhP4w77f8LrJUronotE=", 9 | "zh:0553331a6287c146353b6daf6f71987d8c000f407b5e29d6e004ea88faec2e67", 10 | "zh:1a11118984bb2950e8ee7ef17b0f91fc9eb4a42c8e7a9cafd7eb4aca771d06e4", 11 | "zh:236fedd266d152a8233a7fe27ffdd99ca27d9e66a9618a988a4c3da1ac24a33f", 12 | "zh:34bc482ea04cf30d4d216afa55eecf66854e1acf93892cb28a6b5af91d43c9b7", 13 | "zh:39d7eb15832fe339bf46e3bab9852280762a1817bf1afc459eecd430e20e3ad5", 14 | "zh:39fb07429c51556b05170ec2b6bd55e2487adfe1606761eaf1f2a43c4bb20e47", 15 | "zh:71d7cd3013e2f3fa0f65194af29ee6f5fa905e0df2b72b723761dc953f4512ea", 16 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 17 | "zh:9b271ae12394e7e2ce6da568b42226a146e90fd705e02a670fcb93618c4aa19f", 18 | "zh:a884dd978859d001709681f9513ba0fbb0753d1d459a7f3434ecc5f1b8699c49", 19 | "zh:b8c3c7dc10ae4f6143168042dcf8dee63527b103cc37abc238ea06150af38b6e", 20 | "zh:ba94ffe0893ad60c0b70c402e163b4df2cf417e93474a9cc1a37535bba18f22d", 21 | "zh:d5ba851d971ff8d796afd9a100acf55eaac0c197c6ab779787797ce66f419f0e", 22 | "zh:e8c090d0c4f730c4a610dc4f0c22b177a0376d6f78679fc3f1d557b469e656f4", 23 | "zh:ed7623acde26834672969dcb5befdb62900d9f216d32e7478a095d2b040a0ea7", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /14-use-own-module/networking.tf: -------------------------------------------------------------------------------- 1 | module "networking-tf-course" { 2 | source = "udemy-lauromueller/networking-tf-course/aws" 3 | version = "0.1.1" 4 | 5 | vpc_config = { 6 | cidr_block = "10.0.0.0/16" 7 | name = "13-local-modules" 8 | } 9 | 10 | subnet_config = 
{ 11 | subnet_1 = { 12 | cidr_block = "10.0.0.0/24" 13 | az = "eu-west-1a" 14 | } 15 | subnet_2 = { 16 | cidr_block = "10.0.1.0/24" 17 | public = true 18 | az = "eu-west-1b" 19 | } 20 | } 21 | } -------------------------------------------------------------------------------- /14-use-own-module/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "~> 5.0" 6 | } 7 | } 8 | } 9 | 10 | provider "aws" { 11 | region = "eu-west-1" 12 | } -------------------------------------------------------------------------------- /15-object-validation/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.41.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:DiX7N35G2NUQRyRGy90+gyePnhP4w77f8LrJUronotE=", 9 | "zh:0553331a6287c146353b6daf6f71987d8c000f407b5e29d6e004ea88faec2e67", 10 | "zh:1a11118984bb2950e8ee7ef17b0f91fc9eb4a42c8e7a9cafd7eb4aca771d06e4", 11 | "zh:236fedd266d152a8233a7fe27ffdd99ca27d9e66a9618a988a4c3da1ac24a33f", 12 | "zh:34bc482ea04cf30d4d216afa55eecf66854e1acf93892cb28a6b5af91d43c9b7", 13 | "zh:39d7eb15832fe339bf46e3bab9852280762a1817bf1afc459eecd430e20e3ad5", 14 | "zh:39fb07429c51556b05170ec2b6bd55e2487adfe1606761eaf1f2a43c4bb20e47", 15 | "zh:71d7cd3013e2f3fa0f65194af29ee6f5fa905e0df2b72b723761dc953f4512ea", 16 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 17 | "zh:9b271ae12394e7e2ce6da568b42226a146e90fd705e02a670fcb93618c4aa19f", 18 | "zh:a884dd978859d001709681f9513ba0fbb0753d1d459a7f3434ecc5f1b8699c49", 19 | "zh:b8c3c7dc10ae4f6143168042dcf8dee63527b103cc37abc238ea06150af38b6e", 20 | "zh:ba94ffe0893ad60c0b70c402e163b4df2cf417e93474a9cc1a37535bba18f22d", 21 | 
"zh:d5ba851d971ff8d796afd9a100acf55eaac0c197c6ab779787797ce66f419f0e", 22 | "zh:e8c090d0c4f730c4a610dc4f0c22b177a0376d6f78679fc3f1d557b469e656f4", 23 | "zh:ed7623acde26834672969dcb5befdb62900d9f216d32e7478a095d2b040a0ea7", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /15-object-validation/compute.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | allowed_instance_types = ["t2.micro", "t3.micro"] 3 | } 4 | 5 | data "aws_ami" "ubuntu" { 6 | most_recent = true 7 | owners = ["099720109477"] # Owner is Canonical 8 | 9 | filter { 10 | name = "name" 11 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 12 | } 13 | 14 | filter { 15 | name = "virtualization-type" 16 | values = ["hvm"] 17 | } 18 | } 19 | 20 | resource "aws_instance" "this" { 21 | ami = data.aws_ami.ubuntu.id 22 | instance_type = var.instance_type 23 | subnet_id = aws_subnet.this[0].id 24 | 25 | root_block_device { 26 | delete_on_termination = true 27 | volume_size = 10 28 | volume_type = "gp3" 29 | } 30 | 31 | tags = { 32 | CostCenter = "1234" 33 | } 34 | 35 | lifecycle { 36 | create_before_destroy = true 37 | 38 | postcondition { 39 | condition = contains(local.allowed_instance_types, self.instance_type) 40 | error_message = "The instance type must be one of: ${join(", ", local.allowed_instance_types)}." 41 | } 42 | } 43 | } 44 | 45 | check "cost_center_check" { 46 | assert { 47 | condition = try(aws_instance.this.tags.CostCenter, "") != "" 48 | error_message = "Your AWS Instance does not have a CostCenter tag." 
49 | } 50 | } -------------------------------------------------------------------------------- /15-object-validation/networking.tf: -------------------------------------------------------------------------------- 1 | data "aws_vpc" "default" { 2 | default = true 3 | } 4 | 5 | data "aws_availability_zones" "available" { 6 | state = "available" 7 | } 8 | 9 | resource "aws_subnet" "this" { 10 | count = 4 11 | vpc_id = data.aws_vpc.default.id 12 | cidr_block = "172.31.${128 + count.index}.0/24" 13 | availability_zone = data.aws_availability_zones.available.names[ 14 | count.index % length(data.aws_availability_zones.available.names) 15 | ] 16 | 17 | lifecycle { 18 | postcondition { 19 | condition = contains(data.aws_availability_zones.available.names, self.availability_zone) 20 | error_message = "The AZ ${self.availability_zone} is not available in the current region." 21 | } 22 | } 23 | } 24 | 25 | check "high_availability_check" { 26 | assert { 27 | condition = length(toset([for subnet in aws_subnet.this : subnet.availability_zone])) > 1 28 | error_message = <<-EOT 29 | You are deploying all subnets within the same AZ. 30 | Please consider distributing them across AZs for higher availability. 
31 | EOT 32 | } 33 | } 34 | -------------------------------------------------------------------------------- /15-object-validation/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } -------------------------------------------------------------------------------- /15-object-validation/variables.tf: -------------------------------------------------------------------------------- 1 | variable "instance_type" { 2 | type = string 3 | default = "t2.micro" 4 | } -------------------------------------------------------------------------------- /16-state-manipulation/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.42.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:Yxsj34z606m8wssYDHyleuBlQ9i+94MHwRs38thQPZU=", 9 | "zh:0fb12bd56a3ad777b29f957c56dd2119776dbc01b6074458f597990e368c82de", 10 | "zh:16e99c13bef6e3777f67c240c916f57c01c9c142254cfb2720e08281ff906447", 11 | "zh:218268f5fe73bcb19e9a996f781ab66df0da9e333d1c60612e3c51ad28a5105f", 12 | "zh:220b17f7053d11548f35136669687d30ef1f1036e15393275325fd2b9654c715", 13 | "zh:2256cfd74988ce05eada76b42efffc6fe2bf4d60b61f57e4db4fcf65ced4c666", 14 | "zh:52da19f531e0cb9828f73bca620e30264e63a494bd7f9ce826aabcf010d3a241", 15 | "zh:56069ce08d720280ba39aaf2fdd40c4357ffb54262c80e4d39c4e540a38e76af", 16 | "zh:82c81398e68324029167f813fbb7c54fa3d233e99fa05001d85cbce8bdd08bb3", 17 | "zh:82d6eaa87f5ab318959064e6c89adc2baabaf70b13f2f7de866f62416de05352", 18 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 19 | "zh:ade8490cfdd8de8b9a82986588595b67e0ad1048d9e2d3a6f5164320179c2cd0", 20 | "zh:b094ef56ae9bfffd586f46d4f7fb0097798738df758a8f3c51578ee163495c7e", 21 | "zh:bd5e68e1e454bae0f8d73cff8448e814a35855a561c33b745e1b8b525fb06c9f", 22 | "zh:c111c6a854bf121facca1642d528bfa80fb4214554ac6c33e4a59c86bc605b71", 23 | "zh:e04df69a557adbcdf8efc77eb45be748f0acbe800ccede1e0895393c87722a0f", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /16-state-manipulation/import.tf: -------------------------------------------------------------------------------- 1 | /* 2 | 1. Import via CLI 3 | 2. 
Import via the import block 4 | */ 5 | 6 | resource "aws_s3_bucket" "remote_state" { 7 | bucket = "terraform-course-lauromueller-remote-backend" 8 | 9 | tags = { 10 | ManagedBy = "Terraform" 11 | } 12 | } 13 | 14 | import { 15 | to = aws_s3_bucket_public_access_block.remote_state 16 | id = aws_s3_bucket.remote_state.bucket 17 | } 18 | 19 | resource "aws_s3_bucket_public_access_block" "remote_state" { 20 | bucket = aws_s3_bucket.remote_state.bucket 21 | 22 | block_public_acls = true 23 | block_public_policy = true 24 | ignore_public_acls = true 25 | restrict_public_buckets = true 26 | } -------------------------------------------------------------------------------- /16-state-manipulation/modules/compute/compute.tf: -------------------------------------------------------------------------------- 1 | resource "aws_instance" "this" { 2 | ami = var.ami_id 3 | instance_type = "t2.micro" 4 | } -------------------------------------------------------------------------------- /16-state-manipulation/modules/compute/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "~> 5.0" 6 | } 7 | } 8 | } 9 | -------------------------------------------------------------------------------- /16-state-manipulation/modules/compute/variables.tf: -------------------------------------------------------------------------------- 1 | variable "ami_id" { 2 | type = string 3 | description = "The ID of the AMI to use for the EC2 instance." 
4 | } -------------------------------------------------------------------------------- /16-state-manipulation/move-state.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | ec2_names = ["instance1", "instance2"] 3 | } 4 | 5 | data "aws_ami" "ubuntu" { 6 | most_recent = true 7 | owners = ["099720109477"] # Owner is Canonical 8 | 9 | filter { 10 | name = "name" 11 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 12 | } 13 | 14 | filter { 15 | name = "virtualization-type" 16 | values = ["hvm"] 17 | } 18 | } 19 | 20 | moved { 21 | from = aws_instance.new_list[0] 22 | to = aws_instance.new_list["instance1"] 23 | } 24 | 25 | moved { 26 | from = aws_instance.new_list[1] 27 | to = aws_instance.new_list["instance2"] 28 | } 29 | 30 | moved { 31 | from = aws_instance.new_list["instance1"] 32 | to = aws_instance.new_final 33 | } 34 | 35 | moved { 36 | from = aws_instance.new_list["instance2"] 37 | to = module.compute.aws_instance.this 38 | } 39 | 40 | resource "aws_instance" "new_final" { 41 | ami = data.aws_ami.ubuntu.id 42 | instance_type = "t2.micro" 43 | } 44 | 45 | module "compute" { 46 | source = "./modules/compute" 47 | ami_id = data.aws_ami.ubuntu.id 48 | } -------------------------------------------------------------------------------- /16-state-manipulation/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } -------------------------------------------------------------------------------- /16-state-manipulation/remove.tf: -------------------------------------------------------------------------------- 1 | /* 2 | 1. Remove via CLI 3 | 2. 
Remove via the removed block 4 | */ 5 | 6 | removed { 7 | from = aws_s3_bucket.my_new_bucket 8 | 9 | lifecycle { 10 | destroy = false 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /16-state-manipulation/taint.tf: -------------------------------------------------------------------------------- 1 | resource "aws_s3_bucket" "tainted" { 2 | bucket = "my-tainted-bucket-19384981jhahds" 3 | } 4 | 5 | resource "aws_s3_bucket_public_access_block" "from_tainted" { 6 | bucket = aws_s3_bucket.tainted.bucket 7 | 8 | block_public_acls = true 9 | block_public_policy = true 10 | ignore_public_acls = true 11 | restrict_public_buckets = true 12 | } 13 | 14 | resource "aws_vpc" "this" { 15 | cidr_block = "10.0.0.0/16" 16 | } 17 | 18 | resource "aws_subnet" "this" { 19 | vpc_id = aws_vpc.this.id 20 | cidr_block = "10.0.0.0/24" 21 | } -------------------------------------------------------------------------------- /17-workspaces/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.42.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:Yxsj34z606m8wssYDHyleuBlQ9i+94MHwRs38thQPZU=", 9 | "zh:0fb12bd56a3ad777b29f957c56dd2119776dbc01b6074458f597990e368c82de", 10 | "zh:16e99c13bef6e3777f67c240c916f57c01c9c142254cfb2720e08281ff906447", 11 | "zh:218268f5fe73bcb19e9a996f781ab66df0da9e333d1c60612e3c51ad28a5105f", 12 | "zh:220b17f7053d11548f35136669687d30ef1f1036e15393275325fd2b9654c715", 13 | "zh:2256cfd74988ce05eada76b42efffc6fe2bf4d60b61f57e4db4fcf65ced4c666", 14 | "zh:52da19f531e0cb9828f73bca620e30264e63a494bd7f9ce826aabcf010d3a241", 15 | "zh:56069ce08d720280ba39aaf2fdd40c4357ffb54262c80e4d39c4e540a38e76af", 16 | "zh:82c81398e68324029167f813fbb7c54fa3d233e99fa05001d85cbce8bdd08bb3", 17 | "zh:82d6eaa87f5ab318959064e6c89adc2baabaf70b13f2f7de866f62416de05352", 18 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 19 | "zh:ade8490cfdd8de8b9a82986588595b67e0ad1048d9e2d3a6f5164320179c2cd0", 20 | "zh:b094ef56ae9bfffd586f46d4f7fb0097798738df758a8f3c51578ee163495c7e", 21 | "zh:bd5e68e1e454bae0f8d73cff8448e814a35855a561c33b745e1b8b525fb06c9f", 22 | "zh:c111c6a854bf121facca1642d528bfa80fb4214554ac6c33e4a59c86bc605b71", 23 | "zh:e04df69a557adbcdf8efc77eb45be748f0acbe800ccede1e0895393c87722a0f", 24 | ] 25 | } 26 | 27 | provider "registry.terraform.io/hashicorp/random" { 28 | version = "3.6.0" 29 | constraints = "~> 3.0" 30 | hashes = [ 31 | "h1:I8MBeauYA8J8yheLJ8oSMWqB0kovn16dF/wKZ1QTdkk=", 32 | "zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d", 33 | "zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211", 34 | "zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829", 35 | "zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d", 36 | "zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055", 37 | "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", 38 | 
"zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17", 39 | "zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21", 40 | "zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839", 41 | "zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0", 42 | "zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c", 43 | "zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e", 44 | ] 45 | } 46 | -------------------------------------------------------------------------------- /17-workspaces/dev.tfvars: -------------------------------------------------------------------------------- 1 | bucket_count = 1 -------------------------------------------------------------------------------- /17-workspaces/int.tfvars: -------------------------------------------------------------------------------- 1 | bucket_count = 1 -------------------------------------------------------------------------------- /17-workspaces/prod.tfvars: -------------------------------------------------------------------------------- 1 | bucket_count = 3 -------------------------------------------------------------------------------- /17-workspaces/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | random = { 9 | source = "hashicorp/random" 10 | version = "~> 3.0" 11 | } 12 | } 13 | } 14 | 15 | provider "aws" { 16 | region = "eu-west-1" 17 | } -------------------------------------------------------------------------------- /17-workspaces/s3.tf: -------------------------------------------------------------------------------- 1 | resource "random_id" "bucket_suffix" { 2 | byte_length = 4 3 | } 4 | 5 | /* 6 | dev -> 1 bucket 7 | int -> 1 bucket 8 | staging -> 2 buckets 9 | prod -> 3 buckets 10 | */ 11 | resource "aws_s3_bucket" "this" { 12 | count = 
var.bucket_count 13 | bucket = "workspaces-demo-${terraform.workspace}-${count.index}-${random_id.bucket_suffix.hex}" 14 | } -------------------------------------------------------------------------------- /17-workspaces/staging.tfvars: -------------------------------------------------------------------------------- 1 | bucket_count = 2 -------------------------------------------------------------------------------- /17-workspaces/variables.tf: -------------------------------------------------------------------------------- 1 | variable "bucket_count" { 2 | type = number 3 | description = "The number of S3 buckets to deploy." 4 | } -------------------------------------------------------------------------------- /18-terraform-cloud/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.42.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:Yxsj34z606m8wssYDHyleuBlQ9i+94MHwRs38thQPZU=", 9 | "zh:0fb12bd56a3ad777b29f957c56dd2119776dbc01b6074458f597990e368c82de", 10 | "zh:16e99c13bef6e3777f67c240c916f57c01c9c142254cfb2720e08281ff906447", 11 | "zh:218268f5fe73bcb19e9a996f781ab66df0da9e333d1c60612e3c51ad28a5105f", 12 | "zh:220b17f7053d11548f35136669687d30ef1f1036e15393275325fd2b9654c715", 13 | "zh:2256cfd74988ce05eada76b42efffc6fe2bf4d60b61f57e4db4fcf65ced4c666", 14 | "zh:52da19f531e0cb9828f73bca620e30264e63a494bd7f9ce826aabcf010d3a241", 15 | "zh:56069ce08d720280ba39aaf2fdd40c4357ffb54262c80e4d39c4e540a38e76af", 16 | "zh:82c81398e68324029167f813fbb7c54fa3d233e99fa05001d85cbce8bdd08bb3", 17 | "zh:82d6eaa87f5ab318959064e6c89adc2baabaf70b13f2f7de866f62416de05352", 18 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 19 | "zh:ade8490cfdd8de8b9a82986588595b67e0ad1048d9e2d3a6f5164320179c2cd0", 20 | 
"zh:b094ef56ae9bfffd586f46d4f7fb0097798738df758a8f3c51578ee163495c7e", 21 | "zh:bd5e68e1e454bae0f8d73cff8448e814a35855a561c33b745e1b8b525fb06c9f", 22 | "zh:c111c6a854bf121facca1642d528bfa80fb4214554ac6c33e4a59c86bc605b71", 23 | "zh:e04df69a557adbcdf8efc77eb45be748f0acbe800ccede1e0895393c87722a0f", 24 | ] 25 | } 26 | 27 | provider "registry.terraform.io/hashicorp/random" { 28 | version = "3.6.0" 29 | constraints = "~> 3.0" 30 | hashes = [ 31 | "h1:I8MBeauYA8J8yheLJ8oSMWqB0kovn16dF/wKZ1QTdkk=", 32 | "zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d", 33 | "zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211", 34 | "zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829", 35 | "zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d", 36 | "zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055", 37 | "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", 38 | "zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17", 39 | "zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21", 40 | "zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839", 41 | "zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0", 42 | "zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c", 43 | "zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e", 44 | ] 45 | } 46 | -------------------------------------------------------------------------------- /18-terraform-cloud/compute.tf: -------------------------------------------------------------------------------- 1 | data "aws_ami" "ubuntu" { 2 | most_recent = true 3 | owners = ["099720109477"] # Owner is Canonical 4 | 5 | filter { 6 | name = "name" 7 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 8 | } 9 | 10 | filter { 11 | name = "virtualization-type" 12 | values = ["hvm"] 13 | } 14 | } 15 | 16 | resource "aws_instance" "tf_cloud" { 17 | ami = 
data.aws_ami.ubuntu.id 18 | instance_type = var.ec2_instance_type 19 | 20 | tags = { 21 | Name = "terraform-cloud" 22 | } 23 | } -------------------------------------------------------------------------------- /18-terraform-cloud/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | cloud { 3 | organization = "LauroMueller" 4 | 5 | workspaces { 6 | name = "terraform-cli" 7 | } 8 | } 9 | 10 | required_providers { 11 | random = { 12 | source = "hashicorp/random" 13 | version = "~> 3.0" 14 | } 15 | 16 | aws = { 17 | source = "hashicorp/aws" 18 | version = "~> 5.0" 19 | } 20 | } 21 | } 22 | 23 | provider "aws" { 24 | region = "eu-west-1" 25 | } -------------------------------------------------------------------------------- /18-terraform-cloud/random.tf: -------------------------------------------------------------------------------- 1 | resource "random_id" "this" { 2 | byte_length = 4 3 | } 4 | 5 | output "random_id" { 6 | value = random_id.this.hex 7 | } -------------------------------------------------------------------------------- /18-terraform-cloud/s3.tf: -------------------------------------------------------------------------------- 1 | resource "aws_s3_bucket" "tf_cloud" { 2 | bucket = "terraform-cloud-${random_id.this.hex}" 3 | 4 | tags = { 5 | CreatedBy = "Terraform Cloud" 6 | } 7 | } -------------------------------------------------------------------------------- /18-terraform-cloud/variables.tf: -------------------------------------------------------------------------------- 1 | variable "ec2_instance_type" { 2 | type = string 3 | 4 | validation { 5 | condition = var.ec2_instance_type == "t2.micro" 6 | error_message = "Please use t2.micro to stay within the free tier." 
7 | } 8 | } -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Mastering Terraform: From Beginner to Expert 2 | 3 | ### Course link (with a big discount 🙂): https://www.lauromueller.com/courses/mastering-terraform 4 | 5 | **Check my other courses:** 6 | 7 | - 👉 The Complete Docker and Kubernetes Course: From Zero to Hero - https://www.lauromueller.com/courses/docker-kubernetes 8 | - 👉 The Definitive Helm Course: From Beginner to Master - https://www.lauromueller.com/courses/definitive-helm-course 9 | - 👉 Mastering GitHub Actions: From Beginner to Expert - https://www.lauromueller.com/courses/mastering-github-actions 10 | - 👉 Write better code: 20 code smells and how to get rid of them - https://www.lauromueller.com/courses/writing-clean-code 11 | 12 | Welcome everyone! I'm very happy to see you around, and I hope this repository brings lots of value to those learning more about Terraform. Make sure to check the link above for a great discount on the course on Udemy, where I not only provide theoretical explanations of all the concepts here, but also go in detail through the entire coding of the examples in this repository. 13 | 14 | Here are a few tips for you to best navigate the contents of this repository: 15 | 1. The `exercises` folder contains descriptions for all the implemented exercises. You can use it as a guide to try to implement them by yourself before following the solution recordings. 16 | 2. The `projects` folder contains six bigger projects that you can also tackle for an extra challenge 🙂 The solutions for these projects are implemented within their respective folders, **except for project 00, which is implemented inside the folder `06-resources`**. 17 | 3. The other folders roughly mirror the structure of the course, but there are some course sections that span more than one folder. 18 | 19 | Happy learning! 
🚀 20 | 21 | ## Additional Links and Courses: 22 | 23 | **Other repositories included in the course:** 24 | * Networking Module Repository - https://github.com/lm-academy/terraform-aws-networking-tf-course 25 | * Terraform Cloud VCS Integration Repository - https://github.com/lm-academy/terraform-course-example-terraform-cloud 26 | 27 | **Other courses I published in Udemy:** 28 | * Mastering GitHub Actions: From Beginner to Expert - https://www.lauromueller.com/courses/mastering-github-actions 29 | * Write Clean Code: 20 Code Smells and How to Get Rid of Them - https://lauromueller.com/courses/writing-clean-code/ 30 | -------------------------------------------------------------------------------- /exercises/exercise01-benefits_iac_aws_console.md: -------------------------------------------------------------------------------- 1 | # Creating VPCs and Subnets Manually in AWS 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will focus on manually building a simple VPC-subnet infrastructure on the AWS Console. This includes creating a VPC, two subnets (one public and one private), an Internet Gateway, and a public route table. By creating these elements manually, we will gain a deeper understanding of how intricate and challenging the process of creating and managing infrastructure can be. This will provide a foundation for appreciating the benefits of Infrastructure as Code (IaC) in automating these tasks and maintaining consistency across environments. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking at the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. A VPC with a CIDR block of `10.0.0.0/16`. 12 | 2. One public subnet with a CIDR block of `10.0.0.0/24`. 13 | 3. One private subnet with a CIDR block of `10.0.1.0/24`. 14 | 4. One Internet Gateway. 15 | 5. 
One public route table with a route to the Internet Gateway, and the correct association between the public subnet and the public route table. 16 | 17 | ## Step-by-Step Guide 18 | 19 | 1. Log in to your AWS console. 20 | 2. Navigate to the VPC Dashboard. 21 | 3. Click on "Your VPCs" then "Create VPC". 22 | 4. Enter a Name tag and the CIDR block `10.0.0.0/16`, then click "Create". 23 | 5. Go back to the VPC Dashboard and click on "Subnets". 24 | 6. Click "Create subnet". 25 | 7. Fill in the Name tag, select the VPC you just created, and enter the CIDR block `10.0.0.0/24` to create the public subnet. 26 | 8. Repeat this process with the CIDR block `10.0.1.0/24` to create the private subnet. 27 | 9. Go back to the VPC Dashboard and click on "Internet Gateways". 28 | 10. Click "Create internet gateway", give it a Name tag, then click "Create". 29 | 11. Select the Internet Gateway you just created and click "Actions", then "Attach to VPC", and select your VPC. 30 | 12. Go back to the VPC Dashboard and click on "Route Tables". 31 | 13. Click "Create route table", enter a Name tag, select your VPC, then click "Create". 32 | 14. Select the Route Table you just created and click on the "Routes" tab, then click "Edit routes". 33 | 15. Click "Add route", for the Destination enter `0.0.0.0/0`, for the Target select the Internet Gateway you created, then click "Save routes". 34 | 16. Click on the "Subnet Associations" tab, then click "Edit subnet associations". 35 | 17. Select the public subnet, then click "Save". 36 | 37 | ## Congratulations on Completing the Exercise! 38 | 39 | Congratulations on successfully completing this challenging exercise! You've manually built a VPC-subnet infrastructure on the AWS Console, and this hands-on experience has given you valuable insight into the process and complexity of creating and managing infrastructure. 
We hope this exercise has helped you appreciate the benefits of Infrastructure as Code (IaC) in automating these tasks and maintaining consistency across environments. Keep up the great work! 40 | -------------------------------------------------------------------------------- /exercises/exercise03-terraform_stages.md: -------------------------------------------------------------------------------- 1 | # Understanding the Different Terraform Stages 2 | 3 | ## Introduction 4 | 5 | In this exercise, we aim to learn about the different stages of Terraform. We'll go through the process of initializing our working directory, planning and applying our configuration, and finally, destroying the infrastructure when it's no longer needed. This hands-on experience will provide an understanding of the practical benefits and the lifecycle of IaC. 6 | 7 | ## Step-by-Step Guide 8 | 9 | Make sure you are in the `01-benefits-iac` directory and run the following commands, always taking a break to analyze and understand the output of the commands. 10 | 11 | 1. Use the `terraform init` command to initialize your working directory. This command prepares your directory for other Terraform commands and ensures that Terraform has everything it needs to run. 12 | 2. Use `terraform plan` to create an execution plan. This command determines what actions are necessary to achieve the desired state specified in your configuration files. 13 | 3. Once you're satisfied with the plan, apply it using `terraform apply`. This command will make the necessary changes to reach the desired state of your configuration. 14 | 4. When you no longer need the infrastructure, you can use `terraform destroy` to remove all resources created by your Terraform configuration. This will also clean up the state. 15 | 16 | ## Congratulations on Completing the Exercise! 17 | 18 | Great job on completing this exercise! 
You've now gained hands-on experience with the different stages of Terraform and have a better understanding of Infrastructure as Code. Keep up the good work! 19 | -------------------------------------------------------------------------------- /exercises/exercise06-terraform_cli.md: -------------------------------------------------------------------------------- 1 | # Working with the Terraform CLI 2 | 3 | ## Introduction 4 | 5 | Welcome to this exercise on the Terraform Command Line Interface (CLI)! This hands-on activity will guide you through the essential Terraform commands. You'll learn how to validate your files, format them for better readability, create and apply execution plans, and manage your state. Let's dive in and explore these powerful tools. 6 | 7 | ## Step-by-Step Guide 8 | 9 | Run the following commands in the terminal and inspect their output. To learn more about each command, try adding a `-help` flag to the command! 10 | 11 | - `terraform validate`: Checks the syntax of the Terraform files and verifies that they are internally consistent, but does not ensure that the resources exist or that the providers are properly configured. 12 | - `terraform fmt`: Automatically updates Terraform configuration files to a canonical format and style, improving consistency and readability. The command works only for the files in the current working directory, but you can also add a `-recursive` flag to format `.tf` files in nested directories. 13 | - `terraform plan`: Creates an execution plan, showing what actions Terraform will take to achieve the desired state defined in the Terraform files. This command does not modify the actual resources or state. 14 | - `terraform plan -out=<file>`: Similar to `terraform plan`, but it also writes the execution plan to a file that can be used by `terraform apply`, ensuring that exactly the planned actions are taken.
15 | - `terraform apply`: Applies the execution plan, making the necessary changes to reach the desired state of the resources. If you run `terraform plan` with the `-out` option, you can run `terraform apply <file>` to provide the execution plan. 16 | - `terraform show`: Provides human-readable output from a state or plan file. It's used to inspect the current state or to see the actions planned by a `terraform plan` command. 17 | - `terraform state list`: Lists all resources in the state file, useful for managing and manipulating the state. 18 | - `terraform destroy`: Destroys all resources tracked in the state file. This command is the equivalent of passing a `-destroy` flag to the `terraform apply` command. 19 | - `terraform -help`: Provides help information about Terraform commands. It can be used alone for a general overview, or appended to a specific command for detailed help about that command. 20 | 21 | ## Congratulations on Completing the Exercise! 22 | 23 | Great job on completing this exercise on Terraform CLI! You've taken a significant step in mastering Terraform, and we hope that this knowledge serves you well in your future projects. Keep up the good work! 24 | -------------------------------------------------------------------------------- /exercises/exercise07-s3_backend.md: -------------------------------------------------------------------------------- 1 | # Using a Remote S3 Backend 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will guide you through the steps for setting up an S3 backend for Terraform. This allows you to store your Terraform state in an S3 bucket, providing a secure, remote storage solution. Understanding how to set up and use an S3 backend is crucial for managing and collaborating on Terraform projects. Let's get started.
6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. A manually created S3 bucket used to store the state file, uniquely named and in the region of your choice. 12 | 2. S3 backend configured in the Terraform configuration file by referencing the created S3 bucket and providing a relevant key to store the state file. 13 | 3. Terraform successfully initialized with the S3 backend. 14 | 4. Terraform configuration successfully applied and state stored in the S3 bucket. 15 | 16 | ## Step-by-Step Guide 17 | 18 | 1. First, make sure you have Terraform version 1.7 or later installed. You can check this by running `terraform version` in your terminal. 19 | 2. Copy the files from the folder `03-first-tf-project` to a new folder named `04-backends`. 20 | 3. You will also need to have the AWS and Random providers installed. These can be installed by running the command `terraform init` inside the `04-backends` folder. 21 | 4. Next, you will need to set up your S3 bucket. Choose a unique name for it, and make sure the bucket is created in the region of your choice. You can set this up through the AWS console or using the AWS CLI. 22 | 5. Now, you can configure the S3 backend in your Terraform configuration. This is done in the `backend "s3"` block in your configuration file. Fill in the empty `bucket` value with the name of the bucket you have manually created, and the empty `region` value with the respective region you are using for the course. 23 | ``` 24 | backend "s3" { 25 | bucket = "" 26 | key = "04-backends/state.tfstate" 27 | region = "" 28 | } 29 | ``` 30 | 6. Configure the AWS provider by using the `provider "aws"` block, and make sure it uses the correct region for your projects. 31 | 7. Finally, run `terraform init` again. This will initialize your backend.
You should see a message that Terraform has been successfully initialized and that the backend "s3" has been configured. 32 | 8. Now, you can run `terraform apply` to apply your configuration. Terraform will store the state in the S3 bucket you configured. 33 | 9. Make sure to destroy the resources after you complete all the steps! 34 | 35 | ## Congratulations on Completing the Exercise! 36 | 37 | Well done on completing this exercise! You've taken an important step in understanding how to set up and use an S3 backend for Terraform. Keep up the good work! 38 | -------------------------------------------------------------------------------- /exercises/exercise08-partial_backend_config.md: -------------------------------------------------------------------------------- 1 | # Partial Backend Configuration 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will walk you through the process of setting up a partial backend configuration for Terraform. The tutorial will guide you to configure the S3 Backend for Terraform, create separate backend configuration files for development and production environments, and specify which backend configuration file Terraform should use. After completing this exercise, you will have a more in-depth understanding of how to manage different backend configurations for various environments in Terraform. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Successfully configure the S3 Backend for Terraform. 12 | 2. Create a `dev.tfbackend` file to provide the partial backend configuration for the development environment. 13 | 3. Create a `prod.tfbackend` file to provide the partial backend configuration for the production environment. 14 | 4. Be able to specify which backend configuration file Terraform should use by passing it as a command line argument. 
15 | 16 | ## Step-by-Step Guide 17 | 18 | 1. Keep working on the files from the folder `04-backends`. 19 | 2. Replace the `backend "s3"` block in your Terraform configuration with the following: 20 | 21 | ``` 22 | backend "s3" { 23 | bucket = "" 24 | region = "" 25 | } 26 | ``` 27 | 28 | Note that we've removed the specific backend key and will instead be using a key that includes the environment from a `tfbackend` file. 29 | 30 | 3. Create a file called `dev.tfbackend`. This file will provide the partial backend configuration for the development environment. In the `dev.tfbackend` file, add the following: 31 | 32 | ``` 33 | key = "04-backends/dev/state.tfstate" 34 | ``` 35 | 36 | This sets the S3 key to a path under `04-backends/dev`. 37 | 38 | 4. Create a file called `prod.tfbackend`. This file will provide the partial backend configuration for the production environment. In the `prod.tfbackend` file, add the following: 39 | 40 | ``` 41 | key = "04-backends/prod/state.tfstate" 42 | ``` 43 | 44 | This sets the S3 key to a path under `04-backends/prod`. 45 | 46 | 5. Pass the backend file to Terraform. You can specify which backend configuration file Terraform should use by passing it as a command line argument. For example, to use the development backend, you would run the following command: 47 | 48 | `terraform init -backend-config=dev.tfbackend` 49 | 50 | Similarly, to use the production backend, you would run: 51 | 52 | `terraform init -backend-config=prod.tfbackend` 53 | 54 | This will ensure that Terraform uses the correct state file depending on whether you're working in the development or production environment. 55 | 56 | Go ahead and give it a try! With these steps, your S3 Backend is configured for Terraform and you have separate backend configuration files for development and production that will store the state in different paths in the S3 bucket. 57 | 58 | 6. Make sure to destroy the resources after you complete all the steps! 
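
If you switch between the two backends in an already-initialized project, `terraform init` will detect that the backend configuration changed and ask you how to proceed. As a sketch (both flags are standard `terraform init` options), you can either discard the previous backend configuration or migrate the state to the new one:

```
# Point the project at the prod backend and ignore the previously
# configured dev backend (no state is copied between the two).
terraform init -reconfigure -backend-config=prod.tfbackend

# Or copy the existing state over to the newly configured backend.
terraform init -migrate-state -backend-config=prod.tfbackend
```

For separate environments with independent state files, `-reconfigure` is usually what you want, since each environment's state should stay under its own key.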
59 | 60 | ## Congratulations on Completing the Exercise! 61 | 62 | Great job on successfully completing this exercise! You've learned to set up and manage different backend configurations for various environments in Terraform. This knowledge is crucial for efficient infrastructure management and deployment. Keep up the good work! 63 | -------------------------------------------------------------------------------- /exercises/exercise09-providers.md: -------------------------------------------------------------------------------- 1 | # Working with Providers 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will explore the concept of Providers in Terraform, specifically focusing on AWS providers. We will learn how to configure multiple instances of AWS providers, each targeting a different region. By the end of this exercise, you will be able to deploy resources in different AWS regions using separate provider instances. Let's get started! 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Configure the default AWS provider to use the region of your choice (`eu-west-1` in this case). 12 | 2. Configure another instance of the AWS provider with the region set to `us-east-1` and an alias of `us-east`. 13 | 3. Create an S3 bucket resource in the `eu-west-1` region. 14 | 4. Create an S3 bucket resource in the `us-east-1` region. 15 | 16 | ## Step-by-Step Guide 17 | 18 | 1. Within your Terraform project folder, create a file `provider.tf` and declare the required Terraform version and the AWS provider source and version in a `required_providers` block within a `terraform` block: 19 | 20 | ``` 21 | terraform { 22 | required_version = "~> 1.7" 23 | required_providers { 24 | aws = { 25 | source = "hashicorp/aws" 26 | version = "~> 5.0" 27 | } 28 | } 29 | } 30 | ``` 31 | 32 | 2. 
Configure the default `aws` provider to use the region of your choice (in my case, this is `eu-west-1`): 33 | 34 | ``` 35 | provider "aws" { 36 | region = "eu-west-1" 37 | } 38 | ``` 39 | 40 | 3. Configure another instance of the `aws` provider by adding another `provider` block and setting the region to `us-east-1`. Also assign it an alias so that it can be later used with resources: 41 | 42 | ``` 43 | provider "aws" { 44 | region = "us-east-1" 45 | alias = "us-east" 46 | } 47 | ``` 48 | 49 | 4. Create an S3 bucket resource in the `eu-west-1` region: 50 | 51 | ``` 52 | resource "aws_s3_bucket" "eu_west_1" { 53 | bucket = "some-random-bucket-name-aosdhfoadhfu" 54 | } 55 | ``` 56 | 57 | 5. Create another S3 bucket resource in the `us-east-1` region. This time, specify the provider by using the alias you created earlier and passing it to the `provider` argument of the resource: 58 | 59 | ``` 60 | resource "aws_s3_bucket" "us_east_1" { 61 | bucket = "some-random-bucket-name-18736481364" 62 | provider = aws.us-east 63 | } 64 | ``` 65 | 66 | 6. Initialize Terraform in your project directory using the command `terraform init`. 67 | 7. Validate your configuration using the command `terraform validate`. 68 | 8. If the configuration is valid, apply it using the command `terraform apply`. 69 | 9. Use the AWS console or CLI to check that the resources were correctly deployed in both regions. 70 | 10. Make sure to destroy the resources after you complete all the steps! 71 | 72 | ## Congratulations on Completing the Exercise! 73 | 74 | Well done on completing this exercise! You've successfully explored the concept of Providers in Terraform and specifically focused on AWS providers. You've learned how to configure multiple instances of AWS providers, each targeting a different region, and you've deployed resources in different AWS regions using separate provider instances. Keep up the good work! 
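
As an optional addition to step 9, here is one way to verify the bucket regions from the command line, assuming the AWS CLI is configured with the same account (the bucket names below are the illustrative ones from the snippets above):

```
# Ask S3 in which region each bucket was created. Note that for
# buckets in us-east-1, the LocationConstraint is reported as null.
aws s3api get-bucket-location --bucket some-random-bucket-name-aosdhfoadhfu
aws s3api get-bucket-location --bucket some-random-bucket-name-18736481364
```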
75 | -------------------------------------------------------------------------------- /exercises/exercise11-data_source_aws_caller_identity_region.md: -------------------------------------------------------------------------------- 1 | # Using Data Sources to Fetch AWS Caller Identity and Region 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will be focusing on understanding and utilizing the AWS Caller Identity and AWS Region Data Sources. The aim is to define data sources for both AWS Caller Identity and AWS Region. Then, we will output the returned identity of the caller and the current AWS region. It's a valuable practice to enhance your Terraform skills and familiarity with these particular aspects. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. AWS Caller Identity Data Source: This data source should be defined to return the identity of the caller. 12 | 2. AWS Region Data Source: This data source should be defined to get the current AWS region. 13 | 3. AWS Caller Identity Output: This output should return the identity of the caller. 14 | 4. AWS Region Output: This output should return the current AWS region. 15 | 16 | ## Step-by-Step Guide 17 | 18 | 1. First, define the data source for the AWS Caller Identity. This data source is used to return the identity of the caller. This is achieved with the following code: 19 | 20 | ``` 21 | data "aws_caller_identity" "current" {} 22 | ``` 23 | 24 | 2. Next, define the data source for the AWS Region. This data source is used to get the current region. This is done with the following code: 25 | 26 | ``` 27 | data "aws_region" "current" {} 28 | ``` 29 | 30 | 3. Then, output the AWS Caller Identity. This will return the identity of the caller. 
This is done with the following code: 31 | 32 | ``` 33 | output "aws_caller_identity" { 34 | value = data.aws_caller_identity.current 35 | } 36 | ``` 37 | 38 | What kind of information is retrieved through this data source? 39 | 40 | 4. Finally, output the AWS Region. This will return the current region. This is done with the following code: 41 | 42 | ``` 43 | output "aws_region" { 44 | value = data.aws_region.current 45 | } 46 | ``` 47 | 48 | What kind of information is retrieved through this data source? 49 | 50 | ## Congratulations on Completing the Exercise! 51 | 52 | Well done on successfully completing the exercise on AWS Caller Identity and Region! You've taken another important step in enhancing your Terraform skills. Keep up the great work! 53 | -------------------------------------------------------------------------------- /exercises/exercise12-data_source_vpc_information.md: -------------------------------------------------------------------------------- 1 | # Using Data Sources to Fetch VPC Information 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will learn how to fetch information from a Virtual Private Cloud (VPC) that has been manually created in the AWS console. This will be accomplished by using an `aws_vpc` data source block and specifying filter criteria. This process will provide you with a hands-on understanding of how to retrieve and utilize data about existing resources in your AWS infrastructure. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Manually create a Virtual Private Cloud (VPC) in the AWS console with the following configuration: 12 | - Name: console-managed 13 | - Environment (`Env`) tag: Prod 14 | - CIDR Block: 10.0.0.0/16 15 | 2. Fetch the VPC information based on the `Env` tag via an `aws_vpc` data source block that specifies the correct filter criteria.
16 | 3. Define an output to display the `id` of the retrieved VPC. 17 | 18 | ## Step-by-Step Guide 19 | 20 | ### Creating a VPC in the Console 21 | 22 | For us to be able to fetch information from a VPC managed outside our project, let’s create one manually in the AWS console: 23 | 24 | 1. Log in to the AWS Management Console and navigate to the VPC Dashboard. 25 | 2. Click on the "Your VPCs" link in the left-hand menu. 26 | 3. Click the "Create VPC" button. 27 | 4. Select the “VPC Only” option when AWS asks you whether to create only the VPC or also related resources. 28 | 5. Add a `Name` tag with the value `console-managed`. 29 | 6. Add an `Env` tag with the value `Prod`. 30 | 7. Set `10.0.0.0/16` as the CIDR Block. 31 | 8. Leave the rest of the settings at their default values. 32 | 9. Click the "Create" button. 33 | 10. Your new VPC should now be visible in your VPCs list. 34 | 35 | ### Using Data Source to Fetch VPC Information 36 | 37 | With the VPC created, we can now proceed to fetch its information using a data source in Terraform. A data source in Terraform provides read-only access to existing resources in your infrastructure that might be managed outside of Terraform, or that you only need for reference in your Terraform code. 38 | 39 | 1. Define an `aws_vpc` data source block and specify the filter criteria to use the `Env` tag we have set in the previous section. 40 | 41 | ``` 42 | data "aws_vpc" "prod_vpc" { 43 | tags = { 44 | Env = "Prod" 45 | } 46 | } 47 | ``` 48 | 49 | 2. Define an output value that shows the `id` of the retrieved VPC. 50 | 51 | ``` 52 | output "prod_vpc_id" { 53 | value = data.aws_vpc.prod_vpc.id 54 | } 55 | ``` 56 | 57 | 3. Try to provide a non-existing value for the `Env` tag. What happens when we run a `terraform plan`? 58 | 59 | ## Congratulations on Completing the Exercise! 60 | 61 | Great job on completing this exercise!
You've learned how to fetch information from a Virtual Private Cloud (VPC) that has been manually created in the AWS console. Keep up the good work! 62 | -------------------------------------------------------------------------------- /exercises/exercise13-data_source_iam_policy.md: -------------------------------------------------------------------------------- 1 | # Using Data Sources to Create AWS IAM Policies 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will be creating AWS IAM Policies. IAM policies define permissions and can be attached to IAM identities or groups. Our primary objective is to create a policy that gives read access to all objects in S3 buckets for all users. This is essential for making a static website publicly accessible. We will use the `aws_iam_policy_document` data source to create our policy document. The completed policy document will be outputted in JSON format for use in other parts of the code. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create an IAM policy document using the `aws_iam_policy_document` data source. The policy should apply to all users and give read access to all objects in S3 buckets. 12 | 2. Output the policy document in JSON format for use elsewhere in the code. 13 | 14 | ## Step-by-Step Guide 15 | 16 | 1. Create an IAM policy document by leveraging the `aws_iam_policy_document` data source. This document will specify the permissions that are being granted. In our case, we're creating a policy document that allows read access to all objects in S3 buckets. 
17 | 18 | ``` 19 | data "aws_iam_policy_document" "static_website" { 20 | statement { 21 | sid = "PublicReadGetObject" 22 | 23 | principals { 24 | type = "*" 25 | identifiers = ["*"] 26 | } 27 | 28 | actions = ["s3:GetObject"] 29 | 30 | resources = ["arn:aws:s3:::*/*"] 31 | } 32 | } 33 | ``` 34 | 35 | In the `principals` block, we specify that the policy applies to all users (`type = "*"`) because we want the static website to be publicly accessible. 36 | 37 | With the `actions` argument, we specify the `s3:GetObject` action. This allows users to retrieve objects from the S3 bucket. 38 | 39 | The `resources` argument specifies the resources to which the action applies. In this case, it applies to all objects in all S3 buckets (as indicated by the `"*"` wildcard characters). 40 | 41 | 2. After defining the policy document, output it in JSON format so it can be used elsewhere in the code. 42 | 43 | ``` 44 | output "iam_policy" { 45 | value = data.aws_iam_policy_document.static_website.json 46 | } 47 | ``` 48 | 49 | The `value = data.aws_iam_policy_document.static_website.json` line converts the policy document into JSON format. 50 | 51 | ## Congratulations on Completing the Exercise! 52 | 53 | Well done on successfully completing this exercise! You have now acquired the knowledge to create AWS IAM Policies, a crucial skill for managing access to resources on AWS. Keep practicing to solidify your understanding. Great job! 54 | -------------------------------------------------------------------------------- /exercises/exercise16-objects_volume_configuration.md: -------------------------------------------------------------------------------- 1 | # Using Objects for Volume Configuration 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will delve into the use of objects for volume configuration in EC2 instances. This concept allows us to encapsulate related configuration details into a single, more manageable entity.
We will also explore how to add additional tags to an EC2 instance, providing us with more flexibility in managing our AWS resources. This exercise will help reinforce your understanding of Terraform variables and resources, and how they can be used to create more efficient and flexible configurations. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create a variable `ec2_volume_config`, which contains the volume size and volume type of the root block device for the EC2 instance. 12 | 2. Create a variable `additional_tags`, which allows the user to attach additional tags to the EC2 instance. 13 | 3. Update the EC2 instance configuration to leverage the new variables. 14 | 15 | ## Step-by-Step Guide 16 | 17 | 1. Migrate from using the `ec2_volume_type` and `ec2_volume_size` variables to using a single `ec2_volume_config` variable, which is of type `object`. Set sensible defaults and add a helpful description to the new variable. 18 | 19 | ``` 20 | variable "ec2_volume_config" { 21 | type = object({ 22 | size = number 23 | type = string 24 | }) 25 | description = "The size and type of the root block volume for EC2 instances." 26 | 27 | default = { 28 | size = 10 29 | type = "gp3" 30 | } 31 | } 32 | ``` 33 | 34 | 2. Add a new `additional_tags` variable, which is of type `map` of strings and is empty by default. This will allow us to add more tags to our resources if needed. 36 | 37 | ``` 37 | variable "additional_tags" { 38 | type = map(string) 39 | default = {} 40 | } 41 | ``` 42 | 43 | 3. Migrate the EC2 instance resource to use the new variables.
44 | 45 | ``` 46 | resource "aws_instance" "compute" { 47 | ami = data.aws_ami.ubuntu.id 48 | instance_type = var.ec2_instance_type 49 | 50 | root_block_device { 51 | delete_on_termination = true 52 | volume_size = var.ec2_volume_config.size 53 | volume_type = var.ec2_volume_config.type 54 | } 55 | 56 | tags = merge(var.additional_tags, { 57 | ManagedBy = "Terraform" 58 | }) 59 | } 60 | ``` 61 | 62 | ## Congratulations on Completing the Exercise! 63 | 64 | Great job on completing this exercise! You have successfully learned how to use variables of type `object` and `map`. Keep up the good work! 65 | -------------------------------------------------------------------------------- /exercises/exercise17-tfvars.md: -------------------------------------------------------------------------------- 1 | # Working with `tfvars` 2 | 3 | ## Introduction 4 | 5 | In this exercise, we'll explore how to use `.tfvars` files in Terraform to manage and apply variable configurations. We'll be creating files that contain variable configurations and then loading them into our Terraform plan. This is a great way to manage different environments or stages in your infrastructure setup. The entire exercise should give you hands-on experience of working with `terraform.tfvars` files and how they can be used effectively to manage configurations. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create a `terraform.tfvars` file that contains configurations suitable for our defined variables. 12 | 2. Rename the file to `dev.terraform.tfvars`, and check what happens when running Terraform commands. 13 | 3. Create a new `prod.terraform.tfvars` file with different variable values. We should be able to test loading this file into the Terraform plan and apply commands. 14 | 15 | ## Step-by-Step Guide 16 | 17 | 1.
Create a `terraform.tfvars` file with sensible values for the variables we have defined so far. It’s important that the file is called `terraform.tfvars`, since Terraform automatically looks for this file. 18 | 19 | ``` 20 | ec2_instance_type = "t2.micro" 21 | 22 | ec2_volume_config = { 23 | size = 10 24 | type = "gp2" 25 | } 26 | 27 | additional_tags = { 28 | ValuesFrom = "terraform.tfvars" 29 | } 30 | ``` 31 | 32 | 2. Run the `terraform plan` and `terraform apply` commands to make sure Terraform is correctly loading the values from the `terraform.tfvars` file. 33 | 3. Now rename the file to `dev.terraform.tfvars` and re-run the `terraform plan` and `terraform apply` commands. Is Terraform able to load the values? Terraform does not automatically load `.tfvars` files with other names, but you can pass the `.tfvars` file to the `terraform plan` and `terraform apply` commands using the `-var-file` option. 34 | 4. Create a new `prod.terraform.tfvars` file. Set different values for the variables, and test how we can load this file into the `terraform plan` and `terraform apply` commands. Do not apply the configuration below, since it falls outside the free tier! 35 | 36 | ``` 37 | ec2_instance_type = "t3.large" 38 | 39 | ec2_volume_config = { 40 | size = 10 41 | type = "gp3" 42 | } 43 | 44 | additional_tags = { 45 | ValuesFrom = "prod.terraform.tfvars" 46 | } 47 | ``` 48 | 49 | 5. Make sure to destroy the resources after you complete all the steps! 50 | 51 | ## Congratulations on Completing the Exercise! 52 | 53 | Congratulations on completing this exercise! You have learned how to use `.tfvars` files in Terraform to manage and apply variable configurations. Keep practicing and continue to enhance your Terraform skills!
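
As a quick reference for steps 3 and 4 above, the commands to load a non-default `.tfvars` file look like this (file names as created in the steps):

```
# dev values: Terraform does not auto-load dev.terraform.tfvars,
# so it must be passed explicitly.
terraform plan -var-file=dev.terraform.tfvars
terraform apply -var-file=dev.terraform.tfvars

# prod values: plan only -- do not apply, since t3.large
# falls outside the free tier.
terraform plan -var-file=prod.terraform.tfvars
```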
54 | -------------------------------------------------------------------------------- /exercises/exercise18-auto_tfvars.md: -------------------------------------------------------------------------------- 1 | # Working with `auto.tfvars` 2 | 3 | ## Introduction 4 | 5 | In this exercise, you'll learn how to work with `auto.tfvars` in Terraform. This will involve creating and configuring a `prod.auto.tfvars` file, which will be used to apply specific configurations when running Terraform in a production environment. You'll also learn how Terraform automatically loads values from `*.auto.tfvars` files and how these values can override those in the `terraform.tfvars` file. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. First, create a new file called `prod.auto.tfvars`. This file will contain specific configurations that you want to apply when running Terraform in the production environment. Make sure that you also have a `terraform.tfvars` file with some default configuration. 10 | 11 | ``` 12 | ec2_instance_type = "t2.micro" 13 | 14 | ec2_volume_config = { 15 | size = 10 16 | type = "gp2" 17 | } 18 | 19 | additional_tags = { 20 | ValuesFrom = "terraform.tfvars" 21 | } 22 | ``` 23 | 24 | 2. Inside `prod.auto.tfvars`, set the `ec2_instance_type` to `t3.large`. This setting specifies the type of EC2 instance that Terraform will deploy. 25 | 26 | ``` 27 | ec2_instance_type = "t3.large" 28 | ``` 29 | 30 | 3. Define the `ec2_volume_config` block. This block defines the configuration of the attached EBS volume. Set the `size` to `10` and the `type` to `gp3`. 31 | 32 | ``` 33 | ec2_volume_config = { 34 | size = 10 35 | type = "gp3" 36 | } 37 | ``` 38 | 39 | 4. Finally, add the `additional_tags` block. This block allows you to add additional tags to your resources. The `ValuesFrom` tag is set to `prod.auto.tfvars`, indicating that the values are taken from the `prod.auto.tfvars` file. 
40 | 41 | ``` 42 | additional_tags = { 43 | ValuesFrom = "prod.auto.tfvars" 44 | } 45 | ``` 46 | 47 | 5. Save the `prod.auto.tfvars` file and run the `terraform plan` command. Which values are taken by Terraform? 48 | 6. Delete the `prod.auto.tfvars` file to prevent Terraform from automatically loading values from it in later plan and apply commands. 49 | 50 | The `*.auto.tfvars` files are automatically loaded by Terraform when it runs. The values in the `*.auto.tfvars` files override the values in the `terraform.tfvars` file. This means that if you have the same variable defined in both the `terraform.tfvars` file and an `*.auto.tfvars` file, the value from the `*.auto.tfvars` file will be used. 51 | 52 | ## Congratulations on Completing the Exercise! 53 | 54 | Well done on successfully completing this exercise! You've learned how to use `auto.tfvars` in Terraform, a crucial skill as you continue to develop your cloud infrastructure expertise. Keep up the good work! 55 | -------------------------------------------------------------------------------- /exercises/exercise20-outputs.md: -------------------------------------------------------------------------------- 1 | # Working with Outputs 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will explore how to work with outputs in Terraform. Outputs are a way to expose data about your resources and modules, and can be very helpful in understanding the state of your resources or for integrating with other systems. We will be defining locals, setting up an S3 bucket, and creating an output for the bucket name. We will then run and inspect the output in the terminal and learn how to retrieve the output value outside of Terraform. Let's get started! 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Make sure that you have the locals defined according to previous exercises.
10 | 11 | ``` 12 | locals { 13 | project = "08-input-vars-locals-outputs" 14 | project_owner = "terraform-course" 15 | cost_center = "1234" 16 | managed_by = "Terraform" 17 | } 18 | 19 | locals { 20 | common_tags = { 21 | project = local.project 22 | project_owner = local.project_owner 23 | cost_center = local.cost_center 24 | managed_by = local.managed_by 25 | sensitive_tag = var.my_sensitive_value 26 | } 27 | } 28 | ``` 29 | 30 | 2. Also make sure that the S3 bucket is correctly set up, together with a `random_id` for the suffix. You can comment out the EC2 instance code so that the `terraform apply` command completes faster. 31 | 32 | ``` 33 | # s3.tf 34 | 35 | resource "random_id" "project_bucket_suffix" { 36 | byte_length = 4 37 | } 38 | resource "aws_s3_bucket" "project_bucket" { 39 | bucket = "${local.project}-${random_id.project_bucket_suffix.hex}" 40 | 41 | tags = merge(local.common_tags, var.additional_tags) 42 | } 43 | 44 | --- 45 | 46 | # compute.tf 47 | 48 | # resource "aws_instance" "compute" { 49 | # ami = data.aws_ami.ubuntu.id 50 | # instance_type = var.ec2_instance_type 51 | 52 | # root_block_device { 53 | # delete_on_termination = true 54 | # volume_size = var.ec2_volume_config.size 55 | # volume_type = var.ec2_volume_config.type 56 | # } 57 | 58 | # tags = merge(local.common_tags, var.additional_tags) 59 | # } 60 | ``` 61 | 62 | 3. Create an output named `s3_bucket_name` to output the bucket name. Add a suitable description to the output. 63 | 64 | ``` 65 | output "s3_bucket_name" { 66 | value = aws_s3_bucket.project_bucket.bucket 67 | description = "The name of the S3 bucket" 68 | } 69 | ``` 70 | 71 | 4. Run and confirm a `terraform apply` command, and inspect what is output in the terminal. 72 | 5. To retrieve the output value outside of Terraform, run the command `terraform output <output_name>`. In our case, this would be `terraform output s3_bucket_name`. 73 | 1.
You can also add a `-raw` flag to omit the double quotes, so that the value of the output can be used in other shell commands. 74 | 75 | ## Congratulations on Completing the Exercise! 76 | 77 | Great job on completing this exercise! You've taken another significant step in understanding how to work with outputs in Terraform. Keep up the good work, and continue to apply and improve on what you've learned as you progress through the course. 78 | -------------------------------------------------------------------------------- /exercises/exercise21-sensitive_values.md: -------------------------------------------------------------------------------- 1 | # Working with Sensitive Values 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will delve into the process of handling sensitive values in Terraform. We'll learn how to set certain values as sensitive and how to retrieve them. Moreover, we'll understand the implications of setting a sensitive value and how it impacts the logs output by Terraform. This hands-on exercise will provide you with a practical understanding of working with sensitive data, which is a crucial aspect of managing infrastructure. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Let’s start by setting the `s3_bucket_name` output as sensitive by adding `sensitive = true` to the output block. 10 | 11 | ``` 12 | output "s3_bucket_name" { 13 | value = aws_s3_bucket.project_bucket.bucket 14 | sensitive = true 15 | description = "The name of the S3 bucket" 16 | } 17 | ``` 18 | 19 | 2. Retrieve the value of the output with `terraform output s3_bucket_name`. As you can see, we have no problems retrieving the value of outputs set as sensitive. 20 | 3.
Create a variable called `my_sensitive_value` containing a sensitive value, and reflect this directly in an output named `sensitive_var`: 21 | 22 | ``` 23 | variable "my_sensitive_value" { 24 | type = string 25 | sensitive = true 26 | } 27 | 28 | output "sensitive_var" { 29 | sensitive = true # We must set this to true since the variable is sensitive! 30 | value = var.my_sensitive_value 31 | } 32 | ``` 33 | 34 | 4. Use the `my_sensitive_value` variable to create a new tag in the `local.common_tags` object we have created in previous lectures. What happens in the logs output by Terraform? 35 | 36 | ## Congratulations on Completing the Exercise! 37 | 38 | Great job on completing this exercise! You've taken a significant step in understanding how to handle sensitive values in Terraform. Keep up the good work! 39 | -------------------------------------------------------------------------------- /exercises/exercise22-operators.md: -------------------------------------------------------------------------------- 1 | # Operators in Terraform 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will explore the different operators available in Terraform. This exercise includes working with mathematical, equality, comparison, and logical operators. You will get a chance to play around with these operators and observe their behavior, which will give you a solid foundation for writing more complex Terraform configurations in the future. Let's get started! 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Create a locals block and play around with the math operators available in Terraform: `*, /, +, -, %`, as well as the unary `-` for negating a value. Their behavior is fairly straightforward, so there isn’t much to explore here. 10 | 2. Try the equality operators `==` and `!=`, which can be used to check whether two values are equal or not. 11 | 3. Comparison operators are also available: `<, <=, >, >=`. Give it a try! 12 | 4. Last but not least, play around with the logical operators `&&`, `||`, and `!`.
13 | 14 | ``` 15 | locals { 16 | math = 2 * 2 17 | equality = 2 != 2 18 | comparison = 2 < 1 19 | logical = true || false 20 | } 21 | ``` 22 | 23 | 5. You can also output the values of these locals to inspect what they look like. Create an `output "operators"` block and run the `terraform plan` command to inspect the results. 24 | 25 | ``` 26 | output "operators" { 27 | value = { 28 | math = local.math 29 | equality = local.equality 30 | comparison = local.comparison 31 | logical = local.logical 32 | } 33 | } 34 | ``` 35 | 36 | ## Congratulations on Completing the Exercise! 37 | 38 | Great job on completing this exercise! You've taken another step in mastering Terraform operators. Keep practicing and continue to develop your skills. Remember that learning is a journey, not a destination. Keep up the good work! 39 | -------------------------------------------------------------------------------- /exercises/exercise24-for_maps.md: -------------------------------------------------------------------------------- 1 | # `for` Expression with Maps 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will explore how to use `for` expressions with maps in Terraform. We will learn how to define a map, create locals that consist of modified versions of this map, and output the results. This exercise will help you understand how to work with maps, a crucial data structure in Terraform, and how to perform operations on them. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create a variable `numbers_map` of type `map(number)`. This will be the map on which we operate. 12 | 2. Create a local `doubles_map`. This local consists of each key-value pair in `numbers_map`, but with each value doubled. 13 | 3. Create a local `even_map`. This local consists of each key-value pair in `numbers_map`, but only if the value is even.
Each value should also be doubled. 14 | 4. Output the results of `doubles_map` and `even_map`. Running the `terraform plan` command will allow you to visualize these outputs. 15 | 16 | ## Step-by-Step Guide 17 | 18 | 1. Start by defining a variable `numbers_map` of type `map(number)`. This will be the map on which we will operate. 19 | 20 | ``` 21 | variable "numbers_map" { 22 | type = map(number) 23 | } 24 | ``` 25 | 26 | 2. Next, create a local `doubles_map`. We use a for loop to iterate over each key-value pair in `numbers_map`. For each pair, we generate a new key-value pair where the value is doubled. 27 | 28 | ``` 29 | locals { 30 | doubles_map = { for key, value in var.numbers_map : key => value * 2 } 31 | } 32 | ``` 33 | 34 | 3. Also create a local `even_map`. Again, use a for loop to iterate over each key-value pair in `numbers_map`. However, this time we only generate a new key-value pair where the value is even, and we also double the values. 35 | 36 | ``` 37 | locals { 38 | doubles_map = { for key, value in var.numbers_map : key => value * 2 } 39 | even_map = { for key, value in var.numbers_map : key => 40 | value * 2 if value % 2 == 0 41 | } 42 | } 43 | ``` 44 | 45 | 4. Finally, output the results of `doubles_map` and `even_map`. Run the `terraform plan` command to visualize the outputs. 46 | 47 | ``` 48 | output "doubles_map" { 49 | value = local.doubles_map 50 | } 51 | 52 | output "even_map" { 53 | value = local.even_map 54 | } 55 | ``` 56 | 57 | In the above code, the for loop in Terraform is utilized to iterate through a map and perform operations on it. The for loop is a powerful tool in Terraform that can be used to transform data structures and perform complex operations. 58 | 59 | ## Congratulations on Completing the Exercise! 60 | 61 | Great job on completing this exercise! You've learned how to work with maps in Terraform, including how to define a map, create locals that consist of modified versions of this map, and output the results. 
Keep practicing these skills as they are crucial for managing infrastructure as code. Keep up the good work! 62 | -------------------------------------------------------------------------------- /exercises/exercise26-functions.md: -------------------------------------------------------------------------------- 1 | # Working with Functions 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will delve into the practical implementation of various functions in Terraform. We will explore how to use built-in functions to manipulate and transform data types. This includes, among others, working with string functions, mathematical computations, and file encoding and decoding functions. By the end of this exercise, you should have a firm understanding of how to utilize these functions effectively in your Terraform scripts. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Start by defining some local variables using the `locals` block. Declare a `name`, `age`, and `my_object`, which contain reasonable primitive and object values. 10 | 11 | ``` 12 | locals { 13 | name = "Lauro Müller" 14 | age = 15 15 | my_object = { 16 | key1 = 10 17 | key2 = "my_value" 18 | } 19 | } 20 | ``` 21 | 22 | 2. Additionally, create a `users.yaml` file with the following content: 23 | 24 | ``` 25 | users: 26 | - name: Lauro 27 | group: developers 28 | - name: John 29 | group: auditors 30 | ``` 31 | 32 | 3. Create an output that uses the `startswith` function to check if the `local.name` starts with a certain value. Which functions can we use to ensure the comparison is case-insensitive? 33 | 34 | ``` 35 | output "example1" { 36 | value = startswith(lower(local.name), "john") 37 | } 38 | ``` 39 | 40 | 4. Create another output block that calculates the square of `local.age` using the `pow` function. 41 | 42 | ``` 43 | output "example2" { 44 | value = pow(local.age, 2) 45 | } 46 | ``` 47 | 48 | 5. 
Create a third output block that uses the `yamldecode` function to decode the created YAML file named `users.yaml` within the module's path. Output a list containing only the `name` property of each object. 49 | 50 | ``` 51 | output "example3" { 52 | value = yamldecode(file("${path.module}/users.yaml")).users[*].name 53 | } 54 | ``` 55 | 56 | 6. Finally, create a fourth output block that uses the `jsonencode` function to convert `local.my_object` into a JSON string. 57 | 58 | ``` 59 | output "example4" { 60 | value = jsonencode(local.my_object) 61 | } 62 | ``` 63 | 64 | ## Congratulations on Completing the Exercise! 65 | 66 | Well done on completing this exercise! You've gained practical experience with various Terraform functions, advancing your skills in manipulating and transforming data types. Keep up the good work! 67 | -------------------------------------------------------------------------------- /exercises/exercise30-allow_nginx_image.md: -------------------------------------------------------------------------------- 1 | # Extending the AMIs to also Allow NGINX 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will extend the Amazon Machine Images (AMIs) to include NGINX, a popular open-source web server. This will involve defining a data source for the NGINX Bitnami AMI, extending the `ami_ids` local to include an entry for NGINX, and adding an object for an NGINX instance to the `ec2_instance_config_list` variable. After setting up the infrastructure, we will apply and destroy it to avoid additional costs. This exercise will provide valuable practice in managing complex configurations and manipulating data sources and resources in Terraform. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Define a data source to fetch the AMI ID for the NGINX Bitnami AMI. 12 | 2. 
Extend the `ami_ids` local to include an entry for NGINX. 13 | 3. Add another object in the `ec2_instance_config_list` to deploy an instance with the NGINX image. 14 | 15 | ## Step-by-Step Guide 16 | 17 | 1. Start by defining a data source for the NGINX Bitnami AMI. The structure is similar to the Ubuntu one, but we need to change the name we filter by. Where can we find this information? (Hint: check the AWS console; we can use the same method as we did in the past to find the AMI ID, but now we should look for the AMI name.) 18 | 19 | ``` 20 | data "aws_ami" "nginx" { 21 | most_recent = true 22 | 23 | filter { 24 | name = "name" 25 | values = ["bitnami-nginx-1.25.4-*-linux-debian-12-x86_64-hvm-ebs-*"] 26 | } 27 | 28 | filter { 29 | name = "virtualization-type" 30 | values = ["hvm"] 31 | } 32 | } 33 | ``` 34 | 35 | 2. Extend our `ami_ids` local to include an entry for NGINX. 36 | 37 | ``` 38 | locals { 39 | ami_ids = { 40 | ubuntu = data.aws_ami.ubuntu.id 41 | nginx = data.aws_ami.nginx.id 42 | } 43 | } 44 | ``` 45 | 46 | 3. Now extend our `ec2_instance_config_list` variable to include another object for an NGINX instance. 47 | 48 | ``` 49 | ec2_instance_config_list = [ 50 | { 51 | instance_type = "t2.micro", 52 | ami = "ubuntu" 53 | }, 54 | { 55 | instance_type = "t2.micro", 56 | ami = "nginx" 57 | } 58 | ] 59 | ``` 60 | 61 | 4. Run a `terraform apply` command and inspect the changes. Wasn’t it easy to extend our configuration without having to touch the `aws_instance.from_list` resource? This points towards well-designed code! 62 | 5. Make sure to destroy the resources after you complete all the steps! 63 | 64 | ## Congratulations on Completing the Exercise! 65 | 66 | Well done on completing this exercise! You've successfully learned how to create multiple EC2 instances based on a list in Terraform. This is a significant step forward in managing complex configurations and manipulating data sources and resources in Terraform. Keep up the good work!
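For reference, the `aws_instance.from_list` resource mentioned in step 4 was built in an earlier exercise; a sketch of the count-based pattern it uses (the exact attribute layout is an assumption based on the variable and local shown above) looks like this:

```
# Sketch: one instance per object in the list; the ami key of each object
# is looked up in local.ami_ids, which is why adding the nginx entry
# required no changes to this resource.
resource "aws_instance" "from_list" {
  count = length(var.ec2_instance_config_list)

  ami           = local.ami_ids[var.ec2_instance_config_list[count.index].ami]
  instance_type = var.ec2_instance_config_list[count.index].instance_type
}
```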
67 | -------------------------------------------------------------------------------- /exercises/exercise35-using_aws_vpc_module.md: -------------------------------------------------------------------------------- 1 | # Using the AWS VPC Module 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will be using the AWS VPC Module to set up a virtual private cloud (VPC) with specific subnet configurations. This exercise is designed to help you become more familiar with Terraform modules and their application in creating and managing AWS resources, particularly VPCs and subnets. By the end of this exercise, you will have a deeper understanding of how to use data sources, how to use the AWS VPC module to create a VPC, and how to specify the configuration for private and public subnets. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Use the public AWS VPC module to create a VPC named `12-public-modules` with a CIDR block of `10.0.0.0/16`. 12 | 2. Fetch the available Availability Zones with a data source. 13 | 3. Extend the VPC module to create a private subnet with a CIDR block of `10.0.0.0/24`. 14 | 4. Extend the VPC module to create a public subnet with a CIDR block of `10.0.128.0/24`. 15 | 16 | ## Step-by-Step Guide 17 | 18 | 1. Create a new folder `12-public-modules` and add a configuration block to the project’s directory. Configure the AWS provider to the relevant region that you have been using. 19 | 20 | ``` 21 | terraform { 22 | required_version = "~> 1.7" 23 | required_providers { 24 | aws = { 25 | source = "hashicorp/aws" 26 | version = "~> 5.0" 27 | } 28 | } 29 | } 30 | 31 | provider "aws" { 32 | region = "eu-west-1" 33 | } 34 | ``` 35 | 36 | 2. Use a data source to fetch the available availability zones for the specified region. 
37 | 38 | ``` 39 | data "aws_availability_zones" "azs" { 40 | state = "available" 41 | } 42 | ``` 43 | 44 | 3. Let’s now use the public VPC module from AWS to create our VPC and subnets. Use version `5.5.3` to ensure compatibility. Let’s use the following values for the VPC information: 45 | 46 | 1. VPC CIDR block: `10.0.0.0/16` 47 | 2. VPC name: `12-public-modules` 48 | 3. VPC availability zones: names from the data source we have created. 49 | 4. Private subnet CIDRs: `["10.0.0.0/24"]` 50 | 5. Public subnet CIDRs: `["10.0.128.0/24"]` 51 | 52 | ``` 53 | module "vpc" { 54 | source = "terraform-aws-modules/vpc/aws" 55 | version = "5.5.3" 56 | 57 | cidr = "10.0.0.0/16" 58 | name = "12-public-modules" 59 | azs = data.aws_availability_zones.azs.names 60 | private_subnets = ["10.0.0.0/24"] 61 | public_subnets = ["10.0.128.0/24"] 62 | } 63 | ``` 64 | 65 | 4. Make sure to destroy the resources after you complete all the steps! 66 | 67 | ## Congratulations on Completing the Exercise! 68 | 69 | Great job on completing this exercise! You've taken another step in mastering Terraform and AWS by learning how to work with AWS public modules. Keep up the good work! 70 | -------------------------------------------------------------------------------- /exercises/exercise38-using_object_inputs.md: -------------------------------------------------------------------------------- 1 | # Migrating to Using Object Variables 2 | 3 | ## Introduction 4 | 5 | In this exercise, you will be working on migrating to using object variables in a Terraform module. You'll start by creating a new `aws_vpc` resource within the module, using a `vpc_cidr` variable to provide the CIDR block of the VPC. You'll then add tags to the created VPC, and run a `terraform apply` operation. Following this, you'll modify the `vpc_cidr` variable to become an object that receives both the VPC CIDR block and VPC name, to ensure a successful migration without introducing any changes to the infrastructure. 
This exercise provides a practical demonstration of how to handle variable migrations in Terraform, allowing you to manage complex configurations with ease. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create a new `aws_vpc` resource within the module. Leverage the `vpc_cidr` variable to provide the CIDR block of the VPC. 12 | 2. Add sensible tags to the created VPC, including the `Name` tag. 13 | 3. Migrate the variable `vpc_cidr` to an object that receives both the VPC CIDR block and VPC name, and migrate the code without introducing any changes to the infrastructure. 14 | 15 | ## Step-by-Step Guide 16 | 17 | 1. Create a new `aws_vpc` resource within the module. Use the `vpc_cidr` variable to provide the CIDR block of the VPC. 18 | 2. Add tags to the created VPC, including the `Name` tag. 19 | 3. Run `terraform apply` and check in the plan that the VPC will be created. Confirm the operation and verify that the VPC is successfully created. 20 | 4. Migrate the variable `vpc_cidr` to be an object that receives both the VPC CIDR block and VPC name. Rename the variable to be more suited to the properties it contains. 21 | 22 | ``` 23 | variable "vpc_config" { 24 | type = object({ 25 | cidr_block = string 26 | name = string 27 | }) 28 | 29 | validation { 30 | condition = can(cidrnetmask(var.vpc_config.cidr_block)) 31 | error_message = "The cidr_block config option must contain a valid CIDR block." 32 | } 33 | } 34 | ``` 35 | 36 | 5. Also, make sure to refactor the `aws_vpc` resource to correctly reference the values received via the new variable. The code calling the module also needs to be refactored.
37 | 38 | ``` 39 | # 13-local-modules/modules/networking/vpc.tf 40 | 41 | resource "aws_vpc" "this" { 42 | cidr_block = var.vpc_config.cidr_block 43 | 44 | tags = { 45 | Name = var.vpc_config.name 46 | } 47 | } 48 | 49 | --- 50 | 51 | # 13-local-modules/networking.tf 52 | 53 | module "vpc" { 54 | source = "./modules/networking" 55 | 56 | vpc_config = { 57 | cidr_block = "10.0.0.0/16" 58 | name = "13-local-modules" 59 | } 60 | } 61 | ``` 62 | 63 | 6. Confirm that we did not introduce any unwanted changes by running a `terraform plan` command and confirming that there are no changes to the infrastructure. 64 | 7. Make sure to destroy the resources after you complete all the steps! 65 | 66 | ## Congratulations on Completing the Exercise! 67 | 68 | Great job on completing this exercise! You've successfully learned how to migrate to using object variables in Terraform. This is a crucial skill in handling complex configurations. Remember, we're not done yet! In the next few exercises, we will continue to implement this module. Keep up the good work! 69 | -------------------------------------------------------------------------------- /exercises/exercise43-testing_module.md: -------------------------------------------------------------------------------- 1 | # Testing Our Module with EC2 Instances 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will be testing our module with Amazon EC2 instances. We will ensure that our networking module is functioning correctly, and then we will create a new EC2 instance inside one of our created subnets. This will help us understand how to integrate our module with other AWS services and verify its functionality. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Start by ensuring that the networking module is correctly working by running `terraform apply` and ensuring that all resources are created. 10 | 2. 
Create a new EC2 instance using the Ubuntu AMI, and deploy this EC2 instance inside the `subnet_1` subnet that we have created with the module. 11 | 12 | ``` 13 | locals { 14 | project_name = "13-local-modules" 15 | } 16 | 17 | data "aws_ami" "ubuntu" { 18 | most_recent = true 19 | owners = ["099720109477"] # Owner is Canonical 20 | 21 | filter { 22 | name = "name" 23 | values = ["ubuntu/images/hvm-ssd/ubuntu-*-22.04-amd64-server-*"] 24 | } 25 | 26 | filter { 27 | name = "virtualization-type" 28 | values = ["hvm"] 29 | } 30 | } 31 | 32 | resource "aws_instance" "this" { 33 | ami = data.aws_ami.ubuntu.id 34 | instance_type = "t2.micro" 35 | subnet_id = module.vpc.private_subnets["subnet_1"].subnet_id 36 | 37 | tags = { 38 | Name = local.project_name 39 | Project = local.project_name 40 | } 41 | } 42 | ``` 43 | 44 | 3. Make sure that everything is working by running `terraform apply` and inspecting the created infrastructure in the AWS console. 45 | 46 | 4. Make sure to destroy the resources after you complete all the steps! 47 | 48 | ## Congratulations on Completing the Exercise! 49 | 50 | Great job on completing this exercise! You've successfully tested a module with Amazon EC2 instances and integrated it with other AWS services. This is a significant step in understanding how to work with Terraform and AWS. Keep up the good work! 51 | -------------------------------------------------------------------------------- /exercises/exercise49-removed_block.md: -------------------------------------------------------------------------------- 1 | # Removing Infrastructure from Terraform 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will explore how to remove infrastructure from Terraform. This process involves creating an S3 bucket, removing it from the Terraform configuration using the Command Line Interface (CLI), and then reimporting it. We will also experiment with the `removed` block and its functionality. 
This exercise is designed to give you a better understanding of how Terraform handles resource management and the removal of resources. 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create a new S3 bucket in Terraform. 12 | 2. Remove the S3 bucket from the Terraform state using the CLI approach. 13 | 3. Import the bucket once again and try using the `removed` block for forgetting the resource. 14 | 15 | ## Step-by-Step Guide 16 | 17 | 1. Let’s now evaluate how we can make Terraform forget objects and stop tracking them in the state. This is useful when we want to keep the object but remove it from a specific Terraform configuration. 18 | 2. Add a new file named `remove.tf` and create a new S3 bucket inside the file. 19 | 20 | ``` 21 | resource "aws_s3_bucket" "my_bucket" { 22 | bucket = "random-name-" 23 | } 24 | ``` 25 | 26 | 3. Let’s first try the CLI approach. Run `terraform state rm -dry-run aws_s3_bucket.my_bucket`. This will only show what Terraform plans to forget, without actually forgetting it. Run the command again without the `-dry-run` option. This will remove the resource from the Terraform state. 27 | 4. Delete (or comment out) the S3 bucket resource block from the file and run `terraform apply` to confirm that nothing will be deleted. The bucket still exists, but now it’s not tracked in our state anymore. 28 | 5. Add the bucket resource block (or uncomment it) back and import the resource into the configuration. 29 | 6. Let’s now try with the `removed` block. Add the following code to the `remove.tf` file. Comment out the S3 bucket code and run `terraform plan`. 30 | 31 | ``` 32 | removed { 33 | from = aws_s3_bucket.my_bucket 34 | 35 | lifecycle { 36 | destroy = true 37 | } 38 | } 39 | ``` 40 | 41 | 7.
Try changing the `destroy` flag to `false` and run `terraform plan` once again. How is this different from the previous scenario where `destroy` was set to `true`? 42 | 43 | ## Congratulations on Completing the Exercise! 44 | 45 | Great job on completing this exercise! You've taken an important step in understanding how Terraform handles resource management and the removal of resources. Keep up the good work! 46 | -------------------------------------------------------------------------------- /exercises/exercise51-workspaces_cli.md: -------------------------------------------------------------------------------- 1 | # Creating Workspaces in the Terraform CLI 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will delve into the creation of workspaces in the Terraform Command Line Interface (CLI). We will walk through the process step by step, from setting up the initial configuration to creating an S3 bucket for demonstration purposes, and finally, creating a new workspace. This exercise will equip you with the practical knowledge to utilize workspaces effectively in your own projects. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Create a new folder to place the files of this section. Configure the Terraform required version and required providers, as well as the AWS provider region. 10 | 2. Run `terraform workspace -help`, and have a look through the documentation around CLI workspaces. 11 | 3. Run `terraform workspace show` and inspect which workspace we are working with. 12 | 4. Run `terraform workspace list` to list all the available workspaces. 13 | 5. Create a new S3 bucket for demonstration purposes. Include a random ID at the end of the bucket name. Confirm the changes with `terraform apply`. 14 | 15 | ``` 16 | resource "random_id" "bucket_suffix" { 17 | byte_length = 4 18 | } 19 | 20 | resource "aws_s3_bucket" "this" { 21 | bucket = "workspaces-demo-${random_id.bucket_suffix.hex}" 22 | } 23 | ``` 24 | 25 | 26 | 6.
Create a new workspace with `terraform workspace new dev`. 27 | 7. Run `terraform apply` and check what will happen. 28 | 8. Make sure to destroy the infrastructure before finishing the exercise! 29 | 30 | ## Congratulations on Completing the Exercise! 31 | 32 | Fantastic job on completing this exercise! You've taken key steps in understanding how to create and manage workspaces in the Terraform CLI. Keep up the great work! 33 | -------------------------------------------------------------------------------- /exercises/exercise52-multiple_workspaces.md: -------------------------------------------------------------------------------- 1 | # Working with Multiple Workspaces 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will explore how to work with multiple Terraform workspaces. We will learn how to extend the bucket name to include the `terraform.workspace` expression in its value, and apply changes across different workspaces. We will also delve into the appropriate and inappropriate use of the `terraform.workspace` value in our Terraform configuration. By the end of this exercise, you will understand how to efficiently manage and manipulate multiple workspaces in Terraform. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Extend the bucket name to include the `terraform.workspace` expression in its value. Apply the changes with terraform apply in both `dev` and `default` workspaces. 10 | 11 | ``` 12 | resource "random_id" "bucket_suffix" { 13 | byte_length = 4 14 | } 15 | 16 | resource "aws_s3_bucket" "this" { 17 | bucket = "workspaces-demo-${terraform.workspace}-${random_id.bucket_suffix.hex}" 18 | } 19 | ``` 20 | 21 | 2. **Bad code spoiler alert!** Let’s explore how not to use the value from terraform.workspace in our Terraform configuration. For that, let’s first create two more workspaces: `prod` and `staging`. Now extend the S3 bucket configuration to deploy multiple buckets based on the following rules: 22 | 23 | 1. 
3 buckets should be deployed in the `prod` workspace. 24 | 2. 2 buckets should be deployed in the `staging` workspace. 25 | 3. 1 bucket should be deployed in the `dev` workspace. 26 | 27 | We also need to add the `count.index` value to the bucket name so that it is unique. Here is how **not** to use `terraform.workspace`! 28 | 29 | ``` 30 | resource "aws_s3_bucket" "this" { 31 | count = terraform.workspace == "prod" ? 3 : terraform.workspace == "staging" ? 2 : 1 32 | bucket = "workspaces-demo-${terraform.workspace}-${count.index}-${random_id.bucket_suffix.hex}" 33 | } 34 | ``` 35 | 36 | 3. Try applying this configuration in the three different workspaces we have created. Although this works, the code above is very difficult to read and maintain. There is a much better way of doing that, and we will do that in the next exercise. 37 | 38 | ## Congratulations on Completing the Exercise! 39 | 40 | Well done on completing this exercise! You've made some great progress in understanding how to manage and manipulate multiple workspaces in Terraform. This is a significant step towards mastering Terraform. Keep up the good work! 41 | -------------------------------------------------------------------------------- /exercises/exercise53-tfvars_workspace_configuration.md: -------------------------------------------------------------------------------- 1 | # Using `.tfvars` Files to Store Workspace Configuration 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will learn how to use `.tfvars` files to store workspace-specific configuration in Terraform. We will create different `.tfvars` files for different workspaces and learn how to apply them to manage our infrastructure. This method allows for easy scalability and modification of our configuration depending on the workspace in use. This exercise will guide you through the steps to implement this strategy. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. 
Define a new variable `bucket_count` of type number, which will receive the count of buckets we should deploy. 10 | 2. Create three `.tfvars` files, each one for a workspace. Make sure that the name of each file matches its workspace name, followed by the `.tfvars` extension (e.g., `dev.tfvars` for the `dev` workspace). Define the variable in each file with the correct number of buckets we would like to deploy per workspace. 11 | 12 | ``` 13 | # dev.tfvars 14 | 15 | bucket_count = 1 16 | 17 | --- 18 | 19 | # staging.tfvars 20 | 21 | bucket_count = 2 22 | 23 | --- 24 | 25 | # prod.tfvars 26 | 27 | bucket_count = 3 28 | ``` 29 | 30 | 3. Run `terraform apply -var-file=$(terraform workspace show).tfvars`, which will leverage the current workspace to find the correct `.tfvars` file. You can create an alias for this command to make your life easier, since the command remains the same regardless of the selected workspace. 31 | 4. Extend the configuration to accommodate a new workspace by creating a workspace named `int` and its respective `int.tfvars` file. Pass a suitable value to `bucket_count`, and run the `terraform apply` command again, making sure to pass the `-var-file` option correctly. As you can see, with this approach we can easily extend our configuration without having to touch the underlying Terraform code. 32 | 33 | ## Congratulations on Completing the Exercise! 34 | 35 | Well done on completing this exercise! You've taken a big step towards mastering the use of `.tfvars` files for workspace-specific configuration in Terraform. Keep up the good work! 36 | -------------------------------------------------------------------------------- /exercises/exercise54-tf_cloud_workspace.md: -------------------------------------------------------------------------------- 1 | # Creating a Workspace in Terraform Cloud 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will be creating a workspace in Terraform Cloud. This is a fundamental step in setting up your Infrastructure as Code (IaC) practices.
Terraform Cloud is a hosted service that helps you manage and automate your infrastructure, making it easy to create, use, and share infrastructure safely and efficiently. This guide will walk you through the process step by step, from creating an account to integrating your workspace via the CLI method. Let's get started! 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create an account and log in to Terraform Cloud. 12 | 2. Create a new organization and workspace integrated via the CLI method. 13 | 14 | ## Step-by-Step Guide 15 | 16 | 1. Create an account and log in to Terraform Cloud. The login process should be fairly straightforward, and, if asked, I would recommend using GitHub to make the VCS integration easier later on. 17 | 2. Create an organization following the steps specified in the UI. 18 | 3. Click on “Create Workspace” on the Dashboard page. 19 | 4. Select “CLI-Driven Workflow” and provide the necessary information to identify the workspace. 20 | 5. If everything goes well, then you should see a page showing that Terraform Cloud is waiting for the configuration. Let’s configure it in the next exercise! 21 | 22 | ## Congratulations on Completing the Exercise! 23 | 24 | Great job on completing this exercise! Keep up the good work! 25 | -------------------------------------------------------------------------------- /exercises/exercise55-tf_cloud_resources.md: -------------------------------------------------------------------------------- 1 | # Creating Our First Resource with Terraform Cloud 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will create our first resource using Terraform Cloud. We will generate a new `random_id`, inspect the results of the run, and apply the run via the User Interface (UI).
This exercise will provide a practical understanding of how Terraform Cloud works and the kind of information we can visualize within the platform. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Create a new `random_id` of `byte_length = 4`, and expose its hexadecimal value as an output. 10 | 11 | ``` 12 | resource "random_id" "this" { 13 | byte_length = 4 14 | } 15 | 16 | output "random_id" { 17 | value = random_id.this.hex 18 | } 19 | ``` 20 | 21 | 2. Run `terraform apply` and inspect the results of the run. Before confirming on the CLI, jump to Terraform Cloud and inspect what has happened within the workflow. 22 | 3. Apply the run via the UI and check what happens in the terminal once we do that. Also inspect the changes in the UI, and what kind of information we can visualize there. 23 | 24 | ## Congratulations on Completing the Exercise! 25 | 26 | Great job on completing this exercise! You've taken a big step in understanding how to create resources using Terraform Cloud. Keep up the good work! 27 | -------------------------------------------------------------------------------- /exercises/exercise56-tf_cloud_authenticating_aws.md: -------------------------------------------------------------------------------- 1 | # Authenticating into AWS with Environment Variables 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will explore how to authenticate into AWS using environment variables. We will create a new access key for our AWS user, set up these keys as environment variables in Terraform Cloud, and then deploy a new S3 bucket to confirm that our setup is working correctly. Let's dive in! 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create a new access key for the user we have been using in AWS. 12 | 2.
Add the new access key information as environment variables in the created workspace in Terraform Cloud. 13 | 3. Create a new S3 bucket in the Terraform project, and make sure it is successfully deployed in AWS. 14 | 15 | ## Step-by-Step Guide 16 | 17 | 1. Log in to AWS with the user you have been using throughout the course. 18 | 2. Navigate to the IAM console within the AWS platform, and select the relevant user. 19 | 3. Under the “Security Credentials” tab, click on “Create access key”. This will create a second access key for you to use in Terraform Cloud. 20 | 4. Select Command Line Interface (CLI), accept the recommendations, and click on “Next”. 21 | 5. Add relevant descriptions, and click on “Save”. 22 | 6. In Terraform Cloud, select the workspace `terraform-cli`, and click on “Variables” on the left-side menu. 23 | 7. Scroll down until you can see a place to add variables. 24 | 8. Add a new environment variable `AWS_ACCESS_KEY_ID` to store the Key ID from the newly created access key. Mark the environment variable as sensitive. 25 | 9. Add a new environment variable `AWS_SECRET_ACCESS_KEY` to store the Secret Access Key from the newly created access key. Mark the environment variable as sensitive. 26 | 10. Back in the Terraform project, add the `hashicorp/aws` provider, and configure it based on the region you are working with. 27 | 11. Create a new file `s3.tf` to create a new S3 bucket. Use the `random_id` resource we have created to make the name of the bucket unique. 28 | 29 | ``` 30 | resource "aws_s3_bucket" "tf_cloud" { 31 | bucket = "terraform-cloud-${random_id.this.hex}" 32 | 33 | tags = { 34 | CreatedBy = "Terraform Cloud" 35 | } 36 | } 37 | ``` 38 | 39 | 12. Run `terraform apply` and go back to Terraform Cloud. Confirm that everything is working by confirming the plan and inspecting the AWS S3 console. 40 | 41 | ## Congratulations on Completing the Exercise! 42 | 43 | Well done on successfully completing this exercise!
You've taken important steps in learning how to authenticate into AWS using environment variables and deploying resources securely through Terraform Cloud. Keep up the excellent work! 44 | -------------------------------------------------------------------------------- /exercises/exercise57-tf_cloud_workspace_variables.md: -------------------------------------------------------------------------------- 1 | # Understanding Workspace Variables 2 | 3 | ## Introduction 4 | 5 | In this exercise, you'll learn how to use workspace variables in Terraform. This is an important skill to have as it allows you to manage and organize your Terraform configurations in a more efficient manner. You will specifically be creating a variable for an EC2 instance type and storing its value in Terraform Cloud. You will use this variable to deploy an EC2 instance and then destroy all resources at the end of the exercise. This hands-on exercise will help you understand the process of using workspace variables in Terraform and provide you with practical experience. Let's get started! 6 | 7 | ## Desired Outcome 8 | 9 | If you wish to give it a shot before looking into the detailed step-by-step and the solution videos, here is an overview of what the created solution should deploy: 10 | 11 | 1. Create a variable to specify the instance type for an EC2 instance, and store its value in Terraform Cloud. 12 | 2. Deploy the EC2 instance with an Ubuntu AMI. 13 | 3. Destroy all resources at the end of the exercise. 14 | 15 | ## Step-by-Step Guide 16 | 17 | 1. In the Terraform project code, create a new variable named `ec2_instance_type`, of type string, and with a validation ensuring that its value matches the free-tier instance type for your region. 18 | 2. Navigate to the `terraform-cli` workspace in Terraform Cloud, and open the Variables section on the left-side menu. 19 | 3.
Create a new Terraform variable (not Environment variable), name it `ec2_instance_type`, and add the value of the instance type that is included in your free tier. 20 | 4. In the Terraform project code, create a new `compute.tf` file and add the necessary code for deploying an EC2 instance with the Ubuntu AMI. Use the variable to define the instance type. Also add relevant tags to the EC2 instance. 21 | 5. Run `terraform apply`, and double-check that the variable value is being correctly populated (you can do that in the Terraform Cloud portal, or via the CLI). 22 | 6. After the apply is successful, run `terraform destroy` in the CLI and inspect what is different in the Terraform Cloud console. Confirm the destroy operation, and wait for it to complete. 23 | 24 | ## Congratulations on Completing the Exercise! 25 | 26 | Well done on completing the exercise on understanding workspace variables in Terraform! You've taken an important step in mastering the use of Terraform configurations. Keep up the great work! 27 | -------------------------------------------------------------------------------- /exercises/exercise58-tf_cloud_vcs_integration.md: -------------------------------------------------------------------------------- 1 | # Creating a VPC and Subnet via VCS Integration 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will guide you step-by-step through the process of creating a Virtual Private Cloud (VPC) and a Subnet using VCS Integration. We will be using the repository from the previous exercise, modifying it to work with our new configuration. We'll also guide you on how to set up AWS credentials for the workspace, and how to trigger runs to verify everything is working as expected. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. If you have not done so yet, create a new private repository in the GitHub account you used to log in to Terraform Cloud. Clone the repository to your local machine.
Make sure it’s not nested under the folder that we are using for the course code! Instead, clone it somewhere else (it can be a sibling directory, for example). 10 | 2. Create a new `provider.tf` file, and add the same configuration we have used for the Terraform Cloud section of our Terraform repository. Remove the `random` provider, and update the workspace to match the name of the newly created one for the VCS integration. 11 | 3. Create a new `variables.tf` file, and a variable `vpc_cidr` of type string. 12 | 13 | ``` 14 | variable "vpc_cidr" { 15 | type = string 16 | } 17 | ``` 18 | 19 | 4. Create a new `networking.tf` file, and add an `aws_vpc` resource that references the created variable to define its CIDR block. 20 | 21 | ``` 22 | resource "aws_vpc" "tf_cloud" { 23 | cidr_block = var.vpc_cidr 24 | 25 | tags = { 26 | Name = "terraform-cloud" 27 | } 28 | } 29 | ``` 30 | 31 | 5. Add, commit, and push the changes to GitHub. Inspect whether there are any changes in the Terraform Cloud workspace. 32 | 6. Configure the `vpc_cidr` variable in the Terraform Cloud platform, and trigger a new run via the UI. 33 | 7. This will not work by default, since we have not set up AWS credentials for the newly created workspace. Set up the necessary AWS credentials as we have already done in previous exercises, and re-trigger the run via the UI. Make sure that everything is working as expected by confirming the plan and inspecting the results. 34 | 8. Return to the code and create a new variable `subnet_cidr` of type string, and a new subnet resource that references the newly created `subnet_cidr` variable. Format the code and commit the changes. 35 | 36 | ``` 37 | variable "subnet_cidr" { 38 | type = string 39 | } 40 | 41 | resource "aws_subnet" "tf_cloud" { 42 | cidr_block = var.subnet_cidr 43 | vpc_id = aws_vpc.tf_cloud.id 44 | 45 | tags = { 46 | Name = "terraform-cloud" 47 | } 48 | } 49 | ``` 50 | 51 | 9.
Before pushing the code, configure the `subnet_cidr` variable to contain a suitable value. Once this is set up, push the code to GitHub. 52 | 10. Confirm that the newly pushed code triggers a run automatically, and confirm that the plan intends to create a new subnet. Apply the changes by approving the plan. 53 | 54 | ## Congratulations on Completing the Exercise! 55 | 56 | Congratulations on successfully completing this exercise! You have learned how to create a Virtual Private Cloud (VPC) and a Subnet using VCS Integration. This is a valuable skill in cloud computing, and you should be proud of your achievement. Keep up the good work! 57 | -------------------------------------------------------------------------------- /exercises/exercise59-tf_cloud_speculative_plans.md: -------------------------------------------------------------------------------- 1 | # Generating Speculative Plans for Pull Requests 2 | 3 | ## Introduction 4 | 5 | This exercise will guide you through the process of creating speculative plans for pull requests in Terraform Cloud. Speculative plans are dry runs that provide a preview of what would happen if you apply a proposed change. You will learn how to trigger and visualize a speculative plan by creating and committing to a new branch, then creating a pull request. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. In the repository we have created for our VCS integration, create a new `s3.tf` file, and create a new S3 bucket. 10 | 2. Instead of committing to the main branch, create a new branch and commit the code in this new branch. Push the changes to the respective remote branch. 11 | 3. Open the GitHub repository, and create a pull request from the new branch into the main branch. This will trigger a speculative plan, and we can visualize that by expanding the “checks” section of our pull request, and by clicking on “Details” next to the Terraform Cloud check. 12 | 4. 
Inspect the run details, and make sure that the run cannot be applied, since it’s a speculative plan. 13 | 5. Return to GitHub and merge the pull request. Return to Terraform Cloud, and confirm that a new run was triggered after merging the pull request. This run can now be applied, since it was not triggered from a pull request. 14 | 15 | ## Congratulations on Completing the Exercise! 16 | 17 | Great job on completing this exercise! Your understanding of speculative plans in Terraform Cloud has taken a big step forward. Keep up the good work! 18 | -------------------------------------------------------------------------------- /exercises/exercise60-tf_cloud_private_registries.md: -------------------------------------------------------------------------------- 1 | # Publishing Our Networking Module in a Private Registry 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will walk through the process of publishing our networking module in a private registry. This will allow us to share and reuse the code across different projects. We'll use Terraform Cloud and GitHub for this task, and we'll provide step-by-step instructions on how to publish the module, include it in Terraform projects, and move existing resources into the module. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Navigate to the home page of Terraform Cloud and click on “Registry” on the left-side menu. 10 | 2. Click on “Publish” and select “Module”. 11 | 3. Use the already installed GitHub App, and select the repository containing the code for our module (we have already created a repository with a networking module in a previous section, you can use that!). 12 | 4. Select “Tag” for the module publishing type, and click on “Publish module”. 13 | 5. Once the module is successfully published, you should be able to see some information on the right side of the screen regarding how to include this module in Terraform projects. 
Copy the code block listed under “Usage Instructions”, and head back to the new folder we have created to work with the VCS integration. 14 | 6. Add the code to create an instance of the networking module. Replace `<YOUR_ORG>` with the name of your Terraform Cloud organization, and pick an availability zone from your region for `az` (the values below are placeholders): 15 | 16 | ``` 17 | module "networking-tf-course" { 18 | source = "app.terraform.io/<YOUR_ORG>/networking-tf-course/aws" 19 | version = "0.1.1" 20 | 21 | vpc_config = { 22 | cidr_block = var.vpc_cidr 23 | name = "terraform-cloud" 24 | } 25 | 26 | subnet_config = { 27 | subnet_1 = { 28 | cidr_block = "10.0.0.0/24" 29 | az = "eu-west-1a" 30 | } 31 | } 32 | } 33 | ``` 34 | 35 | 7. Make sure that we use the same CIDR blocks that we have used to create the VPC and the subnet, and move the existing resources into the module with the help of `moved` blocks. 36 | 37 | ``` 38 | moved { 39 | from = aws_vpc.tf_cloud 40 | to = module.networking-tf-course.aws_vpc.this 41 | } 42 | 43 | moved { 44 | from = aws_subnet.tf_cloud 45 | to = module.networking-tf-course.aws_subnet.this["subnet_1"] 46 | } 47 | ``` 48 | 49 | 8. Commit the changes to a new branch named however you prefer, and push the changes to GitHub. Create a new PR and inspect the speculative plan that was triggered. Confirm that the renaming of the subnet is the only intended change. 50 | 9. Merge the PR into the main branch, wait for the new Terraform Cloud plan to finish, and approve it via the UI. 51 | 52 | ## Congratulations on Completing the Exercise! 53 | 54 | Fantastic work on completing this exercise! You've successfully published a networking module in a private registry and have learned how to share and reuse code across different projects. This is a major stride in your journey with Terraform Cloud. Keep up the great work!
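The check in step 8 above — confirming that the moves are the only intended change — can also be done mechanically. In the machine-readable plan (`terraform show -json tfplan`), a resource that is being moved carries a `previous_address`, and a pure refactor contains no create or delete actions. Below is a minimal sketch of that check; the embedded plan excerpt is a simplified assumption for illustration, not a complete Terraform plan document:

```python
import json

# Simplified excerpt of the JSON produced by `terraform show -json tfplan`.
# Real plans contain many more fields; only the ones used below are assumed here.
plan_json = """
{
  "resource_changes": [
    {
      "address": "module.networking-tf-course.aws_vpc.this",
      "previous_address": "aws_vpc.tf_cloud",
      "change": {"actions": ["no-op"]}
    },
    {
      "address": "module.networking-tf-course.aws_subnet.this[\\"subnet_1\\"]",
      "previous_address": "aws_subnet.tf_cloud",
      "change": {"actions": ["no-op"]}
    }
  ]
}
"""

def is_pure_refactor(plan):
    """Return True when the plan only moves/renames resources:
    no resource change contains a create or delete action."""
    return all(
        not ({"create", "delete"} & set(rc["change"]["actions"]))
        for rc in plan["resource_changes"]
    )

plan = json.loads(plan_json)
# Moved resources are the ones that report a previous address.
moved = [rc for rc in plan["resource_changes"] if rc.get("previous_address")]
print(is_pure_refactor(plan), len(moved))  # True 2
```

If `is_pure_refactor` returns `False`, the `moved` blocks are likely not matching the old resource addresses, and Terraform would destroy and recreate the resources instead of moving them.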
55 | -------------------------------------------------------------------------------- /exercises/exercise61-tf_cloud_cleanup.md: -------------------------------------------------------------------------------- 1 | # Cleaning Up Resources in Terraform Cloud 2 | 3 | ## Introduction 4 | 5 | In this exercise, we will be walking through the process of cleaning up resources in Terraform Cloud. A step-by-step guide will instruct you on how to navigate the workspace settings, queue and confirm destruction plans, check for tracked resources, and delete workspaces and modules. This guide will ensure that you efficiently and effectively manage your resources in Terraform Cloud. 6 | 7 | ## Step-by-Step Guide 8 | 9 | 1. Head over to Terraform Cloud, and navigate to the `terraform-vcs` workspace. 10 | 2. Click on “Settings” → “Destruction and Deletion” 11 | 3. Click on “Queue Destroy Plan”, and confirm it once the plan has completed. 12 | 4. Navigate to the home page of Terraform Cloud, and then navigate to the `terraform-cli` workspace. 13 | 5. Confirm that there are no resources tracked under this workspace, and delete the workspace by accessing “Settings” → “Destruction and Deletion”. 14 | 6. Navigate back to the `terraform-vcs` workspace and delete it as well. 15 | 7. Finally, navigate to the Registry page, select the published module, and then click on “Manage module for organization” and then on “Delete module”. Delete all providers and versions for the module, and confirm the deletion. 16 | 17 | ## Congratulations on Completing the Exercise! 18 | 19 | Well done! You have successfully completed this exercise. You now know how to clean up resources in Terraform Cloud. Keep up the great work! 20 | -------------------------------------------------------------------------------- /proj01-s3-static-website/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 
2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.38.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:axFddT4mkdtZREgkDXwXdzZGm1qxheF0fLN7S7bJJX4=", 9 | "zh:0d58264440fd28b6729990b48d8fd61e732f5570689d17bbbc0c5f2324d3dd00", 10 | "zh:175e24a3d399495fc91da359cc30a9fe06b7eeb98804816abcf1493859f6d28e", 11 | "zh:244a1f56d6710cc1a643f602a185b46d3cd064f6df60330006f92ab32f3ff60c", 12 | "zh:30dd99413867b1be808b656551a2f0452e4e37787f963780c51f1f85bf406441", 13 | "zh:3629d4e212c8ffd8e74c4ab9e9d22ca7fff803052366d011c014591fa65beb48", 14 | "zh:521badb184bbdde5dddb1228f7a241997db52ea51c9f8039ed5a626362952cf4", 15 | "zh:5580a937e1f5fa59c16c4b9802079aa45a16c7c69e5b7d4e97aebf2c0fb4bd00", 16 | "zh:87b801057d492ff0adc82ce6251871d87bdf5890749fe5753f447ec6fe4710ff", 17 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 18 | "zh:9c44e0c143f1d021440e9c448a9bc595f51a95e6cc382fcffe9db6d3b17f24c2", 19 | "zh:b7e6b7b182932a3dbb6ca5f8ebb8d37befe1456f3dffaafb37cee07dc0473696", 20 | "zh:d43fcf4f59cf79b1be3bec164d95fe9edc3fe39195a83226b911918a6538c8b3", 21 | "zh:ec3e383ce1e414f0bd7d3fe73409ff7d2777a5da27248b70fd5df1df323d920b", 22 | "zh:f729b443179bb115bbcbb0369fe46640de1c6dbd627b52694e9b3b8a41ec7881", 23 | "zh:fd532b707746145d3c6d3507bca2b8d44cc618b3d5006db99426221b71db7da7", 24 | ] 25 | } 26 | 27 | provider "registry.terraform.io/hashicorp/random" { 28 | version = "3.6.0" 29 | constraints = "~> 3.0" 30 | hashes = [ 31 | "h1:I8MBeauYA8J8yheLJ8oSMWqB0kovn16dF/wKZ1QTdkk=", 32 | "zh:03360ed3ecd31e8c5dac9c95fe0858be50f3e9a0d0c654b5e504109c2159287d", 33 | "zh:1c67ac51254ba2a2bb53a25e8ae7e4d076103483f55f39b426ec55e47d1fe211", 34 | "zh:24a17bba7f6d679538ff51b3a2f378cedadede97af8a1db7dad4fd8d6d50f829", 35 | "zh:30ffb297ffd1633175d6545d37c2217e2cef9545a6e03946e514c59c0859b77d", 36 | "zh:454ce4b3dbc73e6775f2f6605d45cee6e16c3872a2e66a2c97993d6e5cbd7055", 37 | 
"zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", 38 | "zh:91df0a9fab329aff2ff4cf26797592eb7a3a90b4a0c04d64ce186654e0cc6e17", 39 | "zh:aa57384b85622a9f7bfb5d4512ca88e61f22a9cea9f30febaa4c98c68ff0dc21", 40 | "zh:c4a3e329ba786ffb6f2b694e1fd41d413a7010f3a53c20b432325a94fa71e839", 41 | "zh:e2699bc9116447f96c53d55f2a00570f982e6f9935038c3810603572693712d0", 42 | "zh:e747c0fd5d7684e5bfad8aa0ca441903f15ae7a98a737ff6aca24ba223207e2c", 43 | "zh:f1ca75f417ce490368f047b63ec09fd003711ae48487fba90b4aba2ccf71920e", 44 | ] 45 | } 46 | -------------------------------------------------------------------------------- /proj01-s3-static-website/build/error.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | My Static S3 Website 5 | 9 | 13 | 14 | 15 | Ooops, this page doesn't exist! 16 | 17 | 18 | -------------------------------------------------------------------------------- /proj01-s3-static-website/build/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | My Static S3 Website 5 | 9 | 13 | 14 | 15 | This is the main index.html 16 | 17 | 18 | -------------------------------------------------------------------------------- /proj01-s3-static-website/outputs.tf: -------------------------------------------------------------------------------- 1 | output "static_website_endpoint" { 2 | value = aws_s3_bucket_website_configuration.static_website.website_endpoint 3 | } -------------------------------------------------------------------------------- /proj01-s3-static-website/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | random = { 9 | source = "hashicorp/random" 10 | version = "~> 3.0" 11 | } 12 | } 13 | } 14 | 15 | provider "aws" { 16 | region = "eu-west-1" 17 | } 
-------------------------------------------------------------------------------- /proj01-s3-static-website/s3.tf: -------------------------------------------------------------------------------- 1 | resource "random_id" "bucket_suffix" { 2 | byte_length = 4 3 | } 4 | 5 | resource "aws_s3_bucket" "static_website" { 6 | bucket = "terraform-course-project-1-${random_id.bucket_suffix.hex}" 7 | } 8 | 9 | resource "aws_s3_bucket_public_access_block" "static_website" { 10 | bucket = aws_s3_bucket.static_website.id 11 | block_public_acls = false 12 | block_public_policy = false 13 | ignore_public_acls = false 14 | restrict_public_buckets = false 15 | } 16 | 17 | resource "aws_s3_bucket_policy" "static_website_public_read" { 18 | bucket = aws_s3_bucket.static_website.id 19 | 20 | policy = jsonencode({ 21 | Version = "2012-10-17" 22 | Statement = [ 23 | { 24 | Sid = "PublicReadGetObject" 25 | Effect = "Allow" 26 | Principal = "*" 27 | Action = "s3:GetObject" 28 | Resource = "${aws_s3_bucket.static_website.arn}/*" 29 | } 30 | ] 31 | }) 32 | } 33 | 34 | resource "aws_s3_bucket_website_configuration" "static_website" { 35 | bucket = aws_s3_bucket.static_website.id 36 | 37 | index_document { 38 | suffix = "index.html" 39 | } 40 | 41 | error_document { 42 | key = "error.html" 43 | } 44 | } 45 | 46 | resource "aws_s3_object" "index_html" { 47 | bucket = aws_s3_bucket.static_website.id 48 | key = "index.html" 49 | source = "build/index.html" 50 | etag = filemd5("build/index.html") 51 | content_type = "text/html" 52 | } 53 | 54 | resource "aws_s3_object" "error_html" { 55 | bucket = aws_s3_bucket.static_website.id 56 | key = "error.html" 57 | source = "build/error.html" 58 | etag = filemd5("build/error.html") 59 | content_type = "text/html" 60 | } -------------------------------------------------------------------------------- /proj02-iam-users/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained 
automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.82.2" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:ce6Dw2y4PpuqAPtnQ0dO270dRTmwEARqnfffrE1VYJ8=", 9 | "zh:0262fc96012fb7e173e1b7beadd46dfc25b1dc7eaef95b90e936fc454724f1c8", 10 | "zh:397413613d27f4f54d16efcbf4f0a43c059bd8d827fe34287522ae182a992f9b", 11 | "zh:436c0c5d56e1da4f0a4c13129e12a0b519d12ab116aed52029b183f9806866f3", 12 | "zh:4d942d173a2553d8d532a333a0482a090f4e82a2238acf135578f163b6e68470", 13 | "zh:624aebc549bfbce06cc2ecfd8631932eb874ac7c10eb8466ce5b9a2fbdfdc724", 14 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 15 | "zh:9e632dee2dfdf01b371cca7854b1ec63ceefa75790e619b0642b34d5514c6733", 16 | "zh:a07567acb115b60a3df8f6048d12735b9b3bcf85ec92a62f77852e13d5a3c096", 17 | "zh:ab7002df1a1be6432ac0eb1b9f6f0dd3db90973cd5b1b0b33d2dae54553dfbd7", 18 | "zh:bc1ff65e2016b018b3e84db7249b2cd0433cb5c81dc81f9f6158f2197d6b9fde", 19 | "zh:bcad84b1d767f87af6e1ba3dc97fdb8f2ad5de9224f192f1412b09aba798c0a8", 20 | "zh:cf917dceaa0f9d55d9ff181b5dcc4d1e10af21b6671811b315ae2a6eda866a2a", 21 | "zh:d8e90ecfb3216f3cc13ccde5a16da64307abb6e22453aed2ac3067bbf689313b", 22 | "zh:d9054e0e40705df729682ad34c20db8695d57f182c65963abd151c6aba1ab0d3", 23 | "zh:ecf3a4f3c57eb7e89f71b8559e2a71e4cdf94eea0118ec4f2cb37e4f4d71a069", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /proj02-iam-users/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } -------------------------------------------------------------------------------- /proj02-iam-users/roles.tf: 
-------------------------------------------------------------------------------- 1 | locals { 2 | role_policies = { 3 | readonly = [ 4 | "ReadOnlyAccess" 5 | ] 6 | admin = [ 7 | "AdministratorAccess" 8 | ] 9 | auditor = [ 10 | "SecurityAudit" 11 | ] 12 | developer = [ 13 | "AmazonVPCFullAccess", 14 | "AmazonEC2FullAccess", 15 | "AmazonRDSFullAccess" 16 | ] 17 | } 18 | 19 | role_policies_list = flatten([ 20 | for role, policies in local.role_policies : [ 21 | for policy in policies : { 22 | role = role 23 | policy = policy 24 | } 25 | ] 26 | ]) 27 | } 28 | 29 | data "aws_caller_identity" "current" {} 30 | 31 | data "aws_iam_policy_document" "assume_role_policy" { 32 | for_each = toset(keys(local.role_policies)) 33 | 34 | statement { 35 | actions = ["sts:AssumeRole"] 36 | 37 | principals { 38 | type = "AWS" 39 | identifiers = [ 40 | for username in keys(aws_iam_user.users) : "arn:aws:iam::${data.aws_caller_identity.current.account_id}:user/${username}" 41 | if contains(local.users_map[username], each.value) 42 | ] 43 | } 44 | } 45 | } 46 | 47 | resource "aws_iam_role" "roles" { 48 | for_each = toset(keys(local.role_policies)) 49 | 50 | name = each.key 51 | assume_role_policy = data.aws_iam_policy_document.assume_role_policy[each.value].json 52 | } 53 | 54 | data "aws_iam_policy" "managed_policies" { 55 | for_each = toset(local.role_policies_list[*].policy) 56 | arn = "arn:aws:iam::aws:policy/${each.value}" 57 | } 58 | 59 | resource "aws_iam_role_policy_attachment" "role_policy_attachments" { 60 | count = length(local.role_policies_list) 61 | role = aws_iam_role.roles[ 62 | local.role_policies_list[count.index].role 63 | ].name 64 | policy_arn = data.aws_iam_policy.managed_policies[ 65 | local.role_policies_list[count.index].policy 66 | ].arn 67 | } -------------------------------------------------------------------------------- /proj02-iam-users/user-roles.yaml: -------------------------------------------------------------------------------- 1 | # Brief schema 
reference: 2 | # 3 | # users = { 4 | # username = string 5 | # roles = (readonly | developer | admin | auditor)[] 6 | # }[] 7 | 8 | users: 9 | - username: john 10 | roles: [readonly, developer] 11 | - username: jane 12 | roles: [admin, auditor] 13 | - username: lauro 14 | roles: [readonly] 15 | -------------------------------------------------------------------------------- /proj02-iam-users/users.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | users_from_yaml = yamldecode(file("${path.module}/user-roles.yaml")).users 3 | users_map = { 4 | for user_config in local.users_from_yaml : user_config.username => user_config.roles 5 | } 6 | } 7 | 8 | resource "aws_iam_user" "users" { 9 | for_each = toset(local.users_from_yaml[*].username) 10 | name = each.value 11 | } 12 | 13 | resource "aws_iam_user_login_profile" "users" { 14 | for_each = aws_iam_user.users 15 | user = each.value.name 16 | password_length = 8 17 | 18 | lifecycle { 19 | ignore_changes = [ 20 | password_length, 21 | password_reset_required, 22 | pgp_key 23 | ] 24 | } 25 | } 26 | 27 | output "passwords" { 28 | sensitive = true 29 | value = { 30 | for user, user_login in aws_iam_user_login_profile.users : user => user_login.password 31 | } 32 | } 33 | -------------------------------------------------------------------------------- /proj03-import-lambda/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/archive" { 5 | version = "2.4.2" 6 | constraints = "~> 2.0" 7 | hashes = [ 8 | "h1:1eOz9vM/55vnQjxk23RhnYga7PZq8n2rGxG+2Vx2s6w=", 9 | "zh:08faed7c9f42d82bc3d406d0d9d4971e2d1c2d34eae268ad211b8aca57b7f758", 10 | "zh:3564112ed2d097d7e0672378044a69b06642c326f6f1584d81c7cdd32ebf3a08", 11 | "zh:53cd9afd223c15828c1916e68cb728d2be1cbccb9545568d6c2b122d0bac5102", 12 | "zh:5ae4e41e3a1ce9d40b6458218a85bbde44f21723943982bca4a3b8bb7c103670", 13 | "zh:5b65499218b315b96e95c5d3463ea6d7c66245b59461217c99eaa1611891cd2c", 14 | "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", 15 | "zh:7f45b35a8330bebd184c2545a41782ff58240ed6ba947274d9881dd5da44b02e", 16 | "zh:87e67891033214e55cfead1391d68e6a3bf37993b7607753237e82aa3250bb71", 17 | "zh:de3590d14037ad81fc5cedf7cfa44614a92452d7b39676289b704a962050bc5e", 18 | "zh:e7e6f2ea567f2dbb3baa81c6203be69f9cd6aeeb01204fd93e3cf181e099b610", 19 | "zh:fd24d03c89a7702628c2e5a3c732c0dede56fa75a08da4a1efe17b5f881c88e2", 20 | "zh:febf4b7b5f3ff2adff0573ef6361f09b6638105111644bdebc0e4f575373935f", 21 | ] 22 | } 23 | 24 | provider "registry.terraform.io/hashicorp/aws" { 25 | version = "5.42.0" 26 | constraints = "~> 5.0" 27 | hashes = [ 28 | "h1:Yxsj34z606m8wssYDHyleuBlQ9i+94MHwRs38thQPZU=", 29 | "zh:0fb12bd56a3ad777b29f957c56dd2119776dbc01b6074458f597990e368c82de", 30 | "zh:16e99c13bef6e3777f67c240c916f57c01c9c142254cfb2720e08281ff906447", 31 | "zh:218268f5fe73bcb19e9a996f781ab66df0da9e333d1c60612e3c51ad28a5105f", 32 | "zh:220b17f7053d11548f35136669687d30ef1f1036e15393275325fd2b9654c715", 33 | "zh:2256cfd74988ce05eada76b42efffc6fe2bf4d60b61f57e4db4fcf65ced4c666", 34 | "zh:52da19f531e0cb9828f73bca620e30264e63a494bd7f9ce826aabcf010d3a241", 35 | "zh:56069ce08d720280ba39aaf2fdd40c4357ffb54262c80e4d39c4e540a38e76af", 36 | "zh:82c81398e68324029167f813fbb7c54fa3d233e99fa05001d85cbce8bdd08bb3", 37 | "zh:82d6eaa87f5ab318959064e6c89adc2baabaf70b13f2f7de866f62416de05352", 38 | 
"zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 39 | "zh:ade8490cfdd8de8b9a82986588595b67e0ad1048d9e2d3a6f5164320179c2cd0", 40 | "zh:b094ef56ae9bfffd586f46d4f7fb0097798738df758a8f3c51578ee163495c7e", 41 | "zh:bd5e68e1e454bae0f8d73cff8448e814a35855a561c33b745e1b8b525fb06c9f", 42 | "zh:c111c6a854bf121facca1642d528bfa80fb4214554ac6c33e4a59c86bc605b71", 43 | "zh:e04df69a557adbcdf8efc77eb45be748f0acbe800ccede1e0895393c87722a0f", 44 | ] 45 | } 46 | -------------------------------------------------------------------------------- /proj03-import-lambda/build/index.mjs: -------------------------------------------------------------------------------- 1 | console.log('Loading function'); 2 | 3 | export const handler = async (event, context) => { 4 | return 'Another welcome message from Terraform'; 5 | }; 6 | -------------------------------------------------------------------------------- /proj03-import-lambda/cloudwatch.tf: -------------------------------------------------------------------------------- 1 | import { 2 | to = aws_cloudwatch_log_group.lambda 3 | id = "/aws/lambda/manually-created-lambda" 4 | } 5 | 6 | resource "aws_cloudwatch_log_group" "lambda" { 7 | name = "/aws/lambda/manually-created-lambda" 8 | } 9 | -------------------------------------------------------------------------------- /proj03-import-lambda/iam.tf: -------------------------------------------------------------------------------- 1 | import { 2 | to = aws_iam_role.lambda_execution_role 3 | id = "manually-created-lambda-role-apq5o1ty" 4 | } 5 | 6 | import { 7 | to = aws_iam_policy.lambda_execution 8 | id = "arn:aws:iam::905418111066:policy/service-role/AWSLambdaBasicExecutionRole-8659e564-50c0-4e55-bc87-330c6cf096b2" 9 | } 10 | 11 | data "aws_caller_identity" "current" {} 12 | data "aws_region" "current" {} 13 | 14 | data "aws_iam_policy_document" "assume_lambda_execution_role" { 15 | statement { 16 | effect = "Allow" 17 | 18 | principals { 19 | type = "Service" 20 | 
identifiers = ["lambda.amazonaws.com"] 21 | } 22 | 23 | actions = ["sts:AssumeRole"] 24 | } 25 | } 26 | 27 | data "aws_iam_policy_document" "lambda_execution" { 28 | statement { 29 | effect = "Allow" 30 | resources = ["arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:*"] 31 | actions = ["logs:CreateLogGroup"] 32 | } 33 | 34 | statement { 35 | effect = "Allow" 36 | resources = ["${aws_cloudwatch_log_group.lambda.arn}:*"] 37 | actions = [ 38 | "logs:CreateLogStream", 39 | "logs:PutLogEvents" 40 | ] 41 | } 42 | } 43 | 44 | resource "aws_iam_policy" "lambda_execution" { 45 | name = "AWSLambdaBasicExecutionRole-8659e564-50c0-4e55-bc87-330c6cf096b2" 46 | path = "/service-role/" 47 | policy = data.aws_iam_policy_document.lambda_execution.json 48 | } 49 | 50 | resource "aws_iam_role" "lambda_execution_role" { 51 | assume_role_policy = data.aws_iam_policy_document.assume_lambda_execution_role.json 52 | name = "manually-created-lambda-role-apq5o1ty" 53 | path = "/service-role/" 54 | } 55 | 56 | resource "aws_iam_role_policy_attachment" "lambda_execution" { 57 | role = aws_iam_role.lambda_execution_role.name 58 | policy_arn = aws_iam_policy.lambda_execution.arn 59 | } -------------------------------------------------------------------------------- /proj03-import-lambda/lambda.tf: -------------------------------------------------------------------------------- 1 | import { 2 | to = aws_lambda_function.this 3 | id = "manually-created-lambda" 4 | } 5 | 6 | data "archive_file" "lambda_code" { 7 | type = "zip" 8 | source_file = "${path.root}/build/index.mjs" 9 | output_path = "${path.root}/lambda.zip" 10 | } 11 | 12 | resource "aws_lambda_function" "this" { 13 | description = "A starter AWS Lambda function." 
14 | filename = "lambda.zip" 15 | function_name = "manually-created-lambda" 16 | handler = "index.handler" 17 | role = aws_iam_role.lambda_execution_role.arn 18 | runtime = "nodejs18.x" 19 | source_code_hash = data.archive_file.lambda_code.output_base64sha256 20 | 21 | tags = { 22 | "lambda-console:blueprint" = "hello-world" 23 | } 24 | 25 | logging_config { 26 | log_format = "Text" 27 | log_group = aws_cloudwatch_log_group.lambda.name 28 | } 29 | } 30 | 31 | resource "aws_lambda_function_url" "this" { 32 | function_name = aws_lambda_function.this.function_name 33 | authorization_type = "NONE" 34 | } -------------------------------------------------------------------------------- /proj03-import-lambda/lambda.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lm-academy/terraform-course/c4c620f9d634a4f60756b5f20de4c5077437d1ab/proj03-import-lambda/lambda.zip -------------------------------------------------------------------------------- /proj03-import-lambda/outputs.tf: -------------------------------------------------------------------------------- 1 | output "lambda_url" { 2 | value = aws_lambda_function_url.this.function_url 3 | } -------------------------------------------------------------------------------- /proj03-import-lambda/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | archive = { 9 | source = "hashicorp/archive" 10 | version = "~> 2.0" 11 | } 12 | } 13 | } 14 | 15 | provider "aws" { 16 | region = "eu-west-1" 17 | default_tags { 18 | tags = { 19 | ManagedBy = "Terraform" 20 | Project = "proj03-import-lambda" 21 | } 22 | } 23 | } -------------------------------------------------------------------------------- /proj04-rds-module/.terraform.lock.hcl: 
-------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.46.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:d0Mf33mbbQujZ/JaYkqmH5gZGvP+iEIWf9yBSiOwimE=", 9 | "zh:05ae6180a7f23071435f6e5e59c19af0b6c5da42ee600c6c1568c8660214d548", 10 | "zh:0d878d1565d5e57ce6b34ec5f04b28662044a50c999ec5770c374aa1f1020de2", 11 | "zh:25ef1467af2514d8011c44759307445f7057836ff87dfe4503c3e1c9776d5c1a", 12 | "zh:26c006df6200f0063b827aab05bec94f9f3f77848e82ed72e48a51d1170d1961", 13 | "zh:37cdf4292649a10f12858622826925e18ad4eca354c31f61d02c66895eb91274", 14 | "zh:4315b0433c2fc512666c74e989e2d95240934ef370bea1c690d36cb02d30c4ce", 15 | "zh:75df0b3f631b78aeff1832cc77d99b527c2a5e79d40f7aac40bdc4a66124dac2", 16 | "zh:90693d936c9a556d2bf945de4920ff82052002eb73139bd7164fafd02920f0ef", 17 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 18 | "zh:c9177ad09804c60fd2ed25950570407b6bdcdf0fcc309e1673b584f06a827fae", 19 | "zh:ca8e8db24a4d62d92afd8d3d383b81a08693acac191a2e0a110fb46deeff56a3", 20 | "zh:d5fa3a36e13957d63bfe9bbd6df0426a2422214403aac9f20b60c36f8d9ebec6", 21 | "zh:e4ede44a112296c9cc77b15e439e41ee15c0e8b3a0dec94ae34df5ebba840e8b", 22 | "zh:f2d4de8d8cde69caffede1544ebea74e69fcc4552e1b79ae053519a05c060706", 23 | "zh:fc19e9266b1841d4a3aeefa8a5b5ad6988baed6540f85a373b6c2d0dc1ca5830", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /proj04-rds-module/README.md: -------------------------------------------------------------------------------- 1 | 1. [DONE] Understand the RDS resource and which necessary resources we need 2 | 2. [DONE] Create a module with the standard structure 3 | 3. [DONE] Implement variable validation 4 | 4. 
[DONE] Implement networking validation 5 | 4.1 [DONE] Receive subnet ids and security group ids via variables 6 | 4.2 [DONE] For subnets: 7 | 4.2.1 [DONE] Make sure that they are not in the default VPC 8 | 4.2.2 [DONE] Make sure that they are private: 9 | 4.2.2.1 [DONE] Check whether they are tagged with Access=Private 10 | 4.3 [DONE] For security groups: 11 | 4.3.1 [DONE] Make sure that there are no inbound rules for IP addresses 12 | 5. [DONE] Create the necessary resources and make sure the validation is working 13 | 6. [DONE] Create the RDS instance inside of the module 14 | -------------------------------------------------------------------------------- /proj04-rds-module/modules/rds/networking-validation.tf: -------------------------------------------------------------------------------- 1 | ############################## 2 | # Subnet Validation 3 | ############################## 4 | 5 | data "aws_vpc" "default" { 6 | default = true 7 | } 8 | 9 | data "aws_subnet" "input" { 10 | for_each = toset(var.subnet_ids) 11 | id = each.value 12 | 13 | lifecycle { 14 | postcondition { 15 | condition = self.vpc_id != data.aws_vpc.default.id 16 | error_message = <<-EOT 17 | The following subnet is part of the default VPC: 18 | 19 | Name = ${self.tags.Name} 20 | ID = ${self.id} 21 | 22 | Please do not deploy RDS instances in the default VPC. 23 | EOT 24 | } 25 | 26 | postcondition { 27 | # try() falls back to "" when the Access tag is missing, so the
# condition fails both for missing and for non-"private" values.
# (can(lower(...) == "private") would pass whenever the tag exists,
# regardless of its value.)
condition = try(lower(self.tags.Access), "") == "private" 28 | error_message = <<-EOT 29 | The following subnet is not marked as private: 30 | 31 | Name = ${self.tags.Name} 32 | ID = ${self.id} 33 | 34 | Please ensure that the subnet is properly tagged by adding the following tags: 35 | 1. 
Access = "private" 36 | EOT 37 | } 38 | } 39 | } 40 | 41 | ############################## 42 | # Security Group Validation 43 | ############################## 44 | 45 | data "aws_vpc_security_group_rules" "input" { 46 | filter { 47 | name = "group-id" 48 | values = var.security_group_ids 49 | } 50 | } 51 | 52 | data "aws_vpc_security_group_rule" "input" { 53 | for_each = toset(data.aws_vpc_security_group_rules.input.ids) 54 | security_group_rule_id = each.value 55 | 56 | lifecycle { 57 | postcondition { 58 | condition = ( 59 | self.is_egress 60 | ? true 61 | : self.cidr_ipv4 == null 62 | && self.cidr_ipv6 == null 63 | && self.referenced_security_group_id != null 64 | ) 65 | error_message = <<-EOT 66 | The following security group contains an invalid inbound rule: 67 | 68 | ID = ${self.security_group_id} 69 | 70 | Please ensure that the following conditions are met: 71 | 1. Rules must not allow inbound traffic from IP CIDR blocks, only from other security groups. 72 | EOT 73 | } 74 | } 75 | } -------------------------------------------------------------------------------- /proj04-rds-module/modules/rds/outputs.tf: -------------------------------------------------------------------------------- 1 | output "rds_instance_arn" { 2 | value = aws_db_instance.this.arn 3 | description = "The ARN of the created RDS instance." 4 | } 5 | 6 | output "rds_instance_id" { 7 | value = aws_db_instance.this.id 8 | description = "The ID of the created RDS instance." 9 | } 10 | 11 | output "rds_instance_address" { 12 | value = aws_db_instance.this.address 13 | description = "The hostname of the created RDS instance." 14 | } 15 | 16 | output "rds_instance_port" { 17 | value = aws_db_instance.this.port 18 | description = "The port of the created RDS instance." 19 | } 20 | 21 | output "rds_instance_endpoint" { 22 | value = aws_db_instance.this.endpoint 23 | description = "The endpoint of the created RDS instance in address:port format." 
24 | } -------------------------------------------------------------------------------- /proj04-rds-module/modules/rds/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "~> 5.0" 6 | } 7 | } 8 | } -------------------------------------------------------------------------------- /proj04-rds-module/modules/rds/rds.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | db_engine = { 3 | postgres-latest = { 4 | engine = "postgres" 5 | version = "16.1" 6 | family = "postgres16" 7 | } 8 | postgres-14 = { 9 | engine = "postgres" 10 | version = "14.11" 11 | family = "postgres14" 12 | } 13 | } 14 | } 15 | 16 | resource "aws_db_subnet_group" "this" { 17 | name = var.project_name 18 | subnet_ids = var.subnet_ids 19 | 20 | tags = { 21 | Name = var.project_name 22 | } 23 | } 24 | 25 | resource "aws_db_parameter_group" "this" { 26 | name = var.project_name 27 | family = local.db_engine[var.engine].family 28 | 29 | parameter { 30 | name = "log_connections" 31 | value = "1" 32 | } 33 | } 34 | 35 | resource "aws_db_instance" "this" { 36 | identifier = var.project_name 37 | instance_class = var.instance_class 38 | allocated_storage = var.storage_size 39 | engine = local.db_engine[var.engine].engine 40 | engine_version = local.db_engine[var.engine].version 41 | username = var.credentials.username 42 | password = var.credentials.password 43 | db_subnet_group_name = aws_db_subnet_group.this.name 44 | publicly_accessible = false 45 | skip_final_snapshot = true 46 | } -------------------------------------------------------------------------------- /proj04-rds-module/modules/rds/variables.tf: -------------------------------------------------------------------------------- 1 | ######################## 2 | # General information 3 | ######################## 4 | 5 | variable "project_name" { 6 | type = string 7 
| description = "The project name. Used to name the RDS instance and add relevant tags." 8 | } 9 | 10 | ######################## 11 | # DB configuration 12 | ######################## 13 | 14 | variable "instance_class" { 15 | type = string 16 | default = "db.t3.micro" 17 | description = "The instance class used to create the RDS instance. Requires a free-tier instance class." 18 | 19 | validation { 20 | condition = contains(["db.t3.micro"], var.instance_class) 21 | error_message = "Only db.t3.micro is allowed due to free tier access." 22 | } 23 | } 24 | 25 | variable "storage_size" { 26 | type = number 27 | default = 10 28 | description = "The amount of storage to allocate to the RDS instance. Should be between 5 GB and 10 GB." 29 | 30 | validation { 31 | condition = var.storage_size >= 5 && var.storage_size <= 10 32 | error_message = "DB storage must be between 5 GB and 10 GB." 33 | } 34 | } 35 | 36 | variable "engine" { 37 | type = string 38 | default = "postgres-latest" 39 | description = "Which engine to use for the RDS instance. Currently only postgres is supported." 40 | 41 | validation { 42 | condition = contains(["postgres-latest", "postgres-14"], var.engine) 43 | error_message = "DB engine must be postgres-latest or postgres-14." 44 | } 45 | } 46 | 47 | ######################## 48 | # DB credentials 49 | ######################## 50 | 51 | variable "credentials" { 52 | type = object({ 53 | username = string 54 | password = string 55 | }) 56 | 57 | sensitive = true 58 | description = "The root username and password for the RDS instance creation." 59 | 60 | validation { 61 | condition = ( 62 | length(regexall("[a-zA-Z]+", var.credentials.password)) > 0 63 | && length(regexall("[0-9]+", var.credentials.password)) > 0 64 | && length(regexall("^[a-zA-Z0-9+_?-]{8,}$", var.credentials.password)) > 0 65 | ) 66 | error_message = <<-EOT 67 | Password must comply with the following format: 68 | 69 | 1. Contain at least 1 letter 70 | 2. Contain at least 1 digit 71 | 3. 
Be at least 8 characters long 72 | 4. Contain only the following characters: a-z, A-Z, 0-9, +, _, ?, - 73 | EOT 74 | } 75 | } 76 | 77 | ######################## 78 | # DB network 79 | ######################## 80 | 81 | variable "subnet_ids" { 82 | type = list(string) 83 | description = "Subnet IDs to deploy the RDS instance in." 84 | } 85 | 86 | variable "security_group_ids" { 87 | type = list(string) 88 | description = "Security group IDs to attach to the RDS instance." 89 | } -------------------------------------------------------------------------------- /proj04-rds-module/networking.tf: -------------------------------------------------------------------------------- 1 | ############################## 2 | # VPC & Subnets 3 | ############################## 4 | 5 | data "aws_vpc" "default" { 6 | default = true 7 | } 8 | 9 | resource "aws_vpc" "custom" { 10 | cidr_block = "10.0.0.0/16" 11 | 12 | tags = { 13 | Name = "proj04-custom" 14 | } 15 | } 16 | 17 | moved { 18 | from = aws_subnet.allowed 19 | to = aws_subnet.private1 20 | } 21 | 22 | resource "aws_subnet" "private1" { 23 | vpc_id = aws_vpc.custom.id 24 | cidr_block = "10.0.0.0/24" 25 | availability_zone = "eu-west-1a" 26 | 27 | tags = { 28 | Name = "subnet-custom-vpc-private1" 29 | Access = "private" 30 | } 31 | } 32 | 33 | resource "aws_subnet" "private2" { 34 | vpc_id = aws_vpc.custom.id 35 | cidr_block = "10.0.1.0/24" 36 | availability_zone = "eu-west-1b" 37 | 38 | tags = { 39 | Name = "subnet-custom-vpc-private2" 40 | Access = "private" 41 | } 42 | } 43 | 44 | resource "aws_subnet" "public" { 45 | vpc_id = aws_vpc.custom.id 46 | cidr_block = "10.0.2.0/24" 47 | 48 | tags = { 49 | Name = "subnet-custom-vpc-public1" 50 | } 51 | } 52 | 53 | # For documentation. Not actively used. 
54 | resource "aws_subnet" "not_allowed" { 55 | vpc_id = data.aws_vpc.default.id 56 | cidr_block = "172.31.128.0/24" 57 | 58 | tags = { 59 | Name = "subnet-default-vpc" 60 | } 61 | } 62 | 63 | ############################## 64 | # Security Groups 65 | ############################## 66 | 67 | # 1. Source security group - From where traffic is allowed 68 | # 2. Compliant security group 69 | # 2.1 Security group rule 70 | # 3. Non-compliant security group 71 | # 3.1 Security group rule 72 | 73 | resource "aws_security_group" "source" { 74 | name = "source-sg" 75 | description = "SG from where connections are allowed into the DB" 76 | vpc_id = aws_vpc.custom.id 77 | } 78 | 79 | resource "aws_security_group" "compliant" { 80 | name = "compliant-sg" 81 | description = "Compliant security group" 82 | vpc_id = aws_vpc.custom.id 83 | } 84 | 85 | resource "aws_vpc_security_group_ingress_rule" "db" { 86 | security_group_id = aws_security_group.compliant.id 87 | referenced_security_group_id = aws_security_group.source.id 88 | from_port = 5432 89 | to_port = 5432 90 | ip_protocol = "tcp" 91 | } 92 | 93 | resource "aws_security_group" "non_compliant" { 94 | name = "non-compliant-sg" 95 | description = "Non-compliant security group" 96 | vpc_id = aws_vpc.custom.id 97 | } 98 | 99 | resource "aws_vpc_security_group_ingress_rule" "https" { 100 | security_group_id = aws_security_group.non_compliant.id 101 | cidr_ipv4 = "0.0.0.0/0" 102 | from_port = 443 103 | to_port = 443 104 | ip_protocol = "tcp" 105 | } -------------------------------------------------------------------------------- /proj04-rds-module/outputs.tf: -------------------------------------------------------------------------------- 1 | output "rds_endpoint" { 2 | value = module.database.rds_instance_endpoint 3 | } -------------------------------------------------------------------------------- /proj04-rds-module/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 
2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | } 9 | } 10 | 11 | provider "aws" { 12 | region = "eu-west-1" 13 | } -------------------------------------------------------------------------------- /proj04-rds-module/rds.tf: -------------------------------------------------------------------------------- 1 | module "database" { 2 | source = "./modules/rds" 3 | 4 | project_name = "proj04-rds-module" 5 | security_group_ids = [ 6 | aws_security_group.compliant.id 7 | ] 8 | subnet_ids = [ 9 | aws_subnet.private1.id, 10 | aws_subnet.private2.id 11 | ] 12 | credentials = { 13 | username = "dbadmin" 14 | password = "abc1+?_ahu" 15 | } 16 | } -------------------------------------------------------------------------------- /proj05-tf-cloud-oidc/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.46.0" 6 | constraints = "~> 5.0" 7 | hashes = [ 8 | "h1:d0Mf33mbbQujZ/JaYkqmH5gZGvP+iEIWf9yBSiOwimE=", 9 | "zh:05ae6180a7f23071435f6e5e59c19af0b6c5da42ee600c6c1568c8660214d548", 10 | "zh:0d878d1565d5e57ce6b34ec5f04b28662044a50c999ec5770c374aa1f1020de2", 11 | "zh:25ef1467af2514d8011c44759307445f7057836ff87dfe4503c3e1c9776d5c1a", 12 | "zh:26c006df6200f0063b827aab05bec94f9f3f77848e82ed72e48a51d1170d1961", 13 | "zh:37cdf4292649a10f12858622826925e18ad4eca354c31f61d02c66895eb91274", 14 | "zh:4315b0433c2fc512666c74e989e2d95240934ef370bea1c690d36cb02d30c4ce", 15 | "zh:75df0b3f631b78aeff1832cc77d99b527c2a5e79d40f7aac40bdc4a66124dac2", 16 | "zh:90693d936c9a556d2bf945de4920ff82052002eb73139bd7164fafd02920f0ef", 17 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 18 | "zh:c9177ad09804c60fd2ed25950570407b6bdcdf0fcc309e1673b584f06a827fae", 19 | "zh:ca8e8db24a4d62d92afd8d3d383b81a08693acac191a2e0a110fb46deeff56a3", 20 | "zh:d5fa3a36e13957d63bfe9bbd6df0426a2422214403aac9f20b60c36f8d9ebec6", 21 | "zh:e4ede44a112296c9cc77b15e439e41ee15c0e8b3a0dec94ae34df5ebba840e8b", 22 | "zh:f2d4de8d8cde69caffede1544ebea74e69fcc4552e1b79ae053519a05c060706", 23 | "zh:fc19e9266b1841d4a3aeefa8a5b5ad6988baed6540f85a373b6c2d0dc1ca5830", 24 | ] 25 | } 26 | 27 | provider "registry.terraform.io/hashicorp/tls" { 28 | version = "4.0.5" 29 | constraints = "~> 4.0" 30 | hashes = [ 31 | "h1:zeG5RmggBZW/8JWIVrdaeSJa0OG62uFX5HY1eE8SjzY=", 32 | "zh:01cfb11cb74654c003f6d4e32bbef8f5969ee2856394a96d127da4949c65153e", 33 | "zh:0472ea1574026aa1e8ca82bb6df2c40cd0478e9336b7a8a64e652119a2fa4f32", 34 | "zh:1a8ddba2b1550c5d02003ea5d6cdda2eef6870ece86c5619f33edd699c9dc14b", 35 | "zh:1e3bb505c000adb12cdf60af5b08f0ed68bc3955b0d4d4a126db5ca4d429eb4a", 36 | "zh:6636401b2463c25e03e68a6b786acf91a311c78444b1dc4f97c539f9f78de22a", 37 | "zh:76858f9d8b460e7b2a338c477671d07286b0d287fd2d2e3214030ae8f61dd56e", 38 | 
"zh:a13b69fb43cb8746793b3069c4d897bb18f454290b496f19d03c3387d1c9a2dc", 39 | "zh:a90ca81bb9bb509063b736842250ecff0f886a91baae8de65c8430168001dad9", 40 | "zh:c4de401395936e41234f1956ebadbd2ed9f414e6908f27d578614aaa529870d4", 41 | "zh:c657e121af8fde19964482997f0de2d5173217274f6997e16389e7707ed8ece8", 42 | "zh:d68b07a67fbd604c38ec9733069fbf23441436fecf554de6c75c032f82e1ef19", 43 | "zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c", 44 | ] 45 | } 46 | -------------------------------------------------------------------------------- /proj05-tf-cloud-oidc/oidc.tf: -------------------------------------------------------------------------------- 1 | import { 2 | to = aws_iam_openid_connect_provider.terraform_cloud 3 | id = "arn:aws:iam::905418111066:oidc-provider/app.terraform.io" 4 | } 5 | 6 | import { 7 | to = aws_iam_role.terraform_cloud_admin 8 | id = "terraform-cloud-automation-admin" 9 | } 10 | 11 | import { 12 | to = aws_iam_role_policy_attachment.terraform_cloud_admin 13 | id = "${aws_iam_role.terraform_cloud_admin.name}/${data.aws_iam_policy.admin.arn}" 14 | } 15 | 16 | data "tls_certificate" "terraform_cloud" { 17 | url = "https://${var.terraform_cloud_hostname}" 18 | } 19 | 20 | resource "aws_iam_openid_connect_provider" "terraform_cloud" { 21 | url = data.tls_certificate.terraform_cloud.url 22 | client_id_list = [var.terraform_cloud_audience] 23 | thumbprint_list = [data.tls_certificate.terraform_cloud.certificates[0].sha1_fingerprint] 24 | 25 | tags = { 26 | Name = "Terraform Cloud" 27 | } 28 | } 29 | 30 | data "aws_iam_policy_document" "terraform_cloud_admin_assume_policy" { 31 | statement { 32 | effect = "Allow" 33 | principals { 34 | type = "Federated" 35 | identifiers = [aws_iam_openid_connect_provider.terraform_cloud.arn] 36 | } 37 | actions = ["sts:AssumeRoleWithWebIdentity"] 38 | condition { 39 | test = "StringEquals" 40 | variable = "${var.terraform_cloud_hostname}:aud" 41 | values = [var.terraform_cloud_audience] 42 | } 43 | 44 | 
condition { 45 | test = "StringLike" 46 | variable = "${var.terraform_cloud_hostname}:sub" 47 | values = [for workspace in var.admin_role_workspaces : 48 | "organization:LauroMueller:project:${var.admin_role_project}:workspace:${workspace}:run_phase:*" 49 | ] 50 | } 51 | } 52 | } 53 | 54 | resource "aws_iam_role" "terraform_cloud_admin" { 55 | name = "terraform-cloud-automation-admin" 56 | assume_role_policy = data.aws_iam_policy_document.terraform_cloud_admin_assume_policy.json 57 | } 58 | 59 | data "aws_iam_policy" "admin" { 60 | arn = "arn:aws:iam::aws:policy/AdministratorAccess" 61 | } 62 | 63 | resource "aws_iam_role_policy_attachment" "terraform_cloud_admin" { 64 | role = aws_iam_role.terraform_cloud_admin.name 65 | policy_arn = data.aws_iam_policy.admin.arn 66 | } -------------------------------------------------------------------------------- /proj05-tf-cloud-oidc/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = "~> 1.7" 3 | required_providers { 4 | aws = { 5 | source = "hashicorp/aws" 6 | version = "~> 5.0" 7 | } 8 | tls = { 9 | source = "hashicorp/tls" 10 | version = "~> 4.0" 11 | } 12 | } 13 | 14 | cloud { 15 | organization = "LauroMueller" 16 | 17 | workspaces { 18 | name = "terraform-cli" 19 | } 20 | } 21 | } 22 | 23 | provider "aws" { 24 | region = "eu-west-1" 25 | } -------------------------------------------------------------------------------- /proj05-tf-cloud-oidc/s3.tf: -------------------------------------------------------------------------------- 1 | resource "aws_s3_bucket" "this" { 2 | bucket = "terraform-cloud-iaugfiiazgdf" 3 | } -------------------------------------------------------------------------------- /proj05-tf-cloud-oidc/terraform.tfvars: -------------------------------------------------------------------------------- 1 | admin_role_workspaces = ["terraform-cli", "terraform-cli2"] 2 | admin_role_project = "terraform-oidc" 
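Given the `terraform.tfvars` values above, the `StringLike` condition in `oidc.tf` resolves to one subject pattern per workspace. A hypothetical locals block spelling out the resulting patterns (for illustration only; it is not part of the project configuration, and the strings are simply the output of the `for` expression in `data.aws_iam_policy_document.terraform_cloud_admin_assume_policy`):

```hcl
# Illustration: the sub claim patterns produced for the two workspaces
# listed in terraform.tfvars.
locals {
  example_sub_patterns = [
    "organization:LauroMueller:project:terraform-oidc:workspace:terraform-cli:run_phase:*",
    "organization:LauroMueller:project:terraform-oidc:workspace:terraform-cli2:run_phase:*",
  ]
}
```

A Terraform Cloud run presents a `sub` claim in exactly this shape, so only runs from these two workspaces (in any run phase) can assume the admin role.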
-------------------------------------------------------------------------------- /proj05-tf-cloud-oidc/variables.tf: -------------------------------------------------------------------------------- 1 | variable "terraform_cloud_hostname" { 2 | type = string 3 | default = "app.terraform.io" 4 | description = "Terraform Cloud hostname, without https://" 5 | } 6 | 7 | variable "terraform_cloud_audience" { 8 | type = string 9 | default = "aws.workload.identity" 10 | description = "Terraform Cloud audience used to authenticate into AWS." 11 | } 12 | 13 | variable "admin_role_workspaces" { 14 | type = list(string) 15 | description = "All workspaces that can assume the terraform-cloud-automation-admin role." 16 | } 17 | 18 | variable "admin_role_project" { 19 | type = string 20 | description = "Terraform Cloud project to allow assuming the terraform-cloud-automation-admin role." 21 | } -------------------------------------------------------------------------------- /projects/assets/proj00-vpc-ec2-nginx.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lm-academy/terraform-course/c4c620f9d634a4f60756b5f20de4c5077437d1ab/projects/assets/proj00-vpc-ec2-nginx.png -------------------------------------------------------------------------------- /projects/assets/proj02-iam-users.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lm-academy/terraform-course/c4c620f9d634a4f60756b5f20de4c5077437d1ab/projects/assets/proj02-iam-users.png -------------------------------------------------------------------------------- /projects/proj00-vpc-ec2.md: -------------------------------------------------------------------------------- 1 | # Deploying an NGINX Server in AWS 2 | 3 | In this project, we will be deploying an NGINX server in AWS. We will create a new VPC, set up public and private subnets, and deploy an EC2 instance using the Ubuntu AMI. 
Later, we will replace the instance with an NGINX Bitnami AMI and associate it with a security group. Finally, we will test the website accessibility and tag the resources with project information. Make sure to use Terraform for as many resources as possible, and remember to delete all the resources at the end of the project to avoid unnecessary costs. 4 | 5 | ## Project Overview 6 | 7 | deploy-vpc-ec2-instance 8 | 9 | ## Desired Outcome 10 | 11 | 1. Deploy a new VPC in AWS in the region of your choice. 12 | 2. Within the VPC, deploy a public and a private subnet. 13 | 1. **Hint:** The public subnet should be associated with a custom route table containing a route to an Internet Gateway. 14 | 3. Create a security group that allows traffic only on ports 80 (HTTP) and 443 (HTTPS). 15 | 4. Deploy an EC2 instance in the public subnet using the Ubuntu AMI. 16 | 5. Delete the previous instance and deploy another EC2 instance using the NGINX Bitnami AMI, since it's free of charge. 17 | 6. Associate the deployed NGINX instance with the created security group, and confirm that it's possible to access the website via its public IP. 18 | 1. **Hint:** You will probably get a certificate error if accessing via `https://`. It's safe to ignore the error and proceed to the website. 19 | 7. Tag resources with useful information about your project. 20 | 8. Make sure to delete all the resources at the end of the project! 21 | -------------------------------------------------------------------------------- /projects/proj01-s3-static-website.md: -------------------------------------------------------------------------------- 1 | # Deploying a Static Website with S3 2 | 3 | This project is designed to provide hands-on experience with deploying a static website using Amazon S3 and Terraform. The goal is to create an S3 bucket, host files for the static website, and manage access permissions and policies. 
This project is a great opportunity to learn about AWS S3, Terraform, and the principles of Infrastructure as Code (IaC). Make sure to use Terraform for as many resources as possible, and remember to delete all the resources at the end of the project to avoid unnecessary costs. 4 | 5 | ## Desired Outcome 6 | 7 | 1. Deploy an S3 bucket that will host the files for the static website. 8 | 2. Create an `index.html` file and an `error.html` file containing some dummy text. 9 | 1. **Hint:** Here is a source where you can find a [basic template](https://ryanstutorials.net/html-tutorial/html-template.php). 10 | 3. To start, upload these files manually to the bucket via the AWS console. They should be placed in the root directory of the bucket. Later we will upload these files via Terraform. 11 | 4. Disable the public access block so that others can access the bucket via the internet. 12 | 5. Create a policy that allows the `s3:GetObject` action for anyone and for all objects within the created bucket. 13 | 6. Create an S3 static website configuration, and link it to the existing bucket. 14 | 7. Create an output with the S3 Static Website endpoint. 15 | 8. After testing that you can access the website via the internet, let's now update the files via Terraform. 16 | 1. Delete the files from the S3 bucket. 17 | 2. Use Terraform to create `aws_s3_object` resources and upload the files. 18 | 9. Tag resources with useful information about your project. 19 | 10. Make sure to delete all the resources at the end of the project! 20 | -------------------------------------------------------------------------------- /projects/proj02-iam-users.md: -------------------------------------------------------------------------------- 1 | # Managing IAM Users and Roles with Terraform 2 | 3 | This project is centered around managing AWS Identity and Access Management (IAM) users and roles using Terraform and YAML. 
The primary objective is to automate the process of creating users, assigning roles, and ensuring secure role assignment. User information, including usernames and roles, will be stored in a YAML file, while role information will be managed in Terraform. An important aspect of this project is to ensure that roles can only be assumed by the users assigned to them, adding an extra layer of security. 4 | 5 | ## Project Overview 6 | 7 | manage-iam-users-roles 8 | 9 | ## Desired Outcome 10 | 11 | 1. Store user information (username and their respective roles) in a YAML file. 12 | 2. Store role information (role name and their respective policies) in Terraform. 13 | 1. **Hint:** You can use AWS-Managed policies to make your life easier, but if you want an extra layer of learning and challenge, by all means go ahead and create your own policies! 14 | 3. Based on the provided YAML file, create users in the AWS account. 15 | 4. Also make sure to create login profiles for the users, so that they can log in to the AWS console. 16 | 1. **Hint:** Setting up the necessary elements to send passwords securely via email can be quite cumbersome. Although you should **never do that in real-life, production projects**, for our purposes here feel free to output this information from Terraform. 17 | 5. Based on the role information stored in Terraform, create the respective roles and attach the correct policies to these roles. 18 | 6. Based on the YAML file, link created users to the respective roles they can assume. 19 | 7. Ensure that roles can only be assumed by the users that are assigned to those roles. 20 | 8. Test everything in the AWS console; it's quite fun! 21 | 9. Make sure to delete all the resources at the end of the project! 
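As a rough sketch of steps 1, 3, and 4 above: the YAML file can be parsed with `yamldecode` and fed into `for_each` to create one user per entry. The file name `users.yaml` and its structure are assumptions for illustration only; adapt them to your own layout.

```hcl
# Hypothetical users.yaml layout (an assumption, adjust to your file):
#
# users:
#   - username: alice
#     roles: [readonly]
#   - username: bob
#     roles: [developer, readonly]

locals {
  users_yaml = yamldecode(file("${path.module}/users.yaml"))
  users      = { for user in local.users_yaml.users : user.username => user }
}

# One IAM user per YAML entry.
resource "aws_iam_user" "this" {
  for_each = local.users
  name     = each.key
}

# Console login profiles. Outputting the generated passwords is
# acceptable only for this learning exercise, never in production.
resource "aws_iam_user_login_profile" "this" {
  for_each        = aws_iam_user.this
  user            = each.value.name
  password_length = 16
}
```

Note that without a `pgp_key`, the generated passwords end up in the Terraform state, which is part of why this pattern is exercise-only.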
22 | -------------------------------------------------------------------------------- /projects/proj03-import-lambda.md: -------------------------------------------------------------------------------- 1 | # Importing Lambda Resources into Terraform 2 | 3 | This project focuses on importing existing AWS resources into Terraform. The primary goal is to learn how to use code generation features from Terraform to help us import several infrastructure components. The project will guide you through the process of creating, importing, and managing Lambda functions. It's important to note that all resources created during this project should be deleted at the end to avoid unnecessary costs. 4 | 5 | ## Desired Outcome 6 | 7 | 1. Create a Lambda function manually in the AWS console. 8 | 1. Use the hello-world blueprint. 9 | 2. For the execution role, select the "Create a new role with basic permissions" option. 10 | 2. Identify all the resources that were created when creating the Lambda function. 11 | 3. Import the Lambda function and its code into Terraform. 12 | 1. **Hint:** Lambda function code can be provided via a ZIP file. Terraform offers ZIP file management via the `archive` provider. 13 | 4. Import the other created resources into Terraform. 14 | 5. Use as much configuration generation as possible, always keeping in mind that the generated code should first be refined before being definitively added to the project. 15 | 6. Make sure to delete all the resources at the end of the project! 16 | -------------------------------------------------------------------------------- /projects/proj04-rds-module.md: -------------------------------------------------------------------------------- 1 | # Creating an RDS Module 2 | 3 | This project revolves around creating a reusable AWS RDS (Relational Database Service) module using Terraform. The primary objective is to automate the deployment and configuration of RDS instances, with a focus on security, modularity, and reusability. 
The project will guide you through the process of defining module variables for various RDS configurations such as instance class, allocated storage, engine type, and credentials. It also includes the setup of subnet IDs and security groups for the RDS instance. An important aspect of this project is the addition of relevant validations for each created variable, ensuring optimal configuration and security. 4 | 5 | ## Desired Outcome 6 | 7 | 1. Create a module using the standard module structure under `modules/rds`. 8 | 1. Feel free not to include the `main.tf` file, but make sure to add the files for variables and outputs. 9 | 2. Define module variables to receive: 10 | 1. The project name 11 | 2. The RDS instance class 12 | 3. The RDS allocated storage 13 | 4. The RDS engine 14 | 5. The RDS credentials 15 | 6. The subnet IDs in which to deploy the RDS instance 16 | 7. The security groups to attach to the RDS instance 17 | 3. Add relevant validations for each of the created variables. Here are some examples: 18 | 1. Ensure that the instance class and allocated storage are within the free tier options. 19 | 2. Ensure that the RDS credentials' password contains at least one letter and one digit. 20 | 3. Ensure that the subnets provided via the subnet IDs are not deployed in the default VPC. 21 | 4. Ensure that the subnets provided via the subnet IDs are marked as private. 22 | 5. Ensure that the security groups provided via the security group IDs do not allow inbound traffic from IP CIDR blocks (i.e., they allow inbound traffic only from other security groups). 23 | 4. Outside of the module, create the necessary resources: 24 | 1. Custom VPC and subnets 25 | 2. Security groups and their respective rules 26 | 5. Make sure that the validation is working by creating both compliant and non-compliant resources, and passing them as input variables when calling the module. 27 | 6. Add an `aws_db_instance` resource inside the module and deploy it by running `terraform apply`. 28 | 7. 
Make sure to delete all the resources at the end of the project! 29 | -------------------------------------------------------------------------------- /projects/proj05-tf-cloud-oidc.md: -------------------------------------------------------------------------------- 1 | # Enabling OIDC for AWS Authentication from Terraform Cloud 2 | 3 | This project focuses on enabling OpenID Connect (OIDC) for AWS authentication from Terraform Cloud. The main objective is to create and manage AWS resources, including an Identity Provider of type Web Identity and a role named `terraform-cloud-automation-admin`, and then import them into Terraform. The project will guide you through the process of setting up the Identity Provider with specific URL and Client ID information, connecting the created role to the Identity Provider, and configuring the Trust Relationship of the role to restrict the subject to a single Terraform Cloud project and workspace. 4 | 5 | ## Desired Outcome 6 | 7 | 1. We will start by manually creating some resources in AWS, and then we will import them into Terraform. 8 | 2. Create an Identity Provider in AWS of type Web Identity. Provide the following information: 9 | 1. URL: `https://app.terraform.io` 10 | 2. Client ID / Audience: `aws.workload.identity` 11 | 3. Create a role `terraform-cloud-automation-admin` in AWS and connect it to the created identity provider, so that it can be assumed through that provider. 12 | 4. Configure the Trust Relationship of the role to restrict the subject to a single Terraform Cloud project and workspace. 13 | 1. **Hint:** You can set this in the role by adding the following block under `Condition` in the Trust relationships tab and substituting the values for `TF-CLOUD-ORG`, `TF-CLOUD-PROJ`, and `TF-CLOUD-WORKSPACE`, respectively. 14 | ``` 15 | "StringLike": { 16 | "app.terraform.io:sub": [ 17 | "organization:TF-CLOUD-ORG:project:TF-CLOUD-PROJ:workspace:TF-CLOUD-WORKSPACE:run_phase:*" 18 | ] 19 | } 20 | ``` 21 | 5. 
Attach an admin policy to the created role. 22 | 6. In Terraform Cloud, create a new project and workspace with the same names that you specified above. 23 | 7. In Terraform Cloud, enable OIDC authentication for the created workspace. 24 | 8. Make sure that the workspace can connect to AWS by creating a dummy S3 bucket. 25 | 9. Once everything is working, import the created AWS resources into Terraform. 26 | 10. Extend the setup to allow a second workspace to assume the role. How can we leverage what we have learned so far to create a well-designed, DRY solution? 27 | 11. Make sure to delete all the resources at the end of the project! 28 | --------------------------------------------------------------------------------
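Once imported, the manually created resources from the OIDC project might look roughly like the following in Terraform. This is a sketch under stated assumptions, not the exact configuration `terraform plan -generate-config-out` would produce; the `StringEquals` audience condition and the `TF-CLOUD-*` placeholders mirror the hint in the project description.

```hcl
# Identity provider for Terraform Cloud.
# Depending on the AWS provider version, a thumbprint_list may also be required.
resource "aws_iam_openid_connect_provider" "terraform_cloud" {
  url            = "https://app.terraform.io"
  client_id_list = ["aws.workload.identity"]
}

# Role assumable only via the provider, restricted to a single
# organization, project, and workspace.
resource "aws_iam_role" "terraform_cloud_automation_admin" {
  name = "terraform-cloud-automation-admin"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.terraform_cloud.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "app.terraform.io:aud" = "aws.workload.identity"
        }
        StringLike = {
          "app.terraform.io:sub" = "organization:TF-CLOUD-ORG:project:TF-CLOUD-PROJ:workspace:TF-CLOUD-WORKSPACE:run_phase:*"
        }
      }
    }]
  })
}

# Admin policy attachment.
resource "aws_iam_role_policy_attachment" "admin" {
  role       = aws_iam_role.terraform_cloud_automation_admin.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
```

Allowing a second workspace to assume the role then reduces to turning the `sub` condition into a list of patterns, which is where a variable plus a `for` expression keeps the solution DRY.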