├── Day-1 ├── 01-fundamentals.md ├── 02-getting-started.md ├── 03-install.md ├── 04-aws-connection.md └── PROJECT-ec2-instance-creation │ ├── main.tf │ └── steps.md ├── Day-2 ├── 01-providers.md ├── 02-multiple-providers.md ├── 03-multiple-regions.md ├── 04-required-providers.md ├── 05-variables.md ├── 06-variables-implementation.tf ├── 07-tfvars.md ├── 08-conditional-expressions.md ├── 09-builtin-functions.md └── PROJECT-vpc-with-ec2 │ ├── main.tf │ ├── provider.tf │ ├── userdata.sh │ ├── userdata1.sh │ └── variables.tf ├── Day-3 ├── main.tf ├── modules.md └── modules │ └── ec2_instance │ ├── main.tf │ ├── outputs.tf │ └── variables.tf ├── Day-4 ├── backend.tf ├── main.tf └── statefile_scenarios.md ├── Day-5 ├── app.py ├── main.tf └── provisioners.md ├── Day-6 ├── main.tf ├── modules │ └── ec2_instance │ │ └── main.tf └── terraform.tfvars ├── Day-7 ├── 01-secure-terraform.md ├── 02-vault-integration.md └── main.tf ├── Day-8 └── README.md ├── Images └── codespaces-location.png └── README.md /Day-1/01-fundamentals.md: -------------------------------------------------------------------------------- 1 | # Infrastructure as Code(IaC) 2 | 3 | Before the advent of IaC, infrastructure management was typically a manual and time-consuming process. System administrators and operations teams had to: 4 | 5 | 1. Manually Configure Servers: Servers and other infrastructure components were often set up and configured manually, which could lead to inconsistencies and errors. 6 | 7 | 2. Lack of Version Control: Infrastructure configurations were not typically version-controlled, making it difficult to track changes or revert to previous states. 8 | 9 | 3. Documentation Heavy: Organizations relied heavily on documentation to record the steps and configurations required for different infrastructure setups. This documentation could become outdated quickly. 10 | 11 | 4. 
Limited Automation: Automation was limited to basic scripting, often lacking the robustness and flexibility offered by modern IaC tools. 12 | 13 | 5. Slow Provisioning: Provisioning new resources or environments was a time-consuming process that involved multiple manual steps, leading to delays in project delivery. 14 | 15 | IaC addresses these challenges by providing a systematic, automated, and code-driven approach to infrastructure management. Popular IaC tools include Terraform, AWS CloudFormation, Azure Resource Manager templates, and others. 16 | 17 | These tools enable organizations to define, deploy, and manage their infrastructure efficiently and consistently, making it easier to adapt to the dynamic needs of modern applications and services. 18 | 19 | # Why Terraform? 20 | 21 | There are several reasons why Terraform is often chosen over other IaC tools; the main ones are below. 22 | 23 | 1. **Multi-Cloud Support**: Terraform is known for its multi-cloud support. It allows you to define infrastructure in a cloud-agnostic way, meaning you can use the same configuration code to provision resources on various cloud providers (AWS, Azure, Google Cloud, etc.) and even on-premises infrastructure. This flexibility can be beneficial if your organization uses multiple cloud providers or plans to migrate between them. 24 | 25 | 2. **Large Ecosystem**: Terraform has a vast ecosystem of providers and modules contributed by both HashiCorp (the company behind Terraform) and the community. This means you can find pre-built modules and configurations for a wide range of services and infrastructure components, saving you time and effort in writing custom configurations. 26 | 27 | 3. **Declarative Syntax**: Terraform uses a declarative syntax, allowing you to specify the desired end-state of your infrastructure. This makes it easier to understand and maintain your code compared to imperative scripting languages. 28 | 29 | 4.
**State Management**: Terraform maintains a state file that tracks the current state of your infrastructure. This state file helps Terraform understand the differences between the desired and actual states of your infrastructure, enabling it to make informed decisions when you apply changes. 30 | 31 | 5. **Plan and Apply**: Terraform's "plan" and "apply" workflow allows you to preview changes before applying them. This helps prevent unexpected modifications to your infrastructure and provides an opportunity to review and approve changes before they are implemented. 32 | 33 | 6. **Community Support**: Terraform has a large and active user community, which means you can find answers to common questions, troubleshooting tips, and a wealth of documentation and tutorials online. 34 | 35 | 7. **Integration with Other Tools**: Terraform can be integrated with other DevOps and automation tools, such as Docker, Kubernetes, Ansible, and Jenkins, allowing you to create comprehensive automation pipelines. 36 | 37 | 8. **HCL Language**: Terraform uses HashiCorp Configuration Language (HCL), which is designed specifically for defining infrastructure. It's human-readable and expressive, making it easier for both developers and operators to work with. 38 | -------------------------------------------------------------------------------- /Day-1/02-getting-started.md: -------------------------------------------------------------------------------- 1 | # Getting Started 2 | 3 | To get started with Terraform, it's important to understand some key terminology and concepts. Here are some fundamental terms and explanations. 4 | 5 | 1. **Provider**: A provider is a plugin for Terraform that defines and manages resources for a specific cloud or infrastructure platform. 6 | Examples of providers include AWS, Azure, Google Cloud, and many others. 7 | You configure providers in your Terraform code to interact with the desired infrastructure platform. 8 | 9 | 2. 
**Resource**: A resource is a specific infrastructure component that you want to create and manage using Terraform. Resources can include virtual machines, databases, storage buckets, network components, and more. Each resource has a type and configuration parameters that you define in your Terraform code. 10 | 11 | 3. **Module**: A module is a reusable and encapsulated unit of Terraform code. Modules allow you to package infrastructure configurations, making it easier to maintain, share, and reuse them across different parts of your infrastructure. Modules can be your own creations or come from the Terraform Registry, which hosts community-contributed modules. 12 | 13 | 4. **Configuration File**: Terraform uses configuration files (often with a `.tf` extension) to define the desired infrastructure state. These files specify providers, resources, variables, and other settings. The primary configuration file is usually named `main.tf`, but you can use multiple configuration files as well. 14 | 15 | 5. **Variable**: Variables in Terraform are placeholders for values that can be passed into your configurations. They make your code more flexible and reusable by allowing you to define values outside of your code and pass them in when you apply the Terraform configuration. 16 | 17 | 6. **Output**: Outputs are values generated by Terraform after the infrastructure has been created or updated. Outputs are typically used to display information or provide values to other parts of your infrastructure stack. 18 | 19 | 7. **State File**: Terraform maintains a state file (often named `terraform.tfstate`) that keeps track of the current state of your infrastructure. This file is crucial for Terraform to understand what resources have been created and what changes need to be made during updates. 20 | 21 | 8. **Plan**: A Terraform plan is a preview of changes that Terraform will make to your infrastructure. 
When you run `terraform plan`, Terraform analyzes your configuration and current state, then generates a plan detailing what actions it will take during the `apply` step. 22 | 23 | 9. **Apply**: The `terraform apply` command is used to execute the changes specified in the plan. It creates, updates, or destroys resources based on the Terraform configuration. 24 | 25 | 10. **Workspace**: Workspaces in Terraform are a way to manage multiple environments (e.g., development, staging, production) with separate configurations and state files. Workspaces help keep infrastructure configurations isolated and organized. 26 | 27 | 11. **Remote Backend**: A remote backend is a storage location for your Terraform state files that is not stored locally. Popular choices for remote backends include Amazon S3, Azure Blob Storage, or HashiCorp Terraform Cloud. Remote backends enhance collaboration and provide better security and reliability for your state files. 28 | 29 | These are some of the essential terms you'll encounter when working with Terraform. As you start using Terraform for your infrastructure provisioning and management, you'll become more familiar with these concepts and how they fit together in your IaC workflows. -------------------------------------------------------------------------------- /Day-1/03-install.md: -------------------------------------------------------------------------------- 1 | # Install Terraform 2 | 3 | ## Windows 4 | 5 | 1. Install Terraform from the Downloads [Page](https://developer.hashicorp.com/terraform/downloads) 6 | 7 | (or) 8 | 9 | 2. Use GitHub Codespaces (Free for 60 hours per month) 10 | 11 | - Login to your GitHub account 12 | - Click on the Profile Icon to the top right 13 | - Click on "your codespaces" as shown in the [Image](../Images/codespaces-location.png) 14 | - Codespaces will provide you a virtual machine with Ubuntu and VS Code by default. 
15 | - Follow the steps provided in the Downloads [Page](https://developer.hashicorp.com/terraform/downloads) for Linux. 16 | 17 | ## Linux 18 | 19 | - Follow the steps provided in the Downloads [Page](https://developer.hashicorp.com/terraform/downloads) for Linux. 20 | 21 | ## macOS 22 | 23 | - Follow the steps provided in the Downloads [Page](https://developer.hashicorp.com/terraform/downloads) for macOS. 24 | -------------------------------------------------------------------------------- /Day-1/04-aws-connection.md: -------------------------------------------------------------------------------- 1 | # Setup Terraform for AWS 2 | 3 | To configure AWS credentials and set up Terraform to work with AWS, you'll need to follow these steps: 4 | 5 | 1. **Install AWS CLI (Command Line Interface)**: 6 | 7 | Make sure you have the AWS CLI installed on your machine. You can download and install it from the [AWS CLI download page](https://aws.amazon.com/cli/). 8 | 9 | 2. **Create an AWS IAM User**: 10 | 11 | To interact with AWS programmatically, you should create an IAM (Identity and Access Management) user with appropriate permissions. Here's how to create one: 12 | 13 | a. Log in to the AWS Management Console with an account that has administrative privileges. 14 | 15 | b. Navigate to the IAM service. 16 | 17 | c. Click on "Users" in the left navigation pane and then click "Add user." 18 | 19 | - Choose a username, select "Programmatic access" as the access type, and click "Next: Permissions." 20 | 21 | - Attach policies to this user based on your requirements. At a minimum, you should attach the "AmazonEC2FullAccess" policy for basic EC2 operations. If you need access to other AWS services, attach the relevant policies accordingly. 22 | 23 | - Review the user's configuration and create the user. Be sure to save the Access Key ID and Secret Access Key that are displayed after user creation; you'll need these for Terraform. 24 | 25 | 3. 
**Configure AWS CLI Credentials**: 26 | 27 | Use the AWS CLI to configure your credentials. Open a terminal and run: 28 | 29 | ``` 30 | aws configure 31 | ``` 32 | 33 | It will prompt you to enter your AWS Access Key ID, Secret Access Key, default region, and default output format. Enter the credentials you obtained in the previous step. -------------------------------------------------------------------------------- /Day-1/PROJECT-ec2-instance-creation/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" # Set your desired AWS region 3 | } 4 | 5 | resource "aws_instance" "example" { 6 | ami = "ami-0c55b159cbfafe1f0" # Specify an appropriate AMI ID 7 | instance_type = "t2.micro" 8 | } -------------------------------------------------------------------------------- /Day-1/PROJECT-ec2-instance-creation/steps.md: -------------------------------------------------------------------------------- 1 | # Overview of steps 2 | 3 | Create a directory for your Terraform project and create a Terraform configuration file (usually named `main.tf`) in that directory. In this file, you define the AWS provider and resources you want to create. Here's a basic example: 4 | 5 | ```hcl 6 | provider "aws" { 7 | region = "us-east-1" # Set your desired AWS region 8 | } 9 | 10 | resource "aws_instance" "example" { 11 | ami = "ami-0c55b159cbfafe1f0" # Specify an appropriate AMI ID 12 | instance_type = "t2.micro" 13 | } 14 | ``` 15 | 16 | ## Initialize Terraform 17 | 18 | In your terminal, navigate to the directory containing your Terraform configuration files and run: 19 | 20 | ``` 21 | terraform init 22 | ``` 23 | 24 | This command initializes the Terraform working directory, downloading any necessary provider plugins.
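## Preview the Changes (Optional)

Before applying, you can ask Terraform for a dry run of what it would do:

```
terraform plan
```

This prints the planned actions without modifying any infrastructure, which is a good habit before every `terraform apply`.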
25 | 26 | ## Apply the Configuration 27 | 28 | Run the following command to create the AWS resources defined in your Terraform configuration: 29 | 30 | ``` 31 | terraform apply 32 | ``` 33 | 34 | Terraform will display a plan of the changes it's going to make. Review the plan and type "yes" when prompted to apply it. 35 | 36 | ## Verify Resources 37 | 38 | After Terraform completes the provisioning process, you can verify the resources created in the AWS Management Console or by using AWS CLI commands. 39 | 40 | ## Destroy Resources 41 | 42 | If you want to remove the resources created by Terraform, you can use the following command: 43 | 44 | ``` 45 | terraform destroy 46 | ``` 47 | 48 | Be cautious when using `terraform destroy` as it will delete resources as specified in your Terraform configuration. -------------------------------------------------------------------------------- /Day-2/01-providers.md: -------------------------------------------------------------------------------- 1 | # Providers 2 | 3 | A provider in Terraform is a plugin that enables interaction with an API. 4 | This includes cloud providers, SaaS providers, and other APIs. The providers are specified in the Terraform configuration code. They tell Terraform which services it needs to interact with. 5 | 6 | For example, if you want to use Terraform to create a virtual machine on AWS, you would need to use the aws provider. The aws provider provides a set of resources that Terraform can use to create, manage, and destroy virtual machines on AWS. 7 | 8 | Here is an example of how to use the aws provider in a Terraform configuration: 9 | 10 | ```hcl 11 | provider "aws" { 12 | region = "us-east-1" 13 | } 14 | 15 | resource "aws_instance" "example" { 16 | ami = "ami-0123456789abcdef0" # Change the AMI 17 | instance_type = "t2.micro" 18 | } 19 | ``` 20 | 21 | In this example, we are first defining the aws provider. We are specifying the region as us-east-1. 
Then, we are defining the `aws_instance` resource. We are specifying the `AMI ID` and the `instance type`. 22 | 23 | When Terraform runs, it will first install the aws provider. Then, it will use the aws provider to create the virtual machine. 24 | 25 | Here are some other examples of providers: 26 | 27 | - `azurerm` - for Azure 28 | - `google` - for Google Cloud Platform 29 | - `kubernetes` - for Kubernetes 30 | - `openstack` - for OpenStack 31 | - `vsphere` - for VMware vSphere 32 | 33 | There are many other providers available, and new ones are being added all the time. 34 | 35 | Providers are an essential part of Terraform. They allow Terraform to interact with a wide variety of cloud providers and other APIs. This makes Terraform a very versatile tool that can be used to manage a wide variety of infrastructure. 36 | 37 | 38 | ## Different ways to configure providers in terraform 39 | 40 | There are three main ways to configure providers in Terraform: 41 | 42 | ### In the root module 43 | 44 | This is the most common way to configure providers. The provider configuration block is placed in the root module of the Terraform configuration. This makes the provider configuration available to all the resources in the configuration. 45 | 46 | ```hcl 47 | provider "aws" { 48 | region = "us-east-1" 49 | } 50 | 51 | resource "aws_instance" "example" { 52 | ami = "ami-0123456789abcdef0" 53 | instance_type = "t2.micro" 54 | } 55 | ``` 56 | 57 | ### In a child module 58 | 59 | You can also configure providers in a child module. This is useful if you want to reuse the same provider configuration in multiple resources. 
60 | 61 | ```hcl 62 | module "aws_vpc" { 63 | source = "./aws_vpc" 64 | providers = { 65 | aws = aws.us-west-2 66 | } 67 | } 68 | 69 | resource "aws_instance" "example" { 70 | ami = "ami-0123456789abcdef0" 71 | instance_type = "t2.micro" 72 | depends_on = [module.aws_vpc] 73 | } 74 | ``` 75 | 76 | ### In the required_providers block 77 | 78 | You can also configure providers in the required_providers block. This is useful if you want to make sure that a specific provider version is used. 79 | 80 | ```hcl 81 | terraform { 82 | required_providers { 83 | aws = { 84 | source = "hashicorp/aws" 85 | version = "~> 3.79" 86 | } 87 | } 88 | } 89 | 90 | resource "aws_instance" "example" { 91 | ami = "ami-0123456789abcdef0" 92 | instance_type = "t2.micro" 93 | } 94 | ``` 95 | 96 | The best way to configure providers depends on your specific needs. If you are only using a single provider, then configuring it in the root module is the simplest option. If you are using multiple providers, or if you want to reuse the same provider configuration in multiple resources, then configuring it in a child module is a good option. And if you want to make sure that a specific provider version is used, then configuring it in the required_providers block is the best option. 97 | -------------------------------------------------------------------------------- /Day-2/02-multiple-providers.md: -------------------------------------------------------------------------------- 1 | # Multiple Providers 2 | 3 | You can use multiple providers in one single terraform project. For example, 4 | 5 | 6 | 1. Create a providers.tf file in the root directory of your Terraform project. 7 | 2. In the providers.tf file, define the AWS and Azure providers. 
For example: 8 | 9 | 10 | ``` 11 | provider "aws" { 12 | region = "us-east-1" 13 | } 14 | 15 | provider "azurerm" { 16 | subscription_id = "your-azure-subscription-id" 17 | client_id = "your-azure-client-id" 18 | client_secret = "your-azure-client-secret" 19 | tenant_id = "your-azure-tenant-id" 20 | } 21 | ``` 22 | 23 | 3. In your other Terraform configuration files, you can then use the aws and azurerm providers to create resources in AWS and Azure, respectively, 24 | 25 | ``` 26 | resource "aws_instance" "example" { 27 | ami = "ami-0123456789abcdef0" 28 | instance_type = "t2.micro" 29 | } 30 | 31 | resource "azurerm_virtual_machine" "example" { 32 | name = "example-vm" 33 | location = "eastus" 34 | size = "Standard_A1" 35 | } 36 | ``` 37 | 38 | -------------------------------------------------------------------------------- /Day-2/03-multiple-regions.md: -------------------------------------------------------------------------------- 1 | # Multiple Region Implementation in Terraform 2 | 3 | You can make use of `alias` keyword to implement multi region infrastructure setup in 4 | terraform. 5 | 6 | ``` 7 | provider "aws" { 8 | alias = "us-east-1" 9 | region = "us-east-1" 10 | } 11 | 12 | provider "aws" { 13 | alias = "us-west-2" 14 | region = "us-west-2" 15 | } 16 | 17 | resource "aws_instance" "example" { 18 | ami = "ami-0123456789abcdef0" 19 | instance_type = "t2.micro" 20 | provider = "aws.us-east-1" 21 | } 22 | 23 | resource "aws_instance" "example2" { 24 | ami = "ami-0123456789abcdef0" 25 | instance_type = "t2.micro" 26 | provider = "aws.us-west-2" 27 | } 28 | ``` -------------------------------------------------------------------------------- /Day-2/04-required-providers.md: -------------------------------------------------------------------------------- 1 | # Provider Configuration 2 | 3 | The required_providers block in Terraform is used to declare and specify the required provider configurations for your Terraform module or configuration. 
It allows you to specify the provider name, source, and version constraints. 4 | 5 | ``` 6 | terraform { 7 | required_providers { 8 | aws = { 9 | source = "hashicorp/aws" 10 | version = "~> 3.0" 11 | } 12 | azurerm = { 13 | source = "hashicorp/azurerm" 14 | version = ">= 2.0, < 3.0" 15 | } 16 | } 17 | } 18 | ``` -------------------------------------------------------------------------------- /Day-2/05-variables.md: -------------------------------------------------------------------------------- 1 | # Variables 2 | 3 | Input and output variables in Terraform are essential for parameterizing and sharing values within your Terraform configurations and modules. They allow you to make your configurations more dynamic, reusable, and flexible. 4 | 5 | ## Input Variables 6 | 7 | Input variables are used to parameterize your Terraform configurations. They allow you to pass values into your modules or configurations from the outside. Input variables can be defined within a module or at the root level of your configuration. Here's how you define an input variable: 8 | 9 | ```hcl 10 | variable "example_var" { 11 | description = "An example input variable" 12 | type = string 13 | default = "default_value" 14 | } 15 | ``` 16 | 17 | In this example: 18 | 19 | - `variable` is used to declare an input variable named `example_var`. 20 | - `description` provides a human-readable description of the variable. 21 | - `type` specifies the data type of the variable (e.g., `string`, `number`, `list`, `map`, etc.). 22 | - `default` provides a default value for the variable, which is optional. 23 | 24 | You can then use the input variable within your module or configuration like this: 25 | 26 | ```hcl 27 | resource "example_resource" "example" { 28 | name = var.example_var 29 | # other resource configurations 30 | } 31 | ``` 32 | 33 | You reference the input variable using `var.example_var`. 
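Types beyond `string` work the same way. Here is a quick sketch with hypothetical variable names (they are illustrative, not part of this repository's code):

```hcl
# Illustrative only: these variable names are made up for the example
variable "instance_count" {
  description = "How many instances to create"
  type        = number
  default     = 2
}

variable "allowed_cidrs" {
  description = "CIDR blocks allowed to reach the instances"
  type        = list(string)
  default     = ["10.0.0.0/16"]
}
```

These would be referenced as `var.instance_count` and `var.allowed_cidrs`, exactly like the string example above.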
34 | 35 | ## Output Variables 36 | 37 | Output variables allow you to expose values from your module or configuration, making them available for use in other parts of your Terraform setup. Here's how you define an output variable: 38 | 39 | ```hcl 40 | output "example_output" { 41 | description = "An example output variable" 42 | value = resource.example_resource.example.id 43 | } 44 | ``` 45 | 46 | In this example: 47 | 48 | - `output` is used to declare an output variable named `example_output`. 49 | - `description` provides a description of the output variable. 50 | - `value` specifies the value that you want to expose as an output variable. This value can be a resource attribute, a computed value, or any other expression. 51 | 52 | You can reference output variables in the root module or in other modules by using the syntax `module.module_name.output_name`, where `module_name` is the name of the module containing the output variable. 53 | 54 | For example, if you have an output variable named `example_output` in a module called `example_module`, you can access it in the root module like this: 55 | 56 | ```hcl 57 | output "root_output" { 58 | value = module.example_module.example_output 59 | } 60 | ``` 61 | 62 | This allows you to share data and values between different parts of your Terraform configuration and create more modular and maintainable infrastructure-as-code setups. 
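Once `terraform apply` has run, output values can also be inspected from the command line:

```
terraform output example_output
```

This prints the current value of the named output as recorded in the state file.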
-------------------------------------------------------------------------------- /Day-2/06-variables-implementation.tf: -------------------------------------------------------------------------------- 1 | # Variables Demo 2 | 3 | ```hcl 4 | 5 | # Define an input variable for the EC2 instance type 6 | variable "instance_type" { 7 | description = "EC2 instance type" 8 | type = string 9 | default = "t2.micro" 10 | } 11 | 12 | # Define an input variable for the EC2 instance AMI ID 13 | variable "ami_id" { 14 | description = "EC2 AMI ID" 15 | type = string 16 | } 17 | 18 | # Configure the AWS provider using the input variables 19 | provider "aws" { 20 | region = "us-east-1" 21 | } 22 | 23 | # Create an EC2 instance using the input variables 24 | resource "aws_instance" "example_instance" { 25 | ami = var.ami_id 26 | instance_type = var.instance_type 27 | } 28 | 29 | # Define an output variable to expose the public IP address of the EC2 instance 30 | output "public_ip" { 31 | description = "Public IP address of the EC2 instance" 32 | value = aws_instance.example_instance.public_ip 33 | } 34 | 35 | ``` -------------------------------------------------------------------------------- /Day-2/07-tfvars.md: -------------------------------------------------------------------------------- 1 | # Terraform tfvars 2 | 3 | In Terraform, `.tfvars` files (typically with a `.tfvars` extension) are used to set specific values for input variables defined in your Terraform configuration. 4 | 5 | They allow you to separate configuration values from your Terraform code, making it easier to manage different configurations for different environments (e.g., development, staging, production) or to store sensitive information without exposing it in your code. 6 | 7 | Here's the purpose of `.tfvars` files: 8 | 9 | 1. **Separation of Configuration from Code**: Input variables in Terraform are meant to be configurable so that you can use the same code with different sets of values. 
Instead of hardcoding these values directly into your `.tf` files, you use `.tfvars` files to keep the configuration separate. This makes it easier to maintain and manage configurations for different environments. 10 | 11 | 2. **Sensitive Information**: `.tfvars` files are a common place to store sensitive information like API keys, access credentials, or secrets. These sensitive values can be kept outside the version control system, enhancing security and preventing accidental exposure of secrets in your codebase. 12 | 13 | 3. **Reusability**: By keeping configuration values in separate `.tfvars` files, you can reuse the same Terraform code with different sets of variables. This is useful for creating infrastructure for different projects or environments using a single set of Terraform modules. 14 | 15 | 4. **Collaboration**: When working in a team, each team member can have their own `.tfvars` file to set values specific to their environment or workflow. This avoids conflicts in the codebase when multiple people are working on the same Terraform project. 16 | 17 | ## Summary 18 | 19 | Here's how you typically use `.tfvars` files 20 | 21 | 1. Define your input variables in your Terraform code (e.g., in a `variables.tf` file). 22 | 23 | 2. Create one or more `.tfvars` files, each containing specific values for those input variables. 24 | 25 | 3. When running Terraform commands (e.g., terraform apply, terraform plan), you can specify which .tfvars file(s) to use with the -var-file option: 26 | 27 | ``` 28 | terraform apply -var-file=dev.tfvars 29 | ``` 30 | 31 | By using `.tfvars` files, you can keep your Terraform code more generic and flexible while tailoring configurations to different scenarios and environments. 
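As a concrete sketch, a `dev.tfvars` file for the `instance_type` and `ami_id` variables declared earlier might look like this (the AMI ID is a placeholder, not a real image):

```hcl
# dev.tfvars -- environment-specific values for declared input variables
instance_type = "t2.micro"
ami_id        = "ami-0123456789abcdef0" # placeholder; use a valid AMI for your region
```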
-------------------------------------------------------------------------------- /Day-2/08-conditional-expressions.md: -------------------------------------------------------------------------------- 1 | # Conditional Expressions 2 | 3 | Conditional expressions in Terraform are used to define conditional logic within your configurations. They allow you to make decisions or set values based on conditions. Conditional expressions are typically used to control whether resources are created or configured based on the evaluation of a condition. 4 | 5 | The syntax for a conditional expression in Terraform is: 6 | 7 | ```hcl 8 | condition ? true_val : false_val 9 | ``` 10 | 11 | - `condition` is an expression that evaluates to either `true` or `false`. 12 | - `true_val` is the value that is returned if the condition is `true`. 13 | - `false_val` is the value that is returned if the condition is `false`. 14 | 15 | Here are some common use cases and examples of how to use conditional expressions in Terraform: 16 | 17 | ## Conditional Resource Creation Example 18 | 19 | ```hcl 20 | resource "aws_instance" "example" { 21 | count = var.create_instance ? 1 : 0 22 | 23 | ami = "ami-XXXXXXXXXXXXXXXXX" 24 | instance_type = "t2.micro" 25 | } 26 | ``` 27 | 28 | In this example, the `count` attribute of the `aws_instance` resource uses a conditional expression. If the `create_instance` variable is `true`, it creates one EC2 instance. If `create_instance` is `false`, it creates zero instances, effectively skipping resource creation. 
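The `create_instance` flag used above would be declared as a boolean input variable, for example:

```hcl
variable "create_instance" {
  description = "Whether to create the EC2 instance"
  type        = bool
  default     = true
}
```

Setting it to `false` (for example via a `.tfvars` file) removes the instance on the next apply.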
29 | 30 | ## Conditional Variable Assignment Example 31 | 32 | ```hcl 33 | variable "environment" { 34 | description = "Environment type" 35 | type = string 36 | default = "development" 37 | } 38 | 39 | variable "production_subnet_cidr" { 40 | description = "CIDR block for production subnet" 41 | type = string 42 | default = "10.0.1.0/24" 43 | } 44 | 45 | variable "development_subnet_cidr" { 46 | description = "CIDR block for development subnet" 47 | type = string 48 | default = "10.0.2.0/24" 49 | } 50 | 51 | resource "aws_security_group" "example" { 52 | name = "example-sg" 53 | description = "Example security group" 54 | 55 | ingress { 56 | from_port = 22 57 | to_port = 22 58 | protocol = "tcp" 59 | cidr_blocks = var.environment == "production" ? [var.production_subnet_cidr] : [var.development_subnet_cidr] 60 | } 61 | } 62 | 63 | ``` 64 | 65 | In this example, the `cidr_blocks` argument of the `ingress` block uses a conditional expression based on the value of the `environment` variable. If `environment` is set to `"production"`, the rule uses the `production_subnet_cidr` variable; otherwise, it uses the `development_subnet_cidr` variable. 66 | 67 | ## Conditional Resource Configuration 68 | 69 | ```hcl 70 | resource "aws_security_group" "example" { 71 | name = "example-sg" 72 | description = "Example security group" 73 | 74 | ingress { 75 | from_port = 22 76 | to_port = 22 77 | protocol = "tcp" 78 | cidr_blocks = var.enable_ssh ? ["0.0.0.0/0"] : [] 79 | } 80 | } 81 | ``` 82 | 83 | In this example, the `ingress` block within the `aws_security_group` resource uses a conditional expression to control whether SSH access is allowed. If `enable_ssh` is `true`, it allows SSH traffic from any source (`"0.0.0.0/0"`); otherwise, it allows no inbound traffic. 84 | 85 | Conditional expressions in Terraform provide a powerful way to make decisions and customize your infrastructure deployments based on various conditions and variables.
They enhance the flexibility and reusability of your Terraform configurations. -------------------------------------------------------------------------------- /Day-2/09-builtin-functions.md: -------------------------------------------------------------------------------- 1 | # Built-in Functions 2 | 3 | Terraform is an infrastructure as code (IaC) tool that allows you to define and provision infrastructure resources in a declarative manner. Terraform provides a wide range of built-in functions that you can use within your configuration files (usually written in HashiCorp Configuration Language, or HCL) to manipulate and transform data. These functions help you perform various tasks when defining your infrastructure. Here are some commonly used built-in functions in Terraform: 4 | 5 | 1. `concat(list1, list2, ...)`: Combines multiple lists into a single list. 6 | 7 | ```hcl 8 | variable "list1" { 9 | type = list 10 | default = ["a", "b"] 11 | } 12 | 13 | variable "list2" { 14 | type = list 15 | default = ["c", "d"] 16 | } 17 | 18 | output "combined_list" { 19 | value = concat(var.list1, var.list2) 20 | } 21 | ``` 22 | 23 | 2. `element(list, index)`: Returns the element at the specified (zero-based) index in a list. 24 | 25 | ```hcl 26 | variable "my_list" { 27 | type = list 28 | default = ["apple", "banana", "cherry"] 29 | } 30 | 31 | output "selected_element" { 32 | value = element(var.my_list, 1) # Returns "banana" 33 | } 34 | ``` 35 | 36 | 3. `length(list)`: Returns the number of elements in a list. 37 | 38 | ```hcl 39 | variable "my_list" { 40 | type = list 41 | default = ["apple", "banana", "cherry"] 42 | } 43 | 44 | output "list_length" { 45 | value = length(var.my_list) # Returns 3 46 | } 47 | ``` 48 | 49 | 4. `zipmap(keys, values)`: Creates a map from a list of keys and a list of values. (The older `map(...)` function was removed in Terraform 0.12; use `zipmap` or a map literal instead.) 50 | 51 | ```hcl 52 | variable "keys" { 53 | type = list 54 | default = ["name", "age"] 55 | } 56 | 57 | variable "values" { 58 | type = list 59 | default = ["Alice", 30] 60 | } 61 | 62 | output "my_map" { 63 | value = zipmap(var.keys, var.values) # Returns {"name" = "Alice", "age" = 30} 64 | } 65 | ``` 66 | 67 | 5. `lookup(map, key, default)`: Retrieves the value associated with a specific key in a map, returning the default when the key is not present. 68 | 69 | ```hcl 70 | variable "my_map" { 71 | type = map(string) 72 | default = {"name" = "Alice", "age" = "30"} 73 | } 74 | 75 | output "value" { 76 | value = lookup(var.my_map, "name", "unknown") # Returns "Alice" 77 | } 78 | ``` 79 | 80 | 6. `join(separator, list)`: Joins the elements of a list into a single string using the specified separator. 81 | 82 | ```hcl 83 | variable "my_list" { 84 | type = list 85 | default = ["apple", "banana", "cherry"] 86 | } 87 | 88 | output "joined_string" { 89 | value = join(", ", var.my_list) # Returns "apple, banana, cherry" 90 | } 91 | ``` 92 | 93 | These are just a few examples of the built-in functions available in Terraform. 
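Several of these functions compose naturally. A small self-contained sketch (the names are illustrative) that exercises `concat`, `length`, `join`, `element`, and `zipmap` together; the same expressions can also be explored interactively with `terraform console`:

```hcl
# Illustrative sketch combining several built-in functions.
locals {
  fruits = ["apple", "banana"]
  more   = ["cherry"]
  all    = concat(local.fruits, local.more) # ["apple", "banana", "cherry"]
}

output "summary" {
  # join/length/element composed into one string
  value = "${length(local.all)} fruits: ${join(", ", local.all)} (first: ${element(local.all, 0)})"
}

output "as_map" {
  # zipmap pairs keys with values positionally
  value = zipmap(["a", "b", "c"], local.all)
}
```

Running `terraform apply` on just this file (no providers required) prints both outputs.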
You can find more functions and detailed documentation in the official Terraform documentation, which is regularly updated to include new features and improvements. -------------------------------------------------------------------------------- /Day-2/PROJECT-vpc-with-ec2/main.tf: -------------------------------------------------------------------------------- 1 | resource "aws_vpc" "myvpc" { 2 | cidr_block = var.cidr 3 | } 4 | 5 | resource "aws_subnet" "sub1" { 6 | vpc_id = aws_vpc.myvpc.id 7 | cidr_block = "10.0.0.0/24" 8 | availability_zone = "us-east-1a" 9 | map_public_ip_on_launch = true 10 | } 11 | 12 | resource "aws_subnet" "sub2" { 13 | vpc_id = aws_vpc.myvpc.id 14 | cidr_block = "10.0.1.0/24" 15 | availability_zone = "us-east-1b" 16 | map_public_ip_on_launch = true 17 | } 18 | 19 | resource "aws_internet_gateway" "igw" { 20 | vpc_id = aws_vpc.myvpc.id 21 | } 22 | 23 | resource "aws_route_table" "RT" { 24 | vpc_id = aws_vpc.myvpc.id 25 | 26 | route { 27 | cidr_block = "0.0.0.0/0" 28 | gateway_id = aws_internet_gateway.igw.id 29 | } 30 | } 31 | 32 | resource "aws_route_table_association" "rta1" { 33 | subnet_id = aws_subnet.sub1.id 34 | route_table_id = aws_route_table.RT.id 35 | } 36 | 37 | resource "aws_route_table_association" "rta2" { 38 | subnet_id = aws_subnet.sub2.id 39 | route_table_id = aws_route_table.RT.id 40 | } 41 | 42 | resource "aws_security_group" "webSg" { 43 | name = "web" 44 | vpc_id = aws_vpc.myvpc.id 45 | 46 | ingress { 47 | description = "HTTP from VPC" 48 | from_port = 80 49 | to_port = 80 50 | protocol = "tcp" 51 | cidr_blocks = ["0.0.0.0/0"] 52 | } 53 | ingress { 54 | description = "SSH" 55 | from_port = 22 56 | to_port = 22 57 | protocol = "tcp" 58 | cidr_blocks = ["0.0.0.0/0"] 59 | } 60 | 61 | egress { 62 | from_port = 0 63 | to_port = 0 64 | protocol = "-1" 65 | cidr_blocks = ["0.0.0.0/0"] 66 | } 67 | 68 | tags = { 69 | Name = "Web-sg" 70 | } 71 | } 72 | 73 | resource "aws_s3_bucket" "example" { 74 | bucket = 
"abhisheksterraform2023project" 75 | } 76 | 77 | 78 | resource "aws_instance" "webserver1" { 79 | ami = "ami-0261755bbcb8c4a84" 80 | instance_type = "t2.micro" 81 | vpc_security_group_ids = [aws_security_group.webSg.id] 82 | subnet_id = aws_subnet.sub1.id 83 | user_data = base64encode(file("userdata.sh")) 84 | } 85 | 86 | resource "aws_instance" "webserver2" { 87 | ami = "ami-0261755bbcb8c4a84" 88 | instance_type = "t2.micro" 89 | vpc_security_group_ids = [aws_security_group.webSg.id] 90 | subnet_id = aws_subnet.sub2.id 91 | user_data = base64encode(file("userdata1.sh")) 92 | } 93 | 94 | #create alb 95 | resource "aws_lb" "myalb" { 96 | name = "myalb" 97 | internal = false 98 | load_balancer_type = "application" 99 | 100 | security_groups = [aws_security_group.webSg.id] 101 | subnets = [aws_subnet.sub1.id, aws_subnet.sub2.id] 102 | 103 | tags = { 104 | Name = "web" 105 | } 106 | } 107 | 108 | resource "aws_lb_target_group" "tg" { 109 | name = "myTG" 110 | port = 80 111 | protocol = "HTTP" 112 | vpc_id = aws_vpc.myvpc.id 113 | 114 | health_check { 115 | path = "/" 116 | port = "traffic-port" 117 | } 118 | } 119 | 120 | resource "aws_lb_target_group_attachment" "attach1" { 121 | target_group_arn = aws_lb_target_group.tg.arn 122 | target_id = aws_instance.webserver1.id 123 | port = 80 124 | } 125 | 126 | resource "aws_lb_target_group_attachment" "attach2" { 127 | target_group_arn = aws_lb_target_group.tg.arn 128 | target_id = aws_instance.webserver2.id 129 | port = 80 130 | } 131 | 132 | resource "aws_lb_listener" "listener" { 133 | load_balancer_arn = aws_lb.myalb.arn 134 | port = 80 135 | protocol = "HTTP" 136 | 137 | default_action { 138 | target_group_arn = aws_lb_target_group.tg.arn 139 | type = "forward" 140 | } 141 | } 142 | 143 | output "loadbalancerdns" { 144 | value = aws_lb.myalb.dns_name 145 | } -------------------------------------------------------------------------------- /Day-2/PROJECT-vpc-with-ec2/provider.tf: 
-------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "5.11.0" 6 | } 7 | } 8 | } 9 | 10 | provider "aws" { 11 | # Configuration options 12 | region = "us-east-1" 13 | } -------------------------------------------------------------------------------- /Day-2/PROJECT-vpc-with-ec2/userdata.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | apt update 3 | apt install -y apache2 4 | 5 | # Get the instance ID using the instance metadata 6 | INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) 7 | 8 | # Install the AWS CLI 9 | apt install -y awscli 10 | 11 | # Download the images from S3 bucket 12 | #aws s3 cp s3://myterraformprojectbucket2023/project.webp /var/www/html/project.png --acl public-read 13 | 14 | # Create a simple HTML file with the portfolio content and display the images 15 | cat <<EOF > /var/www/html/index.html 16 | <!DOCTYPE html> 17 | <html> 18 | <head> 19 | <title>My Portfolio</title> 20 | <style> 31 | </style> 32 | </head> 33 | <body>

<h1>Terraform Project Server 1</h1> 34 | <h2>Instance ID: $INSTANCE_ID</h2> 35 | <p>Welcome to Abhishek Veeramalla's Channel</p>

36 | </body> 37 | </html> 38 | 39 | EOF 40 | 41 | # Start Apache and enable it on boot 42 | systemctl start apache2 43 | systemctl enable apache2 -------------------------------------------------------------------------------- /Day-2/PROJECT-vpc-with-ec2/userdata1.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | apt update 3 | apt install -y apache2 4 | 5 | # Get the instance ID using the instance metadata 6 | INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) 7 | 8 | # Install the AWS CLI 9 | apt install -y awscli 10 | 11 | # Download the images from S3 bucket 12 | #aws s3 cp s3://myterraformprojectbucket2023/project.webp /var/www/html/project.png --acl public-read 13 | 14 | # Create a simple HTML file with the portfolio content and display the images 15 | cat <<EOF > /var/www/html/index.html 16 | <!DOCTYPE html> 17 | <html> 18 | <head> 19 | <title>My Portfolio</title> 20 | <style> 31 | </style> 32 | </head> 33 | <body>

<h1>Terraform Project Server 1</h1> 34 | <h2>Instance ID: $INSTANCE_ID</h2> 35 | <p>Welcome to CloudChamp's Channel</p>

36 | </body> 37 | </html> 38 | 39 | EOF 40 | 41 | # Start Apache and enable it on boot 42 | systemctl start apache2 43 | systemctl enable apache2 -------------------------------------------------------------------------------- /Day-2/PROJECT-vpc-with-ec2/variables.tf: -------------------------------------------------------------------------------- 1 | variable "cidr" { 2 | default = "10.0.0.0/16" 3 | } -------------------------------------------------------------------------------- /Day-3/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | 5 | module "ec2_instance" { 6 | source = "./modules/ec2_instance" 7 | ami_value = "ami-053b0d53c279acc90" # replace this 8 | instance_type_value = "t2.micro" 9 | subnet_id_value = "subnet-019ea91ed9b5252e7" # replace this 10 | } -------------------------------------------------------------------------------- /Day-3/modules.md: -------------------------------------------------------------------------------- 1 | # Modules 2 | 3 | The advantage of using Terraform modules in your infrastructure as code (IaC) projects lies in improved organization, reusability, and maintainability. Here are the key benefits: 4 | 5 | 1. **Modularity**: Terraform modules allow you to break down your infrastructure configuration into smaller, self-contained components. This modularity makes it easier to manage and reason about your infrastructure because each module handles a specific piece of functionality, such as an EC2 instance, a database, or a network configuration. 6 | 7 | 2. **Reusability**: With modules, you can create reusable templates for common infrastructure components. Instead of rewriting similar configurations for multiple projects, you can reuse modules across different Terraform projects. This reduces duplication and promotes consistency in your infrastructure. 8 | 9 | 3. 
**Simplified Collaboration**: Modules make it easier for teams to collaborate on infrastructure projects. Different team members can work on separate modules independently, and then these modules can be combined to build complex infrastructure deployments. This division of labor can streamline development and reduce conflicts in the codebase. 10 | 11 | 4. **Versioning and Maintenance**: Modules can have their own versioning, making it easier to manage updates and changes. When you update a module, you can increment its version, and other projects using that module can choose when to adopt the new version, helping to prevent unexpected changes in existing deployments. 12 | 13 | 5. **Abstraction**: Modules can abstract away the complexity of underlying resources. For example, an EC2 instance module can hide the details of security groups, subnets, and other configurations, allowing users to focus on high-level parameters like instance type and image ID. 14 | 15 | 6. **Testing and Validation**: Modules can be individually tested and validated, ensuring that they work correctly before being used in multiple projects. This reduces the risk of errors propagating across your infrastructure. 16 | 17 | 7. **Documentation**: Modules promote self-documentation. When you define variables, outputs, and resource dependencies within a module, it becomes clear how the module should be used, making it easier for others (or your future self) to understand and work with. 18 | 19 | 8. **Scalability**: As your infrastructure grows, modules provide a scalable approach to managing complexity. You can continue to create new modules for different components of your architecture, maintaining a clean and organized codebase. 20 | 21 | 9. **Security and Compliance**: Modules can encapsulate security and compliance best practices. 
For instance, you can create a module for launching EC2 instances with predefined security groups, IAM roles, and other security-related configurations, ensuring consistency and compliance across your deployments. 22 | -------------------------------------------------------------------------------- /Day-3/modules/ec2_instance/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | 5 | resource "aws_instance" "example" { 6 | ami = var.ami_value 7 | instance_type = var.instance_type_value 8 | subnet_id = var.subnet_id_value 9 | } -------------------------------------------------------------------------------- /Day-3/modules/ec2_instance/outputs.tf: -------------------------------------------------------------------------------- 1 | output "public-ip-address" { 2 | value = aws_instance.example.public_ip 3 | } -------------------------------------------------------------------------------- /Day-3/modules/ec2_instance/variables.tf: -------------------------------------------------------------------------------- 1 | variable "ami_value" { 2 | description = "value for the ami" 3 | } 4 | 5 | variable "instance_type_value" { 6 | description = "value for instance_type" 7 | } 8 | 9 | variable "subnet_id_value" { 10 | description = "value for the subnet_id" 11 | } -------------------------------------------------------------------------------- /Day-4/backend.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | backend "s3" { 3 | bucket = "abhishek-s3-demo-xyz" # change this 4 | key = "abhi/terraform.tfstate" 5 | region = "us-east-1" 6 | encrypt = true 7 | dynamodb_table = "terraform-lock" 8 | } 9 | } -------------------------------------------------------------------------------- /Day-4/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | 5 | 
resource "aws_instance" "abhishek" { 6 | instance_type = "t2.micro" 7 | ami = "ami-053b0d53c279acc90" # change this 8 | subnet_id = "subnet-019ea91ed9b5252e7" # change this 9 | } 10 | 11 | resource "aws_s3_bucket" "s3_bucket" { 12 | bucket = "abhishek-s3-demo-xyz" # change this 13 | } 14 | 15 | resource "aws_dynamodb_table" "terraform_lock" { 16 | name = "terraform-lock" 17 | billing_mode = "PAY_PER_REQUEST" 18 | hash_key = "LockID" 19 | 20 | attribute { 21 | name = "LockID" 22 | type = "S" 23 | } 24 | } -------------------------------------------------------------------------------- /Day-4/statefile_scenarios.md: -------------------------------------------------------------------------------- 1 | **Terraform State File** 2 | 3 | Terraform is an Infrastructure as Code (IaC) tool used to define and provision infrastructure resources. The Terraform state file is a crucial component of Terraform that helps it keep track of the resources it manages and their current state. This file, often named `terraform.tfstate`, is a JSON formatted file that contains important information about the infrastructure's current state, such as resource attributes, dependencies, and metadata. 4 | 5 | **Advantages of Terraform State File:** 6 | 7 | 1. **Resource Tracking**: The state file keeps track of all the resources managed by Terraform, including their attributes and dependencies. This ensures that Terraform can accurately update or destroy resources when necessary. 8 | 9 | 2. **Concurrency Control**: When a backend that supports locking is configured, Terraform locks the state, preventing multiple users or processes from modifying the same resources simultaneously. This helps avoid conflicts and ensures data consistency. 10 | 11 | 3. **Plan Calculation**: Terraform uses the state file to calculate and display the difference between the desired configuration (defined in your Terraform code) and the current infrastructure state. 
This helps you understand what changes Terraform will make before applying them. 12 | 13 | 4. **Resource Metadata**: The state file stores metadata about each resource, such as unique identifiers, which is crucial for managing resources and understanding their relationships. 14 | 15 | **Disadvantages of Storing Terraform State in Version Control Systems (VCS):** 16 | 17 | 1. **Security Risks**: Sensitive information, such as API keys or passwords, may be stored in the state file if it's committed to a VCS. This poses a security risk because VCS repositories are often shared among team members. 18 | 19 | 2. **Versioning Complexity**: Managing state files in VCS can lead to complex versioning issues, especially when multiple team members are working on the same infrastructure. 20 | 21 | **Overcoming Disadvantages with Remote Backends (e.g., S3):** 22 | 23 | A remote backend stores the Terraform state file outside of your local file system and version control. Using S3 as a remote backend is a popular choice due to its reliability and scalability. Here's how to set it up: 24 | 25 | 1. **Create an S3 Bucket**: Create an S3 bucket in your AWS account to store the Terraform state. Ensure that the appropriate IAM permissions are set up. 26 | 27 | 2. **Configure Remote Backend in Terraform:** 28 | 29 | ```hcl 30 | # In your Terraform configuration file (e.g., main.tf), define the remote backend. 31 | terraform { 32 | backend "s3" { 33 | bucket = "your-terraform-state-bucket" 34 | key = "path/to/your/terraform.tfstate" 35 | region = "us-east-1" # Change to your desired region 36 | encrypt = true 37 | dynamodb_table = "your-dynamodb-table" 38 | } 39 | } 40 | ``` 41 | 42 | Replace `"your-terraform-state-bucket"` and `"path/to/your/terraform.tfstate"` with your S3 bucket and desired state file path. 43 | 44 | 3. **DynamoDB Table for State Locking:** 45 | 46 | To enable state locking, create a DynamoDB table and provide its name in the `dynamodb_table` field. 
This prevents concurrent access issues when multiple users or processes run Terraform. 47 | 48 | **State Locking with DynamoDB:** 49 | 50 | DynamoDB is used for state locking when a remote backend is configured. It ensures that only one user or process can modify the Terraform state at a time. Here's how to create a DynamoDB table and configure it for state locking: 51 | 52 | 1. **Create a DynamoDB Table:** 53 | 54 | You can create a DynamoDB table using the AWS Management Console or AWS CLI. Here's an AWS CLI example: 55 | 56 | ```sh 57 | aws dynamodb create-table --table-name your-dynamodb-table --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 58 | ``` 59 | 60 | 2. **Configure the DynamoDB Table in Terraform Backend Configuration:** 61 | 62 | In your Terraform configuration, as shown above, provide the DynamoDB table name in the `dynamodb_table` field under the backend configuration. 63 | 64 | By following these steps, you can securely store your Terraform state in S3 with state locking using DynamoDB, mitigating the disadvantages of storing sensitive information in version control systems and ensuring safe concurrent access to your infrastructure. For a complete example in Markdown format, you can refer to the provided example below: 65 | 66 | ```markdown 67 | # Terraform Remote Backend Configuration with S3 and DynamoDB 68 | 69 | ## Create an S3 Bucket for Terraform State 70 | 71 | 1. Log in to your AWS account. 72 | 73 | 2. Go to the AWS S3 service. 74 | 75 | 3. Click the "Create bucket" button. 76 | 77 | 4. Choose a unique name for your bucket (e.g., `your-terraform-state-bucket`). 78 | 79 | 5. Follow the prompts to configure your bucket. Ensure that the appropriate permissions are set. 80 | 81 | ## Configure Terraform Remote Backend 82 | 83 | 1. 
In your Terraform configuration file (e.g., `main.tf`), define the remote backend: 84 | 85 | ```hcl 86 | terraform { 87 | backend "s3" { 88 | bucket = "your-terraform-state-bucket" 89 | key = "path/to/your/terraform.tfstate" 90 | region = "us-east-1" # Change to your desired region 91 | encrypt = true 92 | dynamodb_table = "your-dynamodb-table" 93 | } 94 | } 95 | ``` 96 | 97 | Replace `"your-terraform-state-bucket"` and `"path/to/your/terraform.tfstate"` with your S3 bucket and desired state file path. 98 | 99 | 2. Create a DynamoDB Table for State Locking: 100 | 101 | ```sh 102 | aws dynamodb create-table --table-name your-dynamodb-table --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 103 | ``` 104 | 105 | Replace `"your-dynamodb-table"` with the desired DynamoDB table name. 106 | 107 | 3. Configure the DynamoDB table name in your Terraform backend configuration, as shown in step 1. 108 | 109 | By following these steps, you can securely store your Terraform state in S3 with state locking using DynamoDB, mitigating the disadvantages of storing sensitive information in version control systems and ensuring safe concurrent access to your infrastructure. 110 | ``` 111 | 112 | Please note that you should adapt the configuration and commands to your specific AWS environment and requirements. -------------------------------------------------------------------------------- /Day-5/app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask 2 | 3 | app = Flask(__name__) 4 | 5 | @app.route("/") 6 | def hello(): 7 | return "Hello, Terraform!" 
8 | 9 | if __name__ == "__main__": 10 | app.run(host="0.0.0.0", port=80) 11 | -------------------------------------------------------------------------------- /Day-5/main.tf: -------------------------------------------------------------------------------- 1 | # Define the AWS provider configuration. 2 | provider "aws" { 3 | region = "us-east-1" # Replace with your desired AWS region. 4 | } 5 | 6 | variable "cidr" { 7 | default = "10.0.0.0/16" 8 | } 9 | 10 | resource "aws_key_pair" "example" { 11 | key_name = "terraform-demo-abhi" # Replace with your desired key name 12 | public_key = file("~/.ssh/id_rsa.pub") # Replace with the path to your public key file 13 | } 14 | 15 | resource "aws_vpc" "myvpc" { 16 | cidr_block = var.cidr 17 | } 18 | 19 | resource "aws_subnet" "sub1" { 20 | vpc_id = aws_vpc.myvpc.id 21 | cidr_block = "10.0.0.0/24" 22 | availability_zone = "us-east-1a" 23 | map_public_ip_on_launch = true 24 | } 25 | 26 | resource "aws_internet_gateway" "igw" { 27 | vpc_id = aws_vpc.myvpc.id 28 | } 29 | 30 | resource "aws_route_table" "RT" { 31 | vpc_id = aws_vpc.myvpc.id 32 | 33 | route { 34 | cidr_block = "0.0.0.0/0" 35 | gateway_id = aws_internet_gateway.igw.id 36 | } 37 | } 38 | 39 | resource "aws_route_table_association" "rta1" { 40 | subnet_id = aws_subnet.sub1.id 41 | route_table_id = aws_route_table.RT.id 42 | } 43 | 44 | resource "aws_security_group" "webSg" { 45 | name = "web" 46 | vpc_id = aws_vpc.myvpc.id 47 | 48 | ingress { 49 | description = "HTTP from VPC" 50 | from_port = 80 51 | to_port = 80 52 | protocol = "tcp" 53 | cidr_blocks = ["0.0.0.0/0"] 54 | } 55 | ingress { 56 | description = "SSH" 57 | from_port = 22 58 | to_port = 22 59 | protocol = "tcp" 60 | cidr_blocks = ["0.0.0.0/0"] 61 | } 62 | 63 | egress { 64 | from_port = 0 65 | to_port = 0 66 | protocol = "-1" 67 | cidr_blocks = ["0.0.0.0/0"] 68 | } 69 | 70 | tags = { 71 | Name = "Web-sg" 72 | } 73 | } 74 | 75 | resource "aws_instance" "server" { 76 | ami = "ami-0261755bbcb8c4a84" 77 | 
instance_type = "t2.micro" 78 | key_name = aws_key_pair.example.key_name 79 | vpc_security_group_ids = [aws_security_group.webSg.id] 80 | subnet_id = aws_subnet.sub1.id 81 | 82 | connection { 83 | type = "ssh" 84 | user = "ubuntu" # Replace with the appropriate username for your EC2 instance 85 | private_key = file("~/.ssh/id_rsa") # Replace with the path to your private key 86 | host = self.public_ip 87 | } 88 | 89 | # File provisioner to copy a file from local to the remote EC2 instance 90 | provisioner "file" { 91 | source = "app.py" # Replace with the path to your local file 92 | destination = "/home/ubuntu/app.py" # Replace with the path on the remote instance 93 | } 94 | 95 | provisioner "remote-exec" { 96 | inline = [ 97 | "echo 'Hello from the remote instance'", 98 | "sudo apt update -y", # Update package lists (for ubuntu) 99 | "sudo apt-get install -y python3-pip", # Example package installation 100 | "cd /home/ubuntu", 101 | "sudo pip3 install flask", 102 | "sudo python3 app.py &", 103 | ] 104 | } 105 | } 106 | 107 | -------------------------------------------------------------------------------- /Day-5/provisioners.md: -------------------------------------------------------------------------------- 1 | Let's delve deeper into the `file`, `remote-exec`, and `local-exec` provisioners in Terraform, along with examples for each. 2 | 3 | 1. **file Provisioner:** 4 | 5 | The `file` provisioner is used to copy files or directories from the local machine to a remote machine. This is useful for deploying configuration files, scripts, or other assets to a provisioned instance. 
6 | 7 | Example: 8 | 9 | ```hcl 10 | resource "aws_instance" "example" { 11 | ami = "ami-0c55b159cbfafe1f0" 12 | instance_type = "t2.micro" 13 | provisioner "file" { 14 | source = "local/path/to/localfile.txt" 15 | destination = "/path/on/remote/instance/file.txt" 16 | connection { 17 | type = "ssh" 18 | user = "ec2-user" 19 | private_key = file("~/.ssh/id_rsa") 20 | host = self.public_ip 21 | } 22 | } 23 | } 24 | ``` 25 | 26 | In this example, the `file` provisioner is nested inside the resource block (provisioner blocks are only valid inside a resource) and copies `localfile.txt` from the local machine to the `/path/on/remote/instance/file.txt` location on the AWS EC2 instance using an SSH connection; `self.public_ip` refers to the instance being created. 27 | 28 | 2. **remote-exec Provisioner:** 29 | 30 | The `remote-exec` provisioner is used to run scripts or commands on a remote machine over SSH or WinRM connections. It's often used to configure or install software on provisioned instances. 31 | 32 | Example: 33 | 34 | ```hcl 35 | resource "aws_instance" "example" { 36 | ami = "ami-0c55b159cbfafe1f0" 37 | instance_type = "t2.micro" 38 | 39 | provisioner "remote-exec" { 40 | inline = [ 41 | "sudo yum update -y", 42 | "sudo yum install -y httpd", 43 | "sudo systemctl start httpd", 44 | ] 45 | 46 | connection { 47 | type = "ssh" 48 | user = "ec2-user" 49 | private_key = file("~/.ssh/id_rsa") 50 | host = self.public_ip 51 | } 52 | } 53 | } 54 | ``` 55 | 56 | In this example, the `remote-exec` provisioner connects to the AWS EC2 instance using SSH and runs a series of commands to update the package repositories, install Apache HTTP Server, and start the HTTP server. 57 | 58 | 3. **local-exec Provisioner:** 59 | 60 | The `local-exec` provisioner is used to run scripts or commands locally on the machine where Terraform is executed. It is useful for tasks that don't require remote execution, such as initializing a local database or configuring local resources. 
61 | 62 | Example: 63 | 64 | ```hcl 65 | resource "null_resource" "example" { 66 | triggers = { 67 | always_run = "${timestamp()}" 68 | } 69 | 70 | provisioner "local-exec" { 71 | command = "echo 'This is a local command'" 72 | } 73 | } 74 | ``` 75 | 76 | In this example, a `null_resource` is used with a `local-exec` provisioner to run a simple local command that echoes a message to the console whenever Terraform is applied or refreshed. The `timestamp()` function ensures it runs each time. -------------------------------------------------------------------------------- /Day-6/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | 5 | variable "ami" { 6 | description = "value" 7 | } 8 | 9 | variable "instance_type" { 10 | description = "value" 11 | type = map(string) 12 | 13 | default = { 14 | "dev" = "t2.micro" 15 | "stage" = "t2.medium" 16 | "prod" = "t2.xlarge" 17 | } 18 | } 19 | 20 | module "ec2_instance" { 21 | source = "./modules/ec2_instance" 22 | ami = var.ami 23 | instance_type = lookup(var.instance_type, terraform.workspace, "t2.micro") 24 | } -------------------------------------------------------------------------------- /Day-6/modules/ec2_instance/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | 5 | variable "ami" { 6 | description = "This is AMI for the instance" 7 | } 8 | 9 | variable "instance_type" { 10 | description = "This is the instance type, for example: t2.micro" 11 | } 12 | 13 | resource "aws_instance" "example" { 14 | ami = var.ami 15 | instance_type = var.instance_type 16 | } -------------------------------------------------------------------------------- /Day-6/terraform.tfvars: -------------------------------------------------------------------------------- 1 | ami = "ami-053b0d53c279acc90" 
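Day-6's root module selects an instance type with `lookup(var.instance_type, terraform.workspace, "t2.micro")`. The same `terraform.workspace` value can also be reused for tagging, so each workspace's resources are easy to tell apart. A minimal hedged sketch (the resource and tag names are illustrative, not part of the project code):

```hcl
# Illustrative sketch: reuse the active workspace name for naming/tagging.
locals {
  env = terraform.workspace # e.g. "default", "dev", "stage", "prod"
}

resource "aws_instance" "example" {
  ami           = var.ami
  instance_type = lookup(var.instance_type, local.env, "t2.micro")

  tags = {
    Name        = "app-${local.env}"
    Environment = local.env
  }
}
```

Switching workspaces with `terraform workspace select prod` then changes both the instance type and the tags on the next apply.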
-------------------------------------------------------------------------------- /Day-7/01-secure-terraform.md: -------------------------------------------------------------------------------- 1 | # Ways to secure Terraform 2 | 3 | There are a few ways to manage sensitive information in Terraform files. Here are some of the most common methods: 4 | 5 | ## Use the sensitive attribute 6 | 7 | - Terraform provides a sensitive attribute that can be used to mark variables and outputs as sensitive. When a variable or output is marked as sensitive, Terraform redacts its value in the console output of plan and apply. Note that sensitive values are still stored in plain text in the state file, so the state itself must also be protected. 8 | 9 | ## Secret management system 10 | 11 | - Store sensitive data in a secret management system. A secret management system is a dedicated system for storing sensitive data, such as passwords, API keys, and SSH keys. Terraform can be configured to read secrets from a secret management system, such as HashiCorp Vault or AWS Secrets Manager. 12 | 13 | ## Remote Backend 14 | 15 | - Encrypt sensitive data. The Terraform state file can be encrypted to protect sensitive data. This can be done by using a secure remote backend, such as Terraform Cloud or S3. 16 | 17 | ## Environment Variables 18 | 19 | - Use environment variables. Sensitive data can also be stored in environment variables. Terraform can read environment variables when it is run. 20 | 21 | Here are some specific examples of how to use these methods: 22 | 23 | To mark a variable as sensitive, you would add the sensitive attribute to the variable declaration. 24 | 25 | For example: 26 | 27 | variable "aws_access_key_id" { 28 | sensitive = true 29 | } 30 | 31 | To store sensitive data in a secret management system, you would first create a secret in the secret management system. Then, you would configure Terraform to read the secret from the secret management system. 32 | 33 | For example, to read a secret from HashiCorp Vault, you would use the vault_generic_secret data source. 
34 | 35 | data "vault_generic_secret" "aws_access_key_id" { 36 | path = "secret/aws/access_key_id" 37 | } 38 | 39 | locals { 40 | aws_access_key_id = data.vault_generic_secret.aws_access_key_id.data["value"] 41 | } 42 | 43 | Note that a variable block cannot be assigned a value from a data source, so a local value is used here; the key inside the data map (here "value") depends on how the secret was written to Vault. To encrypt the Terraform state file, you would configure a secure remote backend that encrypts the state at rest (there is no terraform encrypt command). For example, the S3 backend encrypts the state when you set the following in its backend block: 44 | 45 | encrypt = true 46 | 47 | To use environment variables, you would first define the environment variables in your operating system. Terraform automatically reads environment variables of the form TF_VAR_<variable_name>, and the AWS provider also reads standard credential variables such as AWS_ACCESS_KEY_ID directly. 48 | 49 | For example, to set the Terraform variable aws_access_key_id from the environment, you would use the following command: 50 | 51 | export TF_VAR_aws_access_key_id=YOUR_ACCESS_KEY_ID 52 | 53 | Then, you would simply declare the variable in your Terraform configuration file; Terraform picks up the value from the environment automatically: 54 | 55 | variable "aws_access_key_id" { 56 | sensitive = true 57 | } 58 | -------------------------------------------------------------------------------- /Day-7/02-vault-integration.md: -------------------------------------------------------------------------------- 1 | # Vault Integration 2 | 3 | Here are the detailed steps: 4 | 5 | ## Create an AWS EC2 instance with Ubuntu 6 | 7 | To create an AWS EC2 instance with Ubuntu, you can use the AWS Management Console or the AWS CLI. Here are the steps involved in creating an EC2 instance using the AWS Management Console: 8 | 9 | - Go to the AWS Management Console and navigate to the EC2 service. 10 | - Click on the Launch Instance button. 11 | - Select the Ubuntu Server xx.xx LTS AMI. 12 | - Select the instance type that you want to use. 13 | - Configure the instance settings. 14 | - Click on the Launch button. 
15 | 16 | ## Install Vault on the EC2 instance 17 | 18 | To install Vault on the EC2 instance, you can use the following steps: 19 | 20 | **Install gpg** 21 | 22 | ``` 23 | sudo apt update && sudo apt install gpg 24 | ``` 25 | 26 | **Download the signing key to a new keyring** 27 | 28 | ``` 29 | wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg 30 | ``` 31 | 32 | **Verify the key's fingerprint** 33 | 34 | ``` 35 | gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint 36 | ``` 37 | 38 | **Add the HashiCorp repo** 39 | 40 | ``` 41 | echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list 42 | ``` 43 | 44 | ``` 45 | sudo apt update 46 | ``` 47 | 48 | **Finally, Install Vault** 49 | 50 | ``` 51 | sudo apt install vault 52 | ``` 53 | 54 | ## Start Vault. 55 | 56 | To start Vault, you can use the following command: 57 | 58 | ``` 59 | vault server -dev -dev-listen-address="0.0.0.0:8200" 60 | ``` 61 | 62 | ## Configure Terraform to read the secret from Vault. 63 | 64 | Detailed steps to enable and configure AppRole authentication in HashiCorp Vault: 65 | 66 | 1. **Enable AppRole Authentication**: 67 | 68 | To enable the AppRole authentication method in Vault, you need to use the Vault CLI or the Vault HTTP API. 69 | 70 | **Using Vault CLI**: 71 | 72 | Run the following command to enable the AppRole authentication method: 73 | 74 | ```bash 75 | vault auth enable approle 76 | ``` 77 | 78 | This command tells Vault to enable the AppRole authentication method. 79 | 80 | 2. **Create an AppRole**: 81 | 82 | We need to create policy first, 83 | 84 | ``` 85 | vault policy write terraform - <