├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── components.tf ├── config.tf ├── container-harden.png ├── dist-config.tf ├── files │   └── assumption-policy.json ├── hardening-pipeline.tfvars ├── image.tf ├── infr-config.tf ├── infra-network-config.tf ├── kms-key.tf ├── main.tf ├── outputs.tf ├── pipeline.tf ├── recipes.tf ├── roles.tf ├── sec-groups.tf ├── trigger-build.tf └── variables.tf /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. 
Our projects use GitHub's default issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), so looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of 4 | this software and associated documentation files (the "Software"), to deal in 5 | the Software without restriction, including without limitation the rights to 6 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 7 | the Software, and to permit persons to whom the Software is furnished to do so. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 10 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 11 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 12 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 13 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 14 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 15 | 16 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Terraform EC2 Image Builder Container Hardening Pipeline 2 | 3 | This pattern builds an [EC2 Image Builder pipeline](https://docs.aws.amazon.com/imagebuilder/latest/userguide/start-build-image-pipeline.html) that produces a hardened [Amazon Linux 2](https://aws.amazon.com/amazon-linux-2/) base container image. Terraform is used as an infrastructure as code (IaC) tool to configure and provision the infrastructure that is used to create hardened container images. The recipe helps you deploy a [Docker](https://docs.docker.com/)-based Amazon Linux 2 container image that has been hardened according to Red Hat Enterprise Linux (RHEL) 7 STIG Version 3 Release 7 ‒ Medium. (See [STIG-Build-Linux-Medium version 2022.2.1](https://docs.aws.amazon.com/imagebuilder/latest/userguide/toe-stig.html#linux-os-stig) in the _Linux STIG components_ section of the EC2 Image Builder documentation.) This is commonly referred to as a _golden_ container image. 4 | 5 | The build includes two Amazon [EventBridge rules](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rules.html). 
One rule starts the container image pipeline when an Amazon Inspector [finding](https://docs.aws.amazon.com/inspector/latest/user/findings-managing.html) of **High** or **Critical** severity is generated, so that non-secure images are replaced. (This rule requires both Amazon Inspector and Amazon Elastic Container Registry (Amazon ECR) [enhanced scanning](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.html) to be enabled.) The other rule sends notifications to an Amazon Simple Queue Service (Amazon SQS) [queue](https://aws.amazon.com/sqs/) after a successful image push to the Amazon ECR repository, to help you use the latest container images. 6 | 7 | In January 2023, [EC2 Image Builder added support for AWS Marketplace CIS Pre-Hardened images](https://aws.amazon.com/about-aws/whats-new/2023/01/ec2-image-builder-cis-benchmarks-security-hardening-amis/). This achieves a similar hardening goal, but it applies only to AMIs, not container images, and it requires a CIS [subscription](https://aws.amazon.com/marketplace/seller-profile?id=dfa1e6a8-0b7b-4d35-a59c-ce272caee4fc) in [AWS Marketplace](https://aws.amazon.com/marketplace). 8 | 9 | ## Prerequisites 10 | 11 | - An [AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) that you can deploy the infrastructure in. 12 | - [AWS Command Line Interface (AWS CLI) installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) for setting your AWS credentials for local deployment. 13 | - [Download](https://developer.hashicorp.com/terraform/downloads) and set up Terraform by following the [instructions](https://developer.hashicorp.com/terraform/tutorials/aws-get-started) in the Terraform documentation. 14 | - [Git](https://git-scm.com/) (if you’re provisioning from a local machine). 15 | - A [role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) within the AWS account that you can use to create AWS resources. 16 | - All variables defined in the [.tfvars](https://developer.hashicorp.com/terraform/tutorials/configuration-language/variables) file. Alternatively, you can define all variables on the command line when you apply the Terraform configuration, as shown in the example after the Limitations section. 17 | 18 | ## Limitations 19 | 20 | - This solution creates an Amazon Virtual Private Cloud (Amazon VPC) infrastructure that includes a [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) and an [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) for internet connectivity from its private subnet. You cannot use [VPC endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html), because the [bootstrap process](https://aws.amazon.com/premiumsupport/knowledge-center/image-builder-pipeline-execution-error/) performed by AWS Task Orchestrator and Executor (AWSTOE) installs AWS CLI version 2 from the internet. 
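As referenced in the prerequisites, every variable can be supplied directly at apply time instead of through a .tfvars file. This is standard Terraform CLI behavior rather than anything specific to this pattern, and the values below are placeholders:

``` bash
# Supply variables inline instead of maintaining a .tfvars file (placeholder values)
terraform apply \
  -var="account_id=111122223333" \
  -var="aws_region=us-east-1" \
  -var="kms_key_alias=image-builder-container-key" \
  -var="aws_s3_ami_resources_bucket=example-hardening-ami-resources-bucket-name"
```

Terraform prompts interactively for any required variable that is not supplied this way.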
21 | 22 | ## Product versions 23 | 24 | - Amazon Linux 2 25 | - AWS CLI version 1.1 or later 26 | 27 | ## Target technology stack 28 | 29 | This pattern creates 43 resources, including: 30 | 31 | - Two Amazon Simple Storage Service (Amazon S3) [buckets](https://aws.amazon.com/s3/): one for the pipeline component files and one for server access and Amazon VPC flow logs 32 | - An [Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) 33 | - A virtual private cloud (VPC) that contains a public subnet, a private subnet, route tables, a NAT gateway, and an internet gateway 34 | - An EC2 Image Builder pipeline, recipe, and components 35 | - A container image 36 | - An AWS Key Management Service (AWS KMS) [key](https://aws.amazon.com/kms/) for image encryption 37 | - An SQS queue 38 | - Three roles: one to run the EC2 Image Builder pipeline, one instance profile for EC2 Image Builder, and one for Amazon EventBridge rules 39 | - Two Amazon EventBridge rules 40 | 41 | ## Structure 42 | 43 | ``` console 44 | ├── components.tf 45 | ├── config.tf 46 | ├── dist-config.tf 47 | ├── files 48 | │   └── assumption-policy.json 49 | ├── hardening-pipeline.tfvars 50 | ├── image.tf 51 | ├── infr-config.tf 52 | ├── infra-network-config.tf 53 | ├── kms-key.tf 54 | ├── main.tf 55 | ├── outputs.tf 56 | ├── pipeline.tf 57 | ├── recipes.tf 58 | ├── roles.tf 59 | ├── sec-groups.tf 60 | ├── trigger-build.tf 61 | └── variables.tf 62 | ``` 63 | 64 | ## Module details 65 | 66 | - components.tf contains an Amazon S3 upload resource to upload the contents of the /files directory. You can also add custom component YAML files here. 67 | - /files contains the .yml files that define the components used in components.tf. 68 | - image.tf contains the definitions for the base image operating system. This is where you can modify the definitions for a different base image pipeline. 69 | - infr-config.tf and dist-config.tf contain the resources for the minimum AWS infrastructure needed to spin up and distribute the image. 70 | - infra-network-config.tf contains the minimum VPC infrastructure to deploy the container image into. 71 | - hardening-pipeline.tfvars contains the Terraform variables to be used at apply time. 72 | - pipeline.tf creates and manages an EC2 Image Builder pipeline in Terraform. 73 | - recipes.tf is where you can specify different mixtures of components to create container recipes. 74 | - roles.tf contains the AWS Identity and Access Management (IAM) policy definitions for the Amazon Elastic Compute Cloud (Amazon EC2) instance profile and the pipeline deployment role. 75 | - trigger-build.tf contains the EventBridge rules and SQS queue resources. 76 | 77 | ## Target architecture 78 | 79 | ![Deployed Resources Architecture](container-harden.png) 80 | 81 | The diagram illustrates the following workflow: 82 | 83 | 1. EC2 Image Builder builds a container image by using the defined recipe, which installs operating system updates and applies the RHEL Medium STIG to the Amazon Linux 2 base image. 84 | 2. The hardened image is published to a private Amazon ECR registry, and an EventBridge rule sends a message to an SQS queue when the image has been published successfully. 85 | 3. If Amazon Inspector is configured for enhanced scanning, it scans the Amazon ECR registry. 86 | 4. 
If Amazon Inspector generates a **Critical** or **High** severity finding for the image, an EventBridge rule triggers the EC2 Image Builder pipeline to run again and publish a newly hardened image. 87 | 88 | ## Automation and scale 89 | 90 | - This pattern describes how to provision the infrastructure and build the pipeline on your computer. However, it is intended to be used at scale. Instead of deploying the Terraform modules locally, you can use them in a multi-account environment, such as an [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) environment with [Account Factory for Terraform](https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/). In that case, you should use [a backend state S3 bucket](https://developer.hashicorp.com/terraform/language/settings/backends/s3) to manage Terraform state files, instead of managing the configuration state locally. 91 | - For scaled use, deploy the solution to one central account, such as a Shared Services or Common Services account, from a Control Tower or landing zone account model, and grant consumer accounts permission to access the Amazon ECR repository and AWS KMS key. For more information about the setup, see the re:Post article [How can I allow a secondary account to push or pull images in my Amazon ECR image repository?](https://repost.aws/knowledge-center/secondary-account-access-ecr) For example, in an [account vending machine](https://www.hashicorp.com/resources/terraform-landing-zones-for-self-service-multi-aws-at-eventbrite) or Account Factory for Terraform, add permissions to each account baseline or account customization baseline to provide access to that Amazon ECR repository and encryption key. 92 | - After the container image pipeline is deployed, you can modify it by using EC2 Image Builder features such as [components](https://docs.aws.amazon.com/imagebuilder/latest/userguide/manage-components.html), which help you package more components into the Docker build. 93 | - The AWS KMS key that is used to encrypt the container image should be shared across the accounts that the image is intended to be used in. 94 | - You can add support for other images by duplicating the entire Terraform module and modifying the following recipes.tf attributes: 95 | - Modify `parent_image = "amazonlinux:latest"` to another image type. 96 | - Modify `repository_name` to point to an existing Amazon ECR repository. This creates another pipeline that deploys a different parent image type to your existing Amazon ECR repository. 97 | 98 | ## Tools 99 | 100 | - Terraform (IaC provisioning) 101 | - Git (if provisioning locally) 102 | - AWS CLI version 1 or version 2 (if provisioning locally) 103 | 104 | ## Deployment steps 105 | 106 | ### Local Deployment 107 | 108 | 1. Set up your AWS temporary credentials. 109 | 110 | See if the AWS CLI is installed: 111 | 112 | ``` bash 113 | $ aws --version 114 | aws-cli/1.16.249 Python/3.6.8... 115 | ``` 116 | 117 | AWS CLI version 1.1 or higher is fine. 118 | 119 | If you instead receive `command not found`, install the AWS CLI. 120 | 121 | 2. Run `aws configure` and provide the following values: 122 | ``` bash 123 | $ aws configure 124 | AWS Access Key ID [*************xxxx]: 125 | AWS Secret Access Key [**************xxxx]: 126 | Default region name [us-east-1]: 127 | Default output format [None]: 128 | ``` 129 | 3. 
Clone the repository with HTTPS or SSH. 130 | 131 | _HTTPS_ 132 | ``` bash 133 | git clone https://github.com/aws-samples/terraform-ec2-image-builder-container-hardening-pipeline.git 134 | ``` 135 | _SSH_ 136 | 137 | ``` bash 138 | git clone git@github.com:aws-samples/terraform-ec2-image-builder-container-hardening-pipeline.git 139 | ``` 140 | 4. Navigate to the directory containing this solution before running the commands below: 141 | ``` bash 142 | cd terraform-ec2-image-builder-container-hardening-pipeline 143 | ``` 144 | 145 | 5. Update the placeholder variable values in hardening-pipeline.tfvars. You must provide your own `account_id`, `kms_key_alias`, and `aws_s3_ami_resources_bucket`; however, you should also modify the rest of the placeholder variables to match your environment and your desired configuration. 146 | ``` properties 147 | account_id = "" 148 | aws_region = "us-east-1" 149 | vpc_name = "example-hardening-pipeline-vpc" 150 | kms_key_alias = "image-builder-container-key" 151 | ec2_iam_role_name = "example-hardening-instance-role" 152 | hardening_pipeline_role_name = "example-hardening-pipeline-role" 153 | aws_s3_ami_resources_bucket = "example-hardening-ami-resources-bucket-name" 154 | image_name = "example-hardening-al2-container-image" 155 | ecr_name = "example-hardening-container-repo" 156 | recipe_version = "1.0.0" 157 | ebs_root_vol_size = 10 158 | ``` 159 | 160 | 6. The following command initializes, validates, and applies the Terraform modules to the environment, using the variables defined in your .tfvars file: 161 | ``` bash 162 | terraform init && terraform validate && terraform apply -var-file *.tfvars -auto-approve 163 | ``` 164 | 165 | 7. After successful completion of your first Terraform apply, if provisioning locally, you should see this snippet in your local machine’s terminal: 166 | ``` bash 167 | Apply complete! Resources: 43 added, 0 changed, 0 destroyed. 168 | ``` 169 | 170 | 8. *(Optional)* Tear down the infrastructure with the following command: 171 | ``` bash 172 | terraform init && terraform validate && terraform destroy -var-file *.tfvars -auto-approve 173 | ``` 174 | 175 | ## Troubleshooting 176 | 177 | When running Terraform apply or destroy commands from your local machine, you may encounter an error similar to the following: 178 | 179 | ``` properties 180 | Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 123456a9-fbc1-40ed-b8d8-513d0133ba7f, api error InvalidClientTokenId: The security token included in the request is invalid. 181 | ``` 182 | 183 | This error is due to the expiration of the security token for the credentials used in your local machine’s configuration. 184 | 185 | To resolve the issue, see "[Set and View Configuration Settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods)" in the AWS Command Line Interface documentation. 
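A quick way to confirm whether your local credentials are still valid is to call AWS STS before rerunning Terraform. This is standard AWS CLI usage, not specific to this pattern, and the output values shown are placeholders:

``` bash
# Verify that the configured credentials have not expired
$ aws sts get-caller-identity
{
    "UserId": "AIDACKCEVSQ6C2EXAMPLE",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/example-user"
}
```

If this call instead returns an `ExpiredToken` or `InvalidClientTokenId` error, refresh your temporary credentials and run the Terraform command again.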
186 | 187 | ## Author 188 | 189 | * Mike Saintcross [@msntx](https://github.com/msntx) -------------------------------------------------------------------------------- /components.tf: -------------------------------------------------------------------------------- 1 | # Upload files to S3 2 | resource "aws_s3_bucket_object" "component_files" { 3 | depends_on = [ 4 | aws_s3_bucket.s3_pipeline_bucket, 5 | aws_kms_key.this 6 | ] 7 | 8 | for_each = fileset(path.module, "files/**/*.yml") 9 | 10 | bucket = var.aws_s3_ami_resources_bucket 11 | key = each.value 12 | source = "${path.module}/${each.value}" 13 | kms_key_id = aws_kms_key.this.id 14 | } 15 | 16 | /* Add custom component resources below 17 | The YAML file referenced in the URI attribute must exist in the files/ directory 18 | Below is an example component. */ 19 | /* resource "aws_imagebuilder_component" "example_custom_component" { 20 | name = "example-custom-component" 21 | platform = "Linux" 22 | uri = "s3://${var.aws_s3_ami_resources_bucket}/files/example-custom-component.yml" 23 | version = "1.0.0" 24 | kms_key_id = aws_kms_key.this.arn 25 | 26 | depends_on = [ 27 | aws_s3_bucket_object.component_files, 28 | aws_kms_key.this 29 | ] 30 | } */ -------------------------------------------------------------------------------- /config.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = var.aws_region 3 | } -------------------------------------------------------------------------------- /container-harden.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/terraform-ec2-image-builder-container-hardening-pipeline/1ab22e45e9e586f952272a636e6a706fca197a35/container-harden.png -------------------------------------------------------------------------------- /dist-config.tf: -------------------------------------------------------------------------------- 1 | resource "aws_ecr_repository" "hardening_pipeline_repo" { 2 | name = var.ecr_name 3 | image_tag_mutability = "IMMUTABLE" 4 | 5 | encryption_configuration { 6 | encryption_type = "KMS" 7 | kms_key = aws_kms_key.this.arn 8 | } 9 | 10 | image_scanning_configuration { 11 | scan_on_push = true 12 | } 13 | 14 | force_delete = true 15 | } 16 | 17 | resource "aws_imagebuilder_distribution_configuration" "this" { 18 | # Modify this name if desired 19 | name = "local-distribution" 20 | 21 | distribution { 22 | ami_distribution_configuration { 23 | 24 | ami_tags = { 25 | Name = "${var.image_name}-{{ imagebuilder:buildDate }}" 26 | } 27 | 28 | name = "${var.image_name}-{{ imagebuilder:buildDate }}" 29 | 30 | launch_permission { 31 | user_ids = [var.account_id] 32 | } 33 | 34 | kms_key_id = aws_kms_key.this.arn 35 | } 36 | region = var.aws_region 37 | } 38 | } -------------------------------------------------------------------------------- /files/assumption-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Principal": { 7 | "Service": "ec2.amazonaws.com" 8 | }, 9 | "Action": "sts:AssumeRole" 10 | } 11 | ] 12 | } -------------------------------------------------------------------------------- /hardening-pipeline.tfvars: -------------------------------------------------------------------------------- 1 | # Enter values for all of the following if you wish to avoid being prompted on each run. 
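# account_id is the 12-digit ID of the AWS account to deploy into.
# aws_s3_ami_resources_bucket must be a globally unique S3 bucket name.
# kms_key_alias is created as "alias/<value>" (see kms-key.tf).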
2 | account_id = "" 3 | aws_region = "us-east-1" 4 | vpc_name = "example-hardening-pipeline-vpc" 5 | kms_key_alias = "image-builder-container-key" 6 | ec2_iam_role_name = "example-hardening-instance-role" 7 | hardening_pipeline_role_name = "example-hardening-pipeline-role" 8 | aws_s3_ami_resources_bucket = "example-hardening-ami-resources-bucket-0123" 9 | image_name = "example-hardening-al2-container-image" 10 | ecr_name = "example-hardening-container-repo" 11 | recipe_version = "1.0.0" 12 | ebs_root_vol_size = 10 -------------------------------------------------------------------------------- /image.tf: -------------------------------------------------------------------------------- 1 | resource "aws_imagebuilder_image" "al2_container_latest" { 2 | distribution_configuration_arn = aws_imagebuilder_distribution_configuration.this.arn 3 | container_recipe_arn = aws_imagebuilder_container_recipe.container_image.arn 4 | infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.this.arn 5 | 6 | tags = { 7 | Name = var.image_name 8 | BuiltBy = "hardening-container-pipeline" 9 | } 10 | 11 | depends_on = [ 12 | aws_iam_role.ec2_iam_role, 13 | aws_iam_role.hardening_pipeline_role, 14 | aws_security_group.image_builder_sg, 15 | aws_s3_bucket_object.component_files, 16 | aws_imagebuilder_distribution_configuration.this, 17 | aws_kms_key.this 18 | ] 19 | } -------------------------------------------------------------------------------- /infr-config.tf: -------------------------------------------------------------------------------- 1 | resource "aws_imagebuilder_infrastructure_configuration" "this" { 2 | description = "Container Image Infrastructure configuration" 3 | instance_profile_name = var.ec2_iam_role_name 4 | instance_types = ["t3.micro"] 5 | name = "${var.image_name}-infr" 6 | security_group_ids = [aws_security_group.image_builder_sg.id] 7 | subnet_id = aws_subnet.hardening_pipeline_private.id 8 | terminate_instance_on_failure = true 9 | 10 | logging { 11 | s3_logs { 12 | s3_bucket_name = var.aws_s3_ami_resources_bucket 13 | s3_key_prefix = "image-builder/" 14 | } 15 | } 16 | 17 | tags = local.core_tags 18 | } -------------------------------------------------------------------------------- /infra-network-config.tf: -------------------------------------------------------------------------------- 1 | # Creates all networking infrastructure needed to deploy the container image in a private subnet with a NAT gateway 2 | # Amazon Linux container images currently need internet access in order to be bootstrapped by AWSTOE 3 | # VPC endpoints cannot be used 4 | # This solution can be modified to use only VPC endpoints once the bootstrap process is updated 5 | resource "aws_vpc" "hardening_pipeline" { 6 | cidr_block = "192.168.0.0/16" 7 | enable_dns_hostnames = true 8 | tags = { 9 | Name = "${var.vpc_name}" 10 | } 11 | } 12 | 13 | resource "aws_flow_log" "hardening_pipeline_flow" { 14 | depends_on = [ 15 | aws_s3_bucket.s3_pipeline_logging_bucket_logs 16 | ] 17 | log_destination = aws_s3_bucket.s3_pipeline_logging_bucket_logs.arn 18 | log_destination_type = "s3" 19 | traffic_type = "ALL" 20 | vpc_id = aws_vpc.hardening_pipeline.id 21 | } 22 | 23 | # Map public IP on launch because we are creating an internet gateway 24 | resource "aws_subnet" "hardening_pipeline_public" { 25 | depends_on = [ 26 | aws_vpc.hardening_pipeline 27 | ] 28 | 29 | vpc_id = aws_vpc.hardening_pipeline.id 30 | cidr_block = "192.168.0.0/24" 31 | availability_zone = "${var.aws_region}a" 32 | map_public_ip_on_launch 
= true 33 | tags = { 34 | Name = "${var.vpc_name}-public" 35 | } 36 | } 37 | 38 | resource "aws_subnet" "hardening_pipeline_private" { 39 | depends_on = [ 40 | aws_vpc.hardening_pipeline, 41 | aws_subnet.hardening_pipeline_public 42 | ] 43 | 44 | vpc_id = aws_vpc.hardening_pipeline.id 45 | cidr_block = "192.168.1.0/24" 46 | availability_zone = "${var.aws_region}a" 47 | 48 | tags = { 49 | Name = "${var.vpc_name}-private" 50 | } 51 | } 52 | 53 | resource "aws_default_security_group" "hardening_pipeline_vpc_default" { 54 | vpc_id = aws_vpc.hardening_pipeline.id 55 | } 56 | 57 | resource "aws_internet_gateway" "hardening_pipeline_igw" { 58 | depends_on = [ 59 | aws_vpc.hardening_pipeline, 60 | aws_subnet.hardening_pipeline_public, 61 | aws_subnet.hardening_pipeline_private 62 | ] 63 | 64 | vpc_id = aws_vpc.hardening_pipeline.id 65 | tags = { 66 | Name = "${var.vpc_name}-igw" 67 | } 68 | } 69 | resource "aws_route_table" "hardening_pipeline_public_rt" { 70 | depends_on = [ 71 | aws_vpc.hardening_pipeline, 72 | aws_internet_gateway.hardening_pipeline_igw 73 | ] 74 | vpc_id = aws_vpc.hardening_pipeline.id 75 | route { 76 | cidr_block = "0.0.0.0/0" 77 | gateway_id = aws_internet_gateway.hardening_pipeline_igw.id 78 | } 79 | tags = { 80 | Name = "${var.vpc_name}-public-rt" 81 | } 82 | } 83 | resource "aws_route_table_association" "hardening_pipeline_rt_assoc" { 84 | depends_on = [ 85 | aws_vpc.hardening_pipeline, 86 | aws_subnet.hardening_pipeline_public, 87 | aws_subnet.hardening_pipeline_private, 88 | aws_route_table.hardening_pipeline_public_rt 89 | ] 90 | subnet_id = aws_subnet.hardening_pipeline_public.id 91 | route_table_id = aws_route_table.hardening_pipeline_public_rt.id 92 | } 93 | resource "aws_eip" "nat_gateway_eip" { 94 | depends_on = [ 95 | aws_route_table_association.hardening_pipeline_rt_assoc 96 | ] 97 | vpc = true 98 | } 99 | resource "aws_nat_gateway" "hardening_pipeline_nat_gateway" { 100 | depends_on = [ 101 | aws_eip.nat_gateway_eip 102 | ] 103 | allocation_id = aws_eip.nat_gateway_eip.id 104 | subnet_id = aws_subnet.hardening_pipeline_public.id 105 | tags = { 106 | Name = "${var.vpc_name}-nat-gateway" 107 | } 108 | } 109 | resource "aws_route_table" "hardening_pipeline_nat_gateway_rt" { 110 | depends_on = [ 111 | aws_nat_gateway.hardening_pipeline_nat_gateway 112 | ] 113 | vpc_id = aws_vpc.hardening_pipeline.id 114 | route { 115 | cidr_block = "0.0.0.0/0" 116 | nat_gateway_id = aws_nat_gateway.hardening_pipeline_nat_gateway.id 117 | } 118 | tags = { 119 | Name = "${var.vpc_name}-nat-gateway-rt" 120 | } 121 | } 122 | resource "aws_route_table_association" "hardening_pipeline_nat_gw_rt_assoc" { 123 | depends_on = [ 124 | aws_route_table.hardening_pipeline_nat_gateway_rt 125 | ] 126 | subnet_id = aws_subnet.hardening_pipeline_private.id 127 | route_table_id = aws_route_table.hardening_pipeline_nat_gateway_rt.id 128 | } -------------------------------------------------------------------------------- /kms-key.tf: -------------------------------------------------------------------------------- 1 | /* As this is intended to enable a Key Administrator in a multi-account structure 2 | the action and resource definition is broad */ 3 | data "aws_iam_policy_document" "this" { 4 | statement { 5 | sid = "Enable IAM User Permissions" 6 | effect = "Allow" 7 | actions = ["kms:*"] 8 | resources = ["*"] 9 | 10 | principals { 11 | type = "AWS" 12 | identifiers = ["arn:aws:iam::${var.account_id}:root"] 13 | } 14 | } 15 | 16 | statement { 17 | sid = "Allow access for Key Administrators" 18 | 
effect = "Allow" 19 | actions = ["kms:*"] 20 | resources = ["*"] 21 | 22 | principals { 23 | type = "AWS" 24 | identifiers = [ 25 | "arn:aws:iam::${var.account_id}:role/${local.kms_admin_role_name}" 26 | ] 27 | } 28 | } 29 | 30 | statement { 31 | sid = "Allow use of the key" 32 | effect = "Allow" 33 | actions = [ 34 | "kms:Encrypt", 35 | "kms:Decrypt", 36 | "kms:ReEncrypt", 37 | "kms:GenerateDataKey", 38 | "kms:DescribeKey", 39 | "kms:CreateGrant" 40 | ] 41 | resources = ["*"] 42 | 43 | principals { 44 | type = "AWS" 45 | identifiers = [ 46 | "arn:aws:iam::${var.account_id}:role/${local.kms_admin_role_name}" 47 | ] 48 | } 49 | } 50 | 51 | statement { 52 | sid = "Allow attachment of persistent resources" 53 | effect = "Allow" 54 | actions = [ 55 | "kms:CreateGrant", 56 | "kms:ListGrants", 57 | "kms:RevokeGrant" 58 | ] 59 | resources = ["*"] 60 | 61 | principals { 62 | type = "AWS" 63 | identifiers = [ 64 | "arn:aws:iam::${var.account_id}:role/${local.kms_admin_role_name}" 65 | ] 66 | } 67 | 68 | condition { 69 | test = "Bool" 70 | variable = "kms:GrantIsForAWSResource" 71 | values = ["true"] 72 | } 73 | } 74 | } 75 | 76 | # Creates and manages KMS CMK 77 | resource "aws_kms_key" "this" { 78 | description = "EC2 Image Builder key" 79 | is_enabled = true 80 | enable_key_rotation = true 81 | tags = local.core_tags 82 | policy = data.aws_iam_policy_document.this.json 83 | deletion_window_in_days = 30 84 | } 85 | 86 | # Add an alias to the key 87 | resource "aws_kms_alias" "this" { 88 | name = "alias/${var.kms_key_alias}" 89 | target_key_id = aws_kms_key.this.key_id 90 | } -------------------------------------------------------------------------------- /main.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | core_tags = { 3 | ManagedBy = "Terraform" 4 | } 5 | 6 | # These can be leveraged to customize your deployment. 7 | kms_admin_role_name = var.hardening_pipeline_role_name 8 | ecr_name = var.ecr_name 9 | } 10 | 11 | data "aws_caller_identity" "current" {} -------------------------------------------------------------------------------- /outputs.tf: -------------------------------------------------------------------------------- 1 | output "container_info" { 2 | description = "Various Container Image attributes." 3 | value = aws_imagebuilder_container_recipe.container_image 4 | sensitive = true 5 | } -------------------------------------------------------------------------------- /pipeline.tf: -------------------------------------------------------------------------------- 1 | # Create logging S3 Bucket for Server Access Logs and VPC Flow Logs. 2 | # This bucket is not needed if using in a best practice multi-account strategy environment. 3 | # In a Landing Zone or Control Tower environment, point to your Log Archive account buckets. 
4 | # This S3 bucket does not have access logging enabled to avoid recursive logging 5 | resource "aws_s3_bucket" "s3_pipeline_logging_bucket_logs" { 6 | bucket = "${var.aws_s3_ami_resources_bucket}-logs" 7 | force_destroy = true 8 | 9 | tags = { 10 | Name = "${var.aws_s3_ami_resources_bucket}" 11 | } 12 | } 13 | 14 | resource "aws_s3_bucket_server_side_encryption_configuration" "s3_pipeline_logging_bucket_enc" { 15 | bucket = aws_s3_bucket.s3_pipeline_logging_bucket_logs.id 16 | rule { 17 | apply_server_side_encryption_by_default { 18 | sse_algorithm = "AES256" 19 | } 20 | } 21 | } 22 | 23 | resource "aws_s3_bucket_versioning" "s3_pipeline_logging_bucket_version" { 24 | bucket = aws_s3_bucket.s3_pipeline_logging_bucket_logs.id 25 | versioning_configuration { 26 | status = "Enabled" 27 | } 28 | } 29 | 30 | resource "aws_s3_bucket_acl" "s3_pipeline_bucket_logs_acl" { 31 | bucket = aws_s3_bucket.s3_pipeline_logging_bucket_logs.id 32 | acl = "private" 33 | } 34 | 35 | resource "aws_s3_bucket_public_access_block" "s3_pipeline_bucket_logs_block" { 36 | bucket = aws_s3_bucket.s3_pipeline_logging_bucket_logs.id 37 | 38 | block_public_acls = true 39 | block_public_policy = true 40 | ignore_public_acls = true 41 | restrict_public_buckets = true 42 | } 43 | 44 | # Create Pipeline S3 Bucket 45 | resource "aws_s3_bucket" "s3_pipeline_bucket" { 46 | depends_on = [ 47 | aws_s3_bucket.s3_pipeline_logging_bucket_logs 48 | ] 49 | bucket = var.aws_s3_ami_resources_bucket 50 | force_destroy = true 51 | 52 | tags = { 53 | Name = "${var.aws_s3_ami_resources_bucket}" 54 | } 55 | } 56 | 57 | resource "aws_s3_bucket_logging" "s3_pipeline_bucket_log_cfg" { 58 | bucket = aws_s3_bucket.s3_pipeline_bucket.id 59 | target_bucket = aws_s3_bucket.s3_pipeline_logging_bucket_logs.id 60 | target_prefix = "AccessLogs/" 61 | } 62 | 63 | resource "aws_s3_bucket_server_side_encryption_configuration" "s3_pipeline_bucket_enc" { 64 | bucket = aws_s3_bucket.s3_pipeline_bucket.id 65 | rule { 66 | apply_server_side_encryption_by_default { 67 | sse_algorithm = "AES256" 68 | } 69 | } 70 | } 71 | 72 | resource "aws_s3_bucket_acl" "s3_pipeline_bucket_acl" { 73 | bucket = aws_s3_bucket.s3_pipeline_bucket.id 74 | acl = "private" 75 | } 76 | 77 | resource "aws_s3_bucket_policy" "allow_access_from_pipeline_service_role" { 78 | bucket = aws_s3_bucket.s3_pipeline_bucket.id 79 | policy = data.aws_iam_policy_document.allow_access_from_pipeline_service_role.json 80 | } 81 | 82 | resource "aws_s3_bucket_public_access_block" "s3_pipeline_bucket_block" { 83 | bucket = aws_s3_bucket.s3_pipeline_bucket.id 84 | 85 | block_public_acls = true 86 | block_public_policy = true 87 | ignore_public_acls = true 88 | restrict_public_buckets = true 89 | } 90 | 91 | data "aws_iam_policy_document" "allow_access_from_pipeline_service_role" { 92 | depends_on = [ 93 | aws_iam_role.hardening_pipeline_role 94 | ] 95 | statement { 96 | principals { 97 | type = "AWS" 98 | identifiers = ["arn:aws:iam::${var.account_id}:role/${var.hardening_pipeline_role_name}"] 99 | } 100 | 101 | condition { 102 | test = "Bool" 103 | variable = "aws:SecureTransport" 104 | values = ["true"] 105 | } 106 | 107 | actions = [ 108 | "s3:PutObject", 109 | "s3:ListBucket", 110 | "s3:GetObject" 111 | ] 112 | 113 | resources = [ 114 | aws_s3_bucket.s3_pipeline_bucket.arn, 115 | "${aws_s3_bucket.s3_pipeline_bucket.arn}/*", 116 | ] 117 | } 118 | } 119 | 120 | resource "aws_s3_bucket_versioning" "s3_pipeline_bucket_version" { 121 | bucket = aws_s3_bucket.s3_pipeline_bucket.id 122 | 
versioning_configuration { 123 | status = "Enabled" 124 | } 125 | } 126 | 127 | resource "aws_imagebuilder_image_pipeline" "this" { 128 | container_recipe_arn = aws_imagebuilder_container_recipe.container_image.arn 129 | infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.this.arn 130 | name = var.image_name 131 | status = "ENABLED" 132 | description = "Creates images." 133 | distribution_configuration_arn = aws_imagebuilder_distribution_configuration.this.arn 134 | 135 | schedule { 136 | # This cron schedule is for every Friday at 6 AM, modify it for your purposes. 137 | schedule_expression = "cron(0 6 ? * fri)" 138 | pipeline_execution_start_condition = "EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE" 139 | } 140 | 141 | image_tests_configuration { 142 | image_tests_enabled = true 143 | } 144 | 145 | tags = { 146 | "Name" = "${var.image_name}-hardening-container" 147 | } 148 | 149 | depends_on = [ 150 | aws_imagebuilder_container_recipe.container_image, 151 | aws_imagebuilder_infrastructure_configuration.this, 152 | aws_imagebuilder_distribution_configuration.this, 153 | ] 154 | 155 | lifecycle { 156 | create_before_destroy = true 157 | } 158 | } -------------------------------------------------------------------------------- /recipes.tf: -------------------------------------------------------------------------------- 1 | resource "aws_imagebuilder_container_recipe" "container_image" { 2 | 3 | depends_on = [ 4 | aws_ecr_repository.hardening_pipeline_repo 5 | ] 6 | 7 | name = var.image_name 8 | version = "1.0.0" 9 | 10 | container_type = "DOCKER" 11 | parent_image = "amazonlinux:latest" 12 | working_directory = "/tmp" 13 | 14 | target_repository { 15 | repository_name = var.ecr_name 16 | service = "ECR" 17 | } 18 | 19 | instance_configuration { 20 | 21 | block_device_mapping { 22 | device_name = "/dev/xvda" 23 | 24 | ebs { 25 | delete_on_termination = true 26 | volume_size = var.ebs_root_vol_size 27 | volume_type = "gp2" 28 | encrypted = true 29 | kms_key_id = aws_kms_key.this.arn 30 | } 31 | } 32 | 33 | } 34 | 35 | component { 36 | component_arn = "arn:aws:imagebuilder:${var.aws_region}:aws:component/update-linux/x.x.x" 37 | } 38 | 39 | component { 40 | component_arn = "arn:aws:imagebuilder:${var.aws_region}:aws:component/stig-build-linux-medium/x.x.x" 41 | } 42 | 43 | # Add more component ARNs here to customize the recipe 44 | # You can also add custom components if you defined any in components.tf 45 | /* component { 46 | component_arn = aws_imagebuilder_component.example_custom_component.arn 47 | } */ 48 | 49 | dockerfile_template_data = < 0 56 | error_message = "Parameter `aws_s3_ami_resources_bucket` cannot start and end with \"/\", as well as cannot be empty." 57 | } 58 | } 59 | 60 | variable "ebs_root_vol_size" { 61 | type = number 62 | description = "Enter the size (in gigabytes) of the EBS Root Volume." 63 | } 64 | 65 | variable "kms_key_alias" { 66 | type = string 67 | description = "Enter the KMS Key name to be used by the image builder infrastructure configuration." 68 | } --------------------------------------------------------------------------------
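As a closing reference, the README's Automation and scale section describes duplicating this module and editing recipes.tf to build a different parent image into an existing repository. A minimal sketch of those two edits, with hypothetical values for the image reference and repository name:

``` hcl
# Hypothetical recipes.tf changes for a different parent image (example values only)
parent_image = "ubuntu:latest" # was "amazonlinux:latest"

target_repository {
  repository_name = "my-existing-container-repo" # point to an existing Amazon ECR repository
  service         = "ECR"
}
```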