├── HML ├── api_helpers │ ├── python │ │ └── requirements.txt │ ├── update_cidr.py │ ├── get_cidr.py │ ├── post-api-helpers.sh │ └── pre-api-helpers.sh └── terraform │ ├── aft-providers.jinja │ ├── backend.jinja │ └── vpc.tf ├── DEV ├── api_helpers │ ├── python │ │ └── requirements.txt │ ├── update_cidr.py │ ├── get_cidr.py │ ├── post-api-helpers.sh │ └── pre-api-helpers.sh └── terraform │ ├── aft-providers.jinja │ ├── backend.jinja │ └── vpc.tf ├── PRODUCTION ├── api_helpers │ ├── python │ │ └── requirements.txt │ ├── update_cidr.py │ ├── get_cidr.py │ ├── post-api-helpers.sh │ └── pre-api-helpers.sh └── terraform │ ├── aft-providers.jinja │ ├── backend.jinja │ ├── flow-logs.tf │ └── vpc.tf ├── PIPELINE ├── api_helpers │ ├── python │ │ └── requirements.txt │ ├── post-api-helpers.sh │ ├── application_script │ │ ├── terraform-deploy-hml.sh │ │ ├── terraform-deploy-prd.sh │ │ └── terraform-deploy-dev.sh │ └── pre-api-helpers.sh ├── pipeline-infra │ ├── modules │ │ └── aws │ │ │ ├── pipeline │ │ │ ├── snsApprove.tf │ │ │ ├── variables.tf │ │ │ ├── codePipeline.tf │ │ │ └── iam.tf │ │ │ ├── codeBuild │ │ │ ├── buildspecActionApply.yml │ │ │ ├── buildspecActionPlanCheckIAC.yml │ │ │ ├── buildspecActionPlanFmt.yml │ │ │ ├── codebuildActionCheckIAC.tf │ │ │ ├── codebuildActionApply.tf │ │ │ ├── codebuildActionPlan.tf │ │ │ ├── variables.tf │ │ │ └── iam.tf │ │ │ ├── state │ │ │ ├── variables.tf │ │ │ ├── dynamodb.tf │ │ │ ├── kms.tf │ │ │ └── s3.tf │ │ │ └── artifact │ │ │ ├── variables.tf │ │ │ ├── kms.tf │ │ │ └── s3.tf │ ├── environments │ │ ├── dev │ │ │ ├── dev.tfbackend │ │ │ └── dev.jinja │ │ ├── hml │ │ │ ├── hml.tfbackend │ │ │ └── hml.jinja │ │ └── prd │ │ │ ├── prd.tfbackend │ │ │ └── prd.jinja │ ├── providers.tf │ ├── variables.tf │ ├── main.tf │ └── README.md └── terraform │ ├── aft-providers.jinja │ ├── main.tf │ ├── backend.jinja │ └── s3-tf-state.tf ├── CODE_OF_CONDUCT.md ├── LICENSE ├── CONTRIBUTING.md └── README.md /HML/api_helpers/python/requirements.txt: -------------------------------------------------------------------------------- 1 | boto3==1.18.28 2 | requests==2.31.0 3 | -------------------------------------------------------------------------------- /DEV/api_helpers/python/requirements.txt: -------------------------------------------------------------------------------- 1 | boto3==1.18.28 2 | requests==2.31.0 3 | 4 | -------------------------------------------------------------------------------- /PRODUCTION/api_helpers/python/requirements.txt: -------------------------------------------------------------------------------- 1 | boto3==1.18.28 2 | requests==2.31.0 3 | -------------------------------------------------------------------------------- /PIPELINE/api_helpers/python/requirements.txt: -------------------------------------------------------------------------------- 1 | boto3==1.18.28 2 | requests==2.31.0 3 | jinja2-cli==0.7.0 4 | Jinja2==3.0.1 5 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/pipeline/snsApprove.tf: -------------------------------------------------------------------------------- 1 | resource "aws_sns_topic" "approval" { 2 | name = "approval-${var.application_name}-${var.environment}" 3 | kms_master_key_id = "alias/aws/sns" 4 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/environments/dev/dev.tfbackend: -------------------------------------------------------------------------------- 1 | terraform { 2 | backend "s3" 
{ 3 | bucket = "backend-terraform-pipeline-${data.aws_caller_identity.current.account_id}" 4 | key = "pipeline/dev/terraform.tfstate" 5 | region = "us-east-1" 6 | } 7 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/environments/hml/hml.tfbackend: -------------------------------------------------------------------------------- 1 | terraform { 2 | backend "s3" { 3 | bucket = "backend-terraform-pipeline-${data.aws_caller_identity.current.account_id}" 4 | key = "pipeline/hml/terraform.tfstate" 5 | region = "us-east-1" 6 | } 7 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/environments/prd/prd.tfbackend: -------------------------------------------------------------------------------- 1 | terraform { 2 | backend "s3" { 3 | bucket = "backend-terraform-pipeline-${data.aws_caller_identity.current.account_id}" 4 | key = "pipeline/prd/terraform.tfstate" 5 | region = "us-east-1" 6 | } 7 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/codeBuild/buildspecActionApply.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | phases: 4 | build: 5 | commands: 6 | - cd ${TERRAFORM_PATH} 7 | - terraform init 8 | - terraform workspace select -or-create ${WORKSPACE} 9 | - terraform apply tfplan.out -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 
5 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/providers.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = var.aws_region 3 | 4 | } 5 | 6 | provider "aws" { 7 | alias = "workload" 8 | region = var.aws_region 9 | # profile = var.workloadAccess 10 | assume_role { 11 | role_arn = var.workloadAccess 12 | session_name = "AWSAFT-Session" 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /DEV/terraform/aft-providers.jinja: -------------------------------------------------------------------------------- 1 | ## Auto generated providers.tf ## 2 | ## Updated on: {{ timestamp }} ## 3 | 4 | provider "aws" { 5 | region = "{{ provider_region }}" 6 | assume_role { 7 | role_arn = "{{ target_admin_role_arn }}" 8 | } 9 | default_tags { 10 | tags = { 11 | managed_by = "AFT" 12 | } 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /HML/terraform/aft-providers.jinja: -------------------------------------------------------------------------------- 1 | ## Auto generated providers.tf ## 2 | ## Updated on: {{ timestamp }} ## 3 | 4 | provider "aws" { 5 | region = "{{ provider_region }}" 6 | assume_role { 7 | role_arn = "{{ target_admin_role_arn }}" 8 | } 9 | default_tags { 10 | tags = { 11 | managed_by = "AFT" 12 | } 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /PIPELINE/terraform/aft-providers.jinja: -------------------------------------------------------------------------------- 1 | ## Auto generated providers.tf ## 2 | ## Updated on: {{ timestamp }} ## 3 | 4 | provider "aws" { 5 | region = "{{ provider_region }}" 6 | assume_role { 7 | role_arn = "{{ target_admin_role_arn }}" 8 | } 9 | default_tags { 10 | tags = { 11 | managed_by = "AFT" 12 | } 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /PRODUCTION/terraform/aft-providers.jinja: -------------------------------------------------------------------------------- 1 | ## Auto generated providers.tf ## 2 | ## Updated on: {{ timestamp }} ## 3 | 4 | provider "aws" { 5 | region = "{{ provider_region }}" 6 | assume_role { 7 | role_arn = "{{ target_admin_role_arn }}" 8 | } 9 | default_tags { 10 | tags = { 11 | managed_by = "AFT" 12 | } 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/codeBuild/buildspecActionPlanCheckIAC.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | phases: 4 | install: 5 | run-as: root 6 | on-failure: ABORT 7 | commands: 8 | - apk add py3-pip 9 | - pip3 install --upgrade pip && pip3 install --upgrade setuptools 10 | - pip3 install checkov 11 | build: 12 | commands: 13 | - cd ${TERRAFORM_PATH} 14 | - checkov -d . 
--download-external-modules true -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/state/variables.tf: -------------------------------------------------------------------------------- 1 | # Global Variables 2 | variable "aws_region" { 3 | description = "AWS Region for deploying resources" 4 | type = string 5 | } 6 | 7 | variable "environment" { 8 | description = "Lifecycle environment for deployment" 9 | type = string 10 | } 11 | variable "application_name" { 12 | description = "Application hosted in this infrastructure" 13 | type = string 14 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/artifact/variables.tf: -------------------------------------------------------------------------------- 1 | # Global Variables 2 | variable "aws_region" { 3 | description = "AWS Region for deploying resources" 4 | type = string 5 | } 6 | 7 | variable "environment" { 8 | description = "Lifecycle environment for deployment" 9 | type = string 10 | } 11 | variable "application_name" { 12 | description = "Application hosted in this infrastructure" 13 | type = string 14 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/codeBuild/buildspecActionPlanFmt.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | phases: 4 | pre_build: 5 | on-failure: CONTINUE 6 | commands: 7 | - terraform fmt --recursive --check 8 | build: 9 | commands: 10 | - cd ${TERRAFORM_PATH} 11 | - terraform init 12 | - terraform workspace select -or-create ${WORKSPACE} 13 | - terraform plan -var-file=./inventory/${ENV}/global.tfvars --out tfplan.out 14 | artifacts: 15 | files: 16 | - '**/*' -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/state/dynamodb.tf: -------------------------------------------------------------------------------- 1 | resource "aws_dynamodb_table" "state_locking" { 2 | #checkov:skip=CKV_AWS_119 - No needed Customer Managed CMK 3 | hash_key = "LockID" 4 | name = "${data.aws_caller_identity.current.account_id}-${var.application_name}-tflock" 5 | server_side_encryption { 6 | enabled = true 7 | } 8 | point_in_time_recovery { 9 | enabled = true 10 | } 11 | attribute { 12 | name = "LockID" 13 | type = "S" 14 | } 15 | billing_mode = "PAY_PER_REQUEST" 16 | } 17 | 18 | 19 | output "lock_dynamo" { 20 | value = aws_dynamodb_table.state_locking.arn 21 | } 22 | -------------------------------------------------------------------------------- /PIPELINE/terraform/main.tf: -------------------------------------------------------------------------------- 1 | ################################################################################ 2 | # CodeCommit Module 3 | ################################################################################ 4 | data "aws_ssm_parameter" "project" { 5 | name = "/aft/account-request/custom-fields/project" 6 | } 7 | 8 | resource "aws_codecommit_repository" "pipeline" { 9 | #checkov:skip=CKV2_AWS_37 - Pipeline that uses this repository already has the approval rules 10 | repository_name = "${data.aws_ssm_parameter.project.value}-IAC-Repository" 11 | description = "This is the Pipeline Repository" 12 | } 13 | 14 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/environments/prd/prd.jinja: 
-------------------------------------------------------------------------------- 1 | ## Global Variables 2 | aws_region = "{{ provider_region }}" 3 | environment = "{{ environment }}" 4 | application_name = "{{ projectName }}" 5 | workloadAccess = "arn:aws:iam::{{ workloadAccount }}:role/AWSAFTExecution" 6 | 7 | # codeBuild 8 | terraform_path = "./" 9 | workloadAccount = "{{ workloadAccount }}" 10 | 11 | 12 | # pipeline 13 | source_branch_name = "main" 14 | source_repository_arn = "arn:aws:codecommit:{{ provider_region }}:{{ pipelineAccount }}:{{ projectName }}-IAC-Repository" 15 | source_repository_name = "{{ projectName }}-IAC-Repository" 16 | approve_comment = "Looks good?" -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/environments/hml/hml.jinja: -------------------------------------------------------------------------------- 1 | ## Global Variables 2 | aws_region = "{{ provider_region }}" 3 | environment = "{{ environment }}" 4 | application_name = "{{ projectName }}" 5 | workloadAccess = "arn:aws:iam::{{ workloadAccount }}:role/AWSAFTExecution" 6 | 7 | # codeBuild 8 | terraform_path = "./" 9 | workloadAccount = "{{ workloadAccount }}" 10 | 11 | 12 | # pipeline 13 | source_branch_name = "develop" 14 | source_repository_arn = "arn:aws:codecommit:{{ provider_region }}:{{ pipelineAccount }}:{{ projectName }}-IAC-Repository" 15 | source_repository_name = "{{ projectName }}-IAC-Repository" 16 | approve_comment = "Looks good?" -------------------------------------------------------------------------------- /DEV/terraform/backend.jinja: -------------------------------------------------------------------------------- 1 | ## Auto generated backend.tf ## 2 | ## Updated on: {{ timestamp }} ## 3 | 4 | {% if tf_distribution_type == "oss" -%} 5 | terraform { 6 | required_version = ">= 0.15.0" 7 | backend "s3" { 8 | region = "{{ region }}" 9 | bucket = "{{ bucket }}" 10 | key = "{{ key }}" 11 | dynamodb_table = "{{ dynamodb_table }}" 12 | encrypt = "true" 13 | kms_key_id = "{{ kms_key_id }}" 14 | role_arn = "{{ aft_admin_role_arn }}" 15 | } 16 | } 17 | {% else -%} 18 | terraform { 19 | backend "remote" { 20 | organization = "{{ terraform_org_name }}" 21 | workspaces { 22 | name = "{{ terraform_workspace_name }}" 23 | } 24 | } 25 | } 26 | {% endif %} 27 | -------------------------------------------------------------------------------- /HML/terraform/backend.jinja: -------------------------------------------------------------------------------- 1 | ## Auto generated backend.tf ## 2 | ## Updated on: {{ timestamp }} ## 3 | 4 | {% if tf_distribution_type == "oss" -%} 5 | terraform { 6 | required_version = ">= 0.15.0" 7 | backend "s3" { 8 | region = "{{ region }}" 9 | bucket = "{{ bucket }}" 10 | key = "{{ key }}" 11 | dynamodb_table = "{{ dynamodb_table }}" 12 | encrypt = "true" 13 | kms_key_id = "{{ kms_key_id }}" 14 | role_arn = "{{ aft_admin_role_arn }}" 15 | } 16 | } 17 | {% else -%} 18 | terraform { 19 | backend "remote" { 20 | organization = "{{ terraform_org_name }}" 21 | workspaces { 22 | name = "{{ terraform_workspace_name }}" 23 | } 24 | } 25 | } 26 | {% endif %} 27 | -------------------------------------------------------------------------------- /PIPELINE/terraform/backend.jinja: -------------------------------------------------------------------------------- 1 | ## Auto generated backend.tf ## 2 | ## Updated on: {{ timestamp }} ## 3 | 4 | {% if tf_distribution_type == "oss" -%} 5 | terraform { 6 | required_version = ">= 0.15.0" 7 | 
backend "s3" { 8 | region = "{{ region }}" 9 | bucket = "{{ bucket }}" 10 | key = "{{ key }}" 11 | dynamodb_table = "{{ dynamodb_table }}" 12 | encrypt = "true" 13 | kms_key_id = "{{ kms_key_id }}" 14 | role_arn = "{{ aft_admin_role_arn }}" 15 | } 16 | } 17 | {% else -%} 18 | terraform { 19 | backend "remote" { 20 | organization = "{{ terraform_org_name }}" 21 | workspaces { 22 | name = "{{ terraform_workspace_name }}" 23 | } 24 | } 25 | } 26 | {% endif %} 27 | -------------------------------------------------------------------------------- /PRODUCTION/terraform/backend.jinja: -------------------------------------------------------------------------------- 1 | ## Auto generated backend.tf ## 2 | ## Updated on: {{ timestamp }} ## 3 | 4 | {% if tf_distribution_type == "oss" -%} 5 | terraform { 6 | required_version = ">= 0.15.0" 7 | backend "s3" { 8 | region = "{{ region }}" 9 | bucket = "{{ bucket }}" 10 | key = "{{ key }}" 11 | dynamodb_table = "{{ dynamodb_table }}" 12 | encrypt = "true" 13 | kms_key_id = "{{ kms_key_id }}" 14 | role_arn = "{{ aft_admin_role_arn }}" 15 | } 16 | } 17 | {% else -%} 18 | terraform { 19 | backend "remote" { 20 | organization = "{{ terraform_org_name }}" 21 | workspaces { 22 | name = "{{ terraform_workspace_name }}" 23 | } 24 | } 25 | } 26 | {% endif %} 27 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/environments/dev/dev.jinja: -------------------------------------------------------------------------------- 1 | ## Global Variables 2 | aws_region = "{{ provider_region }}" 3 | environment = "{{ environment }}" 4 | application_name = "{{ projectName }}" 5 | workloadAccess = "arn:aws:iam::{{ workloadAccount }}:role/AWSAFTExecution" 6 | # workloadAccess = "{{ workloadAccess }}" 7 | 8 | # codeBuild 9 | terraform_path = "./" 10 | workloadAccount = "{{ workloadAccount }}" 11 | 12 | 13 | # pipeline 14 | source_branch_name = "develop" 15 | # source_repository_arn = "arn:aws:codecommit:us-east-1:760226449147:Pipeline-IAC-Repository" 16 | source_repository_arn = "arn:aws:codecommit:{{ provider_region }}:{{ pipelineAccount }}:{{ projectName }}-IAC-Repository" 17 | source_repository_name = "{{ projectName }}-IAC-Repository" 18 | approve_comment = "Looks good?" -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/state/kms.tf: -------------------------------------------------------------------------------- 1 | data "aws_caller_identity" "current" {} 2 | resource "aws_kms_key" "state" { 3 | description = "This key is used to encrypt bucket objects" 4 | deletion_window_in_days = 10 5 | policy = jsonencode({ 6 | Version = "2012-10-17" 7 | Statement = [ 8 | { 9 | Sid = "Enable IAM User Permissions" 10 | Effect = "Allow" 11 | Principal = { 12 | AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" 13 | } 14 | Action = "*" 15 | Resource = "*" 16 | }, 17 | ] 18 | }) 19 | 20 | enable_key_rotation = true 21 | } 22 | 23 | resource "aws_kms_alias" "state" { 24 | name = "alias/TerraformState" 25 | target_key_id = aws_kms_key.state.key_id 26 | } 27 | -------------------------------------------------------------------------------- /DEV/api_helpers/update_cidr.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Copyright Amazon.com, Inc. or its affiliates. All rights reserved. 
3 | # SPDX-License-Identifier: Apache-2.0 4 | import boto3 5 | import json 6 | import sys 7 | 8 | ipamlambda = sys.argv[5] 9 | ipamapigtw = sys.argv[4] 10 | region = sys.argv[3] 11 | cidr_block = sys.argv[2] 12 | current_VPC_ID = sys.argv[1] 13 | 14 | lambda_client = boto3.client('lambda') 15 | params = { 16 | "RequestType": "Create", 17 | "ResourceProperties": { 18 | "ApiRegion": region, 19 | "Cidr": cidr_block, 20 | "VpcId": current_VPC_ID, 21 | "CidrApiEndpoint": ipamapigtw 22 | 23 | } 24 | } 25 | 26 | response = lambda_client.invoke( 27 | FunctionName=ipamlambda, 28 | Payload=json.dumps(params), 29 | ) 30 | 31 | print(response['Payload'].read().decode("utf-8")) 32 | -------------------------------------------------------------------------------- /HML/api_helpers/update_cidr.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Copyright Amazon.com, Inc. or its affiliates. All rights reserved. 3 | # SPDX-License-Identifier: Apache-2.0 4 | import boto3 5 | import json 6 | import sys 7 | 8 | ipamlambda = sys.argv[5] 9 | ipamapigtw = sys.argv[4] 10 | region = sys.argv[3] 11 | cidr_block = sys.argv[2] 12 | current_VPC_ID = sys.argv[1] 13 | 14 | lambda_client = boto3.client('lambda') 15 | params = { 16 | "RequestType": "Create", 17 | "ResourceProperties": { 18 | "ApiRegion": region, 19 | "Cidr": cidr_block, 20 | "VpcId": current_VPC_ID, 21 | "CidrApiEndpoint": ipamapigtw 22 | 23 | } 24 | } 25 | 26 | response = lambda_client.invoke( 27 | FunctionName=ipamlambda, 28 | Payload=json.dumps(params), 29 | ) 30 | 31 | print(response['Payload'].read().decode("utf-8")) 32 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/artifact/kms.tf: -------------------------------------------------------------------------------- 1 | data "aws_caller_identity" "current" {} 2 | resource "aws_kms_key" "artifact" { 3 | description = "This key is used to encrypt bucket objects" 4 | deletion_window_in_days = 10 5 | policy = jsonencode({ 6 | Version = "2012-10-17" 7 | Statement = [ 8 | { 9 | Sid = "Enable IAM User Permissions" 10 | Effect = "Allow" 11 | Principal = { 12 | AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" 13 | } 14 | Action = "*" 15 | Resource = "*" 16 | }, 17 | ] 18 | }) 19 | 20 | enable_key_rotation = true 21 | } 22 | 23 | resource "aws_kms_alias" "artifact" { 24 | name = "alias/${var.environment}/CodeArtifactKey" 25 | target_key_id = aws_kms_key.artifact.key_id 26 | } 27 | 28 | output "artifact_kms" { 29 | value = aws_kms_key.artifact.arn 30 | } -------------------------------------------------------------------------------- /PRODUCTION/api_helpers/update_cidr.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Copyright Amazon.com, Inc. or its affiliates. All rights reserved. 
3 | # SPDX-License-Identifier: Apache-2.0 4 | import boto3 5 | import json 6 | import sys 7 | 8 | ipamlambda = sys.argv[5] 9 | ipamapigtw = sys.argv[4] 10 | region = sys.argv[3] 11 | cidr_block = sys.argv[2] 12 | current_VPC_ID = sys.argv[1] 13 | 14 | lambda_client = boto3.client('lambda') 15 | params = { 16 | "RequestType": "Create", 17 | "ResourceProperties": { 18 | "ApiRegion": region, 19 | "Cidr": cidr_block, 20 | "VpcId": current_VPC_ID, 21 | "CidrApiEndpoint": ipamapigtw 22 | 23 | } 24 | } 25 | 26 | response = lambda_client.invoke( 27 | FunctionName=ipamlambda, 28 | Payload=json.dumps(params), 29 | ) 30 | 31 | print(response['Payload'].read().decode("utf-8")) 32 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT No Attribution 2 | 3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of 6 | this software and associated documentation files (the "Software"), to deal in 7 | the Software without restriction, including without limitation the rights to 8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 9 | the Software, and to permit persons to whom the Software is furnished to do so. 10 | 11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 17 | 18 | -------------------------------------------------------------------------------- /PIPELINE/terraform/s3-tf-state.tf: -------------------------------------------------------------------------------- 1 | data "aws_caller_identity" "current" {} 2 | 3 | resource "aws_s3_bucket" "state" { 4 | #checkov:skip=CKV_AWS_144 - No needed cross-region replication enabled 5 | #checkov:skip=CKV_AWS_21 - No needed versioning enabled 6 | #checkov:skip=CKV_AWS_18 - No needed access log enabled 7 | #checkov:skip=CKV_AWS_145 - S3 default encryption enabled 8 | #checkov:skip=CKV2_AWS_61 - No needed lifecycle configuration 9 | #checkov:skip=CKV2_AWS_62 - No needed notification 10 | bucket = "backend-terraform-pipeline-${data.aws_caller_identity.current.account_id}" 11 | } 12 | 13 | resource "aws_s3_bucket_server_side_encryption_configuration" "state" { 14 | bucket = aws_s3_bucket.state.bucket 15 | 16 | rule { 17 | apply_server_side_encryption_by_default { 18 | sse_algorithm = "AES256" 19 | } 20 | } 21 | } 22 | 23 | resource "aws_s3_bucket_public_access_block" "state" { 24 | bucket = aws_s3_bucket.state.id 25 | block_public_acls = true 26 | block_public_policy = true 27 | ignore_public_acls = true 28 | restrict_public_buckets = true 29 | } 30 | 31 | -------------------------------------------------------------------------------- /DEV/api_helpers/get_cidr.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Copyright Amazon.com, Inc. or its affiliates. All rights reserved. 
3 | # SPDX-License-Identifier: Apache-2.0 4 | import boto3 5 | import json 6 | import sys 7 | 8 | account_number = sys.argv[1] 9 | account_name = sys.argv[2] 10 | region = sys.argv[3] 11 | ipamapigtw = sys.argv[4] 12 | ipamlambda = sys.argv[5] 13 | lambda_client = boto3.client('lambda') 14 | params = { "RequestType": "Create", 15 | "ResourceProperties": { 16 | "ApiRegion": region, 17 | "AccountId": account_number, 18 | "Region": region , 19 | "ProjectCode": account_name, 20 | "Prefix": "22", 21 | "Requestor": "aft-automation", 22 | "Env": "dev", 23 | "Reason": "Requesting a new CIDR Range", 24 | "CidrApiEndpoint": ipamapigtw 25 | } 26 | } 27 | 28 | response = lambda_client.invoke( 29 | FunctionName=ipamlambda, 30 | Payload=json.dumps(params), 31 | 32 | 33 | ) 34 | 35 | print(response['Payload'].read().decode("utf-8")) 36 | -------------------------------------------------------------------------------- /HML/api_helpers/get_cidr.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Copyright Amazon.com, Inc. or its affiliates. All rights reserved. 3 | # SPDX-License-Identifier: Apache-2.0 4 | import boto3 5 | import json 6 | import sys 7 | 8 | account_number = sys.argv[1] 9 | account_name = sys.argv[2] 10 | region = sys.argv[3] 11 | ipamapigtw = sys.argv[4] 12 | ipamlambda = sys.argv[5] 13 | lambda_client = boto3.client('lambda') 14 | params = { "RequestType": "Create", 15 | "ResourceProperties": { 16 | "ApiRegion": region, 17 | "AccountId": account_number, 18 | "Region": region , 19 | "ProjectCode": account_name, 20 | "Prefix": "22", 21 | "Requestor": "aft-automation", 22 | "Env": "hml", 23 | "Reason": "Requesting a new CIDR Range", 24 | "CidrApiEndpoint": ipamapigtw 25 | } 26 | } 27 | 28 | response = lambda_client.invoke( 29 | FunctionName=ipamlambda, 30 | Payload=json.dumps(params), 31 | 32 | 33 | ) 34 | 35 | print(response['Payload'].read().decode("utf-8")) 36 | -------------------------------------------------------------------------------- /PRODUCTION/api_helpers/get_cidr.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Copyright Amazon.com, Inc. or its affiliates. All rights reserved. 
3 | # SPDX-License-Identifier: Apache-2.0 4 | import boto3 5 | import json 6 | import sys 7 | 8 | account_number = sys.argv[1] 9 | account_name = sys.argv[2] 10 | region = sys.argv[3] 11 | ipamapigtw = sys.argv[4] 12 | ipamlambda = sys.argv[5] 13 | lambda_client = boto3.client('lambda') 14 | params = { "RequestType": "Create", 15 | "ResourceProperties": { 16 | "ApiRegion": region, 17 | "AccountId": account_number, 18 | "Region": region , 19 | "ProjectCode": account_name, 20 | "Prefix": "22", 21 | "Requestor": "aft-automation", 22 | "Env": "prod", 23 | "Reason": "Requesting a new CIDR Range", 24 | "CidrApiEndpoint": ipamapigtw 25 | } 26 | } 27 | 28 | response = lambda_client.invoke( 29 | FunctionName=ipamlambda, 30 | Payload=json.dumps(params), 31 | 32 | 33 | ) 34 | 35 | print(response['Payload'].read().decode("utf-8")) 36 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/state/s3.tf: -------------------------------------------------------------------------------- 1 | resource "aws_s3_bucket" "state" { 2 | #checkov:skip=CKV_AWS_144 - No needed cross-region replication enabled 3 | #checkov:skip=CKV_AWS_21 - No needed versioning enabled 4 | #checkov:skip=CKV_AWS_18 - No needed access log enabled 5 | #checkov:skip=CKV_AWS_145 - S3 default encryption enabled 6 | #checkov:skip=CKV2_AWS_61 - No needed lifecycle configuration 7 | #checkov:skip=CKV2_AWS_62 - No needed notification 8 | 9 | bucket = "${data.aws_caller_identity.current.account_id}-${var.application_name}-tfstate" 10 | } 11 | 12 | resource "aws_s3_bucket_server_side_encryption_configuration" "state" { 13 | bucket = aws_s3_bucket.state.bucket 14 | 15 | rule { 16 | apply_server_side_encryption_by_default { 17 | sse_algorithm = "AES256" 18 | } 19 | } 20 | } 21 | 22 | resource "aws_s3_bucket_public_access_block" "state" { 23 | bucket = aws_s3_bucket.state.id 24 | block_public_acls = true 25 | block_public_policy = true 26 | ignore_public_acls = true 27 | restrict_public_buckets = true 28 | } 29 | 30 | 31 | 32 | 33 | output "state_bucket" { 34 | value = aws_s3_bucket.state.arn 35 | } 36 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/codeBuild/codebuildActionCheckIAC.tf: -------------------------------------------------------------------------------- 1 | # ------------------------------------------------------------ 2 | # CodeBuild Check IAC 3 | # ------------------------------------------------------------ 4 | resource "aws_codebuild_project" "plan_check" { 5 | #checkov:skip=CKV_AWS_314 - No need 6 | name = "${var.application_name}-TerraformCheckIAC-${var.environment}" 7 | description = "Project to execute terraform plan" 8 | service_role = aws_iam_role.codebuild_project_plan_fmt.arn 9 | encryption_key = var.artifact_kms 10 | 11 | artifacts { 12 | type = "CODEPIPELINE" 13 | } 14 | 15 | environment { 16 | compute_type = "BUILD_GENERAL1_SMALL" 17 | image = "hashicorp/terraform:latest" 18 | type = "LINUX_CONTAINER" 19 | image_pull_credentials_type = "CODEBUILD" 20 | } 21 | 22 | source { 23 | type = "CODEPIPELINE" 24 | buildspec = templatefile("${path.module}/buildspecActionPlanCheckIAC.yml", { 25 | TERRAFORM_PATH = var.terraform_path, 26 | }) 27 | } 28 | } 29 | 30 | output "check_project" { 31 | value = aws_codebuild_project.plan_check.name 32 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/codeBuild/codebuildActionApply.tf: 
-------------------------------------------------------------------------------- 1 | # ------------------------------------------------------------ 2 | # CodeBuild Apply 3 | # ------------------------------------------------------------ 4 | resource "aws_codebuild_project" "apply" { 5 | #checkov:skip=CKV_AWS_314 - No need 6 | 7 | name = "${var.application_name}-TerraformApply-${var.environment}" 8 | description = "Project to execute terraform apply" 9 | service_role = aws_iam_role.codebuild_project_apply.arn 10 | encryption_key = var.artifact_kms 11 | 12 | artifacts { 13 | type = "CODEPIPELINE" 14 | } 15 | 16 | environment { 17 | compute_type = "BUILD_GENERAL1_SMALL" 18 | image = "hashicorp/terraform:latest" 19 | type = "LINUX_CONTAINER" 20 | image_pull_credentials_type = "CODEBUILD" 21 | } 22 | 23 | source { 24 | type = "CODEPIPELINE" 25 | buildspec = templatefile("${path.module}/buildspecActionApply.yml", { 26 | TERRAFORM_PATH = var.terraform_path, 27 | WORKSPACE = var.workspace, 28 | ENV = var.environment, 29 | }) 30 | } 31 | } 32 | 33 | output "apply_project" { 34 | value = aws_codebuild_project.apply.name 35 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/codeBuild/codebuildActionPlan.tf: -------------------------------------------------------------------------------- 1 | # ------------------------------------------------------------ 2 | # CodeBuild Plan 3 | # ------------------------------------------------------------ 4 | resource "aws_codebuild_project" "plan_fmt" { 5 | #checkov:skip=CKV_AWS_314 - No need 6 | name = "${var.application_name}-TerraformPlanFmt-${var.environment}" 7 | description = "Project to execute terraform plan and fmt" 8 | service_role = aws_iam_role.codebuild_project_plan_fmt.arn 9 | encryption_key = var.artifact_kms 10 | 11 | artifacts { 12 | type = "CODEPIPELINE" 13 | } 14 | 15 | environment { 16 | compute_type = "BUILD_GENERAL1_SMALL" 17 | image = "hashicorp/terraform:latest" 18 | type = "LINUX_CONTAINER" 19 | image_pull_credentials_type = "CODEBUILD" 20 | } 21 | 22 | source { 23 | type = "CODEPIPELINE" 24 | buildspec = templatefile("${path.module}/buildspecActionPlanFmt.yml", { 25 | TERRAFORM_PATH = var.terraform_path, 26 | WORKSPACE = var.workspace, 27 | ENV = var.environment, 28 | }) 29 | } 30 | } 31 | 32 | output "plan_project" { 33 | value = aws_codebuild_project.plan_fmt.name 34 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/variables.tf: -------------------------------------------------------------------------------- 1 | # Global Variables 2 | variable "aws_region" { 3 | description = "AWS Region for deploying resources" 4 | type = string 5 | } 6 | 7 | variable "environment" { 8 | description = "Lifecycle environment for deployment" 9 | type = string 10 | } 11 | variable "application_name" { 12 | description = "Application hosted in this infrastructure" 13 | type = string 14 | } 15 | 16 | variable "workloadAccess" { 17 | description = "" 18 | type = string 19 | } 20 | 21 | #codeBuild 22 | 23 | variable "terraform_path" { 24 | description = "Path where the terraform code is located" 25 | type = string 26 | } 27 | 28 | variable "workloadAccount" { 29 | description = "Workload Account ID" 30 | type = string 31 | } 32 | 33 | 34 | 35 | 36 | 37 | #pipeline 38 | 39 | variable "source_repository_name" { 40 | description = "Repository name where terraform code is located" 41 | type = string 42 | } 43 | 44 | variable 
"source_branch_name" { 45 | description = "Name of the branch where the pipeline will be listening" 46 | type = string 47 | } 48 | 49 | variable "source_repository_arn" { 50 | description = "ARN where terraform code is located" 51 | type = string 52 | } 53 | 54 | 55 | variable "approve_comment" { 56 | description = "comment that will go in the approval email" 57 | type = string 58 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/codeBuild/variables.tf: -------------------------------------------------------------------------------- 1 | # Global Variables 2 | variable "aws_region" { 3 | description = "AWS Region for deploying resources" 4 | type = string 5 | } 6 | 7 | variable "environment" { 8 | description = "Lifecycle environment for deployment" 9 | type = string 10 | } 11 | variable "application_name" { 12 | description = "Application hosted in this infrastructure" 13 | type = string 14 | } 15 | 16 | variable "backend_s3_arn" { 17 | description = "" 18 | type = string 19 | } 20 | 21 | variable "backend_lock_dynamodb_arn" { 22 | description = "" 23 | type = string 24 | } 25 | 26 | variable "artifact_s3" { 27 | description = "" 28 | type = string 29 | } 30 | 31 | variable "artifact_bucket_name" { 32 | description = "" 33 | type = string 34 | } 35 | 36 | variable "artifact_kms" { 37 | description = "" 38 | type = string 39 | } 40 | 41 | variable "source_branch_name" { 42 | description = "" 43 | type = string 44 | } 45 | 46 | 47 | variable "terraform_path" { 48 | description = "Path where the terraform code is located" 49 | type = string 50 | } 51 | 52 | variable "workloadAccount" { 53 | description = "Workload Account ID" 54 | type = string 55 | } 56 | 57 | variable "workspace" { 58 | description = "Terraform Worspace name to be used" 59 | type = string 60 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/artifact/s3.tf: -------------------------------------------------------------------------------- 1 | resource "aws_s3_bucket" "artifact" { 2 | #checkov:skip=CKV_AWS_144 - No needed cross-region replication enabled 3 | #checkov:skip=CKV_AWS_21 - No needed versioning enabled 4 | #checkov:skip=CKV_AWS_18 - No needed access log enabled 5 | #checkov:skip=CKV2_AWS_61 - No needed lifecycle configuration 6 | #checkov:skip=CKV2_AWS_62 - No needed notification 7 | bucket = "${data.aws_caller_identity.current.account_id}-${var.application_name}-${var.environment}" 8 | } 9 | 10 | resource "aws_s3_bucket_server_side_encryption_configuration" "artifact" { 11 | bucket = aws_s3_bucket.artifact.bucket 12 | 13 | rule { 14 | apply_server_side_encryption_by_default { 15 | kms_master_key_id = aws_kms_key.artifact.arn 16 | sse_algorithm = "aws:kms" 17 | } 18 | } 19 | } 20 | 21 | resource "aws_s3_bucket_public_access_block" "artifact" { 22 | bucket = aws_s3_bucket.artifact.id 23 | block_public_acls = true 24 | block_public_policy = true 25 | ignore_public_acls = true 26 | restrict_public_buckets = true 27 | } 28 | 29 | resource "aws_s3_bucket_object" "object" { 30 | #checkov:skip=CKV_AWS_186 - Bucket encryption enabled 31 | bucket = aws_s3_bucket.artifact.bucket 32 | key = "dynamic-submodule/SubModule.sh" 33 | source = "./dynamic-submodule/SubModule.sh" 34 | etag = filemd5("./dynamic-submodule/SubModule.sh") 35 | } 36 | 37 | 38 | output "artifact_bucket" { 39 | value = aws_s3_bucket.artifact.arn 40 | } 41 | 42 | output "artifact_bucket_name" { 43 | value = 
aws_s3_bucket.artifact.bucket 44 | } 45 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/pipeline/variables.tf: -------------------------------------------------------------------------------- 1 | # Global Variables 2 | variable "aws_region" { 3 | description = "AWS Region for deploying resources" 4 | type = string 5 | } 6 | 7 | variable "environment" { 8 | description = "Lifecycle environment for deployment" 9 | type = string 10 | } 11 | variable "application_name" { 12 | description = "Application hosted in this infrastructure" 13 | type = string 14 | } 15 | 16 | #pipeline 17 | 18 | variable "source_repository_name" { 19 | description = "Repository name where terraform code is located" 20 | type = string 21 | } 22 | 23 | variable "source_branch_name" { 24 | description = "Name of the branch where the pipeline will be listening" 25 | type = string 26 | } 27 | 28 | variable "source_repository_arn" { 29 | description = "ARN where terraform code is located" 30 | type = string 31 | } 32 | 33 | variable "subModules_project" { 34 | description = "" 35 | type = string 36 | } 37 | 38 | variable "check_project" { 39 | description = "" 40 | type = string 41 | } 42 | 43 | variable "plan_project" { 44 | description = "" 45 | type = string 46 | } 47 | 48 | variable "apply_project" { 49 | description = "" 50 | type = string 51 | } 52 | 53 | variable "artifact_s3" { 54 | description = "" 55 | type = string 56 | } 57 | 58 | variable "artifact_kms" { 59 | description = "" 60 | type = string 61 | } 62 | 63 | variable "artifact_bucket_name" { 64 | description = "" 65 | type = string 66 | } 67 | 68 | 69 | variable "approve_comment" { 70 | description = "comment that will go in the approval email" 71 | type = string 72 | } -------------------------------------------------------------------------------- /PIPELINE/api_helpers/post-api-helpers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Executing Post-API Helpers" 4 | echo "---------------------" 5 | echo "---------------------" 6 | echo "---------------------Get Child Account Number---------------------" 7 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 8 | echo "accountnumber: $accountnumber" 9 | export AWS_PROFILE=aft-target 10 | echo "aws sts get-caller-identity" 11 | aws sts get-caller-identity 12 | 13 | echo "---------------------Installing terraform and jinja2---------------------" 14 | sudo yum install -y yum-utils shadow-utils 15 | sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo 16 | sudo yum -y install terraform 17 | 18 | echo "---------------------Change AWS_PROFILE=aft-management-admin---------------------" 19 | export AWS_PROFILE=aft-management-admin 20 | 21 | echo "---------------------Ambiente DEV---------------------" 22 | echo "---------------------Executando terraform application pipeline---------------------" 23 | chmod +x $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/application_script/terraform-deploy-dev.sh 24 | $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/application_script/terraform-deploy-dev.sh 25 | 26 | 27 | echo "---------------------Ambiente HML---------------------" 28 | echo "---------------------Executando terraform application pipeline---------------------" 29 | chmod +x $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/application_script/terraform-deploy-hml.sh 30 | 
$DEFAULT_PATH/$CUSTOMIZATION/api_helpers/application_script/terraform-deploy-hml.sh 31 | 32 | echo "---------------------Ambiente PRD---------------------" 33 | echo "---------------------Executando terraform application pipeline---------------------" 34 | chmod +x $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/application_script/terraform-deploy-prd.sh 35 | $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/application_script/terraform-deploy-prd.sh -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/main.tf: -------------------------------------------------------------------------------- 1 | data "aws_caller_identity" "current" {} 2 | 3 | terraform { 4 | backend "s3" {} 5 | } 6 | 7 | module "state" { 8 | source = "./modules/aws/state" 9 | aws_region = var.aws_region 10 | application_name = var.application_name 11 | environment = var.environment 12 | } 13 | 14 | module "artifact" { 15 | source = "./modules/aws/artifact" 16 | aws_region = var.aws_region 17 | application_name = var.application_name 18 | environment = var.environment 19 | } 20 | 21 | module "codeBuild" { 22 | source = "./modules/aws/codeBuild" 23 | providers = { 24 | aws.workload = aws.workload 25 | } 26 | aws_region = var.aws_region 27 | application_name = var.application_name 28 | environment = var.environment 29 | terraform_path = var.terraform_path 30 | backend_s3_arn = "arn:aws:s3:::${data.aws_caller_identity.current.account_id}-${var.application_name}-tfstate" 31 | backend_lock_dynamodb_arn = "arn:aws:dynamodb:${var.aws_region}:${data.aws_caller_identity.current.account_id}:table/${data.aws_caller_identity.current.account_id}-${var.application_name}-tflock" 32 | artifact_s3 = module.artifact.artifact_bucket 33 | artifact_kms = module.artifact.artifact_kms 34 | workloadAccount = var.workloadAccount 35 | source_branch_name = var.source_branch_name 36 | workspace = var.environment 37 | artifact_bucket_name = module.artifact.artifact_bucket_name 38 | 39 | } 40 | 41 | module "pipeline" { 42 | source = "./modules/aws/pipeline" 43 | aws_region = var.aws_region 44 | application_name = var.application_name 45 | environment = var.environment 46 | artifact_s3 = module.artifact.artifact_bucket 47 | artifact_kms = module.artifact.artifact_kms 48 | apply_project = module.codeBuild.apply_project 49 | plan_project = module.codeBuild.plan_project 50 | check_project = module.codeBuild.check_project 51 | subModules_project = module.codeBuild.subModules_project 52 | source_branch_name = var.source_branch_name 53 | source_repository_arn = var.source_repository_arn 54 | source_repository_name = var.source_repository_name 55 | artifact_bucket_name = module.artifact.artifact_bucket_name 56 | approve_comment = var.approve_comment 57 | } -------------------------------------------------------------------------------- /HML/api_helpers/post-api-helpers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Executing Post-API Helpers" 4 | echo "---------------------" 5 | echo "---------------------" 6 | echo "---------------------Get Child Account Number---------------------" 7 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 8 | regionCidr=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/region" --query Parameter.Value --output text) 9 | ipamApiGtw=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamapigtw" --query Parameter.Value --output text) 10 | ipamLambda=$(aws ssm get-parameter 
--name "/aft/account-request/custom-fields/ipamlambda" --query Parameter.Value --output text) 11 | echo $accountnumber 12 | echo "---------------------" 13 | echo "---------------------" 14 | echo "---------------------Change AWS_PROFILE=AFT-TARGET-admin---------------------" 15 | export AWS_PROFILE=aft-target 16 | echo "aws sts get-caller-identity" 17 | aws sts get-caller-identity 18 | echo "aws sts get-caller-identity" 19 | echo "---------------------" 20 | echo "---------------------" 21 | echo "---------------------Access folder: CD \API_Helpers---------------------" 22 | cd $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/ 23 | echo "---------------------Done!---------------------" 24 | echo "---------------------" 25 | echo "---------------------" 26 | echo "---------------------Get pathStore, CidrBlock and currentVPCID variables---------------------" 27 | pathStore="/$accountnumber/vpc/cidr" 28 | CidrBlock=`aws ssm get-parameter --name "/$accountnumber/vpc/cidr" --query Parameter.Value --output text` 29 | currentVPCID=`aws ec2 describe-vpcs --query "Vpcs[?CidrBlock=='$CidrBlock'].VpcId" --output text` 30 | echo "pathStore: $pathStore" 31 | echo "CidrBlock: $CidrBlock" 32 | echo "currentVPCID: $currentVPCID" 33 | echo "---------------------Change AWS_PROFILE=SHARED-management-admin---------------------" 34 | export AWS_PROFILE=aft-management 35 | echo "aws sts get-caller-identity" 36 | aws sts get-caller-identity 37 | echo "aws sts get-caller-identity" 38 | echo "---------------------Executing ./update_cidr.py---------------------" 39 | response=$(python3 ./update_cidr.py $currentVPCID $CidrBlock $regionCidr $ipamApiGtw $ipamLambda) 40 | echo "---------------------Done!---------------------" 41 | echo "---------------------" 42 | echo "---------------------" 43 | echo "---------------------print python response!---------------------" 44 | echo $response 45 | echo "---------------------Acessando conta AFT ---------------------" 46 | export AWS_PROFILE=aft-management 47 | aws sts get-caller-identity 48 | echo "---------------------Criando SSM Pameter Store Jinja via bash ---------------------" 49 | jinja_json="{\"environment\":\"hml\",\"workloadAccount\":\"$accountnumber\"}" 50 | aws ssm put-parameter --name "/hml/terraform-parameters/jinja" --value $jinja_json --type "String" --overwrite -------------------------------------------------------------------------------- /PIPELINE/api_helpers/application_script/terraform-deploy-hml.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "---------------------Get Child Account Number---------------------" 3 | export AWS_PROFILE=aft-target 4 | echo "aws sts get-caller-identity" 5 | aws sts get-caller-identity 6 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 7 | echo $accountnumber 8 | 9 | echo "---------------------Pegar SSM Paremeter Store JINJA de HML Da Conta AFT---------------------" 10 | export AWS_PROFILE=aft-management 11 | aws sts get-caller-identity 12 | jinja_parameters=`aws ssm get-parameter --name "/hml/terraform-parameters/jinja" --query Parameter.Value --output text` 13 | 14 | echo "---------------------Tratamento do JSON---------------------" 15 | export environment=`echo $jinja_parameters | jq -r '.environment'` 16 | export workloadAccount=`echo $jinja_parameters | jq -r '.workloadAccount'` 17 | echo "environment=$environment" 18 | echo "workloadAccount=$workloadAccount" 19 | echo "---------------------Jinja template apply na pasta 
environments/hml---------------------" 20 | cd $DEFAULT_PATH/$CUSTOMIZATION/pipeline-infra/environments/hml/ 21 | rm -rf hml.tfvars 22 | export AWS_PROFILE=aft-target 23 | echo "aws sts get-caller-identity" 24 | aws sts get-caller-identity 25 | projectName=`aws ssm get-parameter --name "/aft/account-request/custom-fields/project" --query Parameter.Value --output text` 26 | 27 | for f in *.jinja; do jinja2 $f -D pipelineAccount=$accountnumber -D provider_region=$CT_MGMT_REGION -D projectName=$projectName -D workloadAccount=$workloadAccount -D environment=$environment>> ./$(basename $f .jinja).tfvars; done 28 | cat hml.tfvars 29 | echo "---------------------CD to pipeline-infra root folder---------------------" 30 | cd $DEFAULT_PATH/$CUSTOMIZATION/pipeline-infra/ 31 | echo $PWD 32 | echo "---------------------Acessando conta Pipeline---------------------" 33 | export AWS_PROFILE=aft-target 34 | echo "aws sts get-caller-identity" 35 | aws sts get-caller-identity 36 | 37 | echo "---------------------Terraform apply---------------------" 38 | echo "---------------------Trying to terraform init the Pipeline-Infra @PipelineAccount to create pipeline @HmlAccount!---------------------" 39 | TFSTATE_BUCKET="backend-terraform-pipeline-$accountnumber" 40 | TFSTATE_KEY="pipeline/hml/terraform.tfstate" 41 | TFSTATE_REGION="$CT_MGMT_REGION" 42 | echo "TFSTATE_BUCKET=$TFSTATE_BUCKET" 43 | echo "TFSTATE_KEY=$TFSTATE_KEY" 44 | echo "TFSTATE_REGION=$TFSTATE_REGION" 45 | 46 | terraform init -reconfigure \ 47 | -backend-config="bucket=$TFSTATE_BUCKET" \ 48 | -backend-config="key=$TFSTATE_KEY" \ 49 | -backend-config="profile=aft-target" \ 50 | -backend-config="region=$TFSTATE_REGION" 51 | 52 | terraform plan -target="module.artifact" -target="module.codeBuild" -target="module.pipeline" -var-file=environments/hml/hml.tfvars -out state-hml.tfplan 53 | terraform apply "state-hml.tfplan" -------------------------------------------------------------------------------- /DEV/api_helpers/post-api-helpers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Executing Post-API Helpers" 4 | echo "---------------------" 5 | echo "---------------------" 6 | echo "---------------------Get Child Account Number---------------------" 7 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 8 | regionCidr=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/region" --query Parameter.Value --output text) 9 | ipamApiGtw=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamapigtw" --query Parameter.Value --output text) 10 | ipamLambda=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamlambda" --query Parameter.Value --output text) 11 | echo "accountnumber: $accountnumber" 12 | echo "---------------------" 13 | echo "---------------------" 14 | echo "---------------------Change AWS_PROFILE=AFT-TARGET-admin---------------------" 15 | export AWS_PROFILE=aft-target 16 | echo "aws sts get-caller-identity" 17 | aws sts get-caller-identity 18 | echo "---------------------" 19 | echo "---------------------" 20 | echo "---------------------Access folder: CD \API_Helpers---------------------" 21 | cd $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/ 22 | echo "---------------------Done!---------------------" 23 | echo "---------------------" 24 | echo "---------------------" 25 | echo "---------------------Get path_store, cidr_block and current_VPC_ID variables---------------------" 26 | 
path_store="/$accountnumber/vpc/cidr" 27 | cidr_block=`aws ssm get-parameter --name $path_store --query Parameter.Value --output text` 28 | current_VPC_ID=`aws ec2 describe-vpcs --query "Vpcs[?CidrBlock=='$cidr_block'].VpcId" --output text` 29 | echo "path_store: $path_store" 30 | echo "cidr_block: $cidr_block" 31 | echo "current_VPC_ID: $current_VPC_ID" 32 | 33 | echo "---------------------Change AWS_PROFILE=SHARED-management-admin---------------------" 34 | export AWS_PROFILE=aft-management 35 | echo "aws sts get-caller-identity" 36 | aws sts get-caller-identity 37 | echo "aws sts get-caller-identity" 38 | echo "---------------------Executing ./update_cidr.py---------------------" 39 | response=$(python3 ./update_cidr.py $current_VPC_ID $cidr_block $regionCidr $ipamApiGtw $ipamLambda) 40 | echo "---------------------Done!---------------------" 41 | echo "---------------------" 42 | echo "---------------------" 43 | echo "---------------------print python response!---------------------" 44 | echo $response 45 | 46 | echo "---------------------Accessing AFT Management Account---------------------" 47 | export AWS_PROFILE=aft-management 48 | aws sts get-caller-identity 49 | echo "---------------------Criando SSM Pameter Store Jinja via bash ---------------------" 50 | jinja_json="{\"environment\":\"dev\",\"workloadAccount\":\"$accountnumber\"}" 51 | aws ssm put-parameter --name "/dev/terraform-parameters/jinja" --value $jinja_json --type "String" --overwrite -------------------------------------------------------------------------------- /PIPELINE/api_helpers/application_script/terraform-deploy-prd.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "---------------------Get Child Account Number---------------------" 3 | export AWS_PROFILE=aft-target 4 | echo "aws sts get-caller-identity" 5 | aws sts get-caller-identity 6 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 7 | echo $accountnumber 8 | 9 | echo "---------------------Pegar SSM Paremeter Store JINJA de PRD Da Conta AFT---------------------" 10 | export AWS_PROFILE=aft-management 11 | aws sts get-caller-identity 12 | jinja_parameters=`aws ssm get-parameter --name "/prd/terraform-parameters/jinja" --query Parameter.Value --output text` 13 | 14 | echo "---------------------Tratamento do JSON---------------------" 15 | export environment=`echo $jinja_parameters | jq -r '.environment'` 16 | export workloadAccount=`echo $jinja_parameters | jq -r '.workloadAccount'` 17 | echo "environment=$environment" 18 | echo "workloadAccount=$workloadAccount" 19 | echo "---------------------Jinja template apply na pasta environments/prd---------------------" 20 | cd $DEFAULT_PATH/$CUSTOMIZATION/pipeline-infra/environments/prd/ 21 | rm -rf prd.tfvars 22 | export AWS_PROFILE=aft-target 23 | echo "aws sts get-caller-identity" 24 | aws sts get-caller-identity 25 | projectName=`aws ssm get-parameter --name "/aft/account-request/custom-fields/project" --query Parameter.Value --output text` 26 | 27 | for f in *.jinja; do jinja2 $f -D pipelineAccount=$accountnumber -D provider_region=$CT_MGMT_REGION -D workloadAccount=$workloadAccount -D projectName=$projectName -D environment=$environment>> ./$(basename $f .jinja).tfvars; done 28 | cat prd.tfvars 29 | echo "---------------------CD to pipeline-infra root folder---------------------" 30 | cd $DEFAULT_PATH/$CUSTOMIZATION/pipeline-infra/ 31 | echo $PWD 32 | echo "---------------------Acessando conta 
Pipeline---------------------" 33 | export AWS_PROFILE=aft-target 34 | echo "aws sts get-caller-identity" 35 | aws sts get-caller-identity 36 | 37 | echo "---------------------Terraform apply---------------------" 38 | echo "---------------------Trying to terraform init the Pipeline-Infra @PipelineAccount to create pipeline @prdAccount!---------------------" 39 | TFSTATE_BUCKET="backend-terraform-pipeline-$accountnumber" 40 | TFSTATE_KEY="pipeline/prd/terraform.tfstate" 41 | TFSTATE_REGION="$CT_MGMT_REGION" 42 | echo "TFSTATE_BUCKET=$TFSTATE_BUCKET" 43 | echo "TFSTATE_KEY=$TFSTATE_KEY" 44 | echo "TFSTATE_REGION=$TFSTATE_REGION" 45 | 46 | terraform init -reconfigure \ 47 | -backend-config="bucket=$TFSTATE_BUCKET" \ 48 | -backend-config="key=$TFSTATE_KEY" \ 49 | -backend-config="profile=aft-target" \ 50 | -backend-config="region=$TFSTATE_REGION" 51 | 52 | terraform plan -target="module.artifact" -target="module.codeBuild" -target="module.pipeline" -var-file=environments/prd/prd.tfvars -out state-prd.tfplan 53 | terraform apply "state-prd.tfplan" 54 | 55 | -------------------------------------------------------------------------------- /PRODUCTION/api_helpers/post-api-helpers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Executing Post-API Helpers" 4 | echo "---------------------" 5 | echo "---------------------" 6 | echo "---------------------Get Child Account Number---------------------" 7 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 8 | regionCidr=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/region" --query Parameter.Value --output text) 9 | ipamApiGtw=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamapigtw" --query Parameter.Value --output text) 10 | ipamLambda=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamlambda" --query Parameter.Value --output text) 11 | echo $accountnumber 12 | echo "---------------------" 13 | echo "---------------------" 14 | echo "---------------------Change AWS_PROFILE=AFT-TARGET-admin---------------------" 15 | export AWS_PROFILE=aft-target 16 | echo "aws sts get-caller-identity" 17 | aws sts get-caller-identity 18 | echo "aws sts get-caller-identity" 19 | echo "---------------------" 20 | echo "---------------------" 21 | echo "---------------------Access folder: CD \API_Helpers---------------------" 22 | cd $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/ 23 | echo "---------------------Done!---------------------" 24 | echo "---------------------" 25 | echo "---------------------" 26 | echo "---------------------Get path_store, cidr_block and current_VPC_ID variables---------------------" 27 | path_store="/$accountnumber/vpc/cidr" 28 | cidr_block=`aws ssm get-parameter --name "/$accountnumber/vpc/cidr" --query Parameter.Value --output text` 29 | current_VPC_ID=`aws ec2 describe-vpcs --query "Vpcs[?CidrBlock=='$cidr_block'].VpcId" --output text` 30 | echo "path_store: $path_store" 31 | echo "cidr_block: $cidr_block" 32 | echo "current_VPC_ID: $current_VPC_ID" 33 | echo "---------------------Change AWS_PROFILE=SHARED-management-admin---------------------" 34 | export AWS_PROFILE=aft-management 35 | echo "aws sts get-caller-identity" 36 | aws sts get-caller-identity 37 | echo "aws sts get-caller-identity" 38 | echo "---------------------Executing ./update_cidr.py---------------------" 39 | response=$(python3 ./update_cidr.py $current_VPC_ID $cidr_block $regionCidr $ipamApiGtw $ipamLambda) 40 | echo 
"---------------------Done!---------------------" 41 | echo "---------------------" 42 | echo "---------------------" 43 | echo "---------------------print python response!---------------------" 44 | echo $response 45 | echo "---------------------Acessando conta AFT ---------------------" 46 | export AWS_PROFILE=aft-management 47 | aws sts get-caller-identity 48 | echo "---------------------Criando SSM Pameter Store Jinja via bash ---------------------" 49 | jinja_json="{\"environment\":\"prd\",\"workloadAccount\":\"$accountnumber\"}" 50 | aws ssm put-parameter --name "/prd/terraform-parameters/jinja" --value $jinja_json --type "String" --overwrite -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/pipeline/codePipeline.tf: -------------------------------------------------------------------------------- 1 | resource "aws_codepipeline" "pipeline" { 2 | name = "pipeline-iac-${var.application_name}-${var.environment}" 3 | role_arn = aws_iam_role.codepipeline.arn 4 | 5 | artifact_store { 6 | location = var.artifact_bucket_name 7 | type = "S3" 8 | 9 | encryption_key { 10 | id = var.artifact_kms 11 | type = "KMS" 12 | } 13 | } 14 | 15 | stage { 16 | name = "Source" 17 | 18 | action { 19 | name = "Source" 20 | category = "Source" 21 | owner = "AWS" 22 | provider = "CodeCommit" 23 | version = "1" 24 | output_artifacts = ["source_artifact"] 25 | configuration = { 26 | RepositoryName = var.source_repository_name 27 | BranchName = var.source_branch_name 28 | PollForSourceChanges = true 29 | OutputArtifactFormat = "CODEBUILD_CLONE_REF" 30 | } 31 | role_arn = aws_iam_role.codepipeline_action_source.arn 32 | } 33 | } 34 | 35 | stage { 36 | name = "CheckIAC-and-PlanFMT" 37 | 38 | action { 39 | name = "CheckIAC" 40 | category = "Build" 41 | owner = "AWS" 42 | provider = "CodeBuild" 43 | input_artifacts = ["source_artifact"] 44 | version = "1" 45 | configuration = { 46 | ProjectName = var.check_project 47 | } 48 | role_arn = aws_iam_role.codepipeline_action_plan_fmt.arn 49 | } 50 | 51 | action { 52 | name = "TerraformPlanAndFmt" 53 | category = "Build" 54 | owner = "AWS" 55 | provider = "CodeBuild" 56 | input_artifacts = ["source_artifact"] 57 | output_artifacts = ["plan"] 58 | version = "1" 59 | configuration = { 60 | ProjectName = var.plan_project 61 | } 62 | role_arn = aws_iam_role.codepipeline_action_plan_fmt.arn 63 | } 64 | 65 | } 66 | 67 | 68 | stage { 69 | name = "Approval" 70 | 71 | action { 72 | name = "Approval" 73 | category = "Approval" 74 | owner = "AWS" 75 | provider = "Manual" 76 | version = "1" 77 | 78 | configuration = { 79 | NotificationArn = aws_sns_topic.approval.arn 80 | CustomData = var.approve_comment 81 | } 82 | role_arn = aws_iam_role.codepipeline_action_apply.arn 83 | } 84 | } 85 | 86 | stage { 87 | name = "Apply" 88 | 89 | action { 90 | name = "TerraformApply" 91 | category = "Build" 92 | owner = "AWS" 93 | provider = "CodeBuild" 94 | input_artifacts = ["plan"] 95 | version = "1" 96 | 97 | configuration = { 98 | ProjectName = var.apply_project 99 | } 100 | role_arn = aws_iam_role.codepipeline_action_apply.arn 101 | } 102 | } 103 | } -------------------------------------------------------------------------------- /PIPELINE/api_helpers/application_script/terraform-deploy-dev.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "---------------------Get Child Account Number---------------------" 3 | export AWS_PROFILE=aft-target 4 | echo "aws sts 
get-caller-identity" 5 | aws sts get-caller-identity 6 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 7 | echo $accountnumber 8 | 9 | echo "---------------------Extrai SSM Paremeter Store JINJA de DEV Da Conta AFT---------------------" 10 | export AWS_PROFILE=aft-management 11 | aws sts get-caller-identity 12 | jinja_parameters=`aws ssm get-parameter --name "/dev/terraform-parameters/jinja" --query Parameter.Value --output text` 13 | 14 | echo "---------------------Tratamento do JSON---------------------" 15 | export environment=`echo $jinja_parameters | jq -r '.environment'` 16 | export workloadAccount=`echo $jinja_parameters | jq -r '.workloadAccount'` 17 | echo "environment=$environment" 18 | echo "workloadAccount=$workloadAccount" 19 | echo "---------------------Jinja template apply na pasta environments/dev---------------------" 20 | cd $DEFAULT_PATH/$CUSTOMIZATION/pipeline-infra/environments/dev/ 21 | 22 | export AWS_PROFILE=aft-target 23 | echo "aws sts get-caller-identity" 24 | aws sts get-caller-identity 25 | projectName=`aws ssm get-parameter --name "/aft/account-request/custom-fields/project" --query Parameter.Value --output text` 26 | rm -rf dev.tfvars 27 | for f in *.jinja; do jinja2 $f -D pipelineAccount=$accountnumber -D provider_region=$CT_MGMT_REGION -D workloadAccount=$workloadAccount -D projectName=$projectName -D environment=$environment>> ./$(basename $f .jinja).tfvars; done 28 | cat dev.tfvars 29 | echo "---------------------CD to pipeline-infra root folder---------------------" 30 | cd $DEFAULT_PATH/$CUSTOMIZATION/pipeline-infra/ 31 | echo $PWD 32 | echo "---------------------Acessando conta Pipeline---------------------" 33 | export AWS_PROFILE=aft-target 34 | echo "aws sts get-caller-identity" 35 | aws sts get-caller-identity 36 | 37 | echo "---------------------Terraform init---------------------" 38 | echo "---------------------Trying to terraform init the Pipeline-Infra @PipelineAccount to create pipeline @DevAccount!---------------------" 39 | TFSTATE_BUCKET="backend-terraform-pipeline-$accountnumber" 40 | TFSTATE_KEY="pipeline/dev/terraform.tfstate" 41 | TFSTATE_REGION="$CT_MGMT_REGION" 42 | echo "TFSTATE_BUCKET=$TFSTATE_BUCKET" 43 | echo "TFSTATE_KEY=$TFSTATE_KEY" 44 | echo "TFSTATE_REGION=$TFSTATE_REGION" 45 | 46 | terraform init -reconfigure \ 47 | -backend-config="bucket=$TFSTATE_BUCKET" \ 48 | -backend-config="key=$TFSTATE_KEY" \ 49 | -backend-config="profile=aft-target" \ 50 | -backend-config="region=$TFSTATE_REGION" 51 | 52 | echo "---------------------Terraform apply for module.state---------------------" 53 | terraform plan -target="module.state" -var-file=environments/dev/dev.tfvars -out state-dev.tfplan 54 | terraform apply "state-dev.tfplan" 55 | 56 | echo "---------------------Terraform apply for module.artifact---------------------" 57 | terraform plan -target="module.artifact" -target="module.codeBuild" -target="module.pipeline" -var-file=environments/dev/dev.tfvars -out state-dev.tfplan 58 | terraform apply "state-dev.tfplan" -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 
5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 
60 | -------------------------------------------------------------------------------- /PRODUCTION/terraform/flow-logs.tf: -------------------------------------------------------------------------------- 1 | resource "aws_kms_key" "flow_log" { 2 | description = "This key is used to encrypt bucket objects" 3 | deletion_window_in_days = 10 4 | policy = < dev.txt 74 | aftnumber=$(aws sts get-caller-identity --query Account --output text) 75 | aws s3 cp dev.txt s3://aft-backend-$aftnumber-primary-region/account-number-pipeline/dev.txt -------------------------------------------------------------------------------- /DEV/terraform/vpc.tf: -------------------------------------------------------------------------------- 1 | data "aws_region" "current" {} 2 | locals { 3 | region = data.aws_region.current.name 4 | cidr = data.aws_ssm_parameter.cidr.value 5 | cidr_subnets = nonsensitive(cidrsubnets(local.cidr, 4, 4, 4, 4, 4, 4)) 6 | azs = ["${local.region}a", "${local.region}b", "${local.region}c"] 7 | } 8 | 9 | data "aws_caller_identity" "current" {} 10 | 11 | data "aws_ssm_parameter" "cidr" { 12 | name = "/${data.aws_caller_identity.current.account_id}/vpc/cidr" 13 | } 14 | 15 | data "aws_ssm_parameter" "project" { 16 | name = "/aft/account-request/custom-fields/project" 17 | } 18 | 19 | resource "aws_vpc" "vpc" { 20 | #checkov:skip=CKV2_AWS_11: No need Dev environment. 21 | #checkov:skip=CKV2_AWS_12: Already done. 22 | cidr_block = local.cidr 23 | tags = { 24 | Name = "${data.aws_ssm_parameter.project.value}-DEV" 25 | } 26 | } 27 | 28 | resource "aws_default_security_group" "default" { 29 | vpc_id = aws_vpc.vpc.id 30 | 31 | ingress { 32 | protocol = "-1" 33 | self = true 34 | from_port = 0 35 | to_port = 0 36 | } 37 | 38 | egress { 39 | from_port = 0 40 | to_port = 0 41 | protocol = "-1" 42 | cidr_blocks = ["0.0.0.0/0"] 43 | } 44 | } 45 | 46 | resource "aws_internet_gateway" "gw" { 47 | vpc_id = aws_vpc.vpc.id 48 | 49 | tags = { 50 | Name = "${data.aws_ssm_parameter.project.value}-DEV-public-subnet" 51 | } 52 | } 53 | 54 | resource "aws_eip" "eip" { 55 | #checkov:skip=CKV2_AWS_19: EIP attached to Nat Gateway. 
56 | domain = "vpc" 57 | } 58 | 59 | 60 | resource "aws_subnet" "public" { 61 | count = "${length(local.azs)}" 62 | 63 | vpc_id = aws_vpc.vpc.id 64 | availability_zone = local.azs[count.index] 65 | 66 | cidr_block = local.cidr_subnets[count.index+3] 67 | 68 | tags = { 69 | Name = "${data.aws_ssm_parameter.project.value}-DEV-public-subnet" 70 | } 71 | depends_on = [ aws_internet_gateway.gw ] 72 | 73 | } 74 | 75 | resource "aws_nat_gateway" "ngw" { 76 | allocation_id = aws_eip.eip.id 77 | subnet_id = aws_subnet.public[0].id 78 | 79 | tags = { 80 | Name = "${data.aws_ssm_parameter.project.value}-DEV-gw-NAT" 81 | } 82 | 83 | depends_on = [ aws_internet_gateway.gw ] 84 | } 85 | 86 | resource "aws_subnet" "private" { 87 | count = "${length(local.azs)}" 88 | vpc_id = aws_vpc.vpc.id 89 | availability_zone = local.azs[count.index] 90 | 91 | cidr_block = local.cidr_subnets[count.index] 92 | 93 | tags = { 94 | Name = "${data.aws_ssm_parameter.project.value}-DEV-private-subnet" 95 | } 96 | 97 | depends_on = [ aws_nat_gateway.ngw ] 98 | } 99 | 100 | resource "aws_route_table" "main-public" { 101 | vpc_id = aws_vpc.vpc.id 102 | route { 103 | cidr_block = "0.0.0.0/0" 104 | gateway_id = aws_internet_gateway.gw.id 105 | } 106 | tags = { 107 | Name = "${data.aws_ssm_parameter.project.value}-DEV-public-route" 108 | } 109 | 110 | depends_on = [ aws_internet_gateway.gw ] 111 | 112 | } 113 | 114 | resource "aws_route_table" "main-private" { 115 | vpc_id = aws_vpc.vpc.id 116 | route { 117 | cidr_block = "0.0.0.0/0" 118 | gateway_id = aws_nat_gateway.ngw.id 119 | } 120 | tags = { 121 | Name = "${data.aws_ssm_parameter.project.value}-DEV-private-route" 122 | } 123 | depends_on = [ aws_nat_gateway.ngw ] 124 | 125 | } 126 | 127 | resource "aws_route_table_association" "public" { 128 | count = "${length(local.azs)}" 129 | subnet_id = "${element(aws_subnet.public.*.id, count.index)}" 130 | route_table_id = aws_route_table.main-public.id 131 | } 132 | 133 | resource "aws_route_table_association" "private" { 134 | count = "${length(local.azs)}" 135 | subnet_id = "${element(aws_subnet.private.*.id, count.index)}" 136 | route_table_id = aws_route_table.main-private.id 137 | } -------------------------------------------------------------------------------- /HML/terraform/vpc.tf: -------------------------------------------------------------------------------- 1 | data "aws_region" "current" {} 2 | locals { 3 | region = data.aws_region.current.name 4 | cidr = data.aws_ssm_parameter.cidr.value 5 | cidr_subnets = nonsensitive(cidrsubnets(local.cidr, 4, 4, 4, 4, 4, 4)) 6 | azs = ["${local.region}a", "${local.region}b", "${local.region}c"] 7 | } 8 | 9 | data "aws_caller_identity" "current" {} 10 | 11 | data "aws_ssm_parameter" "cidr" { 12 | name = "/${data.aws_caller_identity.current.account_id}/vpc/cidr" 13 | } 14 | 15 | data "aws_ssm_parameter" "project" { 16 | name = "/aft/account-request/custom-fields/project" 17 | } 18 | 19 | resource "aws_default_security_group" "default" { 20 | vpc_id = aws_vpc.vpc.id 21 | 22 | ingress { 23 | protocol = "-1" 24 | self = true 25 | from_port = 0 26 | to_port = 0 27 | } 28 | 29 | egress { 30 | from_port = 0 31 | to_port = 0 32 | protocol = "-1" 33 | cidr_blocks = ["0.0.0.0/0"] 34 | } 35 | } 36 | 37 | resource "aws_vpc" "vpc" { 38 | #checkov:skip=CKV2_AWS_11: No need HML environment. 39 | #checkov:skip=CKV2_AWS_12: Already done. 
40 | cidr_block = local.cidr 41 | tags = { 42 | Name = "${data.aws_ssm_parameter.project.value}-HML" 43 | } 44 | } 45 | 46 | resource "aws_internet_gateway" "gw" { 47 | vpc_id = aws_vpc.vpc.id 48 | 49 | tags = { 50 | Name = "${data.aws_ssm_parameter.project.value}-HML-public-subnet" 51 | } 52 | } 53 | 54 | resource "aws_eip" "eip" { 55 | #checkov:skip=CKV2_AWS_19: EIP attached to Nat Gateway. 56 | domain = "vpc" 57 | } 58 | 59 | 60 | resource "aws_subnet" "public" { 61 | count = "${length(local.azs)}" 62 | 63 | vpc_id = aws_vpc.vpc.id 64 | availability_zone = local.azs[count.index] 65 | 66 | cidr_block = local.cidr_subnets[count.index+3] 67 | 68 | tags = { 69 | Name = "${data.aws_ssm_parameter.project.value}-HML-public-subnet" 70 | } 71 | depends_on = [ aws_internet_gateway.gw ] 72 | 73 | } 74 | 75 | resource "aws_nat_gateway" "ngw" { 76 | allocation_id = aws_eip.eip.id 77 | subnet_id = aws_subnet.public[0].id 78 | 79 | tags = { 80 | Name = "${data.aws_ssm_parameter.project.value}-HML-gw-NAT" 81 | } 82 | 83 | depends_on = [ aws_internet_gateway.gw ] 84 | } 85 | 86 | resource "aws_subnet" "private" { 87 | count = "${length(local.azs)}" 88 | vpc_id = aws_vpc.vpc.id 89 | availability_zone = local.azs[count.index] 90 | 91 | cidr_block = local.cidr_subnets[count.index] 92 | 93 | tags = { 94 | Name = "${data.aws_ssm_parameter.project.value}-HML-private-subnet" 95 | } 96 | 97 | depends_on = [ aws_nat_gateway.ngw ] 98 | } 99 | 100 | resource "aws_route_table" "main-public" { 101 | vpc_id = aws_vpc.vpc.id 102 | route { 103 | cidr_block = "0.0.0.0/0" 104 | gateway_id = aws_internet_gateway.gw.id 105 | } 106 | tags = { 107 | Name = "${data.aws_ssm_parameter.project.value}-HML-public-route" 108 | } 109 | 110 | depends_on = [ aws_internet_gateway.gw ] 111 | 112 | } 113 | 114 | resource "aws_route_table" "main-private" { 115 | vpc_id = aws_vpc.vpc.id 116 | route { 117 | cidr_block = "0.0.0.0/0" 118 | gateway_id = aws_nat_gateway.ngw.id 119 | } 120 | tags = { 121 | Name = "${data.aws_ssm_parameter.project.value}-HML-private-route" 122 | } 123 | depends_on = [ aws_nat_gateway.ngw ] 124 | 125 | } 126 | 127 | resource "aws_route_table_association" "public" { 128 | count = "${length(local.azs)}" 129 | subnet_id = "${element(aws_subnet.public.*.id, count.index)}" 130 | route_table_id = aws_route_table.main-public.id 131 | } 132 | 133 | resource "aws_route_table_association" "private" { 134 | count = "${length(local.azs)}" 135 | subnet_id = "${element(aws_subnet.private.*.id, count.index)}" 136 | route_table_id = aws_route_table.main-private.id 137 | } -------------------------------------------------------------------------------- /HML/api_helpers/pre-api-helpers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Executing Pre-API Helpers" 4 | echo "---------------------" 5 | echo "---------------------" 6 | echo "---------------------Get Child Account Number---------------------" 7 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 8 | accountName=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/project" --query Parameter.Value --output text) 9 | regionCidr=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/region" --query Parameter.Value --output text) 10 | ipamApiGtw=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamapigtw" --query Parameter.Value --output text) 11 | ipamLambda=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamlambda" --query 
Parameter.Value --output text) 12 | echo "accountnumber: $accountnumber" 13 | has_parameter_store=`aws ssm get-parameter --name "/$accountnumber/vpc/cidr" --query Parameter.Value --output text` 14 | echo "---------------------Echo Parameter Store---------------------" 15 | echo $has_parameter_store 16 | echo "---------------------" 17 | echo "---------------------" 18 | echo "---------------------Change AWS_PROFILE=SHARED-management-admin---------------------" 19 | export AWS_PROFILE=aft-management 20 | #export AWS_PROFILE=ct-management 21 | echo "aws sts get-caller-identity" 22 | aws sts get-caller-identity 23 | echo "---------------------" 24 | echo "---------------------" 25 | echo "---------------------Access folder: CD \API_Helpers---------------------" 26 | cd $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/ 27 | echo "---------------------Done!---------------------" 28 | echo "---------------------" 29 | echo "---------------------" 30 | 31 | if [[ -z "$has_parameter_store" ]]; then 32 | echo "---------------------Executing ./get_cidr.py---------------------" 33 | cidr=$(python3 ./get_cidr.py $accountnumber $accountName $regionCidr $ipamApiGtw $ipamLambda) 34 | echo "---------------------Done!---------------------" 35 | echo "---------------------" 36 | echo "---------------------" 37 | echo "---------------------Show CIDR before convertion---------------------" 38 | echo $cidr 39 | echo "---------------------" 40 | echo "---------------------" 41 | echo "---------------------Convert CIDR string---------------------" 42 | IFS='""' 43 | read -ra ARR <<< "$cidr" 44 | cidr="${ARR[3]}" 45 | echo "---------------------Done!---------------------" 46 | echo "---------------------" 47 | echo "---------------------" 48 | echo "---------------------Show CIDR after convertion---------------------" 49 | echo $cidr 50 | echo "---------------------" 51 | echo "---------------------" 52 | echo "---------------------" 53 | echo "---------------------" 54 | echo "---------------------Change AWS_PROFILE=AFT-TARGET-admin---------------------" 55 | export AWS_PROFILE=aft-target 56 | echo "aws sts get-caller-identity" 57 | aws sts get-caller-identity 58 | echo "---------------------" 59 | echo "---------------------" 60 | echo "---------------------Creating CIDR parameter store---------------------" 61 | aws ssm put-parameter --name "/$accountnumber/vpc/cidr" --value "$cidr" --type "String" --overwrite 62 | echo "---------------------Done!---------------------" 63 | echo "---------------------" 64 | echo "---------------------" 65 | else 66 | echo "SSM parameter store for CIDR already exists, no need for a new creation." 
67 | fi 68 | 69 | echo "---------------------Change AWS_PROFILE=AFT-TARGET-admin---------------------" 70 | export AWS_PROFILE=aft-management 71 | echo "---------------------aws sts get-caller-identity---------------------" 72 | aws sts get-caller-identity 73 | echo $accountnumber > hml.txt 74 | aftnumber=$(aws sts get-caller-identity --query Account --output text) 75 | aws s3 cp hml.txt s3://aft-backend-$aftnumber-primary-region/account-number-pipeline/hml.txt -------------------------------------------------------------------------------- /PRODUCTION/api_helpers/pre-api-helpers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Executing Pre-API Helpers" 4 | echo "---------------------" 5 | echo "---------------------" 6 | echo "---------------------Get Child Account Number---------------------" 7 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 8 | accountName=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/project" --query Parameter.Value --output text) 9 | regionCidr=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/region" --query Parameter.Value --output text) 10 | ipamApiGtw=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamapigtw" --query Parameter.Value --output text) 11 | ipamLambda=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/ipamlambda" --query Parameter.Value --output text) 12 | echo "accountnumber: $accountnumber" 13 | has_parameter_store=`aws ssm get-parameter --name "/$accountnumber/vpc/cidr" --query Parameter.Value --output text` 14 | echo "---------------------Echo Parameter Store---------------------" 15 | echo $has_parameter_store 16 | echo "---------------------" 17 | echo "---------------------" 18 | echo "---------------------Change AWS_PROFILE=SHARED-management-admin---------------------" 19 | export AWS_PROFILE=aft-management 20 | #export AWS_PROFILE=ct-management 21 | echo "aws sts get-caller-identity" 22 | aws sts get-caller-identity 23 | echo "---------------------" 24 | echo "---------------------" 25 | echo "---------------------Access folder: CD \API_Helpers---------------------" 26 | cd $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/ 27 | echo "---------------------Done!---------------------" 28 | echo "---------------------" 29 | echo "---------------------" 30 | 31 | if [[ -z "$has_parameter_store" ]]; then 32 | echo "---------------------Executing ./get_cidr.py---------------------" 33 | cidr=$(python3 ./get_cidr.py $accountnumber $accountName $regionCidr $ipamApiGtw $ipamLambda) 34 | echo "---------------------Done!---------------------" 35 | echo "---------------------" 36 | echo "---------------------" 37 | echo "---------------------Show CIDR before convertion---------------------" 38 | echo $cidr 39 | echo "---------------------" 40 | echo "---------------------" 41 | echo "---------------------Convert CIDR string---------------------" 42 | IFS='""' 43 | read -ra ARR <<< "$cidr" 44 | cidr="${ARR[3]}" 45 | echo "---------------------Done!---------------------" 46 | echo "---------------------" 47 | echo "---------------------" 48 | echo "---------------------Show CIDR after convertion---------------------" 49 | echo $cidr 50 | echo "---------------------" 51 | echo "---------------------" 52 | echo "---------------------" 53 | echo "---------------------" 54 | echo "---------------------Change AWS_PROFILE=AFT-TARGET-admin---------------------" 55 | export AWS_PROFILE=aft-target 56 | echo 
"aws sts get-caller-identity" 57 | aws sts get-caller-identity 58 | echo "aws sts get-caller-identity" 59 | echo "---------------------" 60 | echo "---------------------" 61 | echo "---------------------Creating CIDR parameter store---------------------" 62 | aws ssm put-parameter --name "/$accountnumber/vpc/cidr" --value "$cidr" --type "String" --overwrite 63 | echo "---------------------Done!---------------------" 64 | echo "---------------------" 65 | echo "---------------------" 66 | else 67 | echo "SSM parameter store for CIDR already exists, no need for a new creation." 68 | fi 69 | 70 | echo "---------------------Change AWS_PROFILE=AFT-TARGET-admin---------------------" 71 | export AWS_PROFILE=aft-management 72 | echo "---------------------aws sts get-caller-identity---------------------" 73 | aws sts get-caller-identity 74 | echo $accountnumber > prd.txt 75 | aftnumber=$(aws sts get-caller-identity --query Account --output text) 76 | aws s3 cp prd.txt s3://aft-backend-$aftnumber-primary-region/account-number-pipeline/prd.txt -------------------------------------------------------------------------------- /PRODUCTION/terraform/vpc.tf: -------------------------------------------------------------------------------- 1 | data "aws_region" "current" {} 2 | locals { 3 | region = data.aws_region.current.name 4 | cidr = data.aws_ssm_parameter.cidr.value 5 | cidr_subnets = nonsensitive(cidrsubnets(local.cidr, 4, 4, 4, 4, 4, 4)) 6 | azs = ["${local.region}a", "${local.region}b", "${local.region}c"] 7 | } 8 | 9 | data "aws_caller_identity" "current" {} 10 | 11 | data "aws_ssm_parameter" "cidr" { 12 | name = "/${data.aws_caller_identity.current.account_id}/vpc/cidr" 13 | } 14 | 15 | data "aws_ssm_parameter" "project" { 16 | name = "/aft/account-request/custom-fields/project" 17 | } 18 | 19 | resource "aws_default_security_group" "default" { 20 | vpc_id = aws_vpc.vpc.id 21 | 22 | ingress { 23 | protocol = "-1" 24 | self = true 25 | from_port = 0 26 | to_port = 0 27 | } 28 | 29 | egress { 30 | from_port = 0 31 | to_port = 0 32 | protocol = "-1" 33 | cidr_blocks = ["0.0.0.0/0"] 34 | } 35 | } 36 | 37 | resource "aws_vpc" "vpc" { 38 | #checkov:skip=CKV2_AWS_12: Already done. 39 | cidr_block = local.cidr 40 | tags = { 41 | Name = "${data.aws_ssm_parameter.project.value}-PROD" 42 | } 43 | } 44 | 45 | resource "aws_internet_gateway" "gw" { 46 | vpc_id = aws_vpc.vpc.id 47 | 48 | tags = { 49 | Name = "${data.aws_ssm_parameter.project.value}-PROD-public-subnet" 50 | } 51 | } 52 | 53 | resource "aws_eip" "eip" { 54 | #checkov:skip=CKV2_AWS_19: EIP attached to Nat Gateway. 
55 | count = "${length(local.azs)}" 56 | domain = "vpc" 57 | } 58 | 59 | 60 | resource "aws_subnet" "public" { 61 | count = "${length(local.azs)}" 62 | 63 | vpc_id = aws_vpc.vpc.id 64 | availability_zone = local.azs[count.index] 65 | 66 | cidr_block = local.cidr_subnets[count.index+3] 67 | 68 | tags = { 69 | Name = "${data.aws_ssm_parameter.project.value}-PROD-public-subnet" 70 | } 71 | depends_on = [ aws_internet_gateway.gw ] 72 | 73 | } 74 | 75 | resource "aws_nat_gateway" "ngw" { 76 | count = "${length(local.azs)}" 77 | allocation_id = aws_eip.eip[count.index].id 78 | subnet_id = aws_subnet.public[count.index].id 79 | 80 | tags = { 81 | Name = "${data.aws_ssm_parameter.project.value}-PROD-gw-NAT-${local.azs[count.index]}" 82 | } 83 | 84 | depends_on = [ aws_internet_gateway.gw ] 85 | } 86 | 87 | resource "aws_subnet" "private" { 88 | count = "${length(local.azs)}" 89 | vpc_id = aws_vpc.vpc.id 90 | availability_zone = local.azs[count.index] 91 | 92 | cidr_block = local.cidr_subnets[count.index] 93 | 94 | tags = { 95 | Name = "${data.aws_ssm_parameter.project.value}-PROD-private-subnet" 96 | } 97 | 98 | depends_on = [ aws_nat_gateway.ngw ] 99 | } 100 | 101 | resource "aws_route_table" "main-public" { 102 | vpc_id = aws_vpc.vpc.id 103 | route { 104 | cidr_block = "0.0.0.0/0" 105 | gateway_id = aws_internet_gateway.gw.id 106 | } 107 | tags = { 108 | Name = "${data.aws_ssm_parameter.project.value}-PROD-public-route" 109 | } 110 | 111 | depends_on = [ aws_internet_gateway.gw ] 112 | 113 | } 114 | 115 | resource "aws_route_table" "main-private" { 116 | count = "${length(local.azs)}" 117 | vpc_id = aws_vpc.vpc.id 118 | route { 119 | cidr_block = "0.0.0.0/0" 120 | gateway_id = aws_nat_gateway.ngw[count.index].id 121 | } 122 | tags = { 123 | Name = "${data.aws_ssm_parameter.project.value}-PROD-private-route-${local.azs[count.index]}" 124 | } 125 | depends_on = [ aws_nat_gateway.ngw ] 126 | 127 | } 128 | 129 | resource "aws_route_table_association" "public" { 130 | count = "${length(local.azs)}" 131 | subnet_id = "${element(aws_subnet.public.*.id, count.index)}" 132 | route_table_id = aws_route_table.main-public.id 133 | } 134 | 135 | resource "aws_route_table_association" "private" { 136 | count = "${length(local.azs)}" 137 | subnet_id = "${element(aws_subnet.private.*.id, count.index)}" 138 | route_table_id = aws_route_table.main-private[count.index].id 139 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/README.md: -------------------------------------------------------------------------------- 1 | # pipeline-infra 2 | 3 | ## Getting started 4 | 5 | Project responsible for creating the CI/CD framework for Infrastructure as Code, using the Terraform language. In addition to integration and automatic deployment, security validations (static analysis) are performed using the tool [checkov](https://www.checkov.io/). 6 | 7 | ## Instructions 8 | 9 | Run the code once for each environment. 10 | For example, if you have development, homolog and production environments, the code must be executed 3 times. 11 | 12 | Note: the state module should be executed only once (in the first environment). 
13 | 14 | ## Structure 15 | 16 | The solution is composed of the following modules: 17 | 18 | - artifact 19 | - codebuild 20 | - pipeline 21 | - state 22 | 23 | The templates are organized in the following directories: 24 | 25 | - **modules**: Contains the source code for each module 26 | - ****: Cloud provider specific directory 27 | - **artifact**: Source code for the artifact module 28 | - Function: Creates all resources to store artifacts. 29 | - **codebuild**: Source code for the codebuild module 30 | - Function: Creates all CodeBuild resources necessary for build and deploy. 31 | - **pipeline**: Source code for the pipeline module 32 | - Function: Creates the pipeline orchestration. 33 | - **state**: Source code for the state module 34 | - Function: Creates the resources that store the Terraform state information 35 | 36 | ## Fill variables 37 | 38 | The [global.tfvars](./global.tfvars) file should contain the values for the variables needed to execute the code: 39 | 40 | - **Global Variables** 41 | - aws_region: Region to be used 42 | - environment: Environment name ("dev", "hml", "prd") 43 | - application_name: Application name 44 | - workloadAccess: Profile that gives access to the workload account that will be deployed 45 | - **codeBuild** 46 | - terraform_path: Path where the terraform code is located 47 | - workloadAccount: Workload Account ID 48 | - **pipeline** 49 | - source_branch_name: Name of the branch the pipeline will listen to 50 | - source_repository_arn: ARN where the terraform code is located 51 | - source_repository_name: Repository name where the terraform code is located 52 | - email: pipeline approver email 53 | - approve_comment: comment that will go in the approval email 54 | 55 | --- 56 | 57 | ## Deploy 58 | 59 | Edit the [global.tfvars](./global.tfvars) file as per the guidelines above. 60 | 61 | ⚠️ 62 | 63 | > If this is the first environment to be deployed in the pipeline, run the following commands: 64 | 65 | ``` 66 | terraform init 67 | terraform plan -target="module.state" -var-file="global.tfvars" 68 | terraform apply -target="module.state" -var-file="global.tfvars" 69 | 70 | ``` 71 | 72 | ⭐ 73 | 74 | > For ALL environments, run the following commands: 75 | 76 | ``` 77 | terraform init 78 | terraform plan -target="module.artifact" -target="module.codeBuild" -target="module.pipeline" -var-file="global.tfvars" 79 | terraform apply -target="module.artifact" -target="module.codeBuild" -target="module.pipeline" -var-file="global.tfvars" 80 | 81 | ``` 82 | 83 | --- 84 | 85 | ## Using the pipeline 86 | 87 | ⭐ 88 | 89 | > After pipeline creation 90 | > In the project code (application infrastructure), edit the backend configuration to use the S3 bucket and DynamoDB table created to control the Terraform state. 
Example: 91 | 92 | ``` 93 | terraform { 94 | backend "s3" { 95 | bucket = "--tfstate" 96 | workspace_key_prefix = "appname" 97 | key = "terraform.tfstate" 98 | region = "region" 99 | dynamodb_table = "--tflock" 100 | } 101 | } 102 | 103 | ``` 104 | 105 | - Terraform Workspaces need to be created: 106 | - In the directory containing the Terraform code of the project (application infrastructure), execute the commands below: 107 | 108 | ``` 109 | terraform init 110 | terraform workspace new dev 111 | terraform workspace new hml 112 | terraform workspace new prd 113 | ``` 114 | 115 | - In the project code (application infrastructure), variable files need to be in the following structure: 116 | 117 | - - **inventory** 118 | - **** 119 | - **global.tfvars** 120 | 121 | ## Issues 122 | 123 | 1. Windows users are having problem with stacksets instance. For Mac is working as intended. 124 | - https://github.com/hashicorp/terraform-provider-aws/issues/23349 125 | 2. 'arn:aws:s3:::lidiofflinetestes' resourcetaggingapi brings tags even when does not have tag 126 | 127 | --- 128 | 129 | a 130 | -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/codeBuild/iam.tf: -------------------------------------------------------------------------------- 1 | data "aws_caller_identity" "current" {} 2 | 3 | # ------------------------------------------------------------ 4 | # STS Assume Cross Account Role 5 | # ------------------------------------------------------------ 6 | resource "aws_iam_policy" "sts_cross" { 7 | name = "${var.application_name}-stsAssume-InfraBuildRole-${var.environment}" 8 | description = "Allow CodeBuild assume cross account role" 9 | 10 | policy = jsonencode({ 11 | Version = "2012-10-17" 12 | Statement = [ 13 | { 14 | Sid = "STSCrossAccount" 15 | Action = [ 16 | "sts:AssumeRole" 17 | ] 18 | Effect = "Allow" 19 | Resource = [ 20 | "arn:aws:iam::${var.workloadAccount}:role/InfraBuildRole-${var.application_name}" 21 | ] 22 | } 23 | ] 24 | }) 25 | } 26 | 27 | 28 | # ------------------------------------------------------------ 29 | # IAM Apply 30 | # ------------------------------------------------------------ 31 | resource "aws_iam_role" "codebuild_project_apply" { 32 | name = "${var.application_name}-TerraformCodeBuildProjectApplyRole-${var.environment}" 33 | 34 | assume_role_policy = jsonencode({ 35 | Version = "2012-10-17" 36 | Statement = [ 37 | { 38 | Action = "sts:AssumeRole" 39 | Principal = { 40 | Service = "codebuild.amazonaws.com" 41 | } 42 | Effect = "Allow" 43 | }, 44 | ] 45 | }) 46 | } 47 | 48 | resource "aws_iam_role_policy_attachment" "codebuild_project_apply_power_user" { 49 | role = aws_iam_role.codebuild_project_apply.name 50 | policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess" 51 | } 52 | 53 | resource "aws_iam_role_policy_attachment" "codebuild_project_apply_iam_full_access" { 54 | #checkov:skip=CKV2_AWS_56 - Pipeline Access to create resources in workload account 55 | role = aws_iam_role.codebuild_project_apply.name 56 | policy_arn = "arn:aws:iam::aws:policy/IAMFullAccess" 57 | } 58 | 59 | resource "aws_iam_role_policy_attachment" "codebuild_project_apply_sts" { 60 | role = aws_iam_role.codebuild_project_apply.name 61 | policy_arn = aws_iam_policy.sts_cross.arn 62 | } 63 | 64 | resource "aws_iam_role" "codebuild_project_plan_fmt" { 65 | name = "${var.application_name}-TerraformCodeBuildProjectPlanFmtRole-${var.environment}" 66 | 67 | assume_role_policy = jsonencode({ 68 | Version = "2012-10-17" 69 | 
Statement = [ 70 | { 71 | Action = "sts:AssumeRole" 72 | Principal = { 73 | Service = "codebuild.amazonaws.com" 74 | } 75 | Effect = "Allow" 76 | }, 77 | ] 78 | }) 79 | } 80 | 81 | # ------------------------------------------------------------ 82 | # IAM Plan 83 | # ------------------------------------------------------------ 84 | resource "aws_iam_policy" "codebuild_project_plan_fmt" { 85 | name = "${var.application_name}-TerraformCodeBuildActionPlanAndFmtPolicy-${var.environment}" 86 | description = "Allow CodeBuild to access to CloudWatch, Artifact store and backend" 87 | 88 | policy = jsonencode({ 89 | Version = "2012-10-17" 90 | Statement = [ 91 | { 92 | Sid = "AllowWriteLogToCloudWatchLogs" 93 | Action = [ 94 | "logs:CreateLogGroup", 95 | "logs:CreateLogStream", 96 | "logs:PutLogEvents" 97 | ] 98 | Effect = "Allow" 99 | Resource = ["*"] 100 | }, 101 | { 102 | Sid = "AllowArtifactStoreAccess" 103 | Action = [ 104 | "s3:PutObject", 105 | "s3:GetObject" 106 | ] 107 | Effect = "Allow" 108 | Resource = [ 109 | "${var.artifact_s3}/*", 110 | ] 111 | }, 112 | { 113 | Sid = "AllowUseKMSKey" 114 | Action = [ 115 | "kms:Decrypt", 116 | "kms:DescribeKey", 117 | "kms:Encrypt", 118 | "kms:ReEncrypt*", 119 | "kms:GenerateDatakey*" 120 | ] 121 | Effect = "Allow" 122 | Resource = var.artifact_kms 123 | }, 124 | { 125 | Sid = "AllowListBackendS3" 126 | Action = [ 127 | "s3:ListBucket", 128 | ] 129 | Effect = "Allow" 130 | Resource = var.backend_s3_arn 131 | }, 132 | { 133 | Sid = "AllowAccessToBackendS3" 134 | Action = [ 135 | "s3:GetObject", 136 | "s3:PutObject", 137 | ] 138 | Effect = "Allow" 139 | Resource = "${var.backend_s3_arn}/*" 140 | }, 141 | { 142 | Sid = "AllowAccessToBackendDynamoDBLockTable" 143 | Action = [ 144 | "dynamodb:GetItem", 145 | "dynamodb:PutItem", 146 | "dynamodb:DeleteItem" 147 | ] 148 | Effect = "Allow" 149 | Resource = var.backend_lock_dynamodb_arn 150 | }, 151 | ] 152 | }) 153 | } 154 | 155 | resource "aws_iam_role_policy_attachment" "codebuild_project_plan_fmt" { 156 | role = aws_iam_role.codebuild_project_plan_fmt.name 157 | policy_arn = aws_iam_policy.codebuild_project_plan_fmt.arn 158 | } 159 | 160 | resource "aws_iam_role_policy_attachment" "codebuild_project_plan_fmt_read_only" { 161 | role = aws_iam_role.codebuild_project_plan_fmt.name 162 | policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess" 163 | } 164 | 165 | resource "aws_iam_role_policy_attachment" "codebuild_project_plan_sts" { 166 | role = aws_iam_role.codebuild_project_plan_fmt.name 167 | policy_arn = aws_iam_policy.sts_cross.arn 168 | } 169 | 170 | 171 | # ------------------------------------------------------------ 172 | # IAM Cross-Account 173 | # ------------------------------------------------------------ 174 | provider "aws" { 175 | alias = "workload" 176 | } 177 | 178 | resource "aws_iam_role" "infra_build_workload" { 179 | name = "InfraBuildRole-${var.application_name}" 180 | provider = aws.workload 181 | assume_role_policy = jsonencode({ 182 | Version = "2012-10-17" 183 | Statement = [ 184 | { 185 | Action = "sts:AssumeRole" 186 | Principal = { 187 | AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" 188 | } 189 | Effect = "Allow" 190 | }, 191 | ] 192 | }) 193 | } 194 | 195 | resource "aws_iam_role_policy_attachment" "infra_build_workload" { 196 | #checkov:skip=CKV_AWS_274 - Pipeline Access to create resources in workload account 197 | provider = aws.workload 198 | role = aws_iam_role.infra_build_workload.name 199 | policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess" 
200 | } -------------------------------------------------------------------------------- /PIPELINE/pipeline-infra/modules/aws/pipeline/iam.tf: -------------------------------------------------------------------------------- 1 | data "aws_caller_identity" "current" {} 2 | # ------------------------------------------------------------ 3 | # IAM Role - CodePipeline Service role 4 | # ------------------------------------------------------------ 5 | resource "aws_iam_role" "codepipeline" { 6 | name = "${var.application_name}-TerraformCodePipelineRole-${var.environment}" 7 | 8 | assume_role_policy = jsonencode({ 9 | Version = "2012-10-17" 10 | Statement = [ 11 | { 12 | Action = "sts:AssumeRole" 13 | Principal = { 14 | Service = "codepipeline.amazonaws.com" 15 | } 16 | Effect = "Allow" 17 | }, 18 | ] 19 | }) 20 | } 21 | 22 | resource "aws_iam_policy" "codepipeline" { 23 | name = "${var.application_name}-TerraformCodePipelinePolicy-${var.environment}" 24 | description = "Allow CodePipeline to access to CodeCommit, Artifact store and assume IAM roles declared at each action" 25 | 26 | policy = jsonencode({ 27 | Version = "2012-10-17" 28 | Statement = [ 29 | { 30 | Sid = "AllowAssumeRole" 31 | Action = [ 32 | "sts:AssumeRole" 33 | ] 34 | Effect = "Allow" 35 | Resource = [ 36 | aws_iam_role.codepipeline_action_source.arn, 37 | aws_iam_role.codepipeline_action_plan_fmt.arn 38 | ] 39 | } 40 | ] 41 | }) 42 | } 43 | 44 | resource "aws_iam_role_policy_attachment" "codepipeline" { 45 | role = aws_iam_role.codepipeline.name 46 | policy_arn = aws_iam_policy.codepipeline.arn 47 | } 48 | 49 | # ------------------------------------------------------------ 50 | # IAM Role - CodePipeline Action role: Source 51 | # ------------------------------------------------------------ 52 | resource "aws_iam_role" "codepipeline_action_source" { 53 | name = "${var.application_name}-TerraformCodePipelineActionSourceRole-${var.environment}" 54 | 55 | assume_role_policy = jsonencode({ 56 | Version = "2012-10-17" 57 | Statement = [ 58 | { 59 | Action = "sts:AssumeRole" 60 | Principal = { 61 | AWS = aws_iam_role.codepipeline.arn 62 | } 63 | Effect = "Allow" 64 | }, 65 | ] 66 | }) 67 | } 68 | 69 | resource "aws_iam_policy" "codepipeline_action_source" { 70 | name = "${var.application_name}-TerraformCodePipelineActionSourcePolicy-${var.environment}" 71 | description = "Allow CodePipeline to access to CodeCommit and Artifact store" 72 | 73 | policy = jsonencode({ 74 | Version = "2012-10-17" 75 | Statement = [ 76 | { 77 | Sid = "AllowCodeCommitAccess" 78 | Action = [ 79 | "codecommit:GetBranch", 80 | "codecommit:GetRepository", 81 | "codecommit:GetCommit", 82 | "codecommit:GetUploadArchiveStatus", 83 | "codecommit:UploadArchive", 84 | "codecommit:CancelUploadArchive" 85 | ] 86 | Effect = "Allow" 87 | Resource = [ 88 | var.source_repository_arn 89 | ] 90 | }, 91 | { 92 | Sid = "AllowArtifactStoreAccess" 93 | Action = [ 94 | "s3:PutObject", 95 | "s3:GetObject", 96 | ] 97 | Effect = "Allow" 98 | Resource = [ 99 | "${var.artifact_s3}/*", 100 | ] 101 | }, 102 | { 103 | Sid = "AllowUseKMSKey" 104 | Action = [ 105 | "kms:Decrypt", 106 | "kms:DescribeKey", 107 | "kms:Encrypt", 108 | "kms:ReEncrypt*", 109 | "kms:GenerateDatakey*" 110 | ] 111 | Effect = "Allow" 112 | Resource = var.artifact_kms 113 | } 114 | ] 115 | }) 116 | } 117 | 118 | resource "aws_iam_role_policy_attachment" "codepipeline_action_source" { 119 | role = aws_iam_role.codepipeline_action_source.name 120 | policy_arn = aws_iam_policy.codepipeline_action_source.arn 121 | } 122 | 
123 | # ------------------------------------------------------------ 124 | # IAM Role - CodePipeline Action role: PlanAndFmt 125 | # ------------------------------------------------------------ 126 | resource "aws_iam_role" "codepipeline_action_plan_fmt" { 127 | name = "${var.application_name}-TerraformCodePipelineActionPlanAndFmtRole-${var.environment}" 128 | 129 | assume_role_policy = jsonencode({ 130 | Version = "2012-10-17" 131 | Statement = [ 132 | { 133 | Action = "sts:AssumeRole" 134 | Principal = { 135 | AWS = aws_iam_role.codepipeline.arn 136 | } 137 | Effect = "Allow" 138 | }, 139 | ] 140 | }) 141 | } 142 | 143 | resource "aws_iam_policy" "codepipeline_action_plan_fmt" { 144 | name = "${var.application_name}-TerraformCodePipelineActionPlanAndFmtPolicy-${var.environment}" 145 | description = "Allow CodePipeline to start or stop codebuild" 146 | 147 | policy = jsonencode({ 148 | Version = "2012-10-17" 149 | Statement = [ 150 | { 151 | Sid = "AllowCodeBuildAccess" 152 | Action = [ 153 | "codebuild:BatchGetBuilds", 154 | "codebuild:StartBuild", 155 | "codebuild:StopBuild" 156 | ] 157 | Effect = "Allow" 158 | Resource = [ 159 | "arn:aws:codebuild:${var.aws_region}:${data.aws_caller_identity.current.account_id}:project/${var.application_name}-TerraformPlanFmt-${var.environment}", 160 | "arn:aws:codebuild:${var.aws_region}:${data.aws_caller_identity.current.account_id}:project/${var.application_name}-TerraformCheckIAC-${var.environment}", 161 | "arn:aws:codebuild:${var.aws_region}:${data.aws_caller_identity.current.account_id}:project/${var.application_name}-TerraformApply-${var.environment}" 162 | ] 163 | } 164 | ] 165 | }) 166 | } 167 | 168 | resource "aws_iam_role_policy_attachment" "codepipeline_action_plan_fmt" { 169 | role = aws_iam_role.codepipeline_action_plan_fmt.name 170 | policy_arn = aws_iam_policy.codepipeline_action_plan_fmt.arn 171 | } 172 | 173 | # ------------------------------------------------------------ 174 | # IAM Role - CodePipeline Action role: Apply 175 | # ------------------------------------------------------------ 176 | resource "aws_iam_role" "codepipeline_action_apply" { 177 | name = "${var.application_name}-TerraformCodePipelineActionApplyRole-${var.environment}" 178 | 179 | assume_role_policy = jsonencode({ 180 | Version = "2012-10-17" 181 | Statement = [ 182 | { 183 | Action = "sts:AssumeRole" 184 | Principal = { 185 | AWS = aws_iam_role.codepipeline.arn 186 | } 187 | Effect = "Allow" 188 | } 189 | ] 190 | }) 191 | } 192 | 193 | resource "aws_iam_policy" "codepipeline_action_apply" { 194 | name = "${var.application_name}-TerraformCodePipelineActionApplyPolicy-${var.environment}" 195 | description = "Allow CodePipeline to start or stop codebuild" 196 | 197 | policy = jsonencode({ 198 | Version = "2012-10-17" 199 | Statement = [ 200 | { 201 | Sid = "AllowCodeBuildAccess" 202 | Action = [ 203 | "codebuild:BatchGetBuilds", 204 | "codebuild:StartBuild", 205 | "codebuild:StopBuild" 206 | ] 207 | Effect = "Allow" 208 | Resource = [ 209 | "arn:aws:codebuild:${var.aws_region}:${data.aws_caller_identity.current.account_id}:project/${var.application_name}-TerraformPlanFmt-${var.environment}", 210 | "arn:aws:codebuild:${var.aws_region}:${data.aws_caller_identity.current.account_id}:project/${var.application_name}-TerraformCheckIAC-${var.environment}", 211 | "arn:aws:codebuild:${var.aws_region}:${data.aws_caller_identity.current.account_id}:project/${var.application_name}-TerraformApply-${var.environment}" 212 | ] 213 | }, 214 | { 215 | Sid = "AllowSendNotification" 
216 | Action = [ 217 | "sns:Publish" 218 | ] 219 | Effect = "Allow" 220 | Resource = aws_sns_topic.approval.arn 221 | } 222 | ] 223 | }) 224 | } 225 | 226 | resource "aws_iam_role_policy_attachment" "codepipeline_action_apply" { 227 | role = aws_iam_role.codepipeline_action_apply.name 228 | policy_arn = aws_iam_policy.codepipeline_action_apply.arn 229 | } -------------------------------------------------------------------------------- /PIPELINE/api_helpers/pre-api-helpers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Executing Pre-API Helpers" 4 | projectName=$(aws ssm get-parameter --name "/aft/account-request/custom-fields/project" --query Parameter.Value --output text) 5 | echo "---------------------" 6 | echo "---------------------" 7 | echo "---------------------printenv---------------------" 8 | printenv 9 | echo "---------------------Get Child Account Number---------------------" 10 | accountnumber=$(aws sts get-caller-identity --query Account --output text) 11 | echo $accountnumber 12 | echo "---------------------" 13 | echo "---------------------" 14 | echo "---------------------Change AWS_PROFILE=SHARED-management-admin---------------------" 15 | export AWS_PROFILE=aft-management 16 | echo "aws sts get-caller-identity" 17 | aws sts get-caller-identity 18 | echo "---------------------" 19 | echo "---------------------" 20 | echo "---------------------Access folder: CD \API_Helpers---------------------" 21 | cd $DEFAULT_PATH/$CUSTOMIZATION/api_helpers/ 22 | echo "---------------------Done!---------------------" 23 | echo "---------------------" 24 | echo "---------------------" 25 | echo "---------------------Change AWS_PROFILE=aft-management-admin---------------------" 26 | export AWS_PROFILE=aft-management-admin 27 | echo "---------------------aws sts get-caller-identity---------------------" 28 | aws sts get-caller-identity 29 | echo "---------------------" 30 | echo "---------------------" 31 | aftnumber=$(aws sts get-caller-identity --query Account --output text) 32 | echo "---------------------Get project members account ---------------------" 33 | field="{\":project\":{\"S\":\"project\\\":\\\"$projectName\"}}" 34 | mails=$(aws dynamodb scan --table-name aft-request --filter-expression "contains(custom_fields,:project)" --expression-attribute-values $field | jq '.Items[].id.S') 35 | for item in $mails; do 36 | x="{\":email\":{\"S\":$item}}" 37 | aws dynamodb scan --table-name aft-request-metadata --filter-expression " email = :email" --expression-attribute-values $x | jq -j '.Items[].id.S, ";" ,.Items[].account_customizations_name.S,"\n"' >> MembersAccounts.txt 38 | done 39 | mv MembersAccounts.txt $DEFAULT_PATH/$CUSTOMIZATION/terraform/MembersAccounts.txt 40 | 41 | echo "---------------------Create roles In account workload ---------------------" 42 | cat "$DEFAULT_PATH/$CUSTOMIZATION/terraform/MembersAccounts.txt" 43 | file="$DEFAULT_PATH/$CUSTOMIZATION/terraform/MembersAccounts.txt" 44 | while read -r line; do 45 | workloadAccountId=$(echo $line | sed -r 's/\"//g' | cut -d';' -f1) 46 | flavor="$(echo $line | sed -r 's/\"//g' |cut -d';' -f2)" 47 | case $flavor in 48 | ("DEV") dev_account=$workloadAccountId ;; 49 | ("HML") hml_account=$workloadAccountId ;; 50 | ("PRODUCTION") prd_account=$workloadAccountId ;; 51 | esac 52 | done <$file 53 | echo "---------------------" 54 | echo "---------------------" 55 | echo "---------------------Creating Trust Policy to Pipeline Account have access to Workloads 
accounts---------------------" 56 | echo '{' > Role-Trust-Policy.json 57 | echo ' "Version": "2012-10-17",' >> Role-Trust-Policy.json 58 | echo ' "Statement": [' >> Role-Trust-Policy.json 59 | echo ' {' >> Role-Trust-Policy.json 60 | echo ' "Effect": "Allow",' >> Role-Trust-Policy.json 61 | echo ' "Principal": {' >> Role-Trust-Policy.json 62 | echo ' "AWS": [' >> Role-Trust-Policy.json 63 | echo ' "arn:aws:sts::'$aftnumber':assumed-role/AWSAFTAdmin/AWSAFT-Session",' >> Role-Trust-Policy.json 64 | echo ' "arn:aws:sts::'$accountnumber':assumed-role/AWSAFTExecution/AWSAFT-Session"' >> Role-Trust-Policy.json 65 | echo ' ]' >> Role-Trust-Policy.json 66 | echo ' },' >> Role-Trust-Policy.json 67 | echo ' "Action": "sts:AssumeRole"' >> Role-Trust-Policy.json 68 | echo ' }' >> Role-Trust-Policy.json 69 | echo ' ]' >> Role-Trust-Policy.json 70 | echo '}' >> Role-Trust-Policy.json 71 | 72 | echo "---------------------" 73 | echo "---------------------" 74 | echo "---------------------cat Role-Trust-Policy.json---------------------" 75 | cat Role-Trust-Policy.json 76 | echo "---------------------" 77 | echo "---------------------" 78 | echo "---------------------Creating AWS Profile for dev-account---------------------" 79 | profile="dev-account" 80 | CREDENTIALS=$(aws sts assume-role --role-arn "arn:aws:iam::$dev_account:role/AWSAFTExecution" --role-session-name "AWSAFT-Session") 81 | echo $CREDENTIALS 82 | aws_access_key_id="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"AccessKeyId\"]")" 83 | aws_secret_access_key="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"SecretAccessKey\"]")" 84 | aws_session_token="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"SessionToken\"]")" 85 | 86 | aws configure set aws_access_key_id "${aws_access_key_id}" --profile "${profile}" 87 | aws configure set aws_secret_access_key "${aws_secret_access_key}" --profile "${profile}" 88 | aws configure set aws_session_token "${aws_session_token}" --profile "${profile}" 89 | 90 | echo "---------------------Update assume role policy for dev-account---------------------" 91 | aws iam update-assume-role-policy --role-name AWSAFTExecution --policy-document file://Role-Trust-Policy.json --profile ${profile} 92 | 93 | 94 | 95 | echo "---------------------Change AWS_PROFILE=aft-management-admin---------------------" 96 | export AWS_PROFILE=aft-management-admin 97 | echo "---------------------aws sts get-caller-identity---------------------" 98 | aws sts get-caller-identity 99 | 100 | echo "---------------------" 101 | echo "---------------------" 102 | echo "---------------------Creating Trust Policy to Pipeline Account have access to Workloads accounts---------------------" 103 | 104 | echo '{' > Role-Trust-Policy.json 105 | echo ' "Version": "2012-10-17",' >> Role-Trust-Policy.json 106 | echo ' "Statement": [' >> Role-Trust-Policy.json 107 | echo ' {' >> Role-Trust-Policy.json 108 | echo ' "Effect": "Allow",' >> Role-Trust-Policy.json 109 | echo ' "Principal": {' >> Role-Trust-Policy.json 110 | echo ' "AWS": [' >> Role-Trust-Policy.json 111 | echo ' "arn:aws:sts::'$aftnumber':assumed-role/AWSAFTAdmin/AWSAFT-Session",' >> Role-Trust-Policy.json 112 | echo ' "arn:aws:sts::'$accountnumber':assumed-role/AWSAFTExecution/AWSAFT-Session"' >> Role-Trust-Policy.json 113 | echo ' ]' >> Role-Trust-Policy.json 114 | echo ' },' >> Role-Trust-Policy.json 115 | echo ' "Action": "sts:AssumeRole"' >> Role-Trust-Policy.json 116 | echo ' }' >> Role-Trust-Policy.json 117 | echo ' ]' >> Role-Trust-Policy.json 118 | echo 
'}' >> Role-Trust-Policy.json 119 | 120 | echo "---------------------" 121 | echo "---------------------" 122 | echo "---------------------cat Role-Trust-Policy.json---------------------" 123 | cat Role-Trust-Policy.json 124 | 125 | echo "---------------------" 126 | echo "---------------------" 127 | echo "---------------------Creating AWS Profile for hml-account---------------------" 128 | profile="hml-account" 129 | CREDENTIALS=$(aws sts assume-role --role-arn "arn:aws:iam::$hml_account:role/AWSAFTExecution" --role-session-name "AWSAFT-Session") 130 | echo $CREDENTIALS 131 | aws_access_key_id="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"AccessKeyId\"]")" 132 | aws_secret_access_key="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"SecretAccessKey\"]")" 133 | aws_session_token="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"SessionToken\"]")" 134 | 135 | echo $aws_access_key_id 136 | echo $aws_secret_access_key 137 | echo $aws_session_token 138 | aws configure set aws_access_key_id "${aws_access_key_id}" --profile "${profile}" 139 | aws configure set aws_secret_access_key "${aws_secret_access_key}" --profile "${profile}" 140 | aws configure set aws_session_token "${aws_session_token}" --profile "${profile}" 141 | echo "---------------------" 142 | echo "---------------------" 143 | echo "---------------------Update assume role policy for hml-account---------------------" 144 | aws iam update-assume-role-policy --role-name AWSAFTExecution --policy-document file://Role-Trust-Policy.json --profile ${profile} 145 | 146 | echo "---------------------" 147 | echo "---------------------" 148 | echo "---------------------Change AWS_PROFILE=aft-management-admin---------------------" 149 | export AWS_PROFILE=aft-management-admin 150 | echo "---------------------aws sts get-caller-identity---------------------" 151 | aws sts get-caller-identity 152 | 153 | echo "---------------------" 154 | echo "---------------------" 155 | echo "---------------------Creating Trust Policy to Pipeline Account have access to Workloads accounts---------------------" 156 | 157 | 158 | echo '{' > Role-Trust-Policy.json 159 | echo ' "Version": "2012-10-17",' >> Role-Trust-Policy.json 160 | echo ' "Statement": [' >> Role-Trust-Policy.json 161 | echo ' {' >> Role-Trust-Policy.json 162 | echo ' "Effect": "Allow",' >> Role-Trust-Policy.json 163 | echo ' "Principal": {' >> Role-Trust-Policy.json 164 | echo ' "AWS": [' >> Role-Trust-Policy.json 165 | echo ' "arn:aws:sts::'$aftnumber':assumed-role/AWSAFTAdmin/AWSAFT-Session",' >> Role-Trust-Policy.json 166 | echo ' "arn:aws:sts::'$accountnumber':assumed-role/AWSAFTExecution/AWSAFT-Session"' >> Role-Trust-Policy.json 167 | echo ' ]' >> Role-Trust-Policy.json 168 | echo ' },' >> Role-Trust-Policy.json 169 | echo ' "Action": "sts:AssumeRole"' >> Role-Trust-Policy.json 170 | echo ' }' >> Role-Trust-Policy.json 171 | echo ' ]' >> Role-Trust-Policy.json 172 | echo '}' >> Role-Trust-Policy.json 173 | 174 | echo "---------------------" 175 | echo "---------------------" 176 | echo "---------------------cat Role-Trust-Policy.json---------------------" 177 | cat Role-Trust-Policy.json 178 | 179 | echo "---------------------" 180 | echo "---------------------" 181 | echo "---------------------Creating AWS Profile for prd-account---------------------" 182 | profile="prd-account" 183 | CREDENTIALS=$(aws sts assume-role --role-arn "arn:aws:iam::$prd_account:role/AWSAFTExecution" --role-session-name "AWSAFT-Session") 184 | echo $CREDENTIALS 185 | 
aws_access_key_id="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"AccessKeyId\"]")" 186 | aws_secret_access_key="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"SecretAccessKey\"]")" 187 | aws_session_token="$(echo "${CREDENTIALS}" | jq --raw-output ".Credentials[\"SessionToken\"]")" 188 | 189 | echo $aws_access_key_id 190 | echo $aws_secret_access_key 191 | echo $aws_session_token 192 | aws configure set aws_access_key_id "${aws_access_key_id}" --profile "${profile}" 193 | aws configure set aws_secret_access_key "${aws_secret_access_key}" --profile "${profile}" 194 | aws configure set aws_session_token "${aws_session_token}" --profile "${profile}" 195 | 196 | echo "---------------------" 197 | echo "---------------------" 198 | echo "---------------------Update assume role policy for prd-account---------------------" 199 | aws iam update-assume-role-policy --role-name AWSAFTExecution --policy-document file://Role-Trust-Policy.json --profile ${profile} -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # aft-account-customizations-examples 2 | This project contains a practical use case of the AFT account customizations logic. More details about the AFT see [AFT Doc](https://developer.hashicorp.com/terraform/tutorials/aws/aws-control-tower-aft) 3 | 4 | ## Background and solutions considerations 5 | When working in multi-account scenarios it is quite common to come across the following situation: 6 | - 3 Workload accounts (1 development account, 1 staging account and 1 production account) 7 | - 1 Account for SLDC (Pipelines, Version Control and GITOPS) 8 | - In this type of account, cross-account access to the workload accounts is necessary so that the pipelines can deliver the codes in their respective environments 9 | 10 | It turns out to be a challenge to manage the creation of these accounts in a standardized manner. Beyond this point, we have to think about managing the distribution of IPs so that the VPCs don't have ranges that conflict between them, which could cause future problems in case of need for a VPC Peering. Another challenge is automating all the manual steps that Setup requires. 11 | 12 | Thinking about these challenges, an intelligence was developed using AFT Account Customizations that allows us to create workload accounts (dev, hml, prod) already with a VPC and its CIDR managed by the [CIDR MGMT](https://github.com/aws-samples/aws-vpc-cidr-mgmt) solution. In addition, intelligence allows the creation of a pipeline account already with a repository in the code commit and a pipeline (which delivers terraform codes) for each environment with cross-account roles for workload access. 13 | 14 | # Solution overview 15 | 16 | ## Workload accounts (dev, hml e prod) 17 | 18 | Workload accounts will have to customize the creation of the VPC following best practices and with its IP addressing managed by [CIDR MGMT](https://github.com/aws-samples/aws-vpc-cidr-mgmt). 
19 | 20 | ### Getting CIDR available for use 21 | 22 | In the path "aft-account-customizations / account-type / api_helpers" there is the `pre-api-helpers.sh` script, which executes the following steps: 23 | - Check whether the account already has a CIDR allocated in **CIDR MGMT**, by querying the AWS SSM Parameter Store for the parameter "/account-number/vpc/cidr" 24 | - If no CIDR is allocated, the Python script `get_cidr.py` is executed; it allocates one in **CIDR MGMT** and records the allocated CIDR in the Parameter Store (see the sketch above) 25 | 26 | ### VPC creation 27 | 28 | In the path "aft-account-customizations / account-type / terraform" there is the VPC creation IAC, which will: 29 | - Query the Parameter Store to retrieve the CIDR allocated to that account; 30 | - Create the VPC according to the environment 31 | - Production → VPC with 3 public subnets and 3 private subnets, with a NAT Gateway in all 3 AZs; 32 | - Staging → VPC with 3 public subnets and 3 private subnets, with a single-AZ NAT Gateway; 33 | - Development → VPC with 3 public subnets and 3 private subnets, with a single-AZ NAT Gateway 34 | 35 | ### Updating CIDR MGMT with VPC ID 36 | 37 | In the path "aft-account-customizations / account-type / api_helpers" there is the script `post-api-helpers.sh`, which executes the following step: 38 | - Run the Python script `update_cidr.py`, which updates **CIDR MGMT** by associating the VPC ID (created in the previous step) with the allocated CIDR 39 | 40 | ``` 41 | +-----------------+ 42 | | SH | 43 | | pre-api-helpers | 44 | +--------+--------+ 45 | | 46 | | 47 | +--------+--------+ 48 | +------->+ Parameter Store + 49 | | +--------+--------+ 50 | | | 51 | | | 52 | | v 53 | | +--------+--------+ 54 | | / \ 55 | | / \ +---------------+ 56 | Get CIDR | have allocated CIDR? |+---No----> + get_cidr.py + 57 | | \ / +---------------+ 58 | | \ / | 59 | | +-------------------+ No 60 | | | | 61 | | Yes v 62 | | | +-----------------+ 63 | | v + CIDR Management + 64 | | +------------------+ +-----------------+ 65 | x--------| Terraform | 66 | +------------------+ 67 | | 68 | | 69 | v 70 | +------------------+ 71 | | VPC | 72 | +------------------+ 73 | 74 | 75 | +------------------+ 76 | | SH | 77 | | post-api-helpers | 78 | +------------------+ 79 | | 80 | | 81 | v 82 | +------------------+ +-----------------+ 83 | + update_cidr.py +----------> + CIDR Management + 84 | +------------------+ +-----------------+ 85 | ``` 86 | 87 | ## Pipeline Accounts 88 | 89 | Pipeline accounts will be created with an AWS CodeCommit repository (to house the environment's IAC), an infrastructure pipeline for each environment (dev, hml, and prd), and cross-account roles for the pipeline to work properly. 90 | 91 | ## Creating cross-account roles 92 | 93 | To find the workload accounts that should receive the cross-account roles, the automation queries the **AFT DynamoDB** tables and collects the respective account IDs (a rough sketch of this lookup appears after the Jinja section below). 94 | 95 | In the "aft-account-customizations / PIPELINE / api_helpers" folder there is the `pre-api-helpers.sh` script, which performs the following steps: 96 | 97 | - Query the DynamoDB **aft-request** table to collect the emails used to request the accounts, using the project name as a filter.
98 | - With the email list, a query is performed on the **aft-request-metadata** table to collect the account IDs and their types 99 | - For each workload account: 100 | 101 | - The script accesses the workload account and updates the trust policy of the "AWSAFTExecution" role, allowing the pipeline account to assume it 102 | 103 | ### AWS CodeCommit repository 104 | 105 | In the path "aft-account-customizations / PIPELINE / terraform" there is the IAC that creates the CodeCommit repository, which will house the IAC for the project environment and, in turn, trigger the pipeline. 106 | 107 | ### IAC Pipeline Deployment 108 | 109 | In the path "aft-account-customizations / PIPELINE / api_helpers" there is the script `post-api-helpers.sh`, which performs the following steps: 110 | - For each environment (dev, hml and prd): 111 | - Dynamically populate the variables of the pipeline creation script using Jinja (detailed below) 112 | - Run the Terraform code that creates the pipeline for that environment. In this step, the AWS CodeCommit repository is linked (so the pipeline can use it as a source), together with the previously created cross-account access 113 | 114 | ## Jinja 115 | 116 | Jinja is a fast, expressive, and extensible templating engine. Special placeholders in the template let you write code similar to Python syntax. The template then receives data to render the final document. For more details, check the Jinja documentation. 117 | Here, we use Jinja to assemble Terraform's variables and backend files dynamically, so that nothing is hardcoded and the solution remains scalable. 118 | 119 | In the "aft-account-customizations / account-type / terraform" folder there are *.jinja files whose placeholders are filled in dynamically and used during Terraform execution.
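To make the account-discovery step above more concrete, here is a minimal boto3 sketch of the lookup against the AFT tables. The attribute names (`email`, `id`) and the project filter are assumptions for illustration only; the real queries, and the actual schema of the **aft-request** and **aft-request-metadata** tables, live in the PIPELINE `pre-api-helpers.sh` script.

```
import boto3

dynamodb = boto3.resource("dynamodb")


def workload_account_ids(project):
    # 1) Collect the e-mails of the account requests that belong to the project
    #    (full-table scan; pagination omitted for brevity).
    request_items = dynamodb.Table("aft-request").scan()["Items"]
    emails = [
        item["email"]            # assumed attribute name
        for item in request_items
        if project in str(item)  # crude project filter, for the sketch only
    ]

    # 2) Resolve each e-mail to an account ID via aft-request-metadata.
    metadata_items = dynamodb.Table("aft-request-metadata").scan()["Items"]
    return [
        item["id"]                      # assumed attribute name
        for item in metadata_items
        if item.get("email") in emails  # assumed attribute name
    ]
```

Each returned account ID is then used to assume the AWSAFTExecution role in the corresponding workload account and update its trust policy, as in the shell excerpt shown earlier.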
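As a concrete illustration of this templating step, the sketch below renders a backend-style template with the Jinja2 Python API (Jinja2 is listed in the PIPELINE requirements). The template body and the values are made up for the example; the real templates are the `*.jinja` files shipped with each account type.

```
from jinja2 import Template

# Hypothetical template and values -- the real ones are the *.jinja files
# next to the Terraform code (e.g. backend.jinja, aft-providers.jinja).
BACKEND_TEMPLATE = Template(
    """
terraform {
  backend "s3" {
    bucket = "{{ bucket_name }}"
    key    = "{{ environment }}/terraform.tfstate"
    region = "{{ region }}"
  }
}
"""
)

rendered = BACKEND_TEMPLATE.render(
    bucket_name="my-terraform-state-bucket",  # placeholder values
    environment="dev",
    region="us-west-2",
)

with open("backend.tf", "w") as handle:
    handle.write(rendered)
```

The `jinja2-cli` package, also listed in the PIPELINE requirements, exposes the same rendering as a `jinja2` command that can be called from shell scripts.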
120 | 121 | ## Folder Structure 122 | ``` 123 | ├── CODE_OF_CONDUCT.md 124 | ├── CONTRIBUTING.md 125 | ├── DEV 126 | │ ├── api_helpers 127 | │ │ ├── get_cidr.py 128 | │ │ ├── post-api-helpers.sh 129 | │ │ ├── pre-api-helpers.sh 130 | │ │ ├── python 131 | │ │ │ └── requirements.txt 132 | │ │ └── update_cidr.py 133 | │ └── terraform 134 | │ ├── aft-providers.jinja 135 | │ ├── backend.jinja 136 | │ └── vpc.tf 137 | ├── HML 138 | │ ├── api_helpers 139 | │ │ ├── get_cidr.py 140 | │ │ ├── post-api-helpers.sh 141 | │ │ ├── pre-api-helpers.sh 142 | │ │ ├── python 143 | │ │ │ └── requirements.txt 144 | │ │ └── update_cidr.py 145 | │ └── terraform 146 | │ ├── aft-providers.jinja 147 | │ ├── backend.jinja 148 | │ └── vpc.tf 149 | ├── LICENSE 150 | ├── PIPELINE 151 | │ ├── api_helpers 152 | │ │ ├── application_script 153 | │ │ │ ├── terraform-deploy-dev.sh 154 | │ │ │ ├── terraform-deploy-hml.sh 155 | │ │ │ └── terraform-deploy-prd.sh 156 | │ │ ├── post-api-helpers.sh 157 | │ │ ├── pre-api-helpers.sh 158 | │ │ └── python 159 | │ │ └── requirements.txt 160 | │ ├── pipeline-infra 161 | │ │ ├── README.md 162 | │ │ ├── environments 163 | │ │ │ ├── dev 164 | │ │ │ │ ├── dev.jinja 165 | │ │ │ │ └── dev.tfbackend 166 | │ │ │ ├── hml 167 | │ │ │ │ ├── hml.jinja 168 | │ │ │ │ └── hml.tfbackend 169 | │ │ │ └── prd 170 | │ │ │ ├── prd.jinja 171 | │ │ │ └── prd.tfbackend 172 | │ │ ├── main.tf 173 | │ │ ├── modules 174 | │ │ │ └── aws 175 | │ │ │ ├── artifact 176 | │ │ │ │ ├── kms.tf 177 | │ │ │ │ ├── s3.tf 178 | │ │ │ │ └── variables.tf 179 | │ │ │ ├── codeBuild 180 | │ │ │ │ ├── buildspecActionApply.yml 181 | │ │ │ │ ├── buildspecActionPlanCheckIAC.yml 182 | │ │ │ │ ├── buildspecActionPlanFmt.yml 183 | │ │ │ │ ├── codebuildActionApply.tf 184 | │ │ │ │ ├── codebuildActionCheckIAC.tf 185 | │ │ │ │ ├── codebuildActionPlan.tf 186 | │ │ │ │ ├── iam.tf 187 | │ │ │ │ └── variables.tf 188 | │ │ │ ├── pipeline 189 | │ │ │ │ ├── codePipeline.tf 190 | │ │ │ │ ├── iam.tf 191 | │ │ │ │ ├── snsApprove.tf 192 | │ │ │ │ └── variables.tf 193 | │ │ │ └── state 194 | │ │ │ ├── dynamodb.tf 195 | │ │ │ ├── kms.tf 196 | │ │ │ ├── s3.tf 197 | │ │ │ └── variables.tf 198 | │ │ ├── providers.tf 199 | │ │ └── variables.tf 200 | │ └── terraform 201 | │ ├── aft-providers.jinja 202 | │ ├── backend.jinja 203 | │ ├── main.tf 204 | │ ├── modules 205 | │ │ └── vpc 206 | │ │ ├── CHANGELOG.md 207 | │ │ ├── LICENSE 208 | │ │ ├── README.md 209 | │ │ ├── UPGRADE-3.0.md 210 | │ │ ├── main.tf 211 | │ │ ├── modules 212 | │ │ │ └── vpc-endpoints 213 | │ │ │ ├── README.md 214 | │ │ │ ├── main.tf 215 | │ │ │ ├── outputs.tf 216 | │ │ │ ├── variables.tf 217 | │ │ │ └── versions.tf 218 | │ │ ├── outputs.tf 219 | │ │ ├── tobeerased 220 | │ │ ├── variables.tf 221 | │ │ ├── versions.tf 222 | │ │ └── vpc-flow-logs.tf 223 | │ ├── s3-tf-state.tf 224 | │ ├── terraform.tfvars 225 | │ └── variables.tf 226 | ├── PRODUCTION 227 | │ ├── api_helpers 228 | │ │ ├── get_cidr.py 229 | │ │ ├── post-api-helpers.sh 230 | │ │ ├── pre-api-helpers.sh 231 | │ │ ├── python 232 | │ │ │ └── requirements.txt 233 | │ │ └── update_cidr.py 234 | │ └── terraform 235 | │ ├── aft-providers.jinja 236 | │ ├── backend.jinja 237 | │ └── vpc.tf 238 | └── README.md 239 | 240 | ``` 241 | ## Deployment 242 | 243 | ### To use this solution you must have: 244 | - AFT installed and configured [How to Install AFT](https://developer.hashicorp.com/terraform/tutorials/aws/aws-control-tower-aft) 245 | - CIDR MGMT installed and configured (same region as AFT) [How to Install CIDR
MGMT](https://github.com/aws-samples/aws-vpc-cidr-mgmt#deployment) 246 | 247 | ### Deploy 248 | - Clone this repository 249 | - Replace your AFT **aft-account-customizations** repository files with the files from this repository 250 | 251 | ### Usage 252 | - When creating an account for this scenario (dev, hml and prd), specify the account type in the **account_customizations_name** field of the **aft-account-request** file (the account type must match the folder name in **aft-account-customizations** exactly) 253 | - Set the following parameters in the **custom_fields** field of the **aft-account-request** file: 254 | - project = "project name" (it will be concatenated into the resource names) 255 | - region = "region name" 256 | - ipamapigtw = "Execution URL of the CIDR MGMT API Gateway" (e.g. "https://id-gateway.execute-api.us-west-2.amazonaws.com/v0/") 257 | - ipamlambda = "Name of the Lambda created in the CIDR MGMT deployment" 258 | 259 | ## Credits 260 | 261 | This project was architected and developed by Lucas Lopes, Rubens Dias and Victor Okuhama. 262 | 263 | ## Security 264 | 265 | See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information. 266 | 267 | ## License 268 | 269 | This library is licensed under the MIT-0 License. See the LICENSE file. 270 | 271 | --------------------------------------------------------------------------------