├── stacks ├── __init__.py ├── pipeline_infra_stack.py ├── parameter_stack.py ├── repo_stack.py ├── s3_pipeline_stack.py ├── cloudformation_pipeline_stack.py └── cross_account_role_stack.py ├── requirements.txt ├── docs ├── s3.png └── cloudformation.png ├── cdk.json ├── .gitignore ├── example-sam-buildspec.yml ├── source.bat ├── example-cdk-buildspec.yml ├── example-s3-buildspec.yml ├── LICENSE ├── setup.py ├── .github └── workflows │ └── codeql-analysis.yml ├── app.py └── README.md /stacks/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | -e . 2 | -------------------------------------------------------------------------------- /docs/s3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/guysqr/cross-account-codepipeline-cdk/HEAD/docs/s3.png -------------------------------------------------------------------------------- /docs/cloudformation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/guysqr/cross-account-codepipeline-cdk/HEAD/docs/cloudformation.png -------------------------------------------------------------------------------- /cdk.json: -------------------------------------------------------------------------------- 1 | { 2 | "app": "python3 app.py", 3 | "context": { 4 | "@aws-cdk/core:enableStackNameDuplicates": "true", 5 | "aws-cdk:enableDiffNoFail": "true" 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.swp 2 | package-lock.json 3 | __pycache__ 4 | .pytest_cache 5 | .env 6 | *.egg-info 7 | .DS_Store 8 | stacks/.DS_Store 9 | .vscode 10 | 11 | 
# CDK asset staging directory 12 | .cdk.staging 13 | cdk.out -------------------------------------------------------------------------------- /example-sam-buildspec.yml: -------------------------------------------------------------------------------- 1 | # buildspec.yml 2 | version: 0.2 3 | phases: 4 | install: 5 | runtime-versions: 6 | python: 3.7 7 | build: 8 | commands: 9 | - sam build 10 | post_build: 11 | commands: 12 | - sam package --s3-bucket $PACKAGE_BUCKET --output-template-file packaged.yaml 13 | - ls -al $CODEBUILD_SRC_DIR 14 | artifacts: 15 | files: 16 | - packaged.yaml 17 | -------------------------------------------------------------------------------- /source.bat: -------------------------------------------------------------------------------- 1 | @echo off 2 | 3 | rem The sole purpose of this script is to make the command 4 | rem 5 | rem source .env/bin/activate 6 | rem 7 | rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows. 8 | rem On Windows, this command just runs this batch file (the argument is ignored). 9 | rem 10 | rem Now we don't need to document a Windows command for activating a virtualenv. 
11 | 12 | echo Executing .env\Scripts\activate.bat for you 13 | .env\Scripts\activate.bat 14 | -------------------------------------------------------------------------------- /example-cdk-buildspec.yml: -------------------------------------------------------------------------------- 1 | # buildspec.yml 2 | version: 0.2 3 | phases: 4 | install: 5 | runtime-versions: 6 | python: 3.8 7 | commands: 8 | - npm install -g aws-cdk@1.102.0 9 | - pip install -r requirements.txt 10 | build: 11 | commands: 12 | - cdk synth -c environment=$ENVIRONMENT > packaged.yaml 13 | post_build: 14 | commands: 15 | - ls -al $CODEBUILD_SRC_DIR 16 | - more packaged.yaml 17 | artifacts: 18 | files: 19 | - packaged.yaml 20 | -------------------------------------------------------------------------------- /example-s3-buildspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | env: 3 | parameter-store: 4 | REACT_APP_USERPOOL_ID: /pipeline-pwa-master/cognito_userpool 5 | REACT_APP_CLIENT_ID: /pipeline-pwa-master/cognito_client_id 6 | REACT_APP_API_KEY: /pipeline-pwa-master/api_key 7 | phases: 8 | install: 9 | commands: 10 | - npm i npm@latest -g 11 | - pip install --upgrade pip 12 | - pip install --upgrade awscli 13 | pre_build: 14 | commands: 15 | - npm install 16 | build: 17 | commands: 18 | - export REACT_APP_NODE_ENV=$ENVIRONMENT 19 | - echo REACT_APP_NODE_ENV=$REACT_APP_NODE_ENV 20 | - npm run build 21 | 22 | artifacts: 23 | base-directory: build 24 | discard-paths: no 25 | name: build-artifacts 26 | files: 27 | - "**/*" 28 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 Guy Morton 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files
(the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /stacks/pipeline_infra_stack.py: -------------------------------------------------------------------------------- 1 | from aws_cdk import ( 2 | core as cdk, 3 | aws_codebuild as codebuild, 4 | aws_codecommit as codecommit, 5 | aws_codepipeline as codepipeline, 6 | aws_codepipeline_actions as codepipeline_actions, 7 | aws_lambda as lambda_, 8 | aws_s3 as s3, 9 | aws_iam as iam, 10 | aws_kms as kms, 11 | ) 12 | 13 | 14 | class PipelineInfraStack(cdk.Stack): 15 | def __init__( 16 | self, 17 | scope: cdk.Construct, 18 | id: str, 19 | target_account_id: str, 20 | repo_name: str, 21 | repo_branch: str, 22 | **kwargs 23 | ) -> None: 24 | super().__init__(scope, id, **kwargs) 25 | 26 | pipeline_key = kms.Key( 27 | self, 28 | "PipelineKey", 29 | description="CICD CMK shared with " + target_account_id, 30 | alias="cicd-" + repo_name + "-" + repo_branch + "-" + target_account_id, 31 | enable_key_rotation=False, 32 | trust_account_identities=True, 
33 | ) 34 | 35 | # the target account needs to be able to use the key to decrypt the artifacts 36 | pipeline_key.grant_decrypt(iam.AccountPrincipal(account_id=target_account_id)) 37 | 38 | cdk.CfnOutput( 39 | self, 40 | "PipelineKeyArnOutput", 41 | value=pipeline_key.key_arn, 42 | export_name=self.stack_name + ":PipelineKeyArn", 43 | ) 44 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import setuptools 2 | 3 | 4 | with open("README.md") as fp: 5 | long_description = fp.read() 6 | 7 | 8 | setuptools.setup( 9 | name="Pipeline-Creator", 10 | version="0.0.1", 11 | description="A CDK Python app to build AWS CodePipelines", 12 | long_description=long_description, 13 | long_description_content_type="text/markdown", 14 | author_email="guy.morton@versent.com.au", 15 | package_dir={"": "stacks"}, 16 | packages=setuptools.find_packages(where="stacks"), 17 | install_requires=[ 18 | "aws-cdk.core", 19 | "aws_cdk.aws_apigateway", 20 | "aws_cdk.aws_codedeploy", 21 | "aws_cdk.aws_lambda", 22 | "aws_cdk.aws_codebuild", 23 | "aws_cdk.aws_codepipeline", 24 | "aws_cdk.aws_codecommit", 25 | "aws_cdk.aws_codepipeline_actions", 26 | "aws_cdk.aws_s3", 27 | "aws_cdk.aws_iam", 28 | "aws_cdk.aws_logs", 29 | "aws_cdk.pipelines", 30 | ], 31 | python_requires=">=3.6", 32 | classifiers=[ 33 | "Development Status :: 4 - Beta", 34 | "Intended Audience :: Developers", 35 | "License :: OSI Approved :: MIT License", 36 | "Programming Language :: Python :: 3 :: Only", 37 | "Programming Language :: Python :: 3.6", 38 | "Programming Language :: Python :: 3.7", 39 | "Programming Language :: Python :: 3.8", 40 | "Topic :: Software Development :: Code Generators", 41 | "Topic :: Utilities", 42 | "Typing :: Typed", 43 | ], 44 | ) 45 | --------------------------------------------------------------------------------
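The parameter stack defined next in stacks/parameter_stack.py turns a comma-separated `-c parameter_list=name:value,...` context string into SSM parameter paths under `/pipeline-<repo>-<branch>/`. A minimal standalone sketch of that parsing logic follows; the function name `parse_parameter_list` and the example repo/branch values are illustrative, not part of the project.

```python
# Sketch of the parameter_list parsing used by ParameterStack.
# Entries without an explicit ":value" part fall back to "placeholder",
# matching the IndexError handling in the stack.
def parse_parameter_list(parameter_list: str, repo_name: str, repo_branch: str):
    pipeline_name = "pipeline-" + "-".join((repo_name, repo_branch))
    result = []
    for param in parameter_list.split(","):
        split_params = param.split(":")
        name = split_params[0]
        value = split_params[1] if len(split_params) > 1 else "placeholder"
        # parameter names are namespaced under the pipeline name
        result.append(("/" + "/".join((pipeline_name, name)), value))
    return result

print(parse_parameter_list("foo:value,bar", "myrepo", "main"))
# → [('/pipeline-myrepo-main/foo', 'value'), ('/pipeline-myrepo-main/bar', 'placeholder')]
```

Note that a value containing `:` would be truncated at the second colon; the stack shares this limitation since it also indexes `split(":")[1]`.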
/stacks/parameter_stack.py: -------------------------------------------------------------------------------- 1 | from aws_cdk import ( 2 | core as cdk, 3 | aws_ssm as ssm, 4 | ) 5 | 6 | 7 | class ParameterStack(cdk.Stack): 8 | def __init__( 9 | self, 10 | scope: cdk.Construct, 11 | id: str, 12 | parameter_list: str, 13 | repo_name: str, 14 | repo_branch: str, 15 | **kwargs 16 | ) -> None: 17 | super().__init__(scope, id, **kwargs) 18 | 19 | if not parameter_list: 20 | raise ValueError( 21 | "A parameter list needs to be provided as `-c parameter_list=`" 22 | ) 23 | 24 | if repo_name == None: 25 | raise ValueError( 26 | "The repo name needs to be provided as `-c repo=`" 27 | ) 28 | 29 | if repo_branch == None: 30 | raise ValueError( 31 | "The branch this pipeline will deploy must be provided as `-c branch=`" 32 | ) 33 | 34 | params_to_create = parameter_list.split(",") 35 | 36 | for param in params_to_create: 37 | split_params = param.split(":") 38 | name = split_params[0] 39 | try: 40 | value = split_params[1] 41 | except IndexError: 42 | value = "placeholder" 43 | pipeline_name = "pipeline-" + "-".join((repo_name, repo_branch)) 44 | param_name = "/" + "/".join((pipeline_name, name)) 45 | 46 | ssm.StringParameter( 47 | self, 48 | "ParameterFor" + name.capitalize(), 49 | description="Param for " + name, 50 | parameter_name=param_name, 51 | string_value=value, 52 | ) 53 | 54 | cdk.CfnOutput( 55 | self, 56 | "ParameterFor" + name.capitalize() + "Output", 57 | value=param_name, 58 | export_name=self.stack_name + ":" + pipeline_name + "-" + name, 59 | ) 60 | 61 | 62 | # cdk deploy parameter-stack-reponame-branch -c repo=reponame -c branch=branch -c parameter_list=foo:value,bar:othervalue --profile your-profile 63 | -------------------------------------------------------------------------------- /stacks/repo_stack.py: -------------------------------------------------------------------------------- 1 | from aws_cdk import ( 2 | core, 3 | aws_codebuild as codebuild, 4 |
aws_codecommit as codecommit, 5 | aws_codepipeline as codepipeline, 6 | aws_codepipeline_actions as codepipeline_actions, 7 | aws_lambda as lambda_, 8 | aws_s3 as s3, 9 | aws_iam as iam, 10 | ) 11 | 12 | 13 | class RepoStack(core.Stack): 14 | def __init__( 15 | self, scope: core.Construct, id: str, repo_name: str, **kwargs 16 | ) -> None: 17 | super().__init__(scope, id, **kwargs) 18 | 19 | if repo_name == None: 20 | raise ValueError("The repo name needs to be provided as `-c repo=`") 21 | 22 | repo_desc = "Repo created by " + self.stack_name + "." 23 | if repo_name.endswith("-mirror"): 24 | repo_desc = ( 25 | repo_desc 26 | + " NOTE: THIS IS A MIRROR REPO AND SHOULD NOT BE CHECKED OUT OR COMMITTED TO BY DEVELOPERS. It exists only to act as a source stage for CodePipeline." 27 | ) 28 | 29 | # create the code repo we'll be mirroring to for the source stage 30 | code_repo = codecommit.Repository( 31 | self, 32 | "Repository", 33 | repository_name=repo_name, 34 | description=repo_desc, 35 | ) 36 | 37 | # create the Gitlab user and attach a least-privilege inline policy 38 | gitlab_user = iam.User( 39 | self, "GitlabIamUser", user_name=repo_name + "-gitlab-user" 40 | ) 41 | mirror_policy_statement = iam.PolicyStatement( 42 | actions=["codecommit:GitPull", "codecommit:GitPush"], 43 | effect=iam.Effect.ALLOW, 44 | resources=[code_repo.repository_arn], 45 | ) 46 | 47 | custom_policy_document = iam.PolicyDocument( 48 | statements=[mirror_policy_statement] 49 | ) 50 | 51 | gitlab_policy = iam.Policy( 52 | self, "GitlabMirrorPolicy", document=custom_policy_document 53 | ) 54 | gitlab_user.attach_inline_policy(gitlab_policy) 55 | 56 | core.CfnOutput( 57 | self, 58 | "CodeCommitRepoCloneUrlHttp", 59 | value=code_repo.repository_clone_url_http, 60 | ) 61 | core.CfnOutput(self, "IamUserForGitlab", value=gitlab_user.user_name) 62 | -------------------------------------------------------------------------------- /.github/workflows/codeql-analysis.yml:
-------------------------------------------------------------------------------- 1 | # For most projects, this workflow file will not need changing; you simply need 2 | # to commit it to your repository. 3 | # 4 | # You may wish to alter this file to override the set of languages analyzed, 5 | # or to provide custom queries or build logic. 6 | # 7 | # ******** NOTE ******** 8 | # We have attempted to detect the languages in your repository. Please check 9 | # the `language` matrix defined below to confirm you have the correct set of 10 | # supported CodeQL languages. 11 | # 12 | name: "CodeQL" 13 | 14 | on: 15 | push: 16 | branches: [ main, cdk2 ] 17 | pull_request: 18 | # The branches below must be a subset of the branches above 19 | branches: [ main, cdk2 ] 20 | schedule: 21 | - cron: '29 14 * * 5' 22 | 23 | jobs: 24 | analyze: 25 | name: Analyze 26 | runs-on: ubuntu-latest 27 | permissions: 28 | actions: read 29 | contents: read 30 | security-events: write 31 | 32 | strategy: 33 | fail-fast: false 34 | matrix: 35 | language: [ 'python' ] 36 | # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ] 37 | # Learn more: 38 | # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed 39 | 40 | steps: 41 | - name: Checkout repository 42 | uses: actions/checkout@v2 43 | 44 | # Initializes the CodeQL tools for scanning. 45 | - name: Initialize CodeQL 46 | uses: github/codeql-action/init@v1 47 | with: 48 | languages: ${{ matrix.language }} 49 | # If you wish to specify custom queries, you can do so here or in a config file. 50 | # By default, queries listed here will override any specified in a config file. 51 | # Prefix the list here with "+" to use these queries and those in the config file. 
52 | # queries: ./path/to/local/query, your-org/your-repo/queries@main 53 | 54 | # Autobuild attempts to build any compiled languages (C/C++, C#, or Java). 55 | # If this step fails, then you should remove it and run the build manually (see below) 56 | - name: Autobuild 57 | uses: github/codeql-action/autobuild@v1 58 | 59 | # ℹ️ Command-line programs to run using the OS shell. 60 | # 📚 https://git.io/JvXDl 61 | 62 | # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines 63 | # and modify them (or add more) to build your code if your project 64 | # uses a compiled language 65 | 66 | #- run: | 67 | # make bootstrap 68 | # make release 69 | 70 | - name: Perform CodeQL Analysis 71 | uses: github/codeql-action/analyze@v1 72 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | from aws_cdk import core 4 | 5 | from stacks.repo_stack import RepoStack 6 | from stacks.cloudformation_pipeline_stack import CloudformationPipelineStack 7 | from stacks.s3_pipeline_stack import S3PipelineStack 8 | from stacks.cross_account_role_stack import CrossAccountRoleStack 9 | from stacks.pipeline_infra_stack import PipelineInfraStack 10 | from stacks.parameter_stack import ParameterStack 11 | 12 | import os 13 | 14 | app = core.App() 15 | 16 | account_num = os.environ["CDK_DEFAULT_ACCOUNT"] 17 | deploy_region = os.environ["CDK_DEFAULT_REGION"] 18 | 19 | # repo config 20 | repo = app.node.try_get_context("repo") 21 | branch = app.node.try_get_context("branch") 22 | github_oauth_token = app.node.try_get_context("github_oauth_token") 23 | repo_owner = app.node.try_get_context("repo_owner") 24 | 25 | # buckets 26 | target_bucket = app.node.try_get_context("target_bucket") 27 | artifact_bucket = app.node.try_get_context("artifact_bucket") 28 | 29 | # build vars 30 | build_env = 
app.node.try_get_context("build_env") 31 | stack_name = app.node.try_get_context("stack_name") 32 | approvers = app.node.try_get_context("approvers") 33 | devops_account_id = app.node.try_get_context("devops_account_id") 34 | target_account_id = app.node.try_get_context("target_account_id") 35 | pipeline_key_arn = app.node.try_get_context("pipeline_key_arn") 36 | cross_account_role = app.node.try_get_context("cross_account_role_arn") 37 | deployment_role_arn = app.node.try_get_context("deployment_role_arn") 38 | parameter_list = app.node.try_get_context("parameter_list") 39 | region = app.node.try_get_context("region") 40 | 41 | if region: 42 | deploy_region = region 43 | 44 | deploy_environment = core.Environment(account=account_num, region=deploy_region) 45 | 46 | if build_env == None: 47 | build_env = "" 48 | 49 | if repo and branch: 50 | 51 | if target_account_id: 52 | # create in the devops account 53 | PipelineInfraStack( 54 | app, 55 | "create-pipeline-infra-" + repo + "-" + branch, 56 | target_account_id=target_account_id, 57 | repo_name=repo, 58 | repo_branch=branch, 59 | env=deploy_environment, 60 | ) 61 | 62 | if devops_account_id: 63 | CrossAccountRoleStack( 64 | app, 65 | "create-cross-account-role-" + repo + "-" + branch, 66 | devops_account_id=devops_account_id, 67 | pipeline_key_arn=pipeline_key_arn, 68 | artifact_bucket=artifact_bucket, 69 | target_bucket=target_bucket, 70 | env=deploy_environment, 71 | ) 72 | 73 | if repo: 74 | RepoStack( 75 | app, 76 | "create-repo-" + repo, 77 | repo_name=repo, 78 | env=deploy_environment, 79 | ) 80 | 81 | if all([target_bucket, repo, branch, cross_account_role]): 82 | S3PipelineStack( 83 | app, 84 | "s3-create-pipeline-" + repo + "-" + branch, 85 | repo_name=repo, 86 | repo_branch=branch, 87 | build_env=build_env, 88 | target_bucket=target_bucket, 89 | approvers=approvers, 90 | cross_account_role_arn=cross_account_role, 91 | env=deploy_environment, 92 | github_oauth_token=github_oauth_token, 93 | 
repo_owner=repo_owner, 94 | ) 95 | 96 | if all([repo, branch, cross_account_role, deployment_role_arn]): 97 | CloudformationPipelineStack( 98 | app, 99 | "cf-create-pipeline-" + repo + "-" + branch, 100 | repo_name=repo, 101 | repo_branch=branch, 102 | github_oauth_token=github_oauth_token, 103 | build_env=build_env, 104 | stack_name=stack_name, 105 | repo_owner=repo_owner, 106 | cross_account_role_arn=cross_account_role, 107 | deployment_role_arn=deployment_role_arn, 108 | approvers=approvers, 109 | env=deploy_environment, 110 | ) 111 | 112 | if all([parameter_list, repo, branch]): 113 | ParameterStack( 114 | app, 115 | "parameter-stack-" + repo + "-" + branch, 116 | repo_name=repo, 117 | repo_branch=branch, 118 | parameter_list=parameter_list, 119 | env=deploy_environment, 120 | ) 121 | 122 | # only tag with context values that were actually supplied 123 | if build_env: 124 | app.node.apply_aspect(core.Tag("environment-type", build_env)) 125 | if repo: 126 | app.node.apply_aspect(core.Tag("repository", repo)) 127 | if branch: 128 | app.node.apply_aspect(core.Tag("branch", branch)) 129 | # more tags here 130 | 131 | app.synth() 132 | -------------------------------------------------------------------------------- /stacks/s3_pipeline_stack.py: -------------------------------------------------------------------------------- 1 | from aws_cdk import ( 2 | core as cdk, 3 | aws_codebuild as codebuild, 4 | aws_codecommit as codecommit, 5 | aws_codepipeline as codepipeline, 6 | aws_codepipeline_actions as codepipeline_actions, 7 | aws_s3 as s3, 8 | aws_iam as iam, 9 | aws_kms as kms, 10 | ) 11 | 12 | 13 | class S3PipelineStack(cdk.Stack): 14 | def __init__( 15 | self, 16 | scope: cdk.Construct, 17 | id: str, 18 | repo_name: str, 19 | repo_branch: str, 20 | build_env: str, 21 | target_bucket: str, 22 | approvers: str, 23 | cross_account_role_arn: str, 24 | github_oauth_token: str, 25 | repo_owner: str, 26 | **kwargs 27 | ) -> None: 28 | super().__init__(scope, id, **kwargs) 29 | 30 | if target_bucket == None: 31 | raise ValueError( 32 | "The target bucket name
needs to be provided as `-c target_bucket=`" 33 | ) 34 | 35 | if repo_name == None: 36 | raise ValueError( 37 | "The repo name needs to be provided as `-c repo=`" 38 | ) 39 | 40 | if repo_branch == None: 41 | raise ValueError( 42 | "The branch this pipeline will deploy must be provided as `-c branch=`" 43 | ) 44 | 45 | if cross_account_role_arn == None: 46 | raise ValueError( 47 | "The cross account role this pipeline will assume must be provided as `-c cross_account_role_arn=`" 48 | ) 49 | 50 | deploy_bucket = s3.Bucket.from_bucket_name( 51 | self, "BucketByAtt", bucket_name=target_bucket 52 | ) 53 | 54 | # get the key ARN from the create-pipeline-infra stack 55 | pipeline_key = kms.Key.from_key_arn( 56 | self, 57 | "DeployKey", 58 | key_arn=cdk.Fn.import_value( 59 | self.stack_name.replace("s3-create-pipeline", "create-pipeline-infra") 60 | + ":PipelineKeyArn" 61 | ), 62 | ) 63 | # Create the artifacts bucket we'll use 64 | artifacts_bucket = s3.Bucket( 65 | self, 66 | "PipelineArtifactsBucket", 67 | bucket_key_enabled=True, 68 | encryption_key=pipeline_key, 69 | encryption=s3.BucketEncryption.KMS, 70 | ) 71 | 72 | # create the pipeline and tell it to use the artifacts bucket 73 | pipeline = codepipeline.Pipeline( 74 | self, 75 | "Pipeline-" + repo_name + "-" + repo_branch, 76 | artifact_bucket=artifacts_bucket, 77 | pipeline_name="pipeline-" + repo_name + "-" + repo_branch, 78 | cross_account_keys=True, 79 | ) 80 | 81 | # create the source stage, which grabs the code from the repo and outputs it as an artifact 82 | source_output = codepipeline.Artifact() 83 | 84 | if github_oauth_token and repo_owner: 85 | pipeline.add_stage( 86 | stage_name="Source", 87 | actions=[ 88 | codepipeline_actions.GitHubSourceAction( 89 | oauth_token=github_oauth_token, 90 | owner=repo_owner, 91 | action_name="GetGitHubSource", 92 | repo=repo_name, 93 | branch=repo_branch, 94 | trigger=codepipeline_actions.GitHubTrigger.WEBHOOK, 95 | output=source_output, 96 | ) 97 | ], 98 | ) 99 | 
else: 100 | pipeline.add_stage( 101 | stage_name="Source", 102 | actions=[ 103 | codepipeline_actions.CodeCommitSourceAction( 104 | action_name="GetSource", 105 | repository=codecommit.Repository.from_repository_name( 106 | self, "Repo", repo_name 107 | ), 108 | output=source_output, 109 | branch=repo_branch, 110 | ) 111 | ], 112 | ) 113 | 114 | # create the build stage which takes the source artifact and outputs the built artifact 115 | build_output = codepipeline.Artifact() 116 | build_project = codebuild.PipelineProject( 117 | self, 118 | "Build", 119 | build_spec=codebuild.BuildSpec.from_source_filename("buildspec.yml"), 120 | environment={"build_image": codebuild.LinuxBuildImage.AMAZON_LINUX_2_3}, 121 | environment_variables={ 122 | "PACKAGE_BUCKET": codebuild.BuildEnvironmentVariable( 123 | value=artifacts_bucket.bucket_name 124 | ), 125 | "ENVIRONMENT": codebuild.BuildEnvironmentVariable(value=build_env), 126 | }, 127 | ) 128 | # add permission to get parameter store values 129 | ssm_access = iam.PolicyStatement( 130 | actions=["ssm:GetParameters"], effect=iam.Effect.ALLOW, resources=["*"] 131 | ) 132 | build_project.add_to_role_policy(ssm_access) 133 | 134 | pipeline.add_stage( 135 | stage_name="Build", 136 | actions=[ 137 | codepipeline_actions.CodeBuildAction( 138 | action_name="Build", 139 | project=build_project, 140 | input=source_output, 141 | outputs=[build_output], 142 | ) 143 | ], 144 | ) 145 | if approvers: 146 | pipeline.add_stage( 147 | stage_name="ManualApproval", 148 | actions=[ 149 | codepipeline_actions.ManualApprovalAction( 150 | notify_emails=approvers.split(","), 151 | action_name="AwaitApproval", 152 | ) 153 | ], 154 | ) 155 | 156 | ########################################################## 157 | # S3 DEPLOYMENT 158 | ########################################################## 159 | 160 | cross_account_role = iam.Role.from_role_arn( 161 | self, 162 | "CrossAccountRole", 163 | role_arn=cross_account_role_arn, 164 | ) 165 | 166 | 
pipeline.add_stage( 167 | stage_name="Deploy", 168 | actions=[ 169 | codepipeline_actions.S3DeployAction( 170 | bucket=deploy_bucket, 171 | input=build_output, 172 | action_name="S3Deploy", 173 | role=cross_account_role, 174 | ), 175 | ], 176 | ) 177 | 178 | cdk.CfnOutput(self, "ArtifactBucketArn", value=artifacts_bucket.bucket_arn) 179 | -------------------------------------------------------------------------------- /stacks/cloudformation_pipeline_stack.py: -------------------------------------------------------------------------------- 1 | from aws_cdk import ( 2 | core as cdk, 3 | aws_codebuild as codebuild, 4 | aws_codecommit as codecommit, 5 | aws_codepipeline as codepipeline, 6 | aws_codepipeline_actions as codepipeline_actions, 7 | aws_lambda as lambda_, 8 | aws_s3 as s3, 9 | aws_iam as iam, 10 | aws_kms as kms, 11 | aws_logs as logs, 12 | ) 13 | 14 | 15 | class CloudformationPipelineStack(cdk.Stack): 16 | def __init__( 17 | self, 18 | scope: cdk.Construct, 19 | id: str, 20 | repo_name: str, 21 | repo_branch: str, 22 | build_env: str, 23 | cross_account_role_arn: str, 24 | deployment_role_arn: str, 25 | github_oauth_token: str, 26 | stack_name: str, 27 | repo_owner: str, 28 | approvers: str, 29 | **kwargs 30 | ) -> None: 31 | super().__init__(scope, id, **kwargs) 32 | 33 | if repo_name == None: 34 | raise ValueError( 35 | "The repo name needs to be provided as `-c repo=`" 36 | ) 37 | 38 | if repo_branch == None: 39 | raise ValueError( 40 | "The branch this pipeline will deploy must be provided as `-c branch=`" 41 | ) 42 | 43 | if cross_account_role_arn == None: 44 | raise ValueError( 45 | "The cross account role this pipeline will assume must be provided as `-c cross_account_role_arn=`" 46 | ) 47 | 48 | pipeline_key = kms.Key.from_key_arn( 49 | self, 50 | "DeployKey", 51 | key_arn=cdk.Fn.import_value( 52 | self.stack_name.replace("cf-create-pipeline", "create-pipeline-infra") 53 | + ":PipelineKeyArn" 54 | ), 55 | ) 56 | 57 | # Create the artifacts bucket 
we'll use 58 | artifacts_bucket = s3.Bucket( 59 | self, 60 | "PipelineArtifactsBucket", 61 | encryption_key=pipeline_key, 62 | encryption=s3.BucketEncryption.KMS, 63 | removal_policy=cdk.RemovalPolicy.DESTROY, 64 | ) 65 | 66 | cross_account_role = iam.Role.from_role_arn( 67 | self, 68 | "CrossAccountRole", 69 | role_arn=cross_account_role_arn, 70 | ) 71 | 72 | # derive the target account id from the supplied role 73 | target_account_id = cross_account_role.principal_account 74 | 75 | deployment_role = iam.Role.from_role_arn( 76 | self, 77 | "DeploymentRole", 78 | role_arn=deployment_role_arn, 79 | ) 80 | 81 | artifacts_bucket.add_to_resource_policy( 82 | iam.PolicyStatement( 83 | actions=["s3:Get*", "s3:Put*"], 84 | resources=[artifacts_bucket.arn_for_objects("*")], 85 | principals=[iam.AccountPrincipal(account_id=target_account_id)], 86 | ) 87 | ) 88 | 89 | artifacts_bucket.add_to_resource_policy( 90 | iam.PolicyStatement( 91 | actions=["s3:List*"], 92 | resources=[ 93 | artifacts_bucket.bucket_arn, 94 | artifacts_bucket.arn_for_objects("*"), 95 | ], 96 | principals=[iam.AccountPrincipal(account_id=target_account_id)], 97 | ) 98 | ) 99 | 100 | # create the pipeline and tell it to use the artifacts bucket 101 | pipeline = codepipeline.Pipeline( 102 | self, 103 | "Pipeline-" + repo_name + "-" + repo_branch, 104 | artifact_bucket=artifacts_bucket, 105 | pipeline_name="pipeline-" + repo_name + "-" + repo_branch, 106 | cross_account_keys=True, 107 | restart_execution_on_update=True, 108 | ) 109 | 110 | # allow the pipeline to assume any role in the target account 111 | cross_account_access = iam.PolicyStatement( 112 | actions=["sts:AssumeRole"], 113 | effect=iam.Effect.ALLOW, 114 | resources=["arn:aws:iam::" + target_account_id + ":role/*"], 115 | ) 116 | 117 | pipeline.add_to_role_policy(cross_account_access) 118 | 119 | # create the source stage, which grabs the code from the repo and outputs it as an artifact 120 | source_output = codepipeline.Artifact() 121 | 122 | 
if github_oauth_token and repo_owner: 123 | pipeline.add_stage( 124 | stage_name="Source", 125 | actions=[ 126 | codepipeline_actions.GitHubSourceAction( 127 | oauth_token=github_oauth_token, 128 | owner=repo_owner, 129 | action_name="GetGitHubSource", 130 | repo=repo_name, 131 | branch=repo_branch, 132 | trigger=codepipeline_actions.GitHubTrigger.WEBHOOK, 133 | output=source_output, 134 | ) 135 | ], 136 | ) 137 | else: 138 | pipeline.add_stage( 139 | stage_name="Source", 140 | actions=[ 141 | codepipeline_actions.CodeCommitSourceAction( 142 | action_name="GetSource", 143 | repository=codecommit.Repository.from_repository_name( 144 | self, "Repo", repo_name 145 | ), 146 | output=source_output, 147 | branch=repo_branch, 148 | ) 149 | ], 150 | ) 151 | 152 | # create the build stage which takes the source artifact and outputs the built artifact 153 | # to allow use of docker, need privileged flag to be set to True 154 | build_output = codepipeline.Artifact() 155 | build_project = codebuild.PipelineProject( 156 | self, 157 | "Build", 158 | build_spec=codebuild.BuildSpec.from_source_filename("buildspec.yml"), 159 | logging=codebuild.LoggingOptions( 160 | cloud_watch=codebuild.CloudWatchLoggingOptions( 161 | enabled=True, 162 | log_group=logs.LogGroup( 163 | self, 164 | "PipelineLogs", 165 | ), 166 | ) 167 | ), 168 | environment={ 169 | "build_image": codebuild.LinuxBuildImage.AMAZON_LINUX_2_3, 170 | "privileged": True, 171 | }, 172 | environment_variables={ 173 | "PACKAGE_BUCKET": codebuild.BuildEnvironmentVariable( 174 | value=artifacts_bucket.bucket_name 175 | ), 176 | "ENVIRONMENT": codebuild.BuildEnvironmentVariable(value=build_env), 177 | }, 178 | ) 179 | pipeline.add_stage( 180 | stage_name="Build", 181 | actions=[ 182 | codepipeline_actions.CodeBuildAction( 183 | action_name="Build", 184 | project=build_project, 185 | input=source_output, 186 | outputs=[build_output], 187 | ) 188 | ], 189 | ) 190 | 191 | # create the deployment stages that take the built artifact 
and create a change set then deploy it 192 | 193 | ########################################################## 194 | # CLOUDFORMATION DEPLOYMENT 195 | ########################################################## 196 | 197 | # fall back to a default stack name when one was not supplied 198 | stack_name = stack_name or repo_name + "-" + repo_branch + "-stack" 199 | 200 | # only set Environment if build_env is set 201 | if build_env: 202 | parameters = {"Environment": build_env} 203 | else: 204 | parameters = None 205 | 206 | pipeline.add_stage( 207 | stage_name="CreateChangeSet", 208 | actions=[ 209 | codepipeline_actions.CloudFormationCreateReplaceChangeSetAction( 210 | change_set_name=stack_name + "-changeset", 211 | action_name="CreateChangeSet", 212 | template_path=build_output.at_path("packaged.yaml"), 213 | stack_name=stack_name, 214 | cfn_capabilities=[ 215 | cdk.CfnCapabilities.NAMED_IAM, 216 | cdk.CfnCapabilities.AUTO_EXPAND, 217 | ], 218 | admin_permissions=True, 219 | parameter_overrides=parameters, 220 | role=cross_account_role, 221 | deployment_role=deployment_role, 222 | ), 223 | ], 224 | ) 225 | 226 | if approvers: 227 | pipeline.add_stage( 228 | stage_name="ApproveChangeSet", 229 | actions=[ 230 | codepipeline_actions.ManualApprovalAction( 231 | notify_emails=approvers.split(","), 232 | action_name="AwaitApproval", 233 | ) 234 | ], 235 | ) 236 | 237 | pipeline.add_stage( 238 | stage_name="DeployChangeSet", 239 | actions=[ 240 | codepipeline_actions.CloudFormationExecuteChangeSetAction( 241 | change_set_name=stack_name + "-changeset", 242 | stack_name=stack_name, 243 | action_name="Deploy", 244 | role=cross_account_role, 245 | ), 246 | ], 247 | ) 248 | 249 | cdk.CfnOutput(self, "ArtifactBucketArn", value=artifacts_bucket.bucket_arn) 250 | cdk.CfnOutput(self, "ArtifactBucketName", value=artifacts_bucket.bucket_name) 251 | -------------------------------------------------------------------------------- /stacks/cross_account_role_stack.py:
-------------------------------------------------------------------------------- 1 | from aws_cdk import ( 2 | core as cdk, 3 | aws_codebuild as codebuild, 4 | aws_codecommit as codecommit, 5 | aws_codepipeline as codepipeline, 6 | aws_codepipeline_actions as codepipeline_actions, 7 | aws_lambda as lambda_, 8 | aws_s3 as s3, 9 | aws_iam as iam, 10 | aws_kms as kms, 11 | ) 12 | 13 | #################################################################################################### 14 | # This stack needs to be created in the target account the pipeline will deploy to 15 | #################################################################################################### 16 | 17 | 18 | class CrossAccountRoleStack(cdk.Stack): 19 | def __init__( 20 | self, 21 | scope: cdk.Construct, 22 | id: str, 23 | devops_account_id: str, 24 | pipeline_key_arn: str, 25 | target_bucket: str = None, 26 | artifact_bucket: str = None, 27 | **kwargs 28 | ) -> None: 29 | super().__init__(scope, id, **kwargs) 30 | 31 | if devops_account_id == None: 32 | raise ValueError( 33 | "The AWS account ID to be trusted needs to be provided as `-c devops_account_id=`" 34 | ) 35 | 36 | if pipeline_key_arn == None: 37 | raise ValueError( 38 | "The KMS key from the devops account needs to be provided as `-c pipeline_key_arn=`" 39 | ) 40 | 41 | # if no artifact bucket is provided, allow access to all buckets 42 | # otherwise specify only the artifact bucket 43 | pipeline_s3_resources = "arn:aws:s3:::*" 44 | if artifact_bucket: 45 | art_bucket = s3.Bucket.from_bucket_name( 46 | self, "ArtBucketByAtt", bucket_name=artifact_bucket 47 | ) 48 | pipeline_s3_resources = art_bucket.arn_for_objects("*") 49 | 50 | # Start with an empty policy statements list 51 | policy_statements = [] 52 | deploy_policy_statements = [] 53 | 54 | #################################################################################################### 55 | # These policies are required in both deployment types, to read artifacts and 
use the key 56 | # see https://aws.amazon.com/premiumsupport/knowledge-center/codepipeline-deploy-cloudformation/ 57 | #################################################################################################### 58 | 59 | policy_statements.append( 60 | iam.PolicyStatement( 61 | actions=["cloudformation:*", "iam:PassRole"], 62 | effect=iam.Effect.ALLOW, 63 | resources=["*"], 64 | ) 65 | ) 66 | 67 | policy_statements.append( 68 | iam.PolicyStatement( 69 | actions=["s3:Get*", "s3:Put*", "s3:ListBucket"], 70 | effect=iam.Effect.ALLOW, 71 | resources=[pipeline_s3_resources], 72 | ) 73 | ) 74 | 75 | policy_statements.append( 76 | iam.PolicyStatement( 77 | actions=[ 78 | "kms:Decrypt", 79 | ], 80 | effect=iam.Effect.ALLOW, 81 | resources=[pipeline_key_arn], 82 | ) 83 | ) 84 | # allow this role to get items from the parameter store 85 | policy_statements.append( 86 | iam.PolicyStatement( 87 | actions=["ssm:GetParameters"], 88 | effect=iam.Effect.ALLOW, 89 | resources=["*"], 90 | ) 91 | ) 92 | 93 | #################################################################################################### 94 | # If you pass a target_bucket value, then we build a cross account role for S3Deploy 95 | #################################################################################################### 96 | if target_bucket: 97 | dep_bucket = s3.Bucket.from_bucket_name( 98 | self, 99 | "BucketByAtt", 100 | bucket_name=target_bucket, 101 | ) 102 | # Add the S3 deploy action to put objects in the deployment bucket 103 | policy_statements.append( 104 | iam.PolicyStatement( 105 | actions=["s3:DeleteObject*", "s3:PutObject*", "s3:Abort*"], 106 | effect=iam.Effect.ALLOW, 107 | resources=[dep_bucket.bucket_arn, dep_bucket.arn_for_objects("*")], 108 | ) 109 | ) 110 | 111 | #################################################################################################### 112 | # Otherwise, we build a cross account role for Cloudformation Deploy 113 | # 114 | # Note you may need to 
change permissions granted in this role if you are deploying more than 115 | # Lambda & API Gateway 116 | #################################################################################################### 117 | else: 118 | deploy_policy_statements.append( 119 | iam.PolicyStatement( 120 | actions=[ 121 | "cloudformation:CreateChangeSet", 122 | ], 123 | effect=iam.Effect.ALLOW, 124 | resources=[ 125 | "arn:aws:cloudformation:*:aws:transform/Serverless-2016-10-31" 126 | ], 127 | ) 128 | ) 129 | 130 | deploy_policy_statements.append( 131 | iam.PolicyStatement( 132 | actions=[ 133 | "cloudformation:CreateChangeSet", 134 | "cloudformation:DeleteStack", 135 | "cloudformation:DescribeChangeSet", 136 | "cloudformation:DescribeStackEvents", 137 | "cloudformation:DescribeStacks", 138 | "cloudformation:ExecuteChangeSet", 139 | "cloudformation:GetTemplateSummary", 140 | ], 141 | effect=iam.Effect.ALLOW, 142 | resources=["arn:aws:cloudformation:*:" + self.account + ":stack/*"], 143 | ) 144 | ) 145 | deploy_policy_statements.append( 146 | iam.PolicyStatement( 147 | actions=["wafv2:*"], 148 | effect=iam.Effect.ALLOW, 149 | resources=[ 150 | "arn:aws:wafv2:" + self.region + ":" + self.account + ":*" 151 | ], 152 | ) 153 | ) 154 | deploy_policy_statements.append( 155 | iam.PolicyStatement( 156 | actions=["apigateway:*"], 157 | effect=iam.Effect.ALLOW, 158 | resources=["arn:aws:apigateway:" + self.region + "::*"], 159 | ) 160 | ) 161 | deploy_policy_statements.append( 162 | iam.PolicyStatement( 163 | actions=["route53:*"], 164 | effect=iam.Effect.ALLOW, 165 | resources=["arn:aws:route53:::*"], 166 | ) 167 | ) 168 | deploy_policy_statements.append( 169 | iam.PolicyStatement( 170 | actions=["events:*"], 171 | effect=iam.Effect.ALLOW, 172 | resources=[ 173 | "arn:aws:events:" + self.region + ":" + self.account + ":rule/*" 174 | ], 175 | ) 176 | ) 177 | 178 | deploy_policy_statements.append( 179 | iam.PolicyStatement( 180 | actions=[ 181 | "iam:AttachRolePolicy", 182 | 
"iam:DeleteRole", 183 | "iam:DetachRolePolicy", 184 | "iam:GetRole", 185 | "iam:PassRole", 186 | "iam:TagRole", 187 | "iam:CreateRole", 188 | "iam:DeleteRolePolicy", 189 | "iam:PutRolePolicy", 190 | "iam:GetRolePolicy", 191 | "iam:CreateServiceLinkedRole", 192 | ], 193 | effect=iam.Effect.ALLOW, 194 | resources=["arn:aws:iam::" + self.account + ":role/*"], 195 | ) 196 | ) 197 | lambda_resource_prefix = "arn:aws:lambda:" 198 | deploy_policy_statements.append( 199 | iam.PolicyStatement( 200 | actions=[ 201 | "lambda:AddPermission", 202 | "lambda:CreateFunction", 203 | "lambda:DeleteFunction", 204 | "lambda:GetFunction", 205 | "lambda:GetFunctionConfiguration", 206 | "lambda:ListTags", 207 | "lambda:RemovePermission", 208 | "lambda:TagResource", 209 | "lambda:UntagResource", 210 | "lambda:UpdateFunctionCode", 211 | "lambda:UpdateFunctionConfiguration", 212 | "lambda:PublishLayerVersion", 213 | "lambda:GetLayerVersion", 214 | "lambda:EnableReplication*", 215 | "lambda:ListVersionsByFunction", 216 | "lambda:PublishVersion", 217 | ], 218 | effect=iam.Effect.ALLOW, 219 | resources=[ 220 | lambda_resource_prefix 221 | + self.region 222 | + ":" 223 | + self.account 224 | + ":*:*", 225 | lambda_resource_prefix 226 | + self.region 227 | + ":" 228 | + self.account 229 | + ":layer:*:*", 230 | lambda_resource_prefix 231 | + self.region 232 | + ":" 233 | + self.account 234 | + ":layer:*", 235 | lambda_resource_prefix 236 | + "ap-southeast-2:580247275435:layer:LambdaInsightsExtension:14", 237 | ], 238 | ) 239 | ) 240 | 241 | deploy_policy_statements.append( 242 | iam.PolicyStatement( 243 | actions=[ 244 | "secretsmanager:*", 245 | ], 246 | effect=iam.Effect.ALLOW, 247 | resources=["*"], 248 | ) 249 | ) 250 | 251 | deploy_policy_statements.append( 252 | iam.PolicyStatement( 253 | actions=["cloudfront:*", "acm:*"], 254 | effect=iam.Effect.ALLOW, 255 | resources=["*"], 256 | ) 257 | ) 258 | deploy_policy_statements.append( 259 | iam.PolicyStatement( 260 | actions=[ 261 | 
"logs:CreateLogGroup", 262 | "logs:PutRetentionPolicy", 263 | "logs:DeleteLogGroup", 264 | "logs:DescribeLogGroups", 265 | ], 266 | effect=iam.Effect.ALLOW, 267 | resources=["*"], 268 | ) 269 | ) 270 | 271 | deploy_policy_statements.append( 272 | iam.PolicyStatement( 273 | actions=[ 274 | "ec2:DescribeSecurityGroups", 275 | "ec2:CreateSecurityGroup", 276 | "ec2:DeleteSecurityGroup", 277 | "ec2:AuthorizeSecurityGroupIngress", 278 | "ec2:CreateVpcEndpoint", 279 | "ec2:ModifyVpcEndpoint", 280 | "ec2:DescribeVpcEndpoints", 281 | "ec2:DeleteVpcEndpoints", 282 | "ec2:DescribeSubnets", 283 | "ec2:DescribeVpcs", 284 | "ec2:DescribeNetworkInterfaces", 285 | "ec2:CreateTags", 286 | ], 287 | effect=iam.Effect.ALLOW, 288 | resources=["*"], 289 | ) 290 | ) 291 | 292 | deploy_policy_statements.append( 293 | iam.PolicyStatement( 294 | actions=["dynamodb:*"], 295 | effect=iam.Effect.ALLOW, 296 | resources=[ 297 | "arn:aws:dynamodb:" 298 | + self.region 299 | + ":" 300 | + self.account 301 | + ":table/*" 302 | ], 303 | ) 304 | ) 305 | 306 | deploy_policy_statements.append( 307 | iam.PolicyStatement( 308 | actions=["states:*"], 309 | effect=iam.Effect.ALLOW, 310 | resources=[ 311 | "arn:aws:states:" + self.region + ":" + self.account + ":*" 312 | ], 313 | ) 314 | ) 315 | 316 | # allow the cross account role to be assumed by the devops account 317 | cross_account_role = iam.Role( 318 | self, 319 | "CrossAccountRole", 320 | assumed_by=iam.AccountPrincipal(devops_account_id), 321 | inline_policies=[ 322 | iam.PolicyDocument(statements=policy_statements), 323 | ], 324 | ) 325 | 326 | if target_bucket == None: 327 | # merge the basic permissions with the extra cloudformation permissions 328 | policy_statements.extend(deploy_policy_statements) 329 | # create the deployment role, and allow it to be assumed by the Cloudformation service principal 330 | deployment_role = iam.Role( 331 | self, 332 | "DeploymentRole", 333 | 
assumed_by=iam.ServicePrincipal(service="cloudformation.amazonaws.com"), 334 | inline_policies=[ 335 | iam.PolicyDocument(statements=policy_statements), 336 | ], 337 | ) 338 | cdk.CfnOutput( 339 | self, 340 | "DeploymentRoleArnOutput", 341 | description="This role is for CloudFormation deployments and is configured in the deployment_role property of the CloudFormationCreateReplaceChangeSetAction action", 342 | value=deployment_role.role_arn, 343 | ) 344 | 345 | cdk.CfnOutput( 346 | self, 347 | "CrossAccountRoleArnOutput", 348 | description="This role is assumed by the pipeline to operate on the target account.", 349 | value=cross_account_role.role_arn, 350 | ) 351 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Cross Account deployment via a DevOps account using AWS CodePipeline & CDK 2 | 3 | This is a CDK project designed to facilitate the setup of CodePipeline and supporting infrastructure required to achieve the deployment of a Git repo from a central DevOps account into multiple "environment" accounts (eg, development, staging, production). 4 | 5 | This project supports two deployment models: 6 | 7 | - Deploy using Cloudformation (eg for SAM and simple\* CDK projects) 8 | - Deploy using S3 (for PWA, SPA etc) 9 | 10 | And the following Git providers: 11 | 12 | - GitHub 13 | - CodeCommit 14 | - GitLab 15 | 16 | If your repo is already in CodeCommit, you can scoot straight to the section [Cross-Account Deployment with CDK](#cross-account-deployment-with-cdk). 17 | 18 | If your repo is in GitHub, you will need to get an oauth token and other details to enable that to be set up. 19 | 20 | If your repo is in Bitbucket, you will need to add support for that, but as Bitbucket is supported by AWS as a source it should be easy. 
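The pipeline stacks decide which source action to build from the context supplied: a GitHub source action is created only when both an OAuth token and a repo owner are present, otherwise a CodeCommit source action is used. A minimal sketch of that selection logic (the function name is illustrative, not part of the project):

```python
def choose_source_provider(github_oauth_token, repo_owner):
    """Mirror of the stacks' source selection: GitHub needs both an
    OAuth token and a repo owner; anything else falls back to CodeCommit."""
    if github_oauth_token and repo_owner:
        return "github"
    return "codecommit"
```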
21 | 22 | If your code is in GitLab, this project has a solution that will allow you to use AWS CodePipeline to deploy it. 23 | 24 | ### Why not just use GitLab CI/CD? 25 | 26 | GitLab CI/CD pipelines run in a GitLab-owned environment and require AWS-account-altering credentials to be stored in GitLab. Choosing instead to deploy using AWS CodePipeline allows you to keep AWS-altering credentials inside your AWS environment while also leveraging the service integrations AWS provides as part of that service. 27 | 28 | You can continue to use GitLab CI for executing tests (and the like) as part of your Git workflow, eg to run tests before merges. 29 | 30 | --- 31 | 32 | ## If you're using GitLab and you want to deploy using AWS CodePipeline 33 | 34 | We're going to use repo mirroring to reflect your GitLab repo into AWS CodeCommit. This is a simple solution and requires minimal permissions to be granted to GitLab. 35 | 36 | ### Create a mirror repo in CodeCommit 37 | 38 | The `create-repo-` stack will allow you to create a CodeCommit repo and IAM user with minimal privileges that can be used to mirror your GitLab repo into CodeCommit. 39 | 40 | Run this stack, swapping `` for the name of the CodeCommit repo you want to create. I suggest you use the original repo name with `-mirror` on the end as your CodeCommit repo name, but you can pass whatever name you like (so long as it's a combination of letters, numbers, periods, underscores, and dashes between 1 and 100 characters in length). 41 | 42 | The repo and pipeline stacks should be created in your DevOps/Shared Services account, so use a profile here that will deploy to that account. Also, you will want to create the repo in the same AWS region as your deployment pipeline. 
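The naming rule above (letters, numbers, periods, underscores and dashes, 1 to 100 characters) can be checked before you deploy. A hypothetical helper, not part of this project, that derives and validates a `-mirror` name:

```python
import re

# CodeCommit repo name rule described above: letters, numbers,
# periods, underscores and dashes, between 1 and 100 characters.
REPO_NAME_RE = re.compile(r"^[A-Za-z0-9._-]{1,100}$")

def mirror_repo_name(gitlab_repo_name: str) -> str:
    """Suggest a CodeCommit mirror repo name and validate it."""
    name = gitlab_repo_name + "-mirror"
    if not REPO_NAME_RE.match(name):
        raise ValueError("invalid CodeCommit repo name: " + name)
    return name
```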
43 | 44 | ``` 45 | cdk deploy create-repo- \ 46 | -c repo= \ 47 | -c region= \ 48 | --profile 49 | ``` 50 | 51 | CloudFormation will output some useful items, specifically: 52 | 53 | - The IAM user you will use to generate Git credentials 54 | - The Git clone URL for HTTP access that you will need to set up in GitLab 55 | 56 | > The IAM user has a single inline policy attached to it that only allows the following on the repo you just created: 57 | > 58 | > - codecommit:GitPull 59 | > - codecommit:GitPush 60 | 61 | ### Get the AWS Git credentials 62 | 63 | Once you have the IAM user, go to the IAM console, open the Security Credentials tab for the user, and under the section `HTTPS Git credentials for AWS CodeCommit` click the `Generate Credentials` button. 64 | 65 | You can either download the credentials or just copy and paste them into GitLab now (recommended). 66 | 67 | ### Configuring GitLab 68 | 69 | In GitLab, open the repository to be push-mirrored. 70 | 71 | Go to `Settings > Repository`, and then expand `Mirroring repositories`. Fill in the Git repository URL field using this format: 72 | 73 | ``` 74 | https://@ 75 | ``` 76 | 77 | Replace `` with the AWS special HTTPS Git user ID from the `HTTPS Git credentials for AWS CodeCommit` created earlier (**NOT** the IAM user name). 78 | 79 | Replace `` with the clone URL from CloudFormation. 80 | 81 | For Mirror direction, select `Push`. 82 | 83 | For Authentication method, select `Password` and fill in the Password field with the password created earlier in `HTTPS Git credentials for AWS CodeCommit`. 84 | 85 | The option `Mirror only protected branches` should be chosen for CodeCommit as it pushes more frequently (from every five minutes to every minute). CodePipeline requires individual pipeline setups for named branches you wish to have an AWS CI setup for. 
Because feature branches that have dynamic names are unsupported, configuring `Mirror only protected branches` doesn’t cause flexibility problems with CodePipeline integration as long as you are also willing to protect all the named branches you want to build CodePipelines for. 86 | 87 | Select `Mirror repository`. You should see the mirrored repository appear. 88 | 89 | To test mirroring by forcing a push, select the half-circle arrows button (hover text is Update now). If Last successful update shows a date, you have configured mirroring correctly. If it isn’t working correctly, a red error tag appears and shows the error message as hover text. 90 | 91 | (adapted from https://docs.gitlab.com/ee/user/project/repository/repository_mirroring.html#setting-up-a-push-mirror-from-gitlab-to-aws-codecommit) 92 | 93 | --- 94 | 95 | # Cross-Account Deployment with CDK 96 | 97 | Now you have your code in a supported source repo, let's build our cross-account pipelines! 98 | 99 | ## About Stack Names and Context variables 100 | 101 | Each set of infra is specific to a single repo and branch combination, so all stacks will be named using the model: 102 | 103 | ``` 104 | -- 105 | ``` 106 | 107 | You will also need to supply the name of the repo and the branch as context variables to each `deploy` command, like so: 108 | 109 | ``` 110 | cdk deploy base-stack-name-reponame-branch -c repo= -c branch= 111 | ``` 112 | 113 | These context variables will be used when creating resources inside the stacks. 114 | 115 | Additional context variables can be specified on the command line to pass additional configuration into the stacks, as needed. For example, unless you are always deploying to your default region, you may need to pass the region you want to create your stack in using `-c region=`. 116 | 117 | > This project requires you to use the same region for all the components you build for a given repo. 
If your code is in CodeCommit and you need to deploy parts of the same repo to different regions you will need to either mirror it to multiple regions or modify this project to enable [cross-region actions](https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-cross-region.html). I found cross-region support a bit patchy and (because I didn't need it) I elected to keep things simple by keeping the repo and pipeline components all in the same region. 118 | 119 | ## Cross-Account Deployment Prerequisites 120 | 121 | Before we can achieve cross-account deployments we need to create some infrastructure that will facilitate sharing data and performing operations between accounts - a shared KMS key and some cross-account roles. 122 | 123 | ### Create the shared KMS key 124 | 125 | Firstly we will use KMS to create a customer-managed key (CMK). This will be unique to each distinct repo and branch combination, with the expectation that a single repo and branch combination will be deployed to a single environment account. 126 | 127 | > Reminder: you will need to specify any region that is not your default. If you don't specify `region` your default region will be used. 128 | 129 | ``` 130 | cdk deploy create-pipeline-infra-- \ 131 | -c repo= \ 132 | -c branch= \ 133 | -c region= \ 134 | -c target_account_id= \ 135 | --profile 136 | ``` 137 | 138 | This stack will output the ARN of the CMK key it generates, and it will apply a key policy allowing the target account to use it to decrypt objects as required. 139 | 140 | > The ARN of this key will be used in the next stack. 141 | 142 | ### Create the Cross-Account roles 143 | 144 | Now we need to create the role(s) that will be assumed in the target account by CodePipeline and the S3 and CloudFormation deploy actions. 145 | 146 | > Note we are deploying this stack into the target account, so use a profile linked to that account, not the DevOps account. 
147 | > 148 | > To facilitate enabling of inbound access to the target account from the DevOps account, we are also passing the ID of the DevOps/Shared Services account as a context variable. This will be used to create the needed trust relationships. 149 | 150 | There are two ways to build this stack depending on which type of pipeline you are building. 151 | 152 | **Option 1. If you are building a CloudFormation pipeline:** 153 | 154 | ``` 155 | cdk deploy create-cross-account-role-- \ 156 | -c repo= \ 157 | -c branch= \ 158 | -c region= \ 159 | -c devops_account_id= \ 160 | -c pipeline_key_arn= \ 161 | --profile 162 | ``` 163 | 164 | This stack will output the ARNs of the two roles it creates, which will be required in the next stack. 165 | 166 | **Option 2. If you are building an S3 pipeline:** 167 | 168 | We need also to pass the name of the bucket we will write the deployment artifacts to, because the pipeline will need to be granted explicit access to operate on it. 169 | 170 | ``` 171 | cdk deploy create-cross-account-role-- \ 172 | -c repo= \ 173 | -c branch= \ 174 | -c region= \ 175 | -c target_bucket= \ 176 | -c devops_account_id= \ 177 | -c pipeline_key_arn= \ 178 | --profile 179 | ``` 180 | 181 | This stack will output the ARNs of the role it creates, which will be required in the next stack. 182 | 183 | Note you can also pass `-c artifact_bucket=` to restrict the role(s) to only be able to access the artifact S3 bucket. We don't require it to be added here because there is a circular dependency between it and the cross-account roles. You can come back later and update this stack to add it if you wish to limit the scope of S3 access in the cross-account roles. 184 | 185 | --- 186 | 187 | ## S3 Deployment Pipeline 188 | 189 | This pipeline will include an S3Deploy action - see the [example-s3-buildspec.yml](./example-s3-buildspec.yml) file for an example of a simple buildspec that builds and deploys a project using `npm` commands. 
The buildspec must be called `buildspec.yml` and appear in the root of your Git project. More [info about buildspec.yml](https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html). 190 | 191 | ![Diagram of S3 pipeline](docs/s3.png "Diagram of S3 pipeline") 192 | 193 | To create this pipeline, we will deploy an instance of the `s3-create-pipeline` stack. 194 | 195 | > This project assumes some other build process has already created the bucket you will be deploying to, as your target bucket will typically have things like CloudFront distributions associated with it. 196 | 197 | This stack requires you to pass some additional context variables: 198 | 199 | - The S3 bucket you will deploy to as `-c target_bucket=` 200 | - The cross-account role to assume in the target account as `-c cross_account_role_arn=` 201 | - A value for build environment, which will be available as `$ENVIRONMENT` in the `buildspec.yml` file: `-c build_env=staging`. This can be used to modify build scripts for dev/staging/production. 202 | 203 | Optionally, you can pass a list of approvers, which will create a manual approval stage: `-c approvers=` 204 | 205 | The final CLI command should look something like this (assuming no manual approval stage): 206 | 207 | ``` 208 | cdk deploy s3-create-pipeline-- \ 209 | -c target_bucket= \ 210 | -c repo= \ 211 | -c region= \ 212 | -c branch= \ 213 | -c build_env= \ 214 | -c cross_account_role_arn= \ 215 | --profile 216 | ``` 217 | 218 | ### Adding a manual approval stage 219 | 220 | For deployments into your production environment, add the `approvers` context variable: 221 | 222 | ``` 223 | cdk deploy s3-create-pipeline-- \ 224 | -c target_bucket= \ 225 | -c repo= \ 226 | -c region= \ 227 | -c branch= \ 228 | -c build_env= \ 229 | -c approvers="guy.morton@versent.com.au," \ 230 | -c cross_account_role_arn= \ 231 | --profile 232 | ``` 233 | 234 | This will create the ManualApproval stage and the related SNS topics etc. 
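Note the trailing comma in the `approvers` example above: the pipeline stacks pass `approvers.split(",")` straight to `notify_emails`, so a trailing comma produces an empty email entry. If that causes trouble, a small helper (hypothetical, not part of this project) can sanitise the list first:

```python
def parse_approvers(approvers: str) -> list:
    """Split a comma-separated approver list, dropping the blank
    entry a trailing comma would otherwise produce."""
    return [email.strip() for email in approvers.split(",") if email.strip()]
```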
235 | 236 | ### Triggering the pipeline 237 | 238 | To trigger the pipeline, merge code changes to the `` branch of ``. You can also run it manually via the CodePipeline console, or via an AWS CLI command: 239 | 240 | ``` 241 | % aws codepipeline start-pipeline-execution --name --region --profile 242 | ``` 243 | 244 | --- 245 | 246 | ## CloudFormation Deployment Pipeline 247 | 248 | Use this pipeline type if your project build stage outputs CloudFormation, eg you have a SAM or CDK project or other project that deploys via CloudFormation. See the [example-sam-buildspec.yml](./example-sam-buildspec.yml) file for an example of a simple buildspec that builds and deploys a SAM project and [example-cdk-buildspec.yml](./example-cdk-buildspec.yml) file for an example that builds and deploys a CDK project. 250 | 251 | ![Diagram of CloudFormation pipeline](docs/cloudformation.png "Diagram of CloudFormation pipeline") 252 | 253 | The process is similar to the S3 process, but instead of the name of the deployment bucket we need the ARN of the CloudFormation deployment role we created earlier. This is the role that CloudFormation will assume in the target account to deploy your stack. 254 | 255 | The final CLI command should look something like this: 256 | 257 | ``` 258 | cdk deploy cf-create-pipeline-- \ 259 | -c repo= \ 260 | -c region= \ 261 | -c branch= \ 262 | -c deployment_role_arn="" \ 263 | -c cross_account_role_arn="" \ 264 | --profile 265 | ``` 266 | 267 | > This stack will output the name and ARN of the artifact bucket. You can use the name to update the cross-account role stack to narrow its access to S3. 268 | 269 | As with the S3 Pipeline, you can add manual approval as a stage before the CloudFormation Execute Change Set stage by adding the `-c approvers=""` context variable. 
269 | 270 | ``` 271 | cdk deploy cf-create-pipeline-- \ 272 | -c repo= \ 273 | -c region= \ 274 | -c branch= \ 275 | -c approvers="guy.morton@versent.com.au," \ 276 | -c deployment_role_arn="" \ 277 | -c cross_account_role_arn="" \ 278 | --profile 279 | ``` 280 | 281 | ### Triggering the pipeline 282 | 283 | To trigger the pipeline, merge code changes to the `` branch of ``. You can also run it manually via the CodePipeline console, or via an AWS CLI command: 284 | 285 | ``` 286 | % aws codepipeline start-pipeline-execution --name --region --profile 287 | ``` 288 | 289 | ## The Parameter Stack 290 | 291 | There is one more stack in this project, and it's there as a utility should you want to use it. It will allow you to quickly create one or more repo+branch scoped parameters in Parameter Store. There is an example of how to add Parameter Store values to your `buildspec.yml` in [example-s3-buildspec.yml](./example-s3-buildspec.yml). 292 | 293 | To use it: 294 | 295 | ``` 296 | cdk deploy parameter-stack-- \ 297 | -c repo= \ 298 | -c region= \ 299 | -c parameter_list=:,, \ 300 | --profile 301 | ``` 302 | 303 | Note you can choose to set a value or not, using a colon `:` to separate the key and value. If you don't specify a value we will set the value to `placeholder`. 304 | 305 | To allow for multiple copies of the same parameters to co-exist in the one account and region, parameters are scoped to your repo and branch name by using standard slash notation: 306 | 307 | ``` 308 | /pipeline--/ 309 | ``` 310 | 311 | Note it can only be used to create String-type parameters, but you could hack it to support StringLists if you wanted to. 312 | 313 | ## Refining the deployment permissions 314 | 315 | Note that the deployment role created in this project contains a bunch of permissions I have needed for deploying my stacks, but yours may need more permissions, or fewer. 
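Adding permissions follows the pattern already used in `cross_account_role_stack.py`: append another `iam.PolicyStatement` to `deploy_policy_statements`, scoped to the target region and account where possible. Expressed as raw IAM policy JSON, a hypothetical extra statement (here for SQS, purely illustrative) would look like:

```python
def sqs_deploy_statement(region: str, account: str) -> dict:
    """Illustrative extra IAM statement, scoped to region and account
    like the existing statements in the cross-account role stack."""
    return {
        "Effect": "Allow",
        "Action": [
            "sqs:CreateQueue",
            "sqs:DeleteQueue",
            "sqs:GetQueueAttributes",
            "sqs:SetQueueAttributes",
            "sqs:TagQueue",
        ],
        "Resource": "arn:aws:sqs:" + region + ":" + account + ":*",
    }
```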
316 | 317 | > If you get issues with the deploy pipeline in the target account failing because of missing permissions, you can add permissions for the CloudFormation deploy role in the `cross-account-role` stack. 318 | 319 | Unfortunately, knowing ahead of time what permissions you will need is an imperfect science. Here are some strategies you might adopt: 320 | 321 | **Option 1. Give your deployment role admin access.** 322 | 323 | This is not really recommended, because we should be aiming to grant least privilege at all times. 324 | 325 | **Option 2. Iterate on failure.** 326 | 327 | Run your deployment and, when the CloudFormation deployment fails, read the errors. Add the missing permissions iteratively. This can be time-consuming and a bit frustrating, but it works. 328 | 329 | **Option 3. Attempt to calculate the permissions you'll need.** 330 | 331 | **For SAM stacks**, you can generate the transformed template using this command: 332 | 333 | ``` 334 | sam validate --debug --profile |& sed '1,2d' | sed -n -e :a -e '1,3!{P;N;D;};N;ba' > transformed.yaml 335 | ``` 336 | 337 | You can then use https://github.com/iann0036/aws-leastprivilege to extract a policy document that estimates the least privileges that will be required to run your template. This document can be merged with the policy created in this stack. 338 | 339 | **For CDK stacks** you can run the same tool over the synthed stack templates generated into cdk.out. 340 | 341 | The process of restricting permissions is a bit of trial and error, but this should give you a starting point. You may need to iterate deployments and adjust if deployment failures occur when you limit permissions. 342 | 343 | --- 344 | 345 | ## Modifying/contributing to this project 346 | 347 | Hopefully this project is useful to you, but there are plenty of other scenarios that will have other requirements, so feel free to fork and modify this project to meet your exact requirements. 
348 | 349 | If you create a pipeline type or make other improvements you are welcome to submit a PR to merge it into this project. 350 | 351 | --- 352 | 353 | ## Troubleshooting Checklist 354 | 355 | If something is not working, check the following: 356 | 357 | 1. The cross-account role needs permissions to access the artifact bucket 358 | 2. The cross-account role needs to trust the DevOps account so that the account can assume it 359 | 3. The deployment role needs to trust the CloudFormation service principal 360 | 4. Both roles need to include permissions to access the shared CMK 361 | 5. The CMK needs to have a key policy allowing it to be used by the target account 362 | 6. The pipeline role needs to be allowed to assume the roles in the target account 363 | 364 | Note that all of the above should be organised for you if you deploy this project as directed. 365 | 366 | --- 367 | 368 | ## Generic CDK doco 369 | 370 | The `cdk.json` file tells the CDK Toolkit how to execute your app. 371 | 372 | This project is set up like a standard Python project. The initialization 373 | process also creates a virtualenv within this project, stored under the .env 374 | directory. To create the virtualenv it assumes that there is a `python3` 375 | (or `python` for Windows) executable in your path with access to the `venv` 376 | package. If for any reason the automatic creation of the virtualenv fails, 377 | you can create the virtualenv manually. 378 | 379 | To manually create a virtualenv on macOS and Linux: 380 | 381 | ``` 382 | $ python3 -m venv .env 383 | ``` 384 | 385 | After the init process completes and the virtualenv is created, you can use the following 386 | step to activate your virtualenv. 
387 | 388 | ``` 389 | $ source .env/bin/activate 390 | ``` 391 | 392 | If you are on a Windows platform, you would activate the virtualenv like this: 393 | 394 | ``` 395 | % .env\Scripts\activate.bat 396 | ``` 397 | 398 | Once the virtualenv is activated, you can install the required dependencies. 399 | 400 | ``` 401 | $ pip install -r requirements.txt 402 | ``` 403 | 404 | At this point you can synthesize the CloudFormation template for this code. 405 | 406 | ``` 407 | $ cdk synth 408 | ``` 409 | 410 | To add additional dependencies, for example other CDK libraries, just add 411 | them to your `setup.py` file and rerun the `pip install -r requirements.txt` 412 | command. 413 | 414 | ## Useful commands 415 | 416 | - `cdk ls` list all stacks in the app 417 | - `cdk synth` emits the synthesized CloudFormation template 418 | - `cdk deploy` deploy this stack to your default AWS account/region 419 | - `cdk diff` compare deployed stack with current state 420 | - `cdk docs` open CDK documentation 421 | 422 | Enjoy! 423 | 424 | --- 425 | 426 | \* supports CDK projects that synth a standalone template with no external assets 427 | --------------------------------------------------------------------------------