├── .dockerignore ├── .gitignore ├── requirements.txt ├── config └── appconfig.cfg ├── wsgi.py ├── Dockerfile ├── LICENSE ├── release.yml ├── app.py └── README.md /.dockerignore: -------------------------------------------------------------------------------- 1 | .git 2 | venv -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | build 3 | docker-compose.yml -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | boto3 2 | flask 3 | gunicorn 4 | SQLAlchemy==1.3.16 5 | Flask-SQLAlchemy==2.4.1 6 | psycopg2 7 | -------------------------------------------------------------------------------- /config/appconfig.cfg: -------------------------------------------------------------------------------- 1 | PG_USERNAME='postgres' 2 | PG_PASSWORD='postgres' 3 | PG_HOST='db' 4 | PG_PORT='5432' 5 | PG_DATABASE='postgres' 6 | STAGE='local' -------------------------------------------------------------------------------- /wsgi.py: -------------------------------------------------------------------------------- 1 | from app import app 2 | import os 3 | 4 | if __name__ == '__main__': 5 | stage = os.environ.get('STAGE', 'local') 6 | the_port = 8000 7 | app.run(host='0.0.0.0', port=the_port) -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | # FROM python:3.6 2 | FROM public.ecr.aws/bitnami/python:3.6 3 | COPY --from=public.ecr.aws/tinystacks/secret-env-vars-wrapper:latest-x86 /opt /opt 4 | COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.3.2-x86_64 /lambda-adapter /opt/extensions/lambda-adapter 5 | 6 | # Create app directory 7 | WORKDIR /app 8 | 9 | # Copy the requirements file 
10 | COPY requirements.txt . 11 | 12 | # Install app dependencies 13 | RUN apt-get update 14 | RUN apt-get install -y libpq-dev python-dev 15 | RUN pip install -r requirements.txt 16 | 17 | # Create a virtual environment 18 | RUN python3 -m venv venv 19 | 20 | # Copy the whole folder inside the image filesystem 21 | COPY . . 22 | 23 | EXPOSE 8000 24 | 25 | CMD /opt/tinystacks-secret-env-vars-wrapper gunicorn --bind 0.0.0.0:8000 wsgi:app 26 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 3-Clause License 2 | 3 | Copyright (c) 2021, TinyStacks 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | 1. Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | 2. Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | 3. Neither the name of the copyright holder nor the names of its 17 | contributors may be used to endorse or promote products derived from 18 | this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | -------------------------------------------------------------------------------- /release.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | phases: 3 | build: 4 | commands: 5 | - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_ENDPOINT 6 | - docker pull $ECR_IMAGE_URL:$PREVIOUS_STAGE_NAME 7 | - docker tag $ECR_IMAGE_URL:$PREVIOUS_STAGE_NAME $ECR_IMAGE_URL:$STAGE_NAME 8 | - docker push $ECR_IMAGE_URL:$STAGE_NAME 9 | post_build: 10 | on-failure: CONTINUE 11 | commands: 12 | - region="${STAGE_REGION:-$AWS_REGION}" 13 | - | 14 | if [ ! -z "$LAMBDA_FUNCTION_NAME" -a "$LAMBDA_FUNCTION_NAME" != "placeholder" ]; 15 | then 16 | aws lambda update-function-code --function-name "$LAMBDA_FUNCTION_NAME" --image-uri "$ECR_IMAGE_URL:$STAGE_NAME" --region $region 17 | imageSha=$(docker images --no-trunc --quiet $ECR_IMAGE_URL:$PREVIOUS_STAGE_NAME); 18 | aws lambda tag-resource --resource "$LAMBDA_FUNCTION_ARN" --tags "IMAGE_SHA=$imageSha" 19 | else 20 | echo 'Not a serverless stage' 21 | if [ -z "$SERVICE_NAME" ] || [ "$SERVICE_NAME" == "placeholder" ]; 22 | then 23 | echo 'Service is not ready yet. 
Repository tagged correctly, exiting now'; 24 | else 25 | TASK_ARN=$(aws ecs describe-services --cluster $CLUSTER_ARN --services $SERVICE_NAME | jq -r '.services[].taskDefinition') 26 | TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_ARN" | jq -r '.taskDefinition') 27 | CONTAINER_DEFINITIONS=$(jq -r '.containerDefinitions' <<< $TASK_DEFINITION) 28 | COMMIT_SHA=$ECR_IMAGE_URL:$(git rev-parse HEAD) 29 | CONTAINER_INDEX=$(echo $CONTAINER_DEFINITIONS | jq --arg IMAGE_URL "$ECR_IMAGE_URL" 'map(.image | contains($IMAGE_URL)) | index(true)' ) 30 | NEW_TASK_DEFINITION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$COMMIT_SHA" --arg INDEX "$CONTAINER_INDEX" '.containerDefinitions[$INDEX | tonumber].image = $IMAGE | del(.revision) | del(.status) | del(.statusString) | del(.registeredAt) | del(.taskDefinitionArn) | del(.compatibilities) | del(.requiresAttributes) | del(.registeredBy)' ) 31 | NEW_DEF_ARN=$(aws ecs register-task-definition --cli-input-json "$NEW_TASK_DEFINITION" | jq -r '.taskDefinition.taskDefinitionArn') 32 | aws ecs update-service --service $SERVICE_NAME --cluster $CLUSTER_ARN --task-definition "$NEW_DEF_ARN" --force-new-deployment 33 | fi 34 | fi 35 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, request, Response, jsonify 2 | import boto3 3 | import os 4 | import datetime 5 | import uuid 6 | import json 7 | from boto3.dynamodb.conditions import Key 8 | from flask_sqlalchemy import SQLAlchemy 9 | 10 | # App Initialization 11 | app = Flask(__name__) 12 | 13 | # app.config.from_pyfile('./config/appconfig.cfg') 14 | 15 | pg_user = os.environ.get('PG_USERNAME', 'postgres') 16 | pg_password = os.environ.get('PG_PASSWORD','undefined') 17 | pg_host = os.environ.get('PG_HOST','localhost') 18 | pg_port = os.environ.get('PG_PORT','5432') 19 | pg_database = 
os.environ.get('PG_DATABASE','postgres') 20 | 21 | CONF = f"postgresql://{pg_user}:{pg_password}@{pg_host}:{pg_port}/{pg_database}" 22 | app.config['SQLALCHEMY_DATABASE_URI'] = CONF 23 | 24 | app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False 25 | app.secret_key = 'secret' 26 | db = SQLAlchemy(app) 27 | 28 | # Model Class for Postgres integration 29 | class Item(db.Model): 30 | id = db.Column(db.Integer, primary_key=True) 31 | title = db.Column(db.String(200), unique=True, nullable=False) 32 | content = db.Column(db.String(200), nullable=False) 33 | 34 | def __init__(self, title, content): 35 | self.title = title 36 | self.content = content 37 | 38 | # DynamoDB 39 | dynamodb_client = None 40 | table = None 41 | 42 | def dynamodb_table(): 43 | global dynamodb_client 44 | global table 45 | if dynamodb_client is None: 46 | dynamodb_client = boto3.resource('dynamodb') 47 | table = dynamodb_client.Table(os.environ.get('TABLE_NAME', 'TableNameEnvVarNotSet')) 48 | return table 49 | cognito = None 50 | def cognito_client(): 51 | global cognito 52 | if cognito is None: 53 | cognito = boto3.client('cognito-idp') 54 | return cognito 55 | local_items = [] 56 | 57 | @app.route('/', methods=['GET']) 58 | def get(): 59 | return "hello world!" 
60 | 61 | @app.route('/ping', methods=['GET']) 62 | def ping(): 63 | return "pong" 64 | 65 | @app.route('/healthy', methods=['GET']) 66 | def healthy(): 67 | return "healthy" 68 | 69 | @app.route('/postgres-item', methods=['GET']) 70 | def itemget(): 71 | query = '''CREATE TABLE if not exists item(id serial PRIMARY KEY, title VARCHAR (200) UNIQUE NOT NULL, content VARCHAR (200) NOT NULL);''' 72 | db.engine.execute(query) 73 | 74 | items = [] 75 | for item in db.session.query(Item).all(): 76 | del item.__dict__['_sa_instance_state'] 77 | items.append(item.__dict__) 78 | 79 | return jsonify(items) 80 | 81 | @app.route('/postgres-item/<int:id>', methods=['GET']) 82 | def itemget_one(id): 83 | query = '''CREATE TABLE if not exists item(id serial PRIMARY KEY, title VARCHAR (200) UNIQUE NOT NULL, content VARCHAR (200) NOT NULL);''' 84 | db.engine.execute(query) 85 | 86 | item = db.session.query(Item).filter(Item.id == id).one() 87 | print(item.__dict__, flush=True) 88 | del item.__dict__['_sa_instance_state'] 89 | return jsonify(item.__dict__) 90 | 91 | 92 | @app.route('/postgres-item', methods=['POST']) 93 | def itemadd(): 94 | query = '''CREATE TABLE if not exists item(id serial PRIMARY KEY, title VARCHAR (200) UNIQUE NOT NULL, content VARCHAR (200) NOT NULL);''' 95 | db.engine.execute(query) 96 | 97 | request_data = request.get_json() 98 | title = request_data["title"] 99 | content = request_data["content"] 100 | 101 | entry = Item(title, content) 102 | db.session.add(entry) 103 | db.session.commit() 104 | 105 | return jsonify("item created") 106 | 107 | @app.route('/postgres-item', methods=['PUT']) 108 | def update_item(): 109 | query = '''CREATE TABLE if not exists item(id serial PRIMARY KEY, title VARCHAR (200) UNIQUE NOT NULL, content VARCHAR (200) NOT NULL);''' 110 | db.engine.execute(query) 111 | 112 | request_data = request.get_json() 113 | title = request_data["title"] 114 | content = request_data["content"] 115 | 116 | item = db.session.query(Item).filter(Item.title == title).one() 117 | item.content = content 118 | db.session.commit() 119 | 120 | return jsonify("item updated") 121 | 122 | @app.route('/postgres-item/<int:id>', methods=['DELETE']) 123 | def delete_item(id): 124 | get_item = Item.query.get(id) 125 | db.session.delete(get_item) 126 | db.session.commit() 127 | return make_response("", 204) 128 | 129 | # Get All items 130 | @app.route('/local-item', methods=['GET']) 131 | def get_item(): 132 | return make_response(json.dumps(local_items)) 133 | 134 | # Get One Item 135 | @app.route('/local-item/<int:id>', methods=['GET']) 136 | def get_one_item(id): 137 | for item in local_items: 138 | if item['id'] == id: 139 | return make_response(json.dumps(item)) 140 | 141 | # Create a new item 142 | @app.route('/local-item', methods=['POST']) 143 | def put_item(): 144 | request_data = request.get_json() 145 | print(request_data['title']) 146 | item = {'id': request_data['id'],'title': request_data['title'], 'content': request_data['content']} 147 | local_items.append(item) 148 | return make_response("") 149 | 150 | # Delete an item 151 | @app.route('/local-item/<int:id>', methods=['DELETE']) 152 | def delete_local_item(id): 153 | for item in local_items: 154 | if item['id'] == id: 155 | local_items.remove(item) 156 | return make_response(json.dumps(local_items)) 157 | 158 | @app.route('/dynamodb-item', methods=['PUT']) 159 | def put_db_item(): 160 | request_data = request.get_json() 161 | dynamodb_table().put_item( 162 | Item={ 163 | 'userId': "SYSTEM", 164 | 'itemId': str(uuid.uuid4()), 165 | 'title': request_data['title'], 166 | 'content': request_data['content'] 167 | } 168 | ) 169 | return make_response("") 170 | 171 | @app.route('/dynamodb-item', methods=['GET']) 172 | def get_db_items(): 173 | return make_response(str(dynamodb_table().scan()['Items'])) 174 | 175 | @app.route('/dynamodb-item', methods=['DELETE']) 176 | def delete_db_item(): 177 | request_data = request.get_json() 178 | dynamodb_table().delete_item( 179 | Key={ 180 | 'userId': "SYSTEM", 181 | 
'itemId': request_data['itemId'], 182 | } 183 | ) 184 | return make_response("") 185 | 186 | def make_response(rv, status=200): 187 | resp = Response(rv, status=status) 188 | resp.headers['Access-Control-Allow-Origin'] = '*' 189 | resp.headers['Access-Control-Allow-Credentials'] = 'true' 190 | return resp 191 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## AWS Docker Templates - Flask 2 | 3 | The AWS Docker Templates for Flask from TinyStacks enable launching a [Flask](https://flask.palletsprojects.com/en/2.0.x/) application as a Docker container using an AWS CodePipeline pipeline. The template includes its own small Flask sample application, enabling developers to start a new Flask project immediately. Developers can also take the AWS CodePipeline-specific files in this template and use them to ship an existing Flask application as a Docker image on AWS. 4 | 5 | ## License 6 | 7 | This sample code is made available under the BSD 3-Clause license. See the LICENSE file. 8 | 9 | ## Outline 10 | 11 | - [Prerequisites](#prerequisites) 12 | - [Overview](#overview) 13 | - [Sample Application](#sample-application) 14 | - [Dockerfile](#dockerfile) 15 | - [Build Template](#build-template) 16 | - [Release Template](#release-template) 17 | - [Getting Started](#getting-started) 18 | - [Existing Project](#existing-project) 19 | - [Known Limitations](#known-limitations) 20 | 21 | ## Prerequisites 22 | 23 | If you wish to build and test the Flask server (both as a standalone server and hosted inside of a Docker container) before publishing on AWS, you should have [Python](https://www.python.org/downloads/), [Flask](https://flask.palletsprojects.com/en/2.0.x/), and [Docker](https://docs.docker.com/get-docker/) installed locally. 24 | 25 | This solution makes use of a Docker image that comes with a Flask environment pre-installed. 
If you wish to run just the Docker container locally, you will only need Docker. 26 | 27 | This document also assumes that you have access to an AWS account. If you do not have one, [create one before getting started](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/). 28 | 29 | Since Flask is written in Python, this application uses [the AWS SDK for Python (Boto3)](https://aws.amazon.com/sdk-for-python/) to connect with AWS services. If executed locally, the use of AWS DynamoDB and Amazon Cognito in the Flask application will require that you have valid credentials for AWS saved on your local computer. We recommend configuring your credentials locally as a login profile and using the `AWS_PROFILE` environment variable to designate which set of credentials to use. For other options on setting AWS credentials, see the AWS documentation on [setting credentials in Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html). 30 | 31 | Further prerequisites for running these templates on AWS are provided below. 32 | 33 | ## Overview 34 | 35 | Flask is a lightweight Python-based framework for developing REST APIs. The hallmark of the Flask framework is that it is minimal and unopinionated; it provides basic API definition and routing and leaves other major decisions, such as data storage, entirely up to the user. 36 | 37 | This sample contains the following files: 38 | 39 | * A sample Flask API, defined in the app.py file. 40 | * A Dockerfile that builds the Flask application as a Docker image. 41 | * A requirements.txt file that defines the libraries your Flask application will require when installed inside of Docker. 42 | * A wsgi.py file for running your Flask application under the Gunicorn server on your Docker image. 43 | * A `build.yml` file for AWS CodeBuild that builds the image and pushes it to Amazon Elastic Container Registry (ECR). 
44 | * A `release.yml` file for AWS CodeBuild that deploys the image stored in ECR to an Amazon Elastic Container Service (ECS) cluster. 45 | 46 | Users can use the `build.yml` and `release.yml` YAML files to create an AWS CodePipeline pipeline that compiles the latest application into a Docker image, which is then stored in an Amazon Elastic Container Registry (ECR) registry that is accessible to the user. The Flask application itself is deployed as a Docker container to one of the user's available Amazon Elastic Container Service (Amazon ECS) clusters. 47 | 48 | ### Sample Application 49 | 50 | The sample application is a simple CRUD (Create/Read/Update/Delete) application that can store data in memory, in an AWS DynamoDB table, or in a PostgreSQL database. When this application runs, it presents a set of REST API endpoints that other applications can call to store data. 51 | 52 | The file `app.py` defines the REST API endpoints for the application. There are three sets of endpoints defined, each with slightly different functionality. 53 | 54 | | Endpoint Type | Description | 55 | | ------------- | ------------- | 56 | | `/local-item` | Stores the item in memory. | 57 | | `/dynamodb-item` | Stores the item in an AWS DynamoDB table. | 58 | | `/postgres-item` | Stores the item in a PostgreSQL database table, using the connection settings supplied in the `PG_*` environment variables. 
| 59 | 60 | The server uses the same endpoint for all CRUD operations, distinguishing between them with HTTP verbs (see `app.py` for the exact verbs each endpoint supports): 61 | 62 | | HTTP Verb | Operation | 63 | | ------------- | ------------- | 64 | | PUT | Create | 65 | | GET | Read | 66 | | POST | Update | 67 | | DELETE | Delete | 68 | 69 | #### Running the Flask Server Directly 70 | 71 | To test out the sample application directly before you package it into a Dockerfile, clone this project locally, then create a virtual environment. 72 | 73 | ``` 74 | python -m venv venv 75 | ``` 76 | 77 | Next, activate your virtual environment and run the Flask application. (Note: You only need to define the variables AWS_PROFILE and TABLE_NAME if you plan to test the endpoints that store data in a DynamoDB table. Otherwise, these can be omitted.) 78 | 79 | On Linux: 80 | 81 | ``` 82 | . venv/bin/activate 83 | 84 | pip install -r requirements.txt 85 | 86 | export AWS_PROFILE=[profile name] 87 | export TABLE_NAME=[table name] 88 | 89 | flask run -p 8000 --host 0.0.0.0 90 | ``` 91 | 92 | On Windows (Powershell): 93 | 94 | ``` 95 | venv\Scripts\activate 96 | 97 | pip install -r requirements.txt 98 | 99 | $env:AWS_PROFILE="[profile name]" 100 | $env:TABLE_NAME="[table name]" 101 | 102 | flask run -p 8000 --host 0.0.0.0 103 | ``` 104 | 105 | To test that the server is running, call its `/ping` endpoint from a separate command line window: 106 | 107 | ``` 108 | curl http://127.0.0.1:8000/ping 109 | ``` 110 | 111 | If the server is running, this call will return an HTTP 200 (OK) result code. 112 | 113 | #### Adding an Item in Memory 114 | 115 | To add an item in memory, call the `/local-item` endpoint with an HTTP POST verb. 
This can be done on Unix/macOS systems using curl: 116 | 117 | ``` 118 | curl -H "Content-Type: application/json" -X POST -d '{"id": 1, "title":"my title", "content":"my content"}' "http://127.0.0.1:8000/local-item" 119 | ``` 120 | 121 | On Windows Powershell, use `Invoke-WebRequest`: 122 | 123 | ```powershell 124 | $item = @{ id=1 125 | title="my title" 126 | content="my content" 127 | } 128 | $json = $item | ConvertTo-Json 129 | $response = Invoke-WebRequest 'http://127.0.0.1:8000/local-item' -Method Post -Body $json -ContentType 'application/json' 130 | ``` 131 | 132 | The item is stored in an in-memory list under the `id` you supply; you can retrieve it with a GET request to `/local-item/<id>`. 133 | 134 | #### Adding an Item to a DynamoDB Table 135 | 136 | The sample application can store items as JSON documents in a DynamoDB table. The name of the table used is retrieved from the environment variable `TABLE_NAME`. 137 | 138 | To write to DynamoDB, the application must be running in a context in which it has been granted permissions to access the DynamoDB table. 139 | 140 | #### Adding an Item to the Postgres Table 141 | 142 | This sample application can store new data in an existing Postgres database. 143 | 144 | To run properly, set these environment variables: 145 | 146 | PG_DATABASE= 147 | PG_USERNAME= 148 | PG_PASSWORD= 149 | PG_HOST= 150 | PG_PORT= 151 | 152 | To get all the items: 153 | ``` 154 | curl --location --request GET '127.0.0.1:8000/postgres-item' 155 | ``` 156 | 157 | An example of creating a new item: 158 | ``` 159 | curl --location --request POST 'localhost:8000/postgres-item' --header 'Content-Type: application/json' --data-raw '{"title":"theTitle","content":"theContent"}' 160 | ``` 161 | 162 | ### Dockerfile 163 | 164 | The Dockerfile copies the sample application into a Docker image and runs a Flask server. 
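The container reads its database settings from the `PG_*` environment variables listed above. A minimal sketch of how `app.py` assembles them into its SQLAlchemy connection URI (the defaults below mirror those in `app.py`; real values come from `config/appconfig.cfg` or from `-e` flags passed to `docker run`):

```python
import os

# Defaults mirror app.py; in the container these are typically
# overridden, e.g. `docker run -e PG_HOST=db -e PG_PASSWORD=...`.
pg_user = os.environ.get('PG_USERNAME', 'postgres')
pg_password = os.environ.get('PG_PASSWORD', 'undefined')
pg_host = os.environ.get('PG_HOST', 'localhost')
pg_port = os.environ.get('PG_PORT', '5432')
pg_database = os.environ.get('PG_DATABASE', 'postgres')

# Same format string app.py feeds to SQLALCHEMY_DATABASE_URI.
uri = f"postgresql://{pg_user}:{pg_password}@{pg_host}:{pg_port}/{pg_database}"
print(uri)
```

Note that the bundled `appconfig.cfg` sets `PG_HOST='db'`, which assumes a Postgres container reachable under that hostname (for example, on a shared Docker network).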
165 | 166 | A Dockerfile uses a Docker base image stored in a public Docker repository and then adds functionality to the image required by your application. This project's Dockerfile is derived from [Bitnami's Python 3.6 image](https://gallery.ecr.aws/bitnami/python), which is freely available on the [Amazon ECR Public Gallery](https://gallery.ecr.aws/). The Dockerfile performs the following steps: 167 | 168 | * Copies the requirements.txt file into a working directory on the Docker image. 169 | * Installs the contents of the requirements.txt file using pip, along with the `libpq-dev` system package needed by `psycopg2`. requirements.txt contains a list of the Python modules that are required to run your Flask application. The current configuration installs boto3 (the AWS SDK for Python) for connectivity to DynamoDB; Flask; Flask-SQLAlchemy and psycopg2 for PostgreSQL access; and gunicorn, a lightweight [WSGI](https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface) server for hosting Flask. It then copies the rest of the application into the working directory. 170 | * Starts Gunicorn on the container. 171 | 172 | If you have Docker installed, you can build and try out the sample application locally. Open a command prompt to the directory containing the Dockerfile and run the following command: 173 | 174 | ``` 175 | docker build -t tinystacks/flask-crud-app:latest . 176 | ``` 177 | 178 | Once built, run the image locally, mapping port 8080 on your host machine to port 8000 on the container: 179 | 180 | ``` 181 | docker run -p 8080:8000 -d tinystacks/flask-crud-app:latest 182 | ``` 183 | 184 | To test that the server is running, call its `/ping` endpoint from the command line. This time, use port 8080 to confirm that the request is served by the running Docker container: 185 | 186 | ``` 187 | curl http://127.0.0.1:8080/ping 188 | ``` 189 | 190 | If the server is running, this call will return an HTTP 200 (OK) result code. 191 | 192 | ### Build Template 193 | 194 | The `build.yml` file is an AWS CodeBuild file that builds your Dockerfile and publishes the output to an Amazon ECR registry. 
195 | 196 | To publish to Amazon ECR, the build script first needs to obtain login credentials to the repository. It does this using a combination of the AWS CLI command `aws ecr get-login-password` and the `docker login` command. After authentication, the script then builds your Docker image, names it, and tags it with the name `latest` to mark it as the most recent build. Finally, it performs a `docker push`, publishing the new Docker image to your Amazon ECR Docker repository. 197 | 198 | ```yml 199 | version: 0.2 200 | phases: 201 | build: 202 | commands: 203 | - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_ENDPOINT 204 | - docker build -t builtimage . 205 | - docker tag builtimage:latest $ECR_IMAGE_URL:latest 206 | - docker push $ECR_IMAGE_URL:latest 207 | ``` 208 | 209 | To run this in AWS CodeBuild, your build pipeline needs to define the following environment variables: 210 | 211 | * **ECR_ENDPOINT**: The URI of the Amazon ECR registry to publish to. This variable takes the format: *<accountnumber>*.dkr.ecr.*<aws-region>*.amazonaws.com 212 | * **ECR_IMAGE_URL**: The registry URI plus the name of the repository you are publishing to. This should take the format: *<accountnumber>*.dkr.ecr.*<aws-region>*.amazonaws.com/aws-docker-flask 213 | 214 | The variable `AWS_REGION` is a global variable provided by CodeBuild that defaults to the AWS region in which your build pipeline is defined. If you need to publish to an Amazon ECR repository in another region, modify this script to use a custom environment variable specifying the correct region. For more information on environment variables, see [Environment variables in build environments](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html) in the AWS documentation. 
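The two URL formats above can be sketched concretely. The account ID, region, and repository name below are placeholder values, not ones taken from this template:

```python
# Placeholder values -- substitute your own account, region, and repository.
account_id = "123456789012"
region = "us-east-1"
repo_name = "aws-docker-flask"

# ECR_ENDPOINT: <accountnumber>.dkr.ecr.<aws-region>.amazonaws.com
ecr_endpoint = f"{account_id}.dkr.ecr.{region}.amazonaws.com"

# ECR_IMAGE_URL: the registry endpoint plus the repository name
ecr_image_url = f"{ecr_endpoint}/{repo_name}"

print(ecr_endpoint)   # 123456789012.dkr.ecr.us-east-1.amazonaws.com
print(ecr_image_url)  # 123456789012.dkr.ecr.us-east-1.amazonaws.com/aws-docker-flask
```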
215 | 216 | ### Release Template 217 | 218 | The `release.yml` file is another AWS CodeBuild file that takes the build output from the `build.yml` file (a Docker container image in an Amazon ECR repository) and runs it within an Amazon ECS cluster to which the pipeline has access. 219 | 220 | After logging in to the ECR repository using the `docker login` command, the script pulls down the previously built image and re-tags it with the name of the new stage. Once the image has been re-tagged and pushed, the script forces a new deployment of the defined Amazon ECS service so that it pulls the newly published image. 221 | 222 | ```yaml 223 | version: 0.2 224 | phases: 225 | build: 226 | commands: 227 | - aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_ENDPOINT 228 | - docker pull $ECR_IMAGE_URL:$PREVIOUS_STAGE_NAME 229 | - docker tag $ECR_IMAGE_URL:$PREVIOUS_STAGE_NAME $ECR_IMAGE_URL:$STAGE_NAME 230 | - docker push $ECR_IMAGE_URL:$STAGE_NAME 231 | - aws ecs update-service --service $SERVICE_NAME --cluster $CLUSTER_ARN --force-new-deployment 232 | ``` 233 | 234 | In addition to the variables discussed for `build.yml`, `release.yml` requires several environment variables to be defined in order to run: 235 | 236 | * **PREVIOUS_STAGE_NAME**: The previous build number or build stage name. This should be the Docker image tag generated by the previous build. 237 | * **STAGE_NAME**: The current build number or build stage name you wish to use (e.g., `latest`). 238 | * **SERVICE_NAME**: The name of the Amazon ECS service to run. You will need to define this service yourself once you have the URI to your published container. 239 | * **CLUSTER_ARN**: The name of the cluster within your Amazon ECS service to which the release script will deploy the container. This should be the name of a cluster that is running one or more instances of the service referenced by `SERVICE_NAME`. 
240 | 241 | ## Getting Started 242 | 243 | ### Existing Project 244 | 245 | If you already have an existing Flask application, you can use the core files included in this sample to run them on a Docker container in AWS. Make sure to use your own existing requirements.txt file if you have one. 246 | 247 | If your project is already Dockerized (i.e., it has its own Dockerfile), then simply copy over the `build.yml` and `release.yml` files into the root of your existing project. 248 | 249 | If your project is not Dockerized, you will also need to copy over the Dockerfile included in this sample. If your application uses a different port than port 8000, you will also need to update the `EXPOSE` line in the Dockerfile to use a different port: 250 | 251 | ```Dockerfile 252 | EXPOSE 8000 253 | ``` 254 | 255 | ## Known Limitations 256 | 257 | The container runs Gunicorn directly, which is not recommended in a production setting. The recommended production configuration is to [run Gunicorn behind a reverse proxy such as Nginx](https://docs.gunicorn.org/en/stable/deploy.html) to prevent potential denial of service attacks. 258 | --------------------------------------------------------------------------------