├── README.md ├── amazon-api-gateway ├── api-lambda-dynamodb-hol.md ├── rest-api-lambda-proxy.py └── url-commands-to-run.md ├── amazon-cloudfront ├── .DS_Store ├── cloudfront-cache-and-behavior.md ├── files │ ├── coffee.jpg │ ├── data-analysis.pdf │ ├── documents.pdf │ └── donut.jpg └── index.html ├── amazon-cloudwatch ├── .DS_Store └── custom-cloudwatch-metrics.md ├── amazon-dynamodb ├── DynamoDB CLI Commands.sh ├── addupdateorder.json └── mystore.json ├── amazon-ebs └── amazon-ebs-volumes.md ├── amazon-ec2 ├── create-asg-alb-cli.md ├── ec2-user-data-web-app.md ├── generate-load-on-alb.md ├── user-data-metadata.md └── user-data-web-server.sh ├── amazon-ecs └── ec2-cluster-instances.md ├── amazon-efs ├── .DS_Store └── working-with-efs.md ├── amazon-elasticache └── add-retrieve-value.md ├── amazon-s3 ├── .DS_Store ├── event-notification-sns.json ├── permissions-lesson │ ├── bucket-policy.json │ ├── identity-policy-deny.json │ └── identity-policy.json ├── presigned_index.html ├── s3-enforce-kms-encryption.json ├── s3-replication-permissions.json └── s3-trust-policy.json ├── amazon-sqs ├── aws-sqs-cli-commands.md └── lambda-code-sns-sqs.py ├── amazon-vpc ├── .DS_Store └── custom-vpc.md ├── aws-cloudformation ├── .DS_Store ├── 1-ec2-template.yml ├── 2-ec2-template.yml ├── 3-ec2-template.yml ├── Create Nested Stack using the AWS CLI.md └── create-vpc-with-cloudformation.yaml ├── aws-copilot └── copilot-commands.md ├── aws-developer-tools ├── AppSpec Examples │ ├── appspec-example-ec2.yml │ ├── appspec-example-ecs.yml │ └── appspec-example-lambda.yml ├── appspec.yml ├── basic-git-commands.md ├── buildspec.yml └── nodejs-blue.zip ├── aws-iam ├── .DS_Store ├── profile-ec2-full-access.config └── sts-assume-role.json ├── aws-kms └── kms-key-ebs-rds.json ├── aws-lambda ├── destinations-and-dlq.md ├── event-source-mapping.md ├── invoke-lambda-versions-aliases.md ├── invoking-functions.md ├── lambda-environ-test.md └── sam-cli-commands.md ├── aws-secrets-manager └── 
secrets-manager-cli-commands.md ├── aws-step-functions └── Step Functions with Lambda.md └── fargate-blue-green-ci-cd ├── appspec.yaml ├── create-service.json ├── ecr-allow-all.json ├── fargate-ci-cd-instructions.md └── taskdef.json /README.md: -------------------------------------------------------------------------------- 1 | # AWS Certified Developer Associate (DVA-C02) Course Code 2 | *By [Digital Cloud Training](https://digitalcloud.training/) - Course Author Neal Davis* 3 | 4 | ## How to Use the Course Code 5 | 6 | The code used throughout the course has been zipped up and is available for download from this repo. Please download the code to your computer and unzip the contents. When course updates are made the file may be updated and you will need to ensure you download the latest version. 7 | 8 | ## Course Overview 9 | 10 | In this course, you’ll learn everything you need to know to ace your AWS Certified Developer Associate exam. 11 | 12 | Use the practical exercises to learn how to architect and build applications on Amazon Web Services (AWS). 13 | 14 | The course includes many visual slides to help you understand the concepts. 15 | 16 | At the end of each section use the exam cram lesson for quick revision of the important facts and take the quizzes to test your knowledge. 
17 | 18 | **What you will learn:** 19 | 20 | This course covers all of the following topics that are included in the AWS Certified Developer Associate (DVA-C02) exam guide and are featured in the exam today: 21 | 22 | - AWS Serverless including AWS Lambda, Amazon API Gateway, Amazon DynamoDB, Amazon Cognito and the AWS Serverless Application Model (SAM) 23 | - Docker containers on AWS with Amazon Elastic Container Service (ECS) and AWS Fargate 24 | - AWS Developer Tools including AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline and CodeStar 25 | - Amazon EC2 including Auto Scaling and Elastic Load Balancing (ELB) 26 | - Amazon Virtual Private Cloud (VPC) including Security Groups, Network ACLs and Flow Logs 27 | - Amazon Route 53 and Amazon CloudFront 28 | - AWS Storage including Amazon S3 and Amazon EBS 29 | - The AWS CLI, SDK and APIs 30 | - AWS CloudFormation and Elastic Beanstalk 31 | - Amazon RDS, ElastiCache, and DynamoDB DAX 32 | - AWS Application integration services including Amazon SQS, SNS, AWS Step Functions and Amazon Kinesis 33 | - Monitoring, logging and tracing with Amazon CloudWatch, CloudTrail and AWS X-Ray 34 | - Security and access management with AWS IAM, Cognito, KMS, ACM, Systems Manager, and Secrets Manager 35 | 36 | Learn more and [enroll in this course](https://digitalcloud.training/aws-certified-developer-associate/) now to get started with your AWS Certified Developer Associate certification. 37 | -------------------------------------------------------------------------------- /amazon-api-gateway/api-lambda-dynamodb-hol.md: -------------------------------------------------------------------------------- 1 | ## 1 - Create IAM policy for Lambda execution role 2 | 3 | Create a policy named lambda-apigateway-policy 4 | 5 | Use the following JSON: 6 | 7 | ```json 8 | { 9 | "Version": "2012-10-17", 10 | "Statement": [ 11 | { 12 | "Sid": "Stmt1428341300017", 13 | "Action": [ 14 | "dynamodb:DeleteItem", 15 | "dynamodb:GetItem", 16 |
"dynamodb:PutItem", 17 | "dynamodb:Query", 18 | "dynamodb:Scan", 19 | "dynamodb:UpdateItem" 20 | ], 21 | "Effect": "Allow", 22 | "Resource": "*" 23 | }, 24 | { 25 | "Sid": "", 26 | "Resource": "*", 27 | "Action": [ 28 | "logs:CreateLogGroup", 29 | "logs:CreateLogStream", 30 | "logs:PutLogEvents" 31 | ], 32 | "Effect": "Allow" 33 | } 34 | ] 35 | } 36 | ``` 37 | 38 | ## 2 - Create the execution role 39 | 40 | 41 | Create a role named lambda-apigateway-role 42 | 43 | Use case should be Lambda 44 | 45 | Attach the lambda-apigateway-policy 46 | 47 | 48 | ## 3 - Create the Lambda function 49 | 50 | Create a function named "LambdaFunctionOverHttps" 51 | 52 | Use the latest Python runtime 53 | 54 | Use the lambda-apigateway-role as the execution role 55 | 56 | Add the following code: 57 | 58 | ```python 59 | import json 60 | import boto3 61 | 62 | dynamo = boto3.client('dynamodb') 63 | 64 | def lambda_handler(event, context): 65 | 66 | operation = event.get('operation') 67 | 68 | # Ensure TableName is included in payload 69 | if 'tableName' in event: 70 | event['payload']['TableName'] = event['tableName'] 71 | 72 | try: 73 | if operation == 'create': 74 | event['payload']['Item'] = format_item(event['payload']['Item']) 75 | dynamo.put_item(**event['payload']) 76 | key = {k: v for k, v in event['payload']['Item'].items() if 'id' in k.lower()} 77 | response = dynamo.get_item(TableName=event['payload']['TableName'], Key=key) 78 | elif operation == 'read': 79 | event['payload']['Key'] = format_item(event['payload']['Key']) 80 | response = dynamo.get_item(**event['payload']) 81 | elif operation == 'update': 82 | event['payload']['Key'] = format_item(event['payload']['Key']) 83 | event['payload']['AttributeUpdates'] = format_updates(event['payload']['AttributeUpdates']) 84 | response = dynamo.update_item(**event['payload']) 85 | elif operation == 'delete': 86 | event['payload']['Key'] = format_item(event['payload']['Key']) 87 | response = dynamo.delete_item(**event['payload']) 
88 | elif operation == 'list': 89 | response = dynamo.scan(**event['payload']) 90 | elif operation == 'echo': 91 | response = "Success" 92 | elif operation == 'ping': 93 | response = "pong" 94 | else: 95 | raise ValueError(f"Unknown operation: {operation}") 96 | 97 | return { 98 | 'statusCode': 200, 99 | 'body': json.dumps(response) 100 | } 101 | except Exception as e: 102 | return { 103 | 'statusCode': 400, 104 | 'body': json.dumps({'error': str(e)}) 105 | } 106 | 107 | def format_item(raw_item): 108 | 109 | formatted_item = {} 110 | for key, value in raw_item.items(): 111 | if isinstance(value, str): 112 | formatted_item[key] = {"S": value} 113 | elif isinstance(value, int) or isinstance(value, float): 114 | formatted_item[key] = {"N": str(value)} 115 | elif isinstance(value, list): 116 | formatted_item[key] = {"L": [format_item(item) if isinstance(item, dict) else item for item in value]} 117 | elif isinstance(value, dict): 118 | formatted_item[key] = {"M": format_item(value)} 119 | else: 120 | raise ValueError(f"Unsupported type for key {key}: {type(value)}") 121 | return formatted_item 122 | 123 | def format_updates(raw_updates): 124 | 125 | formatted_updates = {} 126 | for key, value in raw_updates.items(): 127 | action = value.get("Action", "PUT") # Default action is PUT 128 | formatted_value = format_item({key: value["Value"]})[key] 129 | formatted_updates[key] = { 130 | "Value": formatted_value, 131 | "Action": action 132 | } 133 | return formatted_updates 134 | ``` 135 | 136 | 137 | ## 4 - Test the function 138 | 139 | Use the following code to test the function: 140 | 141 | ```json 142 | { 143 | "operation": "echo", 144 | "payload": { 145 | "somekey1": "somevalue1", 146 | "somekey2": "somevalue2" 147 | } 148 | } 149 | ``` 150 | 151 | Optionally, save in a text file named input.txt and execute the following CLI command: 152 | 153 | aws lambda invoke --function-name LambdaFunctionOverHttps --payload file://input.txt outputfile.txt 154 | 155 | ## 5 - Create 
REST API 156 | 157 | Create a REST API named "DynamoDBOperations" 158 | 159 | New API with a regional endpoint 160 | 161 | ## 6 - Create resource and method 162 | 163 | Create a resource named dynamodbmanager 164 | 165 | Create a POST method for the /dynamodbmanager resource 166 | 167 | Use a Lambda integration (non proxy) and enter the function name as LambdaFunctionOverHttps 168 | 169 | ## 7 - Create a DynamoDB table 170 | 171 | Create a table named lambda-apigateway 172 | 173 | For the partition key use "id" (string) 174 | 175 | ## 8 - Test the configuration 176 | 177 | In the REST API go to the /dynamodbmanager resource and choose the POST method 178 | 179 | In the Method Execution pane, in the Client box, choose Test 180 | 181 | In the Method Test pane, keep Query String and Headers empty, and for the request body enter the following JSON and choose "Test": 182 | 183 | ```json 184 | { 185 | "operation": "create", 186 | "tableName": "lambda-apigateway", 187 | "payload": { 188 | "Item": { 189 | "id": "1234ABCDE", 190 | "number": 5 191 | } 192 | } 193 | } 194 | ``` 195 | 196 | A 200 indicates a successful operation. 
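Behind the scenes, the function's `format_item` helper (step 3 above) converts this plain JSON payload into DynamoDB's typed attribute-value format before calling `put_item`. A trimmed-down sketch of that conversion, covering only the string and number cases used in this test payload (the list and map branches are omitted for brevity):

```python
# Simplified version of the lab's format_item helper: converts plain
# JSON values into DynamoDB's typed attribute-value representation.
def format_item(raw_item):
    formatted_item = {}
    for key, value in raw_item.items():
        if isinstance(value, str):
            formatted_item[key] = {"S": value}       # string attribute
        elif isinstance(value, (int, float)):
            formatted_item[key] = {"N": str(value)}  # numbers are sent as strings
        else:
            raise ValueError(f"Unsupported type for key {key}: {type(value)}")
    return formatted_item

# The Item from the test payload above becomes:
print(format_item({"id": "1234ABCDE", "number": 5}))
# {'id': {'S': '1234ABCDE'}, 'number': {'N': '5'}}
```

This is why `number` is stored in the table as a DynamoDB `N` (number) attribute even though the test payload sends it as a plain JSON integer.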
Go to DynamoDB to check that the item was created 197 | 198 | You can also update an item: 199 | 200 | ```json 201 | { 202 | "operation": "update", 203 | "tableName": "lambda-apigateway", 204 | "payload": { 205 | "Key": { 206 | "id": "1234ABCDE" 207 | }, 208 | "AttributeUpdates": { 209 | "number": { 210 | "Value": 999 211 | } 212 | } 213 | } 214 | } 215 | ``` 216 | -------------------------------------------------------------------------------- /amazon-api-gateway/rest-api-lambda-proxy.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | def lambda_handler(event, context): 4 | name = "you" 5 | city = "World" 6 | time = "day" 7 | day = "" 8 | response_code = 200 9 | 10 | print("request:", json.dumps(event)) 11 | 12 | query_params = event.get("queryStringParameters") or {} 13 | headers = event.get("headers") or {} 14 | body = json.loads(event.get("body", "{}")) if event.get("body") else {} 15 | 16 | if "name" in query_params: 17 | print("Received name:", query_params["name"]) 18 | name = query_params["name"] 19 | 20 | if "city" in query_params: 21 | print("Received city:", query_params["city"]) 22 | city = query_params["city"] 23 | 24 | if "day" in headers: 25 | print("Received day:", headers["day"]) 26 | day = headers["day"] 27 | 28 | if "time" in body: 29 | time = body["time"] 30 | 31 | greeting = f"Good {time}, {name} of {city}." 32 | if day: 33 | greeting += f" Happy {day}!"
34 | 35 | response_body = { 36 | "message": greeting, 37 | "input": event 38 | } 39 | 40 | response = { 41 | "statusCode": response_code, 42 | "headers": { 43 | "x-custom-header": "my custom header value" 44 | }, 45 | "body": json.dumps(response_body) 46 | } 47 | 48 | print("response:", json.dumps(response)) 49 | return response -------------------------------------------------------------------------------- /amazon-api-gateway/url-commands-to-run.md: -------------------------------------------------------------------------------- 1 | ## Run the API using this URL format: 2 | 3 | https://API/STAGE/helloworld?name=NAME&city=CITY 4 | 5 | ## Or use curl: 6 | 7 | curl -v -X POST "https://API/STAGE/helloworld?name=NAME&city=CITY" -H "content-type: application/json" -H "day: DAY" -d "{ \"time\": \"TIMEOFDAY\" }" -------------------------------------------------------------------------------- /amazon-cloudfront/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-cloudfront/.DS_Store -------------------------------------------------------------------------------- /amazon-cloudfront/cloudfront-cache-and-behavior.md: -------------------------------------------------------------------------------- 1 | # Amazon CloudFront Cache and Behavior Settings 2 | 3 | 1. Set up Amazon S3 buckets: 4 | - Create two S3 buckets for the files (e.g., 'pdf-bucket' and 'jpg-bucket') 5 | - Upload sample PDF files to the 'pdf-bucket' and JPG images to the 'jpg-bucket' 6 | - Create a bucket for the static website 7 | 8 | 2. For the static website: 9 | - Enable public access 10 | - Configure as a static website 11 | - Add the index.html (when ready) 12 | 13 | 3.
Configure Amazon CloudFront: 14 | - Create a new CloudFront distribution 15 | - Add the static website as an origin (use website endpoint) 16 | - Disable caching 17 | - Add 2 more origins for the buckets containing the files and create/configure OAC 18 | - Configure cache behavior settings for each origin based on file type (PDF or JPG) and default going to the static website -------------------------------------------------------------------------------- /amazon-cloudfront/files/coffee.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-cloudfront/files/coffee.jpg -------------------------------------------------------------------------------- /amazon-cloudfront/files/data-analysis.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-cloudfront/files/data-analysis.pdf -------------------------------------------------------------------------------- /amazon-cloudfront/files/documents.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-cloudfront/files/documents.pdf -------------------------------------------------------------------------------- /amazon-cloudfront/files/donut.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-cloudfront/files/donut.jpg -------------------------------------------------------------------------------- /amazon-cloudfront/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Sample Deployment 6 | 26 | 27 | 28 |
29 | [index.html body — markup stripped in this dump: page header "S3 Static Website", heading "Welcome to the Static Website", followed by a list of links] 37 |
38 | 39 | 40 | -------------------------------------------------------------------------------- /amazon-cloudwatch/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-cloudwatch/.DS_Store -------------------------------------------------------------------------------- /amazon-cloudwatch/custom-cloudwatch-metrics.md: -------------------------------------------------------------------------------- 1 | **These commands can be executed using AWS CloudShell** 2 | 3 | ## Create an IAM role and instance profile 4 | 5 | 1. Create an IAM policy 6 | 7 | aws iam create-policy --policy-name "CloudWatch-Put-Metric-Data" --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["cloudwatch:PutMetricData"],"Resource":"*"}]}' 8 | 9 | 2. Create an IAM role that uses the policy document 10 | 11 | aws iam create-role --role-name "CloudWatch-Role" --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}' 12 | 13 | 3. Attach the policy to the role (update policy ARN) 14 | 15 | aws iam attach-role-policy --role-name "CloudWatch-Role" --policy-arn "" 16 | 17 | 4. Create an instance profile 18 | 19 | aws iam create-instance-profile --instance-profile-name "CloudWatch-Instance-Profile" 20 | 21 | 5. Add the role to the instance profile 22 | 23 | aws iam add-role-to-instance-profile --instance-profile-name "CloudWatch-Instance-Profile" --role-name "CloudWatch-Role" 24 | 25 | ## Launch an EC2 instance 26 | 27 | 1. Create a security group 28 | 29 | aws ec2 create-security-group --group-name CustomMetricLab --description "Temporary SG for the Custom Metric Lab" 30 | 31 | 2. 
Add a rule for SSH inbound to the security group 32 | 33 | aws ec2 authorize-security-group-ingress --group-name CustomMetricLab --protocol tcp --port 22 --cidr 0.0.0.0/0 34 | 35 | 3. Launch instance in US-EAST-1A 36 | 37 | aws ec2 run-instances --image-id --instance-type t2.micro --placement AvailabilityZone=us-east-1a --security-group-ids --iam-instance-profile Name="CloudWatch-Instance-Profile" 38 | 39 | # Run the remaining commands from the EC2 instance 40 | 41 | ## Install stress 42 | 43 | sudo dnf install stress-ng -y 44 | 45 | ## Configure a shell script that uses the put-metric-data API 46 | 47 | 1. Create a shell script named mem-usage.sh 48 | 49 | sudo nano mem-usage.sh 50 | 51 | 2. Add the following code and save: 52 | 53 | ```bash 54 | #!/bin/bash 55 | 56 | # Create a token for IMDSv2 that expires after 60 seconds 57 | TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60" -s` 58 | 59 | # Use the token to fetch the EC2 instance ID 60 | INSTANCE_ID=`curl -H "X-aws-ec2-metadata-token: $TOKEN" -s http://169.254.169.254/latest/meta-data/instance-id` 61 | 62 | # Get memory usage and put metric data to CloudWatch 63 | MEMORY_USAGE=$(free | awk '/Mem/{printf("%d", ($2-$7)/$2*100)}') 64 | aws cloudwatch put-metric-data --region us-east-1 --namespace "Custom/Memory" --metric-name "MemUsage" --value "$MEMORY_USAGE" --unit "Percent" --dimensions "Name=InstanceId,Value=$INSTANCE_ID" 65 | ``` 66 | 67 | 3. Make the script executable 68 | 69 | sudo chmod +x mem-usage.sh 70 | 71 | 4. Run the following commands to install and run crontab 72 | 73 | ```bash 74 | sudo dnf install cronie 75 | sudo systemctl enable crond 76 | sudo systemctl start crond 77 | crontab -e 78 | ``` 79 | 80 | 5. Then, add the following line to execute the script every minute 81 | 82 | * * * * * /home/ec2-user/mem-usage.sh 83 | 84 | 6. 
Save by typing the following and pressing enter 85 | 86 | :wq 87 | 88 | ## Run the stress-ng utility to generate load 89 | 90 | stress-ng --vm 15 --vm-bytes 80% --vm-method all --verify -t 60m -v 91 | 92 | ## Create an alarm in CloudWatch 93 | 94 | 1. Create an alarm that is based on the custom metric 95 | 96 | 97 | -------------------------------------------------------------------------------- /amazon-dynamodb/DynamoDB CLI Commands.sh: -------------------------------------------------------------------------------- 1 | # Import data 2 | aws dynamodb batch-write-item --request-items file://mystore.json 3 | 4 | #### SCANS #### 5 | 6 | # Perform a scan of the mystore table: 7 | aws dynamodb scan --table-name mystore 8 | 9 | # Use Page-Size Parameter: 10 | aws dynamodb scan --table-name mystore --page-size 1 11 | aws dynamodb scan --table-name mystore --page-size 2 12 | 13 | # Use Max-Items Parameter: 14 | aws dynamodb scan --table-name mystore --max-items 1 15 | 16 | # Use Projection-Expression Parameter: 17 | aws dynamodb scan --table-name mystore --projection-expression "created" 18 | aws dynamodb scan --table-name mystore --projection-expression "category" 19 | aws dynamodb scan --table-name mystore --projection-expression "colour" 20 | 21 | # Use Filter-Expression Parameter (size is a DynamoDB reserved word, so alias it with an expression attribute name): 22 | aws dynamodb scan --table-name mystore --filter-expression "clientid = :username" --expression-attribute-values '{ ":username": { "S": "chris@example.com" }}' 23 | aws dynamodb scan --table-name mystore --filter-expression "#s = :n" --expression-attribute-names '{ "#s": "size" }' --expression-attribute-values '{ ":n": { "N": "12" }}' 24 | aws dynamodb scan --table-name mystore --filter-expression "#s > :n" --expression-attribute-names '{ "#s": "size" }' --expression-attribute-values '{ ":n": { "N": "12" }}' 25 | 26 | #### QUERIES #### 27 | 28 | # Use Key-Conditions Parameter: 29 | aws dynamodb query --table-name mystore --key-conditions '{ "clientid":{ "ComparisonOperator":"EQ", "AttributeValueList": [ {"S": "chris@example.com"} ] } }' 30 | 31 | # Use Key-Condition-Expression
Parameter: 32 | aws dynamodb query --table-name mystore --key-condition-expression "clientid = :name" --expression-attribute-values '{":name":{"S":"chris@example.com"}}' -------------------------------------------------------------------------------- /amazon-dynamodb/addupdateorder.json: -------------------------------------------------------------------------------- 1 | { 2 | "mystore": [ 3 | { 4 | "PutRequest": { 5 | "Item": { 6 | "clientid": { 7 | "S": "john@example.com" 8 | }, 9 | "created": { 10 | "S": "2020-03-9T08:12Z" 11 | }, 12 | "sku": { 13 | "S": "SKU-S523" 14 | }, 15 | "category": { 16 | "S": "T-Shirt" 17 | }, 18 | "size": { 19 | "S": "Small" 20 | }, 21 | "colour": { 22 | "S": "Blue" 23 | }, 24 | "qty": { 25 | "N": "1" 26 | }, 27 | "price": { 28 | "N": "30" 29 | }, 30 | "weight": { 31 | "S": "Light" 32 | } 33 | 34 | } 35 | } 36 | }, 37 | { 38 | "PutRequest": { 39 | "Item": { 40 | "clientid": { 41 | "S": "joseph@example.com" 42 | }, 43 | "created": { 44 | "S": "2020-04-04T10:12Z" 45 | }, 46 | "sku": { 47 | "S": "SKU-R203" 48 | }, 49 | "category": { 50 | "S": "Socks" 51 | }, 52 | "size": { 53 | "S": "Small" 54 | }, 55 | "qty": { 56 | "N": "1" 57 | }, 58 | "price": { 59 | "N": "20" 60 | }, 61 | "weight": { 62 | "S": "Light" 63 | } 64 | 65 | } 66 | } 67 | } 68 | ] 69 | 70 | } 71 | 72 | -------------------------------------------------------------------------------- /amazon-dynamodb/mystore.json: -------------------------------------------------------------------------------- 1 | { 2 | "mystore": [ 3 | { 4 | "PutRequest": { 5 | "Item": { 6 | "clientid": { 7 | "S": "john@example.com" 8 | }, 9 | "created": { 10 | "S": "2020-03-9T08:12Z" 11 | }, 12 | "sku": { 13 | "S": "SKU-S523" 14 | }, 15 | "category": { 16 | "S": "T-Shirt" 17 | }, 18 | "size": { 19 | "S": "Small" 20 | }, 21 | "colour": { 22 | "S": "Red" 23 | }, 24 | "qty": { 25 | "N": "1" 26 | }, 27 | "price": { 28 | "N": "30" 29 | }, 30 | "weight": { 31 | "S": "Light" 32 | } 33 | 34 | } 35 | } 36 | }, 37 | 
{ 38 | "PutRequest": { 39 | "Item": { 40 | "clientid": { 41 | "S": "chris@example.com" 42 | }, 43 | "created": { 44 | "S": "2020-03-10T14:30Z" 45 | }, 46 | "sku": { 47 | "S": "SKU-J091" 48 | }, 49 | "category": { 50 | "S": "Pen" 51 | }, 52 | "qty": { 53 | "N": "1" 54 | }, 55 | "price": { 56 | "N": "14.99" 57 | }, 58 | "colour": { 59 | "S": "Blue" 60 | } 61 | 62 | } 63 | } 64 | }, 65 | { 66 | "PutRequest": { 67 | "Item": { 68 | "clientid": { 69 | "S": "chris@example.com" 70 | }, 71 | "created": { 72 | "S": "2020-03-10T15:30Z" 73 | }, 74 | "sku": { 75 | "S": "SKU-A234" 76 | }, 77 | "category": { 78 | "S": "Mug" 79 | }, 80 | "qty": { 81 | "N": "2" 82 | }, 83 | "price": { 84 | "N": "8.99" 85 | }, 86 | "size": { 87 | "N": "12" 88 | } 89 | 90 | } 91 | } 92 | }, 93 | { 94 | "PutRequest": { 95 | "Item": { 96 | "clientid": { 97 | "S": "sarah@example.com" 98 | }, 99 | "created": { 100 | "S": "2020-03-12T7:42Z" 101 | }, 102 | "sku": { 103 | "S": "SKU-R873" 104 | }, 105 | "category": { 106 | "S": "Chair" 107 | }, 108 | "size": { 109 | "N": "94" 110 | }, 111 | "qty": { 112 | "N": "6" 113 | }, 114 | "price": { 115 | "N": "82.99" 116 | }, 117 | "weight": { 118 | "N": "4011" 119 | } 120 | 121 | } 122 | } 123 | }, 124 | { 125 | "PutRequest": { 126 | "Item": { 127 | "clientid": { 128 | "S": "jenny@example.com" 129 | }, 130 | "created": { 131 | "S": "2020-03-13T18:29Z" 132 | }, 133 | "sku": { 134 | "S": "SKU-I019" 135 | }, 136 | "category": { 137 | "S": "Plate" 138 | }, 139 | "qty": { 140 | "N": "12" 141 | }, 142 | "price": { 143 | "N": "119.99" 144 | }, 145 | "size": { 146 | "N": "30" 147 | } 148 | 149 | } 150 | } 151 | }, 152 | { 153 | "PutRequest": { 154 | "Item": { 155 | "clientid": { 156 | "S": "jose@example.com" 157 | }, 158 | "created": { 159 | "S": "2020-04-01T20:01Z" 160 | }, 161 | "sku": { 162 | "S": "SKU-U812" 163 | }, 164 | "category": { 165 | "S": "Phone Case" 166 | }, 167 | "qty": { 168 | "N": "1" 169 | }, 170 | "price": { 171 | "N": "19.99" 172 | }, 173 | "size": { 174 
| "S": "iPhone 8" 175 | } 176 | 177 | } 178 | } 179 | }, 180 | { 181 | "PutRequest": { 182 | "Item": { 183 | "clientid": { 184 | "S": "jess@example.com" 185 | }, 186 | "created": { 187 | "S": "2020-04-02T06:04Z" 188 | }, 189 | "sku": { 190 | "S": "SKU-P122" 191 | }, 192 | "category": { 193 | "S": "book" 194 | }, 195 | "qty": { 196 | "N": "1" 197 | }, 198 | "price": { 199 | "N": "24.95" 200 | }, 201 | "weight": { 202 | "N": "200" 203 | } 204 | 205 | } 206 | } 207 | }, 208 | { 209 | "PutRequest": { 210 | "Item": { 211 | "clientid": { 212 | "S": "cindy@example.com" 213 | }, 214 | "created": { 215 | "S": "2020-03-28T20:29Z" 216 | }, 217 | "sku": { 218 | "S": "SKU-L398" 219 | }, 220 | "qty": { 221 | "N": "1" 222 | }, 223 | "price": { 224 | "N": "12.99" 225 | }, 226 | "category": { 227 | "S": "Charger" 228 | } 229 | 230 | } 231 | } 232 | },{ 233 | "PutRequest": { 234 | "Item": { 235 | "clientid": { 236 | "S": "adam@example.com" 237 | }, 238 | "created": { 239 | "S": "2020-03-18T04:54Z" 240 | }, 241 | "sku": { 242 | "S": "SKU-K101" 243 | }, 244 | "category": { 245 | "S": "Bowl" 246 | }, 247 | "size": { 248 | "N": "20" 249 | }, 250 | "price": { 251 | "N": "32" 252 | }, 253 | "qty": { 254 | "N": "4" 255 | } 256 | 257 | } 258 | } 259 | },{ 260 | "PutRequest": { 261 | "Item": { 262 | "clientid": { 263 | "S": "safin@example.com" 264 | }, 265 | "created": { 266 | "S": "2020-03-21T22:27Z" 267 | }, 268 | "sku": { 269 | "S": "SKU-M011" 270 | }, 271 | "category": { 272 | "S": "Glasses" 273 | }, 274 | "model": { 275 | "S": "Champagne" 276 | }, 277 | "qty": { 278 | "N": "10" 279 | }, 280 | "price": { 281 | "N": "249.99" 282 | }, 283 | "finish": { 284 | "S": "Crystal" 285 | } 286 | 287 | } 288 | } 289 | },{ 290 | "PutRequest": { 291 | "Item": { 292 | "clientid": { 293 | "S": "carol@example.com" 294 | }, 295 | "created": { 296 | "S": "2020-03-27T19:19Z" 297 | }, 298 | "category": { 299 | "S": "Watch" 300 | }, 301 | "model": { 302 | "S": "NXC021Z" 303 | }, 304 | "qty": { 305 | "N": "2" 
306 | }, 307 | "price": { 308 | "N": "349.99" 309 | }, 310 | "brand": { 311 | "S": "Garmin" 312 | } 313 | 314 | } 315 | } 316 | }, 317 | { 318 | "PutRequest": { 319 | "Item": { 320 | "clientid": { 321 | "S": "jake@example.com" 322 | }, 323 | "created": { 324 | "S": "2020-03-18T19:29Z" 325 | }, 326 | "sku": { 327 | "S": "SKU-Q012" 328 | }, 329 | "category": { 330 | "S": "Camera" 331 | }, 332 | "brand": { 333 | "S": "Cannon" 334 | }, 335 | "qty": { 336 | "N": "1" 337 | }, 338 | "price": { 339 | "N": "429.99" 340 | }, 341 | "model": { 342 | "S": "EOS 5D MIV" 343 | } 344 | 345 | } 346 | } 347 | }, 348 | { 349 | "PutRequest": { 350 | "Item": { 351 | "clientid": { 352 | "S": "chris@example.com" 353 | }, 354 | "created": { 355 | "S": "2020-04-01T12:30Z" 356 | }, 357 | "sku": { 358 | "S": "SKU-B123" 359 | }, 360 | "category": { 361 | "S": "Batteries" 362 | }, 363 | "qty": { 364 | "N": "12" 365 | }, 366 | "price": { 367 | "N": "22.99" 368 | }, 369 | "size": { 370 | "S": "A3" 371 | } 372 | 373 | } 374 | } 375 | }, 376 | { 377 | "PutRequest": { 378 | "Item": { 379 | "clientid": { 380 | "S": "chris@example.com" 381 | }, 382 | "created": { 383 | "S": "2020-03-28T18:01Z" 384 | }, 385 | "sku": { 386 | "S": "SKU-C765" 387 | }, 388 | "category": { 389 | "S": "Ear Plugs" 390 | }, 391 | "qty": { 392 | "N": "1" 393 | }, 394 | "price": { 395 | "N": "6.99" 396 | } 397 | 398 | } 399 | } 400 | }, 401 | { 402 | "PutRequest": { 403 | "Item": { 404 | "clientid": { 405 | "S": "pj@example.com" 406 | }, 407 | "created": { 408 | "S": "2020-04-03T07:04Z" 409 | }, 410 | "sku": { 411 | "S": "SKU-P122" 412 | }, 413 | "category": { 414 | "S": "book" 415 | }, 416 | "qty": { 417 | "N": "1" 418 | }, 419 | "price": { 420 | "N": "24.95" 421 | }, 422 | "weight": { 423 | "N": "200" 424 | } 425 | 426 | } 427 | } 428 | }, 429 | { 430 | "PutRequest": { 431 | "Item": { 432 | "clientid": { 433 | "S": "werner@example.com" 434 | }, 435 | "created": { 436 | "S": "2020-04-02T06:04Z" 437 | }, 438 | "sku": { 439 | 
"S": "SKU-P122" 440 | }, 441 | "category": { 442 | "S": "book" 443 | }, 444 | "qty": { 445 | "N": "1" 446 | }, 447 | "price": { 448 | "N": "24.95" 449 | }, 450 | "weight": { 451 | "N": "200" 452 | } 453 | 454 | } 455 | } 456 | }, 457 | { 458 | "PutRequest": { 459 | "Item": { 460 | "clientid": { 461 | "S": "charles@example.com" 462 | }, 463 | "created": { 464 | "S": "2020-04-02T06:04Z" 465 | }, 466 | "sku": { 467 | "S": "SKU-P122" 468 | }, 469 | "category": { 470 | "S": "book" 471 | }, 472 | "qty": { 473 | "N": "4" 474 | }, 475 | "price": { 476 | "N": "24.95" 477 | }, 478 | "weight": { 479 | "N": "200" 480 | } 481 | 482 | } 483 | } 484 | }, 485 | { 486 | "PutRequest": { 487 | "Item": { 488 | "clientid": { 489 | "S": "kathy@example.com" 490 | }, 491 | "created": { 492 | "S": "2020-04-02T06:04Z" 493 | }, 494 | "sku": { 495 | "S": "SKU-P122" 496 | }, 497 | "category": { 498 | "S": "book" 499 | }, 500 | "qty": { 501 | "N": "2" 502 | }, 503 | "price": { 504 | "N": "24.95" 505 | }, 506 | "weight": { 507 | "N": "200" 508 | } 509 | 510 | } 511 | } 512 | }, 513 | { 514 | "PutRequest": { 515 | "Item": { 516 | "clientid": { 517 | "S": "leonard@example.com" 518 | }, 519 | "created": { 520 | "S": "2020-04-03T06:04Z" 521 | }, 522 | "sku": { 523 | "S": "SKU-T122" 524 | }, 525 | "category": { 526 | "S": "Phone Case" 527 | }, 528 | "qty": { 529 | "N": "2" 530 | }, 531 | "price": { 532 | "N": "19.95" 533 | }, 534 | "weight": { 535 | "S": "iPhone 8" 536 | } 537 | 538 | } 539 | } 540 | }, 541 | { 542 | "PutRequest": { 543 | "Item": { 544 | "clientid": { 545 | "S": "pat@example.com" 546 | }, 547 | "created": { 548 | "S": "2020-04-03T06:04Z" 549 | }, 550 | "sku": { 551 | "S": "SKU-T122" 552 | }, 553 | "category": { 554 | "S": "Phone Case" 555 | }, 556 | "qty": { 557 | "N": "2" 558 | }, 559 | "price": { 560 | "N": "19.95" 561 | }, 562 | "weight": { 563 | "S": "iPhone 8" 564 | } 565 | 566 | } 567 | } 568 | }, 569 | { 570 | "PutRequest": { 571 | "Item": { 572 | "clientid": { 573 | "S": 
"colin@example.com" 574 | }, 575 | "created": { 576 | "S": "2020-04-03T06:04Z" 577 | }, 578 | "sku": { 579 | "S": "SKU-T122" 580 | }, 581 | "category": { 582 | "S": "Phone Case" 583 | }, 584 | "qty": { 585 | "N": "4" 586 | }, 587 | "price": { 588 | "N": "19.95" 589 | }, 590 | "weight": { 591 | "S": "iPhone 8" 592 | } 593 | 594 | } 595 | } 596 | } 597 | 598 | 599 | 600 | ] 601 | } -------------------------------------------------------------------------------- /amazon-ebs/amazon-ebs-volumes.md: -------------------------------------------------------------------------------- 1 | # Amazon EBS Volume Lab 2 | 3 | ## Launch Instances in two AZs 4 | 5 | 1. Launch an instance using the Amazon Linux AMI in us-east-1a 6 | 2. Launch another instnace using the Amazon Linux AMI in us-east-1b 7 | 8 | ## Create and Attach an EBS Volume 9 | 1. Create a 10GB gp2 volume in us-east-1a with a name tag of 'data-volume' 10 | 2. List non-loopback block devices on instance 11 | sudo lsblk -e7 12 | 3. Attach the volume to the instance in us-east-1a 13 | 4. Rerun the command to view block devices 14 | 15 | ## Create a filesystem and mount the volume 16 | 1. Create a filesystem on the EBS volume 17 | sudo mkfs -t ext4 /dev/xvdf 18 | 2. Create a mount point for the EBS volume 19 | sudo mkdir /data 20 | 3. Mount the EBS volume to the mount point 21 | sudo mount /dev/xvdf /data 22 | 4. Make the volume mount persistent 23 | Run: 'sudo nano /etc/fstab' then add '/dev/xvdf /data ext4 defaults,nofail 0 2' and save the file 24 | 25 | ## Add some data to the volume 26 | 27 | 1. Change to the /data mount point directory 28 | 2. Create some files and folders 29 | 30 | ## Take a snapshot and move the volume to us-east-1b 31 | 32 | 1. Take a snapshot of the data volume 33 | 2. Create a new EBS volume from the snapshot in us-east-1b 34 | 3. Mount the new EBS volume to the instance in us-east-1b 35 | 4. 
Change to the /data mount point and view the data 36 | 37 | -------------------------------------------------------------------------------- /amazon-ec2/create-asg-alb-cli.md: -------------------------------------------------------------------------------- 1 | 2 | ## create auto scaling group 3 | 4 | aws autoscaling create-auto-scaling-group --auto-scaling-group-name ASG2 --launch-template "LaunchTemplateName=MyEC2WebApp" --min-size 1 --max-size 3 --desired-capacity 2 --availability-zones "us-east-1a" "us-east-1b" --vpc-zone-identifier "subnet-02a94e365a7db9848,subnet-00fcec5c9dcd1077d" 5 | 6 | ## create load balancer, create listener, and attach TG1 to ASG2 7 | 8 | aws elbv2 create-load-balancer --name ALB2 --subnets subnet-02a94e365a7db9848 subnet-00fcec5c9dcd1077d --security-groups sg-018ef94c41893157d 9 | 10 | aws elbv2 create-listener --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:821711655051:loadbalancer/app/ALB2/c3276fdb62a22113 --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:821711655051:targetgroup/TG1/e47504d36c5b8a7f 11 | 12 | aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name ASG2 --target-group-arns arn:aws:elasticloadbalancing:us-east-1:821711655051:targetgroup/TG1/e47504d36c5b8a7f 13 | 14 | ## delete ASG2 and ALB2 15 | 16 | aws elbv2 delete-load-balancer --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:821711655051:loadbalancer/app/ALB2/c3276fdb62a22113 17 | 18 | aws autoscaling delete-auto-scaling-group --auto-scaling-group-name ASG2 --force-delete -------------------------------------------------------------------------------- /amazon-ec2/ec2-user-data-web-app.md: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | yum update -y 3 | yum install -y httpd 4 | systemctl start httpd 5 | systemctl enable httpd 6 | EC2AZ=$(TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H
"X-aws-ec2-metadata-token-ttl-seconds: 21600"` && curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/placement/availability-zone) 7 | echo '<h1>This Amazon EC2 instance is located in Availability Zone: AZID</h1>' > /var/www/html/index.txt 8 | sed "s/AZID/$EC2AZ/" /var/www/html/index.txt > /var/www/html/index.html -------------------------------------------------------------------------------- /amazon-ec2/generate-load-on-alb.md: -------------------------------------------------------------------------------- 1 | # Command to generate load on the ALB 2 | 3 | ***replace with your alb dns name*** 4 | ```for i in {1..200}; do curl http://your-alb-address.com & done; wait``` 5 | -------------------------------------------------------------------------------- /amazon-ec2/user-data-metadata.md: -------------------------------------------------------------------------------- 1 | # IMDS v1 2 | 3 | ## Example commands to run: 4 | 5 | 1. Get the instance ID: 6 | curl http://169.254.169.254/latest/meta-data/instance-id 7 | 8 | 2. Get the AMI ID: 9 | curl http://169.254.169.254/latest/meta-data/ami-id 10 | 11 | 3. Get the instance type: 12 | curl http://169.254.169.254/latest/meta-data/instance-type 13 | 14 | 4. Get the local IPv4 address: 15 | curl http://169.254.169.254/latest/meta-data/local-ipv4 16 | 17 | 5. Get the public IPv4 address: 18 | curl http://169.254.169.254/latest/meta-data/public-ipv4 19 | 20 | 21 | # IMDS v2 22 | 23 | ## Step 1 - Create a session and get a token 24 | 25 | TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600") 26 | 27 | ## Step 2 - Use the token to request metadata 28 | 29 | 1. Get the instance ID: 30 | curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id 31 | 32 | 2. Get the AMI ID: 33 | curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/ami-id 34 | 35 | # Use metadata with user data to configure the instance 36 | 37 | This script installs a web server, uses instance metadata to retrieve information about the instance, and then outputs the information on a webpage.
38 | 39 | ```bash 40 | #!/bin/bash 41 | 42 | # Update system and install httpd (Apache) 43 | yum update -y 44 | yum install -y httpd 45 | 46 | # Start httpd service and enable it to start on boot 47 | systemctl start httpd 48 | systemctl enable httpd 49 | 50 | # Fetch metadata using IMDSv2 51 | TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600") 52 | INSTANCE_ID=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id) 53 | AMI_ID=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/ami-id) 54 | INSTANCE_TYPE=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-type) 55 | 56 | # Create a web page to display the metadata 57 | cat <<EOF > /var/www/html/index.html 58 | <html> 59 | <head> 60 | <title>EC2 Instance Metadata</title> 61 | </head> 62 | <body> 63 | <h1>EC2 Instance Metadata</h1> 64 | <p>Instance ID: $INSTANCE_ID</p> 65 | <p>AMI ID: $AMI_ID</p> 66 | <p>Instance Type: $INSTANCE_TYPE</p> 67 | </body> 68 | </html> 69 | EOF 70 | ``` -------------------------------------------------------------------------------- /amazon-ec2/user-data-web-server.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Update the system and install necessary packages 4 | yum update -y 5 | yum install -y httpd 6 | 7 | # Start the Apache server 8 | systemctl start httpd 9 | systemctl enable httpd 10 | 11 | # Fetch the Availability Zone information using IMDSv2 12 | TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` 13 | AZ=`curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/availability-zone` 14 | 15 | # Create the index.html file 16 | cat > /var/www/html/index.html <<EOF 17 | <!DOCTYPE html> 18 | <html> 19 | <head><title>Instance Availability Zone</title> 20 | <style>/* style rules elided in the source */</style> 33 | </head> 34 | <body> 35 | <h1>This instance is located in Availability Zone: $AZ</h1> 36 | </body> 37 | </html> 38 | EOF 39 | 40 | # Ensure the httpd service is correctly set up to start on boot 41 | chkconfig httpd on -------------------------------------------------------------------------------- /amazon-ecs/ec2-cluster-instances.md: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config -------------------------------------------------------------------------------- /amazon-efs/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-efs/.DS_Store -------------------------------------------------------------------------------- /amazon-efs/working-with-efs.md: -------------------------------------------------------------------------------- 1 | 2 | # Working with EFS 3 | 4 | ## Launch instances in multiple AZs 5 | 1. Create a security group 6 | aws ec2 create-security-group --group-name StorageLabs --description "Temporary SG for the Storage Service Labs" 7 | 2. Add a rule for SSH inbound to the security group 8 | aws ec2 authorize-security-group-ingress --group-name StorageLabs --protocol tcp --port 22 --cidr 0.0.0.0/0 9 | 3. Launch instance in US-EAST-1A 10 | aws ec2 run-instances --image-id ami-0440d3b780d96b29d --instance-type t2.micro --placement AvailabilityZone=us-east-1a --security-group-ids SECURITY-GROUP-ID 11 | 4. Launch instance in US-EAST-1B 12 | aws ec2 run-instances --image-id ami-0440d3b780d96b29d --instance-type t2.micro --placement AvailabilityZone=us-east-1b --security-group-ids SECURITY-GROUP-ID 13 | 14 | ## Create an EFS File System 15 | 1. Add a rule to the security group to allow the NFS protocol from group members 16 | 17 | ```aws ec2 authorize-security-group-ingress --group-id SECURITY-GROUP-ID --protocol tcp --port 2049 --source-group SECURITY-GROUP-ID``` 18 | 19 | 2.
Create an EFS file system through the console, and add the StorageLabs security group to the mount targets for each AZ 20 | 21 | ## Mount using the NFS Client (perform steps on both instances) 22 | 1. Create an EFS mount point 23 | mkdir ~/efs-mount-point 24 | 2. Install NFS client 25 | sudo yum -y install nfs-utils 26 | 3. Mount using the NFS client 27 | sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport FILE-SYSTEM-DNS-NAME:/ ~/efs-mount-point 28 | 4. Create a file on the file system 29 | 5. Add a file system policy to enforce encryption in transit 30 | 6. Unmount (make sure to change directory out of efs-mount-point first) 31 | sudo umount ~/efs-mount-point 32 | 7. Mount again using the NFS client (what happens?) 33 | 34 | ## Mount using the EFS utils (perform steps on both instances) 35 | 1. Install EFS utils 36 | sudo yum install -y amazon-efs-utils 37 | 2. Mount using the EFS mount helper 38 | sudo mount -t efs -o tls FILE-SYSTEM-ID:/ ~/efs-mount-point -------------------------------------------------------------------------------- /amazon-elasticache/add-retrieve-value.md: -------------------------------------------------------------------------------- 1 | # On an EC2 instance, install the following utilities 2 | 3 | ## PIP 4 | curl -O https://bootstrap.pypa.io/get-pip.py 5 | python3 get-pip.py --user 6 | ## Redis 7 | pip install redis 8 | 9 | 10 | # Create a Python script with the following code 11 | 12 | ```python 13 | import redis 14 | 15 | # Replace `your-endpoint` with the endpoint of your ElastiCache cluster, which you can find in the AWS console 16 | cache = redis.StrictRedis(host='your-endpoint', port=6379, db=0) 17 | 18 | # Storing a value in the cache 19 | cache.set('my-cached-key', 'my-cached-value') 20 | 21 | # Retrieving a value from the cache 22 | value = cache.get('my-cached-key') 23 | 24 | # Decoding the bytes literal to a string 25 | decoded_value = value.decode('utf-8') 26 | print(decoded_value) 27 | ``` 28 |
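The script above stores and retrieves a single key directly. In application code this is usually wrapped in a cache-aside helper: check the cache first and only hit the backing store on a miss. A minimal sketch of that pattern (the `DictCache` stub and function names are illustrative, not part of the course code; the same helper works with the `redis.StrictRedis` client above, since it only needs `get`/`set`):

```python
# Cache-aside sketch: try the cache first, fall back to a loader on a miss.
# The `cache` object only needs get/set, so a Redis client or any dict-like
# stub works. All names here are illustrative.

def get_with_cache(cache, key, loader):
    """Return the cached value for key, populating the cache on a miss."""
    value = cache.get(key)
    if value is not None:
        return value          # cache hit
    value = loader(key)       # cache miss: fetch from the source of truth
    cache.set(key, value)     # populate the cache for subsequent reads
    return value


class DictCache:
    """Minimal in-memory stand-in for a Redis client (get/set only)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value


if __name__ == "__main__":
    cache = DictCache()
    loads = []

    def load_from_db(key):
        loads.append(key)     # track trips to the backing store
        return f"value-for-{key}"

    print(get_with_cache(cache, "my-cached-key", load_from_db))  # loaded from source
    print(get_with_cache(cache, "my-cached-key", load_from_db))  # served from cache
    print(f"backing store hit {len(loads)} time(s)")
```

Swapping `DictCache()` for the `redis.StrictRedis` client from the script above gives the same behaviour against a real ElastiCache cluster.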
-------------------------------------------------------------------------------- /amazon-s3/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-s3/.DS_Store -------------------------------------------------------------------------------- /amazon-s3/event-notification-sns.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Id": "example-ID", 4 | "Statement": [ 5 | { 6 | "Sid": "Example SNS topic policy", 7 | "Effect": "Allow", 8 | "Principal": { 9 | "Service": "s3.amazonaws.com" 10 | }, 11 | "Action": [ 12 | "SNS:Publish" 13 | ], 14 | "Resource": "SNS-topic-ARN", 15 | "Condition": { 16 | "ArnLike": { 17 | "aws:SourceArn": "arn:aws:s3:*:*:bucket-name" 18 | }, 19 | "StringEquals": { 20 | "aws:SourceAccount": "bucket-owner-account-id" 21 | } 22 | } 23 | } 24 | ] 25 | } -------------------------------------------------------------------------------- /amazon-s3/permissions-lesson/bucket-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "AllowDeleteObject", 6 | "Effect": "Allow", 7 | "Principal": { 8 | "AWS": "arn:aws:iam::123456789012:user/my-example-user" 9 | }, 10 | "Action": [ 11 | "s3:DeleteObject" 12 | ], 13 | "Resource": [ 14 | "arn:aws:s3:::my-example-bucket/*" 15 | ] 16 | } 17 | ] 18 | } 19 | -------------------------------------------------------------------------------- /amazon-s3/permissions-lesson/identity-policy-deny.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "s3:ListAllMyBuckets", 8 | "s3:GetBucketLocation" 9 | ], 10 | "Resource": [ 11 | "*" 12 | ] 13 | }, 14 | { 15 | "Effect": "Allow", 
16 | "Action": [ 17 | "s3:ListBucket" 18 | ], 19 | "Resource": [ 20 | "*" 21 | ] 22 | }, 23 | { 24 | "Effect": "Allow", 25 | "Action": [ 26 | "s3:PutObject", 27 | "s3:GetObject" 28 | ], 29 | "Resource": [ 30 | "arn:aws:s3:::my-example-bucket/*" 31 | ] 32 | }, 33 | { 34 | "Sid": "DenyDeleteObject", 35 | "Effect": "Deny", 36 | "Action": "s3:DeleteObject", 37 | "Resource": [ 38 | "arn:aws:s3:::my-example-bucket/*" 39 | ] 40 | } 41 | ] 42 | } 43 | 44 | 45 | -------------------------------------------------------------------------------- /amazon-s3/permissions-lesson/identity-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "s3:ListAllMyBuckets", 8 | "s3:GetBucketLocation" 9 | ], 10 | "Resource": [ 11 | "*" 12 | ] 13 | }, 14 | { 15 | "Effect": "Allow", 16 | "Action": [ 17 | "s3:ListBucket" 18 | ], 19 | "Resource": [ 20 | "*" 21 | ] 22 | }, 23 | { 24 | "Effect": "Allow", 25 | "Action": [ 26 | "s3:PutObject", 27 | "s3:GetObject" 28 | ], 29 | "Resource": [ 30 | "arn:aws:s3:::my-example-bucket/*" 31 | ] 32 | } 33 | ] 34 | } 35 | -------------------------------------------------------------------------------- /amazon-s3/presigned_index.html: -------------------------------------------------------------------------------- 1 | <!DOCTYPE html> 2 | <html> 3 | <head><style>/* style rules elided in the source */ 12 | </style> 13 | </head> 14 | <body> 15 | <h1>Congratulations, you've accessed this S3 object using a presigned URL!</h1> 16 | </body> 17 | </html> -------------------------------------------------------------------------------- /amazon-s3/s3-enforce-kms-encryption.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Id": "PutObjectPolicy", 4 | "Statement": [ 5 | { 6 | "Sid": "DenyUnEncryptedObjectUploads", 7 | "Effect": "Deny", 8 | "Principal": "*", 9 | "Action": "s3:PutObject", 10 | "Resource": "arn:aws:s3:::BUCKET-NAME/*", 11 | "Condition": { 12 | "StringNotEquals": { 13 | "s3:x-amz-server-side-encryption": "aws:kms" 14 | } 15 | } 16 | } 17 | ] 18 | } -------------------------------------------------------------------------------- /amazon-s3/s3-replication-permissions.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version":"2012-10-17", 3 | "Statement":[ 4 | { 5 | "Effect":"Allow", 6 | "Action":[ 7 | "s3:GetReplicationConfiguration", 8 | "s3:ListBucket" 9 | ], 10 | "Resource":[ 11 | "arn:aws:s3:::SOURCE-BUCKET" 12 | ] 13 | }, 14 | { 15 | "Effect":"Allow", 16 | "Action":[ 17 | "s3:GetObjectVersionForReplication", 18 | "s3:GetObjectVersionAcl", 19 | "s3:GetObjectVersionTagging" 20 | ], 21 | "Resource":[ 22 | "arn:aws:s3:::SOURCE-BUCKET/*" 23 | ] 24 | }, 25 | { 26 | "Effect":"Allow", 27 | "Action":[ 28 | "s3:ReplicateObject", 29 | "s3:ReplicateDelete", 30 | "s3:ReplicateTags" 31 | ], 32 | "Resource":"arn:aws:s3:::DESTINATION-BUCKET/*" 33 | } 34 | ] 35 | } -------------------------------------------------------------------------------- /amazon-s3/s3-trust-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version":"2012-10-17", 3 | "Statement":[ 4 | { 5 | "Effect":"Allow", 6 | "Principal":{ 7 | "Service":"s3.amazonaws.com" 8 | }, 9 | "Action":"sts:AssumeRole" 10 | } 11 | ] 12 | } -------------------------------------------------------------------------------- /amazon-sqs/aws-sqs-cli-commands.md:
-------------------------------------------------------------------------------- 1 | ## AWS SQS CLI Commands 2 | 3 | aws sqs list-queues 4 | 5 | aws sqs send-message --queue-url QUEUE-URL --message-body test-message-1 --delay-seconds 10 6 | 7 | aws sqs receive-message --queue-url QUEUE-URL --wait-time-seconds 10 8 | 9 | aws sqs send-message --queue-url QUEUE-URL --message-body test-long-short-polling -------------------------------------------------------------------------------- /amazon-sqs/lambda-code-sns-sqs.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | def lambda_handler(event, context): 4 | for record in event.get("Records", []): 5 | try: 6 | body = record.get("body", "") 7 | print(f"Received message body: {body}") 8 | except Exception as e: 9 | print(f"Error processing record: {e}") 10 | 11 | return {"statusCode": 200, "body": "Processed successfully"} 12 | 13 | -------------------------------------------------------------------------------- /amazon-vpc/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/amazon-vpc/.DS_Store -------------------------------------------------------------------------------- /amazon-vpc/custom-vpc.md: -------------------------------------------------------------------------------- 1 | # Create VPC 2 | Name: MyVPC 3 | IPv4 CIDR Block: 10.0.0.0/16 4 | 5 | # Create Public and Private Subnets 6 | 7 | Name: Public-1A 8 | Availability Zone: us-east-1a 9 | IPv4 CIDR Block: 10.0.1.0/24 10 | 11 | Name: Public-1B 12 | Availability Zone: us-east-1b 13 | IPv4 CIDR Block: 10.0.2.0/24 14 | 15 | Name: Private-1A 16 | Availability Zone: us-east-1a 17 | IPv4 CIDR Block: 10.0.3.0/24 18 | 19 | Name: Private-1B 20 | Availability Zone: us-east-1b 21 | IPv4 CIDR Block: 10.0.4.0/24 22 | 23 | # Create private route table 24 | 25 | Name: Private-RT 26 | VPC: 
MyVPC 27 | Subnet associations: Private-1A, Private-1B 28 | 29 | # Create Internet Gateway 30 | 31 | Name: MyIGW 32 | VPC: MyVPC -------------------------------------------------------------------------------- /aws-cloudformation/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/aws-cloudformation/.DS_Store -------------------------------------------------------------------------------- /aws-cloudformation/1-ec2-template.yml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: Create an EC2 instance with a security group for SSH access 3 | Resources: 4 | InstanceSecurityGroup: 5 | Type: AWS::EC2::SecurityGroup 6 | Properties: 7 | GroupDescription: Enable SSH access 8 | SecurityGroupIngress: 9 | - IpProtocol: tcp 10 | FromPort: 22 11 | ToPort: 22 12 | CidrIp: 0.0.0.0/0 13 | MyInstance: 14 | Type: AWS::EC2::Instance 15 | Properties: 16 | ImageId: ami-0440d3b780d96b29d 17 | InstanceType: t2.micro 18 | SecurityGroups: 19 | - !Ref InstanceSecurityGroup -------------------------------------------------------------------------------- /aws-cloudformation/2-ec2-template.yml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: Attach an EBS volume to the EC2 instance, retaining original resources 3 | Resources: 4 | InstanceSecurityGroup: 5 | Type: AWS::EC2::SecurityGroup 6 | Properties: 7 | GroupDescription: Enable SSH access 8 | SecurityGroupIngress: 9 | - IpProtocol: tcp 10 | FromPort: 22 11 | ToPort: 22 12 | CidrIp: 0.0.0.0/0 13 | MyInstance: 14 | Type: AWS::EC2::Instance 15 | Properties: 16 | ImageId: ami-0440d3b780d96b29d 17 | InstanceType: t2.micro 18 | SecurityGroups: 19 | - !Ref InstanceSecurityGroup 20 | MyVolume: 21 | Type: AWS::EC2::Volume 22 | 
Properties: 23 | AvailabilityZone: !GetAtt MyInstance.AvailabilityZone 24 | Size: 10 25 | MyVolumeAttachment: 26 | Type: AWS::EC2::VolumeAttachment 27 | Properties: 28 | InstanceId: !Ref MyInstance 29 | VolumeId: !Ref MyVolume 30 | Device: /dev/sdf -------------------------------------------------------------------------------- /aws-cloudformation/3-ec2-template.yml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: '2010-09-09' 2 | Description: Add an Amazon S3 bucket to the setup, retaining all previous resources 3 | 4 | Resources: 5 | InstanceSecurityGroup: 6 | Type: AWS::EC2::SecurityGroup 7 | Properties: 8 | GroupDescription: Enable SSH access 9 | SecurityGroupIngress: 10 | - IpProtocol: tcp 11 | FromPort: 22 12 | ToPort: 22 13 | CidrIp: 0.0.0.0/0 14 | 15 | MyInstance: 16 | Type: AWS::EC2::Instance 17 | Properties: 18 | ImageId: ami-0440d3b780d96b29d # Ensure this AMI ID is valid for your region 19 | InstanceType: t2.micro 20 | SecurityGroups: 21 | - !Ref InstanceSecurityGroup 22 | 23 | MyVolume: 24 | Type: AWS::EC2::Volume 25 | Properties: 26 | AvailabilityZone: !GetAtt MyInstance.AvailabilityZone 27 | Size: 10 28 | 29 | MyVolumeAttachment: 30 | Type: AWS::EC2::VolumeAttachment 31 | Properties: 32 | InstanceId: !Ref MyInstance 33 | VolumeId: !Ref MyVolume 34 | Device: /dev/sdf 35 | 36 | MyS3Bucket: 37 | Type: AWS::S3::Bucket 38 | Properties: 39 | BucketName: my-unique-bucket-name-4rw3dda34 # Ensure this is globally unique -------------------------------------------------------------------------------- /aws-cloudformation/Create Nested Stack using the AWS CLI.md: -------------------------------------------------------------------------------- 1 | # Create Nested Stack using the AWS CLI 2 | 3 | 1.
Create a file named vpc.yaml with the following content 4 | 5 | ```yaml 6 | AWSTemplateFormatVersion: '2010-09-09' 7 | Resources: 8 | VPC: 9 | Type: AWS::EC2::VPC 10 | Properties: 11 | CidrBlock: 10.0.0.0/16 12 | EnableDnsSupport: true 13 | EnableDnsHostnames: true 14 | Tags: 15 | - Key: Name 16 | Value: NestedStackVPC 17 | Outputs: 18 | VpcId: 19 | Description: VPC ID 20 | Value: !Ref VPC 21 | ``` 22 | 23 | 2. Create a file named subnet1.yaml with the following content 24 | 25 | ```yaml 26 | AWSTemplateFormatVersion: '2010-09-09' 27 | Parameters: 28 | VpcId: 29 | Type: String 30 | Resources: 31 | Subnet1: 32 | Type: AWS::EC2::Subnet 33 | Properties: 34 | AvailabilityZone: !Select [ 0, !GetAZs '' ] 35 | CidrBlock: 10.0.1.0/24 36 | VpcId: !Ref VpcId 37 | Tags: 38 | - Key: Name 39 | Value: NestedStackSubnet1 40 | Outputs: 41 | Subnet1Id: 42 | Description: Subnet 1 ID 43 | Value: !Ref Subnet1 44 | ``` 45 | 46 | 3. Create a file named subnet2.yaml with the following content 47 | 48 | ```yaml 49 | AWSTemplateFormatVersion: '2010-09-09' 50 | Parameters: 51 | VpcId: 52 | Type: String 53 | Resources: 54 | Subnet2: 55 | Type: AWS::EC2::Subnet 56 | Properties: 57 | AvailabilityZone: !Select [ 1, !GetAZs '' ] 58 | CidrBlock: 10.0.2.0/24 59 | VpcId: !Ref VpcId 60 | Tags: 61 | - Key: Name 62 | Value: NestedStackSubnet2 63 | Outputs: 64 | Subnet2Id: 65 | Description: Subnet 2 ID 66 | Value: !Ref Subnet2 67 | ``` 68 | 69 | 4. Upload the vpc.yaml, subnet1.yaml, and subnet2.yaml files to an S3 bucket and retrieve the URLs 70 | 71 | aws s3 cp FILE-NAME s3://BUCKET-NAME/ 72 | 73 | aws s3api list-objects --bucket my-cloudformation-s3-bucket-3121s2 --query "Contents[].{Key: Key}" --output text | awk '{ print "https://my-cloudformation-s3-bucket-3121s2.s3.amazonaws.com/" $1 }' 74 | 75 | 5.
Create a file named main.yaml with the following content (replace your-bucket-name with the name of the S3 bucket where you uploaded the templates) 76 | 77 | ```yaml 78 | AWSTemplateFormatVersion: '2010-09-09' 79 | Resources: 80 | VPCStack: 81 | Type: AWS::CloudFormation::Stack 82 | Properties: 83 | TemplateURL: https://my-cloudformation-s3-bucket-3121s2.s3.amazonaws.com/vpc.yaml 84 | 85 | Subnet1Stack: 86 | Type: AWS::CloudFormation::Stack 87 | Properties: 88 | TemplateURL: https://my-cloudformation-s3-bucket-3121s2.s3.amazonaws.com/subnet1.yaml 89 | Parameters: 90 | VpcId: !GetAtt VPCStack.Outputs.VpcId 91 | 92 | Subnet2Stack: 93 | Type: AWS::CloudFormation::Stack 94 | Properties: 95 | TemplateURL: https://my-cloudformation-s3-bucket-3121s2.s3.amazonaws.com/subnet2.yaml 96 | Parameters: 97 | VpcId: !GetAtt VPCStack.Outputs.VpcId 98 | ``` 99 | 100 | 6. Deploy the main stack using the AWS CloudFormation CLI 101 | 102 | aws cloudformation create-stack --stack-name NestedStackExample --template-body file://main.yaml --capabilities CAPABILITY_NAMED_IAM 103 | 104 | 7. The stack can be deleted with the following command 105 | 106 | aws cloudformation delete-stack --stack-name NestedStackExample -------------------------------------------------------------------------------- /aws-cloudformation/create-vpc-with-cloudformation.yaml: -------------------------------------------------------------------------------- 1 | Description: This template deploys a VPC, with a pair of public and private subnets spread 2 | across two Availability Zones. It deploys an internet gateway, with a default 3 | route on the public subnets.
4 | 5 | Parameters: 6 | EnvironmentName: 7 | Description: An environment name that is prefixed to resource names 8 | Type: String 9 | 10 | VpcCIDR: 11 | Description: Please enter the IP range (CIDR notation) for this VPC 12 | Type: String 13 | Default: 10.8.0.0/16 14 | 15 | PublicSubnet1CIDR: 16 | Description: Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone 17 | Type: String 18 | Default: 10.8.10.0/24 19 | 20 | PublicSubnet2CIDR: 21 | Description: Please enter the IP range (CIDR notation) for the public subnet in the second Availability Zone 22 | Type: String 23 | Default: 10.8.11.0/24 24 | 25 | PrivateSubnet1CIDR: 26 | Description: Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone 27 | Type: String 28 | Default: 10.8.20.0/24 29 | 30 | PrivateSubnet2CIDR: 31 | Description: Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone 32 | Type: String 33 | Default: 10.8.21.0/24 34 | 35 | Resources: 36 | VPC: 37 | Type: AWS::EC2::VPC 38 | Properties: 39 | CidrBlock: !Ref VpcCIDR 40 | EnableDnsSupport: true 41 | EnableDnsHostnames: true 42 | Tags: 43 | - Key: Name 44 | Value: !Ref EnvironmentName 45 | 46 | InternetGateway: 47 | Type: AWS::EC2::InternetGateway 48 | Properties: 49 | Tags: 50 | - Key: Name 51 | Value: !Ref EnvironmentName 52 | 53 | InternetGatewayAttachment: 54 | Type: AWS::EC2::VPCGatewayAttachment 55 | Properties: 56 | InternetGatewayId: !Ref InternetGateway 57 | VpcId: !Ref VPC 58 | 59 | PublicSubnet1: 60 | Type: AWS::EC2::Subnet 61 | Properties: 62 | VpcId: !Ref VPC 63 | AvailabilityZone: !Select [ 0, !GetAZs '' ] 64 | CidrBlock: !Ref PublicSubnet1CIDR 65 | MapPublicIpOnLaunch: true 66 | Tags: 67 | - Key: Name 68 | Value: !Sub ${EnvironmentName} Public Subnet (AZ1) 69 | 70 | PublicSubnet2: 71 | Type: AWS::EC2::Subnet 72 | Properties: 73 | VpcId: !Ref VPC 74 | AvailabilityZone: !Select [ 1, !GetAZs '' ] 75 | CidrBlock: !Ref 
PublicSubnet2CIDR 76 | MapPublicIpOnLaunch: true 77 | Tags: 78 | - Key: Name 79 | Value: !Sub ${EnvironmentName} Public Subnet (AZ2) 80 | 81 | PrivateSubnet1: 82 | Type: AWS::EC2::Subnet 83 | Properties: 84 | VpcId: !Ref VPC 85 | AvailabilityZone: !Select [ 0, !GetAZs '' ] 86 | CidrBlock: !Ref PrivateSubnet1CIDR 87 | MapPublicIpOnLaunch: false 88 | Tags: 89 | - Key: Name 90 | Value: !Sub ${EnvironmentName} Private Subnet (AZ1) 91 | 92 | PrivateSubnet2: 93 | Type: AWS::EC2::Subnet 94 | Properties: 95 | VpcId: !Ref VPC 96 | AvailabilityZone: !Select [ 1, !GetAZs '' ] 97 | CidrBlock: !Ref PrivateSubnet2CIDR 98 | MapPublicIpOnLaunch: false 99 | Tags: 100 | - Key: Name 101 | Value: !Sub ${EnvironmentName} Private Subnet (AZ2) 102 | 103 | PublicRouteTable: 104 | Type: AWS::EC2::RouteTable 105 | Properties: 106 | VpcId: !Ref VPC 107 | Tags: 108 | - Key: Name 109 | Value: !Sub ${EnvironmentName} Public Routes 110 | 111 | DefaultPublicRoute: 112 | Type: AWS::EC2::Route 113 | DependsOn: InternetGatewayAttachment 114 | Properties: 115 | RouteTableId: !Ref PublicRouteTable 116 | DestinationCidrBlock: 0.0.0.0/0 117 | GatewayId: !Ref InternetGateway 118 | 119 | PublicSubnet1RouteTableAssociation: 120 | Type: AWS::EC2::SubnetRouteTableAssociation 121 | Properties: 122 | RouteTableId: !Ref PublicRouteTable 123 | SubnetId: !Ref PublicSubnet1 124 | 125 | PublicSubnet2RouteTableAssociation: 126 | Type: AWS::EC2::SubnetRouteTableAssociation 127 | Properties: 128 | RouteTableId: !Ref PublicRouteTable 129 | SubnetId: !Ref PublicSubnet2 130 | 131 | 132 | PrivateRouteTable1: 133 | Type: AWS::EC2::RouteTable 134 | Properties: 135 | VpcId: !Ref VPC 136 | Tags: 137 | - Key: Name 138 | Value: !Sub ${EnvironmentName} Private Routes (AZ1) 139 | 140 | PrivateSubnet1RouteTableAssociation: 141 | Type: AWS::EC2::SubnetRouteTableAssociation 142 | Properties: 143 | RouteTableId: !Ref PrivateRouteTable1 144 | SubnetId: !Ref PrivateSubnet1 145 | 146 | PrivateRouteTable2: 147 | Type: 
AWS::EC2::RouteTable 148 | Properties: 149 | VpcId: !Ref VPC 150 | Tags: 151 | - Key: Name 152 | Value: !Sub ${EnvironmentName} Private Routes (AZ2) 153 | 154 | PrivateSubnet2RouteTableAssociation: 155 | Type: AWS::EC2::SubnetRouteTableAssociation 156 | Properties: 157 | RouteTableId: !Ref PrivateRouteTable2 158 | SubnetId: !Ref PrivateSubnet2 159 | 160 | NoIngressSecurityGroup: 161 | Type: AWS::EC2::SecurityGroup 162 | Properties: 163 | GroupName: "no-ingress-sg" 164 | GroupDescription: "Security group with no ingress rule" 165 | VpcId: !Ref VPC 166 | 167 | Outputs: 168 | VPC: 169 | Description: A reference to the created VPC 170 | Value: !Ref VPC 171 | 172 | PublicSubnets: 173 | Description: A list of the public subnets 174 | Value: !Join [ ",", [ !Ref PublicSubnet1, !Ref PublicSubnet2 ]] 175 | 176 | PrivateSubnets: 177 | Description: A list of the private subnets 178 | Value: !Join [ ",", [ !Ref PrivateSubnet1, !Ref PrivateSubnet2 ]] 179 | 180 | PublicSubnet1: 181 | Description: A reference to the public subnet in the 1st Availability Zone 182 | Value: !Ref PublicSubnet1 183 | 184 | PublicSubnet2: 185 | Description: A reference to the public subnet in the 2nd Availability Zone 186 | Value: !Ref PublicSubnet2 187 | 188 | PrivateSubnet1: 189 | Description: A reference to the private subnet in the 1st Availability Zone 190 | Value: !Ref PrivateSubnet1 191 | 192 | PrivateSubnet2: 193 | Description: A reference to the private subnet in the 2nd Availability Zone 194 | Value: !Ref PrivateSubnet2 195 | 196 | NoIngressSecurityGroup: 197 | Description: Security group with no ingress rule 198 | Value: !Ref NoIngressSecurityGroup -------------------------------------------------------------------------------- /aws-copilot/copilot-commands.md: -------------------------------------------------------------------------------- 1 | # Deploy demo app using AWS Copilot 2 | 3 | 1. 
Launch an EC2 instance 4 | aws ec2 run-instances --image-id ami-0dfcb1ef8550277af --instance-type t2.micro 5 | 2. Using EC2 Instance Connect, install Docker on the instance 6 | sudo yum install docker 7 | sudo systemctl enable docker.service 8 | sudo systemctl start docker.service 9 | sudo usermod -a -G docker ec2-user 10 | id ec2-user 11 | newgrp docker 12 | 3. Install git 13 | sudo yum install git 14 | 4. Install the Copilot CLI 15 | sudo curl -Lo /usr/local/bin/copilot https://github.com/aws/copilot-cli/releases/latest/download/copilot-linux && sudo chmod +x /usr/local/bin/copilot && copilot --help 16 | 5. Clone the Copilot repository and change into the demo-app directory 17 | git clone https://github.com/aws-samples/amazon-ecs-cli-sample-app.git demo-app 18 | cd demo-app 19 | 6. Run the init command to start the build 20 | copilot init --app demo --name api --type 'Load Balanced Web Service' --dockerfile './Dockerfile' --port 80 --deploy 21 | 7. Delete the app by running the following command 22 | copilot app delete -------------------------------------------------------------------------------- /aws-developer-tools/AppSpec Examples/appspec-example-ec2.yml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | os: linux 3 | files: 4 | - source: Config/config.txt 5 | destination: /webapps/Config 6 | - source: source 7 | destination: /webapps/myApp 8 | hooks: 9 | BeforeInstall: 10 | - location: Scripts/UnzipResourceBundle.sh 11 | - location: Scripts/UnzipDataBundle.sh 12 | AfterInstall: 13 | - location: Scripts/RunResourceTests.sh 14 | timeout: 180 15 | ApplicationStart: 16 | - location: Scripts/RunFunctionalTests.sh 17 | timeout: 3600 18 | ValidateService: 19 | - location: Scripts/MonitorService.sh 20 | timeout: 3600 21 | runas: codedeployuser -------------------------------------------------------------------------------- /aws-developer-tools/AppSpec Examples/appspec-example-ecs.yml: 
-------------------------------------------------------------------------------- 1 | version: 0.0 2 | Resources: 3 | - TargetService: 4 | Type: AWS::ECS::Service 5 | Properties: 6 | TaskDefinition: "arn:aws:ecs:us-east-1:111222333444:task-definition/my-task-definition-family-name:1" 7 | LoadBalancerInfo: 8 | ContainerName: "SampleApplicationName" 9 | ContainerPort: 80 10 | # Optional properties 11 | PlatformVersion: "LATEST" 12 | NetworkConfiguration: 13 | AwsvpcConfiguration: 14 | Subnets: ["subnet-1234abcd","subnet-5678abcd"] 15 | SecurityGroups: ["sg-12345678"] 16 | AssignPublicIp: "ENABLED" 17 | Hooks: 18 | - BeforeInstall: "LambdaFunctionToValidateBeforeInstall" 19 | - AfterInstall: "LambdaFunctionToValidateAfterTraffic" 20 | - AfterAllowTestTraffic: "LambdaFunctionToValidateAfterTestTrafficStarts" 21 | - BeforeAllowTraffic: "LambdaFunctionToValidateBeforeAllowingProductionTraffic" 22 | - AfterAllowTraffic: "LambdaFunctionToValidateAfterAllowingProductionTraffic" -------------------------------------------------------------------------------- /aws-developer-tools/AppSpec Examples/appspec-example-lambda.yml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | Resources: 3 | - myLambdaFunction: 4 | Type: AWS::Lambda::Function 5 | Properties: 6 | Name: "myLambdaFunction" 7 | Alias: "myLambdaFunctionAlias" 8 | CurrentVersion: "1" 9 | TargetVersion: "2" 10 | Hooks: 11 | - BeforeAllowTraffic: "LambdaFunctionToValidateBeforeTrafficShift" 12 | - AfterAllowTraffic: "LambdaFunctionToValidateAfterTrafficShift" -------------------------------------------------------------------------------- /aws-developer-tools/appspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | Resources: 3 | - TargetService: 4 | Type: AWS::ECS::Service 5 | Properties: 6 | TaskDefinition: "TASK-DEFINITION-ARN" 7 | LoadBalancerInfo: 8 | ContainerName: "httpd" 9 | ContainerPort: 80 
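
The ECS AppSpec examples above register Lambda validation functions as CodeDeploy lifecycle hooks. As a quick sanity check of the ordering those files rely on, here is a self-contained Python sketch (an illustration only, not part of the course files; the `appspec` dict is a hand-built stand-in for the parsed YAML `Hooks` section):

```python
# CodeDeploy runs ECS blue/green lifecycle hooks in this fixed order.
LIFECYCLE_ORDER = [
    "BeforeInstall",
    "AfterInstall",
    "AfterAllowTestTraffic",
    "BeforeAllowTraffic",
    "AfterAllowTraffic",
]

def hooks_in_lifecycle_order(appspec):
    """Return True if every hook name is known and appears in lifecycle order."""
    names = [name for hook in appspec.get("Hooks", []) for name in hook]
    if any(n not in LIFECYCLE_ORDER for n in names):
        return False  # unknown hook name
    positions = [LIFECYCLE_ORDER.index(n) for n in names]
    return positions == sorted(positions)

# Hand-built stand-in for the Hooks section of appspec-example-ecs.yml.
appspec = {
    "Hooks": [
        {"BeforeInstall": "LambdaFunctionToValidateBeforeInstall"},
        {"AfterAllowTestTraffic": "LambdaFunctionToValidateAfterTestTrafficStarts"},
        {"BeforeAllowTraffic": "LambdaFunctionToValidateBeforeAllowingProductionTraffic"},
        {"AfterAllowTraffic": "LambdaFunctionToValidateAfterAllowingProductionTraffic"},
    ]
}

print(hooks_in_lifecycle_order(appspec))  # True
```

This only illustrates the ordering; in a real deployment each hook's Lambda function must also report success or failure back to CodeDeploy (via the `PutLifecycleEventHookExecutionStatus` API) before the deployment proceeds.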
-------------------------------------------------------------------------------- /aws-developer-tools/basic-git-commands.md: -------------------------------------------------------------------------------- 1 | ## Basic Git commands 2 | 3 | git --version 4 | 5 | ## Create a directory / folder 6 | 7 | mkdir (foldername) 8 | cd (foldername) 9 | 10 | ## Initialize repository 11 | 12 | git init 13 | 14 | ## Create and add test file 15 | 16 | touch (filename) 17 | git status 18 | git add (filename) 19 | 20 | ## Add multiple files 21 | 22 | git add -A 23 | 24 | ## Commit file(s) 25 | 26 | git commit -m "first commit" 27 | 28 | ## Clone from Repository 29 | 30 | git clone REPOSITORY-URL 31 | 32 | ## Push to Repository 33 | 34 | git push -------------------------------------------------------------------------------- /aws-developer-tools/buildspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | phases: 4 | install: 5 | commands: 6 | - echo "Entered the install phase..." 7 | pre_build: 8 | commands: 9 | - echo "Entered the pre_build phase..." 10 | build: 11 | commands: 12 | - echo "Entered the build phase..." 13 | - echo "Build started on `date`" 14 | - find production.txt 15 | post_build: 16 | commands: 17 | - echo "Entered the post_build phase..." 
18 | - echo "Build completed on `date`" 19 | 20 | -------------------------------------------------------------------------------- /aws-developer-tools/nodejs-blue.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/aws-developer-tools/nodejs-blue.zip -------------------------------------------------------------------------------- /aws-iam/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nealdct/aws-dva-code/0b558a18c3fed95028fad930aa383d95e5a641e3/aws-iam/.DS_Store -------------------------------------------------------------------------------- /aws-iam/profile-ec2-full-access.config: -------------------------------------------------------------------------------- 1 | [profile ec2-full-access] 2 | role_arn = arn:aws:iam::821711655051:role/EC2-Full-Access 3 | source_profile = default -------------------------------------------------------------------------------- /aws-iam/sts-assume-role.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": { 4 | "Effect": "Allow", 5 | "Action": "sts:AssumeRole", 6 | "Resource": "arn:aws:iam::975050181034:role/ec2-role" 7 | } 8 | } 9 | -------------------------------------------------------------------------------- /aws-kms/kms-key-ebs-rds.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Id": "KMSKeyPolicy", 4 | "Statement": [ 5 | { 6 | "Sid": "Allow access for Key Administrators", 7 | "Effect": "Allow", 8 | "Principal": { 9 | "AWS": "arn:aws:iam::111122223333:user/User" 10 | }, 11 | "Action": [ 12 | "kms:Create*", 13 | "kms:Describe*", 14 | "kms:Enable*", 15 | "kms:List*", 16 | "kms:Put*", 17 | "kms:Update*", 18 | "kms:Revoke*", 19 | "kms:Disable*", 20 | "kms:Get*", 
21 | "kms:Delete*", 22 | "kms:TagResource", 23 | "kms:UntagResource", 24 | "kms:ScheduleKeyDeletion", 25 | "kms:CancelKeyDeletion" 26 | ], 27 | "Resource": "*" 28 | }, 29 | { 30 | "Sid": "RestrictKeyUsageToEC2andRDS", 31 | "Effect": "Allow", 32 | "Principal": { 33 | "AWS": "arn:aws:iam::111122223333:user/User" 34 | }, 35 | "Action": [ 36 | "kms:Encrypt", 37 | "kms:Decrypt", 38 | "kms:ReEncrypt*", 39 | "kms:GenerateDataKey*", 40 | "kms:CreateGrant", 41 | "kms:ListGrants", 42 | "kms:DescribeKey" 43 | ], 44 | "Resource": "*", 45 | "Condition": { 46 | "StringEquals": { 47 | "kms:ViaService": [ 48 | "ec2.us-east-1.amazonaws.com", 49 | "rds.us-east-1.amazonaws.com" 50 | ] 51 | } 52 | } 53 | } 54 | ] 55 | } 56 | 57 | 58 | *** Policy Statement for Cross Account copy 59 | 60 | { 61 | "Sid": "Allow use of the key with destination account", 62 | "Effect": "Allow", 63 | "Principal": { 64 | "AWS": "arn:aws:iam::TARGET-ACCOUNT-ID:role/ROLENAME" 65 | }, 66 | "Action": [ 67 | "kms:Decrypt", 68 | "kms:CreateGrant" 69 | ], 70 | "Resource": "*", 71 | "Condition": { 72 | "StringEquals": { 73 | "kms:ViaService": "ec2.REGION.amazonaws.com", 74 | "kms:CallerAccount": "TARGET-ACCOUNT-ID" 75 | } 76 | } 77 | } 78 | 79 | 80 | -------------------------------------------------------------------------------- /aws-lambda/destinations-and-dlq.md: -------------------------------------------------------------------------------- 1 | ## Function code 2 | 3 | ```python 4 | import json 5 | from datetime import datetime 6 | 7 | def lambda_handler(event, context): 8 | event_received_at = datetime.utcnow().isoformat() 9 | print('Event received at:', event_received_at) 10 | print('Received event:', json.dumps(event, indent=2)) 11 | 12 | success_flag = event.get('Success') 13 | 14 | if success_flag is True: 15 | print("Success") 16 | return { 17 | "statusCode": 200, 18 | "body": "Function succeeded" 19 | } 20 | else: 21 | print("Failure") 22 | raise Exception("Failure from event, Success = false, I am 
failing!") 23 | ``` 24 | 25 | ## Generate success message 26 | 27 | aws lambda invoke --function-name desttest --invocation-type Event --payload eyJTdWNjZXNzIjp0cnVlfQ== response.json 28 | 29 | {"Success":true} 30 | 31 | ## Failure message 32 | 33 | {"Success":false} 34 | 35 | aws lambda invoke --function-name desttest --invocation-type Event --payload eyJTdWNjZXNzIjpmYWxzZX0= response.json 36 | 37 | -------------------------------------------------------------------------------- /aws-lambda/event-source-mapping.md: -------------------------------------------------------------------------------- 1 | ## Create Lambda function and SQS queue 2 | 3 | 1. Create a Lambda function using the Python runtime 4 | 2. Add the following code: 5 | 6 | ```python 7 | import json 8 | import logging 9 | 10 | # Configure logging to send logs to CloudWatch 11 | logger = logging.getLogger() 12 | logger.setLevel(logging.INFO) 13 | 14 | def lambda_handler(event, context): 15 | """ 16 | AWS Lambda function to process SQS messages. 17 | Logs each message body to CloudWatch. 18 | """ 19 | try: 20 | # Check if 'Records' exist in the event 21 | if "Records" in event: 22 | for record in event["Records"]: 23 | body = record.get("body", "No body found") 24 | 25 | # Log the message body 26 | logger.info(f"Received message: {body}") 27 | 28 | else: 29 | logger.warning("No records found in the event.") 30 | 31 | except Exception as e: 32 | logger.error(f"Error processing SQS message: {str(e)}") 33 | 34 | return {"statusCode": 200, "body": "Messages processed"} 35 | ``` 36 | 37 | 3. Add the "AWSLambdaSQSQueueExecutionRole" permissions policy to the function execution role 38 | 4. Create a standard SQS queue 39 | 40 | ## Create the event-source mapping 41 | 42 | 1. Run the following AWS CLI command to create the event source mapping: 43 | 44 | aws lambda create-event-source-mapping --function-name EventSourceSQS --batch-size 10 --event-source-arn YOUR-SQS-QUEUE-ARN 45 | 46 | 2. 
Run the following AWS CLI command to list the event source mappings: 47 | 48 | aws lambda list-event-source-mappings --function-name EventSourceSQS --event-source-arn YOUR-SQS-QUEUE-ARN 49 | 50 | 3. Send a test message manually in the SQS queue and check the output appears in CloudWatch Logs 51 | 52 | 4. Run the following AWS CLI command to delete the event source mapping: 53 | aws lambda delete-event-source-mapping --uuid YOUR-ESM-UUID -------------------------------------------------------------------------------- /aws-lambda/invoke-lambda-versions-aliases.md: -------------------------------------------------------------------------------- 1 | ## Invoke Lambda versions 2 | 3 | aws lambda invoke --function-name myversiontest response.json 4 | aws lambda invoke --function-name myversiontest:\$LATEST response.json 5 | 6 | aws lambda invoke --function-name myversiontest:1 response.json 7 | 8 | ## Invoke Lambda alias 9 | 10 | aws lambda invoke --function-name myversiontest:myapp response.json 11 | -------------------------------------------------------------------------------- /aws-lambda/invoking-functions.md: -------------------------------------------------------------------------------- 1 | ## Create payload 2 | 3 | Create a file named "payload.json" with the following code: 4 | 5 | {"key1": "value1", "key2": "value2"} 6 | 7 | ## Invoke function synchronously 8 | 9 | aws lambda invoke --function-name mytestfunction --payload fileb://payload.json response.json 10 | 11 | aws lambda invoke --function-name mytestfunction out 12 | 13 | ## Invoke function asynchronously 14 | 15 | aws lambda invoke --function-name mytestfunction --invocation-type Event --payload fileb://payload.json response.json 16 | 17 | -------------------------------------------------------------------------------- /aws-lambda/lambda-environ-test.md: -------------------------------------------------------------------------------- 1 | ```python 2 | import json 3 | import os 4 | 5 | def 
lambda_handler(event, context): 6 | DB_HOST = os.environ["DB_HOST"] 7 | DB_USER = os.environ["DB_USER"] 8 | DB_PASS = os.environ["DB_PASS"] 9 | print("Connected to %s as %s with %s" % (DB_HOST, DB_USER, DB_PASS)) 10 | return { 11 | 'statusCode': 200, 12 | 'body': json.dumps('Hello from Lambda!') 13 | } 14 | ``` 15 | 16 | ## Code for encrypted variables 17 | 18 | ```python 19 | import json 20 | import os 21 | import boto3 22 | from base64 import b64decode 23 | 24 | def lambda_handler(event, context): 25 | DB_HOST = os.environ["DB_HOST"] 26 | DB_USER = os.environ["DB_USER"] 27 | # DB_PASS holds a KMS-encrypted value; decrypt it before use 28 | ENCRYPTED = os.environ['DB_PASS'] 29 | DECRYPTED = boto3.client('kms').decrypt( 30 | CiphertextBlob=b64decode(ENCRYPTED), 31 | EncryptionContext={'LambdaFunctionName': os.environ['AWS_LAMBDA_FUNCTION_NAME']} 32 | )['Plaintext'].decode('utf-8') 33 | print("Connected to %s as %s with %s" % (DB_HOST, DB_USER, DECRYPTED)) 34 | return { 35 | 'statusCode': 200, 36 | 'body': json.dumps('Hello from Lambda!') 37 | } 38 | ``` 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | -------------------------------------------------------------------------------- /aws-lambda/sam-cli-commands.md: -------------------------------------------------------------------------------- 1 | ## Deploy Hello World application 2 | 3 | python --version 4 | 5 | ***update Python runtime version as per output from previous command*** 6 | 7 | sam init --runtime python3.9 --dependency-manager pip --app-template hello-world --name sam-app 8 | 9 | cd sam-app 10 | 11 | sam build 12 | 13 | sam deploy --guided 14 | 15 | curl API-ENDPOINT-URL 16 | 17 | -------------------------------------------------------------------------------- /aws-secrets-manager/secrets-manager-cli-commands.md: -------------------------------------------------------------------------------- 1 | ## AWS CLI commands for Secrets Manager 2 | 3 | aws secretsmanager list-secrets 4 | 5 | aws secretsmanager create-secret --name
dev-db-secret --description "This is the password for the development DB" --secret-string "MySecretSecureStringXYZ" 6 | 7 | aws secretsmanager get-secret-value --secret-id dev-db-secret 8 | 9 | aws secretsmanager describe-secret --secret-id dev-db-secret 10 | 11 | aws secretsmanager update-secret --secret-id dev-db-secret --secret-string "NewSecretStringXYZ" 12 | 13 | aws secretsmanager delete-secret --secret-id dev-db-secret -------------------------------------------------------------------------------- /aws-step-functions/Step Functions with Lambda.md: -------------------------------------------------------------------------------- 1 | ## AWS Step Functions State Machine with Lambda 2 | 3 | ## Step 1 - Create a Lambda Function 4 | 5 | Create a Lambda function: 6 | Name = HelloFunction 7 | Runtime = Node.js 12.x 8 | Role = Create a new role with basic Lambda permissions 9 | Note the function ARN: *FUNCTION-ARN* 10 | Add the following code and save/deploy: 11 | 12 | exports.handler = (event, context, callback) => { 13 | callback(null, "Hello, " + event.who + "!"); 14 | }; 15 | 16 | Configure a test event with the following data: 17 | 18 | { 19 | "who": "AWS Step Functions" 20 | } 21 | 22 | ## Step 2 - Create a State Machine 23 | 24 | Create a state machine 25 | Choose to write workflow in code 26 | Choose the standard type 27 | Under Definition add the following code (update function ARN): 28 | { 29 | "Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function", 30 | "StartAt": "HelloWorld", 31 | "States": { 32 | "HelloWorld": { 33 | "Type": "Task", 34 | "Resource": "arn:aws:lambda:us-east-1:123456789012:function:HelloFunction", 35 | "End": true 36 | } 37 | } 38 | } 39 | 40 | Choose next and create a new IAM role 41 | Complete the wizard to create the state machine 42 | Choose start execution and enter the following code (update your name): 43 | 44 | { 45 | "who" : "YOUR NAME" 46 | } 47 | 48 | View the results in the execution 
output -------------------------------------------------------------------------------- /fargate-blue-green-ci-cd/appspec.yaml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | Resources: 3 | - TargetService: 4 | Type: AWS::ECS::Service 5 | Properties: 6 | TaskDefinition: <TASK_DEFINITION> 7 | LoadBalancerInfo: 8 | ContainerName: "sample-website" 9 | ContainerPort: 80 -------------------------------------------------------------------------------- /fargate-blue-green-ci-cd/create-service.json: -------------------------------------------------------------------------------- 1 | { 2 | "taskDefinition": "ecs-lab:1", 3 | "cluster": "ecs-cluster", 4 | "loadBalancers": [ 5 | { 6 | "targetGroupArn": "TARGET-GROUP-ARN", 7 | "containerName": "sample-website", 8 | "containerPort": 80 9 | } 10 | ], 11 | "desiredCount": 1, 12 | "launchType": "FARGATE", 13 | "schedulingStrategy": "REPLICA", 14 | "deploymentController": { 15 | "type": "CODE_DEPLOY" 16 | }, 17 | "networkConfiguration": { 18 | "awsvpcConfiguration": { 19 | "subnets": [ 20 | "SUBNET-ID", 21 | "SUBNET-ID" 22 | ], 23 | "securityGroups": [ 24 | "SECURITY-GROUP-ID" 25 | ], 26 | "assignPublicIp": "ENABLED" 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /fargate-blue-green-ci-cd/ecr-allow-all.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Id": "ECRPolicy", 4 | "Statement": [ 5 | { 6 | "Sid": "AllowAll", 7 | "Effect": "Allow", 8 | "Action": "ecr:*", 9 | "Resource": "*" 10 | } 11 | ] 12 | } -------------------------------------------------------------------------------- /fargate-blue-green-ci-cd/fargate-ci-cd-instructions.md: -------------------------------------------------------------------------------- 1 | ## Part 1 - [HOL] Create Image and Push to ECR Repository 2 | 3 | ### Using EC2 4 | 5 | 1. Launch EC2 instance - Amazon Linux AMI, t2.micro 6 | 2.
Connect to the instance using EC2 Instance Connect 7 | 3. Attach an IAM role to EC2 and use policy "ecr-allow-all.json" 8 | 4. Run the following commands on EC2: 9 | 10 | ```bash 11 | sudo su 12 | yum update -y 13 | yum install -y docker 14 | systemctl enable docker.service 15 | systemctl start docker.service 16 | docker pull nginx 17 | docker images # view the images 18 | ``` 19 | 20 | ### Using AWS CloudShell 21 | 22 | 1. Docker is typically pre-installed in CloudShell, so you only need to run the following command: 23 | 24 | ```bash 25 | docker pull nginx 26 | ``` 27 | 28 | ### Create Repo, tag, and push image 29 | 30 | ```bash 31 | aws ecr create-repository --repository-name nginx --region us-east-1 32 | docker tag nginx:latest ACCOUNT-ID.dkr.ecr.us-east-1.amazonaws.com/nginx:latest 33 | aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ACCOUNT-ID.dkr.ecr.us-east-1.amazonaws.com/nginx 34 | docker push ACCOUNT-ID.dkr.ecr.us-east-1.amazonaws.com/nginx:latest 35 | ``` 36 | 37 | ## Part 2 - [HOL] Create Task Definition and ALB 38 | 39 | 1. Update the account ID in the role ARN within the `taskdef.json` file 40 | 2. From a CLI with ECS permissions, change to a directory with the `taskdef.json` file and run the following command: 41 | 42 | ```bash 43 | aws ecs register-task-definition --cli-input-json file://taskdef.json 44 | ``` 45 | 46 | 3. Create an Application Load Balancer 47 | - Should be internet facing 48 | - Listen on HTTP port 80 49 | - Add a second listener on HTTP 8080 50 | - Choose 2 public subnets in different AZs 51 | - Create a new TG - target-group-1, protocol HTTP 80, target type = IP address 52 | - Create a second TG - target-group-2, protocol HTTP 8080, target type = IP address 53 | - For the second listener, forward to target-group-2 54 | 55 | 4. Update the security group to allow inbound on 80 and 8080 56 | 57 | ## Part 3 - [HOL] Create Fargate Cluster and Service 58 | 59 | 1. Create a Fargate cluster named "ecs-cluster" 60 | 2.
Update the placeholders in the `create-service.json` file 61 | 3. From a CLI with ECS permissions, change to the directory with the `create-service.json` file and run the following commands: 62 | 63 | ```bash 64 | aws ecs create-service --service-name my-service --cli-input-json file://create-service.json 65 | aws ecs describe-services --cluster ecs-cluster --services my-service 66 | ``` 67 | 68 | ## Part 4 - [HOL] ECS Lab - CodeDeploy Application and Pipeline 69 | 70 | ***You will need a personal GitHub account for these exercises*** 71 | 72 | ### Create GitHub Repo and Commit files 73 | 74 | 1. Create a GitHub repository called "ecs-lab" 75 | 2. Clone the repository 76 | 3. Edit the taskdef.json file and change the image name to "<IMAGE1_NAME>", then save and copy it into the repository folder 77 | 4. Also copy the `appspec.yaml` file into the repository 78 | 5. Commit files and push to GitHub using the following commands: 79 | 80 | ```bash 81 | git add -A 82 | git commit -m "first commit" 83 | git push 84 | ``` 85 | 86 | ### Create CodeDeploy Application 87 | 88 | 1. Create an IAM role for CodeDeploy 89 | 2. Use case should be CodeDeploy - ECS 90 | 3. Add the policy AWSCodeDeployRoleForECS 91 | 4. Enter the name as CodeDeployECSRole 92 | 93 | 5. Create an application in CodeDeploy named ecs-lab 94 | 6. Choose ECS as the compute platform 95 | 7. Create a deployment group named codedeploy-ecs 96 | 8. Select the service role, cluster, and service 97 | 9. Select the ALB and associated settings 98 | 99 | ### Connect your GitHub Repo to CodePipeline 100 | 101 | 1. In the Developer Tools console, go to Settings > Connections 102 | 2. Click "Create connection" 103 | 3. Select "GitHub" 104 | 4. Provide a connection name 105 | 5. Click "Connect to GitHub" 106 | 6. Authenticate to your personal GitHub account 107 | 108 | ### Create a Pipeline 109 | 110 | 1. Create a pipeline in CodePipeline named MyImagePipeline 111 | 2. Allow CodePipeline to create a new IAM role 112 | 3.
For source provider select "GitHub (via GitHub App)" 113 | 4. Select the connection and enter the repo name and main branch 114 | 5. Skip the build stage and test stage 115 | 6. Choose ECS (Blue/Green) for the deploy provider 116 | 7. Select the app name and deployment group 117 | 8. For ECS task definition specify the SourceArtifact as taskdef.json 118 | 9. For CodeDeploy AppSpec file specify the SourceArtifact as appspec.yaml 119 | 10. Create the Pipeline 120 | 121 | ### Edit the source stage 122 | 1. Edit the pipeline and source stage 123 | 2. Click "Add action" 124 | 3. Enter "Image" as the action name 125 | 4. Choose "Amazon ECR" as the action provider 126 | 5. Enter the repo name (nginx) 127 | 6. Set output artifacts to "MyImage" 128 | 7. Save the changes 129 | 130 | ### Edit the deploy stage 131 | 132 | 1. Edit the pipeline again, this time the deploy stage and its action 133 | 2. Under input artifacts specify MyImage (in addition to SourceArtifact) 134 | 3. Under "Dynamically update task definition image" select MyImage 135 | 4. For the placeholder text in the task definition enter "IMAGE1_NAME" 136 | 5. Save the Pipeline 137 | 6.
Release change 138 | 139 | ## Part 5 - Implement Blue/Green Update to ECS 140 | 141 | To trigger a blue/green deployment, make a change to the source image and push it, or delete the image from ECR and then push it to the repository again 142 | 143 | The pipeline should execute and a replacement task set should be created 144 | To rerun, you can stop the deployment at steps 4/5 by clicking "Stop and roll back deployment" 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- /fargate-blue-green-ci-cd/taskdef.json: -------------------------------------------------------------------------------- 1 | { 2 | "executionRoleArn": "arn:aws:iam:::role/ecsTaskExecutionRole", 3 | "containerDefinitions": [ 4 | { 5 | "name": "sample-website", 6 | "image": "nginx", 7 | "essential": true, 8 | "portMappings": [ 9 | { 10 | "hostPort": 80, 11 | "protocol": "tcp", 12 | "containerPort": 80 13 | } 14 | ] 15 | } 16 | ], 17 | "requiresCompatibilities": [ 18 | "FARGATE" 19 | ], 20 | "networkMode": "awsvpc", 21 | "cpu": "256", 22 | "memory": "512", 23 | "family": "ecs-lab" 24 | } --------------------------------------------------------------------------------
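
The Part 2 instructions above ask you to update the account ID in the role ARN inside `taskdef.json` before registering it. A minimal Python sketch of scripting that substitution instead of editing by hand (illustrative only; the trimmed `TASKDEF` dict, the example account ID, and the `set_account_id` helper are not part of the course files):

```python
import json

# Trimmed stand-in for taskdef.json above; the role ARN ships with an
# empty account ID field between the two colons (arn:aws:iam::<empty>:role/...).
TASKDEF = {
    "executionRoleArn": "arn:aws:iam:::role/ecsTaskExecutionRole",
    "family": "ecs-lab",
}

def set_account_id(taskdef, account_id):
    """Return a copy with the account ID inserted into the execution role ARN."""
    prefix = "arn:aws:iam::"
    arn = taskdef["executionRoleArn"]
    if not arn.startswith(prefix):
        raise ValueError("unexpected ARN format: " + arn)
    out = dict(taskdef)
    out["executionRoleArn"] = prefix + account_id + arn[len(prefix):]
    return out

filled = set_account_id(TASKDEF, "111122223333")
print(json.dumps(filled, indent=2))
# executionRoleArn becomes arn:aws:iam::111122223333:role/ecsTaskExecutionRole
```

You could write the result back to disk with `json.dump` and then run the `aws ecs register-task-definition` command from Part 2 unchanged.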