├── .DS_Store
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── app
│   ├── .DS_Store
│   ├── docker-command.sh
│   ├── karpenter
│   │   ├── Dockerfile
│   │   ├── karpenter-mock-sqs-post.py
│   │   └── karpenter-sqs-reader.py
│   └── keda
│       ├── .DS_Store
│       ├── Dockerfile
│       ├── keda-mock-sqs-post.py
│       └── sqs-reader.py
├── cleanup.sh
├── deployment
│   ├── .DS_Store
│   ├── _main.sh
│   ├── app
│   │   └── keda-python-app.yaml
│   ├── cluster
│   │   └── createCluster.sh
│   ├── environmentVariables.sh
│   ├── karpenter
│   │   ├── README.md
│   │   ├── cloudformation.yaml
│   │   ├── createkarpenter.sh
│   │   └── createkarpenter_before_v.32.sh
│   ├── keda
│   │   ├── ReadME.md
│   │   ├── createkeda.sh
│   │   ├── dynamoPolicy.json
│   │   ├── keda-scaleobject.sh
│   │   ├── sqsPolicy.json
│   │   └── values.sh
│   └── services
│       └── awsService.sh
└── img
    ├── .DS_Store
    ├── Karpenterversion.jpg
    ├── Keda.gif
    ├── YouTube-Logo.jpg
    ├── accountverify.jpg
    ├── aws_kedakarpenter_arch_small.gif
    ├── deploymentcompleted.jpg
    ├── deploymentverify.jpg
    ├── karpenterhelm.jpg
    ├── mockarchitecture.jpg
    ├── node.jpg
    ├── nodescaling.jpg
    ├── pod.jpg
    ├── podscaling.jpg
    ├── runloadscript.jpg
    └── setenv.jpg
/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/.DS_Store
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
5 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
4 | documentation, we greatly value feedback and contributions from our community.
5 |
6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
7 | information to effectively respond to your bug report or contribution.
8 |
9 |
10 | ## Reporting Bugs/Feature Requests
11 |
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 |
14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 |
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 |
22 |
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 |
26 | 1. You are working against the latest source on the *main* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 |
30 | To send us a pull request, please:
31 |
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 |
39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 |
42 |
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
45 |
46 |
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 |
52 |
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue.
55 |
56 |
57 | ## Licensing
58 |
59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
60 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT No Attribution
2 |
3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy of
6 | this software and associated documentation files (the "Software"), to deal in
7 | the Software without restriction, including without limitation the rights to
8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
9 | the Software, and to permit persons to whom the Software is furnished to do so.
10 |
11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
17 |
18 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Amazon EKS scaling with KEDA and Karpenter
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 |
12 | # EKS with KEDA HPA & Karpenter cluster autoscaler
13 | This repository contains the necessary files and instructions to deploy and configure [KEDA](https://keda.sh/) (Kubernetes-based Event Driven Autoscaling) and [Karpenter](https://github.com/awslabs/karpenter) (Kubernetes Node Autoscaler) on an Amazon Elastic Kubernetes Service (EKS) cluster.
14 |
15 | KEDA enables autoscaling of Kubernetes pods based on the number of events in event sources such as Azure Service Bus, RabbitMQ, Kafka, and more. Karpenter is a Kubernetes node autoscaler that scales the number of nodes in your cluster based on resource usage.
16 |
17 | ***
18 | ## Sample Use Case
19 |
20 |
21 |
22 |
23 | ## Full demo available on YouTube
24 |
25 | [![YouTube](./img/YouTube-Logo.jpg)](https://youtu.be/yOzyXY97CrI)
26 |
27 |
28 | ## Prerequisites
29 |
30 | Before you begin, ensure that you have the following prerequisites:
31 |
32 | - An active AWS account.
33 | - Kubernetes command-line tool (`kubectl`) installed.
34 | - [eksctl](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html)
35 | - [AWS CLI](https://aws.amazon.com/cli/)
36 | - [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
37 | - [helm](https://helm.sh/)
38 | - [K9s](https://k9scli.io/topics/install/)
39 |
40 | ## Installation
41 |
42 | To install KEDA and Karpenter on your AWS EKS cluster, follow these steps:
43 |
44 | 1. Clone this repository to your local machine or download it as a ZIP file.
45 | ```shell
46 | git clone https://github.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter.git
47 | ```
48 |
49 | 2. Navigate to the repository's directory.
50 | ```shell
51 | cd amazon-eks-scaling-with-keda-and-karpenter
52 | ```
53 |
54 | 3. Update the `environmentVariables.sh` file in `/deployment` (a quick sanity check for these values follows this list)
55 |
56 | | Variable Name | Description |
57 | |-----------------------------|-----------------------------------------------------------------------------------------------------|
58 | | `AWS_REGION` | The AWS region. |
59 | | `ACCOUNT_ID` | The AWS account ID. |
60 | | `TEMPOUT`                   | Temporary output file, used to temporarily store the Karpenter CloudFormation template.              |
61 | | `DYNAMODB_TABLE` | The name of the DynamoDB table. |
62 | | `CLUSTER_NAME` | The name of the EKS cluster. |
63 | | `KARPENTER_VERSION` | The version of Karpenter. |
64 | | `NAMESPACE` | The Kubernetes namespace for KEDA. |
65 | | `SERVICE_ACCOUNT` | The Kubernetes service account for KEDA. |
66 | | `IAM_KEDA_ROLE` | The IAM role for KEDA. |
67 | | `IAM_KEDA_SQS_POLICY` | The IAM policy for KEDA to access SQS. |
68 | | `IAM_KEDA_DYNAMO_POLICY` | The IAM policy for KEDA to access DynamoDB. |
69 | | `SQS_QUEUE_NAME` | The name of the SQS queue. |
70 | | `SQS_QUEUE_URL` | The URL of the SQS queue. |
71 | | `SQS_TARGET_DEPLOYMENT` | The target deployment for KEDA to scale based on SQS messages. |
72 | | `SQS_TARGET_NAMESPACE` | The target namespace for the deployment that KEDA scales based on SQS messages. |
73 |
74 | 4. To start the deployment, run
75 | ```shell
76 | sh ./deployment/_main.sh
77 | ```
78 |
79 | 5. You will be asked to verify the account in context
80 |
81 |
82 |
83 |
84 | 6. Select your deployment option
85 |
86 |
87 |
88 |
89 |
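As a quick sanity check before deploying, you can source the variables and confirm they resolve against the account your CLI is using (a minimal sketch; all variable names come from `deployment/environmentVariables.sh`):

```shell
source ./deployment/environmentVariables.sh
# the account the CLI resolves should match ACCOUNT_ID
aws sts get-caller-identity --query Account --output text
echo "Cluster: ${CLUSTER_NAME} | Region: ${AWS_REGION} | Karpenter: ${KARPENTER_VERSION}"
echo "Queue URL: ${SQS_QUEUE_URL}"
```
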
90 | ## 🚨🚨 Experiencing Issues 🚨🚨
91 |
92 | If you are experiencing errors while deploying Karpenter, it may be because the Karpenter repository only supports a limited set of versions and environmentVariables.sh references an old one. Please follow the steps below:
93 |
94 | - Please check the latest Karpenter version at https://karpenter.sh -> Getting Started; the version in environmentVariables.sh should match a supported version
95 |
96 |
97 |
98 | - Check the file createkarpenter.sh under deployment --> karpenter, lines 80-90, and compare it with the Karpenter getting-started Helm command at https://karpenter.sh/
99 |
100 |
101 |
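To confirm which Karpenter chart version actually got installed, and whether the controller is healthy, these standard Helm and kubectl commands can help:

```shell
# chart version installed in the karpenter namespace
helm list -n karpenter
# controller pods should be Running
kubectl get pods -n karpenter
```
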
102 | ## Mock Testing 💻
103 | I have used [iTerm2](https://iterm2.com/downloads.html) on Mac with `K9s` to connect to the cluster and run the scripts
104 |
105 | ### Architecture
106 |
107 |
108 | 1. Once the cluster, Karpenter & KEDA are deployed
109 |
110 |
111 | 2. Open two more terminals and connect to the cluster using K9s (check Prerequisites)
112 |
113 | 3. In the first terminal with K9s, navigate to the `pod` view in the `keda-test` namespace
114 |
115 |
116 | 4. In the second terminal with K9s, navigate to `Nodes`
117 |
118 |
119 | 5. Open three or more terminals, copy the contents of `deployment/environmentVariables.sh`, and run it in all of them
120 |
121 |
122 | 6. In the terminal, run the command below to check Karpenter's status. If you see errors, fix them before moving further
123 | ```
124 | kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller
125 |
126 | ```
127 |
128 | 7. Run the `keda-mock-sqs-post.py` script in all of these terminals (watch commands to observe the scaling follow this list)
129 |
130 | ```shell
131 | cd app/keda
132 | python3 -m venv env
133 | source env/bin/activate
134 | pip install boto3
135 | cd {your path}/amazon-eks-scaling-with-keda-and-karpenter
136 | python3 ./app/keda/keda-mock-sqs-post.py
137 | ```
138 |
139 |
140 | 8. Pod scaling using KEDA
141 |
142 |
143 | 9. Node scaling using Karpenter
144 |
145 |
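While the load script is running, you can watch KEDA and Karpenter react from another terminal (a sketch using the names from `deployment/environmentVariables.sh`; KEDA creates an HPA for the scaled deployment):

```shell
# the KEDA ScaledObject and the HPA it creates
kubectl get scaledobject -n keda-test
kubectl get hpa -n keda-test -w
# pods scale out first, then nodes as Karpenter adds capacity
kubectl get pods -n keda-test -w
kubectl get nodes -w
```
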
146 | ## 🚨 Cleanup 🚨
147 | 1. Navigate to the repository's root directory.
148 | ```shell
149 | cd amazon-eks-scaling-with-keda-and-karpenter
150 | ```
151 |
152 | 2. To start the cleanup, run
153 | ```shell
154 | sh ./cleanup.sh
155 | ```
156 |
157 | This will delete all the AWS services and workloads created for the solution.
158 | ## Configuration
159 |
160 | The repository contains the necessary configuration files for deploying KEDA and Karpenter. You can modify these files to suit your specific requirements. Here are some important files to note:
161 |
162 | - [/deploy/keda](https://github.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/tree/main/deployment/keda): Contains the deployment files for KEDA components.
163 | - [/deploy/karpenter](https://github.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/tree/main/deployment/karpenter): Contains the deployment files for Karpenter components.
164 |
165 | Feel free to modify these files according to your needs.
166 |
167 |
168 | ## Acknowledgements
169 |
170 | - [KEDA](https://keda.sh/) - Kubernetes-based Event Driven Autoscaling
171 | - [Karpenter](https://github.com/awslabs/karpenter) - Kubernetes Node Autoscaler
172 | ## Security
173 |
174 | See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
175 |
176 | ## License
177 |
178 | This library is licensed under the MIT-0 License. See the LICENSE file.
179 |
180 |
--------------------------------------------------------------------------------
/app/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/app/.DS_Store
--------------------------------------------------------------------------------
/app/docker-command.sh:
--------------------------------------------------------------------------------
1 | #KEDA -Build the image
2 |
3 | docker buildx build -t sqs-reader --platform=linux/amd64 .
4 | docker login
5 | docker tag sqs-reader:latest khanasif1/sqs-reader:v0.12
6 | docker push khanasif1/sqs-reader:v0.12
7 |
8 |
9 | #KARPENTER -Build the image
10 |
11 | docker buildx build -t karpenter-sqs-reader --platform=linux/amd64 .
12 | docker login
13 | docker tag karpenter-sqs-reader:latest khanasif1/karpenter-sqs-reader:v0.2
14 | docker push khanasif1/karpenter-sqs-reader:v0.2
--------------------------------------------------------------------------------
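
The KEDA reader image gets its configuration from environment variables, so a quick local test is possible before pushing (a sketch, assuming `deployment/environmentVariables.sh` has been sourced and the shell holds valid AWS credentials):

```shell
# pass queue/table/region plus AWS credentials through to the container
docker run --rm \
  -e SQS_QUEUE_URL -e DYNAMODB_TABLE -e AWS_REGION \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
  sqs-reader:latest
```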
/app/karpenter/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:latest
2 |
3 | RUN apt-get update
4 | RUN apt update
5 |
6 | RUN apt install python3 -y
7 | RUN apt install -y python3-pip
8 | RUN pip3 install --upgrade pip
9 | RUN pip install boto3
10 |
11 | WORKDIR /usr/app/src
12 |
13 | COPY karpenter-sqs-reader.py ./
14 |
15 | CMD [ "python3", "./karpenter-sqs-reader.py" ]
--------------------------------------------------------------------------------
/app/karpenter/karpenter-mock-sqs-post.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | import time
4 | from datetime import datetime
5 | # post a mock message to the Karpenter SQS queue every second
6 |
7 | queue_url = "https://sqs.us-west-1.amazonaws.com/809980971988/karpenter-queue.fifo"
8 |
9 | def send_message(message_body):
10 | print("Start fn send message")
11 | sqs_client = boto3.client("sqs", region_name="us-west-1")
12 | response = sqs_client.send_message(
13 | QueueUrl = queue_url,
14 | MessageBody = message_body,
15 | MessageGroupId='messageGroup1'
16 | )
17 | print(f"messages send: {response}")
18 |
19 | print("End fn send message")
20 |
21 | starttime = time.time()
22 | i = 0
23 | while True:
24 | t = time.localtime()
25 | time.sleep(1.0 - ((time.time() - starttime) % 1.0))
26 | currenttime = time.strftime("%H:%M:%S", t)
27 | print(f"Start SQS Call : {currenttime}")
28 | #while i < 20:
29 | i = i+1
30 | date_format = '%Y-%m-%d %H:%M:%S.%f'
31 | current_dateTime = datetime.utcnow().strftime(date_format)
32 | messageBody = {
33 | 'msg':f"Scale Buddy !!! : COUNT {i}",
34 | 'srcStamp': current_dateTime
35 | }
36 | print(json.dumps(messageBody))
37 | send_message(json.dumps(messageBody))
38 | currenttime = time.strftime("%H:%M:%S", t)
39 | print(f"End SQS Call {currenttime}")
40 | #time.sleep(5)
41 |
--------------------------------------------------------------------------------
/app/karpenter/karpenter-sqs-reader.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | import time
4 | import uuid
5 | from datetime import datetime
6 |
7 | # poll the Karpenter SQS queue and persist each message to DynamoDB
8 | starttime = time.time()
9 |
10 | queue_url = "https://sqs.us-west-1.amazonaws.com/809980971988/karpenter-queue.fifo"
11 |
12 |
13 | def receive_message():
14 | try:
15 | print("Start fn receive message")
16 | sqs_client = boto3.client("sqs", region_name="us-west-1")
17 | response = sqs_client.receive_message(
18 | QueueUrl= queue_url,
19 | AttributeNames=[
20 | 'SentTimestamp'
21 | ],
22 | MaxNumberOfMessages=1,
23 | MessageAttributeNames=[
24 | 'All'
25 | ],
26 | WaitTimeSeconds=0,
27 | VisibilityTimeout=60
28 | )
29 | print(f"Number of messages received: {len(response.get('Messages', []))}")
30 |
31 | #for message in response.get("Messages", []):
32 | if len(response.get('Messages', [])) != 0:
33 | message = response['Messages'][0]
34 | message_body = message["Body"]
35 | print(f"message_body : {message_body}")
36 |
37 | receipt_handle = message['ReceiptHandle']
38 |
39 | save_data(message_body)
40 |
41 | print(f"Receipt Handle: {message['ReceiptHandle']}")
42 | print(f"Deleting Message : {message_body}")
43 | # Delete received message from queue
44 | sqs_client.delete_message(
45 | QueueUrl=queue_url,
46 | ReceiptHandle=receipt_handle
47 | )
48 |
49 |
50 | print("End fn receive message")
51 | except Exception as ex:
52 | print(f"Error happened in receive_message : {ex} ")
53 |
54 |
55 | def save_data(_message):
56 | try:
57 | print(f'save data src msg :{_message}')
58 | jsonMessage = json.loads(_message)
59 | print(f'Src Message :{jsonMessage["msg"]},{jsonMessage["srcStamp"]}')
60 | #current_dateTime = json.dumps(datetime.now(),default= str)
61 | date_format = '%Y-%m-%d %H:%M:%S.%f'
62 | current_dateTime = datetime.utcnow().strftime(date_format)
63 | _id=str(uuid.uuid1())
64 | print(f"id:{_id}")
65 | dynamodb = boto3.resource('dynamodb', region_name="us-west-1")
66 | table = dynamodb.Table('karpenterpayments')
67 |
68 |
69 | messageProcessingTime = datetime.utcnow() - datetime.strptime(jsonMessage["srcStamp"],date_format)
70 | print(f'messageProcessingTime: {messageProcessingTime.total_seconds()}')
71 |
72 | response = table.put_item(
73 | Item={
74 | 'id': _id,
75 | 'data': jsonMessage["msg"],
76 | 'srcStamp':jsonMessage["srcStamp"],
77 | 'destStamp':current_dateTime,
78 | 'messageProcessingTime':str(messageProcessingTime.total_seconds())
79 | }
80 | )
81 | status_code = response['ResponseMetadata']['HTTPStatusCode']
82 | print(f"Data Save Status : {status_code}")
83 | except Exception as error:
84 | print(f"Error has happened : {error}")
85 |
86 |
87 |
88 |
89 | while True:
90 | t = time.localtime()
91 | time.sleep(1.0 - ((time.time() - starttime) % 1.0)) #sleep for sec
92 | currenttime = time.strftime("%H:%M:%S", t)
93 | print(f"Start SQS Call : {currenttime}")
94 |
95 | receive_message()
96 | #save_data("hi there")
97 |
98 | ## Date format working
99 | #date_format = '%Y-%m-%d %H:%M:%S.%f'
100 | #currentDateAndTime = datetime.now().strftime(date_format)#"2023-03-23 17:49:25.651555"
101 | #currentDateAndTime = '2023-03-23 18:38:42.536417'
102 | #print("The current date and time is", currentDateAndTime)
103 | #currentDate = datetime.strptime(currentDateAndTime, date_format)
104 | #x = datetime.now() - currentDate
105 | #print("diff",x)
106 | #print("The current time is", currentDate)
107 | '''i = 0
108 | while i < 20:
109 | i = i+1'''
110 | currenttime = time.strftime("%H:%M:%S", t)
111 | print(f"End SQS Call {currenttime}")
112 |
--------------------------------------------------------------------------------
/app/keda/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/app/keda/.DS_Store
--------------------------------------------------------------------------------
/app/keda/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:latest
2 |
3 | RUN apt-get update
4 | RUN apt update
5 |
6 | RUN apt install python3 -y
7 | RUN apt install -y python3-pip
8 | RUN pip3 install --upgrade pip
9 | RUN pip install boto3
10 |
11 | WORKDIR /usr/app/src
12 |
13 | COPY sqs-reader.py ./
14 |
15 | CMD [ "python3", "./sqs-reader.py" ]
16 |
--------------------------------------------------------------------------------
/app/keda/keda-mock-sqs-post.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | import time
4 | from datetime import datetime
5 | import os
6 | from os import environ
7 | import subprocess
8 |
9 |
10 |
11 | if 'SQS_QUEUE_URL' in os.environ:
12 | queue_url = os.environ['SQS_QUEUE_URL']
13 | print (f'SQS URL : {queue_url}')
14 | else:
15 | print ('SQS URL Missing!!!!!')
16 |
17 | def send_message(message_body):
18 | print("Start fn send message")
19 | sqs_client = boto3.client("sqs", region_name=os.environ['AWS_REGION'])
20 | response = sqs_client.send_message(
21 | QueueUrl = queue_url,
22 | MessageBody = message_body,
23 | MessageGroupId='messageGroup1'
24 | )
25 | print(f"messages send: {response}")
26 |
27 | print("End fn send message")
28 |
29 | starttime = time.time()
30 | i = 0
31 | while True:
32 | if 'SQS_QUEUE_URL' in os.environ:
33 | t = time.localtime()
34 | time.sleep(1.0 - ((time.time() - starttime) % 1.0))
35 | currenttime = time.strftime("%H:%M:%S", t)
36 | print(f"Start SQS Call : {currenttime}")
37 | #while i < 20:
38 | i = i+1
39 | date_format = '%Y-%m-%d %H:%M:%S.%f'
40 | current_dateTime = datetime.utcnow().strftime(date_format)
41 | messageBody = {
42 | 'msg':f"Scale Buddy !!! : COUNT {i}",
43 | 'srcStamp': current_dateTime
44 | }
45 | print(json.dumps(messageBody))
46 | send_message(json.dumps(messageBody))
47 | currenttime = time.strftime("%H:%M:%S", t)
48 | print(f"End SQS Call {currenttime}")
49 | #time.sleep(5)
50 | else:
51 | print ("SQS URL missing from environment. Run environmentVariables.sh first ")
--------------------------------------------------------------------------------
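
The sender only posts messages when `SQS_QUEUE_URL` is present in its environment, so the variables must be sourced in the same shell first (this mirrors the README's mock-testing steps):

```shell
source ./deployment/environmentVariables.sh
python3 -m venv env && source env/bin/activate
pip install boto3
python3 ./app/keda/keda-mock-sqs-post.py
```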
/app/keda/sqs-reader.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import json
3 | import time
4 | import uuid
5 | from datetime import datetime
6 | import os
7 | from os import environ
8 |
9 | # poll the SQS queue and persist each message to DynamoDB
10 | starttime = time.time()
11 | if 'SQS_QUEUE_URL' in os.environ:
12 | queue_url = os.environ['SQS_QUEUE_URL']
13 | region = os.environ['AWS_REGION']
14 | print (f'SQS URL : {queue_url} - region : {region} ')
15 | else:
16 | print ('SQS URL Missing!!!!!')
17 |
18 | if 'DYNAMODB_TABLE' not in os.environ:
19 | print ('DYNAMODB_TABLE details Missing!!!!!')
20 | else:
21 | dynamodb_table = os.environ['DYNAMODB_TABLE']
22 | print (f'DYNAMODB_TABLE : {dynamodb_table}')
23 |
24 |
25 | _id = "a3b01bb8-da7c-11ed-aae2-5227b566f9a7"  # fixed partition key; items stay distinct via the messageProcessingTime range key
26 | def receive_message():
27 | print(f'queue_url {queue_url}- region : {region} - Dynamo : {dynamodb_table} ')
28 | try:
29 | print("Start fn receive message")
30 | sqs_client = boto3.client("sqs", region_name=region)
31 | response = sqs_client.receive_message(
32 | QueueUrl= queue_url,
33 | AttributeNames=[
34 | 'SentTimestamp'
35 | ],
36 | MaxNumberOfMessages=1,
37 | MessageAttributeNames=[
38 | 'All'
39 | ],
40 | WaitTimeSeconds=0,
41 | VisibilityTimeout=60
42 | )
43 | print(f"Number of messages received: {len(response.get('Messages', []))}")
44 |
45 | #for message in response.get("Messages", []):
46 | if len(response.get('Messages', [])) != 0:
47 | message = response['Messages'][0]
48 | message_body = message["Body"]
49 | print(f"message_body : {message_body}")
50 |
51 | receipt_handle = message['ReceiptHandle']
52 |
53 | save_data(message_body)
54 |
55 | print(f"Receipt Handle: {message['ReceiptHandle']}")
56 | print(f"Deleting Message : {message_body}")
57 | # Delete received message from queue
58 | sqs_client.delete_message(
59 | QueueUrl=queue_url,
60 | ReceiptHandle=receipt_handle
61 | )
62 |
63 |
64 | print("End fn receive message")
65 | except Exception as ex:
66 | print(f"Error happened in receive_message : {ex} ")
67 |
68 |
69 | def save_data(_message):
70 | try:
71 | print(f'save data src msg :{_message}')
72 | jsonMessage = json.loads(_message)
73 | print(f'Src Message :{jsonMessage["msg"]},{jsonMessage["srcStamp"]}')
74 | #current_dateTime = json.dumps(datetime.now(),default= str)
75 | date_format = '%Y-%m-%d %H:%M:%S.%f'
76 | current_dateTime = datetime.utcnow().strftime(date_format)
77 |
78 | print(f"id:{_id}")
79 | dynamodb = boto3.resource('dynamodb', region_name=region)
80 | table = dynamodb.Table(dynamodb_table)
81 |
82 |
83 | messageProcessingTime = datetime.utcnow() - datetime.strptime(jsonMessage["srcStamp"],date_format)
84 | print(f'messageProcessingTime: {messageProcessingTime.total_seconds()}')
85 |
86 | response = table.put_item(
87 | Item={
88 | 'id': _id,
89 | 'data': jsonMessage["msg"],
90 | 'srcStamp':jsonMessage["srcStamp"],
91 | 'destStamp':current_dateTime,
92 | 'messageProcessingTime':str(messageProcessingTime.total_seconds())
93 | }
94 | )
95 | status_code = response['ResponseMetadata']['HTTPStatusCode']
96 | print(f"Data Save Status : {status_code}")
97 | except Exception as error:
98 | print(f"Error has happened : {error}")
99 |
100 |
101 |
102 |
103 | while True:
104 | t = time.localtime()
105 | time.sleep(1.0 - ((time.time() - starttime) % 1.0)) #sleep for 1 sec
106 | currenttime = time.strftime("%H:%M:%S", t)
107 | print(f"Start SQS Call : {currenttime}")
108 |
109 | receive_message()
110 | #save_data("hi there")
111 |
112 | ## Date format working
113 | #date_format = '%Y-%m-%d %H:%M:%S.%f'
114 | #currentDateAndTime = datetime.now().strftime(date_format)#"2023-03-23 17:49:25.651555"
115 | #currentDateAndTime = '2023-03-23 18:38:42.536417'
116 | #print("The current date and time is", currentDateAndTime)
117 | #currentDate = datetime.strptime(currentDateAndTime, date_format)
118 | #x = datetime.now() - currentDate
119 | #print("diff",x)
120 | #print("The current time is", currentDate)
121 | '''i = 0
122 | while i < 20:
123 | i = i+1'''
124 | currenttime = time.strftime("%H:%M:%S", t)
125 | print(f"End SQS Call {currenttime}")
126 |
127 |
--------------------------------------------------------------------------------
/cleanup.sh:
--------------------------------------------------------------------------------
1 | #******************
2 | # Clean Deployment
3 | #******************
4 | echo "${RED}******************************************************"
5 | echo "${RED}**************CLEANUP START***************************"
6 | echo "${RED}******************************************************"
7 | echo "${CYAN}Load variables"
8 | source ./deployment/environmentVariables.sh
9 |
10 | echo "${RED}Find all CFN stack names which has cluster name"
11 | for stack in $(aws cloudformation describe-stacks --region ${AWS_REGION} --output text --query 'Stacks[?StackName!=`null`]|[?contains(StackName, `'${CLUSTER_NAME}'`) == `true`].StackName')
12 | do
13 | SUB='nodegroup'
14 | if [[ "$stack" == *"$SUB"* ]]; then
15 | echo "${RED}Deleting stacks : ${stack}"
16 | echo "Node group"
17 | aws cloudformation delete-stack --stack-name $stack --region ${AWS_REGION}
18 | aws cloudformation wait stack-delete-complete --region ${AWS_REGION} --stack-name $stack
19 | else
20 | echo "${RED}Deleting stacks : ${stack}"
21 | echo "other stack"
22 | aws cloudformation delete-stack --stack-name $stack --region ${AWS_REGION}
23 | aws cloudformation wait stack-delete-complete --region ${AWS_REGION} --stack-name $stack
24 | fi
25 | done
26 |
27 | # Delete IAM Roles
28 | echo "${RED}Deleting Role"
29 |
30 | for policy in $(aws iam list-attached-role-policies --role-name ${IAM_KEDA_ROLE} --output text --query 'AttachedPolicies[*].PolicyName')
31 | do
32 | echo "${RED}Detach policy :${policy} from role :${IAM_KEDA_ROLE}"
33 | aws iam detach-role-policy --role-name ${IAM_KEDA_ROLE} --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/${policy}
34 |
35 | echo "${RED}Deleting policy :${policy}"
36 | aws iam delete-policy --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/${policy}
37 | done
38 |
39 | echo "${RED}Deleting role : ${IAM_KEDA_ROLE}"
40 | aws iam delete-role --role-name ${IAM_KEDA_ROLE}
41 |
42 | echo "${RED}Delete IAM policies, if missed earlier"
43 | # Delete IAM policies
44 | #Deleting the policies if missed during role deletion process
45 |
46 | isSQSPolicyExist=$(aws iam list-policies --output text --query 'Policies[?PolicyName==`'${IAM_KEDA_SQS_POLICY}'`].PolicyName')
47 | echo $isSQSPolicyExist
48 | if [ ! -z "$isSQSPolicyExist" ];then
49 | echo "${RED}Deleting policy :"$IAM_KEDA_SQS_POLICY
50 | aws iam delete-policy --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/${IAM_KEDA_SQS_POLICY}
51 | else
52 | echo "policy ${IAM_KEDA_SQS_POLICY} already deleted"
53 | fi
54 |
55 | isDynamoPolicyExist=$(aws iam list-policies --output text --query 'Policies[?PolicyName==`'${IAM_KEDA_DYNAMO_POLICY}'`].PolicyName')
56 | echo $isDynamoPolicyExist
57 | if [ ! -z "$isDynamoPolicyExist" ];then
58 | echo "${RED}Deleting policy :"$IAM_KEDA_DYNAMO_POLICY
59 | aws iam delete-policy --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/${IAM_KEDA_DYNAMO_POLICY}
60 | else
61 | echo "policy ${IAM_KEDA_DYNAMO_POLICY} already deleted"
62 | fi
63 |
64 |
65 | SQS_URL=$(aws sqs get-queue-url --queue-name ${SQS_QUEUE_NAME} --output text)
66 | if [ ! -z "$SQS_URL" ];then
67 | echo "${RED}Deleting SQS :"$SQS_URL
68 | aws sqs delete-queue --queue-url $SQS_URL --region ${AWS_REGION}
69 |
70 | fi
71 |
72 | DYNAMO_TABLE=$(aws dynamodb describe-table --table-name ${DYNAMODB_TABLE} --region ${AWS_REGION} --query 'Table.TableName' --output text)
73 | if [ ! -z "$DYNAMO_TABLE" ];then
74 | echo "${RED}Deleting DynamoTable :"$DYNAMO_TABLE
75 | RESPONSE=$(aws dynamodb delete-table --table-name $DYNAMO_TABLE --region ${AWS_REGION} --output text)
76 | echo $RESPONSE
77 | fi
78 | #******************
79 | # Clean Completed
80 | #******************
81 | echo "${GREEN}******************************************************"
82 | echo "${GREEN}**************CLEANUP COMPLETE************************"
83 | echo "${GREEN}******************************************************"
--------------------------------------------------------------------------------
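
`cleanup.sh` removes the CloudFormation stacks (including the eksctl-managed cluster stacks), IAM resources, queue and table. If a stack deletion gets stuck, the cluster can also be removed directly with eksctl, following the commented-out command in `createCluster.sh`:

```shell
source ./deployment/environmentVariables.sh
eksctl delete cluster --name ${CLUSTER_NAME} --region ${AWS_REGION}
```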
/deployment/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/deployment/.DS_Store
--------------------------------------------------------------------------------
/deployment/_main.sh:
--------------------------------------------------------------------------------
1 | #******************
2 | # Chain Deployment
3 | #******************
4 | source ./deployment/environmentVariables.sh
5 |
6 | echo "${BLUE}Please check the details before proceeding \n AWS Account: ${ACCOUNT_ID} \n AWS Region for deployment : ${AWS_REGION} \n
7 | ${RED}Please check the Karpenter version you have selected is available at \n\n https://karpenter.sh \n\nAlso please check #Experiencing Issues# section before proceeding. \n
8 | ${RED}Case sensitive ${BLUE}Press Y = Proceed or N = Cancel"
9 | echo "${CYAN}Response: "
10 | read user_input
11 | Entry='Y'
12 | if [[ "$user_input" == *"$Entry"* ]]; then
13 | CLUSTER=1
14 | CLUSTER_KARPENTER=2
15 | CLUSTER_KARPENTER_KEDA=3
16 |
17 | echo "${BLUE} Please select the deployment modules : \n 1. Press 1 to deploy only EKS cluster \n 2. Press 2 to deploy EKS cluster with Karpenter \n 3. Press 3 if you want to deploy EKS cluster, Karpenter & KEDA"
18 | echo "${CYAN}Response: "
19 | read user_input
20 | if [[ "$user_input" == $CLUSTER ]]; then
21 | echo "Deploy EKS"
22 | echo "${GREEN} Proceed deployment"
23 | echo "Cluster!!"
24 | echo "${YELLOW}print cluster Parameters \n"
25 | echo $CLUSTER_NAME "|" $KARPENTER_VERSION "|" $AWS_REGION "|" $ACCOUNT_ID "|" $TEMPOUT
26 | chmod u+x ./deployment/cluster/createCluster.sh
27 | ./deployment/cluster/createCluster.sh
28 |
29 | elif [[ "$user_input" == $CLUSTER_KARPENTER ]]; then
30 | echo "Deploy EKS & Karpenter"
31 | echo "${GREEN} Proceed deployment"
32 | echo "Cluster!!"
33 | echo "${YELLOW}print cluster Parameters \n"
34 | echo $CLUSTER_NAME "|" $KARPENTER_VERSION "|" $AWS_REGION "|" $ACCOUNT_ID "|" $TEMPOUT
35 | chmod u+x ./deployment/cluster/createCluster.sh
36 | ./deployment/cluster/createCluster.sh
37 |
38 | echo "${GREEN}Karpenter!!"
39 | echo "${YELLOW}print karpenter Parameters \n"
40 | echo $CLUSTER_NAME "|" $KARPENTER_VERSION "|" $AWS_REGION "|" $ACCOUNT_ID "|" $TEMPOUT
41 | chmod u+x ./deployment/karpenter/createkarpenter.sh
42 | ./deployment/karpenter/createkarpenter.sh
43 |
44 | elif [[ "$user_input" == $CLUSTER_KARPENTER_KEDA ]]; then
45 | echo "Deploy EKS & Karpenter & KEDA"
46 | echo "${GREEN} Proceed deployment"
47 | echo "Cluster!!"
48 | echo "${YELLOW}print cluster Parameters \n"
49 | echo $CLUSTER_NAME "|" $KARPENTER_VERSION "|" $AWS_REGION "|" $ACCOUNT_ID "|" $TEMPOUT
50 | chmod u+x ./deployment/cluster/createCluster.sh
51 | ./deployment/cluster/createCluster.sh
52 |
53 | echo "${GREEN}Karpenter!!"
54 | echo "${YELLOW}print karpenter Parameters \n"
55 | echo $CLUSTER_NAME "|" $KARPENTER_VERSION "|" $AWS_REGION "|" $ACCOUNT_ID "|" $TEMPOUT
56 | chmod u+x ./deployment/karpenter/createkarpenter.sh
57 | ./deployment/karpenter/createkarpenter.sh
58 |
59 | echo "${GREEN}KEDA!!"
60 | echo "${YELLOW}print keda Parameters"
61 | echo $CLUSTER_NAME "||\n" $AWS_REGION "||\n" $ACCOUNT_ID "||\n" $TEMPOUT "||\n" $IAM_KEDA_ROLE "||\n" $IAM_KEDA_SQS_POLICY "||\n" $SERVICE_ACCOUNT "||\n" $NAMESPACE "||\n" $SQS_TARGET_NAMESPACE "||\n" $SQS_TARGET_DEPLOYMENT "||\n" $SQS_QUEUE_URL
62 | chmod u+x ./deployment/keda/createkeda.sh
63 | ./deployment/keda/createkeda.sh
64 |
65 | echo "${GREEN}Deploy Demo components DynamoDB and SQS!!"
66 | chmod u+x ./deployment/services/awsService.sh
67 | ./deployment/services/awsService.sh
68 |
69 | fi
70 | else
71 |
72 | echo "${RED}Cancel deployment"
73 | fi
74 |
75 |
--------------------------------------------------------------------------------
/deployment/app/keda-python-app.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: Deployment
3 | metadata:
4 | name: sqs-app
5 | namespace: keda-test
6 | spec:
7 | replicas: 1
8 | selector:
9 | matchLabels:
10 | app: sqs-reader
11 | template:
12 | metadata:
13 | labels:
14 | app: sqs-reader
15 | spec:
16 | serviceAccountName: keda-service-account
17 | containers:
18 | - name: sqs-pull-app
19 | image: khanasif1/sqs-reader:v0.12
20 | imagePullPolicy: Always
21 | env:
22 | - name: SQS_QUEUE_URL
23 | value: https://sqs.ap-southeast-2.amazonaws.com/809980971988/keda-demo-queue.fifo
24 | - name: DYNAMODB_TABLE
25 | value: payments
26 | - name: AWS_REGION
27 | value: ap-southeast-2
28 | resources:
29 | requests:
30 | memory: "64Mi"
31 | cpu: "250m"
32 | limits:
33 | memory: "128Mi"
34 | cpu: "500m"
--------------------------------------------------------------------------------
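
This is the deployment KEDA scales (`SQS_TARGET_DEPLOYMENT=sqs-app` in the `keda-test` namespace, per `environmentVariables.sh`). Note the `env` values in the manifest are hardcoded samples; they should match your own queue URL, table and region. It can be applied and verified on its own:

```shell
kubectl apply -f ./deployment/app/keda-python-app.yaml
kubectl rollout status deployment/sqs-app -n keda-test
```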
/deployment/cluster/createCluster.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #*************************
3 | # Create a Cluster with Karpenter
4 | #*************************
5 | echo "${GREEN}=========================="
6 | echo "${GREEN}Installing Cluster"
7 | echo "${GREEN}=========================="
8 | source ./deployment/environmentVariables.sh
9 |
10 | if [ -z $CLUSTER_NAME ] || [ -z $KARPENTER_VERSION ] || [ -z $AWS_REGION ] || [ -z $ACCOUNT_ID ] || [ -z $TEMPOUT ];then
11 | echo "${RED}Update values & Run environmentVariables.sh file"
12 | exit 1;
13 | else
14 | echo "${GREEN}**Start cluster provisioning**"
15 |
16 | CHECK_CLUSTER=$(aws eks list-clusters | jq -r ".clusters" | grep $CLUSTER_NAME || true)
17 | if [ ! -z "$CHECK_CLUSTER" ];then
18 | echo "${BLUE}Cluster Exists"
19 | else
20 | echo "${YELLOW}Cluster does not exist"
21 | echo "${GREEN} !!Create an EKS cluster!!"
22 |
23 | eksctl create cluster --name ${CLUSTER_NAME} --region ${AWS_REGION} --version ${K8sversion} --tags karpenter.sh/discovery=${CLUSTER_NAME}
24 | #aws eks describe-cluster --region ${AWS_REGION} --name ${CLUSTER_NAME} --query "cluster.status"
25 |
26 | fi
27 | # Delete eks cluster
28 | #eksctl delete cluster --name eks-keda-scale --region us-west-1#
29 | echo "${GREEN}=========================="
30 | echo "${GREEN}Cluster Completed"
31 | echo "${GREEN}=========================="
32 | fi
33 |
--------------------------------------------------------------------------------
/deployment/environmentVariables.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | echo "Setting environment variables"
3 | #Shared Variables
4 | export AWS_REGION="ap-southeast-2"
5 | export ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
6 | export TEMPOUT=$(mktemp)
7 | export DYNAMODB_TABLE="payments"
8 |
9 | #Cluster Variables
10 | export CLUSTER_NAME="eks-demo-scale"
11 | export K8sversion="1.28"
12 |
13 | #Karpenter Variables
14 | export KARPENTER_VERSION=v0.32.0
15 |
16 | #KEDA Variables
17 | export NAMESPACE=keda
18 | export SERVICE_ACCOUNT=keda-service-account
19 | export IAM_KEDA_ROLE="keda-demo-role"
20 | export IAM_KEDA_SQS_POLICY="keda-demo-sqs"
21 | export IAM_KEDA_DYNAMO_POLICY="keda-demo-dynamo"
22 | export SQS_QUEUE_NAME="keda-demo-queue.fifo"
23 | export SQS_QUEUE_URL="https://sqs.${AWS_REGION}.amazonaws.com/${ACCOUNT_ID}/${SQS_QUEUE_NAME}"
24 | export SQS_TARGET_DEPLOYMENT="sqs-app"
25 | export SQS_TARGET_NAMESPACE="keda-test"
26 |
27 | # echo colour
28 | RED=$(tput setaf 1)
29 | GREEN=$(tput setaf 2)
30 | CYAN=$(tput setaf 6)
31 | BLUE=$(tput setaf 4)
32 | YELLOW=$(tput setaf 3)
33 | NC=$(tput sgr0)
--------------------------------------------------------------------------------
/deployment/karpenter/README.md:
--------------------------------------------------------------------------------
1 |
2 | ## Update Karpenter configuration
3 |
4 | Update the environmentVariables.sh file in /deployment
5 |
6 | | Variable Name | Description |
7 | |-----------------------------|-----------------------------------------------------------------------------------------------------|
8 | | `KARPENTER_VERSION` | The version of Karpenter. |
9 |
10 |
11 | ## If you wish to configure Karpenter Provisioner
12 |
13 | - Navigate to createkarpenter.sh under folder /deployment/karpenter
14 | - Search for "#deploy Provisioner"
15 | - Update Karpenter Provisioner with attributes as needed
--------------------------------------------------------------------------------
/deployment/karpenter/cloudformation.yaml:
--------------------------------------------------------------------------------
1 | AWSTemplateFormatVersion: "2010-09-09"
2 | Description: Resources used by https://github.com/aws/karpenter
3 | Parameters:
4 | ClusterName:
5 | Type: String
6 | Description: "EKS cluster name"
7 | Resources:
8 | KarpenterNodeInstanceProfile:
9 | Type: "AWS::IAM::InstanceProfile"
10 | Properties:
11 | InstanceProfileName: !Sub "KarpenterNodeInstanceProfile-${ClusterName}"
12 | Path: "/"
13 | Roles:
14 | - Ref: "KarpenterNodeRole"
15 | KarpenterNodeRole:
16 | Type: "AWS::IAM::Role"
17 | Properties:
18 | RoleName: !Sub "KarpenterNodeRole-${ClusterName}"
19 | Path: /
20 | AssumeRolePolicyDocument:
21 | Version: "2012-10-17"
22 | Statement:
23 | - Effect: Allow
24 | Principal:
25 | Service:
26 | !Sub "ec2.${AWS::URLSuffix}"
27 | Action:
28 | - "sts:AssumeRole"
29 | ManagedPolicyArns:
30 | - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKS_CNI_Policy"
31 | - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy"
32 | - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
33 | - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonSSMManagedInstanceCore"
34 | KarpenterControllerPolicy:
35 | Type: AWS::IAM::ManagedPolicy
36 | Properties:
37 | ManagedPolicyName: !Sub "KarpenterControllerPolicy-${ClusterName}"
38 | PolicyDocument:
39 | Version: "2012-10-17"
40 | Statement:
41 | - Effect: Allow
42 | Resource: "*"
43 | Action:
44 | # Write Operations
45 | - ec2:CreateFleet
46 | - ec2:CreateLaunchTemplate
47 | - ec2:CreateTags
48 | - ec2:DeleteLaunchTemplate
49 | - ec2:RunInstances
50 | - ec2:TerminateInstances
51 | # Read Operations
52 | - ec2:DescribeAvailabilityZones
53 | - ec2:DescribeImages
54 | - ec2:DescribeInstances
55 | - ec2:DescribeInstanceTypeOfferings
56 | - ec2:DescribeInstanceTypes
57 | - ec2:DescribeLaunchTemplates
58 | - ec2:DescribeSecurityGroups
59 | - ec2:DescribeSpotPriceHistory
60 | - ec2:DescribeSubnets
61 | - pricing:GetProducts
62 | - ssm:GetParameter
63 | - Effect: Allow
64 | Action:
65 | # Write Operations
66 | - sqs:DeleteMessage
67 | # Read Operations
68 | - sqs:GetQueueAttributes
69 | - sqs:GetQueueUrl
70 | - sqs:ReceiveMessage
71 | Resource: !GetAtt KarpenterInterruptionQueue.Arn
72 | - Effect: Allow
73 | Action:
74 | - iam:PassRole
75 | Resource: !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/KarpenterNodeRole-${ClusterName}"
76 | - Effect: Allow
77 | Action:
78 | - eks:DescribeCluster
79 | Resource: !Sub "arn:${AWS::Partition}:eks:${AWS::Region}:${AWS::AccountId}:cluster/${ClusterName}"
80 | KarpenterInterruptionQueue:
81 | Type: AWS::SQS::Queue
82 | Properties:
83 | QueueName: !Sub "${ClusterName}"
84 | MessageRetentionPeriod: 300
85 | KarpenterInterruptionQueuePolicy:
86 | Type: AWS::SQS::QueuePolicy
87 | Properties:
88 | Queues:
89 | - !Ref KarpenterInterruptionQueue
90 | PolicyDocument:
91 | Id: EC2InterruptionPolicy
92 | Statement:
93 | - Effect: Allow
94 | Principal:
95 | Service:
96 | - events.amazonaws.com
97 | - sqs.amazonaws.com
98 | Action: sqs:SendMessage
99 | Resource: !GetAtt KarpenterInterruptionQueue.Arn
100 | ScheduledChangeRule:
101 | Type: 'AWS::Events::Rule'
102 | Properties:
103 | EventPattern:
104 | source:
105 | - aws.health
106 | detail-type:
107 | - AWS Health Event
108 | Targets:
109 | - Id: KarpenterInterruptionQueueTarget
110 | Arn: !GetAtt KarpenterInterruptionQueue.Arn
111 | SpotInterruptionRule:
112 | Type: 'AWS::Events::Rule'
113 | Properties:
114 | EventPattern:
115 | source:
116 | - aws.ec2
117 | detail-type:
118 | - EC2 Spot Instance Interruption Warning
119 | Targets:
120 | - Id: KarpenterInterruptionQueueTarget
121 | Arn: !GetAtt KarpenterInterruptionQueue.Arn
122 | RebalanceRule:
123 | Type: 'AWS::Events::Rule'
124 | Properties:
125 | EventPattern:
126 | source:
127 | - aws.ec2
128 | detail-type:
129 | - EC2 Instance Rebalance Recommendation
130 | Targets:
131 | - Id: KarpenterInterruptionQueueTarget
132 | Arn: !GetAtt KarpenterInterruptionQueue.Arn
133 | InstanceStateChangeRule:
134 | Type: 'AWS::Events::Rule'
135 | Properties:
136 | EventPattern:
137 | source:
138 | - aws.ec2
139 | detail-type:
140 | - EC2 Instance State-change Notification
141 | Targets:
142 | - Id: KarpenterInterruptionQueueTarget
143 | Arn: !GetAtt KarpenterInterruptionQueue.Arn
--------------------------------------------------------------------------------
/deployment/karpenter/createkarpenter.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #*************************
3 | # Deploy Karpenter
4 | #*************************
5 | ## SWITCH CLUSTER CONTEXT
6 | echo "${GREEN}=========================="
7 | echo "${GREEN}Installing karpenter"
8 | echo "${GREEN}=========================="
9 |
10 | source ./deployment/environmentVariables.sh
11 |
12 | echo "${RED}Case sensitive ${BLUE} Press Y = Proceed \n or \n N = Cancel (change context 'kubectl config use-context {context name you can check using kubectl config view}' and run script)"
13 | read user_input
14 | #kubectl config use-context akaasif-Isengard@${CLUSTER_NAME}.${AWS_REGION}.eksctl.io
15 | #kubectl config current-context
16 | #kubectl config use-context akaasif-Isengard@eks-karpenter-scale.us-west-1.eksctl.io
17 |
18 | Entry='Y'
19 | if [[ "$user_input" == *"$Entry"* ]]; then
20 |
21 | if [ -z $CLUSTER_NAME ] || [ -z $KARPENTER_VERSION ] || [ -z $AWS_REGION ] || [ -z $ACCOUNT_ID ] || [ -z $TEMPOUT ];then
22 | echo "${RED}Update values & Run environmentVariables.sh file"
23 | exit 1;
24 | else
25 | echo "${GREEN}**Installing karpenter**"
26 | # If you are logged in to public.ecr.aws with docker in this shell, log out first
27 | docker logout public.ecr.aws
28 |
29 | #Create the KarpenterNode IAM Role
30 | echo "${GREEN}Create the KarpenterNode IAM Role"
31 |
32 | curl -fsSL https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/cloudformation.yaml > $TEMPOUT \
33 | && aws cloudformation deploy \
34 | --stack-name "Karpenter-${CLUSTER_NAME}" \
35 | --template-file "${TEMPOUT}" \
36 | --capabilities CAPABILITY_NAMED_IAM \
37 | --parameter-overrides "ClusterName=${CLUSTER_NAME}" \
38 | --region ${AWS_REGION}
39 |
40 |
41 | #grant access to instances using the profile to connect to the cluster. This command adds the Karpenter node role to your aws-auth configmap,
42 | #allowing nodes with this role to connect to the cluster.
43 |
44 | eksctl create iamidentitymapping \
45 | --username system:node:{{EC2PrivateDNSName}} \
46 | --cluster ${CLUSTER_NAME} \
47 | --arn "arn:aws:iam::${ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}" \
48 | --group system:bootstrappers \
49 | --group system:nodes
50 |
51 | echo "Verify auth Map"
52 | kubectl describe configmap -n kube-system aws-auth
53 |
54 | # Create KarpenterController IAM Role
55 | echo "Create KarpenterController IAM Role"
56 |
57 | eksctl utils associate-iam-oidc-provider --cluster ${CLUSTER_NAME} --approve
58 |
59 | #Karpenter requires permissions like launching instances. This will create an AWS IAM Role, Kubernetes service account,
60 | #and associate them using IAM Roles for Service Accounts (IRSA)
61 | echo "Map AWS IAM Role Kubernetes service account"
62 |
63 | eksctl create iamserviceaccount \
64 | --cluster "${CLUSTER_NAME}" --name karpenter --namespace karpenter \
65 | --role-name "Karpenter-${CLUSTER_NAME}" \
66 | --attach-policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}" \
67 | --role-only \
68 | --approve
69 |
70 | export KARPENTER_IAM_ROLE_ARN="arn:aws:iam::${ACCOUNT_ID}:role/Karpenter-${CLUSTER_NAME}"
71 |
72 | #Create the EC2 Spot Linked Role
73 | echo "Create the EC2 Spot Linked Role"
74 | aws iam create-service-linked-role --aws-service-name spot.amazonaws.com 2> /dev/null || echo 'Already exists'
75 |
76 | #Helm Install Karpenter
77 | echo "Helm Install Karpenter"
78 | export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.endpoint" --output text)"
79 |
80 | helm registry logout public.ecr.aws
81 |
82 | helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --namespace karpenter --create-namespace \
83 | --version ${KARPENTER_VERSION} \
84 | --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=${KARPENTER_IAM_ROLE_ARN} \
85 | --set settings.aws.clusterName=${CLUSTER_NAME} \
86 | --set settings.aws.defaultInstanceProfile=KarpenterNodeInstanceProfile-${CLUSTER_NAME} \
87 | --set settings.aws.interruptionQueueName=${CLUSTER_NAME} \
88 | --set controller.resources.requests.cpu=1 \
89 | --set controller.resources.requests.memory=1Gi \
90 | --set controller.resources.limits.cpu=1 \
91 | --set controller.resources.limits.memory=1Gi \
92 | --wait
93 |
94 | #deploy Provisioner & AWSNodeTemplate
95 | echo "Provisioner & AWSNodeTemplate"
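The heredoc that follows on line 96 was lost from this dump. A minimal sketch of what a Karpenter v0.32 (v1beta1) provisioning manifest could contain, reusing the repo's `${CLUSTER_NAME}` and `karpenter.sh/discovery` tag; this is illustrative, not the repo's exact manifest:

```shell
# sketch only: a default NodePool/EC2NodeClass pair for Karpenter v0.32
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:
        name: default
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-${CLUSTER_NAME}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
```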
96 | cat <<EOF | kubectl apply -f -
[heredoc body and remainder of createkarpenter.sh lost in extraction]
--------------------------------------------------------------------------------
/deployment/karpenter/createkarpenter_before_v.32.sh:
--------------------------------------------------------------------------------
[lines 1-31 lost in extraction]
32 | … > $TEMPOUT \
33 | && aws cloudformation deploy \
34 | --stack-name "Karpenter-${CLUSTER_NAME}" \
35 | --template-file "${TEMPOUT}" \
36 | --capabilities CAPABILITY_NAMED_IAM \
37 | --parameter-overrides "ClusterName=${CLUSTER_NAME}" \
38 | --region ${AWS_REGION}
39 |
40 |
41 | #grant access to instances using the profile to connect to the cluster. This command adds the Karpenter node role to your aws-auth configmap,
42 | #allowing nodes with this role to connect to the cluster.
43 |
44 | eksctl create iamidentitymapping \
45 | --username system:node:{{EC2PrivateDNSName}} \
46 | --cluster ${CLUSTER_NAME} \
47 | --arn "arn:aws:iam::${ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}" \
48 | --group system:bootstrappers \
49 | --group system:nodes
50 |
51 | echo "Verify auth Map"
52 | kubectl describe configmap -n kube-system aws-auth
53 |
54 | # Create KarpenterController IAM Role
55 | echo "Create KarpenterController IAM Role"
56 |
57 | eksctl utils associate-iam-oidc-provider --cluster ${CLUSTER_NAME} --approve
58 |
59 | #Karpenter requires permissions like launching instances. This will create an AWS IAM Role, Kubernetes service account,
60 | #and associate them using IAM Roles for Service Accounts (IRSA)
61 | echo "Map AWS IAM Role Kubernetes service account"
62 |
63 | eksctl create iamserviceaccount \
64 | --cluster "${CLUSTER_NAME}" --name karpenter --namespace karpenter \
65 | --role-name "Karpenter-${CLUSTER_NAME}" \
66 | --attach-policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}" \
67 | --role-only \
68 | --approve
69 |
70 | export KARPENTER_IAM_ROLE_ARN="arn:aws:iam::${ACCOUNT_ID}:role/Karpenter-${CLUSTER_NAME}"
71 |
72 | #Create the EC2 Spot Linked Role
73 | echo "Create the EC2 Spot Linked Role"
74 | aws iam create-service-linked-role --aws-service-name spot.amazonaws.com 2> /dev/null || echo 'Already exists'
75 |
76 | #Helm Install Karpenter
77 | echo "Helm Install Karpenter"
78 | export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.endpoint" --output text)"
79 |
80 | helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --namespace karpenter --create-namespace \
81 | --version ${KARPENTER_VERSION} \
82 | --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=${KARPENTER_IAM_ROLE_ARN} \
83 | --set settings.aws.clusterName=${CLUSTER_NAME} \
84 | --set settings.aws.defaultInstanceProfile=KarpenterNodeInstanceProfile-${CLUSTER_NAME} \
85 | --set settings.aws.interruptionQueueName=${CLUSTER_NAME} \
86 | --set controller.resources.requests.cpu=1 \
87 | --set controller.resources.requests.memory=1Gi \
88 | --set controller.resources.limits.cpu=1 \
89 | --set controller.resources.limits.memory=1Gi \
90 | --wait
91 |
92 | #deploy Provisioner & AWSNodeTemplate
93 | echo "Provisioner & AWSNodeTemplate"
94 | cat <<EOF | kubectl apply -f -
[heredoc body and remainder of createkarpenter_before_v.32.sh lost in extraction]
--------------------------------------------------------------------------------
/deployment/keda/createkeda.sh:
--------------------------------------------------------------------------------
[most of this file lost in extraction; it writes ./deployment/keda/trust-relationship.json, ./deployment/keda/kedaScaleObject.yaml and ./deployment/keda/value.yaml via heredocs, of which only the tail of the value.yaml heredoc survives below]
321 | #     expr: … > 0
322 | # for: 2m
323 | # labels:
324 |
325 | permissions:
326 | metricServer:
327 | restrict:
328 | secret: false
329 | operator:
330 | restrict:
331 | secret: false
332 | EOF
333 |
--------------------------------------------------------------------------------
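
Since the `kedaScaleObject.yaml` heredoc did not survive in this dump, here is a minimal sketch of a KEDA ScaledObject for the repo's SQS queue, using KEDA's real `aws-sqs-queue` trigger and the variable names from `environmentVariables.sh`; `maxReplicaCount` and `queueLength` are illustrative values, not the repo's exact settings:

```shell
# sketch only: scale SQS_TARGET_DEPLOYMENT on queue depth, authenticating
# via the KEDA operator's own IRSA role (identityOwner: operator)
cat <<EOF | kubectl apply -f -
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-scaledobject
  namespace: ${SQS_TARGET_NAMESPACE}
spec:
  scaleTargetRef:
    name: ${SQS_TARGET_DEPLOYMENT}
  minReplicaCount: 1
  maxReplicaCount: 100
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: ${SQS_QUEUE_URL}
        queueLength: "5"
        awsRegion: ${AWS_REGION}
        identityOwner: operator
EOF
```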
/deployment/services/awsService.sh:
--------------------------------------------------------------------------------
1 | #******************
2 | # Deploy AWS Services
3 | #******************
4 | echo "${BLUE} Start deploying Dynamo & SQS"
5 |
6 | source ./deployment/environmentVariables.sh
7 |
8 |
9 | echo "${GREEN} Deploy Dynamo"
10 | Dynamo=$(aws dynamodb create-table \
11 | --table-name ${DYNAMODB_TABLE} --region ${AWS_REGION} \
12 | --attribute-definitions AttributeName=id,AttributeType=S AttributeName=messageProcessingTime,AttributeType=S \
13 | --key-schema AttributeName=id,KeyType=HASH AttributeName=messageProcessingTime,KeyType=RANGE \
14 | --billing-mode PROVISIONED \
15 | --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 --output text)
16 | echo "${GREEN} DynamoInstance : ${Dynamo}"
17 |
18 | echo "${GREEN} Deploy SQS"
19 | SQS=$(aws sqs create-queue --queue-name ${SQS_QUEUE_NAME} --region ${AWS_REGION} \
20 | --attributes FifoQueue=true,VisibilityTimeout=3600,MessageRetentionPeriod=345600,ContentBasedDeduplication=true)
21 | echo "${GREEN} SQSInstance : ${SQS}"
22 |
23 | echo "${GREEN} End deploying Dynamo & SQS"
--------------------------------------------------------------------------------
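
Both resources can be verified after the script runs, with standard CLI calls using the same variables:

```shell
source ./deployment/environmentVariables.sh
# the FIFO queue should exist and resolve to a URL
aws sqs get-queue-url --queue-name ${SQS_QUEUE_NAME} --region ${AWS_REGION}
# the table should reach ACTIVE once provisioning finishes
aws dynamodb describe-table --table-name ${DYNAMODB_TABLE} --region ${AWS_REGION} \
  --query 'Table.TableStatus'
```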
/img/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/.DS_Store
--------------------------------------------------------------------------------
/img/Karpenterversion.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/Karpenterversion.jpg
--------------------------------------------------------------------------------
/img/Keda.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/Keda.gif
--------------------------------------------------------------------------------
/img/YouTube-Logo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/YouTube-Logo.jpg
--------------------------------------------------------------------------------
/img/accountverify.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/accountverify.jpg
--------------------------------------------------------------------------------
/img/aws_kedakarpenter_arch_small.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/aws_kedakarpenter_arch_small.gif
--------------------------------------------------------------------------------
/img/deploymentcompleted.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/deploymentcompleted.jpg
--------------------------------------------------------------------------------
/img/deploymentverify.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/deploymentverify.jpg
--------------------------------------------------------------------------------
/img/karpenterhelm.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/karpenterhelm.jpg
--------------------------------------------------------------------------------
/img/mockarchitecture.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/mockarchitecture.jpg
--------------------------------------------------------------------------------
/img/node.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/node.jpg
--------------------------------------------------------------------------------
/img/nodescaling.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/nodescaling.jpg
--------------------------------------------------------------------------------
/img/pod.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/pod.jpg
--------------------------------------------------------------------------------
/img/podscaling.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/podscaling.jpg
--------------------------------------------------------------------------------
/img/runloadscript.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/runloadscript.jpg
--------------------------------------------------------------------------------
/img/setenv.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aws-samples/amazon-eks-scaling-with-keda-and-karpenter/7601a1bf17d61e5d766e9516b02181c4d72cbf5f/img/setenv.jpg
--------------------------------------------------------------------------------