├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── app.py ├── assets ├── dashboard │ └── text │ │ ├── FlexibilityScore.md │ │ ├── InstanceDiversificationScore.md │ │ ├── LaunchTemplateScore.md │ │ ├── PolicyScore.md │ │ └── ScalingScore.md ├── func_calculate_daily_metrics │ ├── index.py │ ├── libs_finder.py │ └── scores │ │ ├── __init__.py │ │ ├── instance_diversification_score.py │ │ ├── launch_template_score.py │ │ ├── policy_score.py │ │ └── scaling_score.py ├── func_custom_widget_account_rank │ ├── index.py │ ├── libs_finder.py │ ├── templates │ │ └── template.md │ └── test.json ├── func_custom_widget_accounts_scores │ ├── index.py │ ├── libs_finder.py │ ├── templates │ │ └── template.md │ └── test.json ├── func_custom_widget_org_score │ ├── htm_templates │ │ ├── gauge.html │ │ └── large_score.html │ ├── index.py │ ├── libs_finder.py │ └── test.json └── lambda_layer │ └── python │ ├── constants.py │ ├── helpers │ ├── __init__.py │ ├── cloudtrail_helpers.py │ ├── cloudwatch_helpers.py │ ├── date_helpers.py │ ├── ec2_helpers.py │ ├── organizations_helpers.py │ ├── s3_helpers.py │ └── sts_helpers.py │ ├── resource_managers │ ├── __init__.py │ ├── account_manager.py │ ├── asg_manager.py │ ├── instance_manager.py │ ├── launch_template_manager.py │ ├── libs_finder.py │ ├── org_manager.py │ └── resource_manager.py │ └── resources │ ├── __init__.py │ ├── account.py │ ├── asg.py │ ├── instance.py │ ├── launch_template.py │ ├── libs_finder.py │ └── resource.py ├── cdk.json ├── cdk ├── __init__.py ├── main_stack.py ├── modules │ ├── __init__.py │ ├── _lambda.py │ ├── cloudwatch.py │ └── s3.py └── utils │ ├── __init__.py │ └── stack_utils.py ├── docs ├── architecture.png ├── dashboard.png └── diagrams.drawio ├── requirements-dev.txt ├── requirements.txt └── source.bat /.gitignore: -------------------------------------------------------------------------------- 1 | *.swp 2 | package-lock.json 3 | __pycache__ 4 | .pytest_cache 5 | .venv 6 
| *.egg-info 7 | 8 | # CDK asset staging directory 9 | .cdk.staging 10 | cdk.out 11 | 12 | .idea* 13 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | ## Code of Conduct 2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 4 | opensource-codeofconduct@amazon.com with any additional questions or comments. 5 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional 4 | documentation, we greatly value feedback and contributions from our community. 5 | 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary 7 | information to effectively respond to your bug report or contribution. 8 | 9 | 10 | ## Reporting Bugs/Feature Requests 11 | 12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features. 13 | 14 | When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already 15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful: 16 | 17 | * A reproducible test case or series of steps 18 | * The version of our code being used 19 | * Any modifications you've made relevant to the bug 20 | * Anything unusual about your environment or deployment 21 | 22 | 23 | ## Contributing via Pull Requests 24 | Contributions via pull requests are much appreciated. 
Before sending us a pull request, please ensure that: 25 | 26 | 1. You are working against the latest source on the *main* branch. 27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already. 28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted. 29 | 30 | To send us a pull request, please: 31 | 32 | 1. Fork the repository. 33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change. 34 | 3. Ensure local tests pass. 35 | 4. Commit to your fork using clear commit messages. 36 | 5. Send us a pull request, answering any default questions in the pull request interface. 37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation. 38 | 39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and 40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/). 41 | 42 | 43 | ## Finding contributions to work on 44 | Looking at the existing issues is a great way to find something to contribute to. Our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix) by default, so looking at any 'help wanted' issues is a great place to start. 45 | 46 | 47 | ## Code of Conduct 48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). 49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact 50 | opensource-codeofconduct@amazon.com with any additional questions or comments. 
51 | 52 | 53 | ## Security issue notifications 54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public github issue. 55 | 56 | 57 | ## Licensing 58 | 59 | See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. 60 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT No Attribution 2 | 3 | Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of 6 | this software and associated documentation files (the "Software"), to deal in 7 | the Software without restriction, including without limitation the rights to 8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 9 | the Software, and to permit persons to whom the Software is furnished to do so. 10 | 11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 13 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 14 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 15 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 16 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
17 | 18 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # EC2 Flexibility Score Dashboard 3 | 4 | Authors: Borja Pérez Guasch, Arpit Sapra 5 | 6 | ## Introduction 7 | 8 | EC2 Flexibility Score assesses any configuration used to launch instances through an Auto Scaling Group (ASG) against the recommended EC2 9 | best practices. It converts best practice adoption into a “flexibility score” that can be used to identify, improve, 10 | and monitor the configurations (and, subsequently, overall organization-level adoption of Spot best practices) 11 | that may have room to improve flexibility by implementing architectural best practices. 12 | 13 | The following illustration shows the EC2 Flexibility Score Dashboard: 14 | 15 | ![image](docs/dashboard.png) 16 | 17 | ## In this page 18 | 19 | - [Understanding EC2 Flexibility Score](#understanding-ec2-flexibility-score) 20 | - [Project architecture](#project-architecture) 21 | - [Installation](#installation) 22 | - [Updating](#updating) 23 | - [Cleaning up](#cleaning-up) 24 | - [Using the tool](#using-the-tool) 25 | - [Security considerations](#security-considerations) 26 | 27 | ## Understanding EC2 Flexibility Score 28 | 29 | On a scale of 1 (worst) to 10 (best), the EC2 Flexibility Score is a weighted average of the four component scores seen below. The higher the score, the more likely a configuration is set up to effectively leverage the latest EC2 features and services. 30 | 31 | ### Components of EC2 Flexibility Score 32 | 33 | #### Instance Diversification score (25% weight) 34 | 35 | The flexibility to leverage several different instance types improves the likelihood of acquiring the desired EC2 capacity, 36 | particularly for EC2 Spot instances where instance diversification helps to replace Spot instances which may receive an 37 | instance termination notification. 
This component score provides insight into whether Auto Scaling configurations are set up to 38 | leverage a diverse set of instance types. Being flexible across several different instance types, including families, generations, sizes, 39 | Availability Zones, and AWS regions, increases the likelihood of accessing the desired compute capacity, 40 | as well as helping to effectively replace instances after EC2 Spot interruptions. 41 | 42 | Note: Launch configuration based ASGs receive a default score of 2. For Launch Templates, the score is calculated as below: 43 | 44 | Amount of configured instance types | Score 45 | ----|----- 46 | More than 15 or [Attribute-based Instance type selection](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-instance-type-requirements.html) | 10 47 | 11-15 | 8 48 | 6-10 | 6 49 | 2-5 | 4 50 | 1 | 2 51 | 52 | > What are the steps I can take to improve Instance Diversification Score? 53 | 54 | 1. Use Attribute Based Instance Selection (ABIS) to automate qualification and use of all possible instance types your workload can use. ABIS can only be used with Launch Templates (see the Launch Template Score section). 55 | 2. Use [EC2 Instance Selector](https://ec2spotworkshops.com/using_ec2_spot_instances_with_eks/040_eksmanagednodegroupswithspot/selecting_instance_types.html) to understand the various instance type options that can work for your requirements. 56 | 3. Check [EC2 Spot Best Practices](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-best-practices.html#be-instance-type-flexible): ensure that all Availability Zones are configured for use in your VPC and selected for your workload. 57 | 4. Use Karpenter to provision and scale capacity. 
If you are using node groups and cluster autoscaler with EKS, Karpenter can help improve instance diversification by launching right-sized compute resources in response to changing application load, thereby reducing waste as well as ensuring access to capacity across all eligible instance types. It is an open-source, highly performant cluster autoscaler for Kubernetes. 58 | 5. Use [Spot Placement Score](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-placement-score.html), which indicates how likely it is that a Spot request will succeed in an AWS region or an Availability Zone. The open source project [EC2 Spot Placement Score Tracker](https://github.com/aws-samples/ec2-spot-placement-score-tracker) offers the ability to track SPS over time for different configurations. 59 | 60 | #### Launch Template score (25% weight) 61 | 62 | The recommended best practice is to use Launch Templates (LTs), which allow access to the latest instance types and features of 63 | AWS Auto Scaling groups. Launch configurations (LCs) no longer add support for new Amazon EC2 instance types that are released after 64 | December 31, 2022. This component score is calculated as the share of vCPU-hours driven from Auto Scaling groups that use Launch Templates. 65 | Accounts using Launch Templates for all EC2 usage receive a score of 10. 66 | 67 | Launch Template Score = `vCPU-hours from LTs / (vCPU-hours from LCs + vCPU-hours from LTs) * 10` 68 | 69 | > What are the steps I can take to improve Launch Template Score? 70 | 71 | 1. Use [Launch Templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-templates.html) to create any new Autoscaling Groups. 72 | 2. [Migrate your Launch Configurations to Launch Templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/migrate-to-launch-templates.html). 73 | 3. 
Follow [this workshop](https://ec2spotworkshops.com/ec2-auto-scaling-with-multiple-instance-types-and-purchase-options.html) to familiarise yourself with general Auto Scaling best practices. 74 | 75 | #### Policy score (15% weight) 76 | 77 | This component measures whether you are leveraging proactive scaling policies such as Predictive scaling (score of 10), 78 | or reactive scaling policies such as Target Tracking (score of 6.67), or Simple/Step scaling policies (score of 3.33). 79 | To capture the score across the whole account, the Scaling Policy score is weighted by the vCPU-hours driven from different scaling 80 | policies across different Auto Scaling groups. 81 | 82 | > What are the steps I can take to improve Scaling Policy Score? 83 | 84 | 1. Consider using [Predictive Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html), which uses Machine Learning to predict capacity requirements based on historical usage from CloudWatch. Check out a [hands-on workshop](https://ec2spotworkshops.com/efficient-and-resilient-ec2-auto-scaling/lab1/10-predictive-scaling.html) to implement predictive scaling. 85 | 2. Evaluate if using a [Target Tracking Policy](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html) can better serve your scaling needs than Simple/Step Scaling Policies. 86 | 87 | #### Scaling score (35% weight) 88 | 89 | Scaling Score measures the elasticity of the current usage patterns. 90 | It’s calculated as the ratio of maximum running instances (Peak) to minimum running instances (Trough) on any day. 91 | The ratio translates to a score as shown below, on a scale of 1-10. 92 | 93 | Ratio | Score 94 | ----|----- 95 | Greater than 1.07 | 10 96 | 1.05 - 1.07 | 7.5 97 | 1.02 - 1.05 | 5 98 | Lower than 1.02 | 2.5 99 | 100 | > I have a low scaling score. What does it mean? 
101 | 102 | A low scaling score means that the usage in the specific account doesn't scale up or down much over time, which could be an indicator of over-provisioning. 103 | While this is workload dependent, as some workloads may need the ability to scale more than others, it is worth evaluating if there is room for efficiency by using a more dynamic scaling approach. 104 | This could lead to greater cost savings and reduce waste of compute resources. 105 | 106 | > What are the steps I can take to improve scaling score? 107 | 108 | 1. Understand the different [scaling options](https://docs.aws.amazon.com/autoscaling/ec2/userguide/scale-your-group.html) you can use with Autoscaling Groups. 109 | 2. Adopt [Karpenter](https://aws.amazon.com/blogs/aws/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler/). If you are using Managed Node Groups with EKS, Karpenter can help you improve your application availability and cluster efficiency by rapidly launching right-sized compute resources in response to changing application load, thus reducing waste. It is an open-source, highly performant cluster autoscaler for Kubernetes. 110 | 3. Set up [AWS Compute Optimizer](https://aws.amazon.com/compute-optimizer/) (free to use), which can help you right-size over-provisioned / under-utilized instances. 111 | 112 | ## Project architecture 113 | 114 | ![image](docs/architecture.png) 115 | 116 | The open source project consists of a CDK IaC project designed to be deployed in the payer account (1) 117 | (where you have AWS Organizations configured) for calculations to be performed in a centralised and aggregated manner. 118 | If you don’t use AWS Organizations, you can still deploy the same CDK project in any of your accounts, 119 | and the calculations will be performed only considering the resources deployed in that account. 
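The account-selection fallback described above (score every member account when AWS Organizations is configured, otherwise only the deployment account) can be sketched as a small helper. The function name and shape below are hypothetical, not the project's actual code:

```python
# Illustrative sketch only: decide which accounts to calculate scores for.
# Assumes the caller has already listed member accounts via AWS Organizations
# (or passed None/empty when the account is not part of an organization).

def target_account_ids(org_account_ids, current_account_id):
    """Return the account IDs to process.

    org_account_ids: IDs obtained from Organizations, or None/empty when
    AWS Organizations is not in use in the deployment account.
    current_account_id: the account where the stack is deployed.
    """
    if org_account_ids:
        # Payer-account deployment: aggregate across the whole organization.
        return list(org_account_ids)
    # Standalone deployment: only the local account's resources are scored.
    return [current_account_id]
```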
120 | 121 | The Lambda function `DailyMetricsCalculation` is executed every 3 hours by means of an Amazon EventBridge rule (2) to pull events 122 | from CloudTrail and identify resources deployed in the account (4). It has been configured with a 3-hour rate so that the 123 | produced CloudWatch metrics don't lose resolution. By using CloudTrail events, ASGs, Launch Templates, EC2 Instances 124 | and other resources are detected even if they have been deleted by the time the function is executed. 125 | 126 | Once all relevant resources are identified, scores are calculated and published in CloudWatch as custom metrics (6). 127 | The same scores are also uploaded to S3 (5) so that you can implement your own visual representations. 128 | 129 | A CloudWatch dashboard reads these CloudWatch metrics to display the weighted scores over a time range (7). 130 | 131 | ## Requirements 132 | 133 | - Python >= 3.8 134 | - [CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html) >= 2.88.0 135 | - [venv](https://docs.python.org/3/library/venv.html) 136 | - [AWS Command Line Interface (CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-prereqs.html) 137 | 138 | Follow [these steps](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to configure the AWS CLI with your AWS account. 139 | 140 | ## Installation 141 | 142 | ### 1. Cloning the repository 143 | 144 | After navigating to your path of choice, run this command to clone this repository: 145 | 146 | ```bash 147 | git clone git@github.com:aws-samples/ec2-flexibility-score-dashboard.git 148 | ``` 149 | 150 | ### 2. 
Creating a virtual environment and installing project dependencies 151 | 152 | #### 2.1 Creating the virtual environment 153 | 154 | ```bash 155 | cd ec2-flexibility-score-dashboard 156 | python3 -m venv .venv 157 | ``` 158 | 159 | #### 2.2 Installing project dependencies in the virtual environment 160 | 161 | ```bash 162 | source .venv/bin/activate 163 | python -m pip install -r requirements.txt 164 | ``` 165 | 166 | ### 3. Bootstrapping your AWS account 167 | 168 | Deploying AWS CDK apps into an AWS environment may require that you provision resources the AWS CDK needs to perform the deployment. 169 | These resources include an Amazon S3 bucket for storing files and **IAM roles that grant permissions needed to perform deployments**. 170 | Execute the following command to bootstrap your environment: 171 | 172 | ```bash 173 | cdk bootstrap 174 | ``` 175 | 176 | You can read more about this process [here](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html). 177 | 178 | ### 4. Deploying using CDK 179 | 180 | An important part of the project is the name of the IAM role that the `DailyMetricsCalculation` function assumes to retrieve 181 | CloudTrail logs from all the accounts in your organisation. The name of this IAM role is obtained from a stack parameter named `ParamOrgRoleName`, 182 | which has a default value of `OrganizationAccountAccessRole`. This is the name of the IAM role that AWS Organisations creates when you 183 | create an account in your organisation (view the [Security considerations section](#security-considerations) for more details). 
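For illustration, the role name carried by `ParamOrgRoleName` feeds into the standard cross-account assume-role ARN pattern. The helper below is hypothetical and not the project's actual code; only the ARN format itself is standard AWS:

```python
# Hypothetical helper: build the ARN of the role the Lambda function would
# assume in each member account, given the role name from ParamOrgRoleName.

def member_account_role_arn(account_id: str,
                            role_name: str = "OrganizationAccountAccessRole") -> str:
    """Return the standard IAM role ARN for a member account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"
```

A custom role name passed via `--parameters ParamOrgRoleName=...` would simply replace the default in this pattern.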
184 | 185 | To deploy the stack without overwriting the value of the `ParamOrgRoleName` parameter, run the following command: 186 | 187 | ```bash 188 | cdk deploy 189 | ``` 190 | 191 | If you want to use a custom IAM role, you can specify its name when deploying the stack as shown below: 192 | 193 | ```bash 194 | cdk deploy --parameters ParamOrgRoleName=<ROLE_NAME> 195 | ``` 196 | 197 | The deployment process will take roughly **3 minutes** to complete. In the meantime, you can visit [using the tool](#using-the-tool). 198 | 199 | ## Updating 200 | 201 | To update the project, navigate to the directory where you initially cloned the project and execute the following command: 202 | 203 | ```bash 204 | git pull 205 | cdk deploy 206 | ``` 207 | 208 | ## Cleaning up 209 | 210 | Option 1) deleting all the resources created by CDK using the AWS Console: 211 | 212 | 1. Navigate to the **CloudFormation** section in the AWS console. 213 | 2. Select the stack named **FlexibilityScore** and click on **Delete**. 214 | 215 | Option 2) deleting all the resources created by CDK using the CLI: 216 | 217 | Navigate to the directory where you initially cloned the project and execute the following command: 218 | 219 | ```bash 220 | cdk destroy 221 | ``` 222 | 223 | ## Using the tool 224 | 225 | Once you have deployed the project: 226 | 227 | 1. In the AWS Console, navigate to the CloudWatch service page 228 | 2. On the left-side panel, click on **Dashboards** 229 | 3. Under **Custom Dashboards**, click on **Flexibility-Score-Dashboard** 230 | 231 | The dashboard uses some Lambda-backed custom widgets to display information. The first time that you open the dashboard 232 | you'll see a message asking you to allow the widgets to execute the Lambda functions. Click on **Allow always**. 233 | 234 | The dashboard will show no data until the `DailyMetricsCalculation` function is executed for the first time. 
235 | **Allow for up to 5 minutes after deploying the project to see the widgets rendering the scores.** 236 | 237 | The dashboard will show the component scores and the weighted Flexibility Score in the time range that you have selected. To update it, 238 | use the controls at the top of the screen. 239 | 240 | ## Security considerations 241 | 242 | In order for the `DailyMetricsCalculation` function to be able to retrieve CloudTrail logs from other accounts, 243 | it needs to assume an IAM role that grants it permission to do so. By default, when an account is created in your organization, 244 | AWS Organizations automatically creates an IAM role that is named `OrganizationAccountAccessRole` (you can read more about it [here](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_access.html)). 245 | This role has administrator privileges. 246 | 247 | If you want to use a custom IAM role, be aware that the role must exist in all the accounts in your organization for which 248 | you want to calculate the Flexibility Score. 
Also, the following IAM statements must be included for the Lambda function 249 | to execute successfully: 250 | 251 | ```json 252 | { 253 | "Action": [ 254 | "cloudtrail:LookupEvents", 255 | "cloudwatch:PutMetricData", 256 | "ec2:DescribeRegions", 257 | "organizations:DescribeOrganization", 258 | "organizations:ListAccounts" 259 | ], 260 | "Resource": "*", 261 | "Effect": "Allow" 262 | } 263 | ``` -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import cdk_nag 4 | 5 | from aws_cdk import ( 6 | Aspects, App 7 | ) 8 | from cdk import MainStack 9 | 10 | app = App() 11 | MainStack(app, "FlexibilityScoreStack") 12 | 13 | # Check for best practices 14 | Aspects.of(app).add(cdk_nag.AwsSolutionsChecks()) 15 | 16 | app.synth() 17 | -------------------------------------------------------------------------------- /assets/dashboard/text/FlexibilityScore.md: -------------------------------------------------------------------------------- 1 | # Understanding Flexibility Score 2 | 3 | EC2 Flexibility Score Dashboard assesses any configuration used to launch instances through an Autoscaling Group (ASG) against the recommended EC2 best practices. It converts the best practice adoption on the following four components into a “Flexibility Score” that can be used to identify, improve, and monitor the configurations (and, subsequently, overall organization-level adoption of Spot best practices) that may have room to improve flexibility by implementing architectural best practices. On a scale of 1 (worst) to 10 (best), the Flexibility Score is a weighted average of the four ‘component scores’ seen below. 4 | 5 | The higher the score, the more likely a configuration is set up to effectively leverage the latest EC2 features and services. 
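The weighted average described above, using the component weights stated in this dashboard (Instance Diversification 25%, Launch Template 25%, Policy 15%, Scaling 35%), can be sketched in a few lines of Python. The names below are illustrative, not the project's actual code:

```python
# Illustrative sketch of the Flexibility Score as a weighted average of the
# four component scores, each on a 1-10 scale. Weights are as documented.
WEIGHTS = {
    "instance_diversification": 0.25,
    "launch_template": 0.25,
    "policy": 0.15,
    "scaling": 0.35,
}

def flexibility_score(component_scores: dict) -> float:
    """Weighted average of the four component scores."""
    return sum(WEIGHTS[name] * score for name, score in component_scores.items())
```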
6 | 7 |   8 | 9 | ## Components of the Flexibility Score 10 | 11 | To build this score, we look at how a customer is using different features that indicate the Flexibility posture of a customer at any given point in time. We then compare these features against the set of best practices for each component, and generate a score for each component. Finally, the scores are combined using a weight assigned to each component, in order of its importance in improving (or indicating) the flexibility of a customer. This gives the Flexibility Score on a scale of 1 (worst) to 10 (best). The higher the score, the better a customer is set up to leverage the best that AWS EC2 offers, including EC2 Spot. -------------------------------------------------------------------------------- /assets/dashboard/text/InstanceDiversificationScore.md: -------------------------------------------------------------------------------- 1 | # Instance Diversification Score (25% weight) 2 | 3 | The flexibility to leverage several different instance types improves the likelihood of acquiring the desired EC2 capacity, particularly for EC2 Spot instances where instance diversification helps to replace Spot instances which may receive an instance termination notification. This component score provides insight into whether Autoscaling configurations are set up to leverage a diverse set of instance types. Being flexible across several different instance types, including families, generations, sizes, Availability Zones, and AWS regions, increases the likelihood of accessing the desired compute capacity, as well as helping to effectively replace instances after EC2 Spot interruptions. 4 | 5 | Note: Launch configuration based ASGs receive a default score of 2. 
For Launch Templates, the score is calculated as below: 6 | 7 |   8 | 9 | Amount of configured Instance types | Score 10 | ----|----- 11 | More than 15 or [Attribute-based Instance type selection](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-instance-type-requirements.html) | 10 12 | 11-15 | 8 13 | 6-10 | 6 14 | 2-5 | 4 15 | 1 | 2 16 | 17 |   18 | 19 | ## What are the steps I can take to improve Instance Diversification Score? 20 | 21 | 1. Use Attribute Based Instance Selection (ABIS) to automate qualification and use of all possible instance types your workload can use. ABIS can only be used with Launch Templates (see the Launch Template Score section). 22 | 2. Use [EC2 Instance Selector](https://ec2spotworkshops.com/using_ec2_spot_instances_with_eks/040_eksmanagednodegroupswithspot/selecting_instance_types.html) to understand the various instance type options that can work for your requirements. 23 | 3. Check [EC2 Spot Best Practices](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-best-practices.html#be-instance-type-flexible): ensure that all Availability Zones are configured for use in your VPC and selected for your workload. 24 | 4. Use Karpenter to provision and scale capacity. If you are using node groups and cluster autoscaler with EKS, Karpenter can help improve instance diversification by launching right-sized compute resources in response to changing application load, thereby reducing waste as well as ensuring access to capacity across all eligible instance types. It is an open-source, highly performant cluster autoscaler for Kubernetes. 25 | 5. Use [Spot Placement Score](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-placement-score.html), which indicates how likely it is that a Spot request will succeed in an AWS region or an Availability Zone. 
The open source project [EC2 Spot Placement Score Tracker](https://github.com/aws-samples/ec2-spot-placement-score-tracker) offers the ability to track SPS over time for different configurations. 26 | -------------------------------------------------------------------------------- /assets/dashboard/text/LaunchTemplateScore.md: -------------------------------------------------------------------------------- 1 | # Launch Template Score (25% weight) 2 | 3 | The recommended best practice is to use Launch Templates (LTs), which allow access to the latest instance types and features of AWS Autoscaling Groups. Launch configurations (LCs) no longer add support for new Amazon EC2 instance types that are released after December 31, 2022. This component score is calculated as the share of vCPU-hours driven from Autoscaling Groups that use Launch Templates. Accounts using Launch Templates for all EC2 usage receive a score of 10. 4 | 5 | Launch Template Score = `vCPUHours from LTs / (vCPUHours from LCs + vCPUHours from LTs) * 10` 6 | 7 |   8 | 9 | ## What are the steps I can take to improve Launch Template Score? 10 | 1. Use [Launch Templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-templates.html) to create any new Autoscaling Groups. 11 | 2. [Migrate your Launch Configurations to Launch Templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/migrate-to-launch-templates.html). 12 | 3. Follow [this workshop](https://ec2spotworkshops.com/ec2-auto-scaling-with-multiple-instance-types-and-purchase-options.html) to familiarise yourself with general Auto Scaling best practices. 
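The Launch Template Score formula above can be sketched as a small Python function. The function name and the behaviour when there is no ASG usage at all are assumptions for illustration, not the project's actual implementation:

```python
# Minimal sketch of: vCPUHours from LTs / (vCPUHours from LCs + LTs) * 10

def launch_template_score(vcpu_hours_lt: float, vcpu_hours_lc: float) -> float:
    """Share of ASG vCPU-hours driven by Launch Templates, on a 0-10 scale."""
    total = vcpu_hours_lt + vcpu_hours_lc
    if total == 0:
        # Assumption for this sketch: no ASG usage at all is not penalised.
        return 10.0
    return vcpu_hours_lt / total * 10
```

An account running everything from Launch Templates scores 10; a 50/50 split between LTs and LCs scores 5.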
-------------------------------------------------------------------------------- /assets/dashboard/text/PolicyScore.md: -------------------------------------------------------------------------------- 1 | # Policy Score (15% weight) 2 | 3 | This component measures whether a customer is leveraging proactive scaling policies such as Predictive scaling (score of 10), 4 | or reactive scaling policies such as Target Tracking (score of 6.67), or Simple/Step scaling policies (score of 3.33). 5 | To capture the score across the whole account, Scaling Policy score is weighted by the vCPU-Hours driven from different scaling policies across different Autoscaling Groups. 6 | 7 |   8 | 9 | ## What are the steps I can take to improve Scaling Policy Score? 10 | 11 | 1. Consider using [Predictive Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html), which uses Machine Learning to predict capacity requirements based on historical usage from CloudWatch. Check out a [hands-on workshop](https://ec2spotworkshops.com/efficient-and-resilient-ec2-auto-scaling/lab1/10-predictive-scaling.html) to implement predictive scaling. 12 | 2. Evaluate if using [Target Tracking Policy](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html) can better serve your scaling needs than Simple/Step Scaling Policies. -------------------------------------------------------------------------------- /assets/dashboard/text/ScalingScore.md: -------------------------------------------------------------------------------- 1 | # Scaling Score (35% weight) 2 | 3 | Scaling Score measures the elasticity of the current usage patterns, captured at the level of a specific AWS Account ID. 4 | It’s calculated as the ratio of Max Running Instances (Peak) to the Minimum running instances (Trough) on any day. The ratio translates to a score as shown below, on a scale of 1-10. 5 | 6 | Ratio >1.07 (i.e. 
Peak instances used concurrently were 7% higher than the Trough) = Score of 10 7 | Ratio 1.05 - 1.07 = Score of 7.5 8 | Ratio 1.02 - 1.05 = Score of 5 9 | Ratio <1.02 = Score of 2.5 10 | 11 |   12 | 13 | ## I have a low scaling score. What does it mean? 14 | A low scaling score means that the usage in the specific account doesn't scale up or down much over time, which could be an indicator of over-provisioning. 15 | While this is workload dependent, as some workloads may need the ability to scale more than others, it is worth evaluating if there is room for efficiency by using a more dynamic scaling approach. 16 | This could lead to greater cost savings and reduce waste of compute resources. 17 | 18 |   19 | 20 | ## What are the steps I can take to improve scaling score? 21 | 1. Understand the different [scaling options](https://docs.aws.amazon.com/autoscaling/ec2/userguide/scale-your-group.html) you can use with Autoscaling Groups. 22 | 2. Adopt [Karpenter](https://aws.amazon.com/blogs/aws/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler/). If you are using Managed Node Groups with EKS, Karpenter can help you improve your application availability and cluster efficiency by rapidly launching right-sized compute resources in response to changing application load, thus reducing waste. It is an open-source, highly performant cluster autoscaler for Kubernetes. 23 | 3. Set up [AWS Compute Optimizer](https://aws.amazon.com/compute-optimizer/) (free to use), which can help you right-size over-provisioned / under-utilized instances. -------------------------------------------------------------------------------- /assets/func_calculate_daily_metrics/index.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: calculates the Flexibility Score using 4 components: Scaling Score, Launch Template Score, Scaling Policy Score and Instance Diversification Score. This index handler retrieves data that the modules need to calculate the scores. 21 | 22 | import json 23 | 24 | from datetime import datetime, timedelta 25 | from scores import * 26 | from libs_finder import * 27 | 28 | 29 | def generate_time_window_for_fetching_data(beforehand_days: int = 1) -> TimeWindow: 30 | """ 31 | Generates a time window that is used to retrieve CloudTrail events. 32 | By default, the time window is the day before the time of execution. 
33 | 34 | :return: TimeWindow object with a start time and end time 35 | """ 36 | 37 | target_day = datetime.now() - timedelta(beforehand_days) 38 | 39 | start_time = datetime(target_day.year, target_day.month, target_day.day, 0, 0, 0, tzinfo=date_helpers.get_timezone()) 40 | end_time = datetime(target_day.year, target_day.month, target_day.day, 23, 59, 59, tzinfo=date_helpers.get_timezone()) 41 | 42 | return TimeWindow(start_time, end_time) 43 | 44 | 45 | def publish_metrics_in_cloudwatch(metrics_by_account: dict) -> None: 46 | metric_data = [ 47 | { 48 | 'MetricName': name, 49 | 'Value': value, 50 | 'Unit': 'None', 51 | 'Timestamp': datetime.now(), 52 | 'Dimensions': [ 53 | { 54 | 'Name': CW_DIMENSION_NAME_ACCOUNT_ID, 55 | 'Value': account_id 56 | }, 57 | ], 58 | } 59 | for account_id, metrics in metrics_by_account.items() for name, value in metrics.items() 60 | ] 61 | 62 | print('Publishing metrics to CloudWatch...', end=' ') 63 | 64 | cloudwatch_helpers.put_metric_data(CW_NAMESPACE, metric_data) 65 | 66 | print('done!') 67 | 68 | 69 | def upload_metrics_to_s3(metrics_by_account: dict) -> None: 70 | folder = date_helpers.datetime_to_str(datetime_format='%Y/%m/%d') 71 | file_name = date_helpers.datetime_to_str(datetime_format='%H%M%S') 72 | bucket = os.environ['BUCKET'] 73 | 74 | print('Uploading metrics to S3...', end=' ') 75 | 76 | s3_helpers.upload_file_contents(bucket, f'{folder}/{file_name}.json', json.dumps(metrics_by_account)) 77 | 78 | print('done!') 79 | 80 | 81 | def calculate_daily_metrics() -> dict: 82 | """ 83 | Calculates daily account and organization metrics 84 | 85 | :return: dictionary containing the metrics 86 | """ 87 | 88 | metrics = { 89 | a_id: { 90 | CW_METRIC_NAME_VCPU_H: account.vcpu_h, 91 | SCORE_LT: launch_template_score.calculate(account.instances), 92 | SCORE_POLICY: policy_score.calculate(account.instances, account.asg), 93 | SCORE_DIVERSIFICATION: instance_diversification_score.calculate(account.asg, account.lt), 94 | 
SCORE_SCALING: scaling_score.calculate(account.instances) 95 | } 96 | 97 | for a_id, account in account_manager.accounts.items() 98 | } 99 | 100 | metrics.update({ 101 | CW_DIMENSION_VALUE_ORG: org_manager.calculate_scores(metrics) 102 | }) 103 | 104 | return metrics 105 | 106 | 107 | def handler(event, context): 108 | # Build a time window (yesterday throughout the day) for calculating metrics 109 | tw = generate_time_window_for_fetching_data() 110 | 111 | # Fetch the organization's accounts and their resources 112 | account_manager.fetch_resources(tw) 113 | 114 | # Calculate account metrics 115 | metrics = calculate_daily_metrics() 116 | 117 | print('Calculated metrics', metrics) 118 | 119 | # Publish metrics in CloudWatch and upload them to S3 120 | publish_metrics_in_cloudwatch(metrics) 121 | upload_metrics_to_s3(metrics) 122 | 123 | 124 | if __name__ == '__main__': 125 | """ 126 | Convenience method to test the function outside of the AWS cloud 127 | """ 128 | 129 | handler({}, None) 130 | -------------------------------------------------------------------------------- /assets/func_calculate_daily_metrics/libs_finder.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 
11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 18 | # 19 | # Author: Borja Pérez Guasch 20 | 21 | import os 22 | 23 | 24 | def is_aws_env() -> bool: 25 | return 'AWS_LAMBDA_FUNCTION_NAME' in os.environ or 'AWS_EXECUTION_ENV' in os.environ 26 | 27 | 28 | if is_aws_env(): 29 | from constants import * 30 | from helpers import * 31 | from resource_managers import * 32 | else: 33 | from assets.lambda_layer.python.constants import * 34 | from assets.lambda_layer.python.helpers import * 35 | from assets.lambda_layer.python.resource_managers import * 36 | -------------------------------------------------------------------------------- /assets/func_calculate_daily_metrics/scores/__init__.py: -------------------------------------------------------------------------------- 1 | from . import instance_diversification_score, launch_template_score, policy_score, scaling_score 2 | -------------------------------------------------------------------------------- /assets/func_calculate_daily_metrics/scores/instance_diversification_score.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: this module measures how well-diversified a Mixed Instance ASG is, to leverage EC2 Spot in the most cost-capacity effective way. 21 | 22 | 23 | def calculate(asg_data: dict, lt_data: dict): 24 | """ 25 | Calculates the Instance diversification score. 
26 | 27 | :param asg_data: structure with fetched information about ASGs in the time window being evaluated 28 | :param lt_data: structure with fetched information about Launch Templates in the time window being evaluated 29 | 30 | :return: numeric value of the Instance Diversification Score 31 | """ 32 | 33 | score = 0 34 | 35 | for _, asg in asg_data.items(): 36 | # Launch configuration based ASGs get the min score 37 | if asg.lt is None: 38 | score += 1 39 | # Overrides take precedence so check those first 40 | elif asg.overrides.uses_abis: 41 | score += 5 42 | elif asg.overrides.instance_count is not None: 43 | if asg.overrides.instance_count > 15: 44 | score += 5 45 | elif 11 <= asg.overrides.instance_count <= 15: 46 | score += 4 47 | elif 6 <= asg.overrides.instance_count <= 10: 48 | score += 3 49 | elif 2 <= asg.overrides.instance_count <= 5: 50 | score += 2 51 | else: 52 | score += 1 53 | # This means that the ASG uses LT and does not have overrides. Get the score from the LT configuration 54 | else: 55 | # Verify that we fetched the data of the LT used by the ASG 56 | if asg.lt.id in lt_data and asg.lt.version in lt_data[asg.lt.id].versions: 57 | used_version = lt_data[asg.lt.id].versions[asg.lt.version] 58 | 59 | # If the version of the used LT uses ABS add the max score, otherwise add the min score 60 | score += 5 if used_version.uses_abis else 1 61 | 62 | # Average the score across all ASGs 63 | score /= max(1, len(asg_data)) 64 | 65 | # Scale the score to 10 66 | return score * 10 / 5 67 | -------------------------------------------------------------------------------- /assets/func_calculate_daily_metrics/scores/launch_template_score.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: this module is calculated as the ratio of normalized instance hours (NIH) driven by Launch Templates to NIH driven through Launch Configuration based Autoscaling groups. 21 | 22 | 23 | def calculate(instances: dict): 24 | """ 25 | Calculates the Launch Template score. 
26 | 27 | :param instances: structure with fetched information about Instances 28 | 29 | :return: numeric value of the Launch Template Score 30 | """ 31 | 32 | lt_vcpu_h = 0 33 | lc_vcpu_h = 0 34 | 35 | for _, instance in instances.items(): 36 | # Launched in an ASG driven by a Launch Template 37 | if instance.asg_name is not None and instance.lt is not None: 38 | lt_vcpu_h += instance.vcpu_h 39 | # Launched in an ASG driven by a Launch Configuration 40 | elif instance.asg_name is not None and instance.lt_name is None: 41 | lc_vcpu_h += instance.vcpu_h 42 | 43 | return 0 if lt_vcpu_h == lc_vcpu_h == 0 else (lt_vcpu_h / (lt_vcpu_h + lc_vcpu_h)) * 10 44 | -------------------------------------------------------------------------------- /assets/func_calculate_daily_metrics/scores/policy_score.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: this module measures whether a customer is leveraging Predictive scaling (score of 3), Target Tracking (score of 2), or Simple/Step scaling policies (score of 1). The score is averaged across instances launched by ASGs with scaling policies and then scaled to 10. 21 | 22 | 23 | SP_TYPE_PREDICTIVE = 'PredictiveScaling' 24 | SP_TYPE_TARGET_TRACKING = 'TargetTrackingScaling' 25 | SP_TYPE_SIMPLE = 'SimpleScaling' 26 | SP_TYPE_STEP = 'StepScaling' 27 | 28 | 29 | def calculate(instance_data: dict, asg_data: dict): 30 | """ 31 | Calculates the Scaling Policy score. 32 | 33 | :param instance_data: structure with fetched information about Instances in the time window being evaluated 34 | :param asg_data: structure with fetched information about ASGs in the time window being evaluated 35 | 36 | :return: numeric value of the Scaling Policy Score 37 | """ 38 | 39 | asg_driven_launches = 0 40 | score = 0 41 | 42 | for _, instance in instance_data.items(): 43 | # The Instance was launched in an ASG 44 | if instance.asg_name is not None: 45 | # Check if we could fetch the Instance's ASG data from CloudTrail 46 | if instance.asg_name in asg_data and asg_data[instance.asg_name].sp is not None: 47 | asg_driven_launches += 1 48 | asg = asg_data[instance.asg_name] 49 | 50 | if asg.sp == SP_TYPE_PREDICTIVE: 51 | score += 3 52 | elif asg.sp == SP_TYPE_TARGET_TRACKING: 53 | score += 2 54 | elif asg.sp == SP_TYPE_SIMPLE or asg.sp == SP_TYPE_STEP: 55 | score += 1 56 | 57 | # Average the score across all Instances launched in ASGs with scaling policies 58 | score /= max(1, asg_driven_launches) 59 | 60 | # Scale the score to 10 61 | return score * 10 / 3 62 | -------------------------------------------------------------------------------- /assets/func_calculate_daily_metrics/scores/scaling_score.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: this module is calculated as the ratio of Max Running Instances to the Minimum running instances on any day. 21 | 22 | 23 | def calculate(instances: dict): 24 | """ 25 | Calculates the scaling score. 26 | 27 | To identify overlapping running instances, generates a sorted list with all start times and end times. 28 | Then the list is traversed and a counter is increased when finding a start time, 29 | and decreased when finding an end time. 
30 | 31 | :param instances: structure with fetched information about Instances 32 | 33 | :return: numeric value of the Scaling Score 34 | """ 35 | 36 | min_i = None # Minimum concurrent running Instances 37 | max_i = 0 # Maximum concurrent running Instances 38 | count = 0 # Variable to help with concurrency calculations 39 | prev_start_time = None # Variable to store the last evaluated start time 40 | 41 | # Flatten running windows of all Instances and sort by time 42 | windows = [(window.start_time, 'start') for _, instance in instances.items() for window in instance.running_windows] 43 | windows += [(window.end_time, 'end') for _, instance in instances.items() for window in instance.running_windows 44 | if window.end_time is not None] 45 | windows = sorted(windows, key=lambda w: w[0]) 46 | 47 | for window in windows: 48 | # Increment the minimum number of running Instances if more than one consecutive start times are the same 49 | if prev_start_time is not None and prev_start_time == window[0] and window[1] == 'start': 50 | min_i += 1 51 | 52 | # Store the start time for the next iteration 53 | if window[1] == 'start': 54 | prev_start_time = window[0] 55 | 56 | count += 1 if window[1] == 'start' else -1 57 | max_i = max(max_i, count) 58 | min_i = count if min_i is None else min(count, min_i) 59 | 60 | if min_i is None: 61 | min_i = 0 62 | 63 | # To prevent division by 0 64 | min_i = max(min_i, 1) 65 | 66 | # Calculate the ratio and the score 67 | ratio = max_i / min_i 68 | 69 | if ratio > 1.07: 70 | score = 4 71 | elif ratio > 1.05: 72 | score = 3 73 | elif ratio > 1.02: 74 | score = 2 75 | else: 76 | score = 1 77 | 78 | # Scale the score to 10 79 | return score * 10 / 4 80 | -------------------------------------------------------------------------------- /assets/func_custom_widget_account_rank/index.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. 
or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: Function that implements the backend of a CloudWatch custom widget 21 | 22 | import json 23 | 24 | from string import Template 25 | from datetime import datetime, timedelta 26 | from libs_finder import * 27 | 28 | DEFAULT_COUNT = 3 29 | ACCOUNT_LINE_FORMAT = '{} | {}' 30 | 31 | 32 | def extract_count_param(widgetContext: dict) -> int: 33 | try: 34 | return widgetContext['params']['count'] 35 | except (KeyError, TypeError): 36 | return DEFAULT_COUNT 37 | 38 | 39 | def generate_markdown(scores: dict, count: int) -> str: 40 | with open('templates/template.md') as fd: 41 | md = fd.read() 42 | 43 | template = Template(md) 44 | 45 | if not scores: 46 | return template.substitute( 47 | count=count, 48 | topAccounts='*No data*', 49 | bottomAccounts='*No data*' 50 | ) 51 | else: 52 | # Convert the dictionary to a list 53 | scores = [{'id': a_id, **a_scores} for a_id, a_scores in scores.items()] 54 | 55 | # Sort the scores in descending order of Flexibility Score 56 | scores = sorted(scores, key=lambda x: x[SCORE_FLEXIBILITY], reverse=True) 57 | top_accounts = [] 58 | bottom_accounts = [] 59 | 60 | for i in range(min(count, len(scores))): 61 | top_accounts.append(ACCOUNT_LINE_FORMAT.format(scores[i]["id"], scores[i][SCORE_FLEXIBILITY])) 62 | bottom_accounts.append(ACCOUNT_LINE_FORMAT.format(scores[-i - 1]["id"], scores[-i - 1][SCORE_FLEXIBILITY])) 63 | 64 | return template.substitute( 65 | count=count, 66 | topAccounts='\n'.join(top_accounts), 67 | bottomAccounts='\n'.join(bottom_accounts) 68 | ) 69 | 70 | 71 | def handler(event, context): 72 | time_range = event['widgetContext']['timeRange'] 73 | 74 | # Extract the top accounts to show 75 | count = extract_count_param(event['widgetContext']) 76 | 77 | # Get the scores 78 | scores = account_manager.fetch_scores(start_time=time_range['start'] // 1000, 79 | end_time=time_range['end'] // 1000) 80 | 81 | # Generate the markdown 82 | markdown = generate_markdown(scores, count) 83 | 84 | # Return the generated markdown
85 | return {"markdown": markdown} 86 | 87 | 88 | if __name__ == '__main__': 89 | """ 90 | Convenience method to test the function outside the AWS cloud 91 | """ 92 | 93 | with open('test.json') as fd: 94 | test_event = json.loads(fd.read()) 95 | test_event['widgetContext']['timeRange']['start'] = (datetime.now() - timedelta(1)).timestamp() * 1000 96 | test_event['widgetContext']['timeRange']['end'] = datetime.now().timestamp() * 1000 97 | response = handler(test_event, None) 98 | print(response) 99 | -------------------------------------------------------------------------------- /assets/func_custom_widget_account_rank/libs_finder.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
18 | # 19 | # Author: Borja Pérez Guasch 20 | 21 | import os 22 | 23 | 24 | def is_aws_env() -> bool: 25 | return 'AWS_LAMBDA_FUNCTION_NAME' in os.environ or 'AWS_EXECUTION_ENV' in os.environ 26 | 27 | 28 | if is_aws_env(): 29 | from helpers import * 30 | from constants import * 31 | from resource_managers import * 32 | else: 33 | from assets.lambda_layer.python.helpers import * 34 | from assets.lambda_layer.python.constants import * 35 | from assets.lambda_layer.python.resource_managers import * 36 | 37 | -------------------------------------------------------------------------------- /assets/func_custom_widget_account_rank/templates/template.md: -------------------------------------------------------------------------------- 1 | # Top $count accounts 2 | 3 | By average Flexibility Score are: 4 | 5 | Account ID | Score 6 | ----|----- 7 | $topAccounts 8 | 9 |   10 | 11 | # Bottom $count accounts 12 | 13 | By average Flexibility Score are: 14 | 15 | Account ID | Score 16 | ----|----- 17 | $bottomAccounts -------------------------------------------------------------------------------- /assets/func_custom_widget_account_rank/test.json: -------------------------------------------------------------------------------- 1 | { 2 | "widgetContext": { 3 | "dashboardName": "Name-of-current-dashboard", 4 | "widgetId": "widget-16", 5 | "locale": "en", 6 | "timezone": { 7 | "label": "UTC", 8 | "offsetISO": "+00:00", 9 | "offsetInMinutes": 0 10 | }, 11 | "period": 300, 12 | "isAutoPeriod": true, 13 | "timeRange": { 14 | "mode": "relative", 15 | "start": 1627282956521, 16 | "end": 1676972055000, 17 | "relativeStart": 86400012, 18 | "zoom": { 19 | "start": 1627276030434, 20 | "end": 1627282956521 21 | } 22 | }, 23 | "theme": "light", 24 | "linkCharts": true, 25 | "title": "Tweets for Amazon website problem", 26 | "forms": { 27 | "all": {} 28 | }, 29 | "params": { 30 | "scoreName": "FlexibilityScore" 31 | }, 32 | "width": 588, 33 | "height": 369 34 | } 35 | } 
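For readers tracing the widget flow: CloudWatch passes the dashboard's `timeRange` as epoch milliseconds (as in the test event above), and the handler converts it to epoch seconds with `// 1000` before calling `fetch_scores`. A minimal sketch of that conversion — `widget_range_to_seconds` is a hypothetical helper name, not part of this repo:

```python
from datetime import datetime, timedelta, timezone


def widget_range_to_seconds(widget_context: dict) -> tuple:
    """Convert a custom widget's timeRange (epoch milliseconds) to epoch seconds."""
    time_range = widget_context['timeRange']
    return time_range['start'] // 1000, time_range['end'] // 1000


# A one-day window expressed in milliseconds, mirroring what the test harness builds
now = datetime.now(timezone.utc)
context = {'timeRange': {
    'start': int((now - timedelta(days=1)).timestamp() * 1000),
    'end': int(now.timestamp() * 1000),
}}
start_s, end_s = widget_range_to_seconds(context)
assert end_s - start_s in (86399, 86400, 86401)  # roughly 24 hours, in seconds
```

Integer floor division is used rather than a float divide so the result stays a whole epoch-second value, which is what the CloudWatch metric query expects.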
-------------------------------------------------------------------------------- /assets/func_custom_widget_accounts_scores/index.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: Function that implements the backend of a CloudWatch custom widget 21 | 22 | import json 23 | 24 | from string import Template 25 | from datetime import datetime, timedelta 26 | from libs_finder import * 27 | 28 | ACCOUNT_LINE_FORMAT = '$id | $vcpuh | $FlexibilityScore | $InstanceDiversificationScore | $LaunchTemplateScore | $PolicyScore | $ScalingScore' 29 | 30 | 31 | def generate_markdown(metrics: dict) -> str: 32 | with open('templates/template.md') as fd: 33 | md = fd.read() 34 | 35 | template = Template(md) 36 | 37 | if not metrics: 38 | return template.substitute( 39 | accounts='*No data*' 40 | ) 41 | else: 42 | line_template = Template(ACCOUNT_LINE_FORMAT) 43 | 44 | return template.substitute( 45 | accounts='\n'.join([ 46 | line_template.substitute(id=a_id, **account_metrics) 47 | for a_id, account_metrics in metrics.items() 48 | ]) 49 | ) 50 | 51 | 52 | def handler(event, context): 53 | time_range = event['widgetContext']['timeRange'] 54 | 55 | # Get the scores 56 | scores = account_manager.fetch_scores(start_time=time_range['start'] // 1000, 57 | end_time=time_range['end'] // 1000) 58 | 59 | # Generate the markdown 60 | markdown = generate_markdown(scores) 61 | 62 | # Return the HTML minified 63 | return {"markdown": markdown} 64 | 65 | 66 | if __name__ == '__main__': 67 | """ 68 | Convenience method to test the function outside the AWS cloud 69 | """ 70 | 71 | with open('test.json') as fd: 72 | test_event = json.loads(fd.read()) 73 | test_event['widgetContext']['timeRange']['start'] = (datetime.now() - timedelta(1)).timestamp() * 1000 74 | test_event['widgetContext']['timeRange']['end'] = datetime.now().timestamp() * 1000 75 | response = handler(test_event, None) 76 | print(response) 77 | -------------------------------------------------------------------------------- /assets/func_custom_widget_accounts_scores/libs_finder.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | 21 | import os 22 | 23 | 24 | def is_aws_env() -> bool: 25 | return 'AWS_LAMBDA_FUNCTION_NAME' in os.environ or 'AWS_EXECUTION_ENV' in os.environ 26 | 27 | 28 | if is_aws_env(): 29 | from helpers import * 30 | from constants import * 31 | from resource_managers import * 32 | else: 33 | from assets.lambda_layer.python.helpers import * 34 | from assets.lambda_layer.python.constants import * 35 | from assets.lambda_layer.python.resource_managers import * 36 | -------------------------------------------------------------------------------- /assets/func_custom_widget_accounts_scores/templates/template.md: -------------------------------------------------------------------------------- 1 | # Account view: Components 2 | 3 |   4 | 5 | Account ID | vCPU hours | Flexibility Score | Instance Diversification Score | Launch Template Score | Policy Score | Scaling Score 6 | ----|------------|-----|-----|-----|-----|----- 7 | $accounts -------------------------------------------------------------------------------- /assets/func_custom_widget_accounts_scores/test.json: -------------------------------------------------------------------------------- 1 | { 2 | "widgetContext": { 3 | "dashboardName": "Name-of-current-dashboard", 4 | "widgetId": "widget-16", 5 | "locale": "en", 6 | "timezone": { 7 | "label": "UTC", 8 | "offsetISO": "+00:00", 9 | "offsetInMinutes": 0 10 | }, 11 | "period": 300, 12 | "isAutoPeriod": true, 13 | "timeRange": { 14 | "mode": "relative", 15 | "start": 1627282956521, 16 | "end": 1676972055000, 17 | "relativeStart": 86400012, 18 | "zoom": { 19 | "start": 1627276030434, 20 | "end": 1627282956521 21 | } 22 | }, 23 | "theme": "light", 24 | "linkCharts": true, 25 | "title": "Tweets for Amazon website problem", 26 | "forms": { 27 | "all": {} 28 | }, 29 | "params": { 30 | "scoreName": "FlexibilityScore" 31 | }, 32 | "width": 588, 33 | "height": 369 34 | } 35 | } 
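The accounts-scores widget above builds each markdown table row with `string.Template` substitution, injecting the account id alongside the metrics dict. A minimal, self-contained sketch of that pattern — the row format here is a trimmed, hypothetical two-metric version of the real `ACCOUNT_LINE_FORMAT`, and the account values are made up:

```python
from string import Template

# Hypothetical, trimmed-down row template (the real one carries all five score columns)
ACCOUNT_LINE_FORMAT = '$id | $vcpuh | $FlexibilityScore'

line_template = Template(ACCOUNT_LINE_FORMAT)
metrics = {'111122223333': {'vcpuh': 240, 'FlexibilityScore': 7.5}}

# substitute() fills $-placeholders from keyword args; **account_metrics supplies
# the metric names, and the explicit id= kwarg supplies the account column
rows = '\n'.join(
    line_template.substitute(id=a_id, **account_metrics)
    for a_id, account_metrics in metrics.items()
)
assert rows == '111122223333 | 240 | 7.5'
```

Because `Template.substitute` raises `KeyError` on any missing placeholder, a row only renders when the account's metrics dict contains every column the template names.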
-------------------------------------------------------------------------------- /assets/func_custom_widget_org_score/htm_templates/gauge.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 110 | 111 | 112 |
113 |
114 |
115 |
$score
116 |
117 |
118 |
119 |

$min

120 | 121 |

$max

122 |
123 |
124 | 125 | -------------------------------------------------------------------------------- /assets/func_custom_widget_org_score/htm_templates/large_score.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 32 | 33 | 34 | 35 |
$score
36 | 37 | -------------------------------------------------------------------------------- /assets/func_custom_widget_org_score/index.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: Function that implements the backend of a CloudWatch custom widget 21 | 22 | import json 23 | 24 | from datetime import datetime, timedelta 25 | from string import Template 26 | from libs_finder import * 27 | 28 | GAUGE_COLORS = { 29 | 'green': '#31d90f', 30 | 'yellow': '#fae505', 31 | 'orange': '#fa9c05', 32 | 'red': '#fa0505', 33 | 'theme': { 34 | 'light': { 35 | 'gaugeBackground': '#ddd', 36 | 'gaugeInner': '#fff', 37 | 'textColor': '#000' 38 | }, 39 | 'dark': { 40 | 'gaugeBackground': '#ddd', 41 | 'gaugeInner': '#000', 42 | 'textColor': '#fff' 43 | } 44 | } 45 | } 46 | 47 | 48 | def get_colors_by_theme_and_score(theme: str, score) -> dict: 49 | theme_colors = GAUGE_COLORS['theme'][theme] 50 | 51 | if score is None: 52 | fill_color = theme_colors['textColor'] 53 | elif score < 3: 54 | fill_color = GAUGE_COLORS['red'] 55 | elif score < 5: 56 | fill_color = GAUGE_COLORS['orange'] 57 | elif score < 7: 58 | fill_color = GAUGE_COLORS['yellow'] 59 | else: 60 | fill_color = GAUGE_COLORS['green'] 61 | 62 | theme_colors['gaugeFill'] = fill_color 63 | 64 | return theme_colors 65 | 66 | 67 | def generate_gauge_html(colors: dict, score) -> str: 68 | with open('htm_templates/gauge.html') as fd: 69 | html = fd.read() 70 | 71 | template = Template(html) 72 | 73 | if score is None: 74 | gauge_fill_deg = 0 75 | else: 76 | gauge_fill_deg = score * 20 if score < 1 else 180 / (10 / score) 77 | 78 | if score is None: 79 | score = '--' 80 | 81 | return template.substitute( 82 | gaugeBackground=colors['gaugeBackground'], 83 | gaugeInner=colors['gaugeInner'], 84 | gaugeFill=colors['gaugeFill'], 85 | textColor=colors['textColor'], 86 | score=score, 87 | deg=f'{gauge_fill_deg}deg', 88 | min=0, 89 | max=10 90 | ) 91 | 92 | 93 | def generate_large_score_html(colors: dict, score) -> str: 94 | with open('htm_templates/large_score.html') as fd: 95 | html = fd.read() 96 | 97 | template = Template(html) 98 | 99 | if score is None: 100 | 
score = EMPTY_VALUE 101 | 102 | return template.substitute( 103 | textColor=colors['gaugeFill'], 104 | score=score 105 | ) 106 | 107 | 108 | def extract_score_name_param(widgetContext: dict) -> str: 109 | try: 110 | return widgetContext['params']['scoreName'] 111 | except KeyError: 112 | return SCORE_FLEXIBILITY 113 | 114 | 115 | def handler(event, context): 116 | time_range = event['widgetContext']['timeRange'] 117 | 118 | # Extract the name of the score to fetch 119 | score_name = extract_score_name_param(event['widgetContext']) 120 | 121 | # Fetch the score 122 | score = org_manager.fetch_scores(start_time=time_range['start'] // 1000, 123 | end_time=time_range['end'] // 1000)[score_name] 124 | 125 | # Configure the colors of the chart 126 | colors = get_colors_by_theme_and_score(event['widgetContext']['theme'], score) 127 | 128 | # Generate the HTML 129 | html = generate_large_score_html(colors, score) if score_name == SCORE_FLEXIBILITY else \ 130 | generate_gauge_html(colors, score) 131 | 132 | # Return the HTML minified 133 | return html.replace('\n', '') 134 | 135 | 136 | if __name__ == '__main__': 137 | """ 138 | Convenience method to test the function outside the AWS cloud 139 | """ 140 | 141 | with open('test.json') as fd: 142 | test_event = json.loads(fd.read()) 143 | test_event['widgetContext']['timeRange']['start'] = (datetime.now() - timedelta(1)).timestamp() * 1000 144 | test_event['widgetContext']['timeRange']['end'] = datetime.now().timestamp() * 1000 145 | response = handler(test_event, None) 146 | print(response) 147 | -------------------------------------------------------------------------------- /assets/func_custom_widget_org_score/libs_finder.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | 21 | import os 22 | 23 | 24 | def is_aws_env() -> bool: 25 | return 'AWS_LAMBDA_FUNCTION_NAME' in os.environ or 'AWS_EXECUTION_ENV' in os.environ 26 | 27 | 28 | if is_aws_env(): 29 | from helpers import * 30 | from constants import * 31 | from resource_managers import * 32 | else: 33 | from assets.lambda_layer.python.helpers import * 34 | from assets.lambda_layer.python.constants import * 35 | from assets.lambda_layer.python.resource_managers import * 36 | -------------------------------------------------------------------------------- /assets/func_custom_widget_org_score/test.json: -------------------------------------------------------------------------------- 1 | { 2 | "widgetContext": { 3 | "dashboardName": "Name-of-current-dashboard", 4 | "widgetId": "widget-16", 5 | "locale": "en", 6 | "timezone": { 7 | "label": "UTC", 8 | "offsetISO": "+00:00", 9 | "offsetInMinutes": 0 10 | }, 11 | "period": 300, 12 | "isAutoPeriod": true, 13 | "timeRange": { 14 | "mode": "relative", 15 | "start": 1627282956521, 16 | "end": 1676972055000, 17 | "relativeStart": 86400012, 18 | "zoom": { 19 | "start": 1627276030434, 20 | "end": 1627282956521 21 | } 22 | }, 23 | "theme": "light", 24 | "linkCharts": true, 25 | "title": "Tweets for Amazon website problem", 26 | "forms": { 27 | "all": {} 28 | }, 29 | "params": { 30 | "scoreName": "FlexibilityScore" 31 | }, 32 | "width": 588, 33 | "height": 369 34 | } 35 | } -------------------------------------------------------------------------------- /assets/lambda_layer/python/constants.py: -------------------------------------------------------------------------------- 1 | MAX_CT_RELATIVE_DAYS_SEARCH = 90 2 | 3 | DEFAULT = '$Default' 4 | EMPTY_VALUE = '--' 5 | 6 | # CloudTrail resource types 7 | RESOURCE_TYPE_INSTANCE = 'AWS::EC2::Instance' 8 | RESOURCE_TYPE_LT = 'AWS::EC2::LaunchTemplate' 9 | RESOURCE_TYPE_ASG = 'AWS::AutoScaling::AutoScalingGroup' 10 | RESOURCE_TYPE_SP = 
'AWS::AutoScaling::ScalingPolicy' 11 | 12 | # EC2 Instance Lifecycle event CloudTrail codes 13 | EVENT_CODE_RUNNING = 16 14 | EVENT_CODE_PENDING = 0 15 | 16 | # EC2 Instance market options 17 | MARKET_SPOT = 'spot' 18 | 19 | # EC2 Instance CloudTrail event names 20 | EVENT_NAME_BID_EVICTED = 'BidEvictedEvent' 21 | EVENT_NAME_TERMINATE_INSTANCES = 'TerminateInstances' 22 | EVENT_NAME_RUN_INSTANCES = 'RunInstances' 23 | EVENT_NAME_START_INSTANCES = 'StartInstances' 24 | EVENT_NAME_STOP_INSTANCES = 'StopInstances' 25 | ALL_INSTANCE_EVENT_NAMES = {EVENT_NAME_BID_EVICTED, EVENT_NAME_TERMINATE_INSTANCES, EVENT_NAME_RUN_INSTANCES, 26 | EVENT_NAME_START_INSTANCES, EVENT_NAME_STOP_INSTANCES} 27 | 28 | # Launch Template CloudTrail event names 29 | EVENT_NAME_CREATE_LT = 'CreateLaunchTemplate' 30 | EVENT_NAME_CREATE_LT_VERSION = 'CreateLaunchTemplateVersion' 31 | EVENT_NAME_MODIFY_LT = 'ModifyLaunchTemplate' 32 | ALL_LT_EVENT_NAMES = {EVENT_NAME_CREATE_LT, EVENT_NAME_CREATE_LT_VERSION, EVENT_NAME_MODIFY_LT} 33 | 34 | # ASG CloudTrail event names 35 | EVENT_NAME_CREATE_ASG = 'CreateAutoScalingGroup' 36 | EVENT_NAME_DELETE_ASG = 'DeleteAutoScalingGroup' 37 | EVENT_NAME_UPDATE_ASG = 'UpdateAutoScalingGroup' 38 | ALL_ASG_EVENT_NAMES = {EVENT_NAME_CREATE_ASG, EVENT_NAME_DELETE_ASG, EVENT_NAME_UPDATE_ASG} 39 | 40 | # Scaling Policy CloudTrail event names 41 | EVENT_NAME_PUT_SP = 'PutScalingPolicy' 42 | EVENT_NAME_DELETE_SP = 'DeletePolicy' 43 | ALL_SP_EVENT_NAMES = {EVENT_NAME_PUT_SP, EVENT_NAME_DELETE_SP} 44 | 45 | # Flex score's score component names 46 | SCORE_DIVERSIFICATION = 'InstanceDiversificationScore' 47 | SCORE_LT = 'LaunchTemplateScore' 48 | SCORE_POLICY = 'PolicyScore' 49 | SCORE_SCALING = 'ScalingScore' 50 | SCORE_FLEXIBILITY = 'FlexibilityScore' 51 | 52 | ALL_SCORES = { 53 | SCORE_DIVERSIFICATION: 0.25, 54 | SCORE_LT: 0.25, 55 | SCORE_POLICY: 0.15, 56 | SCORE_SCALING: 0.35 57 | } 58 | 59 | INSTANCE_NORMALIZATION_TABLE = { 60 | 'nano': 0.25, 61 | 'micro': 0.5, 62 | 
'small': 1, 63 | 'medium': 2, 64 | 'large': 4, 65 | 'xlarge': 8, 66 | '2xlarge': 16, 67 | '3xlarge': 24, 68 | '4xlarge': 32, 69 | '6xlarge': 48, 70 | '8xlarge': 64, 71 | '9xlarge': 72, 72 | '10xlarge': 80, 73 | '12xlarge': 96, 74 | '16xlarge': 128, 75 | '18xlarge': 144, 76 | '24xlarge': 192, 77 | '32xlarge': 256, 78 | '56xlarge': 448, 79 | '112xlarge': 896 80 | } 81 | 82 | CW_NAMESPACE = 'FlexibilityScore' 83 | CW_DIMENSION_NAME_ACCOUNT_ID = 'accountId' 84 | CW_DIMENSION_VALUE_ORG = 'ORG' 85 | CW_METRIC_NAME_VCPU_H = 'vcpuh' 86 | CW_METRIC_PERIOD = 3600 87 | ALL_CW_METRICS = {SCORE_DIVERSIFICATION, SCORE_LT, SCORE_POLICY, SCORE_SCALING, CW_METRIC_NAME_VCPU_H} 88 | 89 | BUCKET_METRICS = 'BUCKET_METRICS' 90 | 91 | FUNC_ORG_SCORE = 'FUNC_ORG_SCORE' 92 | FUNC_ACCOUNT_RANK = 'FUNC_ACCOUNT_RANK' 93 | FUNC_ACCOUNTS_SCORES = 'FUNC_ACCOUNTS_SCORES' 94 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/helpers/__init__.py: -------------------------------------------------------------------------------- 1 | from . import cloudtrail_helpers, date_helpers, organizations_helpers, ec2_helpers, sts_helpers, s3_helpers, cloudwatch_helpers 2 | from .cloudtrail_helpers import InvalidEvent 3 | from .date_helpers import TimeWindow 4 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/helpers/cloudtrail_helpers.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Author: Borja Pérez Guasch 3 | # License: Apache 2.0 4 | # Summary: module with helper methods to work with CloudTrail 5 | 6 | import boto3 7 | import json 8 | 9 | 10 | class InvalidEvent(Exception): 11 | pass 12 | 13 | 14 | def extract_event_payload(event: dict) -> dict: 15 | """ 16 | Extracts and returns the payload of a CloudTrail event. Raises a InvalidEvent exception if there's an error 17 | associated to the event. 
18 | 19 | :param event: event of which to extract its payload 20 | 21 | :return: payload of the CloudTrail event 22 | """ 23 | 24 | cloudtrail_event = json.loads(event['CloudTrailEvent']) 25 | 26 | # Discard the event if it failed, as it did not incur an Instance state change 27 | if 'errorCode' in cloudtrail_event: 28 | raise InvalidEvent() 29 | 30 | return cloudtrail_event 31 | 32 | 33 | def build_events_search_expression(events: [str]) -> str: 34 | """ 35 | Builds a search expression that returns only events with the names included in the events argument 36 | 37 | :param events: list of event names to include in the search expression 38 | 39 | :return: search expression 40 | """ 41 | 42 | events = ['EventName == `{}`'.format(event) for event in events] 43 | 44 | return 'Events[?{}][]'.format(' || '.join(events)) 45 | 46 | 47 | def fetch_events_by_resource_type(resource_type: str, region: str, search_exp: str = 'Events[]', credentials=None, 48 | **kwargs) -> [dict]: 49 | """ 50 | Convenience method that uses boto3 to fetch CloudTrail events of a given resource type 51 | 52 | :param credentials: credentials to perform the operation 53 | :param resource_type: resource type of which to fetch the events 54 | :param region: region in which to operate 55 | :param search_exp: expression used to filter the results 56 | :param kwargs: additional keyword arguments, used in the API call 57 | 58 | :return: list of CloudTrail events 59 | """ 60 | 61 | if credentials is None: 62 | credentials = {} 63 | 64 | client = boto3.client('cloudtrail', region_name=region, **credentials) 65 | paginator = client.get_paginator('lookup_events') 66 | 67 | kwargs.update({ 68 | 'LookupAttributes': [ 69 | { 70 | 'AttributeKey': 'ResourceType', 71 | 'AttributeValue': resource_type 72 | } 73 | ] 74 | }) 75 | 76 | page_iterator = paginator.paginate(**kwargs) 77 | 78 | return list(page_iterator.search(search_exp)) 79 | 80 | 81 | def fetch_events_by_event_name(event_name: str, region: str, 
search_exp: str = 'Events[]', credentials: dict = None, 82 | **kwargs) -> [dict]: 83 | """ 84 | Convenience method that uses boto3 to fetch CloudTrail events with a given name 85 | 86 | :param credentials: credentials to perform the operation 87 | :param event_name: name of the event to fetch 88 | :param region: region in which to operate 89 | :param search_exp: expression used to filter the results 90 | :param kwargs: additional keyword arguments, used in the API call 91 | 92 | :return: list of CloudTrail events 93 | """ 94 | 95 | if credentials is None: 96 | credentials = {} 97 | 98 | client = boto3.client('cloudtrail', region_name=region, **credentials) 99 | paginator = client.get_paginator('lookup_events') 100 | 101 | kwargs.update({ 102 | 'LookupAttributes': [ 103 | { 104 | 'AttributeKey': 'EventName', 105 | 'AttributeValue': event_name 106 | } 107 | ] 108 | }) 109 | 110 | page_iterator = paginator.paginate(**kwargs) 111 | 112 | return list(page_iterator.search(search_exp)) 113 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/helpers/cloudwatch_helpers.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | 4 | def put_metric_data(namespace: str, data: [dict]) -> dict: 5 | client = boto3.client('cloudwatch') 6 | return client.put_metric_data(Namespace=namespace, MetricData=data) 7 | 8 | 9 | def get_metric_data(start_time, end_time, queries) -> dict: 10 | params = { 11 | 'MetricDataQueries': queries, 12 | 'StartTime': start_time, 13 | 'EndTime': end_time, 14 | 'ScanBy': 'TimestampDescending' 15 | } 16 | 17 | client = boto3.client('cloudwatch') 18 | return client.get_metric_data(**params) 19 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/helpers/date_helpers.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # Author: Borja 
Pérez Guasch 3 | # License: Apache 2.0 4 | # Summary: module with helper methods to work with dates 5 | 6 | from datetime import datetime 7 | 8 | 9 | def get_timezone(): 10 | """ 11 | Returns information about the current time zone 12 | 13 | :return: current timezone 14 | """ 15 | 16 | return datetime.now().astimezone().tzinfo 17 | 18 | 19 | def datetime_to_str(date_time: datetime = datetime.now(), datetime_format: str = '%Y-%m-%d') -> str: 20 | return date_time.strftime(datetime_format) 21 | 22 | 23 | class TimeWindow: 24 | """ 25 | Class that represents a time window with a start and an end 26 | """ 27 | 28 | def __init__(self, start_time, end_time): 29 | self.start_time = start_time 30 | self.end_time = end_time 31 | 32 | def before(self, obj) -> bool: 33 | if type(obj) is datetime: 34 | return self.end_time < obj 35 | 36 | return self.end_time < obj.start_time 37 | 38 | def contains(self, obj) -> bool: 39 | if type(obj) is datetime: 40 | return self.start_time <= obj <= self.end_time 41 | 42 | return self.start_time <= obj.start_time and self.end_time >= obj.end_time 43 | 44 | def after(self, obj) -> bool: 45 | if type(obj) is datetime: 46 | return self.start_time > obj 47 | 48 | return self.start_time > obj.end_time 49 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/helpers/ec2_helpers.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | 4 | def describe_regions(account_id, credentials=None) -> [str]: 5 | print(f'({account_id}) getting the list of enabled regions...') 6 | 7 | if credentials is None: 8 | credentials = {} 9 | 10 | client = boto3.client('ec2', **credentials) 11 | response = client.describe_regions(AllRegions=False)['Regions'] 12 | 13 | return [region['RegionName'] for region in response] 14 | -------------------------------------------------------------------------------- 
/assets/lambda_layer/python/helpers/organizations_helpers.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | 4 | def describe_organization() -> dict: 5 | client = boto3.client('organizations') 6 | return client.describe_organization()['Organization'] 7 | 8 | 9 | def list_organization_accounts() -> [dict]: 10 | client = boto3.client('organizations') 11 | paginator = client.get_paginator('list_accounts') 12 | return list(paginator.paginate().search('Accounts[]')) 13 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/helpers/s3_helpers.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | 4 | def upload_file_contents(bucket: str, key: str, contents: str) -> dict: 5 | """ 6 | Uploads a file to S3 with the contents received as an argument 7 | 8 | :param bucket: S3 bucket to upload the file to 9 | :param key: full path in the bucket where to upload the file 10 | :param contents: body of the file to upload 11 | 12 | :return: operation response 13 | """ 14 | 15 | client = boto3.client('s3') 16 | return client.put_object(Bucket=bucket, Key=key, Body=contents.encode('utf-8')) 17 | 18 | 19 | def retrieve_file_contents(bucket: str, key: str) -> str: 20 | """ 21 | Gets a file from S3 and reads all its contents 22 | 23 | :param bucket: S3 bucket to get the file from 24 | :param key: full path in the bucket of the file to download 25 | 26 | :return: string with all the contents of the file 27 | """ 28 | 29 | client = boto3.client('s3') 30 | response = client.get_object(Bucket=bucket, Key=key) 31 | 32 | return response['Body'].read().decode('utf-8') 33 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/helpers/sts_helpers.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | 4 | def 
get_caller_identity() -> dict: 5 | client = boto3.client('sts') 6 | return client.get_caller_identity() 7 | 8 | 9 | def assume_role(role_arn: str) -> dict: 10 | client = boto3.client('sts') 11 | 12 | response = client.assume_role( 13 | RoleArn=role_arn, 14 | RoleSessionName="RoleAssume" 15 | ) 16 | 17 | return { 18 | 'aws_access_key_id': response["Credentials"]["AccessKeyId"], 19 | 'aws_secret_access_key': response["Credentials"]["SecretAccessKey"], 20 | 'aws_session_token': response["Credentials"]["SessionToken"], 21 | } 22 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resource_managers/__init__.py: -------------------------------------------------------------------------------- 1 | from .instance_manager import InstanceManager 2 | from .asg_manager import ASGManager 3 | from .launch_template_manager import LaunchTemplateManager 4 | from . import account_manager 5 | from . import org_manager 6 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resource_managers/account_manager.py: -------------------------------------------------------------------------------- 1 | import uuid 2 | 3 | from threading import Thread 4 | from botocore.exceptions import ClientError 5 | from .libs_finder import * 6 | from .instance_manager import InstanceManager 7 | from .launch_template_manager import LaunchTemplateManager 8 | from .asg_manager import ASGManager 9 | 10 | _ORG_NOT_USED_EXCEPTION = 'AWSOrganizationsNotInUseException' 11 | _ROUND_DECIMALS = 2 12 | 13 | accounts = {} 14 | 15 | 16 | class AccountResourcesFetcher(Thread): 17 | _ROLE_ARN_FORMAT = 'arn:aws:iam::{}:role/{}' 18 | 19 | def __init__(self, account: Account, tw: TimeWindow): 20 | super().__init__(name=account.id) 21 | self.tw = tw 22 | self.account = account 23 | 24 | def _get_credentials(self) -> dict: 25 | """ 26 | If the account is not the main in the organization, assume a role in the 
member account to make requests 27 | 28 | :return: dictionary with credentials to perform requests in the account 29 | """ 30 | 31 | role_name = os.environ['ORGS_IAM_ROLE'] 32 | 33 | return None if self.account.is_main else sts_helpers.assume_role(self._ROLE_ARN_FORMAT.format(self.account.id, 34 | role_name)) 35 | 36 | def run(self): 37 | # Get temporary credentials to access AWS resources 38 | credentials = self._get_credentials() 39 | 40 | # Get the list of enabled regions in the account 41 | regions = ec2_helpers.describe_regions(self.account.id, credentials) 42 | 43 | print(f'({self.name}) fetching resources...') 44 | 45 | instances = {} 46 | lt = {} 47 | asg = {} 48 | 49 | # Fetch resources in all regions 50 | for region in regions: 51 | instances.update(InstanceManager(self.tw, region, credentials).fetch_resources()) 52 | lt.update(LaunchTemplateManager(self.tw, region, credentials).fetch_resources()) 53 | asg.update(ASGManager(self.tw, region, credentials).fetch_resources()) 54 | 55 | self.account.set_resources(instances, lt, asg) 56 | 57 | print(f'({self.name}) done fetching.') 58 | 59 | 60 | def _fetch_accounts(): 61 | global accounts 62 | 63 | print('Getting the list of accounts in the organization...') 64 | 65 | try: 66 | account_data = organizations_helpers.list_organization_accounts() 67 | main_account_id = organizations_helpers.describe_organization()['MasterAccountId'] 68 | 69 | # Add an extra field indicating whether the account is the main in the organization 70 | for data in account_data: 71 | data['isMain'] = data['Id'] == main_account_id 72 | except ClientError as e: 73 | # Organizations not enabled, treat the account as an organization without invited accounts 74 | if e.response['Error']['Code'] == _ORG_NOT_USED_EXCEPTION: 75 | account_data = [{'Id': sts_helpers.get_caller_identity()['Account'], 'isMain': True}] 76 | else: 77 | raise Exception(f'Error getting the list of accounts: {e}') 78 | 79 | accounts = {data['Id']: Account(data) for data 
in account_data} 80 | 81 | 82 | def fetch_resources(tw: TimeWindow): 83 | threads = [] 84 | 85 | # Fetch account resources in parallel 86 | for _, account in accounts.items(): 87 | threads.append(AccountResourcesFetcher(account, tw)) 88 | threads[-1].start() 89 | 90 | # Wait until all the threads have executed 91 | for thread in threads: 92 | thread.join() 93 | 94 | 95 | def _weight_scores(metrics: dict) -> dict: 96 | """ 97 | Weights the score values from different days to return a single value per score in the time range 98 | 99 | :param metrics: metric values per account for a list of days in the format: 100 | 101 | account_id_1: 102 | metric_name_1: 103 | day_1: value 104 | day_2: value 105 | metric_name_2: 106 | day_1: value 107 | day_2: value 108 | 109 | :return: dictionary with weighted score values in the format: 110 | 111 | account_id_1: 112 | metric_name_1: value 113 | metric_name_2: value 114 | account_id_2: 115 | metric_name_1: value 116 | metric_name_2: value 117 | """ 118 | 119 | weighted_scores = {a_id: {} for a_id in metrics} 120 | 121 | for a_id in metrics: 122 | # First calculate the total vCPU hours in the time window 123 | total_vcpu_h = int(sum([value for day, value in metrics[a_id][CW_METRIC_NAME_VCPU_H].items()])) 124 | 125 | # Calculate the wighted component values adding the products between the daily vCPU hours and component 126 | # and dividing by the total vCPU hours 127 | weighted_scores[a_id] = { 128 | metric_name: round(sum([ 129 | metrics[a_id][metric_name][day] * 130 | metrics[a_id][CW_METRIC_NAME_VCPU_H][day] 131 | 132 | # Ensure that there are metric values on that day for both the vCPU hours and the metric being calculated 133 | for day in metrics[a_id][CW_METRIC_NAME_VCPU_H] if day in metrics[a_id][metric_name] 134 | ]) / total_vcpu_h, _ROUND_DECIMALS) if total_vcpu_h != 0 else 0 135 | 136 | for metric_name in ALL_CW_METRICS - {CW_METRIC_NAME_VCPU_H} 137 | } 138 | 139 | # Add the total vCPU hours to the account metrics 140 | 
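        # (Worked example of the weighting above, with illustrative numbers:
        # given daily vCPU hours {d1: 10, d2: 30} and a daily ScalingScore of
        # {d1: 8, d2: 4}, the weighted score is (8*10 + 4*30) / 40 = 5.0, i.e.
        # days with more vCPU hours pull the score toward their value.)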
weighted_scores[a_id][CW_METRIC_NAME_VCPU_H] = total_vcpu_h 141 | 142 | # Calculate the Flexibility Score 143 | weighted_scores[a_id][SCORE_FLEXIBILITY] = round(sum([ 144 | weighted_scores[a_id][score_name] * score_weight 145 | for score_name, score_weight in ALL_SCORES.items() if score_name in weighted_scores[a_id] 146 | ]), _ROUND_DECIMALS) 147 | 148 | return weighted_scores 149 | 150 | 151 | def fetch_scores(start_time, end_time, account_ids=None, metric_names=None) -> dict: 152 | """ 153 | Retrieves account metrics from CloudWatch and calculates their weighted values in the time frame 154 | 155 | :param start_time: start of the time window to get metrics 156 | :param end_time: end of the time window to get metrics 157 | :param account_ids: list of account ids of which to get metrics 158 | :param metric_names: list of metrics to get 159 | 160 | :return: dictionary containing the weighted metrics in the time frame in the following format: 161 | 162 | account_id_1: 163 | metric_name_1: value 164 | metric_name_2: value 165 | account_id_2: 166 | metric_name_1: value 167 | metric_name_2: value 168 | """ 169 | 170 | # If an id is not specified, get the metrics for all accounts 171 | if account_ids is None: 172 | account_ids = list(accounts.keys()) 173 | 174 | # If metrics are not specified, get all the metrics 175 | if metric_names is None: 176 | metric_names = ALL_CW_METRICS 177 | else: 178 | # Ensure we retrieve the vCPU hours, since it's needed for the weighted calculations 179 | metric_names.append(CW_METRIC_NAME_VCPU_H) 180 | metric_names = set(metric_names) 181 | 182 | queries = [ 183 | { 184 | 'Id': f'a_{str(uuid.uuid4()).replace("-", "_")}', 185 | 'Label': f'{a_id}_{metric}', 186 | 'MetricStat': { 187 | 'Metric': { 188 | 'Namespace': CW_NAMESPACE, 189 | 'MetricName': metric, 190 | 'Dimensions': [ 191 | { 192 | 'Name': CW_DIMENSION_NAME_ACCOUNT_ID, 193 | 'Value': a_id 194 | }, 195 | ] 196 | }, 197 | 'Period': CW_METRIC_PERIOD, 198 | 'Stat': 'Average', 199 | } 
200 | } 201 | 202 | for a_id in account_ids for metric in ALL_CW_METRICS 203 | ] 204 | 205 | response = cloudwatch_helpers.get_metric_data(start_time, end_time, queries) 206 | metrics = {a_id: {name: {} for name in metric_names} for a_id in account_ids} 207 | 208 | # Generate a dictionary with the following format: 209 | # account_id_1: 210 | # metric_name_1: 211 | # day_1: value 212 | # day_2: value 213 | # metric_name_2: 214 | # day_1: value 215 | # day_2: value 216 | 217 | for result in response['MetricDataResults']: 218 | account_id = result['Label'].split('_')[0] 219 | metric_name = result['Label'].split('_')[1] 220 | 221 | # We have requested the most recent data points, so these come sorted by date in ascending mode. 222 | # Traverse the results in reverse so that the last dictionary update keeps the most recent metric value 223 | for i in range(len(result['Timestamps'])): 224 | # Generate a string with the date of the value in the format %Y-%m-%d 225 | date_key = date_helpers.datetime_to_str(result['Timestamps'][-i - 1]) 226 | metrics[account_id][metric_name][date_key] = result['Values'][-i - 1] 227 | 228 | return _weight_scores(metrics) 229 | 230 | 231 | _fetch_accounts() 232 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resource_managers/asg_manager.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime, timedelta 2 | from .resource_manager import ResourceManager 3 | from .libs_finder import * 4 | 5 | 6 | class ASGManager(ResourceManager): 7 | def _fetch_cloud_trail_events(self) -> [dict]: 8 | events = cloudtrail_helpers.fetch_events_by_resource_type( 9 | RESOURCE_TYPE_ASG, 10 | self.region, 11 | cloudtrail_helpers.build_events_search_expression(ALL_ASG_EVENT_NAMES), 12 | credentials=self.credentials, 13 | StartTime=datetime.now() - timedelta(MAX_CT_RELATIVE_DAYS_SEARCH), 14 | EndTime=self.tw.end_time 15 | ) 16 | 17 | # Sort all 
the events in ascending order by event time 18 | return sorted(events, key=lambda e: e['EventTime']) 19 | 20 | def _extract_resources_data_from_events(self, events: [dict]) -> dict[str: object]: 21 | resources = {} 22 | 23 | for event_data in events: 24 | try: 25 | event_payload = cloudtrail_helpers.extract_event_payload(event_data) 26 | except InvalidEvent: 27 | continue 28 | 29 | asg_name = event_data['Resources'][0]['ResourceName'] 30 | 31 | if asg_name not in resources: 32 | resources[asg_name] = ASG(event_data, event_payload) 33 | else: 34 | resources[asg_name].hydrate(event_data, event_payload) 35 | 36 | return resources 37 | 38 | def _fetch_sp_cloud_trail_events(self) -> [dict]: 39 | events = cloudtrail_helpers.fetch_events_by_resource_type( 40 | RESOURCE_TYPE_SP, 41 | self.region, 42 | cloudtrail_helpers.build_events_search_expression(ALL_SP_EVENT_NAMES), 43 | credentials=self.credentials, 44 | StartTime=datetime.now() - timedelta(MAX_CT_RELATIVE_DAYS_SEARCH), 45 | EndTime=self.tw.end_time 46 | ) 47 | 48 | # Sort all the events in ascending order by event time 49 | return sorted(events, key=lambda e: e['EventTime']) 50 | 51 | def _extract_sp_data_from_events(self, events: [dict]) -> dict[str: str]: 52 | sp = {} 53 | 54 | for event_data in events: 55 | try: 56 | event_payload = cloudtrail_helpers.extract_event_payload(event_data) 57 | except InvalidEvent: 58 | continue 59 | 60 | params = event_payload['requestParameters'] 61 | 62 | if event_data['EventName'] == EVENT_NAME_DELETE_SP: 63 | sp[params['autoScalingGroupName']] = None 64 | else: 65 | sp[params['autoScalingGroupName']] = params['policyType'] 66 | 67 | return sp 68 | 69 | def fetch_resources(self) -> dict[str: object]: 70 | events = self._fetch_cloud_trail_events() 71 | asg_data = self._extract_resources_data_from_events(events) 72 | 73 | # Discard ASGs deleted before the time window 74 | asg_data = { 75 | name: asg for name, asg in asg_data.items() 76 | if asg.deleted_at is None or 
self.tw.contains(asg.deleted_at) 77 | } 78 | 79 | # Fetch scaling policies events 80 | events = self._fetch_sp_cloud_trail_events() 81 | sps = self._extract_sp_data_from_events(events) 82 | 83 | # Assign each ASG its scaling policy 84 | for asg_name, sp in sps.items(): 85 | if asg_name in asg_data: 86 | asg_data[asg_name].scaling_policy = sp 87 | 88 | self.resources = asg_data 89 | 90 | return asg_data 91 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resource_managers/instance_manager.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime, timedelta 2 | from .resource_manager import ResourceManager 3 | from .libs_finder import * 4 | 5 | 6 | class InstanceManager(ResourceManager): 7 | def _fetch_cloud_trail_events(self) -> [dict]: 8 | # Lookup EC2 Instance events and keep only Run, Stop, Start and Terminate 9 | events = cloudtrail_helpers.fetch_events_by_resource_type( 10 | RESOURCE_TYPE_INSTANCE, 11 | self.region, 12 | cloudtrail_helpers.build_events_search_expression(ALL_INSTANCE_EVENT_NAMES), 13 | credentials=self.credentials, 14 | StartTime=datetime.now() - timedelta(MAX_CT_RELATIVE_DAYS_SEARCH), 15 | EndTime=self.tw.end_time 16 | ) 17 | 18 | # Lookup Spot termination events (those are not included in the EC2 Instance resource type based search) 19 | events += cloudtrail_helpers.fetch_events_by_event_name( 20 | EVENT_NAME_BID_EVICTED, 21 | self.region, 22 | credentials=self.credentials, 23 | StartTime=datetime.now() - timedelta(MAX_CT_RELATIVE_DAYS_SEARCH), 24 | EndTime=self.tw.end_time 25 | ) 26 | 27 | # Sort all the events in ascending order by event time 28 | return sorted(events, key=lambda e: e['EventTime']) 29 | 30 | def _extract_resources_data_from_events(self, events: [dict]) -> dict[str: object]: 31 | instances = {} 32 | 33 | for event_data in events: 34 | try: 35 | event_payload = 
cloudtrail_helpers.extract_event_payload(event_data) 36 | except InvalidEvent: 37 | continue 38 | 39 | # The structure of the bid evicted event is different, so handle it separately 40 | if event_data['EventName'] == EVENT_NAME_BID_EVICTED: 41 | # For every Instance associated with the event... 42 | for i_id in event_payload['serviceEventDetails']['instanceIdSet']: 43 | # Initialise the Instance or hydrate it if we already found an event associated with it 44 | if i_id not in instances: 45 | instances[i_id] = Instance(event_data, {'instanceId': i_id}, self.tw) 46 | else: 47 | instances[i_id].hydrate(event_data, {}) 48 | else: 49 | # For every Instance associated with the event... 50 | for instance_data in event_payload['responseElements']['instancesSet']['items']: 51 | i_id = instance_data['instanceId'] 52 | 53 | # Initialise the Instance or hydrate it if we already found an event associated with it 54 | if i_id not in instances: 55 | instances[i_id] = Instance(event_data, instance_data, self.tw) 56 | else: 57 | instances[i_id].hydrate(event_data, instance_data) 58 | 59 | return instances 60 | 61 | def fetch_resources(self) -> dict[str: object]: 62 | events = self._fetch_cloud_trail_events() 63 | instances = self._extract_resources_data_from_events(events) 64 | 65 | # Discard instances whose required attributes couldn't be fetched, as well as instances terminated before the time window 66 | instances = { 67 | i_id: instance for i_id, instance in instances.items() 68 | if instance.is_initialised() and 69 | (instance.terminated_at is None or self.tw.contains(instance.terminated_at)) 70 | } 71 | 72 | self.resources = instances 73 | 74 | return instances 75 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resource_managers/launch_template_manager.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime, timedelta 2 | from .resource_manager import
ResourceManager 3 | from .libs_finder import * 4 | 5 | 6 | class LaunchTemplateManager(ResourceManager): 7 | def _fetch_cloud_trail_events(self) -> [dict]: 8 | events = cloudtrail_helpers.fetch_events_by_resource_type( 9 | RESOURCE_TYPE_LT, 10 | self.region, 11 | cloudtrail_helpers.build_events_search_expression(ALL_LT_EVENT_NAMES), 12 | credentials=self.credentials, 13 | StartTime=datetime.now() - timedelta(MAX_CT_RELATIVE_DAYS_SEARCH), 14 | EndTime=self.tw.end_time 15 | ) 16 | 17 | # Sort all the events in ascending order by event time 18 | return sorted(events, key=lambda e: e['EventTime']) 19 | 20 | def _extract_resources_data_from_events(self, events: [dict]) -> dict[str: object]: 21 | resources = {} 22 | 23 | for event_data in events: 24 | try: 25 | event_payload = cloudtrail_helpers.extract_event_payload(event_data) 26 | except InvalidEvent: 27 | continue 28 | 29 | lt_id = event_data['Resources'][1]['ResourceName'] 30 | 31 | if lt_id not in resources: 32 | resources[lt_id] = LaunchTemplate(event_data, event_payload) 33 | else: 34 | resources[lt_id].hydrate(event_data, event_payload) 35 | 36 | # SDKs allow for creating an ASG either by specifying the LT id or name. 
Make the LTs referencable by their name 37 | lts_by_name = {data.name: data for _, data in resources.items()} 38 | resources.update(lts_by_name) 39 | 40 | return resources 41 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resource_managers/libs_finder.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | 4 | def is_aws_env() -> bool: 5 | return 'AWS_LAMBDA_FUNCTION_NAME' in os.environ or 'AWS_EXECUTION_ENV' in os.environ 6 | 7 | 8 | if is_aws_env(): 9 | from helpers import * 10 | from constants import * 11 | from resources import * 12 | else: 13 | from assets.lambda_layer.python.helpers import * 14 | from assets.lambda_layer.python.constants import * 15 | from assets.lambda_layer.python.resources import * 16 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resource_managers/org_manager.py: -------------------------------------------------------------------------------- 1 | from .libs_finder import * 2 | from . 
import account_manager 3 | 4 | _ROUND_DECIMALS = 2 5 | 6 | 7 | def fetch_scores(start_time, end_time) -> dict: 8 | """ 9 | Gets account metrics from CloudWatch and calculates the weighted component scores and Flexibility Score 10 | 11 | :param start_time: start of the time window to calculate the scores 12 | :param end_time: end of the time window to calculate the scores 13 | 14 | :return: dictionary containing the scores 15 | 16 | """ 17 | 18 | # Get the weighted metrics for all the accounts in the organization 19 | account_scores = account_manager.fetch_scores(start_time, end_time) 20 | 21 | return calculate_scores(account_scores) 22 | 23 | 24 | def calculate_scores(metrics: dict) -> dict: 25 | """ 26 | Calculates the weighted component scores and Flexibility Score at the organization level 27 | using fetched account metrics 28 | 29 | :param metrics: weighted metrics of all the accounts 30 | 31 | :return: dictionary containing the organization scores 32 | 33 | """ 34 | 35 | # Accumulate the accounts' vCPU hours 36 | total_vcpu_h = sum(a_metrics[CW_METRIC_NAME_VCPU_H] for a_metrics in metrics.values()) 37 | 38 | # Calculate the weighted component scores 39 | scores = { 40 | score: round(sum([ 41 | metrics[a_id][score] * metrics[a_id][CW_METRIC_NAME_VCPU_H] 42 | for a_id in metrics if score in metrics[a_id] 43 | ]) / total_vcpu_h, _ROUND_DECIMALS) if total_vcpu_h != 0 else 0 44 | 45 | for score in ALL_SCORES 46 | } 47 | 48 | # Calculate the Flexibility Score 49 | scores[SCORE_FLEXIBILITY] = round(sum([ 50 | scores[score_name] * score_weight 51 | for score_name, score_weight in ALL_SCORES.items() if score_name in scores 52 | ]), _ROUND_DECIMALS) 53 | 54 | return scores 55 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resource_managers/resource_manager.py: -------------------------------------------------------------------------------- 1 | from abc import ABCMeta, abstractmethod 2 | from .libs_finder
import * 3 | 4 | 5 | class ResourceManager(metaclass=ABCMeta): 6 | def __init__(self, tw: TimeWindow, region: str, credentials: dict = None): 7 | if credentials is None: 8 | credentials = {} 9 | 10 | self.credentials = credentials 11 | self.tw = tw 12 | self.region = region 13 | self.resources: dict[str: object] = {} 14 | 15 | @abstractmethod 16 | def _fetch_cloud_trail_events(self) -> [dict]: 17 | pass 18 | 19 | @abstractmethod 20 | def _extract_resources_data_from_events(self, events: [dict]) -> dict[str: object]: 21 | pass 22 | 23 | def fetch_resources(self) -> dict[str: object]: 24 | events = self._fetch_cloud_trail_events() 25 | self.resources = self._extract_resources_data_from_events(events) 26 | return self.resources 27 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resources/__init__.py: -------------------------------------------------------------------------------- 1 | from .instance import Instance 2 | from .launch_template import LaunchTemplate 3 | from .asg import ASG 4 | from .account import Account 5 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resources/account.py: -------------------------------------------------------------------------------- 1 | class Account: 2 | def _get_vcpu_h(self): 3 | vcpu_h = 0 4 | 5 | for _, instance in self.instances.items(): 6 | vcpu_h += instance.vcpu_h 7 | 8 | return vcpu_h 9 | 10 | vcpu_h = property(_get_vcpu_h) 11 | 12 | def __init__(self, data: dict): 13 | self.id = data['Id'] 14 | self.is_main = data['isMain'] 15 | self.name = data['Name'] if 'Name' in data else '' 16 | self.instances = {} 17 | self.lt = {} 18 | self.asg = {} 19 | 20 | def set_resources(self, instances, lt, asg): 21 | self.instances = instances 22 | self.lt = lt 23 | self.asg = asg 24 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resources/asg.py: 
-------------------------------------------------------------------------------- 1 | from enum import Enum 2 | from .resource import Resource 3 | from .libs_finder import * 4 | 5 | 6 | class LaunchTemplate: 7 | def __init__(self, data: dict): 8 | self.id = data['launchTemplateId'] if 'launchTemplateId' in data else data['launchTemplateName'] 9 | self.version = data['version'] if data['version'] == DEFAULT else int(data['version']) 10 | 11 | def __str__(self): 12 | return str(self.__dict__) 13 | 14 | def __repr__(self): 15 | return self.__str__() 16 | 17 | 18 | class Overrides: 19 | def __init__(self, event_payload: dict): 20 | self.uses_abis = False 21 | self.instance_count = None 22 | 23 | if 'mixedInstancesPolicy' in event_payload['requestParameters'] and 'overrides' in event_payload[ 24 | 'requestParameters']['mixedInstancesPolicy']['launchTemplate']: 25 | instance_count = 0 26 | 27 | for override in event_payload['requestParameters']['mixedInstancesPolicy']['launchTemplate'][ 28 | 'overrides']: 29 | if 'instanceRequirements' in override: 30 | self.uses_abis = True 31 | elif 'instanceType' in override: 32 | instance_count += 1 33 | 34 | if instance_count != 0: 35 | self.instance_count = instance_count 36 | 37 | def __str__(self): 38 | return str(self.__dict__) 39 | 40 | def __repr__(self): 41 | return self.__str__() 42 | 43 | 44 | class ASG(Resource): 45 | def __init__(self, event_data: dict, event_payload: dict): 46 | self.name = None 47 | self.created_at = None 48 | self.deleted_at = None 49 | self.lt = None 50 | self.sp = None 51 | self.overrides = Overrides(event_payload) 52 | 53 | super().__init__(event_data, event_payload) 54 | 55 | def hydrate(self, event_data: dict, event_payload: dict): 56 | if event_data['EventName'] == EVENT_NAME_CREATE_ASG: 57 | self.created_at = event_data['EventTime'] 58 | elif event_data['EventName'] == EVENT_NAME_DELETE_ASG: 59 | self.deleted_at = event_data['EventTime'] 60 | 61 | self.name = 
event_data['Resources'][0]['ResourceName'] 62 | self.overrides = Overrides(event_payload) 63 | self.lt = None if 'mixedInstancesPolicy' not in event_payload['requestParameters'] else LaunchTemplate( 64 | event_payload['requestParameters']['mixedInstancesPolicy']['launchTemplate'][ 65 | 'launchTemplateSpecification'] 66 | ) 67 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resources/instance.py: -------------------------------------------------------------------------------- 1 | from .resource import Resource 2 | from .libs_finder import * 3 | 4 | NORMALIZATION_TABLE = { 5 | 'nano': 0.25, 6 | 'micro': 0.5, 7 | 'small': 1, 8 | 'medium': 2, 9 | 'large': 4, 10 | 'xlarge': 8, 11 | '2xlarge': 16, 12 | '3xlarge': 24, 13 | '4xlarge': 32, 14 | '6xlarge': 48, 15 | '8xlarge': 64, 16 | '9xlarge': 72, 17 | '10xlarge': 80, 18 | '12xlarge': 96, 19 | '16xlarge': 128, 20 | '18xlarge': 144, 21 | '24xlarge': 192, 22 | '32xlarge': 256, 23 | '56xlarge': 448, 24 | '112xlarge': 896 25 | } 26 | 27 | 28 | class LaunchTemplate: 29 | def __init__(self, id: str, version: str): 30 | self.id = id 31 | self.version = version if version == DEFAULT else int(version) 32 | 33 | def __str__(self): 34 | return str(self.__dict__) 35 | 36 | def __repr__(self): 37 | return self.__str__() 38 | 39 | 40 | class LifecycleEvent: 41 | def __init__(self, event_data: dict, event_payload: dict): 42 | self.name = event_data['EventName'] 43 | self.time = event_data['EventTime'] 44 | 45 | # Determine if the Instance to which this event belongs, was running before this event and as a result of it 46 | if event_data['EventName'] == EVENT_NAME_BID_EVICTED: 47 | self.previously_running = True 48 | self.currently_running = False 49 | elif event_data['EventName'] == EVENT_NAME_RUN_INSTANCES: 50 | self.previously_running = False 51 | self.currently_running = True 52 | else: 53 | self.previously_running = event_payload['previousState']['code'] == 
EVENT_CODE_RUNNING 54 | self.currently_running = event_payload['currentState']['code'] in (EVENT_CODE_RUNNING, 55 | EVENT_CODE_PENDING) 56 | 57 | def __str__(self): 58 | return str(self.__dict__) 59 | 60 | 61 | class InstanceAttrs: 62 | def __init__(self, event_data: dict, event_payload: dict): 63 | self.type = None 64 | self.size = None 65 | self.arch = None 66 | self.is_spot = None 67 | 68 | self.hydrate(event_data, event_payload) 69 | 70 | def hydrate(self, event_data: dict, event_payload: dict): 71 | if 'instanceType' in event_payload: 72 | self.type = event_payload['instanceType'] 73 | self.size = self.type.split('.')[1] 74 | 75 | if 'architecture' in event_payload: 76 | self.arch = event_payload['architecture'] 77 | 78 | if event_data['EventName'] == EVENT_NAME_RUN_INSTANCES: 79 | self.is_spot = 'instanceLifecycle' in event_payload and event_payload['instanceLifecycle'] == MARKET_SPOT 80 | 81 | def is_initialised(self): 82 | """ 83 | Determines if required attributes for calculations have been fetched 84 | :return: True if required attributes are present, False otherwise 85 | """ 86 | 87 | return self.size is not None 88 | 89 | def __str__(self): 90 | return str(self.__dict__) 91 | 92 | def __repr__(self): 93 | return self.__str__() 94 | 95 | 96 | class Instance(Resource): 97 | _TAG_KEY_ASG_NAME = 'aws:autoscaling:groupName' 98 | _TAG_KEY_LT_ID = 'aws:ec2launchtemplate:id' 99 | _TAG_KEY_LT_VERSION = 'aws:ec2launchtemplate:version' 100 | 101 | # ---------------- GETTERS ---------------- # 102 | def _get_running_windows(self) -> [TimeWindow]: 103 | if self._running_windows is None: 104 | self._calculate_running_windows() 105 | 106 | return self._running_windows 107 | 108 | def _get_vcpu_h(self) -> int: 109 | if self._vcpu_h is None: 110 | self._calculate_vcpu_h() 111 | 112 | return self._vcpu_h 113 | 114 | # --------------- PROPERTIES -------------- # 115 | running_windows = property(_get_running_windows) 116 | 117 | vcpu_h = property(_get_vcpu_h) 118 | 119 | # 
------------ PRIVATE METHODS ------------ # 120 | def _entered_running(self, tw: TimeWindow): 121 | # Note: lifecycle events are sorted by event time in ascending order, and there are no events past the time window 122 | for event in self.lifecycle_events: 123 | if event.time >= tw.start_time: 124 | return event.previously_running 125 | 126 | # If we reach this point, the Instance doesn't have any event during the time window; its last event will 127 | # tell us if the instance was running before entering the time window 128 | return self.lifecycle_events[-1].currently_running 129 | 130 | def _calculate_running_windows(self) -> None: 131 | """ 132 | Uses the list of lifecycle events to calculate the running windows of the Instance. 133 | 134 | For those situations in which the Instance ran during the whole time window, or in which it ran from 135 | the last event until the end of the time window, we won't store the end time of the running window to simplify 136 | the calculation of the Scaling Score. 
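A minimal sketch of this windowing rule, using a hypothetical standalone helper and plain event tuples instead of the real LifecycleEvent objects (illustrative only, not this class's implementation):

```python
from datetime import datetime

def running_windows(events, tw_start):
    # events: (time, previously_running, currently_running) tuples, sorted ascending,
    # all within the time window; tw_start is the start of the time window
    windows, prev_start = [], tw_start
    for time, previously_running, _ in events:
        # The instance was running from the previous marker up to this event
        if previously_running:
            windows.append((prev_start, time))
        prev_start = time
    # Leave the window open-ended if the last event left the instance running
    if events and events[-1][2]:
        windows.append((events[-1][0], None))
    return windows

day_start = datetime(2024, 1, 1)
events = [
    (datetime(2024, 1, 1, 8), False, True),   # e.g. a StartInstances event at 08:00
    (datetime(2024, 1, 1, 20), True, False),  # e.g. a StopInstances event at 20:00
]
print(running_windows(events, day_start))  # one closed window: 08:00 -> 20:00
```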
137 | """ 138 | 139 | windows: [TimeWindow] = [] 140 | prev_start_time = self._events_time_window.start_time 141 | 142 | # Work only with lifecycle events in the time window 143 | events_in_tw = [event for event in self.lifecycle_events if self._events_time_window.contains(event.time)] 144 | 145 | # No events during the time window, check if the Instance was running during the whole of it 146 | if not events_in_tw: 147 | if self._entered_running(self._events_time_window): 148 | windows.append(TimeWindow(self._events_time_window.start_time, None)) 149 | # Check if the Instance was running between events, starting from the beginning of the time window 150 | else: 151 | for event in events_in_tw: 152 | if event.previously_running: 153 | windows.append(TimeWindow(prev_start_time, event.time)) 154 | 155 | prev_start_time = event.time 156 | 157 | # Check if the Instance was running between the last event and the end of the time window 158 | if events_in_tw[-1].currently_running: 159 | windows.append(TimeWindow(events_in_tw[-1].time, None)) 160 | 161 | self._running_windows = windows 162 | 163 | def _calculate_vcpu_h(self) -> None: 164 | running_time = 0 165 | 166 | for window in self.running_windows: 167 | if window.end_time is not None: 168 | running_time += (window.end_time - window.start_time).seconds 169 | else: 170 | running_time += (self._events_time_window.end_time - window.start_time).seconds + 1 171 | 172 | self._vcpu_h = running_time * NORMALIZATION_TABLE[self.attrs.size] // 3600 * 2 173 | 174 | # -------------- INITIALIZER -------------- # 175 | def __init__(self, event_data: dict, event_payload: dict, tw: TimeWindow): 176 | self.id = None 177 | self.launched_at = None 178 | self.terminated_at = None 179 | self.asg_name = None 180 | self.lt = None 181 | self.attrs = InstanceAttrs(event_data, event_payload) 182 | self.lifecycle_events: [LifecycleEvent] = [] 183 | 184 | self._running_windows = None 185 | self._vcpu_h = None 186 | self._events_time_window = tw 
187 | 188 | super().__init__(event_data, event_payload) 189 | 190 | # ------------- PUBLIC METHODS ------------ # 191 | def hydrate(self, event_data: dict, event_payload: dict): 192 | self.lifecycle_events.append(LifecycleEvent(event_data, event_payload)) 193 | self.attrs.hydrate(event_data, event_payload) 194 | 195 | if 'instanceId' in event_payload: 196 | self.id = event_payload['instanceId'] 197 | 198 | if event_data['EventName'] == EVENT_NAME_RUN_INSTANCES: 199 | self.launched_at = event_data['EventTime'] 200 | elif event_data['EventName'] in (EVENT_NAME_TERMINATE_INSTANCES, EVENT_NAME_BID_EVICTED): 201 | self.terminated_at = event_data['EventTime'] 202 | 203 | # Explore the tags to try to find the used LT and the ASG to which the Instance has been launched 204 | if 'tagSet' in event_payload: 205 | tags = {tag['key']: tag['value'] for tag in event_payload['tagSet']['items']} 206 | 207 | if self._TAG_KEY_ASG_NAME in tags: 208 | self.asg_name = tags[self._TAG_KEY_ASG_NAME] 209 | 210 | if self._TAG_KEY_LT_ID in tags: 211 | self.lt = LaunchTemplate(tags[self._TAG_KEY_LT_ID], tags[self._TAG_KEY_LT_VERSION]) 212 | 213 | def is_initialised(self): 214 | """ 215 | Determines if required attributes for calculations have been fetched 216 | :return: True if required attributes are present, False otherwise 217 | """ 218 | 219 | return self.attrs.is_initialised() and self.launched_at is not None 220 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resources/launch_template.py: -------------------------------------------------------------------------------- 1 | from .resource import Resource 2 | from .libs_finder import * 3 | 4 | 5 | class LaunchTemplateVersion: 6 | def __init__(self, event_data: dict, event_payload: dict): 7 | self.created_at = event_data['EventTime'] 8 | 9 | if event_data['EventName'] == EVENT_NAME_CREATE_LT: 10 | self.version_number = 1 11 | self.uses_abis = 'InstanceRequirements' in 
event_payload['requestParameters'][ 12 | 'CreateLaunchTemplateRequest']['LaunchTemplateData'] 13 | elif event_data['EventName'] == EVENT_NAME_CREATE_LT_VERSION: 14 | self.version_number = event_payload['responseElements']['CreateLaunchTemplateVersionResponse'][ 15 | 'launchTemplateVersion']['versionNumber'] 16 | self.uses_abis = 'InstanceRequirements' in event_payload['requestParameters'][ 17 | 'CreateLaunchTemplateVersionRequest']['LaunchTemplateData'] 18 | 19 | def __str__(self): 20 | return str(self.__dict__) 21 | 22 | def __repr__(self): 23 | return self.__str__() 24 | 25 | 26 | class LaunchTemplate(Resource): 27 | def __init__(self, event_data: dict, event_payload: dict): 28 | self.id = None 29 | self.name = None 30 | self.versions = {} 31 | 32 | super().__init__(event_data, event_payload) 33 | 34 | def hydrate(self, event_data: dict, event_payload: dict): 35 | self.id = event_data['Resources'][1]['ResourceName'] 36 | self.name = event_data['Resources'][0]['ResourceName'] 37 | 38 | if event_data['EventName'] == EVENT_NAME_CREATE_LT: 39 | self.versions[1] = LaunchTemplateVersion(event_data, event_payload) 40 | self.versions[DEFAULT] = self.versions[1] 41 | elif event_data['EventName'] == EVENT_NAME_MODIFY_LT: 42 | default_version = event_payload['responseElements']['ModifyLaunchTemplateResponse']['launchTemplate'][ 43 | 'defaultVersionNumber'] 44 | 45 | if default_version in self.versions: 46 | self.versions[DEFAULT] = self.versions[default_version] 47 | elif event_data['EventName'] == EVENT_NAME_CREATE_LT_VERSION: 48 | version = event_payload['responseElements']['CreateLaunchTemplateVersionResponse'][ 49 | 'launchTemplateVersion']['versionNumber'] 50 | self.versions[version] = LaunchTemplateVersion(event_data, event_payload) 51 | 52 | # Set this version number as the default if needed 53 | if event_payload['responseElements']['CreateLaunchTemplateVersionResponse']['launchTemplateVersion'][ 54 | 'defaultVersion']: 55 | self.versions[DEFAULT] = 
self.versions[version] 56 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resources/libs_finder.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | 4 | def is_aws_env() -> bool: 5 | return 'AWS_LAMBDA_FUNCTION_NAME' in os.environ or 'AWS_EXECUTION_ENV' in os.environ 6 | 7 | 8 | if is_aws_env(): 9 | from helpers import * 10 | from constants import * 11 | else: 12 | from assets.lambda_layer.python.helpers import * 13 | from assets.lambda_layer.python.constants import * 14 | -------------------------------------------------------------------------------- /assets/lambda_layer/python/resources/resource.py: -------------------------------------------------------------------------------- 1 | from abc import ABCMeta, abstractmethod 2 | 3 | 4 | class Resource(metaclass=ABCMeta): 5 | def __init__(self, event_data: dict, event_payload: dict): 6 | self.hydrate(event_data, event_payload) 7 | 8 | @abstractmethod 9 | def hydrate(self, event_data: dict, event_payload: dict): 10 | pass 11 | 12 | def __str__(self): 13 | return str(self.__dict__) 14 | 15 | def __repr__(self): 16 | return self.__str__() 17 | -------------------------------------------------------------------------------- /cdk.json: -------------------------------------------------------------------------------- 1 | { 2 | "app": "python3 app.py", 3 | "watch": { 4 | "include": [ 5 | "**" 6 | ], 7 | "exclude": [ 8 | "README.md", 9 | "cdk*.json", 10 | "requirements*.txt", 11 | "source.bat", 12 | "**/__init__.py", 13 | "python/__pycache__", 14 | "tests" 15 | ] 16 | }, 17 | "context": { 18 | "@aws-cdk/aws-lambda:recognizeLayerVersion": true, 19 | "@aws-cdk/core:checkSecretUsage": true, 20 | "@aws-cdk/core:target-partitions": [ 21 | "aws", 22 | "aws-cn" 23 | ], 24 | "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true, 25 | "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true, 26 | 
"@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true, 27 | "@aws-cdk/aws-iam:minimizePolicies": true, 28 | "@aws-cdk/core:validateSnapshotRemovalPolicy": true, 29 | "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true, 30 | "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true, 31 | "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true, 32 | "@aws-cdk/aws-apigateway:disableCloudWatchRole": true, 33 | "@aws-cdk/core:enablePartitionLiterals": true, 34 | "@aws-cdk/aws-events:eventsTargetQueueSameAccount": true, 35 | "@aws-cdk/aws-iam:standardizedServicePrincipals": true, 36 | "@aws-cdk/aws-ecs:disableExplicitDeploymentControllerForCircuitBreaker": true, 37 | "@aws-cdk/aws-iam:importedRoleStackSafeDefaultPolicyName": true, 38 | "@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy": true, 39 | "@aws-cdk/aws-route53-patters:useCertificate": true, 40 | "@aws-cdk/customresources:installLatestAwsSdkDefault": false 41 | } 42 | } 43 | -------------------------------------------------------------------------------- /cdk/__init__.py: -------------------------------------------------------------------------------- 1 | from .main_stack import MainStack 2 | -------------------------------------------------------------------------------- /cdk/main_stack.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 
11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: main application Stack 21 | 22 | from aws_cdk import ( 23 | Stack, CfnParameter 24 | ) 25 | from constructs import Construct 26 | from .modules import * 27 | 28 | 29 | class MainStack(Stack): 30 | def _create_param_org_role_name(self) -> CfnParameter: 31 | return CfnParameter(self, 'ParamOrgRoleName', default='OrganizationAccountAccessRole') 32 | 33 | def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: 34 | super().__init__(scope, construct_id, **kwargs) 35 | 36 | # Create a parameter to hold the organizations role name 37 | org_role_name = self._create_param_org_role_name() 38 | 39 | # Create an S3 bucket to store metrics calculations 40 | buckets = S3Module.create(self) 41 | 42 | # Create the Lambda functions 43 | funcs = LambdaModule.create(self, buckets, org_role_name) 44 | 45 | # Create the CloudWatch dashboard 46 | CloudWatchModule.create(self, funcs) 47 | -------------------------------------------------------------------------------- /cdk/modules/__init__.py: -------------------------------------------------------------------------------- 1 | from ._lambda import LambdaModule 2 | from .s3 import S3Module 3 | from .cloudwatch import CloudWatchModule 4 | -------------------------------------------------------------------------------- /cdk/modules/_lambda.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 
| ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | ### SPDX-License-Identifier: MIT-0 4 | ### 5 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 6 | ### software and associated documentation files (the "Software"), to deal in the Software 7 | ### without restriction, including without limitation the rights to use, copy, modify, 8 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 9 | ### permit persons to whom the Software is furnished to do so. 10 | ### 11 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 12 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 13 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 14 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 15 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 16 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
17 | # 18 | # Author: Borja Pérez Guasch 19 | # Summary: module that deploys AWS Lambda related resources 20 | 21 | from aws_cdk import ( 22 | aws_lambda as _lambda, 23 | RemovalPolicy, 24 | Duration, 25 | aws_iam as iam, 26 | aws_events as events, 27 | aws_events_targets as event_targets, 28 | Stack, 29 | custom_resources 30 | ) 31 | from constructs import Construct 32 | from assets.lambda_layer.python.constants import * 33 | from cdk_nag import NagSuppressions, NagPackSuppression 34 | 35 | 36 | class LambdaModule: 37 | context: Construct 38 | 39 | @classmethod 40 | def _create_layer(cls) -> _lambda.LayerVersion: 41 | return _lambda.LayerVersion( 42 | cls.context, 43 | 'LambdaLayer', 44 | code=_lambda.Code.from_asset('assets/lambda_layer'), 45 | compatible_runtimes=[_lambda.Runtime.PYTHON_3_10], 46 | removal_policy=RemovalPolicy.DESTROY, 47 | layer_version_name='FlexibilityScoreHelpers' 48 | ) 49 | 50 | @classmethod 51 | def _create_func_daily_metrics_calculation(cls, layer, bucket, org_role_name) -> None: 52 | func = _lambda.Function( 53 | cls.context, 54 | 'FuncDailyMetricsCalculation', 55 | function_name='calculateDailyMetrics', 56 | timeout=Duration.minutes(15), 57 | runtime=_lambda.Runtime.PYTHON_3_10, 58 | environment={ 59 | 'BUCKET': bucket.bucket_name, 60 | 'ORGS_IAM_ROLE': org_role_name.value_as_string 61 | }, 62 | code=_lambda.Code.from_asset('assets/func_calculate_daily_metrics'), 63 | handler='index.handler', 64 | layers=[layer], 65 | memory_size=256 66 | ) 67 | 68 | # Grant the function permission to write to the S3 Bucket 69 | bucket.grant_write(func) 70 | 71 | # Define a structure containing the permissions needed by the function 72 | statements = [ 73 | { 74 | 'actions': [ 75 | "organizations:ListAccounts", 76 | "organizations:DescribeOrganization", 77 | "cloudtrail:LookupEvents", 78 | "ec2:DescribeRegions", 79 | "cloudwatch:PutMetricData" 80 | 81 | ], 82 | 'resources': ['*'] 83 | }, 84 | { 85 | 'actions': ['sts:AssumeRole'], 86 | 'resources': 
[f'arn:aws:iam::*:role/{org_role_name.value_as_string}'] 87 | } 88 | ] 89 | 90 | # Attach the permissions to the function execution role 91 | for statement in statements: 92 | func.add_to_role_policy(iam.PolicyStatement( 93 | actions=statement['actions'], 94 | resources=statement['resources'], 95 | effect=iam.Effect.ALLOW 96 | )) 97 | 98 | # The alias will be used to trigger the function using an EventBridge Rule 99 | alias = _lambda.Alias( 100 | cls.context, 101 | 'FuncAliasDailyMetricsCalculation', 102 | alias_name='Prod', 103 | version=func.current_version 104 | ) 105 | 106 | # Create a rule that executes the prod alias every 3 hours 107 | events.Rule( 108 | cls.context, 109 | 'DailyMetricsCalculationRule', 110 | schedule=events.Schedule.rate(Duration.hours(3)), 111 | targets=[event_targets.LambdaFunction(alias)] 112 | ) 113 | 114 | # Execute the function manually once when deploying the stack 115 | custom_resources.AwsCustomResource( 116 | cls.context, 'ExecuteFuncCustomResource', 117 | on_create=custom_resources.AwsSdkCall( 118 | service='Lambda', 119 | action='invoke', 120 | physical_resource_id=custom_resources.PhysicalResourceId.of(alias.alias_name), 121 | parameters={ 122 | 'InvocationType': 'Event', 123 | 'FunctionName': alias.function_name 124 | } 125 | ), 126 | removal_policy=RemovalPolicy.DESTROY, 127 | function_name='dailyMetricsCalculationOneOffExecution', 128 | policy=custom_resources.AwsCustomResourcePolicy.from_statements( 129 | [ 130 | iam.PolicyStatement( 131 | effect=iam.Effect.ALLOW, 132 | actions=['lambda:InvokeFunction'], 133 | resources=[alias.function_arn] 134 | ) 135 | ] 136 | ) 137 | ) 138 | 139 | NagSuppressions.add_resource_suppressions(func, [NagPackSuppression( 140 | id='AwsSolutions-IAM5', 141 | reason='Wildcard is needed since individual ids are not available at deploy time.' 
142 | )], True) 143 | 144 | NagSuppressions.add_resource_suppressions(func, [NagPackSuppression( 145 | id='AwsSolutions-IAM4', 146 | reason='Managed policy needed to specify permissions over other resources.' 147 | )], True) 148 | 149 | @classmethod 150 | def _create_func_custom_widget_org_score(cls, layer) -> _lambda.Alias: 151 | func = _lambda.Function( 152 | cls.context, 153 | 'FuncCustomWidgetOrgScore', 154 | function_name='customWidgetOrgScore', 155 | timeout=Duration.minutes(1), 156 | runtime=_lambda.Runtime.PYTHON_3_10, 157 | code=_lambda.Code.from_asset('assets/func_custom_widget_org_score'), 158 | handler='index.handler', 159 | layers=[layer], 160 | reserved_concurrent_executions=5 161 | ) 162 | 163 | func.add_to_role_policy(iam.PolicyStatement( 164 | actions=["cloudwatch:GetMetricData", "organizations:ListAccounts", "organizations:DescribeOrganization"], 165 | resources=['*'], 166 | effect=iam.Effect.ALLOW 167 | )) 168 | 169 | NagSuppressions.add_resource_suppressions(func, [NagPackSuppression( 170 | id='AwsSolutions-IAM5', 171 | reason='Wildcard is needed since individual ids are not available at deploy time.' 172 | )], True) 173 | 174 | NagSuppressions.add_resource_suppressions(func, [NagPackSuppression( 175 | id='AwsSolutions-IAM4', 176 | reason='Managed policy needed to specify permissions over other resources.' 
177 | )], True) 178 | 179 | # The alias will be used to render the CloudWatch widget 180 | return _lambda.Alias( 181 | cls.context, 182 | 'FuncAliasCustomWidgetOrgScore', 183 | alias_name='Prod', 184 | version=func.current_version 185 | ) 186 | 187 | @classmethod 188 | def _create_func_custom_widget_account_rank(cls, layer) -> _lambda.Alias: 189 | func = _lambda.Function( 190 | cls.context, 191 | 'FuncCustomWidgetAccountRank', 192 | function_name='customWidgetAccountRank', 193 | timeout=Duration.minutes(1), 194 | runtime=_lambda.Runtime.PYTHON_3_10, 195 | code=_lambda.Code.from_asset('assets/func_custom_widget_account_rank'), 196 | handler='index.handler', 197 | layers=[layer], 198 | reserved_concurrent_executions=5 199 | ) 200 | 201 | func.add_to_role_policy(iam.PolicyStatement( 202 | actions=["cloudwatch:GetMetricData", "organizations:ListAccounts", "organizations:DescribeOrganization"], 203 | resources=['*'], 204 | effect=iam.Effect.ALLOW 205 | )) 206 | 207 | NagSuppressions.add_resource_suppressions(func, [NagPackSuppression( 208 | id='AwsSolutions-IAM5', 209 | reason='Wildcard is needed since individual ids are not available at deploy time.' 210 | )], True) 211 | 212 | NagSuppressions.add_resource_suppressions(func, [NagPackSuppression( 213 | id='AwsSolutions-IAM4', 214 | reason='Managed policy needed to specify permissions over other resources.' 
215 | )], True) 216 | 217 | # The alias will be used to render the CloudWatch widget 218 | return _lambda.Alias( 219 | cls.context, 220 | 'FuncAliasCustomWidgetAccountRank', 221 | alias_name='Prod', 222 | version=func.current_version 223 | ) 224 | 225 | @classmethod 226 | def _create_func_custom_widget_accounts_scores(cls, layer) -> _lambda.Alias: 227 | func = _lambda.Function( 228 | cls.context, 229 | 'FuncCustomWidgetAccountsScores', 230 | function_name='customWidgetAccountsScores', 231 | timeout=Duration.minutes(1), 232 | runtime=_lambda.Runtime.PYTHON_3_10, 233 | code=_lambda.Code.from_asset('assets/func_custom_widget_accounts_scores'), 234 | handler='index.handler', 235 | layers=[layer], 236 | reserved_concurrent_executions=5 237 | ) 238 | 239 | func.add_to_role_policy(iam.PolicyStatement( 240 | actions=["cloudwatch:GetMetricData", "organizations:ListAccounts", "organizations:DescribeOrganization"], 241 | resources=['*'], 242 | effect=iam.Effect.ALLOW 243 | )) 244 | 245 | NagSuppressions.add_resource_suppressions(func, [NagPackSuppression( 246 | id='AwsSolutions-IAM5', 247 | reason='Wildcard is needed since individual ids are not available at deploy time.' 248 | )], True) 249 | 250 | NagSuppressions.add_resource_suppressions(func, [NagPackSuppression( 251 | id='AwsSolutions-IAM4', 252 | reason='Managed policy needed to specify permissions over other resources.' 253 | )], True) 254 | 255 | # The alias will be used to render the CloudWatch widget 256 | return _lambda.Alias( 257 | cls.context, 258 | 'FuncAliasCustomWidgetAccountsScores', 259 | alias_name='Prod', 260 | version=func.current_version 261 | ) 262 | 263 | @classmethod 264 | def create(cls, context: Construct, buckets, org_role_name) -> dict: 265 | NagSuppressions.add_stack_suppressions( 266 | stack=Stack.of(context), 267 | suppressions=[ 268 | NagPackSuppression( 269 | id='AwsSolutions-L1', 270 | reason='Deploying Lambda functions with the latest runtime is not a requirement.' 
271 | ), 272 | NagPackSuppression( 273 | id='AwsSolutions-IAM4', 274 | reason='Needed to execute an SDK call.' 275 | ) 276 | ] 277 | ) 278 | 279 | cls.context = context 280 | 281 | layer = cls._create_layer() 282 | cls._create_func_daily_metrics_calculation(layer, buckets[BUCKET_METRICS], org_role_name) 283 | 284 | return { 285 | FUNC_ORG_SCORE: cls._create_func_custom_widget_org_score(layer), 286 | FUNC_ACCOUNTS_SCORES: cls._create_func_custom_widget_accounts_scores(layer), 287 | FUNC_ACCOUNT_RANK: cls._create_func_custom_widget_account_rank(layer) 288 | } 289 | -------------------------------------------------------------------------------- /cdk/modules/cloudwatch.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: module that deploys CloudWatch related resources 21 | 22 | import re 23 | 24 | from aws_cdk import ( 25 | aws_cloudwatch as cloudwatch 26 | ) 27 | from constructs import Construct 28 | from assets.lambda_layer.python.constants import * 29 | 30 | _MAX_SIZE = 24 31 | 32 | 33 | def camel_case_split(string) -> str: 34 | return ' '.join(re.findall(r'[A-Z](?:[a-z]+|[A-Z]*(?=[A-Z]|$))', string)) 35 | 36 | 37 | def read_component_description(score: str) -> str: 38 | with open(f'assets/dashboard/text/{score}.md') as fd: 39 | return fd.read() 40 | 41 | 42 | class CloudWatchModule: 43 | context: Construct 44 | funcs: dict 45 | 46 | @classmethod 47 | def _create_dashboard(cls) -> None: 48 | # ---------------------------------------- ROW 1 ---------------------------------------- # 49 | flex_score_widget = cloudwatch.CustomWidget( 50 | function_arn=cls.funcs[FUNC_ORG_SCORE].function_arn, 51 | title='Flexibility Score', 52 | width=5, 53 | height=6, 54 | update_on_resize=False, 55 | params={'scoreName': 'FlexibilityScore'} 56 | ) 57 | 58 | flex_score_text = cloudwatch.TextWidget( 59 | markdown=read_component_description(SCORE_FLEXIBILITY), 60 | height=flex_score_widget.height, 61 | width=_MAX_SIZE - flex_score_widget.width 62 | ) 63 | 64 | row_1 = cloudwatch.Row(flex_score_widget, flex_score_text) 65 | 66 | # ------------------------------------- ROWS 2 TO 3 ------------------------------------- # 67 | component_widgets = [ 68 | cloudwatch.CustomWidget( 69 | function_arn=cls.funcs[FUNC_ORG_SCORE].function_arn, 70 | title=camel_case_split(score), 71 | width=5, 72 | height=flex_score_widget.height + 1, 73 | update_on_resize=False, 74 | params={'scoreName': score} 75 | ) 76 | 77 | for score in ALL_SCORES 78 | ] 79 | 80 | component_description_widgets = [ 81 | cloudwatch.TextWidget( 82 | markdown=read_component_description(score), 83 | height=component_widgets[0].height, 84 | width=(_MAX_SIZE - (component_widgets[0].width * 2)) 
/ 2 85 | ) 86 | 87 | for score in ALL_SCORES 88 | ] 89 | 90 | # ---------------------------------------- ROW 4 ---------------------------------------- # 91 | account_rank_widget = cloudwatch.CustomWidget( 92 | function_arn=cls.funcs[FUNC_ACCOUNT_RANK].function_arn, 93 | title='', 94 | width=_MAX_SIZE / 3, 95 | height=10, 96 | update_on_resize=False, 97 | params={'count': 3} 98 | ) 99 | 100 | accounts_scores_widget = cloudwatch.CustomWidget( 101 | function_arn=cls.funcs[FUNC_ACCOUNTS_SCORES].function_arn, 102 | title='', 103 | width=_MAX_SIZE - account_rank_widget.width, 104 | height=account_rank_widget.height, 105 | update_on_resize=False 106 | ) 107 | 108 | row_4 = cloudwatch.Row(account_rank_widget, accounts_scores_widget) 109 | 110 | # ---------------------------------------- ROW 5 ---------------------------------------- # 111 | score_trends_widget = cloudwatch.GraphWidget( 112 | height=account_rank_widget.height, 113 | width=_MAX_SIZE, 114 | title='Component scores trend' 115 | ) 116 | 117 | for score in ALL_SCORES: 118 | score_trends_widget.add_left_metric( 119 | cloudwatch.Metric( 120 | namespace='FlexibilityScore', 121 | metric_name=score, 122 | dimensions_map={ 123 | 'accountId': 'ORG' 124 | } 125 | ) 126 | ) 127 | 128 | # Dashboard. The start value uses the ISO 8601 relative range format ('-P1D' = last day) 129 | cloudwatch.Dashboard( 130 | cls.context, 131 | 'Dashboard', 132 | dashboard_name='Flexibility-Score-Dashboard', 133 | period_override=cloudwatch.PeriodOverride.INHERIT, 134 | start='-P1D', 135 | widgets=[ 136 | row_1.widgets, 137 | [component_widgets[0], component_description_widgets[0], component_widgets[1], component_description_widgets[1]], 138 | [component_widgets[2], component_description_widgets[2], component_widgets[3], component_description_widgets[3]], 139 | row_4.widgets, 140 | [score_trends_widget] 141 | ] 142 | ) 143 | 144 | @classmethod 145 | def create(cls, context: Construct, funcs) -> None: 146 | cls.context = context 147 | cls.funcs = funcs 148 | 149 | cls._create_dashboard() 150 | 
-------------------------------------------------------------------------------- /cdk/modules/s3.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: module that deploys S3 related resources 21 | 22 | from aws_cdk import ( 23 | RemovalPolicy, 24 | aws_s3 as s3, 25 | ) 26 | 27 | from constructs import Construct 28 | from assets.lambda_layer.python.constants import * 29 | from ..utils import * 30 | 31 | 32 | class S3Module: 33 | context: Construct 34 | 35 | @classmethod 36 | def _create_calculation_files_bucket(cls, suffix: str) -> s3.Bucket: 37 | bucket = s3.Bucket( 38 | cls.context, 39 | 'CalculationFilesBucket', 40 | bucket_name=f'flexibility-score-{suffix}', 41 | removal_policy=RemovalPolicy.DESTROY, 42 | enforce_ssl=True, 43 | server_access_logs_prefix='server-access-logs/', 44 | block_public_access=s3.BlockPublicAccess( 45 | block_public_policy=True, 46 | block_public_acls=True, 47 | restrict_public_buckets=True, 48 | ignore_public_acls=True 49 | ), 50 | auto_delete_objects=True 51 | ) 52 | 53 | return bucket 54 | 55 | @classmethod 56 | def create(cls, context: Construct) -> dict: 57 | cls.context = context 58 | 59 | return { 60 | BUCKET_METRICS: cls._create_calculation_files_bucket( 61 | suffix=stack_utils.stack_id_termination( 62 | context=cls.context 63 | ) 64 | ) 65 | } 66 | -------------------------------------------------------------------------------- /cdk/utils/__init__.py: -------------------------------------------------------------------------------- 1 | from . import stack_utils 2 | -------------------------------------------------------------------------------- /cdk/utils/stack_utils.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | ### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
4 | ### SPDX-License-Identifier: MIT-0 5 | ### 6 | ### Permission is hereby granted, free of charge, to any person obtaining a copy of this 7 | ### software and associated documentation files (the "Software"), to deal in the Software 8 | ### without restriction, including without limitation the rights to use, copy, modify, 9 | ### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to 10 | ### permit persons to whom the Software is furnished to do so. 11 | ### 12 | ### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 13 | ### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A 14 | ### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 15 | ### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 16 | ### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE 17 | ### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
18 | # 19 | # Author: Borja Pérez Guasch 20 | # Summary: module with methods to operate on Stacks 21 | 22 | from aws_cdk import ( 23 | Fn 24 | ) 25 | 26 | 27 | def stack_id_termination(context) -> str: 28 | return Fn.select(0, Fn.split('-', Fn.select(2, Fn.split('/', context.stack_id)))) 29 | -------------------------------------------------------------------------------- /docs/architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/ec2-flexibility-score-dashboard/1f2e64f55b457580155516866a72c649b5a9a106/docs/architecture.png -------------------------------------------------------------------------------- /docs/dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/aws-samples/ec2-flexibility-score-dashboard/1f2e64f55b457580155516866a72c649b5a9a106/docs/dashboard.png -------------------------------------------------------------------------------- /docs/diagrams.drawio: -------------------------------------------------------------------------------- 1 | 
7Vxbc9o4FP41mek+hLEsXx8DhDazaZttupvmKSNsYdQYi8hygP76lXwB2xJJaCCQKUmmRceybJ/vfOciyZzA3mT+kaHp+DMNcXxiGuH8BPZPTNMzofhXChaFwAZGIYgYCQsRWAmuyS9cCqtuGQlx2ujIKY05mTaFAU0SHPCGDDFGZ81uIxo3rzpFEVYE1wGKVekNCfm4kJoQOqsDnzCJxtWlHdsqjkxQ1bt8lHSMQjqrieD5CewxSnnxaTLv4Vgqr1JMcd5gzdHlnTGc8JecYIys+18P4U1sRl9+IRx+7SP/tBzlEcVZ+cRnN9dC0ItpFpb3zReVNqaUJDzXqN0Vf+J6PePEFkd6stUx7Zag3XabAqC25BhNQbvtNgWgPTxoXR+0b7AmUFqN4Y3W9Y3aDYo/2KUZj0mCe0vbM4QwYigkApIejSkTsoQmQnvdMZ/EogXEx9mYcHw9RYHU6kzQRshGNOGl9QOzapeKl6MK++ZIXIuVY+RIYHb+iAtAij5xjKYpGS7PYjjIWEoe8TecFoNLqTDEqfw8mUeSsx00S61OxGg2zW//QlxLe/ROfLwLpGHcoZjLgTij97h60BMTit+BNL7uiMRxSwGPmHEiiHUWk0iOz6m8HCpbMR7lIwqtkCS6zFt9aJSa0F0iROkYh+UjqVyoDFtcFc9ropIbHzGdYM4Wosu84nTJ09JTWWVztqK9U1F5XGO845dCVLqaaDn0ioziQ8lHPTcf/Rn/dvX1vy9ZcGtf/zMyvU+Zhpu9LOVicCbPnExjPBGPK3RgCmcEgdTDBzrlhCYo/kvhrgYSBT/7zOl5Tl25YC1ybQtt4bQcag0OLc+11jetxasNl+93oK9AZhoayIC7Bcj0t2woesehCChlkzI+ppHE53wl7QpuJeHSlFd9LqlUdA7BT8z5ovQPKOO06VDwnPAftc+3cijhuIpWf16OnDcWtcYVZkQ8d+5UclkidPCj3qiNJJurofLWot5qD7YJyEJhNGMBfqKfWQZ/xCL81HilCUitP8l6hmPEhWdsZgxbtweVw1fZMCaCXanMEcRFSJBqbeYSDUUq1cC5cpaBpD3TMHNCwrAwKenv0SoQNAN3fwN4jLX8LVOu8iqNpEbHV6NjeMBqcLYC/8VYlINfyaepdaGjUSqMog3W8h5egR/cJ59XHL6tHXk5n/dEQWAeFgcthYP/TmOKQslAhmayOkEcHTIJ4fZICHzHbZDw4DloKvD1EYkXnwvf2UNxkMlbF0lrG8H0HvNgrNV+Wbto6xddDaOtY9RaptEtry40V2gLdTJXFQK1W1WQqEKdTFd9tc8GmrNB6+z1tc+6XL1dE4ljA9c7N6zasT4R1UoOo8xPmUzgGmWEOKdv2D3g6hLXUf7TrgoqZuYsvqIpKYcfUi7S6GdrkiXDa+x/rk5D6bRQx4jM5X3oiy3hGXJnW5Ra0lHoiq4YTYYh2tiPb1Dy2M0c2vTUmsf11Py5km3dVVvvOXvecrSFL4y2hxVsbcVZf2ckijA75CS3ShC2keNC3zz0gAoVjNYE1G+ZeOxjUH1HQXVgDbzuZkG12wPQdv6YoIrljOmQkdxd7iyyOgcWWN29lrHbmpYChzct5bzLKO0pEWAgnXo+J4WLJYXDjdbutqL1qaiGXc9sMPX0ldG7mpxu0t9pnr+70O4owOYLed+ZCPDHSP6GkfyozsNKjKBleeYxMXoiMcoXdnnuKHaXF1l+MzECjvNmiZF2jfVdz++v1utWudBtIxXaIDFqscnIf5am0EpctphB+WoGpUXK3GfC5Ovj6g2SAfQYV49x9Z2q8xhX3yauzqSjuDPfMLLCqr2vKQeg1iLveRJiy1GvWiN/fjHdPqiZg+q+a6heqznUMQIerst2jDMI3c1ctum6APw5LjuFO3TU7UVX29ZuXHxbX60u23UvRPs7pcdpo/dEbt+y+4MN8zH7DBpd+48h90NGgvu0INXOSO6DfXIcYO+CI+/WiHnv5/ThC7w2v5zu
N/va205GrS7AC5Ovva3aaO9aXbc/S9Nskq/afGURSsivfNk+39dINUv3e1nDecoYt7LjAlSVxgFuaHzK+pR3ri7OPh+D7TsKtgPbci1/s2DbcwEEgz8m2JJQ3ADhizuUhHcoCHCa3k1QgqL81aU3zLKrt4X2tc6gls0nppO/RCfVn/u9yhKch4wWqqkm4WsiJ5L/D2IRJIckFpoVp14HlAlnb/SF4QwpYmE19JBVJ3yoT1evOv6l9qwk4imLO6vEzzqm2tuIoLCa4nVXo23TNVtssqB8LS1u2boSl15LhnYCbMhfjaFP0gDhjni0LMGdcKldxUkMBq6f70teG9Ve+a5be3LPdl9mzvauzFmtF9X0UuiA7y7RkLlm+U7rWkNq4byESYFP/jTS3aWfLB+n7TetHYINvGY+Y3q61xpVsCvZ1sFWJ3LVGb8j2L8H9pK2ywVxldlvCra60qluyj+C/Ztu3Gi68aVu9wX28u3OFdr2Ee2tUdtvoG1VMXJvaKuFp3NEe1tfR+A30Yb2nh25ZqVO3QV+RPv30LZBM2zbprqPbVdoUzq0Pt3cX3oXxGWQ2Fen7tV+Z3f39b0TKpSaCWCtul46AVz2e/sJYO1da17cwtOYLg5kt/5ThrmV3frQcMvhXrk9H7TO2Mpc71OGVt842P9bgUs7E7nreUDNtB7soCRklIQ6x9y3LOmA13y/1VoubrCFqTVlZ0HFp7qwY6te1bN2FEPVb44ACnbHEPossMBufu/LDnMj0Vx9o15B4tX3EsLz/wE= -------------------------------------------------------------------------------- /requirements-dev.txt: -------------------------------------------------------------------------------- 1 | pytest==6.2.5 2 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | aws-cdk-lib>=2.87.0 2 | constructs==10.2.69 3 | cdk-nag==2.27.82 -------------------------------------------------------------------------------- /source.bat: -------------------------------------------------------------------------------- 1 | @echo off 2 | 3 | rem The sole purpose of this script is to make the command 4 | rem 5 | rem source .venv/bin/activate 6 | rem 7 | rem (which activates a Python virtualenv on Linux or Mac OS X) work on Windows. 8 | rem On Windows, this command just runs this batch file (the argument is ignored). 9 | rem 10 | rem Now we don't need to document a Windows command for activating a virtualenv. 11 | 12 | echo Executing .venv\Scripts\activate.bat for you 13 | .venv\Scripts\activate.bat 14 | --------------------------------------------------------------------------------