├── .gitignore ├── LICENSE ├── Makefile ├── README.md ├── cloudformation.yaml ├── hosts ├── reds ├── deps.zip └── reds.py ├── requirements.txt ├── templates ├── alarms.py ├── iam-assets.py ├── lambda-function.py └── s3.py ├── tests ├── test_alarms.yaml ├── test_reds.py ├── test_vars.yaml ├── test_vars_disabled.yaml └── test_vars_invalid_index.yaml └── vars.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | .coverage 2 | htmlcov 3 | .cache 4 | ._* 5 | .DS* 6 | json/ 7 | zip/ 8 | *.pyc 9 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2016 Dan B 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: prod test prep 2 | SHELL := /bin/bash 3 | 4 | export ANSIBLE_NOCOWS = 1 5 | 6 | prod: export ENVTYPE=prod 7 | prod: 8 | ansible-playbook -i hosts cloudformation.yaml -v 9 | 10 | prep: 11 | pip install -r requirements.txt 12 | 13 | test: 14 | py.test --cov-report html --cov=reds tests/ 15 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![GitHub Logo](http://i.imgur.com/IAMScGQ.png) 2 | # ReDS - ReActive Database System 3 | RDS automatic resizing and scheduled scale up/down 4 | 5 | ### Save up to 50% on RDS costs and save yourself a headache during traffic spikes 6 | 7 | Features: 8 | - Can be dropped into any project 9 | - Automatically upgrades T2 instances to M/R instances if they run low on CPU credits 10 | - Allows scheduled scaling up to a bigger DB during weekdays and back down at nights and on weekends (if enabled) 11 | - Outputs logs to CloudWatch Logs for review 12 | - Automatically increases/decreases capacity if CPU is too high/low 13 | - Creates multiple CloudFormation stacks to encapsulate the pieces 14 | - Built to AWS best practices in terms of security, IAM roles, etc. 15 | - Uses CloudWatch alarms (that it creates) to determine when it needs to scale 16 | - Configurable thresholds and cooldowns 17 | - Virtually no cost to operate (maybe 10 cents/month?) but can provide big savings over nights and weekends (e.g. m4.large -> t2.small in off times) 18 | 19 | Requirements: 20 | - Run **make prep** to install dependencies 21 | - Only works on multi-AZ RDS (single-AZ instances get taken offline during a resize - not good!)
22 | - Manually configure the repeating event source after install via the Lambda console - there is no automated way to do this yet, unfortunately, but it only needs to be done once (see below for instructions!) 23 | 24 | Tests: 25 | - Run **make test** 26 | - Check ./htmlcov/index.html for the coverage report 27 | 28 | Instructions: 29 | - Modify vars.yaml to meet your needs. 30 | - Set "rds_identifier" to the name of your RDS instance [Click to see a picture of where to get it](http://i.imgur.com/G6gRawE.png) 31 | - If you want to enable scheduled scaling: 32 | - set schedule_enabled: True 33 | - set the "cron" vars in UTC to represent the default start/stop times 34 | - the default cron vars are roughly 9a-5p M-F Pacific 35 | - set "scheduled_index" to the index (starting from 0) of the size in "instance_sizes" that you want to scale up to during those hours 36 | - After loading an AWS CLI profile that has admin access, run from the root folder: 37 | **make prod** to install 38 | - This will create all the stacks, buckets, Lambda functions, CloudWatch resources, IAM roles, etc. needed for this project 39 | 40 | - After creating, you will need to manually add the repeating "event source" in the Lambda console; there is no CloudFormation or AWS CLI command yet that can enable the repeated invocation of the Lambda function. A 5-minute repeat is recommended.
[Click for picture overview](http://imgur.com/a/RP2Jt) 41 | 42 | This runs approximately 300,000 sec/month of Lambda, well below the free tier for 128MB functions (3.2M sec), so it's free 43 | 44 | Logs: 45 | Sample log output: 46 | 47 | ``` 48 | START RequestId: 15e84aae-dc44-11e5-a5b6-b58f401b95b3 Version: $LATEST 49 | INFO: Startup Time: 2016-02-26 04:51:01.301149 50 | INFO: Configured instance sizes: ['db.t2.micro', 'db.t2.small', 'db.m3.medium', 'db.m4.large', 'db.m4.xlarge'] 51 | INFO: RDS rds-master size/status/MultiAZ: db.t2.small/available/True 52 | INFO: DB pointer (0-4) is currently on 1 53 | INFO: Scheduling not enabled 54 | INFO: Checking alarm statuses 55 | INFO: Low-CPU Alarm status is: ALARM 56 | INFO: Attempting scale down one size! 57 | INFO: Scaling to db.t2.micro 58 | INFO: cooldown period (minutes) for scale_down is 60 59 | INFO: Last finished modification 2016-02-26 04:20:07.665000+00:00 Diff: (Min, Sec): (30, 52) 60 | INFO: Not enough time has passed since last modification (60) 61 | NO_ACTION: scale_down Cooldown threshold not reached 62 | END RequestId: 15e84aae-dc44-11e5-a5b6-b58f401b95b3 63 | REPORT RequestId: 15e84aae-dc44-11e5-a5b6-b58f401b95b3 Duration: 1766.43 ms Billed Duration: 1800 ms Memory Size: 128 MB Max Memory Used: 29 MB 64 | ``` 65 | -------------------------------------------------------------------------------- /cloudformation.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: provision stack 3 | hosts: all 4 | connection: local 5 | 6 | vars: 7 | working_dirs: 8 | - json 9 | - tmp 10 | - lambda 11 | 12 | tasks: 13 | 14 | - name: include vars 15 | include_vars: vars.yaml 16 | 17 | - name: Create the temporary directories 18 | file: dest={{ item }} state=directory 19 | with_items: working_dirs 20 | 21 | - name: Generate CloudFormation templates from troposphere 22 | local_action: > 23 | shell python {{ item }} > {{ playbook_dir }}/json/{{ item | basename | replace('.py',
'.json') }} 24 | with_fileglob: 25 | - "{{ playbook_dir }}/templates/*.py" 26 | 27 | - name: Validate the cloudformation descriptions 28 | local_action: > 29 | shell aws cloudformation validate-template --region {{ region }} --template-body file://{{ item }} --output text 30 | with_fileglob: 31 | - "{{ playbook_dir }}/json/*.json" 32 | 33 | - name: set fact for time_token 34 | set_fact: time_token="{{ ansible_date_time.iso8601 }}" 35 | 36 | - name: Create Lambda Role 37 | cloudformation: > 38 | stack_name="{{ stack_prefix }}-role" 39 | state=present 40 | region="{{region}}" 41 | template={{ playbook_dir }}/json/iam-assets.json 42 | args: 43 | template_parameters: 44 | RdsInstance: "{{ rds_identifier }}" 45 | register: iam 46 | 47 | - name: Create Lambda Bucket 48 | cloudformation: > 49 | stack_name="{{ stack_prefix }}-bucket" 50 | state=present 51 | region="{{region}}" 52 | template={{ playbook_dir }}/json/s3.json 53 | register: s3 54 | 55 | - name: Create Alarms 56 | cloudformation: > 57 | stack_name="{{ stack_prefix }}-alarms" 58 | state=present 59 | region="{{region}}" 60 | template={{ playbook_dir }}/json/alarms.json 61 | args: 62 | template_parameters: 63 | RdsInstance: "{{ rds_identifier }}" 64 | UpThreshold: "{{ scale_up['threshold'] }}" 65 | UpEvaluations: "{{ scale_up['alarm_duration'] }}" 66 | DownThreshold: "{{ scale_down['threshold'] }}" 67 | DownEvaluations: "{{ scale_down['alarm_duration'] }}" 68 | CreditThreshold: "{{ credits['threshold'] }}" 69 | CreditEvaluations: "{{ credits['alarm_duration'] }}" 70 | register: alarms 71 | 72 | - name: Write output of the alarm names to vars 73 | copy: content="---\nalarm_high{{ ':' }} {{ alarms['stack_outputs']['UpAlarm'] }}\nalarm_low{{ ':' }} {{ alarms['stack_outputs']['DownAlarm'] }}\nalarm_credits{{ ':' }} {{ alarms['stack_outputs']['CreditLowAlarm'] }}\n..." 
dest="{{ playbook_dir }}/tmp/alarms.yaml" 74 | 75 | - name: Unzip deps 76 | unarchive: src={{ playbook_dir }}/reds/deps.zip dest={{ playbook_dir }}/lambda 77 | 78 | - name: Copy script into working folder 79 | local_action: > 80 | shell cp {{ playbook_dir }}/reds/reds.py {{ playbook_dir }}/lambda/reds.py 81 | 82 | - name: Create local zip 83 | local_action: > 84 | shell cd {{ playbook_dir }}/lambda && zip -r {{ playbook_dir }}/tmp/reds-{{ time_token }}.zip ./* && zip -j {{ playbook_dir }}/tmp/reds-{{ time_token }}.zip {{ playbook_dir }}/vars.yaml {{ playbook_dir }}/tmp/alarms.yaml 85 | 86 | - name: Upload python files to S3 bucket 87 | local_action: > 88 | shell aws s3 cp {{ playbook_dir }}/tmp/reds-{{ time_token }}.zip s3://{{ s3['stack_outputs']['BucketName'] }} 89 | 90 | - name: Create Lambda Function 91 | cloudformation: > 92 | stack_name="{{ stack_prefix }}-lambda" 93 | state=present 94 | region="{{region}}" 95 | template={{ playbook_dir }}/json/lambda-function.json 96 | args: 97 | template_parameters: 98 | TimeToken: "{{ time_token }}" 99 | BucketName: "{{ s3['stack_outputs']['BucketName'] }}" 100 | RdsInstance: "{{ rds_identifier }}" 101 | LambdaRole: "{{ iam['stack_outputs']['LambdaRole'] }}" 102 | register: lambda 103 | 104 | - name: Clean up after ourselves 105 | file: path="{{ playbook_dir }}/{{ item }}" state=absent 106 | with_items: working_dirs 107 | 108 | - name: "Add reminder to manually set the recurring " 109 | debug: msg="~~~ Reminder!! - Go set the recurring lambda call manually - https://github.com/mediatemple/ReDS ~~~" 110 | with_sequence: start=0 end=3 111 | ... 
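The playbook's troposphere render step above (run each templates/*.py and capture its printed JSON into json/) can be mirrored in plain Python if you want to generate the templates without Ansible. A minimal sketch - the function name and default paths are assumptions for illustration, not part of the repo:

```python
import glob
import os
import subprocess
import sys

def render_templates(src_glob="templates/*.py", out_dir="json"):
    """Run each troposphere template and capture the JSON it prints,
    mirroring the playbook's 'Generate CloudFormation templates' task."""
    os.makedirs(out_dir, exist_ok=True)
    written = []
    for template in sorted(glob.glob(src_glob)):
        # templates/alarms.py -> json/alarms.json, etc.
        name = os.path.basename(template).replace(".py", ".json")
        out_path = os.path.join(out_dir, name)
        with open(out_path, "wb") as fh:
            fh.write(subprocess.check_output([sys.executable, template]))
        written.append(out_path)
    return written
```

Each generated file could then be fed to `aws cloudformation validate-template`, as the playbook's next task does.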
112 | -------------------------------------------------------------------------------- /hosts: -------------------------------------------------------------------------------- 1 | 127.0.0.1 ansible_python_interpreter="/usr/bin/env python" 2 | -------------------------------------------------------------------------------- /reds/deps.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/slikk66/ReDS/31c995a609510814ae25f56e49c10cea81905ad5/reds/deps.zip -------------------------------------------------------------------------------- /reds/reds.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import yaml 3 | import datetime 4 | import croniter 5 | import pytz 6 | 7 | 8 | class reds: 9 | 10 | def __init__(self): 11 | self.rds_client = None 12 | self.cloudwatch_client = None 13 | self.now = None 14 | self.vars = None 15 | self.alarms = None 16 | self.in_scheduled_up = None 17 | self.details = None 18 | self.alarm_status = None 19 | self.execute = None 20 | self.events = None 21 | self.result = None 22 | self.on_index = None 23 | 24 | def lambda_startup(self): 25 | self.rds_client = boto3.client('rds') 26 | self.cloudwatch_client = boto3.client('cloudwatch') 27 | 28 | self.now = datetime.datetime.utcnow() 29 | 30 | self.set_vars( 31 | yaml.load(file('vars.yaml')), yaml.load(file('alarms.yaml'))) 32 | 33 | self.process( 34 | self.rds_client.describe_db_instances( 35 | DBInstanceIdentifier=self.vars['rds_identifier'] 36 | )['DBInstances'][0], 37 | self.cloudwatch_client.describe_alarms( 38 | AlarmNames=[ 39 | self.alarms['alarm_high'], 40 | self.alarms['alarm_low'], 41 | self.alarms['alarm_credits'], 42 | ]), 43 | self.rds_client.describe_events( 44 | SourceIdentifier=self.vars['rds_identifier'], 45 | SourceType="db-instance", 46 | EventCategories=[ 47 | "configuration change" 48 | ]) 49 | ) 50 | 51 | def testing_startup(self, _vars, _alarms, 
_details, _alarm_status, _events): 52 | self.now = datetime.datetime.utcnow() 53 | self.set_vars(_vars, _alarms) 54 | self.process(_details, _alarm_status, _events, False) 55 | self.print_logs() 56 | 57 | def set_vars(self, _vars, _alarms): 58 | self.vars = _vars 59 | self.alarms = _alarms 60 | 61 | def abort(self, msg): 62 | self.result['Action'] = 'NO_ACTION' 63 | self.result['Message'] = msg 64 | self.result['Logs'].append( 65 | "{}: {}".format(self.result['Action'], self.result['Message'])) 66 | return self.result 67 | 68 | def info(self, msg): 69 | self.result['Logs'].append("{}: {}".format("INFO", msg)) 70 | 71 | def success(self, msg): 72 | self.result['Action'] = 'RESIZE' 73 | self.result['Message'] = msg 74 | self.result['Logs'].append( 75 | "{}: {}".format(self.result['Action'], self.result['Message'])) 76 | return self.result 77 | 78 | def process(self, _details, _alarm_status, _events, _execute=True): 79 | self.in_scheduled_up = False 80 | 81 | self.result = { 82 | "Action": None, 83 | "Message": None, 84 | "Logs": [] 85 | } 86 | 87 | self.details = _details 88 | self.alarm_status = _alarm_status 89 | self.events = _events 90 | self.execute = _execute 91 | 92 | self.info("Startup Time: {}".format(self.now.utcnow())) 93 | self.info("Configured instance sizes: {}".format( 94 | self.vars['instance_sizes'])) 95 | 96 | self.info("RDS {} size/status/MultiAZ: {}/{}/{}".format( 97 | self.vars['rds_identifier'], self.details['DBInstanceClass'], 98 | self.details['DBInstanceStatus'], self.details['MultiAZ'])) 99 | 100 | if self.details['DBInstanceStatus'] != 'available': 101 | return self.abort("In middle of an operation already!") 102 | 103 | if not self.details['MultiAZ']: 104 | return self.abort("Unable to work on singleAZ RDS!") 105 | 106 | try: 107 | self.on_index = self.vars['instance_sizes'].index( 108 | self.details['DBInstanceClass']) 109 | self.info("DB pointer (0-{}) is currently on {}".format( 110 | len(self.vars['instance_sizes'])-1, self.on_index)) 
111 | except ValueError: 112 | return self.abort("Instance size not in list!") 113 | 114 | if self.vars['schedule_enabled']: 115 | self.info("Checking to see if we are in a scheduled uptime") 116 | 117 | cron_up = croniter.croniter( 118 | self.vars['scale_up']['cron'], self.now) 119 | prev_up_exec = cron_up.get_prev(datetime.datetime) 120 | next_up_exec = cron_up.get_next(datetime.datetime) 121 | 122 | cron_down = croniter.croniter( 123 | self.vars['scale_down']['cron'], self.now) 124 | prev_down_exec = cron_down.get_prev(datetime.datetime) 125 | next_down_exec = cron_down.get_next(datetime.datetime) 126 | 127 | self.info("Prev CRON Up {}".format(prev_up_exec)) 128 | self.info("Prev CRON Down {}".format(prev_down_exec)) 129 | self.info("Next CRON Up {}".format(next_up_exec)) 130 | self.info("Next CRON Down {}".format(next_down_exec)) 131 | 132 | self.info("Current Time {}".format(self.now)) 133 | 134 | if prev_down_exec < prev_up_exec < next_down_exec < next_up_exec \ 135 | and prev_up_exec < self.now < next_down_exec: 136 | self.in_scheduled_up = True 137 | self.info("In middle of a scheduled uptime! 
" + 138 | "PrevUp/Current/NextDown {}/{}/{}".format( 139 | prev_up_exec, 140 | self.now, 141 | next_down_exec 142 | )) 143 | try: 144 | up_db = self.vars['instance_sizes'][ 145 | self.vars['scheduled_index']] 146 | except IndexError: 147 | return self.abort("invalid scheduled_index") 148 | self.info("Min allowed instance size is: {}".format(up_db)) 149 | if self.on_index < self.vars['scheduled_index']: 150 | self.info("Running scheduled scale up to {}".format( 151 | self.vars['instance_sizes'][self.vars['scheduled_index']])) 152 | return self.scale('scale_up', self.vars['scheduled_index']) 153 | else: 154 | self.info("Not in middle of a scheduled uptime") 155 | else: 156 | self.info("Scheduling not enabled") 157 | 158 | self.info("Checking alarm statuses") 159 | 160 | if self.details['DBInstanceClass'].startswith('db.t') and \ 161 | self.alarm_status['MetricAlarms'][2]['StateValue'] == 'ALARM': 162 | self.info("CPU-Credit-Low Alarm status is: ALARM") 163 | self.info("Attempting scale up to next non (T) instance") 164 | for dbtype in self.vars['instance_sizes'][int(self.on_index+1):]: 165 | if not dbtype.startswith('db.t'): 166 | return self.scale('credits', self.vars['instance_sizes'].index(dbtype)) 167 | self.info("No non-T instance found above current size!") 168 | 169 | if self.alarm_status['MetricAlarms'][0]['StateValue'] == 'ALARM': 170 | self.info("High-CPU Alarm status is: ALARM") 171 | self.info("Attempting scale up one size!") 172 | return self.scale('scale_up', int(self.on_index+1)) 173 | 174 | if self.alarm_status['MetricAlarms'][1]['StateValue'] == 'ALARM': 175 | self.info("Low-CPU Alarm status is: ALARM") 176 | self.info("Attempting scale down one size!") 177 | return self.scale('scale_down', int(self.on_index-1)) 178 | 179 | return self.abort("Nothing to do") 180 | 181 | def assert_cooldown_expired(self, reason): 182 | cooldown = self.vars[reason]['cooldown'] 183 | self.info( 184 | "cooldown period (minutes) for {} is {}".format(reason, cooldown)) 185 
| for mod in self.events['Events'][::-1]: 186 | if mod['Message'].startswith("Finished applying modification to DB instance class"): 187 | delta_time = self.now.replace(tzinfo=pytz.utc) - mod['Date'] 188 | delta_time_calculated = divmod( 189 | delta_time.days * 86400 + delta_time.seconds, 60) 190 | self.info("Last finished modification {} Diff: (Min, Sec): {}".format( 191 | mod['Date'], delta_time_calculated)) 192 | if delta_time_calculated[0] < cooldown: 193 | self.info("Not enough time has passed since last modification ({})".format( 194 | cooldown)) 195 | return False 196 | break 197 | return True 198 | 199 | def scale(self, reason, to_index=None): 200 | if not self.vars['enabled']: 201 | return self.abort("Resizing disabled") 202 | if 0 <= to_index < len(self.vars['instance_sizes']): 203 | if self.in_scheduled_up and to_index < self.vars['scheduled_index']: 204 | return self.abort("Already at bottom for size during scheduled scale up") 205 | self.info( 206 | "Scaling to {}".format(self.vars['instance_sizes'][to_index])) 207 | if not self.assert_cooldown_expired(reason): 208 | return self.abort("{} Cooldown threshold not reached".format(reason)) 209 | if self.execute: 210 | self.info("Executing scale command") 211 | else: 212 | self.info("Skipping scale command per passed in param") 213 | return self.success(self.vars['instance_sizes'][to_index]) 214 | else: 215 | return self.abort("Unable to scale - invalid to_index: {}".format(to_index)) 216 | 217 | def lambda_apply_action(self): 218 | if self.result['Action'] == 'RESIZE': 219 | amz_res = self.rds_client.modify_db_instance( 220 | DBInstanceIdentifier=self.vars['rds_identifier'], 221 | DBInstanceClass=self.result['Message'], 222 | ApplyImmediately=True) 223 | self.info("AMZ response {}".format(amz_res)) 224 | 225 | def print_logs(self): 226 | for log in self.result['Logs']: 227 | print(log) 228 | 229 | 230 | def lambda_handler(context, event): 231 | red = reds() 232 | red.lambda_startup() 233 | 
red.lambda_apply_action() 234 | red.print_logs() 235 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | ansible 2 | awscli 3 | freezegun 4 | datetime 5 | pyyaml 6 | pytest 7 | boto3 8 | pytz 9 | croniter 10 | pytest-cov -------------------------------------------------------------------------------- /templates/alarms.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from troposphere import Template, Parameter, Ref, Output 4 | from troposphere.cloudwatch import Alarm, MetricDimension 5 | 6 | t = Template() 7 | 8 | rds_instance = t.add_parameter(Parameter( 9 | 'RdsInstance', 10 | Type='String', 11 | Description='Instance to monitor' 12 | )) 13 | 14 | up_threshold = t.add_parameter( 15 | Parameter( 16 | 'UpThreshold', 17 | Type='String' 18 | ) 19 | ) 20 | 21 | up_evaluations = t.add_parameter( 22 | Parameter( 23 | 'UpEvaluations', 24 | Type='String' 25 | ) 26 | ) 27 | 28 | down_threshold = t.add_parameter( 29 | Parameter( 30 | 'DownThreshold', 31 | Type='String' 32 | ) 33 | ) 34 | 35 | down_evaluations = t.add_parameter( 36 | Parameter( 37 | 'DownEvaluations', 38 | Type='String' 39 | ) 40 | ) 41 | 42 | credit_threshold = t.add_parameter( 43 | Parameter( 44 | 'CreditThreshold', 45 | Type='String' 46 | ) 47 | ) 48 | 49 | credit_evaluations = t.add_parameter( 50 | Parameter( 51 | 'CreditEvaluations', 52 | Type='String' 53 | ) 54 | ) 55 | 56 | high_cpu_alarm = t.add_resource( 57 | Alarm( 58 | "ReDSAlarmHigh", 59 | AlarmDescription="CPU High Alarm", 60 | Namespace="AWS/RDS", 61 | MetricName="CPUUtilization", 62 | Statistic="Average", 63 | Period=60, 64 | Dimensions=[ 65 | MetricDimension( 66 | Name="DBInstanceIdentifier", 67 | Value=Ref(rds_instance) 68 | ) 69 | ], 70 | EvaluationPeriods=Ref(up_evaluations), 71 | Threshold=Ref(up_threshold), 72 | 
ComparisonOperator="GreaterThanOrEqualToThreshold", 73 | AlarmActions=[], 74 | InsufficientDataActions=[], 75 | OKActions=[], 76 | ) 77 | ) 78 | 79 | low_cpu_alarm = t.add_resource( 80 | Alarm( 81 | "ReDSAlarmLow", 82 | AlarmDescription="CPU Low Alarm", 83 | Namespace="AWS/RDS", 84 | MetricName="CPUUtilization", 85 | Statistic="Average", 86 | Period=60, 87 | Dimensions=[ 88 | MetricDimension( 89 | Name="DBInstanceIdentifier", 90 | Value=Ref(rds_instance) 91 | ) 92 | ], 93 | EvaluationPeriods=Ref(down_evaluations), 94 | Threshold=Ref(down_threshold), 95 | ComparisonOperator="LessThanOrEqualToThreshold", 96 | AlarmActions=[], 97 | InsufficientDataActions=[], 98 | OKActions=[], 99 | ) 100 | ) 101 | 102 | low_credit_alarm = t.add_resource( 103 | Alarm( 104 | "ReDSNoCredits", 105 | AlarmDescription="CPU Credits Exhausted Alarm", 106 | Namespace="AWS/RDS", 107 | MetricName="CPUCreditBalance", 108 | Statistic="Maximum", 109 | Period=60, 110 | Dimensions=[ 111 | MetricDimension( 112 | Name="DBInstanceIdentifier", 113 | Value=Ref(rds_instance) 114 | ) 115 | ], 116 | EvaluationPeriods=Ref(credit_evaluations), 117 | Threshold=Ref(credit_threshold), 118 | ComparisonOperator="LessThanOrEqualToThreshold", 119 | AlarmActions=[], 120 | InsufficientDataActions=[], 121 | OKActions=[], 122 | ) 123 | ) 124 | 125 | t.add_output([ 126 | Output( 127 | 'UpAlarm', 128 | Description='Alarm name for up/high', 129 | Value=Ref(high_cpu_alarm) 130 | ), 131 | Output( 132 | 'DownAlarm', 133 | Description='Alarm name for down/low', 134 | Value=Ref(low_cpu_alarm) 135 | ), 136 | Output( 137 | 'CreditLowAlarm', 138 | Description='Alarm name for credits out', 139 | Value=Ref(low_credit_alarm) 140 | ) 141 | ]) 142 | 143 | if __name__ == '__main__': 144 | print t.to_json() 145 | -------------------------------------------------------------------------------- /templates/iam-assets.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from 
troposphere import Template, Parameter, iam, Join, Ref, Output 4 | from awacs.aws import Action, Allow, Policy, Principal, Statement 5 | 6 | t = Template() 7 | 8 | rds_instance = t.add_parameter(Parameter( 9 | 'RdsInstance', 10 | Type='String', 11 | Description='Instance to monitor' 12 | )) 13 | 14 | role = t.add_resource(iam.Role( 15 | "lambdarole", 16 | AssumeRolePolicyDocument=Policy( 17 | Statement=[ 18 | Statement( 19 | Effect=Allow, 20 | Principal=Principal('Service', 'lambda.amazonaws.com'), 21 | Action=[Action('sts', 'AssumeRole')] 22 | ) 23 | ] 24 | ), 25 | Path='/', 26 | Policies=[iam.Policy( 27 | 'lambdapolicy', 28 | PolicyName='lambdapolicy', 29 | PolicyDocument=Policy( 30 | Statement=[ 31 | Statement( 32 | Effect=Allow, 33 | Action=[ 34 | Action('rds', 'DescribeDBInstances'), 35 | Action('rds', 'DescribeEvents'), 36 | ], 37 | Resource=[ 38 | "*" 39 | ] 40 | ), 41 | Statement( 42 | Effect=Allow, 43 | Action=[ 44 | Action('rds', 'ModifyDBInstance'), 45 | ], 46 | Resource=[ 47 | Join('', ['arn:aws:rds:',Ref("AWS::Region"),':',Ref("AWS::AccountId"),':db:',Ref(rds_instance)]), 48 | ] 49 | ), 50 | Statement( 51 | Effect=Allow, 52 | Action=[ 53 | Action('cloudwatch', 'DescribeAlarms'), 54 | ], 55 | Resource=[ 56 | "*" 57 | ] 58 | ), 59 | Statement( 60 | Effect=Allow, 61 | Action=[ 62 | Action('logs', 'CreateLogGroup'), 63 | Action('logs', 'CreateLogStream'), 64 | Action('logs', 'PutLogEvents'), 65 | ], 66 | Resource=["arn:aws:logs:*:*:*"] 67 | ) 68 | ] 69 | ) 70 | )] 71 | )) 72 | 73 | t.add_output([ 74 | Output( 75 | 'LambdaRole', 76 | Description='ReDS Lambda Role ARN', 77 | Value=Ref(role), 78 | ) 79 | ]) 80 | 81 | if __name__ == '__main__': 82 | print t.to_json() 83 | -------------------------------------------------------------------------------- /templates/lambda-function.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | from troposphere import Template, Parameter, awslambda, Join, Ref, 
Output 3 | t = Template() 4 | 5 | rds_instance = t.add_parameter(Parameter( 6 | 'RdsInstance', 7 | Type='String', 8 | Description='Instance to monitor' 9 | )) 10 | 11 | lambda_role = t.add_parameter(Parameter( 12 | 'LambdaRole', 13 | Type='String', 14 | Description='Lambda Role' 15 | )) 16 | 17 | bucket_name = t.add_parameter(Parameter( 18 | 'BucketName', 19 | Type='String', 20 | Description='Lambda Code Bucket' 21 | )) 22 | 23 | time_token = t.add_parameter(Parameter( 24 | 'TimeToken', 25 | Type='String', 26 | Description='Time Token for last upload' 27 | )) 28 | 29 | lambda_function = t.add_resource( 30 | awslambda.Function( 31 | "reds", 32 | Code=awslambda.Code( 33 | S3Bucket=Ref(bucket_name), 34 | S3Key=Join("",["reds-",Ref(time_token),".zip"]) 35 | ), 36 | Handler="reds.lambda_handler", 37 | MemorySize=128, 38 | Role=Join('',['arn:aws:iam::',Ref("AWS::AccountId"),':role/',Ref(lambda_role)]), 39 | Runtime="python2.7", 40 | Timeout=30 41 | ) 42 | ) 43 | 44 | t.add_output([ 45 | Output( 46 | 'LambdaFunction', 47 | Description='ReDS Lambda Function', 48 | Value=Ref(lambda_function), 49 | ) 50 | ]) 51 | 52 | if __name__ == '__main__': 53 | print t.to_json() 54 | -------------------------------------------------------------------------------- /templates/s3.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from troposphere import Output, Template, Ref 4 | from troposphere.s3 import Bucket, Private 5 | 6 | t = Template() 7 | 8 | s3bucket = t.add_resource(Bucket( 9 | "code", 10 | AccessControl=Private 11 | )) 12 | 13 | t.add_output([ 14 | Output( 15 | "BucketName", 16 | Value=Ref(s3bucket), 17 | Description="ID of Bucket without any DNS" 18 | ) 19 | ]) 20 | 21 | if __name__ == '__main__': 22 | print t.to_json() 23 | -------------------------------------------------------------------------------- /tests/test_alarms.yaml: -------------------------------------------------------------------------------- 1 
| --- 2 | alarm_high: reds-alarms-ReDSAlarmHigh 3 | alarm_low: reds-alarms-ReDSAlarmLow 4 | alarm_credits: reds-alarms-ReDSNoCredits 5 | ... -------------------------------------------------------------------------------- /tests/test_reds.py: -------------------------------------------------------------------------------- 1 | import imp 2 | import yaml 3 | import datetime 4 | import pytz 5 | from freezegun import freeze_time 6 | 7 | reds = imp.load_source('reds', './reds/reds.py') 8 | a = reds.reds() 9 | 10 | #details 11 | unsupported_multi_az_available = { 12 | "MultiAZ": True, 13 | "DBInstanceStatus": "available", 14 | "DBInstanceClass": "db.r3.large", 15 | } 16 | micro_single_az_available = { 17 | "MultiAZ": False, 18 | "DBInstanceStatus": "available", 19 | "DBInstanceClass": "db.t2.micro", 20 | } 21 | micro_multi_az_available = { 22 | "MultiAZ": True, 23 | "DBInstanceStatus": "available", 24 | "DBInstanceClass": "db.t2.micro", 25 | } 26 | micro_multi_az_modifying = { 27 | "MultiAZ": True, 28 | "DBInstanceStatus": "modifying", 29 | "DBInstanceClass": "db.t2.micro", 30 | } 31 | medium_multi_az_available = { 32 | "MultiAZ": True, 33 | "DBInstanceStatus": "available", 34 | "DBInstanceClass": "db.m3.medium", 35 | } 36 | m4xl_multi_az_available = { 37 | "MultiAZ": True, 38 | "DBInstanceStatus": "available", 39 | "DBInstanceClass": "db.m4.xlarge", 40 | } 41 | #alarm_status 42 | high_cpu = { 43 | "MetricAlarms": [{ 44 | "AlarmDescription": "CPU High Alarm", 45 | "StateValue": "ALARM", 46 | }, { 47 | "AlarmDescription": "CPU Low Alarm", 48 | "StateValue": "OK", 49 | }, { 50 | "AlarmDescription": "CPU Credits Exhausted Alarm", 51 | "StateValue": "OK", 52 | }] 53 | } 54 | low_cpu = { 55 | "MetricAlarms": [{ 56 | "AlarmDescription": "CPU High Alarm", 57 | "StateValue": "OK", 58 | }, { 59 | "AlarmDescription": "CPU Low Alarm", 60 | "StateValue": "ALARM", 61 | }, { 62 | "AlarmDescription": "CPU Credits Exhausted Alarm", 63 | "StateValue": "OK", 64 | }] 65 | } 66 | 
credits_low = {
67 |     "MetricAlarms": [{
68 |         "AlarmDescription": "CPU High Alarm",
69 |         "StateValue": "OK",
70 |     }, {
71 |         "AlarmDescription": "CPU Low Alarm",
72 |         "StateValue": "OK",
73 |     }, {
74 |         "AlarmDescription": "CPU Credits Exhausted Alarm",
75 |         "StateValue": "ALARM",
76 |     }]
77 | }
78 | high_cpu_credits_low = {
79 |     "MetricAlarms": [{
80 |         "AlarmDescription": "CPU High Alarm",
81 |         "StateValue": "ALARM",
82 |     }, {
83 |         "AlarmDescription": "CPU Low Alarm",
84 |         "StateValue": "OK",
85 |     }, {
86 |         "AlarmDescription": "CPU Credits Exhausted Alarm",
87 |         "StateValue": "ALARM",
88 |     }]
89 | }
90 | no_alarm = {
91 |     "MetricAlarms": [{
92 |         "AlarmDescription": "CPU High Alarm",
93 |         "StateValue": "OK",
94 |     }, {
95 |         "AlarmDescription": "CPU Low Alarm",
96 |         "StateValue": "OK",
97 |     }, {
98 |         "AlarmDescription": "CPU Credits Exhausted Alarm",
99 |         "StateValue": "OK",
100 |     }]
101 | }
102 | # RDS event fixtures
103 | friday_midday_update = {
104 |     "Events": [{
105 |         "Date": datetime.datetime(2016, 1, 1, 18, 0, 0, 0, tzinfo=pytz.utc),
106 |         "Message": "Applying modification to database instance class",
107 |         "SourceIdentifier": "mixhop-rds-master",
108 |         "EventCategories": ["configuration change"],
109 |         "SourceType": "db-instance"
110 |     },
111 |     {
112 |         "Date": datetime.datetime(2016, 1, 1, 18, 30, 0, 0, tzinfo=pytz.utc),
113 |         "Message": "Finished applying modification to DB instance class",
114 |         "SourceIdentifier": "mixhop-rds-master",
115 |         "EventCategories": ["configuration change"],
116 |         "SourceType": "db-instance"
117 |     }]
118 | }
119 | saturday_midday_update = {
120 |     "Events": [{
121 |         "Date": datetime.datetime(2016, 1, 2, 18, 0, 0, 0, tzinfo=pytz.utc),
122 |         "Message": "Applying modification to database instance class",
123 |         "SourceIdentifier": "mixhop-rds-master",
124 |         "EventCategories": ["configuration change"],
125 |         "SourceType": "db-instance"
126 |     },
127 |     {
128 |         "Date": datetime.datetime(2016, 1, 2, 18, 30, 0, 0, tzinfo=pytz.utc),
129 |         "Message": "Finished applying modification to DB instance class",
130 |         "SourceIdentifier": "mixhop-rds-master",
131 |         "EventCategories": ["configuration change"],
132 |         "SourceType": "db-instance"
133 |     }]
134 | }
135 | no_update = {
136 |     "Events": []
137 | }
138 | 
139 | def get_vars(extra=None):
140 |     suffix = extra if extra else ""
141 |     return [yaml.safe_load(open("./tests/test_vars{}.yaml".format(suffix))), yaml.safe_load(open('./tests/test_alarms.yaml'))]
142 | 
143 | @freeze_time("2016-01-01 19:30:00", tz_offset=0)
144 | def test_noop_on_single_az():
145 |     test_yaml = get_vars()
146 |     a.testing_startup(test_yaml[0], test_yaml[1],
147 |                       micro_single_az_available, high_cpu, friday_midday_update)
148 |     assert(a.result['Action'] == 'NO_ACTION')
149 |     assert(a.result['Message'] == 'Unable to work on singleAZ RDS!')
150 | 
151 | @freeze_time("2016-01-02 19:30:00", tz_offset=0)
152 | def test_increase_on_high_alarm():
153 |     test_yaml = get_vars()
154 |     a.testing_startup(test_yaml[0], test_yaml[1],
155 |                       micro_multi_az_available, high_cpu, friday_midday_update)
156 |     assert(a.result['Action'] == 'RESIZE')
157 |     assert(a.result['Message'] == 'db.t2.small')
158 | 
159 | @freeze_time("2016-01-02 19:30:00", tz_offset=0)
160 | def test_decrease_on_low_alarm():
161 |     test_yaml = get_vars()
162 |     a.testing_startup(test_yaml[0], test_yaml[1],
163 |                       medium_multi_az_available, low_cpu, friday_midday_update)
164 |     assert(a.result['Action'] == 'RESIZE')
165 |     assert(a.result['Message'] == 'db.t2.small')
166 | 
167 | @freeze_time("2016-01-02 19:30:00", tz_offset=0)
168 | def test_noop_on_low_alarm_at_bottom():
169 |     test_yaml = get_vars()
170 |     a.testing_startup(test_yaml[0], test_yaml[1],
171 |                       micro_multi_az_available, low_cpu, friday_midday_update)
172 |     assert(a.result['Action'] == 'NO_ACTION')
173 |     assert(a.result['Message'] == 'Unable to scale - invalid to_index: -1')
174 | 
175 | @freeze_time("2016-01-02 19:30:00", tz_offset=0)
176 | def test_noop_on_high_alarm_at_top():
177 |     test_yaml = get_vars()
178 |     a.testing_startup(test_yaml[0], test_yaml[1],
179 |                       m4xl_multi_az_available, high_cpu, friday_midday_update)
180 |     assert(a.result['Action'] == 'NO_ACTION')
181 |     assert(a.result['Message'] == 'Unable to scale - invalid to_index: 5')
182 | 
183 | @freeze_time("2016-01-02 19:30:00", tz_offset=0)
184 | def test_credits_processed_before_low_and_multiple_index_up_jump():
185 |     test_yaml = get_vars()
186 |     a.testing_startup(test_yaml[0], test_yaml[1],
187 |                       micro_multi_az_available, high_cpu_credits_low, friday_midday_update)
188 |     assert(a.result['Action'] == 'RESIZE')
189 |     assert(a.result['Message'] == 'db.m3.medium')
190 | 
191 | @freeze_time("2016-01-01 19:30:00", tz_offset=0)
192 | def test_in_scheduled_scale_out_micro_to_medium_no_alarm():
193 |     test_yaml = get_vars()
194 |     a.testing_startup(test_yaml[0], test_yaml[1],
195 |                       micro_multi_az_available, no_alarm, no_update)
196 |     assert(a.result['Action'] == 'RESIZE')
197 |     assert(a.result['Message'] == 'db.m3.medium')
198 | 
199 | @freeze_time("2016-01-01 19:30:00", tz_offset=0)
200 | def test_in_scheduled_scale_out_medium_to_large_high_alarm():
201 |     test_yaml = get_vars()
202 |     a.testing_startup(test_yaml[0], test_yaml[1],
203 |                       medium_multi_az_available, high_cpu, no_update)
204 |     assert(a.result['Action'] == 'RESIZE')
205 |     assert(a.result['Message'] == 'db.m4.large')
206 | 
207 | @freeze_time("2016-01-02 19:30:00", tz_offset=0)
208 | def test_out_scheduled_scale_down_medium_to_small_low_alarm():
209 |     test_yaml = get_vars()
210 |     a.testing_startup(test_yaml[0], test_yaml[1],
211 |                       medium_multi_az_available, low_cpu, no_update)
212 |     assert(a.result['Action'] == 'RESIZE')
213 |     assert(a.result['Message'] == 'db.t2.small')
214 | 
215 | @freeze_time("2016-01-02 18:45:00", tz_offset=0)
216 | def test_blocked_recent_down():
217 |     test_yaml = get_vars()
218 |     a.testing_startup(test_yaml[0], test_yaml[1],
219 |                       medium_multi_az_available, low_cpu, saturday_midday_update)
220 |     assert(a.result['Action'] == 'NO_ACTION')
221 |     assert(a.result['Message'] == 'scale_down Cooldown threshold not reached')
222 | 
223 | @freeze_time("2016-01-02 18:35:00", tz_offset=0)
224 | def test_blocked_recent_up():
225 |     test_yaml = get_vars()
226 |     a.testing_startup(test_yaml[0], test_yaml[1],
227 |                       medium_multi_az_available, high_cpu, saturday_midday_update)
228 |     assert(a.result['Action'] == 'NO_ACTION')
229 |     assert(a.result['Message'] == 'scale_up Cooldown threshold not reached')
230 | 
231 | @freeze_time("2016-01-02 18:55:00", tz_offset=0)
232 | def test_allow_recent_up():
233 |     test_yaml = get_vars()
234 |     a.testing_startup(test_yaml[0], test_yaml[1],
235 |                       medium_multi_az_available, high_cpu, saturday_midday_update)
236 |     assert(a.result['Action'] == 'RESIZE')
237 |     assert(a.result['Message'] == 'db.m4.large')
238 | 
239 | @freeze_time("2016-01-02 18:45:00", tz_offset=0)
240 | def test_blocked_busy():
241 |     test_yaml = get_vars()
242 |     a.testing_startup(test_yaml[0], test_yaml[1],
243 |                       micro_multi_az_modifying, {}, {})
244 |     assert(a.result['Action'] == 'NO_ACTION')
245 |     assert(a.result['Message'] == 'In middle of an operation already!')
246 | 
247 | @freeze_time("2016-01-02 18:45:00", tz_offset=0)
248 | def test_blocked_unsupported():
249 |     test_yaml = get_vars()
250 |     a.testing_startup(test_yaml[0], test_yaml[1],
251 |                       unsupported_multi_az_available, {}, {})
252 |     assert(a.result['Action'] == 'NO_ACTION')
253 |     assert(a.result['Message'] == 'Instance size not in list!')
254 | 
255 | @freeze_time("2016-01-02 18:45:00", tz_offset=0)
256 | def test_scale_up_on_credit_low():
257 |     test_yaml = get_vars()
258 |     a.testing_startup(test_yaml[0], test_yaml[1],
259 |                       micro_multi_az_available, credits_low, friday_midday_update)
260 |     assert(a.result['Action'] == 'RESIZE')
261 |     assert(a.result['Message'] == 'db.m3.medium')
262 | 
263 | @freeze_time("2016-01-02 18:45:00", tz_offset=0)
264 | def test_nothing_to_do():
265 |     test_yaml = get_vars()
266 |     a.testing_startup(test_yaml[0], test_yaml[1],
267 |                       micro_multi_az_available, no_alarm, no_update)
268 |     assert(a.result['Action'] == 'NO_ACTION')
269 |     assert(a.result['Message'] == 'Nothing to do')
270 | 
271 | @freeze_time("2016-01-02 18:45:00", tz_offset=0)
272 | def test_scaling_disabled_scale_up():
273 |     test_yaml = get_vars('_disabled')
274 |     a.testing_startup(test_yaml[0], test_yaml[1],
275 |                       micro_multi_az_available, high_cpu, no_update)
276 |     assert(a.result['Action'] == 'NO_ACTION')
277 |     assert(a.result['Message'] == 'Resizing disabled')
278 | 
279 | @freeze_time("2016-01-02 18:45:00", tz_offset=0)
280 | def test_scaling_disabled_scale_down():
281 |     test_yaml = get_vars('_disabled')
282 |     a.testing_startup(test_yaml[0], test_yaml[1],
283 |                       medium_multi_az_available, low_cpu, no_update)
284 |     assert(a.result['Action'] == 'NO_ACTION')
285 |     assert(a.result['Message'] == 'Resizing disabled')
286 | 
287 | @freeze_time("2016-01-01 18:45:00", tz_offset=0)
288 | def test_prevent_scale_down_during_scheduled():
289 |     test_yaml = get_vars()
290 |     a.testing_startup(test_yaml[0], test_yaml[1],
291 |                       medium_multi_az_available, low_cpu, no_update)
292 |     assert(a.result['Action'] == 'NO_ACTION')
293 |     assert(a.result['Message'] == 'Already at bottom for size during scheduled scale up')
294 | 
295 | @freeze_time("2016-01-01 19:30:00", tz_offset=0)
296 | def test_invalid_index():
297 |     test_yaml = get_vars('_invalid_index')
298 |     a.testing_startup(test_yaml[0], test_yaml[1],
299 |                       medium_multi_az_available, high_cpu, no_update)
300 |     assert(a.result['Action'] == 'NO_ACTION')
301 |     assert(a.result['Message'] == 'invalid scheduled_index')
--------------------------------------------------------------------------------
/tests/test_vars.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | stack_prefix: reds
3 | region: us-west-2
4 | instance_sizes:
5 |   - db.t2.micro
6 |   - db.t2.small
7 |   - db.m3.medium
8 |   - db.m4.large
9 |   - db.m4.xlarge
10 | scale_down:
11 |   cron: 0 5 * * 2-6
12 |   alarm_duration: 5
13 |   threshold: 10
14 |   cooldown: 60
15 | scale_up:
16 |   cron: 0 15 * * 1-5
17 |   alarm_duration: 10
18 |   threshold: 80
19 |   cooldown: 10
20 | credits:
21 |   alarm_duration: 10
22 |   threshold: 2
23 |   cooldown: 10
24 | enabled: True
25 | scheduled_index: 2
26 | schedule_enabled: True
27 | rds_identifier: test-db-identifier
28 | ...
29 | 
--------------------------------------------------------------------------------
/tests/test_vars_disabled.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | stack_prefix: reds
3 | region: us-west-2
4 | instance_sizes:
5 |   - db.t2.micro
6 |   - db.t2.small
7 |   - db.m3.medium
8 |   - db.m4.large
9 |   - db.m4.xlarge
10 | scale_down:
11 |   cron: 0 5 * * 2-6
12 |   alarm_duration: 5
13 |   threshold: 10
14 |   cooldown: 60
15 | scale_up:
16 |   cron: 0 15 * * 1-5
17 |   alarm_duration: 10
18 |   threshold: 80
19 |   cooldown: 10
20 | credits:
21 |   alarm_duration: 10
22 |   threshold: 2
23 |   cooldown: 10
24 | enabled: False
25 | scheduled_index: 2
26 | schedule_enabled: False
27 | rds_identifier: test-db-identifier
28 | ...
29 | 
--------------------------------------------------------------------------------
/tests/test_vars_invalid_index.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | stack_prefix: reds
3 | region: us-west-2
4 | instance_sizes:
5 |   - db.t2.micro
6 |   - db.t2.small
7 |   - db.m3.medium
8 |   - db.m4.large
9 |   - db.m4.xlarge
10 | scale_down:
11 |   cron: 0 5 * * 2-6
12 |   alarm_duration: 5
13 |   threshold: 10
14 |   cooldown: 60
15 | scale_up:
16 |   cron: 0 15 * * 1-5
17 |   alarm_duration: 10
18 |   threshold: 80
19 |   cooldown: 10
20 | credits:
21 |   alarm_duration: 10
22 |   threshold: 2
23 |   cooldown: 10
24 | enabled: True
25 | scheduled_index: 6
26 | schedule_enabled: True
27 | rds_identifier: test-db-identifier
28 | ...
29 | 
--------------------------------------------------------------------------------
/vars.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | stack_prefix: reds
3 | region: us-west-2
4 | instance_sizes:
5 |   - db.t2.micro
6 |   - db.t2.small
7 |   - db.m3.medium
8 |   - db.m4.large
9 |   - db.m4.xlarge
10 | scale_down:
11 |   cron: 0 5 * * 2-6
12 |   alarm_duration: 5
13 |   threshold: 10
14 |   cooldown: 60
15 | scale_up:
16 |   cron: 0 15 * * 1-5
17 |   alarm_duration: 10
18 |   threshold: 80
19 |   cooldown: 10
20 | credits:
21 |   alarm_duration: 10
22 |   threshold: 2
23 |   cooldown: 10
24 | enabled: True
25 | scheduled_index: 2
26 | schedule_enabled: False
27 | rds_identifier: your-db-identifier
28 | ...
29 | 
--------------------------------------------------------------------------------
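The cooldown tests in tests/test_reds.py (test_blocked_recent_up at 18:35, test_allow_recent_up at 18:55, test_blocked_recent_down at 18:45) all pivot on the saturday_midday_update fixture's 18:30 "Finished applying modification to DB instance class" event, combined with the scale_up cooldown of 10 minutes and scale_down cooldown of 60 minutes in tests/test_vars.yaml. A minimal sketch of that check under those assumptions — the helper name is made up for illustration and is not ReDS's actual code:

```python
import datetime

def cooldown_elapsed(now, last_finished, cooldown_minutes):
    # True once `cooldown_minutes` have passed since the last resize finished.
    return now - last_finished >= datetime.timedelta(minutes=cooldown_minutes)

# The saturday_midday_update fixture finishes its resize at 18:30 UTC.
finished = datetime.datetime(2016, 1, 2, 18, 30)

# scale_up cooldown is 10 minutes: blocked at 18:35, allowed again by 18:55.
print(cooldown_elapsed(datetime.datetime(2016, 1, 2, 18, 35), finished, 10))  # False
print(cooldown_elapsed(datetime.datetime(2016, 1, 2, 18, 55), finished, 10))  # True

# scale_down cooldown is 60 minutes: still blocked at 18:45.
print(cooldown_elapsed(datetime.datetime(2016, 1, 2, 18, 45), finished, 60))  # False
```

Note that the fixture's 18:00 "Applying modification" event does not reset the clock in this reading; only the finish event at 18:30 makes all three test outcomes consistent.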
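Similarly, test_invalid_index together with tests/test_vars_invalid_index.yaml shows that scheduled_index must point at an entry of instance_sizes (6 is out of range for the five-entry list). A hedged sketch of that validation — the function below is hypothetical, not taken from reds.py:

```python
def check_scheduled_index(cfg):
    # Mirror the 'invalid scheduled_index' message the tests expect;
    # return None when the index points at a real instance_sizes entry.
    if not 0 <= cfg["scheduled_index"] < len(cfg["instance_sizes"]):
        return "invalid scheduled_index"
    return None

sizes = ["db.t2.micro", "db.t2.small", "db.m3.medium", "db.m4.large", "db.m4.xlarge"]

print(check_scheduled_index({"instance_sizes": sizes, "scheduled_index": 2}))  # None
print(check_scheduled_index({"instance_sizes": sizes, "scheduled_index": 6}))  # invalid scheduled_index
```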