├── .gitignore
├── README.md
├── api
│   ├── __init__.py
│   ├── alerter.py
│   ├── chefclient.py
│   ├── coordinator.py
│   ├── eddaclient.py
│   └── instanceenricher.py
├── docs
│   └── arch.png
├── etc
│   ├── configfile_template.json
│   └── statusfile_template.json
├── nessus_scan.py
├── plugins
│   ├── __init__.py
│   ├── ami.py
│   ├── chef.py
│   ├── elbs.py
│   ├── iam.py
│   ├── instancetags.py
│   ├── route53.py
│   ├── s3acl.py
│   ├── secgroups.py
│   └── sso.py
├── reddalert.py
├── requirements-test.txt
├── requirements.txt
├── sentry_processors.py
└── tests
    ├── __init__.py
    ├── api
    │   ├── __init__.py
    │   ├── test_alerting.py
    │   ├── test_eddaclient.py
    │   └── test_instanceenricher.py
    ├── plugins
    │   ├── __init__.py
    │   ├── test_ami.py
    │   ├── test_chef.py
    │   ├── test_elbs.py
    │   ├── test_iam.py
    │   ├── test_missingtag.py
    │   ├── test_newtag.py
    │   ├── test_route53.py
    │   ├── test_s3acl.py
    │   ├── test_secgroups.py
    │   └── test_sso.py
    ├── test_data
    │   ├── test_iam_diff.txt
    │   ├── test_invalid.json
    │   └── test_status_file.json
    └── test_reddalert.py
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *.py[cod]
2 | 
3 | # C extensions
4 | *.so
5 | 
6 | # Packages
7 | *.egg
8 | *.egg-info
9 | dist
10 | eggs
11 | parts
12 | bin
13 | var
14 | sdist
15 | develop-eggs
16 | .installed.cfg
17 | lib
18 | lib64
19 | __pycache__
20 | 
21 | # Installer logs
22 | pip-log.txt
23 | 
24 | # Unit test / coverage reports
25 | .coverage
26 | .tox
27 | nosetests.xml
28 | 
29 | # Translations
30 | *.mo
31 | 
32 | # Virtualenv
33 | virtualenv
34 | 
35 | # Mr Developer
36 | .mr.developer.cfg
37 | .project
38 | .pydevproject
39 | 
40 | # OSX
41 | .DS_Store
42 | 
43 | configfile.json
44 | statusfile.json
45 | 
46 | # lock files
47 | etc/*.*-*
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # reddalert
2 | 
3 | AWS security monitoring/alerting tool built on top of Netflix's [EDDA](https://github.com/Netflix/edda) project.
4 | 
5 | What do we want to see? Examples:
6 | * security group whitelists some weird port(range)
7 | * ELB forwards traffic to some weird port
8 | * an EC2 instance was created from an AMI not seen before
9 | * IAM user added to a group
10 | 
11 | ## Installation
12 | 
13 | Installing and running the project is pretty simple:
14 | 
15 | ```
16 | $ virtualenv virtualenv
17 | $ . virtualenv/bin/activate
18 | $ pip install -r requirements.txt
19 | $ python reddalert.py --configfile <configfile> --statusfile <statusfile> rule_1 [rule_2 [...]]
20 | ```
21 | 
22 | Then set up a cron job which calls this script periodically.
23 | 
24 | ### The configuration file
25 | 
26 | ```reddalert``` integrates into an AWS environment; the purpose of this file is to describe that environment. The minimum you need is the address of a running [EDDA](https://github.com/Netflix/edda) server. See ```etc/configfile_template.json``` for an example!
27 | 
28 | * ```edda``` The address of your EDDA instance.
29 | * ```output``` Comma-separated list of alerting targets. If something strange is found, ```reddalert``` can e.g. email you, copy the alert into Elasticsearch, or just print it to standard output.
30 | * ```store-until``` Typically we're only interested in events that have happened since the last run. If this option is ```true```, the timestamp of the current run is stored in the status file at the end of the run. It can be overridden from the command line; see ```--since``` and ```--until```.
31 | * ```plugin.<plugin_name>``` Plugin-specific options.
32 | * ```es_host```, ```es_port``` If you use the ```elasticsearch``` output, define the ES address here.
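
For reference, a minimal configuration (assuming a local EDDA instance and plain stdout alerting; every key shown is a real option from the template) could be as small as:

```
{
    "edda": "http://localhost:8080/edda",
    "output": "stdout",
    "store-until": true
}
```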
33 | 
34 | For plugin-specific settings, check the plugin's documentation.
35 | 
36 | ### The status file
37 | 
38 | This file is used to store the context needed by plugins between each run. You don't want to keep getting the same alert for the same AMI, do you? Well, that's why this file exists.
39 | 
40 | Furthermore, the timestamp (epoch) of the last run is stored here if ```store-until``` is enabled.
41 | 
42 | ## Detailed Description
43 | 
44 | ![arch](docs/arch.png)
45 | 
46 | ### Project Layout
47 | 
48 | ```
49 | [project dir]
50 |  \- api (core files, like EDDA client lib, alerters, etc.)
51 |  \- docs (documentation)
52 |  \- etc (configuration files)
53 |  \- plugins (plugins go here)
54 |  \- tests (unit tests)
55 |  \- reddalert.py (main executable)
56 | ```
57 | 
58 | ### Plugins
59 | 
60 | For a detailed description of the available plugins, check the [wiki](https://github.com/prezi/reddalert/wiki).
61 | 
62 | Each plugin class is expected to have two methods, an ```init``` and a ```run```.
63 | * ```init``` Receives some infrastructure-level objects, notably:
64 |   * ```edda_client``` EDDA "client proxy"
65 |   * ```config``` The ```plugin.{plugin_name}``` part of the config file
66 | * ```run``` Should return a list of alert objects in the following format (note that ```details``` is a list):
67 | 
68 | ```
69 | {
70 |     "plugin_name": "identifier of this plugin",
71 |     "id": "identifier of the alerted object, e.g. security group id",
72 |     "details": ["human-readable details, e.g. suspicious port range"]
73 | }
74 | ```
75 | 
76 | Once you have a plugin, don't forget to add it to ```plugin_list``` in the plugin module's [init file](plugins/__init__.py) to make it available from the command line; a minimal skeleton is sketched below.
77 | 
78 | ## How Do We Use It at Prezi?
79 | 
80 | We run it every 6 hours. The alerts are sent to Elasticsearch, and Trac tickets are automatically created from them (by a separate system). For a long time we received the alerts through email, which was a feasible workflow as well. After the initial fine-tuning we process about 3-5 alerts daily.
81 | 
82 | ### Do you want to contribute?
83 | 
84 | Send a pull request. Tests are highly appreciated.
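
To make the plugin interface above concrete, here is a minimal sketch (a hypothetical ```example``` plugin, not shipped with this repo; it relies only on the documented ```init```/```run``` contract and the same ```query```/```_since``` members of the EDDA client that the bundled plugins use):

```
class ExamplePlugin:

    def __init__(self):
        # this name selects the "plugin.example" config and status sections
        self.plugin_name = 'example'

    def init(self, edda_client, config, status):
        self.edda_client = edda_client
        self.config = config  # the "plugin.example" part of the config file
        self.status = status  # persisted between runs via the status file

    def run(self):
        since = self.edda_client._since or 0
        machines = self.edda_client.query("/api/v2/view/instances;_expand")
        return [{
            "plugin_name": self.plugin_name,
            "id": m["instanceId"],
            "details": ["instance launched at %s" % m["launchTime"]]
        } for m in machines if int(m["launchTime"]) >= since]
```

Registered as ```'example': ExamplePlugin()``` in ```plugin_list```, it can then be invoked from the command line like any other rule.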
85 | 
86 | ## See Also
87 | 
88 | [Edda Művek](http://en.wikipedia.org/wiki/Edda_M%C5%B1vek)
89 | [Command & Conquer: Red Alert](http://en.wikipedia.org/wiki/Command_%26_Conquer:_Red_Alert)
90 | 
--------------------------------------------------------------------------------
/api/__init__.py:
--------------------------------------------------------------------------------
1 | from coordinator import Coordinator
2 | from eddaclient import EddaClient
3 | from eddaclient import EddaException
4 | from alerter import Alerter
5 | from instanceenricher import InstanceEnricher
6 | from instanceenricher import instance_report
--------------------------------------------------------------------------------
/api/alerter.py:
--------------------------------------------------------------------------------
1 | import StringIO
2 | import datetime
3 | import hashlib
4 | import logging
5 | import smtplib
6 | import sys
7 | from email.mime.multipart import MIMEMultipart
8 | from email.mime.text import MIMEText
9 | 
10 | from elasticsearch import Elasticsearch
11 | 
12 | 
13 | class StdOutAlertSender:
14 |     def __init__(self, tabsep, console=sys.stdout):
15 |         self.tab_separated_output = tabsep
16 |         self.console = console
17 | 
18 |     def send_alerts(self, configuration, alerts):
19 |         for alert in alerts:
20 |             self.console.write(self.format_alert(alert[0], alert[1], alert[2]))
21 |             self.console.write("\n")
22 | 
23 |     def format_alert(self, plugin_name, checked_id, details):
24 |         if self.tab_separated_output:
25 |             return ("%s\t%s\t%s\n" %
26 |                     (plugin_name, checked_id, repr(details)))
27 |         else:
28 |             return ("Rule: %s\n"
29 |                     "Subject: %s\n"
30 |                     "Alert: %s\n\n" %
31 |                     (plugin_name, checked_id, details))
32 | 
33 | 
34 | class EmailAlertSender:
35 |     def __init__(self, msg_type="plain"):
36 |         self.msg_type = msg_type
37 | 
38 |     def send_alerts(self, configuration, alerts):
39 |         output = StringIO.StringIO()
40 |         ow = StdOutAlertSender(tabsep=False, console=output)
41 |         ow.send_alerts(None, alerts)
42 |         mail_content = output.getvalue()
43 | 
44 |         email_from = configuration.get("email_from", "reddalert@localhost")
45 |         email_to = configuration.get("email_to", ['root@localhost'])
46 |         email_subject = configuration.get("email_subject", "[reddalert] Report")
47 |         email_txt = mail_content if self.msg_type == "plain" else mail_content.replace("\n", "<br/>
") 48 | 49 | self.send_email(email_from, email_to, email_subject, email_txt, self.msg_type, config=configuration) 50 | 51 | def send_email(self, email_from, email_to, subject, txt, msg_type="plain", config={}): 52 | recipients = ", ".join(email_to) 53 | 54 | msg = MIMEMultipart() 55 | msg["Subject"] = subject 56 | msg['From'] = email_from 57 | msg['To'] = recipients 58 | 59 | msg.attach(MIMEText(txt.encode("utf-8"), "plain")) 60 | 61 | print "msg: %s" % repr(msg.as_string()) 62 | 63 | smtp = smtplib.SMTP(config.get("smtp_host", 'localhost')) 64 | smtp.sendmail(email_from, email_to, msg.as_string()) 65 | smtp.quit() 66 | 67 | 68 | class ESAlertSender: 69 | def __init__(self): 70 | self.es = None 71 | self.logger = logging.getLogger("ESAlertSender") 72 | 73 | def send_alerts(self, configuration, alerts): 74 | self.es = Elasticsearch([{"host": configuration["es_host"], "port": configuration["es_port"]}]) 75 | for alert in self.flatten_alerts(alerts): 76 | self.insert_es(alert) 77 | 78 | def insert_es(self, alert): 79 | try: 80 | alert["@timestamp"] = datetime.datetime.utcnow().isoformat() + "Z" 81 | alert["type"] = "reddalert" 82 | self.es.create(body=alert, id=hashlib.sha1(str(alert)).hexdigest(), index='reddalert', doc_type='reddalert') 83 | except Exception as e: 84 | self.logger.exception(e) 85 | 86 | def flatten_alerts(self, alerts): 87 | for alert in alerts: 88 | details = alert[2] 89 | if isinstance(details, dict): 90 | base = {"rule": alert[0], "id": alert[1]} 91 | base.update(details) 92 | yield base 93 | else: 94 | yield {"rule": alert[0], "id": alert[1], "details": details} 95 | 96 | 97 | class Alerter: 98 | AVAILABLE_ALERTERS = { 99 | "stdout": StdOutAlertSender(tabsep=False), 100 | "stdout_tabsep": StdOutAlertSender(tabsep=True), 101 | "mail_txt": EmailAlertSender(msg_type='plain'), 102 | "mail_html": EmailAlertSender(msg_type='text/html'), 103 | "elasticsearch": ESAlertSender() 104 | } 105 | 106 | def __init__(self, enabled_alert_formats): 107 | formats = enabled_alert_formats.split(',') 108 | self.enabled_alerters = [self.AVAILABLE_ALERTERS[a] for a in formats if a in self.AVAILABLE_ALERTERS] 109 | self.recorded_alerts = [] 110 | 111 | def send_alerts(self, configuration={}): 112 | if self.recorded_alerts: 113 | for alerter in self.enabled_alerters: 114 | alerter.send_alerts(configuration, self.recorded_alerts) 115 | 116 | def run(self, alert_obj): 117 | # don't store duplicate alerts 118 | alerts_obj = {a['id']: (a['plugin_name'], a['id'], d) for a in alert_obj for d in a['details']} 119 | 120 | self.recorded_alerts.extend(alerts_obj.values()) 121 | -------------------------------------------------------------------------------- /api/chefclient.py: -------------------------------------------------------------------------------- 1 | import json 2 | import urllib 3 | import time 4 | 5 | import logging 6 | from IPy import IP 7 | from chef import ChefAPI 8 | 9 | from chef.exceptions import ChefServerError 10 | 11 | 12 | class ChefClient: 13 | def __init__(self, chef_api, plugin_name): 14 | """ 15 | 16 | :type chef_api: ChefAPI 17 | """ 18 | self.chef_api = chef_api 19 | self.logger = logging.getLogger(plugin_name) 20 | 21 | def search_chef_hosts(self, requested_node_attributes, query='*:*', chunk_size=2000): 22 | results = [] 23 | for offset in xrange(5): 24 | get_params = urllib.urlencode({'q': query, 'start': offset * chunk_size, 'rows': chunk_size}) 25 | for retry in xrange(5): 26 | try: 27 | search_result = self.chef_api.request('POST', '/search/node?{}'.format(get_params), 28 
| headers={'accept': 'application/json'}, 29 | data=json.dumps(requested_node_attributes) 30 | ) 31 | result_list = json.loads(search_result).get('rows', []) 32 | node_list = [node.get('data', {}) for node in result_list] 33 | results.extend(node_list) 34 | break 35 | except ChefServerError: 36 | if retry == 4: 37 | self.logger.exception("Chef API failed after 5 retries: POST /search/node") 38 | time.sleep(5) 39 | return results 40 | -------------------------------------------------------------------------------- /api/coordinator.py: -------------------------------------------------------------------------------- 1 | import inspect 2 | 3 | from instanceenricher import InstanceEnricher 4 | 5 | class Coordinator: 6 | 7 | def __init__(self, edda_client, alerter, config, status): 8 | self.edda_client = edda_client 9 | self.alerter = alerter 10 | self.status = status 11 | self.config = config 12 | self.instance_enricher = InstanceEnricher(self.edda_client) 13 | self.instance_enricher.initialize_caches() 14 | 15 | def run(self, plugin): 16 | plugin_config = self.plugin_specific(plugin.plugin_name, self.config) 17 | plugin_status = self.plugin_specific(plugin.plugin_name, self.status) 18 | init_arg_count = len(inspect.getargspec(plugin.init)[0]) 19 | if init_arg_count == 4: 20 | plugin.init(self.edda_client, plugin_config, plugin_status) 21 | else: 22 | plugin.init(self.edda_client, plugin_config, plugin_status, self.instance_enricher) 23 | results = plugin.run() 24 | self.alerter.run(results) 25 | 26 | def plugin_specific(self, plugin_name, ctx): 27 | search_name = "plugin." + plugin_name 28 | if search_name not in ctx: 29 | ctx[search_name] = {} 30 | return ctx[search_name] 31 | -------------------------------------------------------------------------------- /api/eddaclient.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import logging 3 | import json 4 | import urllib2 5 | 6 | 7 | class EddaException(Exception): 8 | 9 | def __init__(self, ret): 10 | Exception.__init__(self, 'EDDA returned with an error: ' + repr(ret)) 11 | self.response = ret 12 | 13 | 14 | class BetterHTTPErrorProcessor(urllib2.BaseHandler): 15 | # a substitute/supplement to urllib2.HTTPErrorProcessor 16 | # that doesn't raise exceptions on status codes 400 17 | 18 | def http_error_400(self, request, response, code, msg, hdrs): 19 | return response 20 | 21 | opener = urllib2.build_opener(BetterHTTPErrorProcessor) 22 | urllib2.install_opener(opener) 23 | 24 | 25 | class EddaClient: 26 | 27 | def __init__(self, edda_url): 28 | self.logger = logging.getLogger("EddaClient") 29 | self._edda_url = edda_url 30 | self._every = False 31 | self._since = None 32 | self._until = None 33 | self._updateonly = False 34 | self._cache = {} 35 | 36 | def clone(self): 37 | edda_client = EddaClient(self._edda_url) 38 | edda_client._every = self._every 39 | edda_client._since = self._since 40 | edda_client._until = self._until 41 | edda_client._updateonly = self._updateonly 42 | edda_client._cache = self._cache 43 | return edda_client 44 | 45 | def clone_modify(self, uv): 46 | edda_client = self.clone() 47 | for key, value in uv.iteritems(): 48 | edda_client.__dict__[key] = value 49 | return edda_client 50 | 51 | def query(self, uri): 52 | url = self._construct_uri(uri) 53 | if url in self._cache: 54 | return self._cache[url] 55 | else: 56 | response = self.do_query(url) 57 | self._cache[url] = response 58 | return response 59 | 60 | def do_query(self, url): 61 | 
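        # fetch the URL directly (bypassing the per-client cache), parse the JSON
        # response, and surface errors signalled by EDDA as EddaException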
self.logger.info("do_query: '%s'", url) 62 | try: 63 | response = urllib2.urlopen(url).read() 64 | ret = json.loads(response) 65 | if 'code' in ret: 66 | # indicates an error in EDDA response 67 | raise EddaException('EDDA returned with an error: ' + repr(ret)) 68 | except ValueError as e: 69 | raise ValueError('%s, response: %s' % (e, response)) 70 | 71 | return ret 72 | 73 | def raw_query(self, uri): 74 | url = self._construct_uri(uri) 75 | self.logger.info("raw_query: '%s'", url) 76 | try: 77 | file = urllib2.urlopen(url) 78 | ret = file.read() 79 | if file.getcode() != 200: 80 | print 'EDDAClient got non-200 error code.' 81 | try: 82 | # indicates an error in EDDA response 83 | raise EddaException(json.loads(ret)) 84 | except ValueError as e: 85 | raise ValueError('Failed to parse EDDA error message: %s, response: %s' % (e, ret)) 86 | return ret 87 | except urllib2.HTTPError as e: 88 | print 'Got HTTPError', e 89 | return '' 90 | 91 | def _construct_uri(self, uri): 92 | base = self._edda_url + uri 93 | if self._every: 94 | base += ';_all' 95 | if self._since is not None: 96 | base += ';_since=%s' % self._since 97 | if self._until is not None: 98 | base += ';_until=%s' % self._until 99 | if self._updateonly: 100 | base += ';_updated' 101 | 102 | return base 103 | 104 | def every(self): 105 | return self.clone_modify({'_every': True}) 106 | 107 | def updateonly(self): 108 | return self.clone_modify({'_updateonly': True}) 109 | 110 | def since(self, since): 111 | return self.clone_modify({'_since': since}) 112 | 113 | def until(self, until): 114 | return self.clone_modify({'_until': until}) 115 | 116 | def with_cache(self, cache): 117 | return self.clone_modify({'_cache': cache}) 118 | 119 | def clean(self): 120 | return EddaClient(self._edda_url) 121 | 122 | def soft_clean(self): 123 | return self.clean().with_cache(self._cache) 124 | -------------------------------------------------------------------------------- /api/instanceenricher.py: -------------------------------------------------------------------------------- 1 | import operator 2 | import string 3 | 4 | 5 | class InstanceEnricher: 6 | def __init__(self, edda_client): 7 | self.edda_client = edda_client.soft_clean() 8 | self.elbs = [] 9 | self.sec_groups = {} 10 | 11 | def initialize_caches(self): 12 | self.elbs = self._query_loadbalancers() 13 | self.sec_groups = self._query_security_groups() 14 | 15 | def _query_security_groups(self): 16 | groups = self.edda_client.query("/api/v2/aws/securityGroups;_expand") 17 | return {g["groupId"]: reduce(operator.add, self._clean_ip_permissions(g["ipPermissions"]), []) for g in groups} 18 | 19 | def _clean_ip_permissions(self, perms): 20 | return [self._clean_ip_permission(p) for p in perms] 21 | 22 | def _clean_ip_permission(self, permission): 23 | return [{"port": permission["toPort"], "range": r} for r in permission["ipRanges"]] 24 | 25 | def _query_loadbalancers(self): 26 | elbs = self.edda_client.query("/api/v2/aws/loadBalancers;_expand") 27 | return [self._clean_elb(e) for e in elbs if len(e.get("instances", [])) > 0] 28 | 29 | def _clean_elb(self, elb): 30 | return { 31 | "DNSName": elb.get("DNSName"), 32 | "instances": [i.get("instanceId") for i in elb.get("instances")], 33 | "ports": [l.get("listener", {}).get("loadBalancerPort") for l in elb.get("listenerDescriptions")] 34 | } 35 | 36 | def enrich(self, instance_data): 37 | instance_id = instance_data.get("instanceId") 38 | ami = instance_data.get("imageId") 39 | instance_data["service_type"] = 
self._get_type_from_tags(instance_data.get("tags", [])) or ami 40 | instance_data["elbs"] = [elb for elb in self.elbs if instance_id in elb["instances"]] 41 | self._enrich_security_groups(instance_data) 42 | return instance_data 43 | 44 | def _enrich_security_groups(self, instance_data): 45 | if "securityGroups" in instance_data: 46 | for sg in instance_data["securityGroups"]: 47 | sg["rules"] = self.sec_groups.get(sg["groupId"], []) 48 | 49 | def _get_type_from_tags(self, tags): 50 | LOOKUP_ORDER = ["service_name", "Name", "aws:cloudformation:stack-name", "aws:autoscaling:groupName"] 51 | for tag_name in LOOKUP_ORDER: 52 | matches = [t.get("value") for t in tags if t.get("key") == tag_name] 53 | if matches: 54 | return matches[0] 55 | return None 56 | 57 | def report(self, instance_data, extra={}): 58 | return instance_report(self.enrich(instance_data), extra) 59 | 60 | 61 | def instance_report(instance, extra={}): 62 | # convert list of tags to a more readable dict 63 | tags = {tag['key']: tag['value'] for tag in instance.get('tags', []) if 'key' in tag and 'value' in tag} 64 | 65 | if 'iamInstanceProfile' in instance and instance['iamInstanceProfile']: 66 | arn = instance.get('iamInstanceProfile').get('arn') 67 | account_id = arn.split(':')[4] if arn else None 68 | else: 69 | # edda does not provide account id for nodes without instance profiles 70 | account_id = None 71 | 72 | result = { 73 | "instanceId": instance.get("instanceId", None), 74 | 'tags': tags, 75 | 'keyName': instance.get('keyName', None), 76 | "started": int(instance.get("launchTime", 0)), 77 | "service_type": instance.get("service_type", None), 78 | "elbs": instance.get("elbs", []), 79 | "open_ports": reduce(operator.add, [sg["rules"] for sg in instance.get("securityGroups", [])], []), 80 | "publicIpAddress": instance.get("publicIpAddress", None), 81 | "privateIpAddress": instance.get("privateIpAddress", None), 82 | "awsRegion": instance.get('placement', {}).get('availabilityZone', '').rstrip(string.ascii_lowercase), 83 | "awsAccount": account_id 84 | } 85 | result.update(extra) 86 | return result 87 | -------------------------------------------------------------------------------- /docs/arch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prezi/reddalert/c6748999a4015c305fccfd596adefbbe883ad19f/docs/arch.png -------------------------------------------------------------------------------- /etc/configfile_template.json: -------------------------------------------------------------------------------- 1 | { 2 | "edda": "http://localhost:8080/edda", 3 | "output": "stdout", 4 | "store-until": true, 5 | "es_host": "localhost", 6 | "es_port": 9200, 7 | "email_from": "reddalert@localhost", 8 | "email_to": [ 9 | "root@localhost" 10 | ], 11 | "email_subject": "[reddalert] Report", 12 | "plugin.ami": { 13 | "allowed_tags": [ 14 | "jenkins" 15 | ] 16 | }, 17 | "plugin.elbs": { 18 | "allowed_ports": [ 19 | 80, 20 | 443 21 | ] 22 | }, 23 | "plugin.secgroups": { 24 | "allowed_ports": [ 25 | 22 26 | ], 27 | "allowed_protocols": [ 28 | "icmp" 29 | ], 30 | "whitelisted_ips": [ 31 | "127.0.0.1" 32 | ], 33 | "whitelisted_entries": { 34 | "sg-1234567 (foobar)": { 35 | "22": [ 36 | "0.0.0.0/0" 37 | ], 38 | "8000-9000": [ 39 | "1.2.3.4/32" 40 | ] 41 | } 42 | } 43 | }, 44 | "plugin.sso_unprotected": { 45 | "godauth_url": "", 46 | "sso_url": "", 47 | "zone": "", 48 | "legit_domains": [], 49 | "exception_domains": [ 50 | "cdn04.stage.prezi.com" 51 | ] 52 | }, 53 | 
"plugin.security_headers": { 54 | "zone": "", 55 | "legit_domains": [], 56 | "exception_domains": [] 57 | }, 58 | "plugin.iam": { 59 | "allowed": [ 60 | "^.*Deployment$" 61 | ] 62 | }, 63 | "plugin.s3acl": { 64 | "user": "", 65 | "key": "", 66 | "excluded_buckets": [ 67 | "cached\\w*" 68 | ], 69 | "excluded_keys": [ 70 | "^.*tmp_\\w*.json$" 71 | ], 72 | "allowed": [ 73 | { 74 | "uid": "deadbeef", 75 | "op": "READ" 76 | }, 77 | { 78 | "uid": "deadbeef", 79 | "op": "READ" 80 | }, 81 | { 82 | "uid": "deadbeef", 83 | "op": "READ_ACP" 84 | }, 85 | { 86 | "uid": "deadbeef", 87 | "op": "WRITE" 88 | }, 89 | { 90 | "uid": "deadbeef", 91 | "op": "WRITE_ACP" 92 | }, 93 | { 94 | "uid": "deadbeef", 95 | "op": "FULL_CONTROL" 96 | } 97 | ], 98 | "allowed_specific": { 99 | "foobar": [ 100 | { 101 | "uid": "deadbeef", 102 | "op": "FULL_CONTROL" 103 | } 104 | ] 105 | } 106 | }, 107 | "plugin.chef": { 108 | "chef_server_url": "https://api.opscode.com/organizations/xxx", 109 | "client_key_file": "/etc/chef/client.pem", 110 | "client_name": "foo", 111 | "excluded_instances": [ 112 | "jenkins" 113 | ] 114 | } 115 | } 116 | -------------------------------------------------------------------------------- /etc/statusfile_template.json: -------------------------------------------------------------------------------- 1 | { 2 | "plugin.newtag": {}, 3 | "plugin.iam": {}, 4 | "since": 1391003442000, 5 | "plugin.elbs": {}, 6 | "plugin.missingtag": {}, 7 | "plugin.s3acl": {}, 8 | "plugin.secgroups": {}, 9 | "plugin.ami": { 10 | "first_seen": { 11 | "ami-deadbeef": 1382092709000 12 | } 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /nessus_scan.py: -------------------------------------------------------------------------------- 1 | from api.instanceenricher import InstanceEnricher 2 | from reddalert import Reddalert 3 | 4 | if __name__ == '__main__': 5 | import argparse 6 | import logging 7 | import time 8 | import random 9 | import sys 10 | import boto.sqs 11 | import json 12 | import itertools 13 | from api import EddaClient 14 | 15 | parser = argparse.ArgumentParser(description='Runs tests against AWS configuration') 16 | parser.add_argument('--configfile', '-c', default='etc/configfile.json', help='Configuration file') 17 | parser.add_argument('--policy-id', '-p', help='Nessus policy id used for scanning') 18 | parser.add_argument('--scan-name', help='Name of the created Nessus scan') 19 | parser.add_argument('--service-type', '-t', help='Service type to scan') 20 | parser.add_argument('--random-service-types', help='Number of random service types to scan') 21 | parser.add_argument('--instances', '-n', default='1', help='Number of (random) instances to scan (all|)') 22 | parser.add_argument('--region', default='us-east-1', help='Region of output SQS queue') 23 | parser.add_argument('--queue-name', default='sccengine-prod', help='Name of output SQS queue') 24 | # hack to avoid race condition within EDDA: it's possible instances are synced while eg security groups aren't. 
25 |     parser.add_argument('--until', '-u', default=int(time.time()) * 1000 - 5 * 60 * 1000, help='Until, epoch in ms')
26 |     parser.add_argument('--edda', '-e', help='Edda base URL')
27 |     parser.add_argument('--sentry', default=None, help='Sentry url with user:pass (optional)')
28 |     parser.add_argument('--silent', '-l', action="count", help='Suppress log messages lower than warning')
29 |     args = parser.parse_args()
30 | 
31 |     root_logger = logging.getLogger()
32 |     # create formatter and add it to the handlers
33 |     formatter = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
34 |     # create console handler with a higher log level
35 |     ch = logging.StreamHandler()
36 |     ch.setFormatter(formatter)
37 | 
38 |     # Setup logger output
39 |     root_logger.addHandler(ch)
40 | 
41 |     # Suppress logging
42 |     if args.silent:
43 |         root_logger.setLevel(logging.WARNING)
44 |     else:
45 |         root_logger.setLevel(logging.DEBUG)
46 | 
47 |     root_logger.info('Called with %s', args)
48 | 
49 |     if args.sentry:
50 |         from raven import Client
51 |         from raven.handlers.logging import SentryHandler
52 | 
53 |         client = Client(args.sentry)
54 |         handler = SentryHandler(client)
55 |         handler.setLevel(logging.WARNING)
56 |         root_logger.addHandler(handler)
57 | 
58 |     # Load configuration:
59 |     config = Reddalert.load_json(args.configfile, root_logger)
60 |     # root_logger.debug('Config: %s' % config)
61 | 
62 |     # Check arguments
63 |     if not (args.policy_id and args.scan_name):
64 |         root_logger.critical('Missing policy-id or scan-name argument.')
65 |         sys.exit()
66 | 
67 |     # Setup EDDA client
68 |     edda_url = Reddalert.get_config('edda', config, args.edda)
69 |     edda_client = EddaClient(edda_url).until(args.until)
70 |     instance_enricher = InstanceEnricher(edda_client)
71 |     instance_enricher.initialize_caches()
72 | 
73 |     instances = edda_client.query("/api/v2/view/instances;_expand")
74 |     enriched_instances = [instance_enricher.report(instance) for instance in instances]
75 |     grouped_by_service_type = itertools.groupby(sorted(enriched_instances, key=lambda i: i['service_type']),
76 |                                                 key=lambda i: i['service_type'])
77 |     service_types = []
78 |     for k, g in grouped_by_service_type:
79 |         service_types.append(list(g))  # Store group iterator as a list
80 | 
81 |     # print 'service_types', service_types
82 |     if args.service_type:
83 |         instance_candidates = [instance for instance in enriched_instances if
84 |                                instance.get('service_type') == args.service_type]
85 |         count = len(instance_candidates) if args.instances == 'all' else min(int(args.instances),
86 |                                                                              len(instance_candidates))
87 |         filtered_instances = [instance_candidates[i] for i in random.sample(xrange(len(instance_candidates)), count)]
88 |         root_logger.debug('Got service type, filtered instances: %s' % json.dumps(filtered_instances, indent=4))
89 |     elif args.random_service_types:
90 |         count = min(int(args.random_service_types), len(service_types))
91 |         chosen_service_types = [service_types[i] for i in random.sample(xrange(len(service_types)), count)]
92 | 
93 |         # print 'chosen_service_types', chosen_service_types
94 |         filtered_instances = [random.choice(instances) for instances in chosen_service_types]
95 | 
96 |         root_logger.debug('Got random service types, filtered instances: %s' % json.dumps(filtered_instances, indent=4))
97 |     else:
98 |         filtered_instances = enriched_instances  # use the enriched list so open_ports/elbs are populated below
99 | 
100 |     conn = boto.sqs.connect_to_region(args.region, aws_access_key_id=config['plugin.s3acl']['user'],
101 |                                       aws_secret_access_key=config['plugin.s3acl']['key'])
102 |     sqs_queue =
conn.get_queue(args.queue_name) 103 | sqs_queue.set_message_class(boto.sqs.message.RawMessage) 104 | 105 | messages_to_send = [] 106 | 107 | for enriched_instance in filtered_instances: 108 | root_logger.debug('Enriched instance: %s', json.dumps(enriched_instance, indent=4)) 109 | 110 | target_ip = enriched_instance.get("privateIpAddress", enriched_instance.get("publicIpAddress")) 111 | open_ports = enriched_instance.get("open_ports", []) 112 | target_ports = list(set([int(p.get("port")) for p in open_ports if int(p.get("port", 0)) > 0])) 113 | target_ports.sort() 114 | 115 | if target_ip and target_ports: 116 | messages_to_send.append({"type": "nessus_scan", 117 | "targets": [target_ip], 118 | "nessus_ports": ",".join(str(p) for p in target_ports), 119 | "policy_id": args.policy_id, 120 | "scan_name": "%s %s %s" % ( 121 | args.scan_name, enriched_instance.get("service_type"), target_ports)}) 122 | 123 | target_elbs = enriched_instance.get("elbs", []) or [] 124 | for elb in target_elbs: 125 | target_host = elb.get("DNSName") 126 | target_ports = list(set(elb.get("ports", []))) 127 | target_ports.sort(key=int) 128 | if target_host and target_ports: 129 | messages_to_send.append({"type": "nessus_scan", 130 | "targets": [target_host], 131 | "nessus_ports": ",".join(str(p) for p in target_ports), 132 | "policy_id": args.policy_id, 133 | "scan_name": "%s %s %s - ELB" % ( 134 | args.scan_name, enriched_instance.get("service_type"), target_ports)}) 135 | 136 | for event in messages_to_send: 137 | root_logger.info('Sending event to SQS queue: %s' % event) 138 | message = boto.sqs.message.RawMessage() 139 | message.set_body(json.dumps(event)) 140 | sqs_queue.write(message) 141 | -------------------------------------------------------------------------------- /plugins/__init__.py: -------------------------------------------------------------------------------- 1 | from ami import NewAMIPlugin 2 | from elbs import ElasticLoadBalancerPlugin 3 | from iam import UserAddedPlugin 4 | from instancetags import MissingInstanceTagPlugin, NewInstanceTagPlugin 5 | from s3acl import S3AclPlugin 6 | from secgroups import SecurityGroupPlugin 7 | from chef import NonChefPlugin 8 | from route53 import Route53Unknown, Route53Changed 9 | from sso import SSOUnprotected, SecurityHeaders 10 | 11 | 12 | plugin_list = { 13 | 'secgroups': SecurityGroupPlugin(), 14 | 'ami': NewAMIPlugin(), 15 | 'elbs': ElasticLoadBalancerPlugin(), 16 | 'newtag': NewInstanceTagPlugin(), 17 | 'missingtag': MissingInstanceTagPlugin(), 18 | 'iam': UserAddedPlugin(), 19 | 's3acl': S3AclPlugin(), 20 | 'non_chef': NonChefPlugin(), 21 | 'route53unknown': Route53Unknown(), 22 | 'route53changed': Route53Changed(), 23 | 'sso_unprotected': SSOUnprotected(), 24 | 'security_headers': SecurityHeaders() 25 | } 26 | -------------------------------------------------------------------------------- /plugins/ami.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from api import InstanceEnricher 4 | 5 | class NewAMIPlugin: 6 | 7 | def __init__(self): 8 | self.plugin_name = 'ami' 9 | 10 | def init(self, edda_client, config, status, instance_enricher): 11 | self.edda_client = edda_client 12 | self.allowed_services = config["allowed_tags"] if "allowed_tags" in config else [] 13 | self.instance_enricher = instance_enricher 14 | self.initialize_status(status) 15 | 16 | def initialize_status(self, status): 17 | if 'first_seen' not in status: 18 | status['first_seen'] = {} 19 | self.status = status 20 | 
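    # 'first_seen' maps each AMI id to the earliest launchTime ever observed; it is
    # persisted via the status file, so an AMI is only alerted the first time it appears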
21 | def run(self): 22 | return list(self.do_run()) 23 | 24 | def is_blacklisted(self, machine): 25 | return machine.get("service_type") in self.allowed_services 26 | 27 | def generate_details(self, machines): 28 | return [self.instance_enricher.report(machine) for instanceId, started, machine in machines] 29 | 30 | def do_run(self): 31 | since = self.edda_client._since if self.edda_client._since is not None else 0 32 | machines = self.edda_client.soft_clean().query("/api/v2/view/instances;_expand") 33 | 34 | grouped_by_ami = {} 35 | for m in machines: 36 | grouped_by_ami.setdefault(m["imageId"], []).append((m["instanceId"], 37 | int(m["launchTime"]), 38 | self.instance_enricher.enrich(m))) 39 | 40 | first_seen = {imageId: min([i[1] for i in instances]) for imageId, instances in grouped_by_ami.iteritems()} 41 | first_seen_updates = {imageId: min(first_seen[imageId], launchTime) if imageId in first_seen else launchTime 42 | for imageId, launchTime in self.status['first_seen'].iteritems()} 43 | first_seen.update(first_seen_updates) 44 | self.status['first_seen'] = dict(first_seen) 45 | 46 | for ami_id, instances in grouped_by_ami.iteritems(): 47 | if first_seen[ami_id] >= since: 48 | new_machines = [i for i in instances if i[1] >= since and not self.is_blacklisted(i[2])] 49 | if len(new_machines) > 0: 50 | yield { 51 | "plugin_name": self.plugin_name, 52 | "id": ami_id, 53 | "details": self.generate_details(new_machines) 54 | } 55 | -------------------------------------------------------------------------------- /plugins/chef.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from __future__ import absolute_import 4 | 5 | import logging 6 | import re 7 | 8 | from IPy import IP 9 | from chef import ChefAPI 10 | 11 | from api.chefclient import ChefClient 12 | 13 | 14 | class NonChefPlugin: 15 | """ 16 | Returns those EC2 instances which do not have a corresponding Chef entry based on the public IPv4 address. 
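    Chef-managed hosts are emitted as separate 'chef_managed' events so that conformity checks can be run on them.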
17 | """ 18 | 19 | def __init__(self): 20 | self.plugin_name = 'non_chef' 21 | self.logger = logging.getLogger(self.plugin_name) 22 | 23 | def init(self, edda_client, config, status, instance_enricher): 24 | self.edda_client = edda_client 25 | try: 26 | self.api = ChefAPI(config['chef_server_url'], config['client_key_file'], config['client_name']) 27 | except IOError: 28 | self.logger.exception('Failed to open config file: %s', config['client_key_file']) 29 | self.api = None 30 | self.excluded_instances = config.get('excluded_instances', []) 31 | self.initialize_status(status) 32 | self.instance_enricher = instance_enricher 33 | 34 | def initialize_status(self, status): 35 | if 'first_seen' not in status: 36 | status['first_seen'] = {} 37 | self.status = status 38 | 39 | def is_excluded_instance(self, tags): 40 | if 'elasticbeanstalk:environment-name' in tags or 'aws:cloudformation:stack-name' in tags: 41 | return True # Amazon ElasticBeanstalk and CloudFormation hosts are not Chef managed 42 | 43 | if 'cloudbees:pse:type' in tags: 44 | return True # New CI nodes 45 | 46 | if 'aws:elasticmapreduce:instance-group-role' in tags: 47 | return True # EMR nodes 48 | 49 | if tags.get('purpose') in ['conversion-to-xenial']: 50 | return True # PoC machines for the Xenial upgrade, TODO: remove it once experiment is done 51 | 52 | service_name = tags.get('service_name', None) or tags.get('Name', None) 53 | for excluded_instance in self.excluded_instances: 54 | if service_name is not None and re.match(excluded_instance, service_name): 55 | return True 56 | return False 57 | 58 | def run(self): 59 | return list(self.do_run()) if self.api else [] 60 | 61 | def get_chef_hosts(self): 62 | def get_public_ip(chef_node): 63 | public_ipv4 = chef_node.get('cloud_public_ipv4', None) 64 | ipaddress = chef_node.get('ipaddress', None) 65 | if public_ipv4: 66 | return public_ipv4 67 | elif ipaddress: 68 | return ipaddress 69 | 70 | requested_node_attributes = { 71 | 'cloud_public_ipv4': ['cloud', 'public_ipv4'], 72 | 'ipaddress': ['ipaddress'], 73 | 'cloud_provider': ['cloud', 'provider'], 74 | 'machinename': ['machinename'], 75 | 'name': ['name'], 76 | 'fqdn': ['fqdn'], 77 | 'platform': ['platform'], 78 | 'os': ['os'], 79 | 'os_version': ['os_version'] 80 | } 81 | 82 | node_list = ChefClient(self.api, self.plugin_name).search_chef_hosts(requested_node_attributes) 83 | return {get_public_ip(node): node for node in node_list 84 | if get_public_ip(node) and IP(get_public_ip(node)).iptype() != 'PRIVATE'} 85 | 86 | def do_run(self): 87 | def _create_alert(plugin_name, alert_id, details): 88 | return { 89 | "plugin_name": plugin_name, 90 | "id": alert_id, 91 | "details": [details] 92 | } 93 | 94 | def _enrich_with_chef(chef_node): 95 | return { 96 | 'chef_node_name': chef_node.get('name'), 97 | 'hostname': chef_node.get('machinename'), 98 | 'fqdn': chef_node.get('fqdn'), 99 | 'platform': chef_node.get('platform'), 100 | 'operating_system': chef_node.get('os'), 101 | 'operating_system_version': chef_node.get('os_version'), 102 | } 103 | 104 | # NOTE! an instance has 3 hours to register itself to chef! 
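        # shift the [since, until] window back by that grace period before comparing launch times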
105 | aws_to_chef_delay = 3 * 60 * 60 * 1000 106 | since = self.edda_client._since or 0 107 | until = self.edda_client._until or (since + 30 * 60 * 3600) 108 | check_since = since - aws_to_chef_delay 109 | check_until = until - aws_to_chef_delay 110 | 111 | chef_hosts = self.get_chef_hosts() 112 | 113 | if not chef_hosts: 114 | self.logger.warning('No chef hosts were found.') 115 | return 116 | 117 | # handle EC2 instances first 118 | ec2_instances = self.edda_client.soft_clean().query("/api/v2/view/instances;_expand") 119 | for machine in ec2_instances: 120 | enriched_instance = self.instance_enricher.report(machine) 121 | 122 | instance_id = enriched_instance['instanceId'] 123 | launch_time = enriched_instance['started'] 124 | tags = enriched_instance['tags'] 125 | public_ip_address = enriched_instance['publicIpAddress'] 126 | alert_id = "%s-%s" % ( 127 | enriched_instance.get('keyName', enriched_instance['instanceId']), 128 | enriched_instance.get("service_type", "unknown_service")) 129 | 130 | if not self.is_excluded_instance(tags) and \ 131 | public_ip_address and public_ip_address != 'null': 132 | 133 | # found a not excluded machine 134 | if public_ip_address not in chef_hosts \ 135 | and check_since <= launch_time <= check_until and instance_id not in self.status['first_seen']: 136 | 137 | # found a non-chef managed host which has not been seen before 138 | self.status['first_seen'][instance_id] = launch_time 139 | yield _create_alert(self.plugin_name, alert_id, enriched_instance) 140 | elif public_ip_address in chef_hosts: 141 | # found a chef managed EC2 host, create an event so we can run conformity checks on it 142 | chef_node = chef_hosts[public_ip_address] 143 | enriched_instance.update(_enrich_with_chef(chef_node)) 144 | yield _create_alert('chef_managed', alert_id, enriched_instance) 145 | 146 | # handle non-ec2 chef hosts 147 | ec2_public_ips = [m['publicIpAddress'] for m in ec2_instances] 148 | for public_ip, chef_node in chef_hosts.iteritems(): 149 | if public_ip not in ec2_public_ips and chef_node.get('cloud_provider') != 'ec2': 150 | # found a chef managed non-EC2 host, create an event so we can run conformity checks on it 151 | chef_details = _enrich_with_chef(chef_node) 152 | chef_details['publicIpAddress'] = public_ip 153 | 154 | yield _create_alert('chef_managed', chef_details['chef_node_name'], chef_details) 155 | -------------------------------------------------------------------------------- /plugins/elbs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | 4 | class ElasticLoadBalancerPlugin: 5 | 6 | def __init__(self): 7 | self.plugin_name = 'elbs' 8 | 9 | def init(self, edda_client, config, status): 10 | self.edda_client = edda_client 11 | self.status = status 12 | self.config = config 13 | self.allowed_elb_ports = config["allowed_ports"] if "allowed_ports" in config else [] 14 | 15 | def run(self): 16 | return list(self.do_run()) 17 | 18 | def do_run(self): 19 | elbs = self.edda_client.updateonly().query("/api/v2/aws/loadBalancers;_expand") 20 | for elb in elbs: 21 | if self.is_suspicious(elb): 22 | yield { 23 | "plugin_name": self.plugin_name, 24 | "id": elb["loadBalancerName"], 25 | "details": [self.create_details(elb)] 26 | } 27 | 28 | def is_suspicious(self, elb): 29 | for listener in elb["listenerDescriptions"]: 30 | if int(listener['listener']['loadBalancerPort']) not in self.allowed_elb_ports: 31 | return True 32 | 33 | return False 34 | 35 | def create_details(self, elb): 36 | return { 
            'canonicalHostedZoneName': elb['canonicalHostedZoneName'],
38 |             'numberOfInstances': len(elb['instances']),
39 |             'listenerDescriptions': elb['listenerDescriptions']
40 |         }
41 | 
--------------------------------------------------------------------------------
/plugins/iam.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | import re
3 | import pprint
4 | from api.eddaclient import EddaException
5 | 
6 | 
7 | class UserAddedPlugin:
8 | 
9 |     def __init__(self):
10 |         self.plugin_name = 'iam'
11 | 
12 |     def init(self, edda_client, config, status):
13 |         self.edda_client = edda_client
14 |         self.status = status
15 |         self.config = config
16 |         self.pp = pprint.PrettyPrinter(indent=4)
17 |         self.allowed = self.init_allowed_list_cache()
18 | 
19 |     def init_allowed_list_cache(self):
20 |         return list(re.compile(allowed) for allowed in self.config['allowed']) if 'allowed' in self.config else []
21 | 
22 |     def run(self):
23 |         return list(self.do_run())
24 | 
25 |     def do_run(self):
26 |         users = self.edda_client.updateonly().query("/api/v2/aws/iamUsers")
27 |         # a user might have changed several times, but
28 |         # we want to look them up only once
29 |         modified_users = list(set(users))
30 | 
31 |         for username in modified_users:
32 |             # skip allowed users
33 |             if any(regex.match(username) for regex in self.allowed):
34 |                 continue
35 |             details = []
36 |             try:
37 |                 diff = self.edda_client.raw_query("/api/v2/aws/iamUsers/%s;_diff=200" % username)
38 |                 # find group changes in diff
39 |                 m = re.findall(r'"groups" : \[([^\]]+)\]', diff)
40 |                 if m:
41 |                     added = []
42 |                     removed = []
43 |                     for match in m:
44 |                         for group in match.strip().split('\n'):
45 |                             stripped = group[1:].strip(' ",')
46 |                             if stripped and group[0] == '+':
47 |                                 added.append(stripped)
48 |                             elif stripped and group[0] == '-':
49 |                                 removed.append(stripped)
50 |                     if added:
51 |                         # alert on group addition
52 |                         details.append('Groups the user has been added to: %s' % ', '.join(added))
53 |             except EddaException as e:
54 |                 # print repr(e)
55 |                 if e.response['code'] == 400 and e.response['message'] == '_diff requires at least 2 documents, only 1 found':
56 |                     print 'Got error 400; falling back to fetching user details without diffing.'
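                    # there is no previous document to diff against, i.e. the user has just been created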
57 |                     user_object = self.pp.pformat(
58 |                         self.edda_client.updateonly().query("/api/v2/aws/iamUsers/%s" % username))
59 |                     # alert on user addition
60 |                     details.append('New user has been added: %s\n' % user_object)
61 |             # except Exception as e:
62 |             #     print 'Got unexpected exception: ', repr(e)
63 | 
64 |             if details:
65 |                 yield {
66 |                     "plugin_name": self.plugin_name,
67 |                     "id": username,
68 |                     "details": details
69 |                 }
--------------------------------------------------------------------------------
/plugins/instancetags.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | import itertools
3 | from api import instance_report
4 | 
5 | 
6 | IGNORE_TAGS = ['testapp']
7 | 
8 | 
9 | class NewInstanceTagPlugin:
10 |     def __init__(self):
11 |         self.plugin_name = 'newtag'
12 | 
13 |     def init(self, edda_client, config, status, instance_enricher):
14 |         self.edda_client = edda_client
15 |         self.status = status
16 |         self.config = config
17 |         self.instance_enricher = instance_enricher
18 | 
19 |     def run(self):
20 |         return list(self.do_run())
21 | 
22 |     def do_run(self):
23 |         machines = self.edda_client.clean().query("/api/v2/view/instances;_expand")
24 |         since = self.edda_client._since if self.edda_client._since is not None else 0
25 |         tags = [{"tag": t["value"], "started": int(m["launchTime"]), "machine": m}
26 |                 for m in machines
27 |                 for t in m["tags"] if t["key"] == "service_name"]
28 |         grouped_by_tag = itertools.groupby(sorted(tags, key=lambda e: e["tag"]), key=lambda e: e["tag"])
29 | 
30 |         for tag_name, instances in grouped_by_tag:
31 |             instances = list(instances)
32 |             if all([i["started"] >= since for i in instances]) and tag_name not in IGNORE_TAGS:
33 |                 yield {
34 |                     "plugin_name": self.plugin_name,
35 |                     "id": tag_name,
36 |                     "details": [self.instance_enricher.report(i["machine"]) for i in instances]
37 |                 }
38 | 
39 | 
40 | class MissingInstanceTagPlugin:
41 |     def __init__(self):
42 |         self.plugin_name = 'missingtag'
43 | 
44 |     def init(self, edda_client, config, status, instance_enricher):
45 |         self.edda_client = edda_client
46 |         self.status = status
47 |         self.config = config
48 |         self.instance_enricher = instance_enricher
49 | 
50 |     def run(self):
51 |         return list(self.do_run())
52 | 
53 |     def do_run(self):
54 |         machines = self.edda_client.clean().query("/api/v2/view/instances;_expand")
55 |         since = self.edda_client._since if self.edda_client._since is not None else 0
56 |         suspicious_machines = [m for m in machines if self.is_suspicious(m, since)]
57 |         for machine in suspicious_machines:
58 |             machine = self.instance_enricher.enrich(machine)
59 |             yield {
60 |                 "plugin_name": self.plugin_name,
61 |                 "id": machine.get("service_type", machine.get("instanceId")),
62 |                 "details": [instance_report(machine)]
63 |             }
64 | 
65 |     def is_suspicious(self, machine, since):
66 |         tags = [t["value"] for t in machine["tags"] if t["key"] == "service_name"]
67 |         return int(machine["launchTime"]) > since and len(tags) == 0
--------------------------------------------------------------------------------
/plugins/route53.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | 
3 | from __future__ import absolute_import
4 | 
5 | import hashlib
6 | import logging
7 | import re
8 | import urllib2
9 | from multiprocessing import Pool
10 | 
11 | from IPy import IP
12 | from chef import ChefAPI
13 | 
14 | from api.chefclient import ChefClient
15 | 
16 | 
17 | def is_ip_unknown(ip, ip_set):
18 |     return ip not in ip_set
19 | 
20 | 
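# "ec2-..." style reverse-DNS records (ec2-1-2-3-4.compute-1.amazonaws.com) embed the
# instance's public IP, so it can be extracted and checked against the known-IP set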
21 | def is_cname_unknown(record, ip_set, legit_domains):
22 |     if any(record.endswith(le) for le in legit_domains):
23 |         return False
24 | 
25 |     if record.startswith("ec2-"):
26 |         ipaddr = record[4:record.find(".compute")].replace("-", ".")
27 |         return is_ip_unknown(ipaddr, ip_set)
28 | 
29 |     return True
30 | 
31 | 
32 | def is_external(entry, ip_set, legit_domains):
33 |     aliases = [r.get("value") for r in entry.get("resourceRecords")]
34 |     return any(is_ip_unknown(r, ip_set) and is_cname_unknown(r, ip_set, legit_domains) for r in aliases)
35 | 
36 | 
37 | def is_ip_private(ip_addr):
38 |     try:
39 |         ip = IP(ip_addr)
40 |         return ip.iptype() == 'PRIVATE'
41 |     except:
42 |         return False  # not a parseable IP (e.g. a CNAME)
43 | 
44 | 
45 | def load_route53_entries(edda_client, zone=None):
46 |     zone_selector = ";zone.name=%s" % zone if zone else ""
47 |     route53_entries_raw = edda_client.clean().query("/api/v2/aws/hostedRecords%s;_expand" % zone_selector)
48 |     route53_entries_dict = {e.get("name"): e for e in route53_entries_raw}  # make it distinct
49 |     return route53_entries_dict.values()
50 | 
51 | 
52 | route53_changed_no_content_regexes = [
53 |     re.compile(r"NoSuchBucket|NoSuchKey|NoSuchVersion"),  # NoSuch error messages from S3
54 |     re.compile(r"[Ee]xpir(ed|y|es)"),  # expiry messages
55 |     re.compile(r"not exists?")  # generic does not exist
56 | ]
57 | 
58 | 
59 | def page_process_for_route53changed(location):
60 |     try:
61 |         if location.endswith("."):
62 |             location = location[:-1]
63 |         page_content = urllib2.urlopen(location, timeout=3).read()
64 |         page_top = page_content[255:512]
65 |         if len(page_content) <= 255:
66 |             page_top = page_content
67 | 
68 |         matches = []
69 |         for regex in route53_changed_no_content_regexes:
70 |             if re.search(regex, page_content[:2048]):
71 |                 matches.append(regex.pattern)
72 | 
73 |         return location, {"hash": hashlib.sha224(page_top).hexdigest(), "matches": matches}
74 |     except:
75 |         return location, {"hash": "-", "matches": []}
76 | 
77 | 
78 | class Route53Unknown:
79 |     def __init__(self):
80 |         self.plugin_name = 'route53unknown'
81 |         self.logger = logging.getLogger(self.plugin_name)
82 | 
83 |     def init(self, edda_client, config, status):
84 |         self.edda_client = edda_client
85 |         self.config = config
86 |         self.status = status
87 |         self.chef_api = self._initialize_chef_api()
88 |         self._initialize_status()
89 | 
90 |     def _initialize_chef_api(self):
91 |         try:
92 |             return ChefAPI(self.config['chef_server_url'], self.config['client_key_file'], self.config['client_name'])
93 |         except:
94 |             self.logger.exception('Failed to open config file: %s', self.config['client_key_file'])
95 |             return None
96 | 
97 |     def _initialize_status(self):
98 |         if 'known' not in self.status:
99 |             self.status['known'] = []
100 | 
101 |     def run(self):
102 |         registered_ips = self.load_known_ips()
103 |         legit_domains = self.config.get("legit_domains", [])
104 |         route53_zone = self.config.get("zone")
105 |         route53_entries = load_route53_entries(self.edda_client, route53_zone)
106 |         external_entries = [e for e in route53_entries
107 |                             if e.get("type") in ("A", "CNAME") and is_external(e, registered_ips, legit_domains)]
108 |         alerts = []
109 |         for e in external_entries:
110 |             records = [r.get("value") for r in e.get("resourceRecords")]
111 |             for r in records:
112 |                 if is_ip_private(r):
113 |                     continue  # private addresses are not interesting here
114 |                 if is_ip_unknown(r, registered_ips) and is_cname_unknown(r, registered_ips, legit_domains):
115 |                     alerts.append((e.get("name", ""), r))
116 |         alerts_filtered = [a for a in alerts if ("%s-%s" % a) not in self.status['known']]
117 |         self.status['known'] =
["%s-%s" % a for a in alerts] 118 | for a in alerts_filtered: 119 | yield { 120 | "plugin_name": self.plugin_name, 121 | "id": a[0], 122 | "details": [a[1]] 123 | } 124 | 125 | def load_known_ips(self): 126 | self.logger.debug("Loading public IP list from chef") 127 | 128 | requested_node_attributes = { 129 | 'cloud_public_ips': ['cloud', 'public_ips'], 130 | 'network_interfaces': ['network', 'interfaces'], 131 | } 132 | 133 | nodes = ChefClient(self.chef_api, self.plugin_name).search_chef_hosts(requested_node_attributes) 134 | 135 | cloud_ips = [node.get("cloud_public_ips", []) for node in nodes if node.get("cloud_public_ips")] 136 | network_interface_list = [node.get("network_interfaces", {}).values() for node in nodes 137 | if node.get('network_interfaces')] 138 | phy_ifaces = sum(network_interface_list, []) 139 | phy_ips = [i.get("addresses", {}).keys() for i in phy_ifaces] 140 | self.logger.debug("Loading public IP list from AWS") 141 | aws_machines = self.edda_client.soft_clean().query("/api/v2/view/instances;_expand") 142 | aws_ips = [m.get("publicIpAddress") for m in aws_machines] 143 | return set(sum(cloud_ips, []) + sum(phy_ips, []) + aws_ips) # flatten 144 | 145 | 146 | class Route53Changed: 147 | def __init__(self): 148 | self.plugin_name = 'route53changed' 149 | self.logger = logging.getLogger(self.plugin_name) 150 | 151 | def init(self, edda_client, config, status): 152 | self.edda_client = edda_client 153 | self.config = config 154 | self.status = status 155 | self._initialize_status() 156 | 157 | def _initialize_status(self): 158 | if 'hashes' not in self.status: 159 | self.status['hashes'] = {} 160 | 161 | def run(self): 162 | ips = self.load_aws_ips() 163 | legit_domains = self.config.get("legit_domains", []) 164 | exempts = self.config.get("exception_domains", []) 165 | dns_names = self.load_known_dns() 166 | # using bugbounty.prezi.com for testing 167 | not_aws = {name: entry for name, entry in dns_names.iteritems() 168 | if 169 | name == "bugbounty.prezi.com." or (is_external(entry, ips, legit_domains) and name not in exempts)} 170 | locations_http = ["http://%s" % name for name in not_aws.keys()] 171 | locations_https = ["https://%s" % name for name in not_aws.keys()] 172 | locations = list(locations_http + locations_https) 173 | self.logger.info("fetching %d urls on 16 threads" % len(locations)) 174 | hashed_items = Pool(16).map(page_process_for_route53changed, locations) 175 | hashes = dict(hashed_items) 176 | old_hashes = self.status.get("hashes", {}) 177 | 178 | def backward_compatible_hash_compare(location, old_hashes, new_hash): 179 | if type(old_hashes[location]) is dict: 180 | # according to the new JSON schema. e.g.: 181 | # { "https://something.prezi.com": { "hash": "...", "matches": [ ... ] }, ... } 182 | return old_hashes[location]["hash"] != new_hash["hash"] 183 | else: 184 | # according to the old JSON schema, e.g.: 185 | # { "https://something.prezi.com": "sha224", ... } 186 | return old_hashes[location] != new_hash 187 | 188 | hash_changed_alerts = {loc: h for loc, h in hashes.iteritems() if 189 | loc not in old_hashes or backward_compatible_hash_compare(loc, old_hashes, h)} 190 | self.status["hashes"] = hashes 191 | for location, info in hash_changed_alerts.iteritems(): 192 | dns_name = "%s." 
% re.search('http[s]*://(.*)', location).group(1) 193 | dns_entry_info = not_aws.get(dns_name, "no_dns_entry_info_found") 194 | 195 | if len(info["matches"]) > 0: 196 | yield { 197 | "plugin_name": self.plugin_name, 198 | "id": location, 199 | "details": ("location matches the following regexes: %s" % info[ 200 | "matches"] + "\n (S3 bucket removed or domain expired?) \n dns_entry_info: %s" % dns_entry_info,) 201 | } 202 | 203 | def load_aws_ips(self): 204 | aws_machines = self.edda_client.soft_clean().query("/api/v2/view/instances;_expand") 205 | return [m.get("publicIpAddress") for m in aws_machines] 206 | 207 | def load_known_dns(self): 208 | route53_zone = self.config.get("zone") 209 | entries = load_route53_entries(self.edda_client, route53_zone) 210 | name_map = {e.get("name"): e for e in entries} 211 | return name_map 212 | -------------------------------------------------------------------------------- /plugins/s3acl.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import logging 4 | import random 5 | import re 6 | 7 | import boto 8 | from boto.exception import S3ResponseError 9 | from boto.s3.connection import S3Connection 10 | from boto.s3.key import Key 11 | 12 | 13 | if not boto.config.has_section('Boto'): 14 | boto.config.add_section('Boto') 15 | boto.config.set('Boto', 'http_socket_timeout', '10') 16 | 17 | 18 | class S3AclPlugin: 19 | def __init__(self): 20 | self.plugin_name = 's3acl' 21 | self.logger = logging.getLogger('s3acl') 22 | logging.getLogger("boto").disabled = True 23 | 24 | def init(self, edda_client, config, status): 25 | self.config = config 26 | self.edda_client = edda_client 27 | self.conn = S3Connection(self.config['user'], self.config['key'], calling_format='boto.s3.connection.ProtocolIndependentOrdinaryCallingFormat') 28 | self.p = config['visit_probability'] if 'visit_probability' in config else 0.1 29 | self.maxdir = config['visit_max'] if 'visit_max' in config else 5 30 | self.excluded_buckets = self.init_cache_from_list_in_config('excluded_buckets') 31 | self.excluded_keys = self.init_cache_from_list_in_config('excluded_keys') 32 | self.allowed = config['allowed'] if 'allowed' in config else [] 33 | self.allowed_specific = config['allowed_specific'] if 'allowed_specific' in config else {} 34 | 35 | def init_cache_from_list_in_config(self, cache_name): 36 | return list(re.compile(rule_item) for rule_item in self.config[cache_name]) if cache_name in self.config else [] 37 | 38 | def run(self): 39 | return list(self.do_run(self.conn)) 40 | 41 | def get_region_aware_buckets(self, buckets): 42 | for bucket in buckets: 43 | try: 44 | location = bucket.get_location() 45 | if not location: 46 | location = 'us-east-1' 47 | elif location == 'EU': 48 | location = 'eu-west-1' 49 | conn = boto.s3.connect_to_region(location, 50 | aws_access_key_id=self.config['user'], 51 | aws_secret_access_key=self.config['key'], 52 | calling_format='boto.s3.connection.ProtocolIndependentOrdinaryCallingFormat') 53 | 54 | yield conn.get_bucket(bucket.name) 55 | except S3ResponseError as e: 56 | if e.status != 404: 57 | self.logger.exception("Failed to get bucket location for %s", bucket.name) 58 | 59 | def do_run(self, conn): 60 | buckets = list(self.filter_excluded_buckets(self.get_region_aware_buckets(conn.get_all_buckets()))) 61 | 62 | for b in self.sample_population(buckets): 63 | bucket_alerts = self.suspicious_bucket_grants(b) 64 | if bucket_alerts: 65 | yield { 66 | "plugin_name": self.plugin_name, 67 | 
"id": "%s" % (b.name), 68 | "url": "https://s3.amazonaws.com/%s" % (b.name), 69 | "details": bucket_alerts 70 | } 71 | keys = self.filter_excluded_keys(self.traverse_bucket(b, "")) 72 | for k in keys: 73 | object_alerts = self.suspicious_object_grants(k) 74 | if object_alerts: 75 | yield { 76 | "plugin_name": self.plugin_name, 77 | "id": "%s:%s" % (k.bucket.name, k.name), 78 | "url": "https://s3.amazonaws.com/%s/%s" % (k.bucket.name, k.name), 79 | "details": object_alerts 80 | } 81 | 82 | def filter_excluded_buckets(self, buckets): 83 | return [bs for bs in buckets if not any(regex.match(bs.name) for regex in self.excluded_buckets)] 84 | 85 | def filter_excluded_keys(self, keys): 86 | return [key for key in keys if 87 | not any(regex.match('%s:%s' % (key.bucket.name, key.name)) for regex in self.excluded_keys)] 88 | 89 | def traverse_bucket(self, b, prefix): 90 | self.logger.debug("traverse_bucket('%s', '%s')" % (b.name, prefix)) 91 | try: 92 | elems = list(b.list(prefix, "/")) # we'll iterate twice 93 | keys = list([e for e in elems if isinstance(e, Key)]) 94 | prefix_names = [e.name for e in elems if not isinstance(e, Key)] 95 | selected_prefixes = self.sample_population(prefix_names, sum(c == '/' for c in prefix)) 96 | selected_keys = self.sample_population(keys) 97 | 98 | for sp in selected_prefixes: 99 | selected_keys.extend(self.traverse_bucket(b, sp)) 100 | return selected_keys 101 | except S3ResponseError as e: 102 | self.logger.exception("S3 error: %s:%s %s", b.name, prefix, e.message) 103 | return [] 104 | 105 | def sample_population(self, population, offset=0): 106 | pnl = len(population) 107 | k = int(min(max(1, self.maxdir - offset), max(1, pnl * self.p))) 108 | return [] if pnl == 0 else random.sample(population, k) 109 | 110 | def suspicious_grants(self, acp, bucket_name): 111 | grants = acp.acl.grants if acp is not None else [] 112 | allowed = list(self.allowed) 113 | 114 | if bucket_name in self.allowed_specific: 115 | allowed.extend(self.allowed_specific[bucket_name]) 116 | 117 | return ["%s %s" % (g.id or g.uri or 'Everyone', g.permission) for g in grants if self.is_suspicious(g, allowed)] 118 | 119 | def suspicious_object_grants(self, key): 120 | try: 121 | acp = key.get_acl() 122 | return self.suspicious_grants(acp, key.bucket.name) 123 | except S3ResponseError as e: 124 | if e.error_code != 'NoSuchKey': 125 | self.logger.exception("ACL fetching error: %s %s %s", key.name, e.message, e.error_code) 126 | return [] 127 | 128 | def suspicious_bucket_grants(self, bucket): 129 | try: 130 | acp = bucket.get_acl() 131 | return self.suspicious_grants(acp, bucket.name) 132 | except S3ResponseError as e: 133 | self.logger.exception("ACL fetching error: %s %s %s", bucket.name, e.message, e.error_code) 134 | return [] 135 | 136 | def is_suspicious(self, grant, allowed): 137 | actual_op = grant.permission 138 | if grant.type == u'Group': 139 | actual_group_id = grant.uri 140 | actual_user_id = None 141 | else: 142 | actual_group_id = None 143 | actual_user_id = grant.id if grant.id is not None else '*' 144 | 145 | for allowed_rule in allowed: 146 | allowed_user_id = allowed_rule.get('uid') 147 | allowed_group_id = allowed_rule.get('gid') 148 | allowed_op = allowed_rule['op'] 149 | 150 | is_allowed_user = (False if actual_user_id is None else actual_user_id == allowed_user_id) 151 | is_allowed_group = (False if actual_group_id is None else actual_group_id == allowed_group_id) 152 | is_allowed_op = (actual_op == allowed_op) 153 | 154 | if is_allowed_op and (is_allowed_user or 
is_allowed_group): 155 | return False 156 | return True 157 | -------------------------------------------------------------------------------- /plugins/secgroups.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import socket 4 | import string 5 | from netaddr import IPNetwork, AddrFormatError 6 | 7 | 8 | class SecurityGroupPlugin: 9 | def __init__(self): 10 | self.plugin_name = 'secgroups' 11 | self.allowed_protocols = ["icmp"] 12 | self.allowed_ports = [] 13 | self.whitelisted_ips = [] 14 | self.whitelisted_entries = {} 15 | 16 | def init(self, edda_client, config, status): 17 | self.edda_client = edda_client 18 | self.status = status 19 | if "allowed_protocols" in config: 20 | self.allowed_protocols = config["allowed_protocols"] 21 | if "allowed_ports" in config: 22 | self.allowed_ports = config["allowed_ports"] 23 | if "whitelisted_entries" in config: 24 | self.whitelisted_entries = config["whitelisted_entries"] 25 | if "whitelisted_ips" in config: 26 | self.whitelisted_ips = list(self.create_network_objects_from_valid_ips(config["whitelisted_ips"])) 27 | 28 | def create_network_objects_from_valid_ips(self, ip_range_list): 29 | for ip_range in ip_range_list: 30 | try: 31 | yield IPNetwork(ip_range) 32 | except AddrFormatError: 33 | pass 34 | 35 | def run(self): 36 | return list(self.do_run()) 37 | 38 | def do_run(self): 39 | groups = self.edda_client.updateonly().query("/api/v2/aws/securityGroups;_expand") 40 | machines = self.edda_client.query("/api/v2/view/instances;_expand") 41 | for security_group in groups: 42 | perms = list(self.suspicious_perms(security_group)) 43 | if perms: 44 | yield { 45 | "plugin_name": self.plugin_name, 46 | "id": '%s (%s)' % (security_group["groupId"], security_group["groupName"]), 47 | "details": list(self.create_details(perms, machines, security_group)) 48 | } 49 | 50 | def machines_with_group(self, machines, groupId): 51 | return [machine for machine in machines if self.machine_in_group(machine, groupId)] 52 | 53 | def machine_in_group(self, machine, groupId): 54 | return "securityGroups" in machine and any([sg["groupId"] == groupId for sg in machine["securityGroups"]]) 55 | 56 | def is_whitelisted_perm(self, security_group, perm): 57 | port = str(perm["fromPort"]) if perm["fromPort"] == perm["toPort"] else '{fromPort}-{toPort}'.format( 58 | fromPort=perm["fromPort"], toPort=perm["toPort"]) 59 | entry_name = '{sg_id} ({sg_name})'.format(sg_id=security_group["groupId"], sg_name=security_group["groupName"]) 60 | whitelisted_ip_ranges = self.whitelisted_entries.get(entry_name, {}).get(port, []) 61 | return bool(whitelisted_ip_ranges and all( 62 | [actual_ip_range in whitelisted_ip_ranges for actual_ip_range in perm.get("ipRanges", [])])) 63 | 64 | def suspicious_perms(self, security_group): 65 | perms = security_group.get("ipPermissions", []) 66 | for perm in perms: 67 | if not self.is_whitelisted_perm(security_group, perm) and self.is_suspicious_permission(perm): 68 | yield perm 69 | 70 | def is_suspicious_ip_range(self, ip_range): 71 | return not any([IPNetwork(ip_range) in allowed_range for allowed_range in self.whitelisted_ips]) 72 | 73 | def is_suspicious_permission(self, perm): 74 | # fromPort and toPort defines a range for incoming connections 75 | # note: fromPort is not the peer's src port 76 | proto_ok = "ipProtocol" in perm and perm["ipProtocol"] in self.allowed_protocols 77 | iprange_nok = "ipRanges" in perm and any( 78 | [self.is_suspicious_ip_range(ip_range) for ip_range in 
perm["ipRanges"]]) 79 | if (not proto_ok) and iprange_nok: 80 | f = int(perm["fromPort"] if "fromPort" in perm and perm["fromPort"] is not None else -1) 81 | t = int(perm["toPort"] if "toPort" in perm and perm["toPort"] is not None else 65536) 82 | # allowing port range is considered to be suspicious 83 | return f != t or f not in self.allowed_ports 84 | return False 85 | 86 | def create_details(self, perms, machines, group): 87 | affected_machines = self.machines_with_group(machines, group["groupId"]) 88 | aws_availability_zone = '' if not affected_machines else affected_machines[0]['placement']['availabilityZone'] 89 | aws_region = aws_availability_zone.rstrip(string.ascii_lowercase) 90 | aws_account = group['ownerId'] 91 | for perm in perms: 92 | mproc = [(m["instanceId"], m["publicIpAddress"] or m["privateIpAddress"], 93 | ",".join([t["value"] for t in m["tags"]])) 94 | for m in affected_machines] 95 | yield { 96 | 'port_open': len(mproc) > 0 and self.is_port_open(mproc[0][1], perm['fromPort'], perm['toPort']), 97 | 'ipAddresses': [m[1] for m in mproc if m[1]], 98 | 'machines': ["%s (%s): %s" % m for m in mproc], 99 | 'fromPort': perm['fromPort'], 100 | 'ipRanges': perm['ipRanges'], 101 | 'toPort': perm['toPort'], 102 | 'ipProtocol': perm['ipProtocol'], 103 | 'awsRegion': aws_region, 104 | 'awsAccount': aws_account 105 | } 106 | 107 | def is_port_open(self, host, port_from, port_to): 108 | if host and port_to and port_from and 0 <= port_to <= 65535 and 0 <= port_from <= 65535: 109 | if abs(port_to - port_from) > 20: 110 | return None 111 | for port_to_check in range(int(port_from), int(port_to) + 1): 112 | try: 113 | print 'connecting' 114 | # default timeout is 3 sec 115 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 116 | s.settimeout(3) 117 | s.connect((host, port_to_check)) 118 | s.close() 119 | return True 120 | except (socket.timeout, socket.error) as e: 121 | print 'got exception', e 122 | pass 123 | return False 124 | -------------------------------------------------------------------------------- /plugins/sso.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import urllib 3 | from multiprocessing import Pool 4 | 5 | import re 6 | import requests 7 | from .route53 import load_route53_entries, is_external 8 | 9 | 10 | def fetch_url(location): 11 | try: 12 | resp = requests.get(location, allow_redirects=False, timeout=3) 13 | return location, {'code': resp.status_code, 'headers': resp.headers} 14 | except: 15 | pass 16 | 17 | return location, None 18 | 19 | 20 | def one_starts_with_another(one, two): 21 | return one.startswith(two) or two.startswith(one) 22 | 23 | 24 | class BaseClass: 25 | PROCESSING_POOL_SIZE = 8 26 | 27 | def __init__(self): 28 | pass 29 | 30 | def run(self): 31 | raise NotImplementedError() 32 | 33 | def load_aws_ips(self): 34 | aws_machines = self.edda_client.soft_clean().query("/api/v2/view/instances;_expand") 35 | return [m.get("publicIpAddress") for m in aws_machines] 36 | 37 | def load_known_dns(self): 38 | route53_zone = self.config.get("zone") 39 | entries = load_route53_entries(self.edda_client, route53_zone) 40 | name_map = {e.get("name"): e for e in entries} 41 | return name_map 42 | 43 | def get_all_my_domains(self): 44 | ips = self.load_aws_ips() 45 | legit_domains = self.config.get("legit_domains", []) 46 | exempts = self.config.get("exception_domains", []) 47 | dns_names = self.load_known_dns() 48 | return [name.rstrip('.') for name, entry in dns_names.iteritems() 49 | if 
is_external(entry, ips, legit_domains) and name not in exempts] 50 | 51 | def get_all_my_domains_response(self): 52 | all_my_domains = self.get_all_my_domains() 53 | locations_http = ["http://%s" % name for name in all_my_domains] 54 | locations_https = ["https://%s" % name for name in all_my_domains] 55 | locations = list(locations_http + locations_https) 56 | 57 | self.logger.info("fetching %d urls on %d threads" % (len(locations), self.PROCESSING_POOL_SIZE)) 58 | 59 | processing_pool = Pool(self.PROCESSING_POOL_SIZE) 60 | result = {url: resp for url, resp in processing_pool.map(fetch_url, locations) if resp} 61 | processing_pool.close() 62 | return result 63 | 64 | 65 | class SSOUnprotected(BaseClass): 66 | UNPROTECTED = 'unprotected' 67 | SSO_URL = None 68 | GODAUTH_URL = None 69 | VALID_URL_RE = re.compile(r'https?://(.*)/?', re.IGNORECASE) 70 | 71 | def __init__(self): 72 | self.plugin_name = 'sso_unprotected' 73 | self.logger = logging.getLogger(self.plugin_name) 74 | 75 | def init(self, edda_client, config, status): 76 | self.edda_client = edda_client 77 | self.config = config 78 | self.status = status 79 | self._initialize_status() 80 | 81 | def _initialize_status(self): 82 | self.GODAUTH_URL = re.compile(self.config['godauth_url'], re.IGNORECASE) 83 | self.SSO_URL = re.compile(self.config['sso_url'], re.IGNORECASE) 84 | if 'redirects' not in self.status: 85 | self.status['redirects'] = {} 86 | 87 | def get_successful_responses(self, responses): 88 | return {url: urllib.unquote(response['headers'].get('location', '')) 89 | for url, response in responses.iteritems() 90 | if response['code'] < 400} 91 | 92 | def run(self): 93 | responses = self.get_all_my_domains_response() 94 | redirects = self.get_successful_responses(responses) 95 | 96 | old_redirects = self.status.get("redirects", {}) 97 | alerts = { 98 | loc: redirect_url for loc, redirect_url in redirects.iteritems() 99 | if loc not in old_redirects or old_redirects[loc] != redirect_url 100 | } 101 | self.status["redirects"] = redirects 102 | for tested_url, location_header in alerts.iteritems(): 103 | tested_url_domain_re = self.VALID_URL_RE.match(tested_url) 104 | 105 | if not tested_url_domain_re: 106 | self.logger.error('Invalid tested URL or location header: %s %s', tested_url, location_header) 107 | else: 108 | tested_domain = tested_url_domain_re.group(1) 109 | godauth_match = self.GODAUTH_URL.match(location_header) 110 | sso_match = self.SSO_URL.match(location_header) 111 | 112 | # full HTTPS site or redirects to SSO URLs 113 | if location_header.startswith('https://' + tested_domain) or \ 114 | (godauth_match and godauth_match.group(1) == tested_domain) or \ 115 | (sso_match and sso_match.group(1) == tested_domain): 116 | continue 117 | 118 | yield { 119 | "plugin_name": self.plugin_name, 120 | "id": tested_url, 121 | "details": list( 122 | ["This domain (%s) is neither behind SSO nor GODAUTH because it redirects to %s" % ( 123 | tested_url, location_header)]) 124 | } 125 | 126 | 127 | class SecurityHeaders(BaseClass): 128 | def __init__(self): 129 | self.plugin_name = 'security_headers' 130 | self.logger = logging.getLogger(self.plugin_name) 131 | 132 | def init(self, edda_client, config, status): 133 | self.edda_client = edda_client 134 | self.config = config 135 | self.status = status 136 | self.already_checked = self.status.setdefault('already_checked', []) 137 | 138 | def run(self): 139 | for location, response in self.get_all_my_domains_response().iteritems(): 140 | if location not in self.already_checked and
\ 141 | not response['headers'].get('x-frame-options') and 200 <= response['code'] < 300: 142 | self.already_checked.append(location) 143 | yield { 144 | "plugin_name": self.plugin_name, 145 | "id": location, 146 | "details": list(["This webpage (%s) does not have X-Frame-Options header" % location]) 147 | } 148 | self.status['already_checked'] = self.already_checked 149 | -------------------------------------------------------------------------------- /reddalert.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import json 3 | import time 4 | import calendar 5 | import sys 6 | 7 | from lockfile import LockFile, LockTimeout 8 | from sentry_processors import RemoveLockProcessor 9 | 10 | 11 | class Reddalert: 12 | def __init__(self, logger): 13 | self.logger = logger 14 | 15 | @staticmethod 16 | def get_since(since): 17 | if since: 18 | try: 19 | # return UTC timestamp in milliseconds 20 | # TODO: support millisecond string format as well? 21 | return calendar.timegm(time.strptime(since, "%Y-%m-%d %H:%M:%S")) * 1000 22 | except ValueError: 23 | if since.isdigit(): 24 | # consider it as an epoch timestamp 25 | return int(since) 26 | except: 27 | pass 28 | return None 29 | 30 | @staticmethod 31 | def load_json(json_file, logger): 32 | if json_file is not None: 33 | try: 34 | with open(json_file, 'r') as config_data: 35 | return json.load(config_data) 36 | except IOError: 37 | logger.exception("Failed to read file '%s'", json_file) 38 | except ValueError: 39 | logger.exception("Invalid JSON file '%s'", json_file) 40 | return {} 41 | 42 | @staticmethod 43 | def save_json(json_file, content, logger): 44 | if not content: 45 | logger.warning('Got empty JSON content, not updating status file!') 46 | return 47 | 48 | if json_file is not None: 49 | try: 50 | with open(json_file, 'w') as out_data: 51 | json.dump(content, out_data, indent=4) 52 | return True 53 | except IOError: 54 | logger.exception("Failed to write file '%s'", json_file) 55 | return False 56 | 57 | @staticmethod 58 | def get_config(key, config, arg=None, default=None): 59 | if arg is not None: 60 | return arg 61 | if key in config: 62 | return config[key] 63 | return default 64 | 65 | 66 | if __name__ == '__main__': 67 | import argparse 68 | import logging 69 | from api import EddaClient, Coordinator, Alerter 70 | from plugins import plugin_list 71 | 72 | parser = argparse.ArgumentParser(description='Runs tests against AWS configuration') 73 | parser.add_argument('--configfile', '-c', default='etc/configfile.json', help='Configuration file') 74 | parser.add_argument('--statusfile', '-f', default='etc/statusfile.json', help='Persistent store between runs') 75 | parser.add_argument('--since', '-s', default=None, 76 | help='Override statusfile, epoch in ms, Y-m-d H-M-S format or file') 77 | # hack to avoid race condition within EDDA: it's possible instances are synced while eg security groups aren't. 
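# (the default below means "five minutes before now": int(time.time()) is the
# current epoch in seconds, * 1000 converts it to milliseconds, and
# 5 * 60 * 1000 ms are then subtracted to keep the freshest, possibly
# half-synced EDDA data out of the queried window)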
78 | parser.add_argument('--until', '-u', default=int(time.time()) * 1000 - 5 * 60 * 1000, help='Until, epoch in ms') 79 | parser.add_argument('--store-until', action="count", help='Use file in --since to store back the until epoch') 80 | parser.add_argument('--edda', '-e', default=None, help='Edda base URL') 81 | parser.add_argument('--sentry', default=None, help='Sentry url with user:pass (optional)') 82 | parser.add_argument('--output', '-o', default=None, 83 | help='Comma-separated list of outputs to use (stdout,stdout_tabsep,mail_txt,mail_html,elasticsearch)') 84 | parser.add_argument('--silent', '-l', action="count", help='Suppress log messages lower than warning') 85 | parser.add_argument('rules', metavar='rule', nargs='*', default=plugin_list.keys(), help='Rules to check') 86 | args = parser.parse_args() 87 | 88 | root_logger = logging.getLogger() 89 | 90 | # create formatter and add it to the handlers 91 | formatter = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s') 92 | # create console handler with a higher log level 93 | ch = logging.StreamHandler() 94 | ch.setFormatter(formatter) 95 | # Setup logger output 96 | root_logger.addHandler(ch) 97 | 98 | # Suppress logging 99 | if args.silent: 100 | root_logger.setLevel(logging.WARNING) 101 | else: 102 | root_logger.setLevel(logging.DEBUG) 103 | 104 | root_logger.info('Called with %s', args) 105 | 106 | if args.sentry: 107 | from raven import Client 108 | from raven.handlers.logging import SentryHandler 109 | 110 | client = Client(args.sentry) 111 | client.processors += ('sentry_processors.RemoveLockProcessor', ) 112 | 113 | handler = SentryHandler(client) 114 | handler.setLevel(logging.ERROR) 115 | root_logger.addHandler(handler) 116 | 117 | try: 118 | RemoveLockProcessor.lock_file = LockFile(args.statusfile) 119 | RemoveLockProcessor.lock_file.acquire(timeout=3) 120 | root_logger.debug("Lock file not found, creating %s.lock" % RemoveLockProcessor.lock_file.path) 121 | except LockTimeout as e: 122 | root_logger.critical('Locked, script running...
exiting.') 123 | sys.exit() 124 | 125 | # Load configuration: 126 | config = Reddalert.load_json(args.configfile, root_logger) 127 | 128 | # Load data from previous run: 129 | status = Reddalert.load_json(args.statusfile, root_logger) 130 | since = Reddalert.get_config('since', status, Reddalert.get_since(args.since), 0) 131 | 132 | # Setup EDDA client 133 | edda_url = Reddalert.get_config('edda', config, args.edda, 'http://localhost:8080/edda') 134 | edda_client = EddaClient(edda_url).since(since).until(args.until) 135 | 136 | # Setup the alerter 137 | output_targets = Reddalert.get_config('output', config, args.output, 'stdout') 138 | alerter = Alerter(output_targets) 139 | 140 | # Setup the Coordinator 141 | coordinator = Coordinator(edda_client, alerter, config, status) 142 | 143 | # Run checks 144 | for plugin in [plugin_list[rn] for rn in args.rules if rn in plugin_list]: 145 | root_logger.info('run_plugin: %s', plugin.plugin_name) 146 | coordinator.run(plugin) 147 | 148 | # Send alerts 149 | alerter.send_alerts(config) 150 | 151 | # Save results 152 | if Reddalert.get_config('store-until', config, args.store_until, False): 153 | status['since'] = args.until 154 | Reddalert.save_json(args.statusfile, status, root_logger) 155 | 156 | root_logger.info("Reddalert finished successfully.") 157 | if RemoveLockProcessor.lock_file: 158 | RemoveLockProcessor.lock_file.release() 159 | -------------------------------------------------------------------------------- /requirements-test.txt: -------------------------------------------------------------------------------- 1 | -r requirements.txt 2 | httpretty==0.8.8 3 | mock==1.0.1 4 | nose==1.3.0 5 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | argparse==1.2.1 2 | boto==2.45.0 3 | elasticsearch==2.1.0 4 | requests==2.6.0 5 | lockfile==0.10.2 6 | raven==5.26 7 | ipy==0.83 8 | pychef==0.2.3 9 | netaddr==0.7.19 10 | -------------------------------------------------------------------------------- /sentry_processors.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | from raven.processors import Processor 4 | 5 | 6 | class RemoveLockProcessor(Processor): 7 | lock_file = None 8 | 9 | def process(self, data, **kwargs): 10 | level_name = 'FATAL' if data.get('level', 0) == 'fatal' else logging.getLevelName(data.get('level', 0)) 11 | 12 | if not RemoveLockProcessor.lock_file or not RemoveLockProcessor.lock_file.is_locked(): 13 | return data 14 | 15 | if level_name == 'FATAL': 16 | RemoveLockProcessor.lock_file.break_lock() 17 | logging.getLogger().info("Sentry caught a %s message - lock file removed", level_name) 18 | else: 19 | logging.getLogger().info("Sentry caught a %s message - lock file WAS NOT removed", level_name) 20 | 21 | return data 22 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prezi/reddalert/c6748999a4015c305fccfd596adefbbe883ad19f/tests/__init__.py -------------------------------------------------------------------------------- /tests/api/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prezi/reddalert/c6748999a4015c305fccfd596adefbbe883ad19f/tests/api/__init__.py
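For orientation before the test suites: a sketch of the configuration dict that Reddalert.load_json is expected to return for --configfile. Only the top-level keys edda, output and store-until are read directly in the __main__ block above; the nested sections mirror options that SecurityGroupPlugin.init and S3AclPlugin.init read earlier in this dump, and the plugin.<name> key layout is an assumption about how the Coordinator (not included here) routes per-plugin options. All concrete values are placeholders.

# Illustrative only: values are placeholders, not a shipped template.
example_config = {
    "edda": "http://localhost:8080/edda",   # read via get_config('edda', ...)
    "output": "stdout,elasticsearch",       # read via get_config('output', ...)
    "store-until": True,                    # read via get_config('store-until', ...)
    "plugin.secgroups": {                   # keys consumed by SecurityGroupPlugin.init
        "allowed_protocols": ["icmp"],
        "allowed_ports": [80, 443],
        "whitelisted_ips": ["10.0.0.0/8"]
    },
    "plugin.s3acl": {                       # keys consumed by S3AclPlugin.init
        "user": "<aws-access-key-id>",
        "key": "<aws-secret-key>",
        "visit_probability": 0.1,
        "visit_max": 5
    }
}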
-------------------------------------------------------------------------------- /tests/api/test_alerting.py: -------------------------------------------------------------------------------- 1 | import StringIO 2 | import unittest 3 | from mock import patch, Mock, call 4 | 5 | from api.alerter import Alerter 6 | from api.alerter import EmailAlertSender 7 | from api.alerter import ESAlertSender 8 | from api.alerter import StdOutAlertSender 9 | 10 | 11 | class StdOutSenderTestCase(unittest.TestCase): 12 | def setUp(self): 13 | self.alerts = [ 14 | ("simple", "_1_", "simple text"), 15 | ("simple", "_2_", "array1"), 16 | ("simple", "_2_", "array2"), 17 | ("complex", "_3_", {"foo": "bar"}), 18 | ("complex", "_3_", {"foo": "woo"}) 19 | ] 20 | 21 | def test_normal_output(self): 22 | ow = StringIO.StringIO() 23 | StdOutAlertSender(False, ow).send_alerts({}, self.alerts) 24 | c = ow.getvalue() 25 | 26 | self.assertIn("Rule: simple", c) 27 | self.assertIn("Alert: simple text", c) 28 | self.assertIn("Alert: array2", c) 29 | self.assertIn("foo", c) 30 | 31 | 32 | class EmailSenderTestCase(unittest.TestCase): 33 | def setUp(self): 34 | self.alerts = [ 35 | ("simple", "_1_", "simple text"), 36 | ("simple", "_2_", "array1"), 37 | ("simple", "_2_", "array2"), 38 | ("complex", "_3_", {"foo": "bar"}), 39 | ("complex", "_3_", {"foo": "woo"}) 40 | ] 41 | 42 | def test_defaults_applied(self): 43 | eas = EmailAlertSender() 44 | eas.send_email = Mock() 45 | 46 | eas.send_alerts({}, self.alerts) 47 | 48 | calls = eas.send_email.call_args 49 | self.assertEquals("reddalert@localhost", calls[0][0]) 50 | 51 | def test_defaults_override(self): 52 | eas = EmailAlertSender() 53 | eas.send_email = Mock() 54 | 55 | eas.send_alerts({"email_from": "foobar"}, self.alerts) 56 | 57 | calls = eas.send_email.call_args 58 | self.assertEquals("foobar", calls[0][0]) 59 | 60 | def test_email_text(self): 61 | eas = EmailAlertSender() 62 | eas.send_email = Mock() 63 | 64 | eas.send_alerts({}, self.alerts) 65 | 66 | calls = eas.send_email.call_args 67 | self.assertIn("Rule: simple", calls[0][3]) 68 | self.assertIn("Alert: simple text", calls[0][3]) 69 | self.assertIn("Alert: array2", calls[0][3]) 70 | self.assertIn("foo", calls[0][3]) 71 | 72 | 73 | class AlerterTestCase(unittest.TestCase): 74 | def setUp(self): 75 | self.alerts = [ 76 | {"plugin_name": "simple", "id": "_1_", "details": ["simple text"]}, 77 | {"plugin_name": "simple", "id": "_1_", "details": ["simple text"]}, 78 | {"plugin_name": "simple", "id": "_2_", "details": ["array1", "array2"]}, 79 | {"plugin_name": "complex", "id": "_3_", "details": [{"foo": "bar"}, {"foo": "woo"}]} 80 | ] 81 | 82 | def test_survive_nonexisting_senders(self): 83 | a = Alerter("foo,bar") 84 | 85 | a.run(self.alerts) 86 | a.send_alerts() 87 | 88 | self.assertTrue(a.recorded_alerts is not None) 89 | 90 | def test_flatten_details(self): 91 | a = Alerter("") 92 | 93 | a.run(self.alerts) 94 | 95 | self.assertEquals(3, len(a.recorded_alerts)) 96 | self.assertTrue(any(x[0] == "complex" for x in a.recorded_alerts)) 97 | self.assertTrue(any(x[1] == "_2_" for x in a.recorded_alerts)) 98 | 99 | def test_multiple_run_keeps_details(self): 100 | a = Alerter("") 101 | 102 | a.run(self.alerts[0:2]) 103 | a.run(self.alerts[2:]) 104 | 105 | self.assertEquals(3, len(a.recorded_alerts)) 106 | 107 | 108 | class ElasticSearchWriterTestCase(unittest.TestCase): 109 | def setUp(self): 110 | pass 111 | 112 | def test_flatten_details(self): 113 | esa = ESAlertSender() 114 | 115 | flat_alerts = 
list(esa.flatten_alerts([("complex", "_3_", {"foo": "bar"})])) 116 | 117 | self.assertEquals(1, len(flat_alerts)) 118 | self.assertIn("foo", flat_alerts[0]) 119 | self.assertIn("rule", flat_alerts[0]) 120 | self.assertEquals("_3_", flat_alerts[0]["id"]) 121 | 122 | def test_leave_simple_details(self): 123 | esa = ESAlertSender() 124 | 125 | flat_alerts = list(esa.flatten_alerts([("complex", "_3_", "foobar")])) 126 | 127 | self.assertEquals(1, len(flat_alerts)) 128 | self.assertIn("details", flat_alerts[0]) 129 | self.assertEquals("foobar", flat_alerts[0]["details"]) 130 | self.assertIn("rule", flat_alerts[0]) 131 | self.assertEquals("_3_", flat_alerts[0]["id"]) 132 | -------------------------------------------------------------------------------- /tests/api/test_eddaclient.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import json 3 | import unittest 4 | from mock import patch, Mock 5 | from httpretty import HTTPretty, httprettified 6 | from urllib2 import HTTPError 7 | 8 | from api.eddaclient import EddaClient, EddaException 9 | 10 | 11 | class EddaClientTestCase(unittest.TestCase): 12 | 13 | def setUp(self): 14 | self.eddaURL = 'http://localhost:8888/edda' 15 | self.eddaclient = EddaClient(self.eddaURL) 16 | self.expected_response = ["i-111", "i-222"] 17 | 18 | def test_clone(self): 19 | self.assertEqual(self.eddaclient.clone().__dict__, self.eddaclient.__dict__) 20 | 21 | def test_clone_modify(self): 22 | self.assertEqual(self.eddaclient._since, None) 23 | self.assertEqual(self.eddaclient.clone_modify({'_since': 1})._since, 1) 24 | 25 | @patch('api.eddaclient.EddaClient.do_query', return_value=["i-111", "i-222"]) 26 | def test_query(self, *mocks): 27 | res = self.eddaclient.query('/api/v2/view/instances') 28 | self.assertEqual(res, self.expected_response) 29 | self.assertEqual(self.eddaclient._cache[self.eddaURL + '/api/v2/view/instances'], self.expected_response) 30 | 31 | @httprettified 32 | def test_do_query(self): 33 | HTTPretty.register_uri(HTTPretty.GET, self.eddaURL + '/api/v2/view/instances', 34 | body=json.dumps(self.expected_response), 35 | status=200) 36 | self.assertEqual(self.eddaclient.query('/api/v2/view/instances'), self.expected_response) 37 | 38 | @httprettified 39 | def test_do_query_exception(self): 40 | HTTPretty.register_uri(HTTPretty.GET, self.eddaURL + '/api/v2/view/instances', 41 | body='error', 42 | status=500) 43 | self.assertRaises(HTTPError, self.eddaclient.query, ('/api/v2/view/instances')) 44 | 45 | @httprettified 46 | def test_do_query_error(self): 47 | HTTPretty.register_uri(HTTPretty.GET, self.eddaURL + '/api/v2/view/instances', 48 | body='{"code": "xxxx", "asd": "b"}', 49 | status=200) 50 | self.assertRaises(EddaException, self.eddaclient.query, ('/api/v2/view/instances')) 51 | 52 | @httprettified 53 | def test_do_query_invalid_json(self): 54 | HTTPretty.register_uri(HTTPretty.GET, self.eddaURL + '/api/v2/view/instances', 55 | body='invalid', 56 | status=200) 57 | self.assertRaises(ValueError, self.eddaclient.query, ('/api/v2/view/instances')) 58 | 59 | @httprettified 60 | def test_raw_query(self): 61 | HTTPretty.register_uri(HTTPretty.GET, self.eddaURL + '/api/v2/view/instances', 62 | body='raw_response', 63 | status=200) 64 | self.assertEqual(self.eddaclient.raw_query('/api/v2/view/instances'), 'raw_response') 65 | 66 | 67 | def main(): 68 | unittest.main() 69 | 70 | if __name__ == '__main__': 71 | main() 72 | 
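A short usage sketch of the client exercised above, using only methods that appear elsewhere in this dump (since/until in reddalert.py, soft_clean/updateonly/query in the plugins, raw_query in test_iam.py); the endpoint and timestamps are made up:

from api.eddaclient import EddaClient

# Scope a client to a time window, as the __main__ block of reddalert.py does;
# each chained call returns a client, so the base instance stays reusable.
client = EddaClient('http://localhost:8888/edda').since(0).until(1500000000000)

# Plugins then derive narrower views from it before querying:
instances = client.soft_clean().query('/api/v2/view/instances;_expand')
changed_groups = client.updateonly().query('/api/v2/aws/securityGroups;_expand')
user_diff = client.raw_query('/api/v2/aws/iamUsers/alice;_diff=200')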
-------------------------------------------------------------------------------- /tests/api/test_instanceenricher.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | from mock import Mock, patch 3 | 4 | from api.instanceenricher import InstanceEnricher 5 | 6 | 7 | class InstanceEnricherTestCase(unittest.TestCase): 8 | def setUp(self): 9 | self.edda_client = Mock() 10 | edda_outer = Mock() 11 | edda_outer.soft_clean = Mock(return_value=self.edda_client) 12 | self.instance_enricher = InstanceEnricher(edda_outer) 13 | 14 | self.mock_instance_data = { 15 | "tags": [ 16 | { 17 | "value": "lucid", 18 | "maksim_node_type": "lucid", 19 | "key": "maksim_node_type", 20 | "class": "com.amazonaws.services.ec2.model.Tag" 21 | }, 22 | { 23 | "value": "jenkins", 24 | "service_name": "jenkins", 25 | "key": "service_name", 26 | "class": "com.amazonaws.services.ec2.model.Tag" 27 | } 28 | ], 29 | "instanceId": "A", 30 | "iamInstanceProfile": {'arn': 'arn:aws:iam::783721547467:instance-profile/dummyservice'}, 31 | 'placement': {'availabilityZone': 'us-east-1b'}, 32 | 'launchTime': 1234, 33 | 'privateIpAddress': '1.2.3.4', 34 | 'publicIpAddress': '10.20.30.40', 35 | "securityGroups": [ 36 | { 37 | "groupName": "jenkins", 38 | "groupId": "sg-XXXXX1", 39 | "class": "com.amazonaws.services.ec2.model.GroupIdentifier" 40 | } 41 | ] 42 | } 43 | self.mock_instance_enrich_elbs = [ 44 | {"DNSName": "foo.prezi.com", "instances": ["A", "B"], "ports": ["80", "81"]}, 45 | {"DNSName": "bar.prezi.com", "instances": ["A", "C"], "ports": ["80", "81"]} 46 | ] 47 | self.mock_instance_enrich_secgroups = {"sg-XXXXX1": [{"port": "22", "range": "0.0.0.0/0"}]} 48 | 49 | def test_query_securitygroups(self): 50 | SECURITY_GROUPS = [ 51 | { 52 | "vpcId": None, 53 | "class": "com.amazonaws.services.ec2.model.SecurityGroup", 54 | "description": "security-log-source", 55 | "groupId": "sg-XXXXX1", 56 | "groupName": "security-log-source", 57 | "ipPermissions": [ 58 | { 59 | "userIdGroupPairs": [], 60 | "toPort": 22, 61 | "ipRanges": [ 62 | "0.0.0.0/0" 63 | ], 64 | "ipProtocol": "tcp", 65 | "fromPort": 22, 66 | "class": "com.amazonaws.services.ec2.model.IpPermission" 67 | } 68 | ], 69 | "ipPermissionsEgress": [], 70 | "ownerId": "123", 71 | "tags": [] 72 | }, 73 | { 74 | "vpcId": None, 75 | "class": "com.amazonaws.services.ec2.model.SecurityGroup", 76 | "description": "security-log-drain", 77 | "groupId": "sg-XXXXX2", 78 | "groupName": "security-log-drain", 79 | "ipPermissions": [ 80 | { 81 | "userIdGroupPairs": [], 82 | "toPort": 22, 83 | "ipRanges": [ 84 | "0.0.0.0/0" 85 | ], 86 | "ipProtocol": "tcp", 87 | "fromPort": 22, 88 | "class": "com.amazonaws.services.ec2.model.IpPermission" 89 | }, 90 | { 91 | "userIdGroupPairs": [], 92 | "toPort": 22, 93 | "ipRanges": [ 94 | "10.1.0.0/16", 95 | "192.168.0.0/16", 96 | ], 97 | "ipProtocol": "tcp", 98 | "fromPort": 22, 99 | "class": "com.amazonaws.services.ec2.model.IpPermission" 100 | } 101 | ], 102 | "ipPermissionsEgress": [], 103 | "ownerId": "123", 104 | "tags": [] 105 | }, 106 | ] 107 | self.edda_client.query = Mock(return_value=SECURITY_GROUPS) 108 | 109 | sgs = self.instance_enricher._query_security_groups() 110 | 111 | self.assertIsInstance(sgs, dict) 112 | self.assertIn("sg-XXXXX1", sgs) 113 | self.assertIn("sg-XXXXX2", sgs) 114 | self.assertEqual(3, len(sgs["sg-XXXXX2"])) 115 | 116 | def test_enrich(self): 117 | self.instance_enricher.elbs = [ 118 | {"DNSName": "foo.prezi.com", "instances": ["A", "B"], "ports": ["80", "81"]}, 119 | 
{"DNSName": "bar.prezi.com", "instances": ["A", "C"], "ports": ["80", "81"]} 120 | ] 121 | self.instance_enricher.sec_groups = { 122 | "sg-XXXXX1": [{"port": "22", "range": "0.0.0.0/0"}] 123 | } 124 | INSTANCE_DATA = { 125 | "tags": [ 126 | { 127 | "value": "lucid", 128 | "maksim_node_type": "lucid", 129 | "key": "maksim_node_type", 130 | "class": "com.amazonaws.services.ec2.model.Tag" 131 | }, 132 | { 133 | "value": "jenkins", 134 | "service_name": "jenkins", 135 | "key": "service_name", 136 | "class": "com.amazonaws.services.ec2.model.Tag" 137 | } 138 | ], 139 | "instanceId": "A", 140 | "iamInstanceProfile": None, 141 | "securityGroups": [ 142 | { 143 | "groupName": "jenkins", 144 | "groupId": "sg-XXXXX1", 145 | "class": "com.amazonaws.services.ec2.model.GroupIdentifier" 146 | } 147 | ] 148 | } 149 | 150 | self.instance_enricher.enrich(INSTANCE_DATA) 151 | 152 | self.assertIn("elbs", INSTANCE_DATA) 153 | self.assertEqual(2, len(INSTANCE_DATA["elbs"])) 154 | self.assertIn("rules", INSTANCE_DATA["securityGroups"][0]) 155 | self.assertEqual(1, len(INSTANCE_DATA["securityGroups"][0]["rules"])) 156 | self.assertEqual("jenkins", INSTANCE_DATA["service_type"]) 157 | 158 | @patch('api.instanceenricher.InstanceEnricher._clean_ip_permissions', return_value=[]) 159 | def test_empty_secgroup_query(self, *mocks): 160 | self.edda_client.query = Mock( 161 | return_value=[{"ipPermissions": [{"ipRanges": ['1', '2'], "toPort": '22'}], "groupId": "G"}]) 162 | self.assertEqual([], self.instance_enricher._clean_ip_permissions([{"ipRanges": [], "toPort": None}])) 163 | self.assertEquals({"G": []}, self.instance_enricher._query_security_groups()) # shall not throw exception 164 | 165 | def test_tag_extraction(self): 166 | tags = [ 167 | { 168 | "value": "nessus", 169 | "key": "Name", 170 | "class": "com.amazonaws.services.ec2.model.Tag", 171 | "Name": "nessus" 172 | }, 173 | { 174 | "value": "nessus", 175 | "service_name": "nessus", 176 | "key": "service_name", 177 | "class": "com.amazonaws.services.ec2.model.Tag" 178 | } 179 | ] 180 | name = self.instance_enricher._get_type_from_tags(tags) 181 | self.assertEqual("nessus", name) 182 | 183 | def test_instance_report(self): 184 | self.instance_enricher.elbs = self.mock_instance_enrich_elbs 185 | self.instance_enricher.sec_groups = self.mock_instance_enrich_secgroups 186 | self.instance_enricher.enrich(self.mock_instance_data) 187 | 188 | report = self.instance_enricher.report(self.mock_instance_data, {'dummy_extra_key': 'dummy_extra_value'}) 189 | self.assertItemsEqual(report.keys(), ['privateIpAddress', 'service_type', 'publicIpAddress', 'elbs', 190 | 'instanceId', 'awsRegion', 'awsAccount', 'keyName', 'dummy_extra_key', 191 | 'open_ports', 'started', 'tags' 192 | ]) 193 | self.assertEqual(report['awsAccount'], '783721547467') 194 | self.assertEqual(report['awsRegion'], 'us-east-1') 195 | self.assertEqual(report['started'], 1234) 196 | self.assertEqual(report['dummy_extra_key'], 'dummy_extra_value') 197 | self.assertEqual(report['privateIpAddress'], '1.2.3.4') 198 | self.assertEqual(report['publicIpAddress'], '10.20.30.40') 199 | self.assertEqual(report['service_type'], 'jenkins') 200 | self.assertEqual(report['instanceId'], 'A') 201 | 202 | def test_instance_report_no_profile(self): 203 | self.mock_instance_data['iamInstanceProfile'] = None 204 | self.instance_enricher.elbs = self.mock_instance_enrich_elbs 205 | self.instance_enricher.sec_groups = self.mock_instance_enrich_secgroups 206 | self.instance_enricher.enrich(self.mock_instance_data) 207 | 208 | 
report = self.instance_enricher.report(self.mock_instance_data, {'dummy_extra_key': 'dummy_extra_value'}) 209 | self.assertIsNone(report['awsAccount']) 210 | -------------------------------------------------------------------------------- /tests/plugins/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/prezi/reddalert/c6748999a4015c305fccfd596adefbbe883ad19f/tests/plugins/__init__.py -------------------------------------------------------------------------------- /tests/plugins/test_ami.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import unittest 4 | from mock import patch, Mock, call 5 | 6 | from plugins import NewAMIPlugin 7 | from api import InstanceEnricher 8 | 9 | 10 | class PluginAmiTestCase(unittest.TestCase): 11 | 12 | def setUp(self): 13 | self.plugin = NewAMIPlugin() 14 | self.assertEqual(self.plugin.plugin_name, 'ami') 15 | self.config = {"allowed_tags": ['jenkins']} 16 | 17 | def test_initialize(self): 18 | self.plugin.init(Mock(), self.config, {}, Mock()) 19 | self.assertEqual(self.plugin.status, {'first_seen': {}}) 20 | expected = {'first_seen': {"ami-111": 1392015440000}, 'a': 3} 21 | self.plugin.init(Mock(), self.config, expected, Mock()) 22 | self.assertEqual(self.plugin.status, expected) 23 | 24 | def test_run(self, *mocks): 25 | instance_enricher = InstanceEnricher(Mock()) 26 | 27 | eddaclient = Mock() 28 | eddaclient._since = 500 29 | 30 | def ret_list(args): 31 | return [ 32 | {'imageId': 'ami-1', 'instanceId': 'a', 'launchTime': '500', 33 | 'tags': [{'key': 'service_name', 'value': 'conversion'}, {'key': 'started_by', 'value': 'john'}]}, 34 | {'imageId': 'ami-1', 'instanceId': 'b', 'launchTime': '2000', 35 | 'tags': [{'key': 'service_name', 'value': 'router'}]}, 36 | {'imageId': 'ami-2', 'instanceId': 'c', 'launchTime': '400'}] 37 | 38 | m = Mock() 39 | m.query = Mock(side_effect=ret_list) 40 | eddaclient.soft_clean = Mock(return_value=m) 41 | self.plugin.init(eddaclient, self.config, {'first_seen': {"ami-1": 1000, "ami-2": 400}}, instance_enricher) 42 | 43 | result = self.plugin.run() 44 | 45 | self.assertEqual(1, len(result)) 46 | result = result[0] 47 | self.assertEqual('ami-1', result['id']) 48 | self.assertEqual(2, len(result['details'])) 49 | self.assertIn('a', [d['instanceId'] for d in result['details']]) 50 | self.assertIn('b', [d['instanceId'] for d in result['details']]) 51 | 52 | m.query.assert_has_calls([call('/api/v2/view/instances;_expand')]) 53 | self.assertEqual(self.plugin.status, {'first_seen': {'ami-1': 500, 'ami-2': 400}}) 54 | 55 | def test_skipped_service(self): 56 | instance_enricher = InstanceEnricher(Mock()) 57 | eddaclient = Mock() 58 | eddaclient.query = Mock(return_value=[ 59 | {'imageId': 'ami-1', 'instanceId': 'b', 'launchTime': '2000', 60 | 'tags': [{'key': 'service_name', 'value': 'jenkins'}]}]) 61 | uncleaned_eddaclient = Mock() 62 | uncleaned_eddaclient.soft_clean = Mock(return_value=eddaclient) 63 | uncleaned_eddaclient._since = 500 64 | 65 | self.plugin.init(uncleaned_eddaclient, self.config, {'first_seen': {}}, instance_enricher) 66 | 67 | real = self.plugin.run() 68 | expected = [] 69 | 70 | self.assertEqual(expected, real) 71 | eddaclient.query.assert_has_calls([call('/api/v2/view/instances;_expand')]) 72 | 73 | 74 | def main(): 75 | unittest.main() 76 | 77 | if __name__ == '__main__': 78 | main() 79 | 
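The hand-built mock chain in test_run above recurs almost verbatim in test_chef.py, test_missingtag.py and test_newtag.py below. A hypothetical shared helper (not part of the repo) that captures the pattern once:

from mock import Mock

def make_mock_edda(instances, since=None, cleaner='soft_clean'):
    # The outer mock mimics the raw EddaClient; the named cleaner method
    # (soft_clean/clean/updateonly) returns an inner mock whose query()
    # answers every endpoint with the canned instance list.
    inner = Mock()
    inner.query = Mock(side_effect=lambda endpoint: instances)
    outer = Mock()
    setattr(outer, cleaner, Mock(return_value=inner))
    outer._since = since
    return outer, inner

With this, a setup like the one in test_run collapses to eddaclient, m = make_mock_edda(ret_list(None), since=500) followed by the usual plugin.init(...) call.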
-------------------------------------------------------------------------------- /tests/plugins/test_chef.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import json 3 | import unittest 4 | 5 | from mock import patch, Mock, MagicMock 6 | 7 | from api import InstanceEnricher 8 | from plugins import NonChefPlugin 9 | 10 | 11 | class PluginNonChefTestCase(unittest.TestCase): 12 | def setUp(self): 13 | self.plugin = NonChefPlugin() 14 | self.assertEqual(self.plugin.plugin_name, 'non_chef') 15 | # self.buckets = ['bucket1', 'bucket2', 'assets', 'bucket3'] 16 | self.config = {'chef_server_url': 'foo', 17 | 'client_name': 'bar', 'client_key_file': '', 18 | "excluded_instances": ["jenkins"]} 19 | 20 | def test_initialize(self, *mocks): 21 | with patch('plugins.chef.ChefAPI') as mock: 22 | self.plugin.init(Mock(), self.config, {}, Mock()) 23 | self.assertEqual(self.plugin.excluded_instances, ['jenkins']) 24 | mock.assert_called_once_with('foo', '', 'bar') 25 | 26 | def wrap_chef_result(self, node): 27 | if node: 28 | return json.dumps({'rows': [{'data': node}]}) 29 | else: 30 | return json.dumps({'rows': []}) 31 | 32 | @patch('plugins.chef.ChefAPI') 33 | def test_handle_invalid_chef_data(self, *mocks): 34 | instance_enricher = InstanceEnricher(Mock()) 35 | eddaclient = Mock() 36 | 37 | def ret_list(args): 38 | return [ 39 | {'keyName': 'keyName1', 'instanceId': 'a', 'privateIpAddress': '10.1.1.1', 'publicIpAddress': '1.1.1.1', 40 | "tags": [{"key": "Name", "value": "tag1"}, {'a': 'b'}], 'launchTime': 1 * 3600000}, 41 | {'keyName': 'keyName2', 'instanceId': 'b', 'privateIpAddress': '10.1.1.2', 'publicIpAddress': '2.1.1.1', 42 | "tags": [{"key": "service_name", "value": "foo"}], 'launchTime': 1 * 3600000}, 43 | {'keyName': 'keyName3', 'instanceId': 'c', 'privateIpAddress': '10.1.1.3', 'publicIpAddress': '3.1.1.1', 44 | 'launchTime': 1 * 3600000}, 45 | {'keyName': 'keyName4', 'instanceId': 'd', 'privateIpAddress': '10.1.1.4', 'publicIpAddress': '4.1.1.1', 46 | 'launchTime': 1 * 3600000} 47 | ] 48 | 49 | m = Mock() 50 | m.query = Mock(side_effect=ret_list) 51 | eddaclient.soft_clean = Mock(return_value=m) 52 | eddaclient._since = 3 * 3600000 53 | eddaclient._until = 4 * 3600000 + 1 54 | 55 | chef_result_list = [ 56 | self.wrap_chef_result({'name': 'host3', 'cloud_foo': '1.1.1.1'}), 57 | self.wrap_chef_result({'name': 'host4'}), 58 | self.wrap_chef_result({'foo_what': 'bar', 'cloud_public_ipv6': ':da7a::', 'cloud_provider': 'ec2'}), 59 | self.wrap_chef_result(None), 60 | self.wrap_chef_result(None) 61 | ] 62 | 63 | with patch('plugins.chef.ChefAPI', return_value=MagicMock(request=MagicMock(side_effect=chef_result_list))): 64 | self.plugin.init(eddaclient, self.config, {}, instance_enricher) 65 | 66 | alerts = list(self.plugin.do_run()) 67 | # no valid chef data was returned 68 | self.assertEqual(0, len(alerts)) 69 | 70 | @patch('plugins.chef.ChefAPI') 71 | def test_empty_status(self, *mocks): 72 | instance_enricher = InstanceEnricher(Mock()) 73 | eddaclient = Mock() 74 | 75 | def ret_list(args): 76 | return [ 77 | {'keyName': 'keyName1', 'instanceId': 'a', 'privateIpAddress': '10.1.1.1', 'publicIpAddress': '1.1.1.1', 78 | "tags": [{"key": "Name", "value": "tag1"}, {'a': 'b'}], 'launchTime': 1 * 3600000}, 79 | {'keyName': 'keyName2', 'instanceId': 'b', 'privateIpAddress': '10.1.1.2', 'publicIpAddress': '2.1.1.1', 80 | "tags": [{"key": "service_name", "value": "foo"}], 'launchTime': 1 * 3600000}, 81 | {'keyName': 'keyName3', 'instanceId': 
'c', 'privateIpAddress': '10.1.1.3', 'publicIpAddress': '3.1.1.1', 82 | 'launchTime': 1 * 3600000}, 83 | {'keyName': 'keyName4', 'instanceId': 'd', 'privateIpAddress': '10.1.1.4', 'publicIpAddress': '4.1.1.1', 84 | 'launchTime': 1 * 3600000}, 85 | {'keyName': 'keyName5', 'instanceId': 'e', 'privateIpAddress': 'null', 'publicIpAddress': 'null', 86 | 'launchTime': 1 * 3600000}, 87 | {'keyName': 'keyName6', 'instanceId': 'f', 'privateIpAddress': None, 'publicIpAddress': None, 88 | 'launchTime': 1 * 3600000} 89 | ] 90 | 91 | m = Mock() 92 | m.query = Mock(side_effect=ret_list) 93 | eddaclient.soft_clean = Mock(return_value=m) 94 | eddaclient._since = 3 * 3600000 95 | eddaclient._until = 4 * 3600000 + 1 96 | 97 | chef_result_list = [ 98 | self.wrap_chef_result({'name': 'ec2 alive', 'cloud_public_ipv4': '1.1.1.1', 'cloud_provider': 'ec2'}), 99 | self.wrap_chef_result({'name': 'non-ec2 but cloud host alive', 'cloud_public_ipv4': '2.1.1.1'}), 100 | self.wrap_chef_result({'name': 'ec2 host dead', 'cloud_public_ipv4': '255.1.1.1', 'cloud_provider': 'ec2'}), 101 | self.wrap_chef_result({'name': 'non-ec2 host', 'ipaddress': '5.1.1.1'}), 102 | self.wrap_chef_result(None) 103 | ] 104 | 105 | with patch('plugins.chef.ChefAPI', return_value=MagicMock(request=MagicMock(side_effect=chef_result_list))): 106 | self.plugin.init(eddaclient, self.config, {}, instance_enricher) 107 | alerts = list(self.plugin.do_run()) 108 | non_chef_alerts = [i for i in alerts if i['plugin_name'] == 'non_chef'] 109 | chef_managed_alerts = [i for i in alerts if i['plugin_name'] == 'chef_managed'] 110 | 111 | # there are two reportable instances, 3.1.1.1 and 4.1.1.1 112 | self.assertEqual(2, len(non_chef_alerts)) 113 | self.assertTrue(any(a["details"][0]["publicIpAddress"] == "3.1.1.1" for a in non_chef_alerts)) 114 | self.assertTrue(any(a["details"][0]["publicIpAddress"] == "4.1.1.1" for a in non_chef_alerts)) 115 | 116 | self.assertEqual(3, len(chef_managed_alerts)) 117 | self.assertTrue(any(a["details"][0]["publicIpAddress"] == "1.1.1.1" for a in chef_managed_alerts)) 118 | self.assertTrue(any(a["details"][0]["publicIpAddress"] == "2.1.1.1" for a in chef_managed_alerts)) 119 | self.assertTrue(any(a["details"][0]["publicIpAddress"] == "5.1.1.1" for a in chef_managed_alerts)) 120 | 121 | @patch('plugins.chef.ChefAPI') 122 | def test_nonempty_status(self, *mocks): 123 | instance_enricher = InstanceEnricher(Mock()) 124 | eddaclient = Mock() 125 | 126 | def ret_list(args): 127 | return [ 128 | {'keyName': 'keyName1', 'instanceId': 'a', 'privateIpAddress': '10.1.1.1', 'publicIpAddress': '1.1.1.1', 129 | "tags": [{"key": "Name", "value": "tag1"}, {'a': 'b'}], 'launchTime': 6 * 3600000 + 1}, 130 | {'keyName': 'keyName2', 'instanceId': 'b', 'privateIpAddress': '10.1.1.2', 'publicIpAddress': '2.1.1.1', 131 | "tags": [{"key": "service_name", "value": "foo"}], 'launchTime': 7 * 3600000 + 1}, 132 | {'keyName': 'keyName3', 'instanceId': 'c', 'privateIpAddress': '10.1.1.3', 'publicIpAddress': '3.1.1.1', 133 | 'launchTime': 8 * 3600000 + 1}, 134 | {'keyName': 'keyName4', 'instanceId': 'd', 'privateIpAddress': '10.1.1.4', 'publicIpAddress': '4.1.1.1', 135 | 'launchTime': 9 * 3600000 + 1}, 136 | {'instanceId': 'e', 'privateIpAddress': 'x', 'publicIpAddress': 'x', 'launchTime': 10 * 3600000 + 1}, 137 | {'instanceId': 'f', 'privateIpAddress': 'x', 'publicIpAddress': '7.1.1.1', 'launchTime': 11 * 3600000 + 1}, 138 | ] 139 | 140 | m = Mock() 141 | m.query = Mock(side_effect=ret_list) 142 | eddaclient.soft_clean = Mock(return_value=m) 143 | 
eddaclient._since = 10 * 3600000 144 | eddaclient._until = 11 * 3600000 145 | 146 | chef_result_list = [ 147 | self.wrap_chef_result({'name': 'host0', 'cloud_public_ipv4': '4.1.1.1'}), 148 | self.wrap_chef_result({'name': 'host1', 'cloud_public_ipv4': '6.1.1.1'}), 149 | self.wrap_chef_result(None), 150 | self.wrap_chef_result(None), 151 | self.wrap_chef_result(None) 152 | ] 153 | 154 | with patch('plugins.chef.ChefAPI', return_value=MagicMock(request=MagicMock(side_effect=chef_result_list))): 155 | self.plugin.init(eddaclient, self.config, {"first_seen": {'f': 8}}, instance_enricher) 156 | 157 | alerts = list(self.plugin.do_run()) 158 | non_chef_alerts = [i for i in alerts if i['plugin_name'] == 'non_chef'] 159 | chef_managed_alerts = [i for i in alerts if i['plugin_name'] == 'chef_managed'] 160 | 161 | # there is one problematic node (2.1.1.1) 162 | self.assertEqual(1, len(non_chef_alerts)) 163 | self.assertTrue(any(a["details"][0]["publicIpAddress"] == "2.1.1.1" for a in non_chef_alerts)) 164 | 165 | # there is one chef managed node (4.1.1.1) 166 | self.assertEqual(2, len(chef_managed_alerts)) 167 | self.assertTrue(any(a["details"][0]["publicIpAddress"] == "4.1.1.1" for a in chef_managed_alerts)) 168 | self.assertTrue(any(a["details"][0]["publicIpAddress"] == "6.1.1.1" for a in chef_managed_alerts)) 169 | 170 | 171 | def main(): 172 | unittest.main() 173 | 174 | 175 | if __name__ == '__main__': 176 | main() 177 | -------------------------------------------------------------------------------- /tests/plugins/test_elbs.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import unittest 4 | from mock import patch, Mock, call 5 | 6 | from plugins import ElasticLoadBalancerPlugin 7 | 8 | 9 | class PluginElbTestCase(unittest.TestCase): 10 | 11 | def setUp(self): 12 | self.plugin = ElasticLoadBalancerPlugin() 13 | self.assertEqual(self.plugin.plugin_name, 'elbs') 14 | self.config = {"allowed_ports": [80, 443]} 15 | 16 | def test_is_suspicious(self): 17 | self.plugin.init(Mock(), self.config, {}) 18 | elb = {"listenerDescriptions": [ 19 | {"listener": {"SSLCertificateId": None, "instancePort": 8443, "instanceProtocol": 20 | "TCP", "loadBalancerPort": 22, "protocol": "TCP"}, "policyNames": []} 21 | ]} 22 | 23 | self.assertTrue(self.plugin.is_suspicious(elb)) 24 | 25 | elb = {"listenerDescriptions": [ 26 | {"listener": {"SSLCertificateId": None, "instancePort": 8443, "instanceProtocol": 27 | "TCP", "loadBalancerPort": 443, "protocol": "TCP"}, "policyNames": []} 28 | ]} 29 | self.assertFalse(self.plugin.is_suspicious(elb)) 30 | 31 | def test_run(self, *mocks): 32 | 33 | eddaclient = Mock() 34 | 35 | def ret_list(args): 36 | return [{'loadBalancerName': 'test-elb', 'canonicalHostedZoneName': 'test-hostname', 37 | 'instances': [{}, {}], "listenerDescriptions": [ 38 | {"listener": { 39 | "SSLCertificateId": None, "instancePort": 8443, "instanceProtocol": 40 | "TCP", "loadBalancerPort": 22, "protocol": "TCP"}, "policyNames": [] 41 | } 42 | ]}, 43 | {'loadBalancerName': 'production-elb', 'canonicalHostedZoneName': 'production-hostname', 44 | 'instances': [{}, {}, {}, {}, {}], "listenerDescriptions": [ 45 | {"listener": { 46 | "SSLCertificateId": None, "instancePort": 8443, "instanceProtocol": 47 | "TCP", "loadBalancerPort": 443, "protocol": "TCP"}, "policyNames": [] 48 | } 49 | ]}] 50 | 51 | m = Mock() 52 | m.query = Mock(side_effect=ret_list) 53 | eddaclient.updateonly = Mock(return_value=m) 54 | self.plugin.init(eddaclient, 
self.config, {}) 55 | 56 | # run the tested method 57 | result = self.plugin.run() 58 | self.assertEqual(1, len(result)) 59 | self.assertIn('id', result[0]) 60 | self.assertIn('plugin_name', result[0]) 61 | self.assertIn('details', result[0]) 62 | 63 | self.assertTrue(isinstance(result[0]['details'], list)) 64 | self.assertEqual(1, len(result[0]['details'])) 65 | 66 | m.query.assert_has_calls([call('/api/v2/aws/loadBalancers;_expand')]) 67 | 68 | 69 | def main(): 70 | unittest.main() 71 | 72 | if __name__ == '__main__': 73 | main() 74 | -------------------------------------------------------------------------------- /tests/plugins/test_iam.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import unittest 4 | from mock import Mock, call 5 | 6 | from plugins import UserAddedPlugin 7 | 8 | APPDIR = "%s/" % os.path.dirname(os.path.realpath(__file__ + '/../')) 9 | 10 | 11 | class PluginIamTestCase(unittest.TestCase): 12 | 13 | def setUp(self): 14 | self.plugin = UserAddedPlugin() 15 | self.assertEqual(self.plugin.plugin_name, 'iam') 16 | 17 | def test_run(self, *mocks): 18 | 19 | eddaclient = Mock() 20 | default_users = ['bob', 'alice'] 21 | whitelisted_users = ['whitelisteduser123123'] 22 | allowed_list = ['^whitelisteduser[\d]{6}$'] 23 | users = default_users + whitelisted_users 24 | diff_call_format = '/api/v2/aws/iamUsers/%s;_diff=200' 25 | 26 | def ret_list(args): 27 | return users 28 | 29 | def ret_user_diff(args): 30 | if args == diff_call_format % 'alice': 31 | return open(APPDIR + 'test_data/test_iam_diff.txt').read() 32 | else: 33 | return '"diff" without group change' 34 | 35 | m = Mock() 36 | m.query = Mock(side_effect=ret_list) 37 | eddaclient.raw_query = Mock(side_effect=ret_user_diff) 38 | eddaclient.updateonly = Mock(return_value=m) 39 | 40 | mocked_config = {} 41 | mocked_config['allowed'] = allowed_list 42 | 43 | self.plugin.init(eddaclient, mocked_config, {}) 44 | 45 | # run the tested method 46 | self.assertEqual(self.plugin.run(), [ 47 | {'id': 'alice', 'plugin_name': 'iam', 48 | 'details': ['Groups the user has been added to: developers, devops']}]) 49 | 50 | m.query.assert_has_calls([call('/api/v2/aws/iamUsers')]) 51 | # switched to assertEqual so we can detect if the whitelisted user is indeed not checked 52 | self.assertEqual(eddaclient.raw_query.call_args_list, [call(diff_call_format % username) for username in default_users]) 53 | 54 | 55 | def main(): 56 | unittest.main() 57 | 58 | if __name__ == '__main__': 59 | main() 60 | -------------------------------------------------------------------------------- /tests/plugins/test_missingtag.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import socket 3 | import unittest 4 | from mock import patch, Mock, call 5 | 6 | from api import InstanceEnricher 7 | from plugins import MissingInstanceTagPlugin 8 | 9 | 10 | class PluginMissingInstanceTagTestCase(unittest.TestCase): 11 | 12 | def setUp(self): 13 | self.plugin = MissingInstanceTagPlugin() 14 | self.assertEqual(self.plugin.plugin_name, 'missingtag') 15 | 16 | def test_run(self, *mocks): 17 | instance_enricher = InstanceEnricher(Mock()) 18 | eddaclient = Mock() 19 | eddaclient._since = 200 20 | 21 | def ret_list(args): 22 | return [ 23 | {'imageId': 'ami-1', 'instanceId': 'a', 'launchTime': 400, "tags": [{"key": "Name", "value": "tag1"}]}, 24 | {'imageId': 'ami-2', 'instanceId': 'b', 'launchTime': 600, 25 | "tags": [{"key": 
"service_name", "value": "foo"}]}, 26 | {'imageId': 'ami-3', 'instanceId': 'c', 'launchTime': 800, "tags": 27 | [{"key": "Name", "value": "tag1"}, {"key": "service_name", "value": "foo"}]}, 28 | ] 29 | 30 | m = Mock() 31 | m.query = Mock(side_effect=ret_list) 32 | eddaclient.clean = Mock(return_value=m) 33 | self.plugin.init(eddaclient, Mock(), {}, instance_enricher) 34 | 35 | # run the tested method 36 | result = self.plugin.run() 37 | 38 | self.assertEqual(1, len(result)) 39 | result = result[0] 40 | self.assertEqual("tag1", result["id"]) # service_type became the new id, which in this case is the Name tag 41 | self.assertEqual(1, len(result["details"])) 42 | self.assertIn("instanceId", result["details"][0]) 43 | 44 | m.query.assert_has_calls([call('/api/v2/view/instances;_expand')]) 45 | 46 | 47 | def main(): 48 | unittest.main() 49 | 50 | if __name__ == '__main__': 51 | main() 52 | -------------------------------------------------------------------------------- /tests/plugins/test_newtag.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import unittest 3 | from mock import patch, Mock, call 4 | 5 | from plugins import NewInstanceTagPlugin 6 | from api import InstanceEnricher 7 | 8 | 9 | class PluginNewInstanceTagTestCase(unittest.TestCase): 10 | def setUp(self): 11 | self.plugin = NewInstanceTagPlugin() 12 | self.assertEqual(self.plugin.plugin_name, 'newtag') 13 | 14 | def test_run(self, *mocks): 15 | instance_enricher = InstanceEnricher(Mock()) 16 | eddaclient = Mock() 17 | eddaclient._since = 500 18 | 19 | def ret_list(args): 20 | return [ 21 | {'imageId': 'ami-1', 'instanceId': 'a', 'launchTime': 400, "tags": [{"key": "Name", "value": "tag1"}]}, 22 | {'imageId': 'ami-2', 'instanceId': 'b', 'launchTime': 600, 23 | "tags": [{"key": "service_name", "value": "foo"}]}, 24 | {'imageId': 'ami-3', 'instanceId': 'c', 'launchTime': 800, "tags": 25 | [{"key": "Name", "value": "tag1"}, {"key": "service_name", "value": "foo"}]}, 26 | ] 27 | 28 | m = Mock() 29 | m.query = Mock(side_effect=ret_list) 30 | eddaclient.clean = Mock(return_value=m) 31 | self.plugin.init(eddaclient, Mock(), {}, instance_enricher) 32 | 33 | # run the tested method 34 | result = self.plugin.run() 35 | self.assertEqual(1, len(result)) 36 | result = result[0] 37 | self.assertEqual("foo", result["id"]) 38 | self.assertEqual(2, len(result["details"])) 39 | self.assertIn("b", [d["instanceId"] for d in result["details"]]) 40 | self.assertIn("c", [d["instanceId"] for d in result["details"]]) 41 | 42 | m.query.assert_has_calls([call('/api/v2/view/instances;_expand')]) 43 | 44 | def test_ignore_tags(self, *mocks): 45 | instance_enricher = InstanceEnricher(Mock()) 46 | eddaclient = Mock() 47 | eddaclient._since = 500 48 | 49 | def ret_list(args): 50 | return [ 51 | {'imageId': 'ami-1', 'instanceId': 'a', 'launchTime': 400, "tags": [{"key": "Name", "value": "tag1"}]}, 52 | {'imageId': 'ami-2', 'instanceId': 'b', 'launchTime': 600, 53 | "tags": [{"key": "service_name", "value": "testapp"}]} 54 | ] 55 | 56 | m = Mock() 57 | m.query = Mock(side_effect=ret_list) 58 | eddaclient.clean = Mock(return_value=m) 59 | self.plugin.init(eddaclient, Mock(), {}, instance_enricher) 60 | 61 | # run the tested method 62 | result = self.plugin.run() 63 | self.assertEqual(0, len(result)) 64 | 65 | m.query.assert_has_calls([call('/api/v2/view/instances;_expand')]) 66 | 67 | 68 | def main(): 69 | unittest.main() 70 | 71 | 72 | if __name__ == '__main__': 73 | main() 74 | 
-------------------------------------------------------------------------------- /tests/plugins/test_route53.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import unittest 3 | 4 | from mock import Mock, call, patch, MagicMock 5 | from plugins.route53 import load_route53_entries, is_external, page_process_for_route53changed 6 | 7 | 8 | class LoadRoute53EntriesTestCase(unittest.TestCase): 9 | def test_run(self, *mocks): 10 | eddaclient = Mock() 11 | eddaclient._since = 500 12 | 13 | def ret_list(args): 14 | return [ 15 | {'name': 'info.prezi.com', 'instanceId': 'a', 'launchTime': 400}, 16 | {'name': 'info.prezi.com', 'instanceId': 'a', 'launchTime': 400}, 17 | {'name': 'info.prezi.com', 'instanceId': 'a', 'launchTime': 400}, 18 | {'name': 'info1.prezi.com', 'instanceId': 'b', 'launchTime': 600}, 19 | {'name': 'info1.prezi.com', 'instanceId': 'b', 'launchTime': 600} 20 | ] 21 | 22 | m = Mock() 23 | m.query = Mock(side_effect=ret_list) 24 | eddaclient.clean = Mock(return_value=m) 25 | 26 | self.assertEqual([{'name': 'info.prezi.com', 'instanceId': 'a', 'launchTime': 400}, 27 | {'name': 'info1.prezi.com', 'instanceId': 'b', 'launchTime': 600}], 28 | load_route53_entries(eddaclient)) 29 | m.query.assert_has_calls([call('/api/v2/aws/hostedRecords;_expand')]) 30 | 31 | 32 | class IsExternalTestCase(unittest.TestCase): 33 | def test_run(self, *mocks): 34 | values = [ 35 | {'imageId': 'ami-1', 'instanceId': 'a', 'launchTime': 400, 36 | "resourceRecords": [{"value": "127.0.0.1.prezi.com"}]}, 37 | {'imageId': 'ami-2', 'instanceId': 'b', 'launchTime': 600, "resourceRecords": [{"value": "127.0.0.2"}]}, 38 | {'imageId': 'ami-3', 'instanceId': 'c', 'launchTime': 800, 39 | "resourceRecords": [{"value": "127.0.0.3.prezi.com"}, {"value": "127.0.0.4"}]}, 40 | ] 41 | 42 | ip_set = ['127.0.0.1.prezi.com', '127.0.0.3.prezi.com'] 43 | domains_set = ['prezi.com'] 44 | result = [v for v in values if is_external(v, ip_set, domains_set)] 45 | 46 | # verify the results of the tested method 47 | self.assertEqual(2, len(result)) 48 | result = result[0] 49 | self.assertEqual("ami-2", result["imageId"]) 50 | self.assertEqual(1, len(result["resourceRecords"])) 51 | 52 | 53 | class Route53ChangedDoesNotExist(unittest.TestCase): 54 | @patch('plugins.route53.urllib2') 55 | def test_run(self, mock_urllib2): 56 | 57 | 58 | def mocked_get_location_content(location, *args, **kwargs): 59 | if location == "https://prezi.com/nosuch1": 60 | return MagicMock(read=MagicMock(side_effect=lambda: "somethingNoSuchBucketsomethingsomething")) 61 | if location == "https://meh.prezi.com": 62 | return MagicMock(read=MagicMock(side_effect=lambda: "OK, whatevs")) 63 | if location == "https://my404.prezi.com": 64 | return MagicMock(read=MagicMock(side_effect=lambda: "this page does not exist")) 65 | 66 | mock_urllib2.urlopen = MagicMock(side_effect=mocked_get_location_content) 67 | results = list(page_process_for_route53changed('https://prezi.com/nosuch1')) 68 | self.assertEqual(['NoSuchBucket|NoSuchKey|NoSuchVersion'], results[1]["matches"]) 69 | results = list(page_process_for_route53changed('https://meh.prezi.com')) 70 | self.assertEqual([], results[1]["matches"]) 71 | results = list(page_process_for_route53changed('https://my404.prezi.com')) 72 | self.assertEqual(["not exists?"], results[1]["matches"]) 73 | 74 | 75 | def main(): 76 | unittest.main() 77 | 78 | 79 | if __name__ == '__main__': 80 | main() 81 | 
-------------------------------------------------------------------------------- /tests/plugins/test_s3acl.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import unittest 4 | from mock import patch, Mock, call, MagicMock 5 | 6 | from plugins import S3AclPlugin 7 | from boto.s3.key import Key 8 | from boto.exception import S3ResponseError 9 | 10 | 11 | class PluginS3AclTestCase(unittest.TestCase): 12 | 13 | def setUp(self): 14 | self.plugin = S3AclPlugin() 15 | self.assertEqual(self.plugin.plugin_name, 's3acl') 16 | self.buckets = ['bucket1', 'bucket2', 'assets', 'bucket3'] 17 | 18 | def test_initialize(self): 19 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx'}, {}) 20 | self.assertEqual(self.plugin.p, 0.1) 21 | self.assertEqual(self.plugin.maxdir, 5) 22 | self.assertEqual(self.plugin.excluded_buckets, []) 23 | self.assertEqual(self.plugin.excluded_keys, []) 24 | self.assertEqual(self.plugin.allowed, []) 25 | self.assertEqual(self.plugin.allowed_specific, {}) 26 | 27 | @patch('random.sample', return_value=["bucket1", "bucket2"]) 28 | def test_sample_population(self, *mocks): 29 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx'}, {}) 30 | self.assertEqual(self.plugin.sample_population([], 2), []) 31 | self.plugin.p = 30 32 | self.assertEqual(self.plugin.sample_population(self.buckets, 3), ["bucket1", "bucket2"]) 33 | 34 | def test_suspicious_object_grants(self, *mocks): 35 | with patch('boto.s3.key.Key') as MockClass: 36 | key = MockClass.return_value 37 | key.bucket.name = 'allowed_bucket' 38 | 39 | acp = Mock() 40 | acp.acl.grants = [Mock(), Mock()] 41 | acp.acl.grants[0].id = 'id' 42 | acp.acl.grants[0].permission = 'permission' 43 | acp.acl.grants[1].id = 'id2' 44 | acp.acl.grants[1].permission = 'permission2' 45 | 46 | key.get_acl = Mock(return_value=acp) 47 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx', 'allowed_specific': { 48 | 'allowed_bucket': [{'uid': 'id2', 'op': 'permission2'}]}}, {}) 49 | self.assertEqual(self.plugin.suspicious_object_grants(key), ['id permission']) 50 | 51 | def test_is_suspicious_object(self, *mocks): 52 | is_suspicious = self.plugin.is_suspicious 53 | 54 | mock_user_whitelist = {'uid': 'mock_user', 'op': 'FULL_ACCESS'} 55 | mock_group_whitelist = {'gid': 'aws/mock_group', 'op': 'FULL_ACCESS'} 56 | mock_user_group_whitelist = {'uid': 'mock_user', 'gid': 'aws/mock_group', 'op': 'FULL_ACCESS'} 57 | 58 | mock_grant = Mock(type='Group', id=None, uri='aws/mock_group', permission='FULL_ACCESS') 59 | self.assertTrue(is_suspicious(mock_grant, [])) 60 | self.assertFalse(is_suspicious(mock_grant, [mock_group_whitelist])) 61 | self.assertFalse(is_suspicious(mock_grant, [mock_user_group_whitelist])) 62 | 63 | mock_grant = Mock(type='Group', id=None, uri='aws/different_group', permission='FULL_ACCESS') 64 | self.assertTrue(is_suspicious(mock_grant, [])) 65 | self.assertTrue(is_suspicious(mock_grant, [mock_group_whitelist])) 66 | self.assertTrue(is_suspicious(mock_grant, [mock_user_group_whitelist])) 67 | 68 | mock_grant = Mock(type='CanonicalUser', id='mock_user', uri=None, permission='FULL_ACCESS') 69 | self.assertTrue(is_suspicious(mock_grant, [])) 70 | self.assertFalse(is_suspicious(mock_grant, [mock_user_whitelist])) 71 | self.assertFalse(is_suspicious(mock_grant, [mock_user_group_whitelist])) 72 | 73 | mock_grant = Mock(type='CanonicalUser', id='different_user', uri=None, permission='FULL_ACCESS') 74 | self.assertTrue(is_suspicious(mock_grant, [])) 75 | 
self.assertTrue(is_suspicious(mock_grant, [mock_user_whitelist])) 76 | self.assertTrue(is_suspicious(mock_grant, [mock_user_group_whitelist])) 77 | 78 | mock_grant = Mock(type='CanonicalUser', id='whatever_user', uri=None, permission='FULL_ACCESS') 79 | self.assertRaises(KeyError, is_suspicious, mock_grant, [{}]) 80 | self.assertTrue(is_suspicious(mock_grant, [{'op': 'whatever_no_gid_or_uid'}])) 81 | 82 | mock_grant = Mock(type='UnknownType', id=None, uri=None, permission='FULL_ACCESS') 83 | self.assertTrue(is_suspicious(mock_grant, [{'op': 'whatever_op', 'uid': 'whatever_uid', 'gid': 'whatever_gid'}])) 84 | 85 | def test_suspicious_bucket_grants(self, *mocks): 86 | bucket = MagicMock( 87 | get_acl=MagicMock( 88 | return_value=MagicMock(acl=MagicMock(grants=[MagicMock(id='id', permission='permission'), 89 | MagicMock(id='id2', permission='permission2')])) 90 | ) 91 | ) 92 | bucket.name = 'allowed_bucket' 93 | 94 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx', 'allowed_specific': { 95 | 'allowed_bucket': [{'uid': 'id2', 'op': 'permission2'}]}}, {}) 96 | self.assertEqual(self.plugin.suspicious_bucket_grants(bucket), ['id permission']) 97 | 98 | def test_traverse_bucket(self, *mocks): 99 | 100 | def ret_sample(population, offset=1): 101 | if population and offset == 1: 102 | # sample keys 103 | return [population[0]] 104 | else: 105 | return population 106 | 107 | with patch('plugins.S3AclPlugin.sample_population', side_effect=ret_sample) as MockClass: 108 | bucket = Mock() 109 | bucket.name = 'bucket1' 110 | prefix = Mock() 111 | prefix.name = 'prefix' 112 | key = Mock(Key) 113 | key.name = 'key1' 114 | key2 = Mock(Key) 115 | key2.name = 'key2' 116 | 117 | def ret_list(pref, slash): 118 | if pref == '': 119 | return [key, prefix, key2] 120 | else: 121 | return [] 122 | 123 | bucket.list.side_effect = ret_list 124 | 125 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx'}, {}) 126 | self.assertEqual(self.plugin.traverse_bucket(bucket, ''), [key]) 127 | 128 | @patch('plugins.S3AclPlugin.sample_population', return_value=[Mock()]) 129 | def test_do_run(self, *mocks): 130 | 131 | key1 = Mock(Key) 132 | key1.name = 'key1' 133 | key1.bucket = Mock() 134 | key1.bucket.name = 'bucket1' 135 | key2 = Mock(Key) 136 | key2.name = 'key2' 137 | key2.bucket = Mock() 138 | key2.bucket.name = 'bucket1' 139 | 140 | def ret_keys(key): 141 | if key == key1: 142 | return ['id permission'] 143 | return [] 144 | 145 | def ret_buckets(key): 146 | return [] 147 | 148 | with patch('plugins.S3AclPlugin.traverse_bucket', return_value=[key1, key2]) as MockClass,\ 149 | patch('plugins.S3AclPlugin.suspicious_object_grants', side_effect=ret_keys),\ 150 | patch('plugins.S3AclPlugin.suspicious_bucket_grants', side_effect=ret_buckets): 151 | 152 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx'}, {}) 153 | # run the tested method 154 | self.assertEqual(list(self.plugin.do_run(MagicMock())), [ 155 | {'details': ['id permission'], 'id': 'bucket1:key1', 156 | 'url': 'https://s3.amazonaws.com/bucket1/key1', 'plugin_name': 's3acl'}]) 157 | 158 | def test_survive_s3error_traverse(self): 159 | bucket = Mock() 160 | bucket.list = Mock(side_effect=S3ResponseError(404, 'Not found', '')) 161 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx'}, {}) 162 | 163 | r = self.plugin.traverse_bucket(bucket, '') 164 | 165 | self.assertEqual([], r) 166 | 167 | def test_survive_s3error_suspicious(self): 168 | k = Mock() 169 | k.get_acl = Mock(side_effect=S3ResponseError(404, 'Not found', '')) 170 | self.plugin.init(Mock(), 
{'user': 'bob', 'key': 'xxx'}, {}) 171 | 172 | r = self.plugin.suspicious_object_grants(k) 173 | 174 | self.assertEqual([], r) 175 | 176 | def test_filter_excluded_buckets(self): 177 | bucket1 = Mock(Key) 178 | bucket1.name = 'bucket1' 179 | bucket2 = Mock(Key) 180 | bucket2.name = 'bucket2' 181 | bucket3 = Mock(Key) 182 | bucket3.name = 'bucket3' 183 | 184 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx', 'excluded_buckets': ['bucket[13]+', 'shouldntmatter.*']}, {}) 185 | r = self.plugin.filter_excluded_buckets([bucket1, bucket2, bucket3]) 186 | self.assertEqual([bucket2], r) 187 | 188 | def test_filter_excluded_keys(self): 189 | key1 = Mock(Key) 190 | key1.name = 'key1' 191 | key1.bucket = Mock() 192 | key1.bucket.name = 'bucket1' 193 | key2 = Mock(Key) 194 | key2.name = 'key2' 195 | key2.bucket = Mock() 196 | key2.bucket.name = 'bucket1' 197 | 198 | self.plugin.init(Mock(), {'user': 'bob', 'key': 'xxx', 'excluded_keys': ['^bucket[13]:.*2$', 'shouldntmatter.*']}, {}) 199 | r = self.plugin.filter_excluded_keys([key1, key2]) 200 | self.assertEqual([key1], r) 201 | 202 | 203 | def main(): 204 | unittest.main() 205 | 206 | if __name__ == '__main__': 207 | main() 208 | -------------------------------------------------------------------------------- /tests/plugins/test_secgroups.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import socket 3 | import unittest 4 | 5 | from netaddr import IPNetwork 6 | from mock import patch, Mock, call 7 | from plugins import SecurityGroupPlugin 8 | 9 | 10 | class PluginSecurityGroupTestCase(unittest.TestCase): 11 | def setUp(self): 12 | self.plugin = SecurityGroupPlugin() 13 | self.assertEqual(self.plugin.plugin_name, 'secgroups') 14 | self.config = {'allowed_ports': [22], 'whitelisted_ips': ['1.2.3.4/24', '2.2.2.2/32', '172.16.0.0/12']} 15 | self.example_security_group = {'groupId': 'sg-1', "groupName": "group1"} 16 | 17 | def test_is_suspicious(self): 18 | self.plugin.init(Mock(), self.config, {}) 19 | 20 | self.assertTrue(self.plugin.is_suspicious_permission( 21 | {"fromPort": None, "ipProtocol": "-1", "ipRanges": ["0.0.0.0/0"], "toPort": None})) 22 | self.assertTrue(self.plugin.is_suspicious_permission( 23 | {"fromPort": 25, "ipProtocol": "tcp", "ipRanges": ["0.0.0.0/0"], "toPort": 25})) 24 | self.assertTrue(self.plugin.is_suspicious_permission( 25 | {"fromPort": 21, "ipProtocol": "tcp", "ipRanges": ["6.6.6.6/32"], "toPort": 22})) 26 | self.assertTrue(self.plugin.is_suspicious_permission( 27 | {"fromPort": 80, "ipProtocol": "tcp", "ipRanges": ["6.6.6.6/32"], "toPort": 80})) 28 | 29 | self.assertFalse(self.plugin.is_suspicious_permission( 30 | {"fromPort": 0, "ipProtocol": "icmp", "ipRanges": ["0.0.0.0/0"], "toPort": -1})) 31 | self.assertFalse(self.plugin.is_suspicious_permission( 32 | {"fromPort": 8, "ipProtocol": "icmp", "ipRanges": ["0.0.0.0/0"], "toPort": -1})) 33 | self.assertFalse(self.plugin.is_suspicious_permission( 34 | {"fromPort": 22, "ipProtocol": "tcp", "ipRanges": ["0.0.0.0/0"], "toPort": 22}, )) 35 | self.assertFalse(self.plugin.is_suspicious_permission( 36 | {"fromPort": 25, "ipProtocol": "icmp", "ipRanges": ["0.0.0.0/0"], "toPort": 26}, )) 37 | self.assertFalse(self.plugin.is_suspicious_permission( 38 | {"fromPort": 80, "ipProtocol": "tcp", "ipRanges": ["2.2.2.2/32"], "toPort": 80})) 39 | self.assertFalse(self.plugin.is_suspicious_permission( 40 | {"fromPort": None, "ipProtocol": "-1", "ipRanges": ["1.2.3.4/24", "2.2.2.2/32"], "toPort": None})) 41 | 42 | def 
test_is_whitelisted_perm_true(self): 43 | self.plugin.init(Mock(), self.config, {}) 44 | self.plugin.whitelisted_entries = {'sg-1 (group1)': {'22': "2.2.2.2/32"}} 45 | perm = {"fromPort": 22, "ipProtocol": "-1", "ipRanges": ["2.2.2.2/32"], "toPort": 22} 46 | result = self.plugin.is_whitelisted_perm(self.example_security_group, perm) 47 | 48 | self.assertTrue(result) 49 | 50 | def test_is_whitelisted_perm_port_range(self): 51 | self.plugin.init(Mock(), self.config, {}) 52 | self.plugin.whitelisted_entries = {'sg-1 (group1)': {'8000-9000': "2.2.2.2/32"}} 53 | perm = {"fromPort": 8000, "ipProtocol": "-1", "ipRanges": ["2.2.2.2/32"], "toPort": 9000} 54 | result = self.plugin.is_whitelisted_perm(self.example_security_group, perm) 55 | 56 | self.assertTrue(result) 57 | 58 | def test_is_whitelisted_perm_false(self): 59 | self.plugin.init(Mock(), self.config, {}) 60 | self.plugin.whitelisted_entries = {'sg-1 (group1)': {'22': "2.2.2.2/32"}} 61 | perm = {"fromPort": 22, "ipProtocol": "-1", "ipRanges": ["2.2.2.2/32", "1.1.1.1/32"], "toPort": 22} 62 | 63 | result = self.plugin.is_whitelisted_perm(self.example_security_group, perm) 64 | 65 | self.assertFalse(result) 66 | 67 | def test_suspicious_perms(self): 68 | self.plugin.init(Mock(), self.config, {}) 69 | self.plugin.whitelisted_entries = {'sg-2 (group2)': {'8000-9000': "2.2.2.2/32"}} 70 | security_group = {"groupId": "sg-2", "groupName": "group2", "ownerId": "222222", 71 | "ipPermissions": [ 72 | {"fromPort": 139, "ipProtocol": "tcp", "ipRanges": ["0.0.0.0/0"], "toPort": 139}, 73 | {"fromPort": 8000, "ipProtocol": "tcp", "ipRanges": ["2.2.2.2/32"], "toPort": 9000} 74 | ]} 75 | 76 | result = list(self.plugin.suspicious_perms(security_group)) 77 | 78 | suspicious_perms = [{'toPort': 139, 'fromPort': 139, 'ipRanges': ['0.0.0.0/0'], 'ipProtocol': 'tcp'}] 79 | self.assertListEqual(suspicious_perms, result) 80 | 81 | def test_is_suspicious_ip_range(self): 82 | self.plugin.init(Mock(), self.config, {}) 83 | self.assertFalse(self.plugin.is_suspicious_ip_range('172.16.0.0/16')) 84 | self.assertTrue(self.plugin.is_suspicious_ip_range('172.16.0.0/0')) 85 | 86 | def test_is_port_open(self, *mocks): 87 | self.plugin.init(Mock(), self.config, {}) 88 | 89 | with patch('socket.socket') as MockClass: 90 | instance = MockClass.return_value 91 | # too big range 92 | self.assertEqual(self.plugin.is_port_open('127.0.0.1', 1, 443), None) 93 | 94 | # bad arguments 95 | self.assertFalse(self.plugin.is_port_open(None, 22, 22)) 96 | self.assertFalse(self.plugin.is_port_open(22, None, 22)) 97 | self.assertFalse(self.plugin.is_port_open(22, 22, None)) 98 | self.assertFalse(self.plugin.is_port_open('22', -1, 22)) 99 | self.assertFalse(self.plugin.is_port_open('22', 22, -1)) 100 | self.assertFalse(self.plugin.is_port_open('22', 65536, 22)) 101 | self.assertFalse(self.plugin.is_port_open('22', 22, 65536)) 102 | 103 | # should be ok 104 | self.assertTrue(self.plugin.is_port_open('127.0.0.1', 22, 22)) 105 | 106 | # socket error/timeout 107 | instance.connect.side_effect = socket.timeout 108 | self.assertFalse(self.plugin.is_port_open('127.0.0.1', 22, 22)) 109 | instance.connect.side_effect = socket.error 110 | self.assertFalse(self.plugin.is_port_open('127.0.0.1', 22, 22)) 111 | 112 | @patch('plugins.SecurityGroupPlugin.is_port_open', return_value=True) 113 | def test_run(self, *mocks): 114 | eddaclient = Mock() 115 | 116 | def ret_list(args): 117 | return [ 118 | {"groupId": "sg-1", "groupName": "group1", "ownerId": "111111", 119 | "ipPermissions": [ 120 | {"fromPort": 22, 
"ipProtocol": "tcp", "ipRanges": ["0.0.0.0/0"], "toPort": 22}, 121 | {"fromPort": 0, "ipProtocol": "icmp", "ipRanges": ["0.0.0.0/0"], "toPort": -1} 122 | ]}, 123 | {"groupId": "sg-2", "groupName": "group2", "ownerId": "222222", 124 | "ipPermissions": [ 125 | {"fromPort": 139, "ipProtocol": "tcp", "ipRanges": ["0.0.0.0/0"], "toPort": 139} 126 | ]}, 127 | {"groupId": "sg-3", "groupName": "empty group", "ownerId": "333333"}, 128 | {"groupId": "sg-6", "groupName": "group6", "ownerId": "444444", 129 | "ipPermissions": [ 130 | {"fromPort": 445, "ipProtocol": "tcp", "ipRanges": ["0.0.0.0/0"], "toPort": 445} 131 | ]} 132 | ] 133 | 134 | def ret_machines(args): 135 | return [ 136 | {'imageId': 'ami-1', 'instanceId': 'a', 'publicIpAddress': '1.1.1.1', "tags": [], 137 | "securityGroups": [{"groupId": "sg-1", "groupName": "group1"}], 138 | 'placement': {'availabilityZone': 'us-east-1a'} 139 | }, 140 | {'imageId': 'ami-1', 'instanceId': 'b', 'publicIpAddress': '2.1.1.1', 141 | "tags": [{"key": "Name", "value": "tag1"}], 142 | 'securityGroups': [ 143 | {"groupId": "sg-2", "groupName": "group2"}, 144 | {"groupId": "sg-1", "groupName": "group1"} 145 | ], 146 | 'placement': {'availabilityZone': 'us-east-2b'} 147 | }, 148 | {'imageId': 'ami-2', 'instanceId': 'c', 'publicIpAddress': '3.1.1.1', "tags": [], 149 | 'securityGroups': [], 150 | 'placement': {'availabilityZone': 'us-east-3c'} 151 | }, 152 | {'imageId': 'ami-3', 'instanceId': 'd', 'publicIpAddress': '4.1.1.1', "tags": [], 153 | 'securityGroups': [{"groupId": "sg-4", "groupName": "group4"}], 154 | 'placement': {'availabilityZone': 'us-east-4d'} 155 | }, 156 | {'imageId': 'ami-4', 'instanceId': 'e', 'publicIpAddress': None, 'privateIpAddress': '192.168.0.1', 157 | "tags": [{"key": "Name", "value": "tag1"}], 158 | 'securityGroups': [ 159 | {"groupId": "sg-6", "groupName": "group6"}, 160 | {"groupId": "sg-5", "groupName": "group5"} 161 | ], 162 | 'placement': {'availabilityZone': 'us-east-2b'} 163 | } 164 | ] 165 | 166 | m1 = Mock() 167 | m1.query = Mock(side_effect=ret_list) 168 | eddaclient.updateonly = Mock(return_value=m1) 169 | 170 | eddaclient.query = Mock(side_effect=ret_machines) 171 | self.plugin.init(eddaclient, self.config, {}) 172 | 173 | # run the tested method 174 | result = self.plugin.run() 175 | # print 'result', result 176 | self.assertListEqual(result, [ 177 | { 178 | 'id': 'sg-2 (group2)', 'plugin_name': 'secgroups', 179 | 'details': [{ 180 | 'fromPort': 139, 'ipRanges': ['0.0.0.0/0'], 'toPort': 139, 'ipProtocol': 'tcp', 'port_open': True, 181 | 'awsAccount': '222222', 'awsRegion': 'us-east-2', 'machines': ['b (2.1.1.1): tag1'], 182 | 'ipAddresses': ['2.1.1.1'] 183 | }] 184 | }, 185 | { 186 | 'id': 'sg-6 (group6)', 'plugin_name': 'secgroups', 187 | 'details': [{ 188 | 'fromPort': 445, 'ipRanges': ['0.0.0.0/0'], 'toPort': 445, 'ipProtocol': 'tcp', 'port_open': True, 189 | 'awsAccount': '444444', 'awsRegion': 'us-east-2', 'machines': ['e (192.168.0.1): tag1'], 190 | 'ipAddresses': ['192.168.0.1'] 191 | }] 192 | } 193 | ]) 194 | 195 | m1.query.assert_has_calls([call('/api/v2/aws/securityGroups;_expand')]) 196 | eddaclient.query.assert_has_calls([call('/api/v2/view/instances;_expand')]) 197 | 198 | def test_whitelist_ip_config(self): 199 | test_config = {"whitelisted_ips": ["^ just a comment", "192.168.0.1", "1.2.3.4/32", "8.8.8.8/24"]} 200 | plugin = SecurityGroupPlugin() 201 | plugin.init(edda_client=None, status=None, config=test_config) 202 | self.assertIn(IPNetwork("192.168.0.1/32"), plugin.whitelisted_ips) 203 | 
self.assertIn(IPNetwork("1.2.3.4/32"), plugin.whitelisted_ips) 204 | self.assertIn(IPNetwork("8.8.8.8/24"), plugin.whitelisted_ips) 205 | 206 | 207 | def main(): 208 | unittest.main() 209 | 210 | 211 | if __name__ == '__main__': 212 | main() 213 | -------------------------------------------------------------------------------- /tests/plugins/test_sso.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import unittest 3 | 4 | from mock import Mock, call 5 | from httpretty import HTTPretty, httprettified 6 | from plugins import SSOUnprotected, SecurityHeaders 7 | import plugins.sso 8 | 9 | 10 | class PluginSsoTestCase(unittest.TestCase): 11 | def setUp(self): 12 | self.plugin = SSOUnprotected() 13 | self.assertEqual(self.plugin.plugin_name, 'sso_unprotected') 14 | 15 | @httprettified 16 | def test_run(self, *mocks): 17 | eddaclient = Mock() 18 | eddaclient._since = 500 19 | 20 | def ret_list(args): 21 | return [ 22 | {'name': 'full-https.prezi.com', 'instanceId': 'a', 'launchTime': 400, 23 | "resourceRecords": [{"value": "127.0.0.2"}]}, 24 | {'name': 'godauth.prezi.com', 'instanceId': 'b', 'launchTime': 600, 25 | "resourceRecords": [{"value": "127.0.0.2"}]}, 26 | {'name': 'vuln.prezi.com', 'instanceId': 'b', 'launchTime': 600, 27 | "resourceRecords": [{"value": "127.0.0.2"}]}, 28 | {'name': 'prezi-sso.prezi.com', 'instanceId': 'b', 'launchTime': 600, 29 | "resourceRecords": [{"value": "127.0.0.2"}]}, 30 | {'name': 'prezi-sso2.prezi.com', 'instanceId': 'b', 'launchTime': 600, 31 | "resourceRecords": [{"value": "127.0.0.2"}]}, 32 | {'name': 'prezi-sso3.prezi.com', 'instanceId': 'b', 'launchTime': 600, 33 | "resourceRecords": [{"value": "127.0.0.2"}]}, 34 | {'name': 'full-https2.prezi.com', 'instanceId': 'a', 'launchTime': 400, 35 | "resourceRecords": [{"value": "127.0.0.2"}]}, 36 | {'name': 'full-https3.prezi.com', 'instanceId': 'a', 'launchTime': 400, 37 | "resourceRecords": [{"value": "127.0.0.2"}]}, 38 | {'name': 'tbd-https.prezi.com', 'instanceId': 'a', 'launchTime': 400, 39 | "resourceRecords": [{"value": "127.0.0.2"}]}, 40 | ] 41 | 42 | def public_ip(args): 43 | return [ 44 | {'imageId': 'ami-1', 'publicIpAddress': 'a', 'launchTime': 400, 45 | "resourceRecords": [{"value": "127.0.0.1.prezi.com"}]}, 46 | {'imageId': 'ami-2', 'publicIpAddress': 'b', 'launchTime': 600, 47 | "resourceRecords": [{"value": "127.0.0.2"}]}, 48 | {'imageId': 'ami-3', 'publicIpAddress': 'c', 'launchTime': 800, 49 | "resourceRecords": [{"value": "127.0.0.3.prezi.com"}, {"value": "127.0.0.4"}]}, 50 | ] 51 | 52 | m = Mock() 53 | m.query = Mock(side_effect=ret_list) 54 | m1 = Mock() 55 | m1.query = Mock(side_effect=public_ip) 56 | eddaclient.clean = Mock(return_value=m) 57 | eddaclient.soft_clean = Mock(return_value=m1) 58 | 59 | self.plugin.init(eddaclient, { 60 | 'godauth_url': '^https://prezi\\.com/api/v2/auth/godauth/\\?ref=https?://([^/]+)/?', 61 | 'sso_url': '^https://sso\\.prezi\\.com/auth/\\?redirect_uri=https://([^/]+)/?'}, {}) 62 | 63 | HTTPretty.register_uri(HTTPretty.GET, 'http://vuln.prezi.com', 64 | body='[{"title": "Test Deal"}]', 65 | adding_headers={ 66 | 'Location': 'None' 67 | }, 68 | status=200) 69 | HTTPretty.register_uri(HTTPretty.GET, 'https://godauth.prezi.com', 70 | body='[{"title": "Test Deal"}]', 71 | adding_headers={ 72 | 'Location': "https://prezi.com/api/v2/auth/godauth/?ref=https://godauth.prezi.com" 73 | }, 74 | status=302) 75 | HTTPretty.register_uri(HTTPretty.GET, 'http://full-https.prezi.com', 76 | body='[{"title": "Test 
Deal"}]', 77 | adding_headers={ 78 | 'Location': 'https://full-https.prezi.com' 79 | }, 80 | status=302) 81 | HTTPretty.register_uri(HTTPretty.GET, 'http://full-https2.prezi.com', 82 | body='[{"title": "Test Deal"}]', 83 | adding_headers={ 84 | 'Location': 'https://full-https2.prezi.com/' 85 | }, 86 | status=302) 87 | HTTPretty.register_uri(HTTPretty.GET, 'http://full-https3.prezi.com/', 88 | body='[{"title": "Test Deal"}]', 89 | adding_headers={ 90 | 'Location': 'https://full-https3.prezi.com' 91 | }, 92 | status=302) 93 | HTTPretty.register_uri(HTTPretty.GET, 'http://tbd-https.prezi.com/', 94 | body='[{"title": "Test Deal"}]', 95 | adding_headers={ 96 | 'Location': 'https://tbd-https2.prezi.com' 97 | }, 98 | status=302) 99 | HTTPretty.register_uri(HTTPretty.GET, 'https://prezi-sso.prezi.com', 100 | body='[{"title": "Test Deal"}]', 101 | adding_headers={ 102 | 'Location': "https://sso.prezi.com/auth/?redirect_uri=https://prezi-sso.prezi.com" 103 | }, 104 | status=302) 105 | HTTPretty.register_uri(HTTPretty.GET, 'https://prezi-sso2.prezi.com', 106 | body='[{"title": "Test Deal"}]', 107 | adding_headers={ 108 | 'Location': "https://sso.prezi.com/auth/?redirect_uri=https://prezi-sso2.prezi.com/" 109 | }, 110 | status=302) 111 | HTTPretty.register_uri(HTTPretty.GET, 'https://prezi-sso3.prezi.com/', 112 | body='[{"title": "Test Deal"}]', 113 | adding_headers={ 114 | 'Location': "https://sso.prezi.com/auth/?redirect_uri=https://prezi-sso3.prezi.com" 115 | }, 116 | status=302) 117 | 118 | # run the tested method 119 | result = list(self.plugin.run()) 120 | 121 | self.assertEqual("https://sso.prezi.com/auth/?redirect_uri=https://prezi-sso.prezi.com", 122 | plugins.sso.fetch_url('https://prezi-sso.prezi.com')[1]['headers']['location']) 123 | self.assertEqual("https://full-https.prezi.com", 124 | plugins.sso.fetch_url('http://full-https.prezi.com')[1]['headers']['location']) 125 | self.assertEqual("https://prezi.com/api/v2/auth/godauth/?ref=https://godauth.prezi.com", 126 | plugins.sso.fetch_url('https://godauth.prezi.com')[1]['headers']['location']) 127 | self.assertEqual(('http://bla.prezi.com', None), plugins.sso.fetch_url('http://bla.prezi.com')) 128 | self.assertEqual(2, len(result)) 129 | self.assertListEqual([ 130 | {'id': 'http://tbd-https.prezi.com', 131 | 'plugin_name': 'sso_unprotected', 132 | 'details': [ 133 | 'This domain (http://tbd-https.prezi.com) is neither behind SSO nor GODAUTH because redirects to https://tbd-https2.prezi.com']}, 134 | {'id': 'http://vuln.prezi.com', 135 | 'plugin_name': 'sso_unprotected', 136 | 'details': [ 137 | 'This domain (http://vuln.prezi.com) is neither behind SSO nor GODAUTH because redirects to None']}], 138 | result) 139 | 140 | m.query.assert_has_calls([call('/api/v2/aws/hostedRecords;_expand')]) 141 | 142 | def test_get_successful_responses(self): 143 | responses = { 144 | 'foo.prezi.com': {'code': 301, 'headers': {'location': 'bar'}}, 145 | 'bar.prezi.com': {'code': 401, 'headers': {}}, 146 | 'non-working.prezi.com': {'code': 500, 'headers': {}}, 147 | } 148 | 149 | result = self.plugin.get_successful_responses(responses) 150 | 151 | self.assertEqual({'foo.prezi.com': 'bar'}, result) 152 | 153 | 154 | class PluginSecurityHeadersTestCase(unittest.TestCase): 155 | def setUp(self): 156 | self.plugin = SecurityHeaders() 157 | self.assertEqual(self.plugin.plugin_name, 'security_headers') 158 | 159 | @httprettified 160 | def test_run(self, *mocks): 161 | eddaclient = Mock() 162 | eddaclient._since = 500 163 | 164 | def ret_list(args): 165 | return [ 166 
| {'name': 'full-https.prezi.com', 'instanceId': 'a', 'launchTime': 400, 167 | "resourceRecords": [{"value": "127.0.0.2"}]}, 168 | {'name': 'godauth.prezi.com', 'instanceId': 'b', 'launchTime': 600, 169 | "resourceRecords": [{"value": "127.0.0.2"}]}, 170 | {'name': 'vuln.prezi.com', 'instanceId': 'b', 'launchTime': 600, 171 | "resourceRecords": [{"value": "127.0.0.2"}]}, 172 | {'name': 'prezi-sso.prezi.com', 'instanceId': 'b', 'launchTime': 600, 173 | "resourceRecords": [{"value": "127.0.0.2"}]}, 174 | ] 175 | 176 | def public_ip(args): 177 | return [ 178 | {'imageId': 'ami-1', 'publicIpAddress': 'a', 'launchTime': 400, 179 | "resourceRecords": [{"value": "127.0.0.1.prezi.com"}]}, 180 | {'imageId': 'ami-2', 'publicIpAddress': 'b', 'launchTime': 600, 181 | "resourceRecords": [{"value": "127.0.0.2"}]}, 182 | {'imageId': 'ami-3', 'publicIpAddress': 'c', 'launchTime': 800, 183 | "resourceRecords": [{"value": "127.0.0.3.prezi.com"}, {"value": "127.0.0.4"}]}, 184 | ] 185 | 186 | m = Mock() 187 | m.query = Mock(side_effect=ret_list) 188 | m1 = Mock() 189 | m1.query = Mock(side_effect=public_ip) 190 | eddaclient.clean = Mock(return_value=m) 191 | eddaclient.soft_clean = Mock(return_value=m1) 192 | 193 | self.plugin.init(eddaclient, {}, {}) 194 | 195 | HTTPretty.register_uri(HTTPretty.GET, 'http://vuln.prezi.com', 196 | body='[{"title": "Test Deal"}]', 197 | adding_headers={ 198 | }, 199 | status=200) 200 | HTTPretty.register_uri(HTTPretty.GET, 'https://godauth.prezi.com', 201 | body='[{"title": "Test Deal"}]', 202 | adding_headers={ 203 | 'x-frame-options': "INVALID" 204 | }, 205 | status=200) 206 | HTTPretty.register_uri(HTTPretty.GET, 'http://full-https.prezi.com', 207 | body='[{"title": "Test Deal"}]', 208 | adding_headers={ 209 | 'Location': 'https://full-https.prezi.com', 210 | 'X-FRAME-OPTIONS': 'DENY' 211 | }, 212 | status=201) 213 | HTTPretty.register_uri(HTTPretty.GET, 'https://prezi-sso.prezi.com', 214 | body='[{"title": "Test Deal"}]', 215 | adding_headers={ 216 | 'X-FRAME-OPTIONS': 'SAMEORIGIN' 217 | }, 218 | status=200) 219 | 220 | # run the tested method 221 | result = list(self.plugin.run()) 222 | 223 | self.assertEqual(1, len(result)) 224 | result = result[0] 225 | self.assertEqual(["This webpage (http://vuln.prezi.com) does not have X-Frame-Options header"], 226 | result["details"]) 227 | self.assertEqual("http://vuln.prezi.com", result["id"]) 228 | 229 | m.query.assert_has_calls([call('/api/v2/aws/hostedRecords;_expand')]) 230 | 231 | 232 | def main(): 233 | unittest.main() 234 | 235 | 236 | if __name__ == '__main__': 237 | main() 238 | -------------------------------------------------------------------------------- /tests/test_data/test_iam_diff.txt: -------------------------------------------------------------------------------- 1 | --- /edda/api/v2/aws/iamUsers/alice;_pp;_at=1391699439200 2 | +++ /edda/api/v2/aws/iamUsers/alice;_pp;_at=1391699990294 3 | @@ -1,32 +1,25 @@ 4 | { 5 | "accessKeys" : [ 6 | { 7 | "accessKeyId" : "xxx", 8 | "class" : "com.amazonaws.services.identitymanagement.model.AccessKeyMetadata", 9 | "createDate" : "2013-06-19T20:19:43.000Z", 10 | "status" : "Active", 11 | "userName" : "alice" 12 | - }, 13 | - { 14 | - "accessKeyId" : "xxx", 15 | - "class" : "com.amazonaws.services.identitymanagement.model.AccessKeyMetadata", 16 | - "createDate" : "2014-02-06T15:05:45.000Z", 17 | - "status" : "Active", 18 | - "userName" : "alice" 19 | } 20 | ], 21 | "attributes" : { 22 | "arn" : "arn:aws:iam::111:user/alice", 23 | "class" : 
"com.amazonaws.services.identitymanagement.model.User", 24 | "createDate" : "2013-06-19T20:19:43.000Z", 25 | "path" : "/", 26 | "userId" : "xxx", 27 | "userName" : "alice" 28 | }, 29 | "groups" : [ 30 | "developers", 31 | "devops" 32 | ], 33 | "name" : "alice", 34 | "userPolicies" : [ ] 35 | } 36 | --- /edda/api/v2/aws/iamUsers/alice;_pp;_at=1391699139228 37 | +++ /edda/api/v2/aws/iamUsers/alice;_pp;_at=1391699390361 38 | @@ -1,25 +1,32 @@ 39 | { 40 | "accessKeys" : [ 41 | { 42 | "accessKeyId" : "xxx", 43 | "class" : "com.amazonaws.services.identitymanagement.model.AccessKeyMetadata", 44 | "createDate" : "2013-06-19T20:19:43.000Z", 45 | "status" : "Active", 46 | "userName" : "alice" 47 | + }, 48 | + { 49 | + "accessKeyId" : "xxx", 50 | + "class" : "com.amazonaws.services.identitymanagement.model.AccessKeyMetadata", 51 | + "createDate" : "2014-02-06T15:05:45.000Z", 52 | + "status" : "Active", 53 | + "userName" : "alice" 54 | } 55 | ], 56 | "attributes" : { 57 | "arn" : "arn:aws:iam::111:user/alice", 58 | "class" : "com.amazonaws.services.identitymanagement.model.User", 59 | "createDate" : "2013-06-19T20:19:43.000Z", 60 | "path" : "/", 61 | "userId" : "xxx", 62 | "userName" : "alice" 63 | }, 64 | "groups" : [ 65 | "developers", 66 | "devops" 67 | ], 68 | "name" : "alice", 69 | "userPolicies" : [ ] 70 | } 71 | --- /edda/api/v2/aws/iamUsers/alice;_pp;_at=1383750287313 72 | +++ /edda/api/v2/aws/iamUsers/alice;_pp;_at=1391699090249 73 | @@ -1,24 +1,25 @@ 74 | { 75 | "accessKeys" : [ 76 | { 77 | "accessKeyId" : "xxx", 78 | "class" : "com.amazonaws.services.identitymanagement.model.AccessKeyMetadata", 79 | "createDate" : "2013-06-19T20:19:43.000Z", 80 | "status" : "Active", 81 | "userName" : "alice" 82 | } 83 | ], 84 | "attributes" : { 85 | "arn" : "arn:aws:iam::111:user/alice", 86 | "class" : "com.amazonaws.services.identitymanagement.model.User", 87 | "createDate" : "2013-06-19T20:19:43.000Z", 88 | "path" : "/", 89 | "userId" : "xxx", 90 | "userName" : "alice" 91 | }, 92 | "groups" : [ 93 | - "developers" 94 | + "developers", 95 | + "devops" 96 | ], 97 | "name" : "alice", 98 | "userPolicies" : [ ] 99 | } 100 | -------------------------------------------------------------------------------- /tests/test_data/test_invalid.json: -------------------------------------------------------------------------------- 1 | { 2 | "plugin.newtag": {}, 3 | } 4 | } -------------------------------------------------------------------------------- /tests/test_data/test_status_file.json: -------------------------------------------------------------------------------- 1 | { 2 | "plugin.newtag": {}, 3 | "plugin.iam": {}, 4 | "since": 1392126281000, 5 | "plugin.elbs": {}, 6 | "plugin.missingtag": {}, 7 | "plugin.s3acl": {}, 8 | "plugin.secgroups": {}, 9 | "plugin.ami": { 10 | "first_seen": { 11 | "ami-111": 1392100947000 12 | } 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /tests/test_reddalert.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import unittest 4 | 5 | from mock import Mock, call 6 | from reddalert import Reddalert 7 | 8 | 9 | APPDIR = "%s/" % os.path.dirname(os.path.realpath(__file__)) 10 | 11 | 12 | class ReddalertTestCase(unittest.TestCase): 13 | def setUp(self): 14 | self.reddalert = Reddalert 15 | self.test_status_file = APPDIR + 'test_data/test_status_file.json' 16 | self.test_invalid_json = APPDIR + 'test_data/test_invalid.json' 17 | self.test_json_data = { 18 | 
u'plugin.newtag': {}, 19 | u'plugin.iam': {}, 20 | u'since': 1392126281000, 21 | u'plugin.elbs': {}, u'plugin.missingtag': {}, 22 | u'plugin.s3acl': {}, 23 | u'plugin.secgroups': {}, 24 | u'plugin.ami': {u'first_seen': {u'ami-111': 1392100947000}} 25 | } 26 | 27 | def test_get_since(self): 28 | self.assertEqual(self.reddalert.get_since('2014-02-10 00:00:00'), 1391990400000) 29 | self.assertEqual(self.reddalert.get_since('123456789'), 123456789) 30 | self.assertEqual(self.reddalert.get_since('asd'), None) 31 | self.assertEqual(self.reddalert.get_since(''), None) 32 | self.assertEqual(self.reddalert.get_since(12345678), None) 33 | 34 | def test_load_json(self): 35 | logger = Mock() 36 | 37 | self.assertEqual(self.reddalert.load_json('asd', logger), {}) 38 | self.assertEqual(self.reddalert.load_json(self.test_status_file, logger), self.test_json_data) 39 | 40 | self.assertEqual(self.reddalert.load_json(self.test_invalid_json, logger), {}) 41 | self.assertEqual(logger.mock_calls, [ 42 | call.exception("Failed to read file '%s'", 'asd'), 43 | call.exception("Invalid JSON file '%s'", self.test_invalid_json) 44 | ]) 45 | 46 | def test_save_json(self): 47 | logger = Mock() 48 | 49 | self.assertFalse(self.reddalert.save_json('/tmp', {}, logger)) 50 | self.assertFalse(self.reddalert.save_json('/tmp' * 100, {'foo': 'bar'}, logger)) 51 | self.assertTrue(self.reddalert.save_json('/tmp/reddalert_test.tmp', self.test_json_data, logger)) 52 | 53 | self.assertEqual(logger.mock_calls, [ 54 | call.warning('Got empty JSON content, not updating status file!'), 55 | call.exception("Failed to write file '%s'", '/tmp' * 100) 56 | ]) 57 | 58 | def test_get_config(self): 59 | config = {'b': 1} 60 | self.assertEqual(self.reddalert.get_config('b', config), 1) 61 | self.assertEqual(self.reddalert.get_config('a', config), None) 62 | self.assertEqual(self.reddalert.get_config('a', config, 'arg'), 'arg') 63 | self.assertEqual(self.reddalert.get_config('b', config, 'arg'), 'arg') 64 | self.assertEqual(self.reddalert.get_config('b', config, None, 'default'), 1) 65 | self.assertEqual(self.reddalert.get_config('a', config, None, 'default'), 'default') 66 | 67 | 68 | def main(): 69 | unittest.main() 70 | 71 | 72 | if __name__ == '__main__': 73 | main() 74 | --------------------------------------------------------------------------------