├── .gitignore
├── README.md
├── config.yaml
├── greynoise.py
└── requirements.txt

/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 | 
6 | # C extensions
7 | *.so
8 | 
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | *.egg-info/
24 | .installed.cfg
25 | *.egg
26 | MANIFEST
27 | 
28 | # PyInstaller
29 | # Usually these files are written by a python script from a template
30 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
31 | *.manifest
32 | *.spec
33 | 
34 | # Installer logs
35 | pip-log.txt
36 | pip-delete-this-directory.txt
37 | 
38 | # Unit test / coverage reports
39 | htmlcov/
40 | .tox/
41 | .coverage
42 | .coverage.*
43 | .cache
44 | nosetests.xml
45 | coverage.xml
46 | *.cover
47 | .hypothesis/
48 | .pytest_cache/
49 | 
50 | # Translations
51 | *.mo
52 | *.pot
53 | 
54 | # Django stuff:
55 | *.log
56 | local_settings.py
57 | db.sqlite3
58 | 
59 | # Flask stuff:
60 | instance/
61 | .webassets-cache
62 | 
63 | # Scrapy stuff:
64 | .scrapy
65 | 
66 | # Sphinx documentation
67 | docs/_build/
68 | 
69 | # PyBuilder
70 | target/
71 | 
72 | # Jupyter Notebook
73 | .ipynb_checkpoints
74 | 
75 | # pyenv
76 | .python-version
77 | 
78 | # celery beat schedule file
79 | celerybeat-schedule
80 | 
81 | # SageMath parsed files
82 | *.sage.py
83 | 
84 | # Environments
85 | .env
86 | .venv
87 | env/
88 | venv/
89 | ENV/
90 | env.bak/
91 | venv.bak/
92 | 
93 | # Spyder project settings
94 | .spyderproject
95 | .spyproject
96 | 
97 | # Rope project settings
98 | .ropeproject
99 | 
100 | # mkdocs documentation
101 | /site
102 | 
103 | # mypy
104 | .mypy_cache/
105 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | The Python script `greynoise.py` uses GreyNoise's [public/alpha API](https://github.com/GreyNoise-Intelligence/api.greynoise.io) to perform its operations.
2 | 
3 | ## Requirements
4 | * Python 3.5 or higher
5 | * The Python packages `Requests` and `PyYAML` (see `requirements.txt`)
6 | 
7 | ## Installation
8 | 1. `git clone https://github.com/marcusbakker/Greynoise`
9 | 2. `pip install -r requirements.txt`
10 | 
11 | ## Usage
12 | The script has the following features (example invocations are shown below the list):
13 | * Query for all tags associated with a given IP or CIDR IP range: `-ip`
14 | * Query all IPs/CIDR ranges within the provided file: `-f FILE, --file FILE`
15 | * Get a list of all GreyNoise's current tags: `-l, --list`
16 | * Get all IPs and their associated metadata for the provided tag: `-t TAG_ID, --tag TAG_ID`
17 | * Identify and add context to the noise in the provided CSV file. The output filename gets the prefix 'greynoise_'. The first argument points to the CSV file and the second argument to the column index (starting at 1) at which the IP address is located in the CSV file: `--csv CSV_FILE IP_COLUMN_INDEX`
18 | * Output the result to a file using the argument `-o FILE_LOCATION, --output FILE_LOCATION`. The default file format is txt; CSV and JSON are also supported: `--format {txt,csv,json}`
19 | * Hide results for IP addresses which have the status "unknown" using the argument: `-u, --hide-unknown`
20 | * GreyNoise's response for an IP address is cached for 24 hours.
21 | * Expire all entries within the IP cache: `--cache-expire`
22 | * Set the IP cache timeout in seconds: `--cache-timeout SECONDS`
23 | * The default cache timeout can be changed within `config.yaml` using the setting `cache_timeout`
24 | * Set an API key which enables you to receive more than 500 results per query: `-k KEY, --key KEY`. The API key can also be permanently set within `config.yaml`
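25 | 
26 | For example (the IP range and file names below are placeholders):
27 | ```
28 | python greynoise.py -ip 198.51.100.0/24
29 | python greynoise.py -f ip_list.txt -o results.csv --format csv
30 | python greynoise.py --csv alerts.csv 3 --hide-unknown
31 | python greynoise.py -l
32 | ```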
33 | 
34 | ## Configuration file
35 | The configuration file `config.yaml` contains the following three settings:
36 | * `api_key`: Permanently set an API key which enables you to receive more than 500 results per query.
37 | * `cache_timeout`: Set the cache timeout in seconds. The default is 24 hours.
38 | * `tags`: Add or modify GreyNoise's tag IDs and their corresponding names.
39 | 
--------------------------------------------------------------------------------
/config.yaml:
--------------------------------------------------------------------------------
1 | # API key
2 | api_key:
3 | 
4 | # timeout of the IP cache in seconds
5 | cache_timeout: 86400
6 | 
7 | # GreyNoise tag IDs and corresponding names
8 | tags:
9 | A10_NETWORKS: A10 Networks
10 | ADB_WORM: ADB Worm
11 | AHREFS: Ahrefs
12 | AIHIT: aiHit
13 | AMPERE_INNOTECH: Ampere Innotech
14 | ARCHIVE: Archive.org
15 | ASTERISK_BRUTEFORCER: Asterisk Bruteforcer
16 | AVTECH_IP_CAMERA_WORM: Avtech IP Camera Worm
17 | BAIDU_SPIDER: Baidu Spider
18 | BELKIN_N750_WORM_CVE_2014_1635: Belkin N750 Worm CVE-2014-1635
19 | BINARYEDGE: BinaryEdge.io
20 | BINGBOT: BingBot
21 | BITCOIN_NODE_SCANNER_HIGH: Bitcoin Node Scanner
22 | BITCOIN_NODE_SCANNER_LOW: Bitcoin Node Scanner
23 | BROWN_UNIVERSITY: Brown University
24 | CAMBRIDGE_CYBERCRIME_CENTRE: Cambridge Cybercrime Centre
25 | CASSANDRA_SCANNER_HIGH: Cassandra Scanner
26 | CASSANDRA_SCANNER_LOW: Cassandra Scanner
27 | CCBOT: CCBot
28 | CENSYS: Censys
30 | CGI_SCRIPT_SCANNER: CGI Script Scanner
31 | CHECKMARKNETWORK: CheckMarkNetwork
32 | CHINANET_SSH_BRUTEFORCER: CHINANET SSH Bruteforcer
33 | CLIQZ: Cliqz
34 | CLOUD_SYSTEM_NETWORKS: Cloud System Networks
35 | COBALT_STRIKE_SCANNER_HIGH: Cobalt Strike Scanner
36 | COBALT_STRIKE_SCANNER_LOW: Cobalt Strike Scanner
37 | COCCOC: Coc Coc
38 | COMPANYBOOK: CompanyBook
39 | COMPROMISED_RASPBERRY_PI: Compromised Raspberry Pi
40 | COUNTERSTRIKE_SERVER_SCANNER_HIGH: CounterStrike Server Scanner
41 | COUNTERSTRIKE_SERVER_SCANNER_LOW: CounterStrike Server Scanner
42 | CPANEL_SCANNER_HIGH: CPanel Scanner
43 | CPANEL_SCANNER_LOW: CPanel Scanner
44 | CYBERGREEN: CyberGreen
45 | DATAPROVIDER: DataProvider
46 | DLINK_2750B_WORM: D-Link 2750B Worm
47 | DLINK_850L_WORM: D-Link 850L Worm
48 | DNS_SCANNER_HIGH: DNS Scanner
49 | DNS_SCANNER_LOW: DNS Scanner
50 | DOCKERD_SCANNER_HIGH: Dockerd Scanner
51 | DOCKERD_SCANNER_LOW: Dockerd Scanner
52 | DOMAINTOOLS: DomainTools
53 | DRUPAL_CVE_2018_7600_WORM: Drupal CVE-2018-7600 Worm
54 | EIR_D1000_ROUTER_WORM: Eir D1000 Router Worm
55 | ELASTICSEARCH_SCANNER: Elasticsearch Scanner
56 | ELASTICSEARCH_SCANNER_HIGH: Elasticsearch Scanner
57 | ELASTICSEARCH_SCANNER_LOW: Elasticsearch Scanner
58 | EMBEDDED_DEVICE_WORM: Embedded Device Worm
59 | ETHEREUM_NODE_SCANNER_HIGH: Ethereum Node Scanner
60 | ETHEREUM_NODE_SCANNER_LOW: Ethereum Node Scanner
61 | EXPOSURE_MONITORING: ExposureMonitoring
62 | FACEBOOK_NETPROBE: Facebook NetProbe
63 | FAST_HTTP_AUTH_SCANNER_CLIENT: Fast HTTP Auth Scanner Client
64 | FINDMALWARE: FindMalware
65 | FTP_BRUTEFORCER: FTP Bruteforcer
66 | FTP_SCANNER_HIGH: FTP Scanner
67 | 
FTP_SCANNER_LOW: FTP Scanner 68 | GOOGLEBOT: GoogleBot 69 | GO_HTTP_CLIENT: Go HTTP Client 70 | GO_SSH_SCANNER: Go SSH Scanner 71 | GPON_CVE_2018_10561_ROUTER_WORM: GPON CVE-2018-10561 Router Worm 72 | HADOOP_YARN_WORM: Hadoop Yarn Worm 73 | HTTP_ALT_SCANNER_HIGH: HTTP Alt Scanner 74 | HTTP_ALT_SCANNER_LOW: HTTP Alt Scanner 75 | HUAWEI_HG532_UPNP_WORM_CVE_2017_17215: Huawei HG532 UPnP Worm CVE-2017-17215 76 | IBM_OBOT: IBM oBot 77 | IIS_WEBDAV_REMOTE_CODE_EXECUTION_CVE_2017_7269: IIS WebDAV Remote Code Execution CVE-2017-7269 78 | IMAP_SCANNER_HIGH: IMAP Scanner 79 | IMAP_SCANNER_LOW: IMAP Scanner 80 | INTERNET_CENSUS: Internet Census 81 | INTRINSEC: Intrinsec 82 | IOT_MQTT_SCANNER_HIGH: IOT MQTT Scanner 83 | IOT_MQTT_SCANNER_LOW: IOT MQTT Scanner 84 | IPIP: ipip.net 85 | IPSEC_VPN_SCANNER_HIGH: IPSec VPN Scanner 86 | IPSEC_VPN_SCANNER_LOW: IPSec VPN Scanner 87 | JAVA_HTTP_CLIENT: Java HTTP Client 88 | JBOSS_WORM: Jboss Worm 89 | JORGEE_HTTP_SCANNER: Jorgee HTTP Scanner 90 | LDAP_SCANNER_HIGH: LDAP Scanner 91 | LDAP_SCANNER_LOW: LDAP Scanner 92 | LINKSYS_E_SERIES_THEMOON_ROUTER_WORM: Linksys E-Series TheMoon Router Worm 93 | LINK_NET_LW_N605R_WORM_CVE_2018_16752: LINK-NET LW-N605R Worm CVE-2018-16752 94 | LITECOIN_NODE_SCANNER_HIGH: LiteCoin Node Scanner 95 | LITECOIN_NODE_SCANNER_LOW: LiteCoin Node Scanner 96 | LOSEC: LoSec 97 | MAIL_RU: Mail.RU 98 | MASSCAN_CLIENT: Masscan Client 99 | MAUIBOT: MauiBot 100 | MEMCACHED_SCANNER_HIGH: Memcached Scanner 101 | MEMCACHED_SCANNER_LOW: Memcached Scanner 102 | MINECRAFT_SCANNER_HIGH: Minecraft Scanner 103 | MINECRAFT_SCANNER_LOW: Minecraft Scanner 104 | MIRAI: Mirai 105 | MJ12BOT: MJ12bot 106 | MOJEEK: Mojeek 107 | MONGODB_SCANNER_HIGH: MongoDB Scanner 108 | MONGODB_SCANNER_LOW: MongoDB Scanner 109 | MOZ_DOTBOT: Moz DotBot 110 | MSSQL_BRUTEFORCER: MSSQL Bruteforcer 111 | MSSQL_SCANNER_HIGH: MSSQL Scanner 112 | MSSQL_SCANNER_LOW: MSSQL Scanner 113 | MYSQL_SCANNER_HIGH: MySQL Scanner 114 | MYSQL_SCANNER_LOW: MySQL Scanner 115 | NETBIOS_SCANNER_HIGH: NETBIOS Scanner 116 | NETBIOS_SCANNER_LOW: NETBIOS Scanner 117 | NETCRAFT: NetCraft 118 | NETIS_ROUTER_ADMIN_SCANNER_HIGH: Netis Router Admin Scanner 119 | NETIS_ROUTER_ADMIN_SCANNER_LOW: Netis Router Admin Scanner 120 | NET_SYSTEMS_RESEARCH: Net Systems Research 121 | NMAP: Nmap 122 | NTP_SCANNER_HIGH: NTP Scanner 123 | NTP_SCANNER_LOW: NTP Scanner 124 | ONYPHE: ONYPHE 125 | OPENLINKPROFILER: OpenLinkProfiler 126 | OPEN_PROXY_SCANNER: Open Proxy Scanner 127 | ORACLE_WEBLOGIC_CVE_2017_10271_WORM: Oracle WebLogic CVE-2017-10271 Worm 128 | PANSCIENT: Panscient 129 | PDRLABS: PDRLabs.net 130 | PHPMYADMIN_WORM: PHPMyAdmin Worm 131 | PHP_WORM: PHP Worm 132 | PINGDOM: Pingdom.com 133 | PINGZAPPER: PingZapper 134 | PING_SCANNER_HIGH: Ping Scanner 135 | PING_SCANNER_LOW: Ping Scanner 136 | PING_SCANNER_LOW: Ping Scanner 137 | POP3_SCANNER_HIGH: POP3 Scanner 138 | POP3_SCANNER_LOW: POP3 Scanner 139 | POSTGRES_BRUTEFORCER: Postgres Bruteforcer 140 | POSTGRES_SCANNER_HIGH: Postgres Scanner 141 | POSTGRES_SCANNER_LOW: Postgres Scanner 142 | PPTP_VPN_SCANNER_HIGH: PPTP VPN Scanner 143 | PPTP_VPN_SCANNER_LOW: PPTP VPN Scanner 144 | PRINTER_SCANNER_HIGH: Printer Scanner 145 | PRINTER_SCANNER_LOW: Printer Scanner 146 | PRIVOXY_PROXY_SCANNER_HIGH: Privoxy Proxy Scanner 147 | PRIVOXY_PROXY_SCANNER_LOW: Privoxy Proxy Scanner 148 | PROBETHENET: ProbeTheNet.com 149 | PROJECT25499: Project25499 150 | PROJECT_SONAR: Project Sonar 151 | PROXYBROKER: ProxyBroker 152 | PYCURL_HTTP_CLIENT: PycURL HTTP Client 153 | 
PYTHON_REQUESTS_CLIENT: Python Requests Client 154 | QUADMETRICS: Quadmetrics.com 155 | QWANT: Qwant 156 | RABBITMQ_SCANNER_HIGH: RabbitMQ Scanner 157 | RABBITMQ_SCANNER_LOW: RabbitMQ Scanner 158 | RDP_SCANNER_HIGH: RDP Scanner 159 | RDP_SCANNER_LOW: RDP Scanner 160 | REALTEK_MINIIGD_UPNP_WORM_CVE_2014_8361: Realtek Miniigd UPnP Worm CVE-2014-8361 161 | REDIS_SCANNER_HIGH: Redis Scanner 162 | REDIS_SCANNER_LOW: Redis Scanner 163 | RESIDENTIAL: Residential 164 | RIDDLER: Riddler.io 165 | ROUTER_RPC_SCANNER_HIGH: Router RPC Scanner 166 | ROUTER_RPC_SCANNER_LOW: Router RPC Scanner 167 | RUHR_UNIVERSITTT_BOCHUM: Ruhr-Universitat Bochum 168 | RWTH_AACHEN_UNIVERSITY: RWTH AACHEN University 169 | SAFEDNS: SafeDNS 170 | SEMRUSH: SEMrush 171 | SEZNAM: Seznam 172 | SHADOWSERVER: ShadowServer.org 173 | SHODAN: Shodan.io 174 | SIEMENS_PLC_SCANNER_HIGH: Siemens PLC Scanner 175 | SIEMENS_PLC_SCANNER_LOW: Siemens PLC Scanner 176 | SITEEXPLORER: SiteExplorer 177 | SMB_SCANNER_HIGH: SMB Scanner 178 | SMB_SCANNER_LOW: SMB Scanner 179 | SMTP_SCANNER_HIGH: SMTP Scanner 180 | SMTP_SCANNER_LOW: SMTP Scanner 181 | SNMP_SCANNER_HIGH: SNMP Scanner 182 | SNMP_SCANNER_LOW: SNMP Scanner 183 | SOCKS_PROXY_SCANNER_HIGH: SOCKS Proxy Scanner 184 | SOCKS_PROXY_SCANNER_LOW: SOCKS Proxy Scanner 185 | SOGOU: Sogou 186 | SQUID_PROXY_SCANNER_HIGH: Squid Proxy Scanner 187 | SQUID_PROXY_SCANNER_LOW: Squid Proxy Scanner 188 | SSDP_UPNP_SCANNER_HIGH: SSDP/UPNP Scanner 189 | SSDP_UPNP_SCANNER_LOW: SSDP/UPNP Scanner 190 | SSH_SCANNER_HIGH: SSH Scanner 191 | SSH_SCANNER_LOW: SSH Scanner 192 | SSH_WORM_HIGH: SSH Worm 193 | SSH_WORM_LOW: SSH Worm 194 | STANFORD_UNIVERSITY: Stanford University 195 | STATASTICO: Statastico 196 | STRETCHOID: Stretchoid.com 197 | TALAIA: Talaia 198 | TEAM_CYMRU: Team Cymru 199 | TELNET_SCANNER_HIGH: Telnet Scanner 200 | TELNET_SCANNER_LOW: Telnet Scanner 201 | TELNET_WORM_HIGH: Telnet Worm 202 | TELNET_WORM_LOW: Telnet Worm 203 | TFTP_SCANNER_HIGH: TFTP Scanner 204 | TFTP_SCANNER_LOW: TFTP Scanner 205 | TOR: Tor 206 | UNAUTHENTICATED_REDIS_WORM: Unauthenticated Redis Worm 207 | UNIVERSITY_OF_CALIFORNIA_BERKELEY: University of California Berkeley 208 | UNIVERSITY_OF_MICHIGAN: University of Michigan 209 | UNIVERSITY_OF_NEW_MEXICO: University of New Mexico 210 | UNKNOWN_LINUX_WORM: Unknown Linux Worm 211 | UPTIME: Uptime.com 212 | VNC_BRUTEFORCER: VNC Bruteforcer 213 | VNC_SCANNER_HIGH: VNC Scanner 214 | VNC_SCANNER_LOW: VNC Scanner 215 | VOIP_SCANNER_HIGH: VOIP Scanner 216 | VOIP_SCANNER_LOW: VOIP Scanner 217 | WEB_CRAWLER: Web Crawler 218 | WEB_SCANNER_HIGH: Web Scanner 219 | WEB_SCANNER_LOW: Web Scanner 220 | WHATWEB_SCANNER: WhatWeb Scanner 221 | WINDOWS_RDP_COOKIE_HIJACKER_CVE_2014_6318: Windows RDP Cookie Hijacker CVE-2014-6318 222 | WINRM_SCANNER_HIGH: WinRM Scanner 223 | WINRM_SCANNER_LOW: WinRM Scanner 224 | WORDPRESS_WORM: Wordpress Worm 225 | WORDPRESS_XML_RPC_WORM: Wordpress XML RPC Worm 226 | WinRM_SCANNER_LOW: WinRM Scanner 227 | YANDEX_SEARCH_ENGINE: Yandex Search Engine 228 | ZGRAB_SSH_SCANNER: ZGrab SSH Scanner 229 | ZMAP_CLIENT: ZMap Client 230 | ZMEU_WORM: ZmEu Worm 231 | ZYXEL_OS_COMMAND_INJECTION_CVE_2017_6884: Zyxel OS Command Injection CVE-2017-6884 232 | -------------------------------------------------------------------------------- /greynoise.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import re 4 | import requests 5 | import ipaddress 6 | from collections import OrderedDict 7 | from datetime import datetime 
as dt 8 | import argparse 9 | import os 10 | import pickle 11 | import csv 12 | import yaml 13 | import sys 14 | import json 15 | 16 | 17 | VERSION = '0.1' 18 | CONFIG_FILE = 'config.yaml' 19 | API_KEY = None 20 | HIDE_UNKNOWN = False 21 | IPV4_ADDRESS = re.compile('^(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$') 22 | TAGS = {} 23 | 24 | CSV_HEADER_ENRICHMENT = ['Noise', 'rDNS', 'ASN', 'Organisation', 'Tag', 'Category', 'Intention', 'Confidence', 25 | 'Datacenter', 'Operating_system', 'Link', 'Tor'] 26 | CSV_HEADER_LIST = ['Tag_ID', 'Tag_name'] 27 | CSV_HEADER_IP = ['IP', 'Noise', 'rDNS', 'rDNS_parent', 'ASN', 'Organisation', 'Tag_id', 'Tag_name', 'Category', 28 | 'Intention', 'Confidence', 'Datacenter', 'Operating_system', 'Link', 'Tor', 'First_seen', 29 | 'Last_updated'] 30 | 31 | CACHE_TIMEOUT = 60*60*24 # 24 hours, is here as a backup if not within config.yaml 32 | CACHE_LOCATION = '.api_ip_cache' 33 | CACHE_MODIFIED = False # is used to determine if a new version of the ip cache should be written to disk 34 | # structure of the dict: {ip: {'date': datetime, raw:{} }} 35 | ip_cache = {} 36 | 37 | URL_API_IP = 'http://api.greynoise.io:8888/v1/query/ip' 38 | URL_API_LIST = 'http://api.greynoise.io:8888/v1/query/list' 39 | URL_API_TAG = 'http://api.greynoise.io:8888/v1/query/tag' 40 | 41 | INDENT = 15 42 | COLUMN_NAME = 40 43 | COLUMN_CONFIDENCE = 14 44 | COLUMN_CATEGORY = 12 45 | COLUMN_INTENTION = 12 46 | COLUMN_COUNT = 8 47 | COLUMN_FIRST_SEEN = 14 48 | COLUMN_LAST_UPDATED = 14 49 | 50 | processed_IPs = set() 51 | session = None 52 | 53 | 54 | def init_menu(): 55 | menu_parser = argparse.ArgumentParser(description='Query GreyNoise', 56 | epilog='https://github.com/marcusbakker/GreyNoise') 57 | group = menu_parser.add_mutually_exclusive_group() 58 | 59 | group.add_argument('-ip', type=str, help='query for all tags associated with a given IP or CIDR IP range', metavar='IP') 60 | group.add_argument('-f', '--file', help='query all IPs/CIDR ranges within the provided file', metavar='FILE') 61 | group.add_argument('-l', '--list', help='get a list of all GreyNoise\'s current tags', action='store_true') 62 | group.add_argument('-t', '--tag', help='get all IPs and its associated metadata for the provided tag', metavar='TAG_ID') 63 | group.add_argument('--csv', help='identify the noise and add context on the noise in the provided CSV file. 
'
64 |                                  'The output filename has \'greynoise_\' as prefix',
65 |                        metavar=('CSV_FILE', 'IP_COLUMN_INDEX'), nargs=2)
66 |     menu_parser.add_argument('-o', '--output', help='output the result to a file (default format = txt)',
67 |                              metavar='FILE_LOCATION')
68 |     menu_parser.add_argument('--format', help='specify the format of the output file', choices=['txt', 'csv', 'json'], default='txt')
69 |     menu_parser.add_argument('-u', '--hide-unknown', help='hide results for IP addresses which have the status "unknown"',
70 |                              action='store_true')
71 |     menu_parser.add_argument('--cache-expire', help='expire all entries within the IP cache', action='store_true')
72 |     menu_parser.add_argument('--cache-timeout', help='set the IP cache timeout in seconds (default = 24 hours)',
73 |                              metavar='SECONDS')
74 |     menu_parser.add_argument('-k', '--key', help='API key', metavar='KEY')
75 |     menu_parser.add_argument('--version', action='version', version='%(prog)s ' + VERSION)
76 |     return menu_parser
77 | 
78 | 
79 | # load the YAML config and set some variables
80 | def load_config():
81 |     global CACHE_TIMEOUT
82 |     global TAGS
83 |     global API_KEY
84 | 
85 |     with open(CONFIG_FILE, 'r') as yaml_file:
86 |         config = yaml.load(yaml_file, Loader=yaml.SafeLoader)
87 | 
88 |     API_KEY = config['api_key']
89 |     CACHE_TIMEOUT = config['cache_timeout']
90 |     TAGS = config['tags']
91 | 
92 | 
93 | # define the length for the column tag name
94 | def initialize_column_name():
95 |     global COLUMN_NAME
96 |     COLUMN_NAME = 0
97 | 
98 |     # v = tag name
99 |     for k, v in TAGS.items():
100 |         tag_length = len(v)
101 |         if tag_length > COLUMN_NAME:
102 |             COLUMN_NAME = tag_length
103 | 
104 |     COLUMN_NAME += (INDENT + 3)
105 | 
106 | 
107 | # remove cached items older than CACHE_TIMEOUT
108 | def purge_cache():
109 |     global ip_cache
110 |     global CACHE_MODIFIED
111 |     cache_purged = dict(ip_cache)
112 |     now = dt.now()
113 | 
114 |     for k, v in ip_cache.items():
115 |         if (now-v['date_added']).total_seconds() >= CACHE_TIMEOUT:
116 |             del cache_purged[k]
117 |             CACHE_MODIFIED = True
118 | 
119 |     ip_cache = dict(cache_purged)
120 | 
121 | 
122 | def expire_cache():
123 |     if os.path.exists(CACHE_LOCATION):
124 |         os.remove(CACHE_LOCATION)
125 | 
126 | 
127 | # load the ip_cache from disk
128 | def initialize_cache():
129 |     global ip_cache
130 |     if os.path.exists(CACHE_LOCATION):
131 |         with open(CACHE_LOCATION, 'rb') as f:
132 |             ip_cache = pickle.load(f)
133 | 
134 |     purge_cache()
135 | 
136 | 
137 | # save the ip_cache to disk
138 | def save_cache():
139 |     if CACHE_MODIFIED:
140 |         with open(CACHE_LOCATION, 'wb') as f:
141 |             pickle.dump(ip_cache, f)
142 | 
143 | 
144 | # structure of the dict: {ip: {'date_added': datetime, 'raw': {} }}
145 | def add_to_cache(ip, raw_data):
146 |     global ip_cache
147 |     global CACHE_MODIFIED
148 |     if ip not in ip_cache:
149 |         now = dt.now()
150 |         ip_cache[ip] = {'date_added': now, 'raw': raw_data}
151 |         CACHE_MODIFIED = True
152 | 
153 | 
154 | # dict structure: {'item': count}
155 | def add_record_item_to_dict(item, d):
156 |     if item != '':
157 |         if item in d:
158 |             d[item] += 1
159 |         else:
160 |             d[item] = 1
161 | 
162 |     return d
163 | 
164 | 
165 | # convert GreyNoise's date string to a datetime object
166 | def get_datetime(date_string, full_date=False):
167 |     date_string = re.sub('Z$', '', date_string)
168 |     if '.'
in date_string: 169 | date = dt.strptime(re.sub('\.[0-9]+$', '', date_string), '%Y-%m-%dT%H:%M:%S') 170 | else: 171 | date = dt.strptime(date_string, '%Y-%m-%dT%H:%M:%S') 172 | if full_date: 173 | return date.strftime('%Y-%m-%d %H:%M:%S') 174 | else: 175 | return date.strftime('%Y-%m-%d') 176 | 177 | 178 | # dict structure: {'tag_id': {'tag_name': '...', 'confidence': '...', 'category': '...', 'intention': '...', 179 | # 'first_seen': '...', 'last_updated': '...', 'count': '...'} } 180 | def add_tag(tag_id, confidence, category, intention, first_seen, last_updated, dic): 181 | if tag_id in dic: 182 | dic[tag_id]['count'] += 1 183 | else: 184 | tag_name = tag_id 185 | if tag_id in TAGS: 186 | tag_name = TAGS[tag_id] 187 | 188 | first_seen_str = get_datetime(first_seen) 189 | last_updated_str = get_datetime(last_updated) 190 | 191 | dic[tag_id] = {'tag_name': tag_name, 'confidence': confidence, 'category': category, 'intention': intention, 192 | 'first_seen': first_seen_str, 'last_updated': last_updated_str, 'count': 1} 193 | return dic 194 | 195 | 196 | def print_tags(d): 197 | print('Tags:') 198 | format_string = '{:<'+str(COLUMN_NAME)+'s}{:<'+str(COLUMN_CONFIDENCE)+'s}{:<'+str(COLUMN_CATEGORY)+'s}{:<' + \ 199 | str(COLUMN_INTENTION)+'s}{:<'+str(COLUMN_COUNT)+'s}{:<'+str(COLUMN_FIRST_SEEN)+'s}{:<' + \ 200 | str(COLUMN_LAST_UPDATED)+'s}' 201 | 202 | print(format_string.format(' ' * INDENT + 'Name', 'Confidence', 'Category', 'intention', 'Count', 'First seen', 203 | 'Last updated')) 204 | 205 | print(' ' * INDENT + '-' * (COLUMN_NAME + COLUMN_CONFIDENCE + COLUMN_CATEGORY + COLUMN_INTENTION + COLUMN_COUNT + 206 | COLUMN_FIRST_SEEN + COLUMN_LAST_UPDATED - INDENT - 2)) 207 | 208 | od = OrderedDict(sorted(d.items(), key=lambda x: x[1]['count'], reverse=True)) 209 | 210 | for tag, v in od.items(): 211 | print(format_string.format(' ' * INDENT + v['tag_name'], v['confidence'], v['category'], v['intention'] 212 | , str(v['count']), v['first_seen'], v['last_updated'])) 213 | 214 | 215 | def print_single_record_item(name, record_item, colon=False): 216 | if record_item != '': 217 | if colon: 218 | name += ':' 219 | l_name = len(name) 220 | for i in range(l_name, INDENT): 221 | name += ' ' 222 | 223 | print(name + str(record_item)) 224 | 225 | 226 | def print_multi_record_item(name, record_item): 227 | name += ':' 228 | 229 | # indent 230 | name_length = len(name) 231 | for i in range(name_length, INDENT): 232 | name += ' ' 233 | 234 | printed_name = False 235 | if len(record_item) > 1: 236 | indent = 0 237 | for record, count in record_item.items(): 238 | if not printed_name: 239 | printed_name = True 240 | metadata = '[' + str(count) + '][last seen] ' 241 | indent = len(metadata)-3 242 | print(name + metadata + record) 243 | else: 244 | len_count = len(str(count)) 245 | print_single_record_item('', '[' + str(count) + '] ' + ' '*(indent-len_count) + record) 246 | elif len(record_item) == 1: 247 | item = list(record_item.keys())[0] 248 | print_single_record_item(name, str(item)) 249 | 250 | 251 | # read a file from disk 252 | def read_file(file_location): 253 | if os.path.exists(file_location): 254 | with open(file_location, 'r') as f: 255 | return f.readlines() 256 | else: 257 | print('[!] 
file does not exist') 258 | 259 | 260 | def print_ip_query(data, single_ip): 261 | if data: 262 | len_records = 0 263 | 264 | if 'records' in data: 265 | len_records = len(data['records']) 266 | 267 | if len_records == 0 and not HIDE_UNKNOWN: 268 | print_single_record_item('IP', data['ip'], colon=True) 269 | print_single_record_item('Status', data['status'], colon=True) 270 | 271 | if len_records > 0: 272 | print_single_record_item('IP', data['ip'], colon=True) 273 | 274 | records = data['records'] 275 | tags = OrderedDict() 276 | ASN = OrderedDict() 277 | reverse_DNS = OrderedDict() 278 | datacenter = OrderedDict() 279 | operating_system = OrderedDict() 280 | link = OrderedDict() 281 | organisation = OrderedDict() 282 | tor = OrderedDict() 283 | 284 | for r in records: 285 | tags = add_tag(r['name'], r['confidence'], r['category'], r['intention'], r['first_seen'], 286 | r['last_updated'], tags) 287 | reverse_DNS = add_record_item_to_dict(r['metadata']['rdns'], reverse_DNS) 288 | ASN = add_record_item_to_dict(r['metadata']['asn'], ASN) 289 | organisation = add_record_item_to_dict(r['metadata']['org'], organisation) 290 | datacenter = add_record_item_to_dict(r['metadata']['datacenter'], datacenter) 291 | operating_system = add_record_item_to_dict(r['metadata']['os'], operating_system) 292 | link = add_record_item_to_dict(r['metadata']['link'], link) 293 | tor = add_record_item_to_dict(r['metadata']['tor'], tor) 294 | 295 | print_multi_record_item('rDNS', reverse_DNS) 296 | print_multi_record_item('ASN', ASN) 297 | print_multi_record_item('Organisation', organisation) 298 | print_multi_record_item('Datacenter', datacenter) 299 | print_multi_record_item('OS', operating_system) 300 | print_multi_record_item('Link', link) 301 | print_multi_record_item('TOR', tor) 302 | print_single_record_item('Records', '[' + str(len_records) + ']', colon=True) 303 | print_tags(tags) 304 | 305 | if not single_ip and not (len_records == 0 and HIDE_UNKNOWN): 306 | print('\n' + '=' * (COLUMN_NAME + COLUMN_CONFIDENCE + COLUMN_CATEGORY + COLUMN_INTENTION + COLUMN_COUNT + 307 | COLUMN_FIRST_SEEN + COLUMN_LAST_UPDATED - 2)) 308 | 309 | 310 | # query Greynloise for an IP 311 | def query_ip(ip, single_ip=True, source_csv=False): 312 | if IPV4_ADDRESS.match(ip): 313 | global session 314 | if not session: 315 | session = requests.Session() 316 | 317 | if ip not in processed_IPs: 318 | processed_IPs.add(ip) 319 | 320 | # get the data from the ip_cache if possible 321 | if ip in ip_cache: 322 | data = ip_cache[ip]['raw'] 323 | else: 324 | post_data = {'ip': ip} 325 | if API_KEY is not None: 326 | post_data['key'] = API_KEY 327 | 328 | try: 329 | response = session.post(URL_API_IP, data=post_data) 330 | except Exception as e: 331 | print('[!] error:\n', e) 332 | return 333 | data = response.json() 334 | 335 | add_to_cache(ip, data) 336 | return data 337 | else: 338 | if source_csv: 339 | return ip_cache[ip]['raw'] 340 | else: 341 | return None 342 | else: 343 | print('[!] 
invalid ip: '+ip) 344 | if not single_ip: 345 | print('\n' + '=' * (COLUMN_NAME + COLUMN_CONFIDENCE + COLUMN_CATEGORY + COLUMN_INTENTION + COLUMN_COUNT + 346 | COLUMN_FIRST_SEEN + COLUMN_LAST_UPDATED - 2)) 347 | return None 348 | 349 | 350 | def print_tag_list(data): 351 | if data: 352 | tags = sorted(data['tags']) 353 | 354 | tag_length = 0 355 | for tag in tags: 356 | tmp_tag_length = len(tag) 357 | if tmp_tag_length > tag_length: 358 | tag_length = tmp_tag_length 359 | format_string = '{:<' + str(tag_length+3) + 's}{:<' + str(tag_length+3) + 's}' 360 | 361 | print(format_string.format('Tag ID', 'Tag name')) 362 | print('-'*(tag_length*2+6)) 363 | 364 | for tag in tags: 365 | tag_name = '' 366 | if tag in TAGS: 367 | tag_name = TAGS[tag] 368 | print(format_string.format(tag, tag_name)) 369 | 370 | 371 | # query Greynloise for the tag list 372 | def query_tag_list(): 373 | global session 374 | if not session: 375 | session = requests.Session() 376 | 377 | try: 378 | response = session.get(URL_API_LIST) 379 | except Exception as e: 380 | print('[!] error:\n', e) 381 | return 382 | 383 | data = response.json() 384 | if data['status'] == 'ok': 385 | return data 386 | else: 387 | print('[!] status is not "ok"') 388 | return None 389 | 390 | 391 | def print_tag_records(data): 392 | if 'records' not in data: 393 | print('No results for tag: ', data['tag']) 394 | return 395 | 396 | records = data['records'] 397 | 398 | print_single_record_item('Tag', data['tag'], colon=True) 399 | print_single_record_item('Records', '[' + str(data['returned_count']) + ']', colon=True) 400 | print('\n' + '=' * (COLUMN_NAME + COLUMN_CONFIDENCE + COLUMN_CATEGORY + COLUMN_INTENTION + COLUMN_COUNT + 401 | COLUMN_FIRST_SEEN + COLUMN_LAST_UPDATED - 2)) 402 | 403 | for r in records: 404 | print_single_record_item('IP', r['ip'], colon=True) 405 | print_single_record_item('rDNS', r['metadata']['rdns'], colon=True) 406 | print_single_record_item('ASN', r['metadata']['asn'], colon=True) 407 | print_single_record_item('Organisation', r['metadata']['org'], colon=True) 408 | print_single_record_item('Datacenter', r['metadata']['datacenter'], colon=True) 409 | print_single_record_item('OS', r['metadata']['os'], colon=True) 410 | print_single_record_item('Link', r['metadata']['link'], colon=True) 411 | print_single_record_item('TOR', r['metadata']['tor'], colon=True) 412 | 413 | print('\n' + '=' * (COLUMN_NAME + COLUMN_CONFIDENCE + COLUMN_CATEGORY + COLUMN_INTENTION + COLUMN_COUNT + 414 | COLUMN_FIRST_SEEN + COLUMN_LAST_UPDATED - 2)) 415 | 416 | 417 | # query Greynloise for a specific tag 418 | def query_tag(tag): 419 | global session 420 | if not session: 421 | session = requests.Session() 422 | 423 | post_data = {'tag': tag} 424 | if API_KEY is not None: 425 | post_data['key'] = API_KEY 426 | 427 | try: 428 | response = session.post(URL_API_TAG, data=post_data) 429 | except Exception as e: 430 | print('[!] 
error:\n', e) 431 | return 432 | data = response.json() 433 | return data 434 | 435 | 436 | def add_record_item_to_set(item, s): 437 | if item != '': 438 | s.add(item) 439 | return s 440 | 441 | 442 | def get_enriched_csv_row(row, data): 443 | if data: 444 | len_records = 0 445 | 446 | row.append(data['status']) 447 | 448 | if 'records' in data: 449 | len_records = len(data['records']) 450 | 451 | if len_records > 0: 452 | records = data['records'] 453 | 454 | reverse_DNS = set() 455 | ASN = set() 456 | organisation = set() 457 | tag = set() 458 | category = set() 459 | intention = set() 460 | datacenter = set() 461 | confidence = set() 462 | operating_system = set() 463 | link = set() 464 | tor = set() 465 | 466 | for r in records: 467 | reverse_DNS = add_record_item_to_set(r['metadata']['rdns'], reverse_DNS) 468 | ASN = add_record_item_to_set(r['metadata']['asn'], ASN) 469 | organisation = add_record_item_to_set(r['metadata']['org'], organisation) 470 | tag = add_record_item_to_set(r['name'], tag) 471 | category = add_record_item_to_set(r['category'], category) 472 | intention = add_record_item_to_set(r['intention'], intention) 473 | confidence = add_record_item_to_set(r['confidence'], confidence) 474 | datacenter = add_record_item_to_set(r['metadata']['datacenter'], datacenter) 475 | operating_system = add_record_item_to_set(r['metadata']['os'], operating_system) 476 | link = add_record_item_to_set(r['metadata']['link'], link) 477 | tor = add_record_item_to_set(str(r['metadata']['tor']), tor) 478 | 479 | row.append(', '.join(reverse_DNS)) 480 | row.append(', '.join(ASN)) 481 | row.append(', '.join(organisation)) 482 | row.append(', '.join(tag)) 483 | row.append(', '.join(category)) 484 | row.append(', '.join(intention)) 485 | row.append(', '.join(confidence)) 486 | row.append(', '.join(datacenter)) 487 | row.append(', '.join(operating_system)) 488 | row.append(', '.join(link)) 489 | row.append(', '.join(tor)) 490 | 491 | return row 492 | 493 | 494 | def process_csv_file(cmd_args, output_to_file): 495 | csv_file_path = cmd_args[0] 496 | ip_column_idx = int(cmd_args[1]) - 1 497 | csv_header = [] 498 | csv_column_amount = 0 499 | response_data = [] 500 | 501 | if os.path.exists(csv_file_path): 502 | with open(csv_file_path, newline='') as csvfile: 503 | # sniff into 10KB of the file to get its dialect 504 | dialect = csv.Sniffer().sniff(csvfile.read(10*1024)) 505 | csvfile.seek(0) 506 | 507 | # check with sniff if the CSV file has a header 508 | has_header = csv.Sniffer().has_header(csvfile.read(10*1024)) 509 | csvfile.seek(0) 510 | 511 | reader = csv.reader(csvfile, dialect=dialect) 512 | if has_header: 513 | csv_header = next(reader) 514 | csv_column_amount = len(csv_header) 515 | 516 | new_csv_rows = [] 517 | for row in reader: # query GreyNoise 518 | csv_column_amount = len(row) 519 | data = query_ip(row[ip_column_idx], source_csv=True, single_ip=False) 520 | new_csv_rows.append(get_enriched_csv_row(row, data)) 521 | print_ip_query(data, single_ip=False) 522 | 523 | if output_to_file: 524 | response_data.append(data) 525 | 526 | # create the CSV header 527 | if not has_header: 528 | for i in range(0, csv_column_amount): 529 | csv_header.append('column_'+str(i)) 530 | csv_header[ip_column_idx] = 'IP-address' 531 | for column_name in CSV_HEADER_ENRICHMENT: 532 | csv_header.append(column_name) 533 | 534 | # write the new CSV file 535 | dialect.quoting = csv.QUOTE_ALL 536 | dialect.escapechar = '"' 537 | greynoise_csv_file_path = 
os.path.dirname(csv_file_path)+'/greynoise_'+os.path.basename(csv_file_path) 538 | with open(greynoise_csv_file_path, 'w', newline='') as csvfile: 539 | writer = csv.writer(csvfile, dialect=dialect) 540 | writer.writerow(csv_header) 541 | writer.writerows(new_csv_rows) 542 | 543 | return response_data 544 | else: 545 | print('[!] CSV file does not exist') 546 | 547 | 548 | def write_json_output_file(data, filepath, query_type): 549 | with open(filepath, mode='w', encoding='utf-8') as json_file: 550 | if query_type == 'ip' and HIDE_UNKNOWN: 551 | data_without_unknown = [] 552 | for ip_response in data: 553 | if ip_response['status'] != 'unknown': 554 | data_without_unknown.append(ip_response) 555 | json.dump(data_without_unknown, json_file) 556 | else: 557 | json.dump(data, json_file) 558 | 559 | 560 | def write_csv_output_file(data, filepath, query_type): 561 | if data: 562 | if query_type == 'ip': 563 | with open(filepath, 'w', newline='') as csvfile: 564 | writer = csv.writer(csvfile) 565 | writer.writerow(CSV_HEADER_IP) 566 | 567 | for ip_response in data: 568 | len_records = 0 569 | 570 | if 'records' in ip_response: 571 | len_records = len(ip_response['records']) 572 | 573 | if len_records == 0 and not HIDE_UNKNOWN: 574 | row = [ip_response['ip'], ip_response['status']] 575 | row += [''] * (len(CSV_HEADER_IP) - 2) 576 | writer.writerow(row) 577 | 578 | if len_records > 0: 579 | ip = ip_response['ip'] 580 | status = ip_response['status'] 581 | 582 | for r in ip_response['records']: 583 | tag_id = r['name'] 584 | tag_name = tag_id 585 | if tag_id in TAGS: 586 | tag_name = TAGS[tag_id] 587 | 588 | row = list([ip, status]) 589 | row.append(r['metadata']['rdns']) 590 | row.append(r['metadata']['rdns_parent']) 591 | row.append(r['metadata']['asn']) 592 | row.append(r['metadata']['org']) 593 | row.append(tag_id) 594 | row.append(tag_name) 595 | row.append(r['category']) 596 | row.append(r['intention']) 597 | row.append(r['confidence']) 598 | row.append(r['metadata']['datacenter']) 599 | row.append(r['metadata']['os']) 600 | row.append(r['metadata']['link']) 601 | row.append(r['metadata']['tor']) 602 | row.append(get_datetime(r['first_seen'], full_date=True)) 603 | row.append(get_datetime(r['last_updated'], full_date=True)) 604 | 605 | writer.writerow(row) 606 | 607 | if query_type == 'tag': 608 | with open(filepath, 'w', newline='') as csvfile: 609 | writer = csv.writer(csvfile) 610 | writer.writerow(CSV_HEADER_IP) 611 | 612 | status = data['status'] 613 | for r in data['records']: 614 | 615 | tag_id = r['name'] 616 | tag_name = tag_id 617 | if tag_id in TAGS: 618 | tag_name = TAGS[tag_id] 619 | 620 | row = list([r['ip'], status]) 621 | row.append(r['metadata']['rdns']) 622 | row.append(r['metadata']['rdns_parent']) 623 | row.append(r['metadata']['asn']) 624 | row.append(r['metadata']['org']) 625 | row.append(tag_id) 626 | row.append(tag_name) 627 | row.append(r['category']) 628 | row.append(r['intention']) 629 | row.append(r['confidence']) 630 | row.append(r['metadata']['datacenter']) 631 | row.append(r['metadata']['os']) 632 | row.append(r['metadata']['link']) 633 | row.append(r['metadata']['tor']) 634 | row.append(get_datetime(r['first_seen'], full_date=True)) 635 | row.append(get_datetime(r['last_updated'], full_date=True)) 636 | 637 | writer.writerow(row) 638 | 639 | if query_type == 'list': 640 | with open(filepath, 'w', newline='') as csvfile: 641 | writer = csv.writer(csvfile) 642 | writer.writerow(CSV_HEADER_LIST) 643 | 644 | tags = sorted(data['tags']) 645 | 646 | for tag in tags: 
647 |                     tag_name = ''
648 |                     if tag in TAGS:
649 |                         tag_name = TAGS[tag]
650 |                     writer.writerow([tag, tag_name])
651 | 
652 | def menu(menu_parser):
653 |     global API_KEY
654 |     global HIDE_UNKNOWN
655 |     response_data = []
656 |     query_type = None
657 | 
658 |     load_config()
659 |     args = menu_parser.parse_args()
660 | 
661 |     if args.cache_timeout:
662 |         global CACHE_TIMEOUT
663 |         if not re.match('^[0-9]+$', args.cache_timeout):
664 |             print(args.cache_timeout+' is not a valid integer')
665 |             quit()
666 |         else:
667 |             CACHE_TIMEOUT = int(args.cache_timeout)
668 | 
669 |     if args.key:
670 |         API_KEY = args.key
671 | 
672 |     if args.hide_unknown:
673 |         HIDE_UNKNOWN = True
674 | 
675 |     if args.cache_expire:
676 |         expire_cache()
677 | 
678 |     if args.output and args.format == 'txt':
679 |         sys.stdout = open(args.output, 'w')
680 | 
681 |     if args.ip:
682 |         query_type = 'ip'
683 |         initialize_column_name()
684 |         initialize_cache()
685 | 
686 |         # try catch for invalid IP CIDR ranges
687 |         try:
688 |             net4 = ipaddress.ip_network(args.ip)
689 |         except ValueError as e:
690 |             print(e)
691 |             quit()
692 | 
693 |         for ip in net4:
694 |             data = query_ip(str(ip), single_ip=True)
695 | 
696 |             if data and args.output and args.format != 'txt':
697 |                 response_data.append(data)
698 | 
699 |             if net4.num_addresses > 1:
700 |                 print_ip_query(data, single_ip=False)
701 |             else:
702 |                 print_ip_query(data, single_ip=True)
703 | 
704 |         save_cache()
705 | 
706 |     if args.file:
707 |         query_type = 'ip'
708 |         initialize_column_name()
709 |         initialize_cache()
710 |         lines = read_file(args.file)
711 | 
712 |         for line in lines:
713 |             line = line.lstrip().rstrip()
714 |             if line != '':  # skip empty lines
715 |                 # try catch for invalid IP CIDR ranges
716 |                 try:
717 |                     net4 = ipaddress.ip_network(line)
718 |                 except ValueError as e:
719 |                     print(e)
720 |                     quit()
721 | 
722 |                 for ip in net4:
723 |                     data = query_ip(str(ip), single_ip=False)
724 |                     if data and args.output and args.format != 'txt':
725 |                         response_data.append(data)
726 |                     print_ip_query(data, single_ip=False)
727 | 
728 |         save_cache()
729 | 
730 |     if args.list:
731 |         query_type = 'list'
732 |         response_data = query_tag_list()
733 |         print_tag_list(response_data)
734 | 
735 |     if args.tag:
736 |         query_type = 'tag'
737 |         response_data = query_tag(args.tag.upper())
738 |         print_tag_records(response_data)
739 | 
740 |     # enrich CSV file
741 |     if args.csv:
742 |         query_type = 'ip'
743 |         initialize_column_name()
744 |         initialize_cache()
745 | 
746 |         if not re.match('^[0-9]+$', args.csv[1]):
747 |             print(args.csv[1]+' is not a valid integer')
748 |             quit()
749 |         else:
750 |             response_data = process_csv_file(args.csv, args.output)
751 | 
752 |         save_cache()
753 | 
754 |     # write all responses to a file (json or csv)
755 |     if response_data and args.output:
756 |         if args.format == 'json':
757 |             write_json_output_file(response_data, args.output, query_type)
758 | 
759 |         if args.format == 'csv':
760 |             write_csv_output_file(response_data, args.output, query_type)
761 | 
762 | 
763 | if __name__ == "__main__":
764 |     menu_parser = init_menu()
765 |     menu(menu_parser)
766 | 
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | requests
2 | PyYAML
--------------------------------------------------------------------------------
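
For reference, the single-IP lookup that `greynoise.py` performs boils down to a plain HTTP POST against the alpha API. The minimal sketch below mirrors `query_ip()` above; the endpoint and POST payload are taken from the script, while the `lookup_ip` helper name, the placeholder IP address and the printed fields are illustrative only, and no caching or error handling is included.

```python
# Minimal sketch of the GreyNoise alpha API call wrapped by greynoise.py.
# Endpoint and POST payload are copied from query_ip(); everything else is
# illustrative (placeholder IP, no caching, no error handling).
import json

import requests

URL_API_IP = 'http://api.greynoise.io:8888/v1/query/ip'


def lookup_ip(ip, api_key=None):
    """POST a single IP to the alpha API and return the parsed JSON response."""
    post_data = {'ip': ip}
    if api_key is not None:
        post_data['key'] = api_key
    response = requests.post(URL_API_IP, data=post_data)
    return response.json()


if __name__ == '__main__':
    result = lookup_ip('198.51.100.1')
    # 'status' is e.g. 'ok' or 'unknown'; 'records' holds the per-scanner hits.
    print(result.get('status'), len(result.get('records', [])))
    print(json.dumps(result, indent=2))
```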