├── .gitignore ├── LICENSE ├── README.md ├── TODO.org ├── greylost-ignore ├── greylost-pid-check.sh ├── greylost.py ├── pypacket.py ├── pysniffer.py ├── record_types.py └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | *.log 3 | *.pid 4 | .#* 5 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2019 Daniel Roberson 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining 4 | a copy of this software and associated documentation files (the 5 | "Software"), to deal in the Software without restriction, including 6 | without limitation the rights to use, copy, modify, merge, publish, 7 | distribute, sublicense, and/or sell copies of the Software, and to 8 | permit persons to whom the Software is furnished to do so, subject to 9 | the following conditions: 10 | 11 | The above copyright notice and this permission notice shall be 12 | included in all copies or substantial portions of the Software. 13 | 14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 15 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 16 | MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 17 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE 18 | LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION 19 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION 20 | WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Greylost 2 | 3 | Greylost sniffs DNS traffic and logs queries. It implements a time-based 4 | filter to narrow the scope of DNS logs for analysts to examine; if 5 | traffic to Google is typical for your environment, you won't be 6 | inundated with these query logs, but WILL get logs for 7 | malwaredomain123.xyz if that is an atypical query. 8 | 9 | This can be installed locally, on a resolver/forwarder, or on a 10 | machine plugged into a switchport that is mirroring ports.
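Under the hood, the time-based filter is a bloom filter whose elements expire after a set lifetime: a query/response that has not been seen within the filter window is a "greylist miss" and gets logged. Below is a minimal sketch of the idea, mirroring how greylost.py drives the `TimeFilter` class from `dmfrbloom`; the size, precision, and lifetime shown are just greylost's defaults (`-s`, `-p`, `-t`), not tuning advice.

```
from dmfrbloom.timefilter import TimeFilter

# 10 million elements, 0.1% false-positive rate, 24-hour shelf life
timefilter = TimeFilter(10000000, 0.001, 60 * 60 * 24)

def seen_recently(element):
    """Return True if element hit the filter within its shelf life."""
    hit = timefilter.lookup(element) is not False
    timefilter.add(element)  # add (or refresh) the element either way
    return hit
```

A miss (`seen_recently(...) == False`) is what ends up in the greylist-miss log once the learning period has passed.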
11 | 12 | ## Installation 13 | ``` 14 | pip3 install -r requirements.txt 15 | ``` 16 | 17 | ## Usage 18 | ``` 19 | usage: greylost.py [-h] [--alllog ALLLOG] [--notdnslog NOTDNSLOG] 20 | [--greylistmisslog GREYLISTMISSLOG] [-b BPF] [-d] 21 | [--learningtime LEARNINGTIME] [--logging] [--ignore IGNORE] 22 | [-i INTERFACE] [-o] [-p PRECISION] [-r PIDFILE] 23 | [-s FILTERSIZE] [-t FILTERTIME] [-v] [-w DUMPFILE] 24 | 25 | greylost by @dmfroberson 26 | 27 | optional arguments: 28 | -h, --help show this help message and exit 29 | --alllog ALLLOG /path/to/all-log -- log of all DNS queries 30 | --notdnslog NOTDNSLOG 31 | /path/to/not-dns-log -- log of non-DNS protocol 32 | traffic 33 | --greylistmisslog GREYLISTMISSLOG 34 | /path/to/greylist-miss-log -- log of greylist misses 35 | -b BPF, --bpf BPF BPF filter to apply to the sniffer 36 | -d, --daemonize Daemonize 37 | --learningtime LEARNINGTIME 38 | Time to baseline queries before alerting on greylist 39 | misses 40 | --logging Toggle logging 41 | --ignore IGNORE File containing list of domains to ignore when 42 | greylisting 43 | -i INTERFACE, --interface INTERFACE 44 | Interface to sniff 45 | -o, --stdout Toggle stdout output 46 | -p PRECISION, --precision PRECISION 47 | Precision of bloom filter. Ex: 0.001 48 | -r PIDFILE, --pidfile PIDFILE 49 | Path to PID file 50 | -s FILTERSIZE, --filtersize FILTERSIZE 51 | Size of bloom filter 52 | -t FILTERTIME, --filtertime FILTERTIME 53 | Filter time 54 | -v, --verbose increase verbosity 55 | -w DUMPFILE, --dumpfile DUMPFILE 56 | Write captured packets to a dumpfile 57 | ``` 58 | 59 | Example: 60 | ``` 61 | ./greylost.py -i eth0 --stdout --logging 62 | ``` 63 | 64 | ## Splunk 65 | The JSON logs provided by greylost can be indexed by Splunk. 66 | 67 | ### Quickstart 68 | Add indexes: 69 | ``` 70 | greylost-all 71 | greylost-misses 72 | greylost-malware 73 | greylost-notdns 74 | ``` 75 | 76 | Assuming you have the Universal Forwarder installed and configured: 77 | ``` 78 | splunk add monitor /path/to/greylost-all.log -index greylost-all 79 | splunk add monitor /path/to/greylost-misses.log -index greylost-misses 80 | splunk add monitor /path/to/greylost-malware.log -index greylost-malware 81 | splunk add monitor /path/to/greylost-notdns.log -index greylost-notdns 82 | ``` 83 | 84 | ### Searching 85 | No dashboards or applications exist (yet), but here are some queries 86 | I've found useful: 87 | 88 | Search for resolutions of _malware.com_: 89 | ``` 90 | index=greylost-all "questions{}.qname"="malware.com." 91 | ``` 92 | 93 | Counts of queries per host: 94 | ``` 95 | index=greylost-misses | chart count by saddr 96 | ``` 97 | 98 | Counts of query types: 99 | ``` 100 | index=greylost-misses | chart count by "questions{}.qtype" 101 | ``` 102 | 103 | Hosts sending non-DNS traffic: 104 | ``` 105 | index=greylost-notdns | chart count by saddr 106 | ``` 107 | 108 | Hosts querying lots of TXT records: 109 | ``` 110 | index=greylost-misses "questions{}.qtype"=TXT | chart count by saddr 111 | ``` -------------------------------------------------------------------------------- /TODO.org: -------------------------------------------------------------------------------- 1 | Greylost TODO 2 | 3 | This looks dumb rendered in GitHub. Open with org-mode for better 4 | results.
5 | 6 | * DONE get basic PoC working 7 | CLOSED: [2019-11-24 Sun 19:51] 8 | * DONE timestamps 9 | CLOSED: [2019-11-26 Tue 08:04] 10 | * DONE sort responses before adding to bloom filter 11 | CLOSED: [2019-11-26 Tue 15:26] 12 | Queries with multiple responses aren't guaranteed to be in the same 13 | order each time they are queried. These should be sorted prior to 14 | adding to the bloom filter so that they aren't counted dozens of times 15 | due to being out of order. 16 | * DONE baseline timer 17 | CLOSED: [2019-11-27 Wed 14:49] 18 | don't alert on new queries before N time passes. This allows the 19 | software to baseline DNS queries and not give alerts. 20 | * DONE argparse for interface, promisc, etc 21 | CLOSED: [2019-11-28 Thu 15:28] 22 | 23 | * DONE logging 24 | CLOSED: [2019-11-28 Thu 19:54] 25 | * DONE HUP signal reopens log files. 26 | CLOSED: [2019-11-28 Thu 22:03] 27 | * DONE daemonize 28 | CLOSED: [2019-11-29 Fri 09:31] 29 | * DONE finish IPv6 in pypacket 30 | CLOSED: [2019-11-29 Fri 22:12] 31 | * DONE investigate pypacket alternatives 32 | CLOSED: [2019-11-29 Fri 22:12] 33 | * DONE offline mode? 34 | CLOSED: [2019-12-08 Sun 09:25] 35 | +This might not work great; don't know if pcaps keep timestamps in a 36 | manner that I can utilize.+ 37 | 38 | Abandoning this idea. Might do a different toolset to analyze pcaps. 39 | 40 | https://www.elvidence.com.au/understanding-time-stamps-in-packet-capture-data-pcap-files/ 41 | * DONE add mmh3 to requirements.txt 42 | CLOSED: [2019-12-08 Sun 10:24] 43 | This should speed it up a bit. 44 | * DONE Splunk/ELK 45 | CLOSED: [2019-12-08 Sun 11:43] 46 | Add examples of how to ingest this data. Don't really have to add any 47 | code for this... 48 | * DONE ignore list for bloom filter 49 | CLOSED: [2019-12-11 Wed 10:08] 50 | McAfee is making a ton of random resolutions. We know that this 51 | particular case is benign, so add some feature to ignore these 52 | queries. 53 | * DONE cli flags to set logfile paths 54 | CLOSED: [2019-12-12 Thu 07:30] 55 | * DONE ability to save/reload filter (for reboots/restarts) 56 | CLOSED: [2019-12-12 Thu 14:05] 57 | * DONE log in pcap format 58 | CLOSED: [2019-12-12 Thu 14:19] 59 | * DONE test on authoritative DNS server 60 | CLOSED: [2019-12-12 Thu 14:19] 61 | * DONE remove repetitive patterns 62 | CLOSED: [2019-12-12 Thu 22:35] 63 | * DONE cli flags to enable/disable specific logs (all, not dns, ...) 64 | CLOSED: [2019-12-13 Fri 08:58] 65 | * DONE webhook alerts 66 | CLOSED: [2019-12-13 Fri 09:09] 67 | For really important events, send a webhook alert. 68 | Closing this; should be done via Splunk or ELK. 69 | * DONE TimeFilter stores decimal currently. Look into storing as int instead. 70 | CLOSED: [2019-12-13 Fri 21:17] 71 | Since we don't need this precision, look into storing integers to save 72 | space in RAM and on disk when it's pickled. 73 | * DONE pid file watchdog script for crontab 74 | CLOSED: [2019-12-13 Fri 22:40] 75 | * DONE handle out of memory issues gracefully 76 | CLOSED: [2019-12-13 Fri 22:31] 77 | Currently if there's not enough RAM, it throws a memory error and 78 | crashes. Catch these exceptions and be able to calculate how much RAM 79 | a filter at a given size will require. 80 | * DONE cleanup: are _functions necessary? 81 | CLOSED: [2019-12-14 Sat 07:56] 82 | * DONE use syslog when daemonized; service starts, stops, signal received, ... 83 | CLOSED: [2019-12-14 Sat 11:49] 84 | * TODO config file 85 | * TODO systemd and init scripts to start as a service 86 | * TODO rotate pcap files?
87 | * TODO Alerting for resolutions of known-bad domains 88 | 89 | http.kali.org 90 | start.parrotsec.org 91 | 92 | ** TODO ability to pull in from feeds 93 | This might be worthy of an entire new tool. Be able to pull in 94 | multiple sources and store them in a manner that can be used 95 | universally. 96 | * TODO shared bloom filter when using multiple resolvers 97 | This will be another project, but has other potential use cases: 98 | - NSRL 99 | - known bad malware hashes 100 | - is a password known to be in a breach? 101 | - known good hashes for wordpress, drupal, joomla, ... 102 | 103 | 104 | example HTTP API: 105 | /add?filter=name_here&element=element_goes_here 106 | /lookup?filter=name_here&element=element_goes_here 107 | * TODO add malicious domains to blocklist when used with dnsmasq 108 | * TODO detect dns protocol abuses 109 | - weird TXT/NULL records 110 | - reallylongsubdomaintosqueezeineverypossiblebyte.whatever.com 111 | - hex/baseN encoded stuff: aabbccddeeff.whatever.com 112 | - volume 113 | - +not dns at all.. they are just sending data over port 53+ 114 | * TODO setup.py 115 | * TODO log to socket 116 | Splunk and ELK can receive input from a TCP or UDP socket. Add an 117 | option to ship logs in this manner. This may be useful when operating 118 | as a sensor with limited resources. 119 | 120 | Nice to have: 121 | - encryption 122 | - compression 123 | - maintain integrity if networking fails 124 | * TODO interactive mode 125 | ** TODO command prompt w/ readline and whatnot. 126 | ** TODO ability to toggle settings. 127 | ** TODO ability to query/add elements to ignore/malware lists 128 | ** TODO highlight output 129 | -------------------------------------------------------------------------------- /greylost-ignore: -------------------------------------------------------------------------------- 1 | # These give a high volume of queries and are generally safe to ignore in 2 | # my particular environment. YMMV!! 3 | # Also, note the trailing "." -- these are necessary! 4 | .netflix.com. 5 | .nflxso.com. 6 | .amazonvideo.com. 7 | 8 | -------------------------------------------------------------------------------- /greylost-pid-check.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # greylost-pid-check.sh 4 | # -- respawn greylost if it dies. 5 | # -- meant to be placed in your crontab! 6 | # -- 7 | # -- * * * * * /path/to/greylost-pid-check.sh 8 | 9 | PIDFILE="/var/run/greylost.pid" 10 | GREYLOST="/path/to/greylost.py -i eth0 -d --logging --pidfile $PIDFILE" 11 | 12 | 13 | if [ ! -f "$PIDFILE" ]; then 14 | echo "greylost not running. Attempting to start." 15 | $GREYLOST 16 | exit 17 | else 18 | kill -0 "$(head -n 1 "$PIDFILE")" 2>/dev/null 19 | if [ $? -eq 0 ]; then 20 | exit 0 21 | else 22 | echo "greylost not running. Attempting to start." 23 | $GREYLOST 24 | fi 25 | fi 26 | 27 | -------------------------------------------------------------------------------- /greylost.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | """ greylost - DNS threat hunting.
""" 4 | 5 | import os 6 | import sys 7 | import json 8 | import copy 9 | import argparse 10 | import signal 11 | import pickle 12 | import atexit 13 | import syslog 14 | import statistics 15 | from base64 import b64encode 16 | from time import sleep, time 17 | from datetime import datetime 18 | 19 | import dnslib 20 | from dmfrbloom.timefilter import TimeFilter 21 | 22 | from pysniffer import Sniffer 23 | from record_types import DNS_RECORD_TYPES 24 | 25 | 26 | class RollingLog(): 27 | def __init__(self, size): 28 | self.size = size 29 | self.log = [None for _ in range(size)] 30 | 31 | def add(self, data): 32 | self.log += [data] 33 | if len(self.log) > self.size: 34 | self.log = self.log[-1 * self.size:] 35 | 36 | def clear(self): 37 | self.log = [] 38 | 39 | def error_fatal(msg, exit_value=os.EX_USAGE): 40 | """ error_fatal() - Log an error and exit. 41 | 42 | Args: 43 | msg (str) - Error message. 44 | exit_value - Value to pass to exit() 45 | 46 | Returns: 47 | Nothing. 48 | """ 49 | error(msg) 50 | exit(exit_value) 51 | 52 | 53 | def error(msg): 54 | """ error() - Log an error. 55 | 56 | Args: 57 | msg (str) - Error message. 58 | 59 | Returns: 60 | Nothing. 61 | """ 62 | if not Settings.get("daemonize"): 63 | print("[-] %s" % msg, file=sys.stderr) 64 | syslog.syslog(msg) 65 | 66 | 67 | def log(msg): 68 | """ log() - Log a message. 69 | 70 | Args: 71 | msg (str) - Message. 72 | 73 | Returns: 74 | Nothing 75 | """ 76 | if not Settings.get("daemonize"): 77 | print("[+] %s" % msg) 78 | syslog.syslog(msg) 79 | 80 | 81 | def parse_dns_response(response_list): 82 | """ parse_dns_response() - Parses DNS responses. 83 | 84 | Args: 85 | response_list (list) - Response list 86 | 87 | Returns: 88 | List of responses sorted by "rdata" key. 89 | """ 90 | response_sorted_list = [] 91 | for response in response_list: 92 | # 93 | if isinstance(response.rdata, dnslib.EDNSOption): 94 | rdata = {"code": response.rdata.code, "data": response.rdata.data} 95 | else: 96 | rdata = str(response.rdata) 97 | 98 | response_sorted_list.append({"rname": str(response.rname), 99 | "rtype": DNS_RECORD_TYPES[response.rtype], 100 | "rtype_id": response.rtype, 101 | "rclass": response.rclass, 102 | "rdata": rdata, 103 | "ttl": response.ttl}) 104 | # Sort results to avoid logical duplicates in the bloom filter 105 | return sorted(response_sorted_list, key=lambda k: k["rdata"]) 106 | 107 | 108 | def parse_dns_packet(packet): 109 | """ parse_dns_packet() - Converts DNS packet to a dict. 110 | 111 | Args: 112 | packet (Packet object) - The packet to parse. 113 | 114 | Returns: 115 | dict representing the DNS packet. 
116 | """ 117 | output = {} 118 | 119 | output["timestamp"] = datetime.now().replace(microsecond=0).isoformat() 120 | output["protocol"] = packet.protocol 121 | output["saddr"] = packet.saddr 122 | output["daddr"] = packet.daddr 123 | output["sport"] = packet.sport 124 | output["dport"] = packet.dport 125 | 126 | try: 127 | dns_packet = dnslib.DNSRecord.parse(packet.data) 128 | except (dnslib.dns.DNSError, TypeError): 129 | if packet.data: 130 | # TODO don't encode if everything is printable 131 | output["payload"] = b64encode(packet.data).decode("utf-8") 132 | return output 133 | 134 | output["id"] = dns_packet.header.id 135 | output["q"] = dns_packet.header.q 136 | output["a"] = dns_packet.header.a 137 | 138 | if dns_packet.questions: 139 | output["questions"] = [] 140 | for question in dns_packet.questions: 141 | output["questions"].append({"qname": str(question.qname), 142 | "qtype": DNS_RECORD_TYPES[question.qtype], 143 | "qtype_id": question.qtype, 144 | "qclass": question.qclass}) 145 | 146 | if dns_packet.rr: 147 | output["rr"] = parse_dns_response(dns_packet.rr) 148 | if dns_packet.auth: 149 | output["auth"] = parse_dns_response(dns_packet.auth) 150 | if dns_packet.ar: 151 | output["ar"] = parse_dns_response(dns_packet.ar) 152 | 153 | return output 154 | 155 | 156 | def stdout_packet_json(packet_dict): 157 | """ stdout_packet_json() - Prints DNS packet in JSON format to stdout. 158 | 159 | Args: 160 | packet_dict (dict) - dict derived from parse_dns_packet() 161 | 162 | Returns: 163 | True 164 | """ 165 | print(json.dumps(packet_dict)) 166 | return True 167 | 168 | 169 | def stdout_greylist_miss(packet_dict): 170 | """ stdout_greylist_miss() - Prints greylist misses to stdout. 171 | 172 | Args: 173 | packet_dict (dict) - dict containing information about the packet. 174 | 175 | Returns: 176 | True 177 | """ 178 | print("Not in filter:", json.dumps(packet_dict, indent=4)) 179 | return True 180 | 181 | 182 | def check_greylist_ignore_list(packet_dict): 183 | """ check_greylist_ignore_list() - Check the greylist ignore list prior to 184 | adding to the greylist; this benign and 185 | trusted services such as Netflix that 186 | have a ton of IP addresses and subdomains 187 | from wasting space in the filter. 188 | 189 | Args: 190 | packet_dict (dict) - dict derived from parse_dns_packet() 191 | 192 | Returns: 193 | True if query should be ignored. 194 | False if query should be added to the greylist. 195 | """ 196 | try: 197 | for question in packet_dict["questions"]: 198 | for ignore in Settings.get("greylist_ignore_domains"): 199 | if question["qname"].endswith(ignore): 200 | return True 201 | # .example.com. matches foo.example.com. and example.com. 202 | if ignore.startswith(".") and question["qname"].endswith(ignore[1:]): 203 | return True 204 | except KeyError: 205 | pass 206 | return False 207 | # TODO should I check responses too? need to collect more data... 208 | # for response in ["rr", "ar", "auth"]: 209 | # try: 210 | # for chk in packet_dict[response]: 211 | # for ign in Settings.get("greylist_ignore_domains"): 212 | # if chk["rdata"].endswith(ign) or chk["rname"].endswith(ign): 213 | # return True 214 | # # .example.com. matches foo.example.com. and example.com. 215 | # if ign[0] == "." and chk["rname"].endswith(ign[1:]): 216 | # return True 217 | # if ign[0] == "." 
and chk["rdata"].endswith(ign[1:]): 218 | # return True 219 | # except KeyError: 220 | # pass 221 | # return False 222 | 223 | 224 | def average_packet(packet): 225 | current = Settings.get("average_current") 226 | if packet: 227 | try: 228 | current[packet.saddr + " " + packet.daddr] += 1 229 | except KeyError: 230 | current[packet.saddr + " " + packet.daddr] = 1 231 | Settings.set("average_current", current) 232 | 233 | history = Settings.get("average_history") 234 | if (time() - Settings.get("average_last")) > 10: 235 | for key in current: 236 | if key in history.keys(): 237 | history[key].add(current[key]) 238 | else: 239 | history[key] = RollingLog(60 * 10) # 60 * 10 240 | history[key].add(current[key]) 241 | 242 | # Reap expired host pairs 243 | current_keys = set(current.keys()) 244 | new_history = history 245 | for key in history.copy(): 246 | if key not in current_keys: 247 | new_history[key].add(0) 248 | if set(history[key].log) == {0}: 249 | new_history.pop(key) 250 | del history 251 | 252 | # Detect spikes in volume of traffic. This currently does not work 253 | # correctly. It detects spikes in traffic, but takes several minutes 254 | # or hours to correct itself. 255 | 256 | # TODO if current minute is N% higher than average? 257 | # TODO calculate median absolute deviation? 258 | for x in new_history: 259 | z = [i for i in new_history[x].log if i != None] 260 | if len(z[:-1]) > 1: 261 | print(x) 262 | mean = statistics.mean(z[:-1]) 263 | stddev = statistics.stdev(z[:-1]) 264 | print(" ", 265 | "SPIKE" if stddev > mean else "", 266 | max(z), 267 | mean, 268 | new_history[x].log[-1], 269 | stddev) 270 | #if stddev > mean: 271 | # print("SPIKE", x) 272 | 273 | Settings.set("average_history", new_history) 274 | Settings.set("average_current", {}) 275 | Settings.set("average_last", time()) 276 | 277 | 278 | def timefilter_packet(packet_dict): 279 | """ timefilter_packet() - Add a packet to the greylist. This sorts and omits 280 | volatile data such as port numbers, timestamps, 281 | and the order of responses to prevent duplicate 282 | elements from being added to the filter. 283 | 284 | Args: 285 | packet_dict (dict) - dict derived from parse_dns_packet() 286 | 287 | Returns: 288 | True if element was successfully added to the filter. 289 | False if element was not added to the filter. 290 | """ 291 | 292 | # Check if domain is in ignore list. 293 | if check_greylist_ignore_list(packet_dict): 294 | return False 295 | 296 | timefilter = Settings.get("timefilter") 297 | 298 | # Remove volatile fields before adding to bloom filter 299 | element_dict = copy.copy(packet_dict) 300 | element_dict.pop("timestamp") 301 | element_dict.pop("sport") 302 | element_dict.pop("dport") 303 | try: 304 | element_dict.pop("id") 305 | except KeyError: 306 | return False 307 | 308 | element = json.dumps(element_dict) 309 | 310 | # Are we still baselining DNS traffic? 
311 | elapsed = time() - Settings.get("starttime") 312 | learning = not elapsed > Settings.get("filter_learning_time") 313 | 314 | if timefilter.lookup(element) is False and not learning: 315 | for method in Settings.get("greylist_miss_methods"): 316 | # Log everything rather than stripped down element_dict 317 | if method is _greylist_miss_log: 318 | _greylist_miss_log(packet_dict) 319 | continue 320 | method(element_dict) 321 | timefilter.add(element) 322 | del element_dict 323 | 324 | return True 325 | 326 | 327 | def all_log(packet_dict): 328 | """all_log() - log a DNS packet 329 | 330 | Args: 331 | packet_dict (dict) - dict derived from parse_dns_packet() 332 | 333 | Returns: 334 | True if successful, False if unsuccessful 335 | """ 336 | 337 | log_fd = Settings.get("all_log_fd") 338 | if log_fd: 339 | log_fd.write(json.dumps(packet_dict) + "\n") 340 | log_fd.flush() 341 | return True 342 | return False 343 | 344 | 345 | def _greylist_miss_log(packet_dict): 346 | """_greylist_miss_log() - log a greylist miss 347 | 348 | Args: 349 | packet_dict (dict) - dict derived from parse_dns_packet() 350 | 351 | Returns: 352 | True if successful, False if unsuccessful 353 | """ 354 | log_fd = Settings.get("greylist_miss_log_fd") 355 | if log_fd: 356 | log_fd.write(json.dumps(packet_dict) + "\n") 357 | log_fd.flush() 358 | return True 359 | return False 360 | 361 | 362 | def not_dns_log(packet_dict): 363 | """not_dns_log() - log non-DNS protocol traffic 364 | 365 | Args: 366 | packet_dict (dict) - dict derived from parse_dns_packet() 367 | 368 | Returns: 369 | True if successful, False if unsuccessful 370 | """ 371 | log_fd = Settings.get("not_dns_log_fd") 372 | if log_fd: 373 | log_fd.write(json.dumps(packet_dict) + "\n") 374 | log_fd.flush() 375 | return True 376 | return False 377 | 378 | 379 | def parse_cli(): # pylint: disable=R0915,R0912 380 | """parse_cli() -- parse CLI arguments 381 | 382 | Args: 383 | None 384 | 385 | Returns: 386 | Nothing 387 | """ 388 | description = "greylost by @dmfroberson" 389 | parser = argparse.ArgumentParser(description=description) 390 | 391 | parser.add_argument( 392 | "--alllog", 393 | default=False, 394 | help="/path/to/all-log -- log of all DNS queries") 395 | 396 | parser.add_argument( 397 | "--notdnslog", 398 | default=False, 399 | help="/path/to/not-dns-log -- log of non-DNS protocol traffic") 400 | 401 | parser.add_argument( 402 | "--greylistmisslog", 403 | default=False, 404 | help="/path/to/greylist-miss-log -- log of greylist misses") 405 | 406 | parser.add_argument( 407 | "-b", 408 | "--bpf", 409 | default="port 53 or port 5353", 410 | help="BPF filter to apply to the sniffer") 411 | 412 | parser.add_argument( 413 | "-d", 414 | "--daemonize", 415 | default=False, 416 | action="store_true", 417 | help="Daemonize") 418 | 419 | parser.add_argument( 420 | "--learningtime", 421 | default=0, 422 | type=int, 423 | help="Time to baseline queries before alerting on greylist misses") 424 | 425 | parser.add_argument( 426 | "--logging", 427 | default=False, 428 | action="store_true", 429 | help="Toggle logging") 430 | 431 | parser.add_argument( 432 | "--ignore", 433 | default=None, 434 | help="File containing list of domains to ignore when greylisting") 435 | 436 | parser.add_argument( 437 | "-i", 438 | "--interface", 439 | default="eth0", 440 | help="Interface to sniff") 441 | 442 | parser.add_argument( 443 | "-o", 444 | "--stdout", 445 | action="store_true", 446 | default=False, 447 | help="Toggle stdout output") 448 | 449 | parser.add_argument( 450 | "-p", 
451 | "--precision", 452 | default=0.001, 453 | type=int, 454 | help="Precision of bloom filter. Ex: 0.001") 455 | 456 | parser.add_argument( 457 | "-r", 458 | "--pidfile", 459 | default=None, 460 | help="Path to PID file") 461 | 462 | parser.add_argument( 463 | "--filterfile", 464 | default=None, 465 | help="Path to timefilter's state file.") 466 | 467 | parser.add_argument( 468 | "-s", 469 | "--filtersize", 470 | default=10000000, 471 | type=int, 472 | help="Size of bloom filter") 473 | 474 | parser.add_argument( 475 | "--statistics", 476 | action="store_true", 477 | help="Toggle statistics collection") 478 | 479 | parser.add_argument( 480 | "--syslog", 481 | action="store_true", 482 | help="Toggle syslog logging") 483 | 484 | parser.add_argument( 485 | "-t", 486 | "--filtertime", 487 | default=60*60*24, 488 | type=int, 489 | help="Filter time") 490 | 491 | parser.add_argument( 492 | "--toggle-all-log", 493 | action="store_true", 494 | help="Toggle all log") 495 | 496 | parser.add_argument( 497 | "--toggle-not-dns-log", 498 | action="store_true", 499 | help="Toggle not DNS log") 500 | 501 | parser.add_argument( 502 | "--toggle-greylist-miss-log", 503 | action="store_true", 504 | help="Toggle greylist miss log") 505 | 506 | parser.add_argument( 507 | "-v", 508 | "--verbose", 509 | default=False, 510 | action="store_true", 511 | help="Increase verbosity") 512 | 513 | parser.add_argument( 514 | "-w", 515 | "--dumpfile", 516 | default=None, 517 | help="Write captured packets to a dumpfile") 518 | 519 | args = parser.parse_args() 520 | 521 | Settings.set("bpf", args.bpf) 522 | Settings.set("interface", args.interface) 523 | Settings.set("filter_size", args.filtersize) 524 | Settings.set("filter_precision", args.precision) 525 | Settings.set("filter_time", args.filtertime) 526 | Settings.set("filter_learning_time", args.learningtime) 527 | Settings.set("verbose", args.verbose) 528 | Settings.set("daemonize", args.daemonize) 529 | Settings.set("pcap_dumpfile", args.dumpfile) 530 | 531 | if args.syslog: 532 | Settings.toggle("syslog") 533 | if args.toggle_all_log: 534 | Settings.toggle("logging_all") 535 | if args.toggle_not_dns_log: 536 | Settings.toggle("logging_not_dns") 537 | if args.toggle_greylist_miss_log: 538 | Settings.toggle("logging_greylist_miss") 539 | if args.statistics: 540 | Settings.toggle("statistics") 541 | 542 | if args.filterfile: 543 | try: 544 | with open(args.filterfile, "ab"): 545 | pass 546 | except PermissionError as exc: 547 | error_fatal("Unable to open filter %s: %s" % (args.filterfile, exc), 548 | exit_value=os.EX_OSFILE) 549 | Settings.set("filter_file", args.filterfile) 550 | 551 | if args.stdout: 552 | packet_methods = Settings.get("packet_methods") 553 | if stdout_packet_json not in packet_methods: 554 | packet_methods.append(stdout_packet_json) 555 | Settings.set("packet_methods", packet_methods) 556 | 557 | greylist_miss_methods = Settings.get("greylist_miss_methods") 558 | if stdout_greylist_miss not in greylist_miss_methods: 559 | greylist_miss_methods.append(stdout_greylist_miss) 560 | Settings.set("greylist_miss_methods", greylist_miss_methods) 561 | 562 | if args.logging: 563 | Settings.toggle("logging") 564 | if Settings.get("logging"): 565 | packet_methods = Settings.get("packet_methods") 566 | if all_log not in packet_methods: 567 | packet_methods.append(all_log) 568 | Settings.set("packet_methods", packet_methods) 569 | 570 | greylist_miss_methods = Settings.get("greylist_miss_methods") 571 | if _greylist_miss_log not in greylist_miss_methods: 572 | 
greylist_miss_methods.append(_greylist_miss_log) 573 | Settings.set("greylist_miss_methods", greylist_miss_methods) 574 | 575 | if args.pidfile: 576 | Settings.set("pid_file_path", args.pidfile) 577 | 578 | if args.ignore: 579 | Settings.set("greylist_ignore_domains_file", args.ignore) 580 | populate_greylist_ignore_list(args.ignore) 581 | 582 | if args.alllog: 583 | Settings.set("all_log", args.alllog) 584 | if args.notdnslog: 585 | Settings.set("not_dns_log", args.notdnslog) 586 | if args.greylistmisslog: 587 | Settings.set("greylist_miss_log", args.greylistmisslog) 588 | 589 | 590 | def populate_greylist_ignore_list(ignore_file): 591 | """populate_greylist_ignore_list() - populate ignore list from file 592 | 593 | Args: 594 | ignore_file (str) - path to file 595 | 596 | Returns: 597 | Nothing on success. Exits with os.EX_OSFILE if the file doesn't exist 598 | """ 599 | ignore_list = set() 600 | try: 601 | with open(ignore_file, "r") as ignore: 602 | for line in ignore: 603 | if line.rstrip() == "" or line.startswith("#"): 604 | continue 605 | ignore_list.add(line.rstrip()) 606 | except FileNotFoundError as exc: 607 | error_fatal("Unable to open ignore list %s: %s" % (ignore_file, exc), 608 | exit_value=os.EX_OSFILE) 609 | Settings.set("greylist_ignore_domains", ignore_list) 610 | 611 | 612 | def sig_hup_handler(signo, frame): # pylint: disable=W0613 613 | """sig_hup_handler() - Handle SIGHUP signals 614 | 615 | Args: 616 | signo (unused) - signal number 617 | frame (unused) - frame 618 | 619 | Returns: 620 | Nothing 621 | """ 622 | log("Caught HUP signal.") 623 | log("Reloading log files.") 624 | if Settings.get("logging"): 625 | log_fds = ["all_log_fd", "greylist_miss_log_fd", "not_dns_log_fd"] 626 | for log_fd in log_fds: 627 | if not Settings.get(log_fd): 628 | continue 629 | Settings.get(log_fd).close() 630 | new_fd = open_log_file(Settings.get(log_fd[:-3])) # strip "_fd" 631 | Settings.set(log_fd, new_fd) 632 | 633 | if Settings.get("greylist_ignore_domains_file"): 634 | log("Reloading greylist ignore list") 635 | populate_greylist_ignore_list(Settings.get("greylist_ignore_domains_file")) 636 | # TODO rotate pcaps? 
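# Because SIGHUP reopens the log files, external log rotation is easy to
# bolt on. A hypothetical logrotate postrotate stanza (paths illustrative):
#
#     postrotate
#         kill -HUP "$(head -n 1 /var/run/greylost.pid)"
#     endscript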
637 | 638 | 639 | def startup_blurb(): 640 | """startup_blurb() - show settings and other junk""" 641 | if not Settings.get("verbose"): 642 | return 643 | print(" interface: %s" % Settings.get("interface")) 644 | print(" bpf: %s" % Settings.get("bpf")) 645 | print(" filter size: %s" % Settings.get("filter_size")) 646 | print(" filter time: %s" % Settings.get("filter_time")) 647 | print(" filter precision: %s" % Settings.get("filter_precision")) 648 | print(" learning time: %s" % Settings.get("filter_learning_time")) 649 | print(" logging: %s" % Settings.get("logging")) 650 | if Settings.get("logging"): 651 | print(" miss log: %s" % Settings.get("greylist_miss_log")) 652 | print(" all log: %s" % Settings.get("all_log")) 653 | if Settings.get("pcap_dumpfile"): 654 | print(" pcap dumpfile: %s" % Settings.get("pcap_dumpfile")) 655 | if Settings.get("greylist_ignore_domains_file"): 656 | print(" ignore list: %s" % Settings.get("greylist_ignore_domains_file")) 657 | 658 | method_types = ["packet_methods", "greylist_miss_methods", "not_dns_methods"] 659 | for method_type in method_types: 660 | if not Settings.get(method_type): 661 | continue 662 | count = 0 663 | sys.stdout.write(method_type.replace("_", " ").rjust(21) + ": ") 664 | for method in Settings.get(method_type): 665 | count += 1 666 | print(method) 667 | if count < len(Settings.get(method_type)): 668 | sys.stdout.write("".rjust(23)) 669 | 670 | 671 | def write_pid_file(path): 672 | """write_pid_file() - Write current PID to specified path 673 | 674 | Args: 675 | path (str) - path to PID file 676 | 677 | Returns: 678 | True if successful, False if unsuccessful 679 | """ 680 | try: 681 | with open(path, "w") as pidfile: 682 | pidfile.write(str(os.getpid())) 683 | except PermissionError: 684 | return False 685 | return True 686 | 687 | 688 | def open_log_file(path): 689 | """open_log_file() - Open a log file for writing. 690 | 691 | Args: 692 | path (str) - Path to log file 693 | 694 | Returns: 695 | fd on success, None on failure 696 | """ 697 | try: 698 | log_fd = open(path, "a") 699 | except PermissionError: 700 | return None 701 | log_fd.flush() 702 | return log_fd 703 | 704 | 705 | def open_log_files(): 706 | """open_log_files() - Open up log files and store fds in Settings()""" 707 | # TODO what in the wild world of extreme sports is going on in here? 708 | if not Settings.get("logging"): 709 | return 710 | if Settings.get("all_log") and Settings.get("logging_all"): 711 | all_log_fd = open_log_file(Settings.get("all_log")) 712 | if all_log_fd: 713 | Settings.set("all_log_fd", all_log_fd) 714 | else: 715 | error("Something went wrong opening all_log") 716 | if Settings.get("greylist_miss_log") and Settings.get("logging_greylist_miss"): 717 | greylist_miss_fd = open_log_file(Settings.get("greylist_miss_log")) 718 | if greylist_miss_fd: 719 | Settings.set("greylist_miss_log_fd", greylist_miss_fd) 720 | else: 721 | error("Something went wrong opening greylist_miss_log") 722 | if Settings.get("not_dns_log") and Settings.get("logging_not_dns"): 723 | not_dns_fd = open_log_file(Settings.get("not_dns_log")) 724 | if not_dns_fd: 725 | Settings.set("not_dns_log_fd", not_dns_fd) 726 | else: 727 | error("Something went wrong opening not_dns_log") 728 | 729 | 730 | def save_timefilter_state(): 731 | """ save_timefilter_state() - Save TimeFilter to disk 732 | 733 | Args: 734 | None. 735 | 736 | Returns: 737 | Nothing.
738 | """ 739 | if Settings.get("filter_file"): 740 | log("Saving TimeFilter state to %s" % Settings.get("filter_file")) 741 | try: 742 | with open(Settings.get("filter_file"), "wb") as filterfile: 743 | try: 744 | pickle.dump(Settings.get("timefilter"), filterfile) 745 | except KeyboardInterrupt: # Don't let user ^C and jack up filter 746 | pass 747 | except PermissionError as exc: 748 | error_fatal("Unable to open filter %s: %s" % \ 749 | (Settings.get("filter_file"), exc), 750 | os.EX_OSFILE) 751 | 752 | 753 | def cleanup(): 754 | """ cleanup() - Function to run at exit. 755 | 756 | Args: 757 | None. 758 | 759 | Returns: 760 | Nothing. 761 | """ 762 | log("Exiting.") 763 | 764 | 765 | def memory_free(): 766 | """ memory_free() - Determine free memory on Linux hosts. 767 | 768 | Args: 769 | None. 770 | 771 | Returns: 772 | Free memory in bytes (int) on success. 773 | None on failure 774 | """ 775 | # TODO: error check. not portable!!! 776 | with open("/proc/meminfo") as meminfo: 777 | for line in meminfo: 778 | if line.startswith("MemFree:"): 779 | return int(line.split()[1]) * 1024 780 | return None 781 | 782 | 783 | class Settings(): 784 | """ Settings - Application settings object. """ 785 | __config = { 786 | "average_last": None, 787 | "average_current": {}, 788 | "average_history": {}, 789 | "statistics": False, 790 | "starttime": None, # Program start time 791 | "logging": False, # Toggle logging 792 | "logging_all": True, # Toggle all log 793 | "logging_greylist_miss": True, # Toggle logging misses 794 | "logging_not_dns": True, # Toggle logging non-DNS 795 | "daemonize": False, # Toggle daemonization 796 | "timefilter": None, # TimeFilter object 797 | "interface": None, # Interface to sniff 798 | "bpf": None, # libpcap bpf filter 799 | "filter_file": None, # Path to filter state file 800 | "filter_size": None, # Size of filter 801 | "filter_precision": 0.001, # Filter accuracy 802 | "filter_time": 60*60*24, # Shelf life for elements 803 | "filter_learning_time": 0, # Baseline for N seconds 804 | "packet_methods": [timefilter_packet], # Run these on all packets 805 | "not_dns_methods": [not_dns_log], # Run these against non-DNS 806 | "greylist_miss_methods": [], # Run these against misses 807 | "greylist_ignore_domains": set(), # Set of domains to ignore 808 | "greylist_ignore_domains_file": None, # Greylist ignore list 809 | "not_dns_log": "greylost-notdns.log", # Path to not DNS logfile 810 | "not_dns_log_fd": None, # Not DNS file descriptor 811 | "greylist_miss_log": "greylost-misses.log", # Path to miss logfile 812 | "greylist_miss_log_fd": None, # Miss log file descriptor 813 | "all_log": "greylost-all.log", # Path to all logfile 814 | "all_log_fd": None, # All log file descriptor 815 | "verbose": False, # Verbose 816 | "pid_file_path": "greylost.pid", # Path to PID file 817 | "pcap_dumpfile": None, # Path to pcap logfile 818 | "syslog": True, # Toggle syslog 819 | } 820 | 821 | @staticmethod 822 | def get(name): 823 | """ Settings.get() - Retrieve the value of a setting. 824 | 825 | Args: 826 | name (string) - Setting to retrieve. 827 | 828 | Returns: 829 | Contents of the setting on success, raises NameError if setting 830 | doesn't exist. 831 | """ 832 | try: 833 | return Settings.__config[name] 834 | except KeyError: 835 | raise NameError("Not a valid setting for get():", name) 836 | 837 | 838 | @staticmethod 839 | def set(name, value): 840 | """ Settings.set() - Apply a setting. 841 | 842 | Args: 843 | name (string) - Name of setting. 
844 | value - Value to apply to setting. 845 | 846 | Returns: 847 | True on success. Raises NameError if setting doesn't exist. 848 | """ 849 | if name in Settings.__config: 850 | Settings.__config[name] = value 851 | return True 852 | raise NameError("Not a valid setting for set():", name) 853 | 854 | 855 | @staticmethod 856 | def toggle(name): 857 | """ Settings.toggle() - Toggle a setting: True->False, False->True. 858 | 859 | Args: 860 | name (string) - Name of setting to toggle. 861 | 862 | Returns: 863 | Value of setting after it has been toggled on success. 864 | Raises NameError if setting doesn't exist. 865 | Raises TypeError if setting isn't a bool. 866 | """ 867 | try: 868 | if isinstance(Settings.__config[name], bool): 869 | Settings.__config[name] = not Settings.__config[name] 870 | return Settings.__config[name] 871 | raise TypeError("Setting %s is not a boolean." % name) 872 | except KeyError: 873 | raise NameError("Not a valid setting for toggle():", name) 874 | 875 | 876 | def main(): # pylint: disable=R0912,R0915 877 | """ main() - Entry point. """ 878 | parse_cli() 879 | 880 | # Set up logging 881 | syslog.openlog(ident="greylost", logoption=syslog.LOG_PID) 882 | open_log_files() 883 | syslog.syslog("Started") 884 | 885 | # Fork into the background, if desired. 886 | if Settings.get("daemonize"): 887 | try: 888 | pid = os.fork() 889 | if pid > 0: 890 | exit(os.EX_OK) 891 | except OSError as exc: 892 | error_fatal("fork(): %s" % exc, exit_value=os.EX_OSERR) 893 | 894 | # Write PID file 895 | write_pid_file(Settings.get("pid_file_path")) 896 | 897 | # Signal and atexit handlers 898 | signal.signal(signal.SIGHUP, sig_hup_handler) 899 | atexit.register(save_timefilter_state) 900 | atexit.register(cleanup) 901 | 902 | # Used for scheduling... 903 | Settings.set("starttime", time()) 904 | Settings.set("average_last", time()) 905 | 906 | # Create timefilter 907 | # TODO save other metadata to disk; start time, filter attributes, etc. 908 | # i think this will cause problems if invoked again with different 909 | # values while using a filter file, but have not tested yet. 910 | try: 911 | Settings.set("timefilter", TimeFilter(Settings.get("filter_size"), 912 | Settings.get("filter_precision"), 913 | Settings.get("filter_time"))) 914 | except MemoryError: 915 | timefilter = TimeFilter(1, 0.1, 1) # Dummy filter for ideal_size() 916 | error_fatal("Out of memory. Filter requires %d bytes. %d available." % \ 917 | (timefilter.ideal_size(Settings.get("filter_size"), 918 | Settings.get("filter_precision")), 919 | memory_free()), 920 | exit_value=os.EX_SOFTWARE) 921 | 922 | # Restore TimeFilter's saved state from disk 923 | if Settings.get("filter_file"): 924 | try: 925 | with open(Settings.get("filter_file"), "rb") as filterfile: 926 | timefilter = filterfile.read() 927 | except FileNotFoundError: 928 | timefilter = None 929 | if timefilter: 930 | # TODO other filter settings? currently not saving these and 931 | # relying on CLI/config to be correct.
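            # NOTE: pickle should only ever load filter state this process
            # wrote itself; a corrupt, truncated, or untrusted state file
            # lands in the broad except below and aborts startup.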
932 | try: 933 | Settings.set("timefilter", pickle.loads(timefilter)) 934 | except Exception as exc: # pylint: disable=W0703 935 | error_fatal("Failed to unpickle time filter %s: %s" % \ 936 | (Settings.get("filter_file"), exc), 937 | exit_value=os.EX_SOFTWARE) 938 | del timefilter 939 | 940 | # Display startup settings (debugging) 941 | startup_blurb() 942 | 943 | # Start sniffer 944 | capture = Sniffer(Settings.get("interface"), bpf=Settings.get("bpf")) 945 | result = capture.start() 946 | if not result[0]: 947 | error_fatal("Unable to open interface %s for sniffing: %s" % \ 948 | (Settings.get("interface"), result[1]), 949 | exit_value=os.EX_NOINPUT) 950 | capture.setnonblock() 951 | 952 | # Open pcap logfile if desired 953 | if Settings.get("pcap_dumpfile"): 954 | result = capture.dump_open(Settings.get("pcap_dumpfile")) 955 | if not result[0]: 956 | error_fatal("Unable to open %s for writing: %s" % \ 957 | (Settings.get("pcap_dumpfile"), result[1]), 958 | exit_value=os.EX_OSFILE) 959 | 960 | while True: 961 | packet = capture.next() 962 | if packet is False: 963 | error_fatal("Interface %s went down. Exiting." % \ 964 | Settings.get("interface"), 965 | exit_value=os.EX_UNAVAILABLE) 966 | 967 | # Detect spikes in DNS traffic 968 | if Settings.get("statistics"): 969 | average_packet(packet) 970 | 971 | # No packet is available, sleep and try again. 972 | if packet is None: 973 | sleep(0.05) # Avoid 100% CPU 974 | continue 975 | 976 | output = parse_dns_packet(packet) 977 | 978 | # Do stuff to non-DNS packets 979 | try: 980 | if output["payload"]: 981 | for method in Settings.get("not_dns_methods"): 982 | method(output) 983 | except KeyError: 984 | pass 985 | 986 | # Do stuff to packets 987 | for method in Settings.get("packet_methods"): 988 | method(output) 989 | 990 | # Not Reached 991 | return True 992 | 993 | 994 | if __name__ == "__main__": 995 | try: 996 | main() 997 | except KeyboardInterrupt: 998 | pass 999 | -------------------------------------------------------------------------------- /pypacket.py: -------------------------------------------------------------------------------- 1 | """ pypacket - parse packets so a human can use them """ 2 | 3 | import socket 4 | import ipaddress 5 | from struct import unpack 6 | 7 | class Packet(): 8 | """Packet class - Parses an Ethernet frame so you don't have to!@#""" 9 | # pylint: disable=too-many-instance-attributes 10 | def __init__(self, raw_packet): # pylint: disable=R0915 11 | self.packet = raw_packet # Raw packet contents 12 | self.data = None # Packet data 13 | self.ethertype = None # Ethertype. 14 | self.protocol = None # Protocol: TCP, UDP, ICMP, ARP, IGMP, ...
15 | self.saddr = None # Source IP 16 | self.daddr = None # Destination IP 17 | self.shwaddr = None # Source MAC 18 | self.dhwaddr = None # Destination MAC 19 | self.sport = None # Source port 20 | self.dport = None # Destination port 21 | self.ttl = None # Time to Live 22 | self.seq = None # Sequence number 23 | self.ack = None # Acknowledgement number 24 | self.window = None # TCP window 25 | self.tcpflags = None # TCP flags 26 | self.tcpheaderlen = None # TCP header length 27 | self.length = None # Length 28 | self.checksum = None # Checksum 29 | self.icmptype = None # ICMP type 30 | self.icmpcode = None # ICMP code 31 | self.dot1q_offset = 0 # Offset for 802.1q header 32 | 33 | # Constants 34 | self.ethernet_header_length = 14 35 | self.ip_header_length = 20 36 | self.ipv6_header_length = 40 37 | self.tcp_header_length = 20 38 | self.udp_header_length = 8 39 | self.icmp_header_length = 4 40 | self.dot1q_header_length = 4 41 | 42 | # Parse Ethernet header 43 | self.ethernet_header = \ 44 | unpack("!6s6sH", raw_packet[:self.ethernet_header_length]) 45 | 46 | self.dhwaddr = self.mac_address(self.ethernet_header[0]) 47 | self.shwaddr = self.mac_address(self.ethernet_header[1]) 48 | self.ethertype = self.ethernet_header[2] 49 | 50 | # Check Ethertype and parse accordingly 51 | if self.ethertype == 0x0800 or self.ethertype == 0x8100: # IP 52 | # Check for 802.1q 53 | if self.ethertype == 0x8100: 54 | self.dot1q_offset = 4 55 | self.parse_ip_header() 56 | 57 | if self.protocol == 6: 58 | self.parse_tcp_header() 59 | elif self.protocol == 17: 60 | self.parse_udp_header() 61 | elif self.protocol == 1: 62 | self.parse_icmp_header() 63 | elif self.protocol == 2: 64 | self.parse_igmp_header() 65 | elif self.protocol == 89: 66 | self.parse_ospf_header() 67 | elif self.protocol == 103: 68 | self.parse_pim_header() 69 | else: # UNKNOWN PROTOCOL 70 | # TODO shouldn't print() this... 
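                # IANA protocol numbers handled above: 6=TCP, 17=UDP, 1=ICMP,
                # 2=IGMP, 89=OSPF, 103=PIM. Anything else falls through to
                # this debug print.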
71 | print(self.protocol, self.packet) 72 | 73 | elif self.ethertype == 0x86dd: # IPv6 74 | self.parse_ipv6_header() 75 | 76 | elif self.ethertype == 0x0806: # ARP 77 | self.parse_arp() 78 | 79 | def parse_arp(self): 80 | """Packet.parse_arp() - Parse ARP packets.""" 81 | # TODO: finish this 82 | self.protocol = "ARP" 83 | 84 | def parse_ospf_header(self): 85 | """Packet.parse_ospf_header() - Parse OSPF headers.""" 86 | # TODO finish this 87 | self.protocol = "OSPF" 88 | 89 | def parse_pim_header(self): 90 | """Packet.parse_pim_header() - Parse PIM headers.""" 91 | # TODO finish this 92 | self.protocol = "PIM" 93 | 94 | def parse_ipv6_header(self): 95 | """Packet.parse_ipv6_header() - Parse IPv6 packets.""" 96 | # TODO: finish this 97 | self.protocol = "IPv6" 98 | offset = self.ethernet_header_length 99 | ipv6_header = unpack("!LHBB16s16s", 100 | self.packet[offset:offset+self.ipv6_header_length]) 101 | self.length = ipv6_header[1] 102 | self.hoplimit = ipv6_header[3] 103 | self.saddr = str(ipaddress.IPv6Address(ipv6_header[4])) 104 | self.daddr = str(ipaddress.IPv6Address(ipv6_header[5])) 105 | if ipv6_header[2] == 6: 106 | self.parse_tcp_header() 107 | elif ipv6_header[2] == 17: 108 | self.parse_udp_header() 109 | elif ipv6_header[2] == 58: # ICMPv6 110 | self.parse_icmp_header() 111 | 112 | def parse_igmp_header(self): 113 | """Packet.parse_igmp_header() - Parse IGMP header.""" 114 | # TODO: finish this 115 | self.protocol = "IGMP" 116 | 117 | def parse_icmp_header(self): 118 | """Packet.parse_icmp_header() - Parse ICMP header.""" 119 | self.protocol = "ICMP6" if self.protocol == "IPv6" else "ICMP" 120 | offset = self.dot1q_offset + self.ethernet_header_length 121 | if self.protocol == "ICMP": 122 | offset += self.ip_header_length 123 | elif self.protocol == "ICMP6": 124 | offset += self.ipv6_header_length 125 | icmp_header = unpack("!BBH", 126 | self.packet[offset:offset+self.icmp_header_length]) 127 | self.icmptype = icmp_header[0] 128 | self.icmpcode = icmp_header[1] 129 | self.checksum = icmp_header[2] 130 | self.data = self.packet[offset + self.icmp_header_length:] 131 | 132 | def parse_udp_header(self): 133 | """Packet.parse_udp_header() - Parse UDP header.""" 134 | self.protocol = "UDP6" if self.protocol == "IPv6" else "UDP" 135 | offset = self.dot1q_offset + self.ethernet_header_length 136 | if self.protocol == "UDP": 137 | offset += self.ip_header_length 138 | elif self.protocol == "UDP6": 139 | offset += self.ipv6_header_length 140 | udp_header = unpack("!HHHH", 141 | self.packet[offset:offset + self.udp_header_length]) 142 | self.sport = udp_header[0] 143 | self.dport = udp_header[1] 144 | self.length = udp_header[2] 145 | self.checksum = udp_header[3] 146 | self.data = self.packet[offset + self.udp_header_length:] 147 | 148 | def parse_tcp_header(self): 149 | """Packet.parse_tcp_header() - Parse TCP header.""" 150 | self.protocol = "TCP6" if self.protocol == "IPv6" else "TCP" 151 | offset = self.dot1q_offset + self.ethernet_header_length 152 | if self.protocol == "TCP": 153 | offset += self.ip_header_length 154 | elif self.protocol == "TCP6": 155 | offset += self.ipv6_header_length 156 | tcp_header = unpack("!HHLLBBHHH", 157 | self.packet[offset:offset + self.tcp_header_length]) 158 | self.sport = tcp_header[0] 159 | self.dport = tcp_header[1] 160 | self.seq = tcp_header[2] 161 | self.ack = tcp_header[3] 162 | self.tcpheaderlen = ((tcp_header[4] >> 4) & 0xff) * 4 163 | self.tcpflags = self.parse_tcp_flags(tcp_header[5]) 164 | self.window = tcp_header[6] 165 | self.checksum = 
tcp_header[7] 166 | self.data = self.packet[offset+self.tcpheaderlen:] 167 | 168 | @staticmethod 169 | def parse_tcp_flags(control): 170 | """parse_tcp_flags() - Determine which TCP flags are set. 171 | 172 | Args: 173 | control (int) - TCP control byte (flag bits) 174 | 175 | Returns: 176 | tcpdump-style flags (str). 177 | """ 178 | tcp_flags = "" 179 | if control & 0x01: # FIN 180 | tcp_flags += "F" 181 | if control & 0x02: # SYN 182 | tcp_flags += "S" 183 | if control & 0x04: # RST 184 | tcp_flags += "R" 185 | if control & 0x08: # PSH 186 | tcp_flags += "P" 187 | if control & 0x10: # ACK 188 | tcp_flags += "A" 189 | if control & 0x20: # URG 190 | tcp_flags += "U" 191 | if control & 0x40: # ECE 192 | tcp_flags += "E" 193 | if control & 0x80: # CWR 194 | tcp_flags += "C" 195 | return tcp_flags 196 | 197 | def parse_ip_header(self): 198 | """Packet.parse_ip_header() - Parse IP header.""" 199 | # TODO: make this complete. May need some of the other header elements 200 | # for other tools instead of ttl, protocol, source, and 201 | # destination: http://www.networksorcery.com/enp/protocol/ip.htm 202 | start = self.dot1q_offset + self.ethernet_header_length 203 | end = self.dot1q_offset + self.ethernet_header_length + self.ip_header_length 204 | ip_header = unpack("!BBHHHBBH4s4s", self.packet[start:end]) 205 | 206 | self.ttl = ip_header[5] 207 | self.protocol = ip_header[6] 208 | self.saddr = socket.inet_ntoa(ip_header[8]) 209 | self.daddr = socket.inet_ntoa(ip_header[9]) 210 | 211 | @staticmethod 212 | def mac_address(address): 213 | """mac_address() - Convert 6 bytes to human-readable MAC address. 214 | 215 | Args: 216 | address - MAC address bytes. 217 | 218 | Returns: 219 | Human-readable MAC address (str). Ex: '00:11:22:33:44:55' 220 | """ 221 | result = "" 222 | if len(address) != 6: 223 | return None 224 | for byte in address: 225 | result += "%.2x:" % byte 226 | return result[:-1] # Strip trailing colon 227 | -------------------------------------------------------------------------------- /pysniffer.py: -------------------------------------------------------------------------------- 1 | """ pysniffer - libpcap-based sniffer. """ 2 | 3 | # pylint: disable=I1101 4 | import pcapy 5 | 6 | from pypacket import Packet 7 | 8 | 9 | class Sniffer(): # pylint: disable=R0902 10 | """ Sniffer() - Sniffer object. 11 | 12 | Attributes: 13 | interface (string) - Interface to sniff on. 14 | promisc (int) - Promiscuous mode; 1 = yes, 0 = no 15 | bpf (str) - BPF filter for sniffer. 16 | snaplen (int) - snapshot length, in bytes. 17 | timeout (int) - read timeout, in milliseconds. 18 | sniffer - fd used by libpcap. 19 | dumper - Dumper object for logging pcap data 20 | dumpfile - Path to pcap dumpfile 21 | """ 22 | # pylint: disable=R0913 23 | def __init__(self, interface, promisc=1, bpf="", snaplen=65535, timeout=100): 24 | self.interface = interface 25 | self.promisc = promisc 26 | self.bpf = bpf 27 | self.snaplen = snaplen 28 | self.timeout = timeout 29 | self.sniffer = None 30 | self.dumper = None 31 | self.dumpfile = None 32 | 33 | def start(self): 34 | """ Sniffer.start() - Start the sniffer. 35 | 36 | Args: 37 | None. 38 | 39 | Returns: 40 | tuple: (True/False, None/exception message). 41 | """ 42 | try: 43 | self.sniffer = pcapy.open_live(self.interface, 44 | self.snaplen, 45 | self.promisc, 46 | self.timeout) 47 | except pcapy.PcapError as exc: 48 | return (False, exc) 49 | 50 | self.sniffer.setfilter(self.bpf) 51 | return (True, None) 52 | 53 | def next(self): 54 | """ Sniffer.next() - Get next packet from sniffer. 55 | 56 | Args: 57 | None.
58 | 59 | Returns: 60 | Packet object of the next packet on success. 61 | None if no packet exists (non-blocking). 62 | False on error. 63 | """ 64 | try: 65 | header, packet = self.sniffer.next() 66 | except pcapy.PcapError: 67 | return False 68 | 69 | if packet: 70 | # Log packet if a dumper exists. 71 | if self.dumper: 72 | self.dumper.dump(header, packet) 73 | return Packet(packet) 74 | return None 75 | 76 | def setnonblock(self): 77 | """ Sniffer.setnonblock() - set sniffer to non-blocking mode. 78 | 79 | Args: 80 | None. 81 | 82 | Returns: 83 | True if successful, False if unsuccessful. 84 | """ 85 | try: 86 | self.sniffer.setnonblock(1) 87 | except AttributeError: 88 | return False 89 | return True 90 | 91 | def dump_open(self, path): 92 | """ Sniffer.dump_open() - open dumpfile to write pcaps to. 93 | 94 | Args: 95 | path (str) - path to dumpfile. 96 | 97 | Returns: 98 | tuple: (True/False, None/exception message) 99 | """ 100 | self.dumpfile = path 101 | try: 102 | self.dumper = self.sniffer.dump_open(path) 103 | except pcapy.PcapError as exc: 104 | return (False, exc) 105 | return (True, None) 106 | 107 | def dump_close(self): 108 | """ Sniffer.dump_close() - close pcap dumpfile. 109 | 110 | Args: 111 | None. 112 | 113 | Returns: 114 | True if successful, False if unsuccessful. 115 | """ 116 | try: 117 | self.dumper.close() 118 | except AttributeError: 119 | return False 120 | return True 121 | -------------------------------------------------------------------------------- /record_types.py: -------------------------------------------------------------------------------- 1 | """ DNS Record Types """ 2 | 3 | # https://en.wikipedia.org/wiki/List_of_DNS_record_types 4 | DNS_RECORD_TYPES = { 5 | 1: "A", 6 | 28: "AAAA", 7 | 18: "AFSDB", 8 | 42: "APL", 9 | 257: "CAA", 10 | 60: "CDNSKEY", 11 | 59: "CDS", 12 | 37: "CERT", 13 | 5: "CNAME", 14 | 62: "CSYNC", 15 | 49: "DHCID", 16 | 32769: "DLV", 17 | 39: "DNAME", 18 | 48: "DNSKEY", 19 | 43: "DS", 20 | 55: "HIP", 21 | 45: "IPSECKEY", 22 | 25: "KEY", 23 | 36: "KX", 24 | 29: "LOC", 25 | 15: "MX", 26 | 35: "NAPTR", 27 | 2: "NS", 28 | 47: "NSEC", 29 | 50: "NSEC3", 30 | 51: "NSEC3PARAM", 31 | 61: "OPENPGPKEY", 32 | 12: "PTR", 33 | 46: "RRSIG", 34 | 17: "RP", 35 | 24: "SIG", 36 | 53: "SMIMEA", 37 | 6: "SOA", 38 | 33: "SRV", 39 | 44: "SSHFP", 40 | 32768: "TA", 41 | 249: "TKEY", 42 | 52: "TLSA", 43 | 250: "TSIG", 44 | 16: "TXT", 45 | 256: "URI", 46 | 47 | 255: "*", 48 | 252: "AXFR", 49 | 251: "IXFR", 50 | 41: "OPT", 51 | 52 | # Obsolete record types 53 | 3: "MD", 54 | 4: "MF", 55 | 254: "MAILA", 56 | 7: "MB", 57 | 8: "MG", 58 | 9: "MR", 59 | 14: "MINFO", 60 | 253: "MAILB", 61 | 11: "WKS", 62 | #32: "NB", # now assigned to NIMLOC 63 | #33: "NBTSTAT", # now assigned to SRV 64 | 10: "NULL", 65 | 38: "A6", 66 | 13: "HINFO", 67 | 19: "X25", 68 | 20: "ISDN", 69 | 21: "RT", 70 | 22: "NSAP", 71 | 23: "NSAP-PTR", 72 | 26: "PX", 73 | 31: "EID", 74 | 32: "NIMLOC", # duplicate id: NB 75 | 34: "ATMA", 76 | 40: "SINK", 77 | 27: "GPOS", 78 | 100: "UINFO", 79 | 101: "UID", 80 | 102: "GID", 81 | 103: "UNSPEC", 82 | 99: "SPF", 83 | 56: "NINFO", 84 | 57: "RKEY", 85 | 58: "TALINK", 86 | 104: "NID", 87 | 105: "L32", 88 | 106: "L64", 89 | 107: "LP", 90 | 108: "EUI48", 91 | 109: "EUI64", 92 | 259: "DOA", 93 | } 94 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | dnslib==0.9.17 2 | dmfrbloom==0.0.7 3 | pcapy==0.11.4 4 | 5 | # You can 
comment this out if you have problems installing it. 6 | mmh3==2.5.1 7 | --------------------------------------------------------------------------------