├── Readme.md
├── TypoScraper
│   ├── README.md
│   ├── TypoScraper.py
│   └── dnstwist.py
├── crc32tester.py
└── sample-crc32-packet.bin
/Readme.md:
--------------------------------------------------------------------------------
1 | # Random Scripts
2 |
3 | ### Digital Bond's Random Script Repository
4 |
5 | These scripts are occasionally handy for tasks such as reverse engineering proprietary protocols, learning about binary file formats, and otherwise violating your End User License Agreement.
6 |
7 | Please only use these tools for research purposes, and only to help the Good Guys.
8 |
9 | Each script is documented below and available in this repository.
10 |
11 | * [crc32tester.py](crc32tester.py): Identify how a CRC32 is produced in a piece of binary data.
12 |
13 | ### crc32tester.py
14 |
15 | #### Authors
16 |
17 | Reid Wightman [Digital Bond, Inc](http://www.digitalbond.com)
18 |
19 | #### Prerequisite packages
20 |
21 | This tool assumes that the binascii, struct, sys, and hexdump
22 | Python libraries are installed.
23 |
24 | All but hexdump are part of a default Python 2.7 install. hexdump
25 | is only needed for the debugging/troubleshooting output; you may
26 | comment out the 'import hexdump' line if you do not plan to use
27 | debugging.
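
If needed, the hexdump module can typically be installed with pip:

    $ pip install hexdump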
28 |
29 | #### Purpose and Description
30 |
31 | This script determines how a CRC32 is generated when its position is known inside of a piece of binary data.
32 |
33 | If you are analyzing a protocol or file format and believe that you have identified a 4-byte CRC, you can use this tool to determine how the CRC was generated.
34 |
35 | The script assumes that the CRC location provided is correct. It tries both little-endian and big-endian byte orders for the CRC, and assumes that the CRC field was filled with null bytes when the CRC was calculated.
36 |
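As a minimal sketch of that search (illustrative only; the hypothetical find_crc32_params below is not the tool's own code, and it uses Python 2 string semantics like the rest of this repository):

    import binascii
    import struct

    def find_crc32_params(data, offset):
        # Zero out the suspected CRC field, as the sender presumably did
        # before computing the checksum.
        stored = data[offset:offset+4]
        zeroed = data[:offset] + "\x00\x00\x00\x00" + data[offset+4:]
        # Try every front/end trim combination, in both byte orders,
        # until the computed CRC matches the stored bytes.
        for front in range(len(data)):
            for end in range(len(data) - front):
                span = zeroed[front:len(zeroed) - end]
                crc = binascii.crc32(span) & 0xffffffff
                if struct.pack("<I", crc) == stored:
                    return (front, end, "LE")
                if struct.pack(">I", crc) == stored:
                    return (front, end, "BE")
        return None
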
37 | #### Sample usage
38 |
39 | Let's analyze an uncommon protocol. The file 'sample-crc32-packet.bin', included
40 | in this repository, is the data payload from a single CoDeSys V3 protocol
41 | packet.
42 |
43 | This packet was exported from Wireshark by observing a TCP session,
44 | selecting the unrecognized data payload, and choosing "Export Packet Bytes".
45 |
46 | If we analyze several of these packets, we notice that there appears to be a
47 | CRC32 located at offset 44 (hex 0x2C). These four bytes change values, and
48 | the values vary greatly from one packet to the next.
49 |
50 | We run the crc32tester with:
51 |
52 | $ python crc32tester.py sample-crc32-packet.bin 44
53 |
54 | Candidate found. Params: trimfront 48 trimend 0 LE
55 |
56 | The output means that we trim the first 48 bytes off of the packet in order to
57 | calculate the CRC32. If bytes had also been trimmed off of the end of the packet,
58 | the tool would have discovered that as well. Now we can produce our own packets,
59 | since we know how to calculate the CRC in this proprietary protocol.
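
For instance, here is a sketch of stamping a valid CRC into a modified packet, assuming the parameters found above (CRC at offset 44, trimfront 48, little-endian); stamp_crc is a hypothetical helper, not part of this repository:

    import binascii
    import struct

    def stamp_crc(packet, crc_offset=44, trimfront=48):
        # Null the CRC field, checksum everything past the trim point,
        # and write the result back in little-endian byte order.
        zeroed = packet[:crc_offset] + "\x00\x00\x00\x00" + packet[crc_offset+4:]
        crc = binascii.crc32(zeroed[trimfront:]) & 0xffffffff
        return zeroed[:crc_offset] + struct.pack("<I", crc) + zeroed[crc_offset+4:]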
60 |
61 |
--------------------------------------------------------------------------------
/TypoScraper/README.md:
--------------------------------------------------------------------------------
1 | # TypoScraper
2 |
3 | ### A Scraper that Looks for Typos in Links
4 |
5 | #### Purpose and Description
6 |
7 | We wrote this tool while researching typosquat and bitsquat domains for ICS vendors. Notably, domains such as 'sLemens.com', 'siemsns.com', and 'schneide-electric.com' were found hosting malware during our research. We thought that it would be a good idea to have a tool to crawl existing websites for typo'd links, because such links seemed like good candidates for attackers to register.
8 |
9 | Fortunately, not many occurrences of such typos have been found. We thought that making the tool publicly available would be nice, though.
10 |
11 | One terrific use for this tool is for vendors to scour their support forums. Malicious users may post links on these forums to typo'd domains. This tool may help uncover such links.
12 |
13 | #### Authors
14 |
15 | Reid Wightman [Digital Bond, Inc](http://www.digitalbond.com)
16 |
17 | #### Prerequisite packages
18 |
19 | * Python 2.7
20 | * Scrapy
21 |
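Scrapy can usually be installed with pip; note that the 1.x series is the last to support Python 2.7:

    $ pip install "scrapy<2"
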
22 | #### Configuration
23 |
24 | You will have to edit a few lines in the Scraper in order to use it on your website.
25 |
26 | Locate the line that begins with 'start_urls' and set it to a URL (or list of URLs) corresponding to the sites that you'd like to start spidering. For example, you might set it to:
27 |
28 | start_urls = ['https://www.digitalbond.com']
29 |
30 | Locate the line that begins with 'rootdomains' and set it to the domains that you'd like to spider. For example, you might set it to:
31 |
32 | rootdomains = ['digitalbond.com']
33 |
34 | You may also wish to change a few of the other hard-coded values. In particular, you may want to add file extensions to the 'ignoresuffixes' list, for files that should not be retrieved. Adding 'exe' to this list might be a good idea, for example, unless your web application actually serves CGI scripts with an .exe extension.
35 |
36 | Finally, you may need to pay attention to the scraper output. Some websites contain recursive links that will confuse the spider. One example from the digitalbond site is at http://digitalbond.com/page/wiki/Bandolier, which itself contains a link to a 'page/wiki' URI. Left alone, the scraper will continue blindly following these links, retrieving URIs like http://digitalbond.com/page/wiki/Bandolier/page/wiki/Bandolier/page/wiki/Bandolier/page/wiki... forever. So, we ignore any URI which begins with '/page/wiki/Bandolier' via the 'ignoredirectories' list, and you can add additional directories to that list if you notice other recursion loops. These settings are collected in the snippet below.
37 |
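Putting those edits together, the relevant class attributes in TypoScraper.py would look like this (values are illustrative, taken from the examples above):

    start_urls = ['https://www.digitalbond.com']
    rootdomains = ['digitalbond.com']
    ignoresuffixes = ['mov', 'jpg', 'pdf', 'doc', 'zip', 'bin', 'docx', 'exe']
    ignoredirectories = ['/page/wiki/Bandolier']
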
38 | #### Sample usage
39 |
40 | Set the start_urls variable to ['http://www.killerrobotsinc.com/'], and set rootdomains to ['killerrobotsinc.com']. Then run the scraper like so:
41 |
42 | $ scrapy runspider TypoScraper.py
43 |
44 | You should see that a typo'd link to 'www.killerobotsinc.com' is detected (note the omission of one of the 'r' characters), as in the screenshot below.
45 |
46 | ![Sample Output](http://digibond.wpengine.netdna-cdn.com/wp-content/uploads/2016/06/typoscraper-example.jpg)
47 |
48 | #### Limitations
49 |
50 | There are unfortunately a few annoyances with the code. It is great for scraping and finding typo and bitsquat links on simple websites; however, any site whose links rely on heavy use of JavaScript will almost certainly fail.
51 |
52 | An unfortunate consequence of separating the URLs from the domains the way we do now is that the scraper will follow off-site links. For example, if you begin scraping a forum at forum.siemens.com, the scraper will crawl to siemens.com, blog.siemens.com, etc. The excluder will need a bit of rewriting to limit domains a bit better. Sorry for that.
53 |
54 | The code is, in general, not well-written. Use at your own risk with a deprivileged account.
55 |
56 | Only run this on your own website. It does not honor robots.txt files, and may cause your webserver some trouble if your applications can't handle spidering.
57 |
--------------------------------------------------------------------------------
/TypoScraper/TypoScraper.py:
--------------------------------------------------------------------------------
1 | import scrapy
2 | from scrapy import signals
3 | from scrapy.xlib.pydispatch import dispatcher
4 |
5 | import dnstwist
6 | import sys
7 |
8 |
9 |
10 | class TypoScraper(scrapy.Spider):
11 | name = 'Squat Typo Searcher'
12 | start_urls = ['http://www.killerrobotsinc.com'] #['https://support.industry.siemens.com/tf/us/en/'] #['http://www.readingfordummies.com']
13 | rootdomains = ['killerrobotsinc.com'] # ['siemens.com'] #['readingfordummies.com']
14 | squatdomains = dnstwist.fuzz_domain(rootdomains[0]) # only works for one domain right now, whee
15 | checkedPages = []
16 | bogons = []
17 | ignoresuffixes = ['mov', 'jpg', 'pdf', 'doc', 'zip', 'bin', 'docx']
18 | ignoredirectories = ['/page/wiki/Bandolier'] # digitalbond.com has a loop on this uri
19 | download_maxsize = 1048576
20 |
21 | #def __init__(self):
22 | #dispatcher.connect(self.allDone, signals.spider_closed)
23 |
24 |
25 | # extract just the fully qualified domain name from a URL
26 | # e.g. "www.digitalbond.com"
27 | def extract_fqdn(self, full_url):
28 |         return full_url.split("://")[1].split("/")[0]
29 | # extract the root domain from a URL
30 | # e.g. "digitalbond.com"
31 | # sorry this function has so many exits, should clean it up...
32 | def extract_root_domain(self, full_url):
33 | if full_url[0:7] == "mailto:":
34 | print "email link: ", full_url
35 | if "@" in full_url:
36 | mantissa = full_url.split("@")[1]
37 | return mantissa
38 | else:
39 | return None
40 | else:
41 | try:
42 | mantissa = full_url.split("://")[1].split("/")[0] # gets out the main part, e.g. 'www.digitalbond.com'
43 | domain = '.'.join(mantissa.split('.')[-2:]) # get the last two chunks and re-append them with '.', e.g. 'digitalbond.com'
44 | return domain
45 | except:
46 | #print "error on url: ", full_url
47 | return None
48 |
49 |
50 | def url_in_domain(self, full_url):
51 | domain = self.extract_root_domain(full_url)
52 | if domain in self.rootdomains:
53 | return True
54 | else:
55 | return False
56 | # check whether the filename extension is in the ignore list (e.g. don't download ".mov" files)
57 | def url_suffix_OK(self, full_url):
58 | suffix = full_url.split(".")[-1]
59 | if suffix in self.ignoresuffixes:
60 | return False
61 | else:
62 | return True
63 | # check whether the url is in a list of directories to ignore
64 | # we may want to add 'repetitious/circular link' detection as well.
65 | def url_ignore(self, full_url):
66 | if full_url[0:7] == "mailto:":
67 | return True
68 | print "checking %s" % full_url
69 | directory = "/".join(full_url.split("://")[1].split("/"))
70 | for ignoredirectory in self.ignoredirectories:
71 | if ignoredirectory in directory:
72 | return True # ignore this path
73 | return False
74 | # check if we should follow a link
75 | # checks if link is:
76 | # 1) in the right domain for us to care about
77 | # 2) url was checked already
78 | # 3) url has a suffix that we want to retrieve
79 | def link_shouldfollow(self, full_url):
80 | if not self.url_in_domain(full_url):
81 | return False
82 | if full_url in self.checkedPages:
83 | return False
84 | if not self.url_suffix_OK(full_url):
85 | return False
86 | if self.url_ignore(full_url):
87 | return False # url was in ignore list
88 | return True
89 |
90 | def testforTypo(self, full_url):
91 | linkdomain = self.extract_root_domain(full_url)
92 | for domain in self.squatdomains:
93 | if linkdomain == domain['domain']:
94 | return True
95 | # no match found
96 | return False
97 |
98 | def testforTyposR(self, response):
99 | # check response urls for typos
100 | for href in response.css('a::attr(href)'):
101 | full_url = response.urljoin(href.extract())
102 | if self.testforTypo(full_url):
103 | # got a bogon! Do not follow the link
104 | print "=========bogon detected! %s" % full_url
105 | self.bogons.append(full_url)
106 | continue # continue to next href in response
107 | if self.link_shouldfollow(full_url): # check if link is to a page we care about, and hasn't been checked already
108 | ## see if the page has been checked
109 | # add to checkedPages
110 | self.checkedPages.append(full_url)
111 | # dispatch to subprocess
112 | yield scrapy.Request(full_url, self.testforTyposR)
113 | else:
114 | # url is to some outside website (twitter, etc), so drop it
115 | continue
116 | # then add to
117 |
118 | return # yield?
119 |
120 | def parse(self, response):
121 | for href in response.css('a::attr(href)'):
122 | full_url = response.urljoin(href.extract())
123 | if self.testforTypo(full_url):
124 | self.bogons.append(full_url)
125 | continue
126 | if self.url_in_domain(full_url):
127 | # recurse
128 | yield scrapy.Request(full_url, callback=self.testforTyposR)
129 | else:
130 | continue # skip over external links
131 | print "The following bogons were found:"
132 | print self.bogons
133 |
134 | def start_requests(self):
135 | for url in self.start_urls:
136 | print 'building form request for ', url
137 | yield scrapy.FormRequest(url, callback=self.parse)
138 | return
139 |
140 | def close(self, spider):
141 | print "Penultimate final list of bogons: ", self.bogons
142 | return
143 |
144 |
145 | def main():
146 | print "Making a scraper object"
147 | myscraper = TypoScraper(name='Squat Typo Scraper')
148 | argnum = 1
149 | myscraper.start_urls = []
150 | print "Building URL list"
151 | while argnum < len(sys.argv):
152 | myscraper.start_urls.append(sys.argv[argnum])
153 | argnum += 1
154 | print "Building domain list"
155 | myscraper.rootdomains = []
156 | for url in myscraper.start_urls:
157 | myscraper.rootdomains.append(myscraper.extract_root_domain(url))
158 | print "domain list: ", myscraper.rootdomains
159 | print "Building squat list"
160 | # dictionary of domains, each domain gets
161 | myscraper.squatdomains = {}
162 | for domain in myscraper.rootdomains:
163 | myscraper.squatdomains[domain] = dnstwist.fuzz_domain(domain)
164 |
165 | # setup is done, print info?
166 | print "url list: "
167 | for url in myscraper.start_urls:
168 | print url
169 | for squat in myscraper.squatdomains[myscraper.extract_root_domain(url)]:
170 | print "\t", squat['domain']
171 | print "starting requests"
172 | myscraper.start_requests()
173 |
174 | return
175 |
176 | #if __name__ == '__main__':
177 | #print "Calling main"
178 | #main()
179 |
--------------------------------------------------------------------------------
/TypoScraper/dnstwist.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #
3 | # dnstwist by marcin@ulikowski.pl
4 | # Generate and resolve domain variations to detect typo squatting,
5 | # phishing and corporate espionage.
6 | #
7 | #
8 | # dnstwist is free software; you can redistribute it and/or modify
9 | # it under the terms of the GNU General Public License as published by
10 | # the Free Software Foundation; either version 2 of the License, or
11 | # (at your option) any later version.
12 | #
13 | # dnstwist is distributed in the hope that it will be useful,
14 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 | # GNU General Public License for more details.
17 | #
18 | # You should have received a copy of the GNU General Public License
19 | # along with dnstwist. If not, see <http://www.gnu.org/licenses/>.
20 |
21 | __author__ = 'Marcin Ulikowski'
22 | __version__ = '20150622'
23 | __email__ = 'marcin@ulikowski.pl'
24 |
25 | import re
26 | import sys
27 | import socket
28 | import signal
29 | try:
30 | import dns.resolver
31 | module_dnspython = True
32 | except:
33 | module_dnspython = False
34 | pass
35 | try:
36 | import GeoIP
37 | module_geoip = True
38 | except:
39 | module_geoip = False
40 | pass
41 |
42 | def sigint_handler(signal, frame):
43 | print('You pressed Ctrl+C!')
44 | sys.exit(0)
45 |
46 | # Internationalized domains not supported
47 | def validate_domain(domain):
48 | if len(domain) > 255:
49 | return False
50 | if domain[-1] == ".":
51 | domain = domain[:-1]
52 | allowed = re.compile('\A([a-z0-9]+(-[a-z0-9]+)*\.)+[a-z]{2,}\Z', re.IGNORECASE)
53 | return allowed.match(domain)
54 |
55 | def bitsquatting(domain):
56 | out = []
57 | dom = domain.rsplit('.', 1)[0]
58 | tld = domain.rsplit('.', 1)[1]
59 | masks = [1, 2, 4, 8, 16, 32, 64, 128]
60 |
61 | for i in range(0, len(dom)):
62 | c = dom[i]
63 | for j in range(0, len(masks)):
64 | b = chr(ord(c) ^ masks[j])
65 | o = ord(b)
66 | if (o >= 48 and o <= 57) or (o >= 97 and o <= 122) or o == 45:
67 | out.append(dom[:i] + b + dom[i+1:] + '.' + tld)
68 |
69 | return out
70 |
71 | def homoglyph(domain):
72 | glyphs = {
73 | 'd':['b', 'cl'], 'm':['n', 'rn'], 'l':['1', 'i'], 'o':['0'],
74 | 'w':['vv'], 'n':['m'], 'b':['d'], 'i':['l'], 'g':['q'], 'q':['g']
75 | }
76 | out = []
77 | dom = domain.rsplit('.', 1)[0]
78 | tld = domain.rsplit('.', 1)[1]
79 |
80 | for ws in range(0, len(dom)):
81 | for i in range(0, (len(dom)-ws)+1):
82 | win = dom[i:i+ws]
83 |
84 | j = 0
85 | while j < ws:
86 | c = win[j]
87 | if c in glyphs:
88 | for g in range(0, len(glyphs[c])):
89 | win = win[:j] + glyphs[c][g] + win[j+1:]
90 |
91 | if len(glyphs[c][g]) > 1:
92 | j += len(glyphs[c][g]) - 1
93 | out.append(dom[:i] + win + dom[i+ws:] + '.' + tld)
94 |
95 | j += 1
96 |
97 | return list(set(out))
98 |
99 | def repetition(domain):
100 | out = []
101 | dom = domain.rsplit('.', 1)[0]
102 | tld = domain.rsplit('.', 1)[1]
103 |
104 | for i in range(0, len(dom)):
105 | if dom[i].isalpha():
106 | out.append(dom[:i] + dom[i] + dom[i] + dom[i+1:] + '.' + tld)
107 |
108 | return out
109 |
110 | def transposition(domain):
111 | out = []
112 | dom = domain.rsplit('.', 1)[0]
113 | tld = domain.rsplit('.', 1)[1]
114 |
115 | for i in range(0, len(dom)-1):
116 | if dom[i+1] != dom[i]:
117 | out.append(dom[:i] + dom[i+1] + dom[i] + dom[i+2:] + '.' + tld)
118 |
119 | return out
120 |
121 | def replacement(domain):
122 | keys = {
123 | '1':'2q', '2':'3wq1', '3':'4ew2', '4':'5re3', '5':'6tr4', '6':'7yt5', '7':'8uy6', '8':'9iu7', '9':'0oi8', '0':'po9',
124 | 'q':'12wa', 'w':'3esaq2', 'e':'4rdsw3', 'r':'5tfde4', 't':'6ygfr5', 'y':'7uhgt6', 'u':'8ijhy7', 'i':'9okju8', 'o':'0plki9', 'p':'lo0',
125 | 'a':'qwsz', 's':'edxzaw', 'd':'rfcxse', 'f':'tgvcdr', 'g':'yhbvft', 'h':'ujnbgy', 'j':'ikmnhu', 'k':'olmji', 'l':'kop',
126 | 'z':'asx', 'x':'zsdc', 'c':'xdfv', 'v':'cfgb', 'b':'vghn', 'n':'bhjm', 'm':'njk'
127 | }
128 | out = []
129 | dom = domain.rsplit('.', 1)[0]
130 | tld = domain.rsplit('.', 1)[1]
131 |
132 | for i in range(0, len(dom)):
133 | if dom[i] in keys:
134 | for c in range(0, len(keys[dom[i]])):
135 | out.append(dom[:i] + keys[dom[i]][c] + dom[i+1:] + '.' + tld)
136 |
137 | return out
138 |
139 |
140 | def omission(domain):
141 | out = []
142 | dom = domain.rsplit('.', 1)[0]
143 | tld = domain.rsplit('.', 1)[1]
144 |
145 | for i in range(0, len(dom)):
146 | out.append(dom[:i] + dom[i+1:] + '.' + tld)
147 |
148 | return out
149 |
150 | def insertion(domain):
151 | keys = {
152 | '1':'2q', '2':'3wq1', '3':'4ew2', '4':'5re3', '5':'6tr4', '6':'7yt5', '7':'8uy6', '8':'9iu7', '9':'0oi8', '0':'po9',
153 | 'q':'12wa', 'w':'3esaq2', 'e':'4rdsw3', 'r':'5tfde4', 't':'6ygfr5', 'y':'7uhgt6', 'u':'8ijhy7', 'i':'9okju8', 'o':'0plki9', 'p':'lo0',
154 | 'a':'qwsz', 's':'edxzaw', 'd':'rfcxse', 'f':'tgvcdr', 'g':'yhbvft', 'h':'ujnbgy', 'j':'ikmnhu', 'k':'olmji', 'l':'kop',
155 | 'z':'asx', 'x':'zsdc', 'c':'xdfv', 'v':'cfgb', 'b':'vghn', 'n':'bhjm', 'm':'njk'
156 | }
157 | out = []
158 | dom = domain.rsplit('.', 1)[0]
159 | tld = domain.rsplit('.', 1)[1]
160 |
161 | for i in range(1, len(dom)-1):
162 | if dom[i] in keys:
163 | for c in range(0, len(keys[dom[i]])):
164 | out.append(dom[:i] + keys[dom[i]][c] + dom[i] + dom[i+1:] + '.' + tld)
165 | out.append(dom[:i] + dom[i] + keys[dom[i]][c] + dom[i+1:] + '.' + tld)
166 |
167 | return out
168 |
169 | def fuzz_domain(domain):
170 | domains = []
171 |
172 | for i in bitsquatting(domain):
173 | domains.append({ 'type':'Bitsquatting', 'domain':i })
174 | for i in homoglyph(domain):
175 | domains.append({ 'type':'Homoglyph', 'domain':i })
176 | for i in repetition(domain):
177 | domains.append({ 'type':'Repetition', 'domain':i })
178 | for i in transposition(domain):
179 | domains.append({ 'type':'Transposition', 'domain':i })
180 | for i in replacement(domain):
181 | domains.append({ 'type':'Replacement', 'domain':i })
182 | for i in omission(domain):
183 | domains.append({ 'type':'Omission', 'domain':i })
184 | for i in insertion(domain):
185 | domains.append({ 'type':'Insertion', 'domain':i })
186 |
187 | return domains
188 |
189 | def main():
190 | if len(sys.argv) == 3:
191 | output_csv = True
192 | else:
193 | output_csv = False
194 |
195 | if not output_csv:
196 | print('dnstwist (' + __version__ + ') by ' + __email__)
197 |
198 | if len(sys.argv) < 2:
199 | print('Usage: ' + sys.argv[0] + ' example.com [csv]')
200 | sys.exit()
201 |
202 | if not validate_domain(sys.argv[1]):
203 | sys.stderr.write('ERROR: invalid domain name !\n')
204 | sys.exit(-1)
205 |
206 | domains = fuzz_domain(sys.argv[1].lower())
207 |
208 | if not module_dnspython:
209 | sys.stderr.write('NOTICE: missing dnspython module - DNS functionality is limited !\n')
210 | sys.stderr.flush()
211 |
212 | if not module_geoip:
213 | sys.stderr.write('NOTICE: missing GeoIP module - geographical location not available !\n')
214 | sys.stderr.flush()
215 |
216 | if not output_csv:
217 | sys.stdout.write('Processing ' + str(len(domains)) + ' domains ')
218 | sys.stdout.flush()
219 |
220 | signal.signal(signal.SIGINT, sigint_handler)
221 |
222 | total_hits = 0
223 |
224 | for i in range(0, len(domains)):
225 | try:
226 | ip = socket.getaddrinfo(domains[i]['domain'], 80)
227 | except:
228 | pass
229 | else:
230 | for j in ip:
231 | if '.' in j[4][0]:
232 | domains[i]['a'] = j[4][0]
233 | break
234 | for j in ip:
235 | if ':' in j[4][0]:
236 | domains[i]['aaaa'] = j[4][0]
237 | break
238 |
239 | if module_dnspython:
240 | resolv = dns.resolver.Resolver()
241 | resolv.lifetime = 1
242 | resolv.timeout = 1
243 |
244 | try:
245 | ns = resolv.query(domains[i]['domain'], 'NS')
246 | domains[i]['ns'] = str(ns[0])[:-1]
247 | except:
248 | pass
249 |
250 | if 'ns' in domains[i]:
251 | try:
252 | mx = resolv.query(domains[i]['domain'], 'MX')
253 | domains[i]['mx'] = str(mx[0].exchange)[:-1]
254 | except:
255 | pass
256 |
257 | if module_geoip:
258 | gi = GeoIP.new(GeoIP.GEOIP_MEMORY_CACHE)
259 | try:
260 | country = gi.country_name_by_addr(domains[i]['a'])
261 | except:
262 | pass
263 | else:
264 | if country:
265 | domains[i]['country'] = country
266 |
267 | if not output_csv:
268 | if 'a' in domains[i] or 'ns' in domains[i]:
269 | sys.stdout.write('!')
270 | sys.stdout.flush()
271 | total_hits += 1
272 | else:
273 | sys.stdout.write('.')
274 | sys.stdout.flush()
275 |
276 | if not output_csv:
277 | sys.stdout.write(' ' + str(total_hits) + ' hit(s)\n\n')
278 |
279 | for i in domains:
280 | if not output_csv:
281 | zone = ''
282 |
283 | if 'a' in i:
284 | zone += i['a']
285 | if 'country' in i:
286 | zone += '/' + i['country']
287 | elif 'ns' in i:
288 | zone += 'NS:' + i['ns']
289 | if 'aaaa' in i:
290 | zone += ' ' + i['aaaa']
291 | if 'mx' in i:
292 | zone += ' MX:' + i['mx']
293 | if not zone:
294 | zone = '-'
295 |
296 | sys.stdout.write('%-15s %-15s %s\n' % (i['type'], i['domain'], zone))
297 | sys.stdout.flush()
298 | else:
299 | print(
300 | '%s,%s,%s,%s,%s,%s,%s' % (i.get('type'), i.get('domain'), i.get('a', ''),
301 | i.get('aaaa', ''), i.get('mx', ''), i.get('ns', ''), i.get('country', ''))
302 | )
303 |
304 | return 0
305 |
306 | if __name__ == '__main__':
307 | main()
308 |
--------------------------------------------------------------------------------
/crc32tester.py:
--------------------------------------------------------------------------------
1 | # This script takes a file and an offset of a suspected CRC32.
2 | # It attempts to find which portion of the file was used to generate
3 | # the CRC32. Set 'debug=False' in the call to crc32find() to lessen
4 | # the output.
5 | #
6 | # This script is not particularly efficient with memory and may be
7 | # improved by modifying the original file data to replace the CRC
8 | # just once.
9 | #
10 | # Reid Wightman, Digital Bond Labs, 2015
11 |
12 | import binascii
13 | import struct
14 | import hexdump
15 | import sys
16 |
17 |
18 | def usage():
19 | print "Usage: %s " % sys.argv[0]
20 | exit(1)
21 | # take a chunk of data and find a crc32 in it
22 | # assume that the crc32 was computed with the original value \x00\x00\x00\x00 in the data
23 |
24 | def crc32find(instring, offset = -1, debug = False):
25 | if offset == -1:
26 | print "Error: can't do autooffset yet"
27 | mycrc = instring[offset:offset+4]
28 | if debug:
29 | print "Searching for "
30 | hexdump.hexdump(mycrc)
31 | print "Original packet"
32 | hexdump.hexdump(instring)
33 | newstring = instring[0:offset] + "\x00\x00\x00\x00" + instring[offset + 4:]
34 | if debug:
35 | print "Searching packet"
36 | hexdump.hexdump(newstring)
37 | trimfront = 0
38 | trimend = 0
39 | while(trimend < len(instring)):
40 | trimfront = 0 # reset it on each pass
41 | while (trimfront + trimend < len(instring)):
42 | te = 0 - trimend
43 | if te == 0:
44 | teststring = newstring[trimfront:]
45 | else:
46 | teststring = newstring[trimfront:te]
47 | if debug:
48 | print "Trying"
49 | hexdump.hexdump(teststring)
50 | testcrc = binascii.crc32(teststring) & 0xffffffff
51 | testcrcbe = struct.pack(">I", testcrc)
52 | testcrcle = struct.pack("